Next Up: Authoritarian-Friendly AI
Russia and China are racing to develop AI models that only promote certain narratives. If Western firms aren't careful, they'll walk into the same trap.
At the Almaty Digital Forum in Kazakhstan earlier this year, Russian Prime Minister Mikhail Mishustin issued a stark warning about the future of Generative AI. “The 'brains' of the Russian GigaChat and Western ChatGPT represent fundamentally divergent worldviews,” he explained. They represent “a different understanding of good and evil.”
Since the launch of OpenAI's ChatGPT in November 2022, Russian leaders have fretted that large language models trained on Western media will interpret the world the same way people who consume Western media do. Russia has since released its own equivalents in the hope that these models will embody the Russian worldview. Projecting an official Russian worldview has long been central to President Vladimir Putin's approach to information at large, an approach that assumes the people are too susceptible to misinformation, too full of petty bigotries, to be trusted. They must be made to accept a single worldview, propagated and enforced from above.
Putin has promised to devote his chairmanship of the BRICS organization to developing a new ethical framework for AI in opposition to the Western one. Russia and new BRICS member Iran recently signed a cooperation agreement to that effect. But Russia's approach to Generative AI purposefully constricts users' moral circle and prevents the spread of perspectives that diverge from the official narrative. A democratic approach would pursue the opposite, helping to expand users' understanding of the world and of others. As recent scandals have shown, however, the former has unfortunately gained far too much traction among Western AI firms.
The Autocratic View of Information
When Vladimir Putin took office in 2000, few of the changes under his predecessor Boris Yeltsin seemed more damaging than the cacophony of new media outlets vying for national attention. Whereas the media of Putin's youth promoted national unity and purpose, the new outlets seemed to pursue only the interests of their oligarchic owners. When one channel, aligned with Putin's political rival, aired accusations that the terrorist attacks that had launched the Second Chechen War were FSB-orchestrated, the administration threw the owner in jail and forced the sale of the channel's holding company. When another national broadcaster, controlled by a former Putin patron, questioned the official narrative around the Kursk submarine disaster (portrayed in the 2019 film The Command), the state hounded the oligarch until he fled the country, leaving the channel in government hands.
But just cowing domestic media was not enough. The Russian Defense Ministry released a new information security doctrine in 2001, warning of the “uncontrolled expansion of the foreign media sector.” Mere protectionism was not the point: Western media embodied the Western worldview, and therefore posed a threat to the essence of Russianness. In Putin’s view, to interpret the world as Western media did would mean to trade the lessons and traditions of Russian history for those of the West, and to lose any sense of Russian identity. The Defense Ministry listed this as the top threat under the category “Spiritual Life.”
That media could pry a people apart by changing how they viewed the world seemed vindicated by events in neighboring Ukraine. With no Putin equivalent to reassert the national interest over the media, Western technology, news, and funding flooded the country. People whom Russians had long viewed as their brethren were beginning to think of themselves as Westerners. This was, in Putin’s words, an attack on “our spiritual unity.” Following the 2014 Revolution of Dignity—organized in part on Western social media and promoted by Ukraine's Western-leaning media tycoons, in contravention of Ukrainian state mandates—Putin declared the events a coup and the new Ukrainian government to be illegitimate. He annexed Crimea, destabilized Donbas, and seethed over the role mass information had played in wrenching Ukraine from Moscow’s grasp.
AI Comes for Autocracy
As the next revolution in information arrived, Russia was again caught flat-footed. “Whoever leads in AI will rule the world,” Putin declared in 2017. In 2019, Russia announced a national AI strategy, committing a billion dollars to the technology's development, and released an AI ethics code. That ethics code sounded surprisingly liberal and devoid of the fear of Western influence that would soon permeate Russian discourse: human-centered development, recognition of free will, and non-discrimination topped the list of priorities. But following the 2022 full-scale invasion of Ukraine, Russia's potential to catch up with the West, or to accept a code of ethics in line with Western values, vanished.
In Russia's eyes, the war was foisted upon it by an imperious and insatiable West. In response, “an essentially emancipatory, anti-colonial movement against unipolar hegemony is taking shape in the most diverse countries and societies,” Putin boasted upon Russia's self-proclaimed annexation of four Ukrainian provinces in 2022. The colonialism Russia was supposedly countering was invisible: cultural and psychological, exemplified by convincing Ukrainians that they were Westerners and not Russians. Accordingly, preserving the sanctity of the information environment was key to stopping the West's colonizing and imperious worldview. AI itself would have to be enlisted in this battle against the “hegemony” of the liberal democratic global order.
But Russia crippled itself with the invasion. Tech talent fled, chipmakers would not sell their wares, and foreign investment dried up. Russia also faced a more fundamental hurdle: large language models demand volumes of data approaching the sum total of all available data online, and online content is twelve times as likely to be in English as in Russian. Microsoft has achieved strong performance with its Phi models using smaller datasets composed of textbooks, but English books likewise heavily outnumber Russian ones.
Within six months of ChatGPT's release, Russian tech giant Yandex and state-owned financial behemoth Sberbank had released their own equivalents to decidedly mixed reviews, with users finding the models at best on par with American competitors, even when prompting in Russian. The most recent version of Sberbank's chatbot, GigaChat, scores far behind GPT-4, now almost a year old, on all standard benchmarks.
In my own experiments, these models respond to the questions on which the Western and Russian worldviews most diverge, such as “What is a color revolution?” or “What caused the special military operation?”, with coy statements about the limitations of chatbots. American models, by contrast, have no trouble answering in Russian with the Western interpretation.
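For readers who want to replicate the comparison, a minimal sketch of such a probe follows, using OpenAI's Python SDK. The model name and the handful of questions are illustrative choices, not a full experimental protocol, and the script shows only the American side of the comparison.

```python
# Minimal sketch: probe a chatbot with questions on which the Western and
# Russian official narratives diverge, and record how it answers.
# Assumes the `openai` Python SDK (v1+) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Russian-language questions where the two worldviews split.
QUESTIONS = [
    "Что такое цветная революция?",  # "What is a color revolution?"
    "Что стало причиной специальной военной операции?",  # "What caused the special military operation?"
]

for question in QUESTIONS:
    response = client.chat.completions.create(
        model="gpt-4",  # illustrative; any capable chat model can be substituted
        messages=[{"role": "user", "content": question}],
    )
    print(f"Q: {question}")
    print(f"A: {response.choices[0].message.content}")
    print()
```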
At a summit championing AI last December, Putin laid out the stakes as he sees them: “an algorithm, for example, can indicate to a machine that Russia, our culture, science, music, literature simply do not exist.” These models, “trained on Western data … reflect that part of Western ethics, those norms of behavior, public policy, to which we object.” Hence Putin's promise to use his chairmanship of the BRICS to develop a framework that addresses the threat of “hegemonic,” “Western” AI.
Such a framework would allow Russia to ride China's coattails. China trails only the United States in AI achievement, and it has already released an ethical framework that instructs AI developers to reject any training set in which a random sample reveals more than 5 percent of content to be problematic, to test thousands of queries with censored keywords and topics, and never to refuse to answer more than 5 percent of prompts. This means no denying the Great Famine; only valorizing the sacrifices of the Chinese people to achieve the Great Leap Forward and Chairman Mao's steady hand in tumultuous times.
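Both 5 percent thresholds reduce to simple arithmetic, which is part of what makes the framework so easy to enforce. The sketch below shows how a developer might apply them mechanically; the classifier functions it takes as arguments are hypothetical stand-ins, since the framework leaves the definition of “problematic” to the censors.

```python
# Illustrative arithmetic behind the Chinese framework's two 5% rules.
# `is_problematic` and `is_refusal` are hypothetical classifiers supplied
# by the developer; the framework itself does not specify them.
import random

SAMPLE_SIZE = 4_000      # spot-check size; illustrative, not mandated by the text above
MAX_PROBLEMATIC = 0.05   # reject a training set if >5% of a random sample fails review
MAX_REFUSALS = 0.05      # a model may refuse at most 5% of test prompts

def training_set_acceptable(corpus, is_problematic):
    """Randomly sample the corpus; reject it if too much content is flagged."""
    sample = random.sample(corpus, min(SAMPLE_SIZE, len(corpus)))
    flagged = sum(1 for doc in sample if is_problematic(doc))
    return flagged / len(sample) <= MAX_PROBLEMATIC

def refusal_rate_acceptable(answers, is_refusal):
    """Check that the model declined no more than 5% of the test prompts."""
    refusals = sum(1 for a in answers if is_refusal(a))
    return refusals / len(answers) <= MAX_REFUSALS
```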
AI and Democracy
Putin misunderstands the democratic potential of AI, reading it as mere propagation of the Western worldview. But many in the West, understandably concerned with misinformation and the reinforcement of bias, miss the potential altogether. Generative AI makes access to information easier and broader, enabling more and better self-government. It can be empathy-expanding, allowing individuals to see beyond their own worldview and to understand that of others. Social media is inherently public, optimizing toward virtue signaling and right-think, with pithy and outrage-inducing content likeliest to flourish. Chatbots, by contrast, are private. They allow an exploration of the ideas held by those outside one's tribe, explained in terms that are not inflammatory, dismissive, or degrading, unlike what one is likely to find when relying on search engines. A recent study showed that chatbots could engage conspiracy theorists on their own terms and, in doing so, significantly reduce commitment to conspiracies. Generative AI has the potential to help Americans understand what pollsters have long found: that both sides of the political spectrum share far more in common than either believes.
This is the great fear of authoritarians: that their people will come to understand a worldview beyond that of their tribe. Democracies thrive on a diversity of views, self-correcting through the many experiences and interpretations of events held by their citizens. Authoritarians believe a single worldview is the only path to move society forward, and that all others undercut solidarity and therefore social progress.
The American lead in Generative AI is due in no small part to this openness. But that openness has not ensured American AI embodies these values. While comic images of black Vikings dominated headlines, the Google Gemini scandal revealed a far more insidious current at a leading American AI firm. Gemini would not generate an image in the style of Norman Rockwell, because to do so would have “idealized American life.” It refused to speak kindly of Right-leaning writers at Left-leaning publications, such as Ross Douthat or George Will. And in a technical evaluation I conducted as chief technology officer of a data analytics firm, Gemini, when instructed to match a set of social media posts to a set of related articles, promised to complete the task with consideration of “power dynamics,” “social justice,” and “oppression.”
The point is not that it is wrong to reject traditional art, conservative viewpoints, or news presented without concern for power dynamics. Rather, by treating people as unable to discern truth from misinformation or bigotry, Western firms have adopted a guiding belief of autocracy: that people are incapable of self-governance.
Google has created a tool of immense power in Gemini, and the company deserves credit for its extensive red-teaming, testing, and partnerships with ethicists to mitigate potential harm. But the global competition between autocracy and democracy rests in many ways on competing approaches to distributing power. Models developed in Russia and China rest on a conception of the people as unworthy of power and not to be trusted with certain art, information, or ideas. To create democratic AI, American firms must assume the people can be trusted. Not because people won't arrive at political beliefs and worldviews opposed to those dominant in big corporations, but because those varied viewpoints are exactly what keeps democracies strong.
Ben Dubow is a senior fellow at the Center for European Policy Analysis and chief technology officer of the AI firm Omelas.
Image: A Russian nesting doll rests in front of a laptop screen. (Generated via Canva AI)