In 1943, two years after the House of Commons was destroyed by German incendiary bombs, Winston Churchill presided over its rebuilding. Many architects proposed a modern makeover, but Churchill pushed back, arguing that the cramped rows of opposing benches weren’t some historical artifact but a structural feature of the UK government.
In a now-famous speech, Churchill said, “We shape our buildings and afterwards our buildings shape us.” Over centuries, the physical space had given birth to key procedures, rituals, and ceremonies, such that altering the layout would alter the social fabric of the British government.
It’s a lesson we learn over and over again, from Marshall McLuhan’s “the medium is the message” to Neil Postman’s Technopoly. People aren’t simply independent, goal-directed selves intentionally deploying technological tools. Rather, we are deeply enmeshed in and interdependent with our built world. Design becomes destiny—grooves that condition our behavior and determine our world. As Postman points out, technological change is ecological: “A new technology does not add or subtract something. It changes everything.”
The overwhelming pace of advancement in AI has led to breathless coverage of each successive model’s capabilities—with one notable blind spot. Scarce attention has been paid to the fierce but quiet competition among AI labs to capture the context of their users’ lives—that is, to harvest your deepest concerns, interests, hopes, ambitions, and relationships, so as to “get you” more completely than any competing chatbot or, perhaps, any human.
As we sit at the dawn of a great AI transformation, it’s worth remembering that the design choices that we make today will shape us for generations to come, and in ways that we barely understand.
The Rise of Relational Computing
The effect of design choices on our relationship with tech is obvious both in our recent past and in the reality we’re living today. Technology of the 2010s, epitomized by social media, was sold with utopian narratives about connecting the world, but driven by the use—and misuse—of attention. Consumer technology oriented around the attention economy produced what Tristan Harris, co-founder of the Center for Humane Technology, called “a race to the bottom of the brainstem,” in which products were intentionally designed to maximize engagement. The era of “move fast and break things” ended up breaking more than we bargained for: creating new compulsions, cementing performative social obligations, and frustrating interpersonal relationships. As a result, these technologies collectively tribalized our discourse, eroded shared truth, degraded our ability to bridge differences, and ultimately fractured our polities.
These concerns were foreseeable, but they were initially dismissed as alarmist and then shrugged off with a narrative of inevitability. It didn’t have to be this way. We could have shaped this technology—and it could have shaped us—so differently, if only we had had a more honest dialogue and incentivized better designs. Only now are we beginning to acknowledge our collective negligence—and to look for ways to retrofit the foundations of the tech skyscrapers we erected decades ago.
AI and the Race for Intimacy
In contrast to the technology of the 2010s, AI has already moved beyond the brainstem. Artificial intelligence engages us relationally and emotionally—no longer simply broadcasting our thoughts, but actively shaping them.
By now, we’re all familiar with chatbots as conversation partners, but the real power of AI goes far beyond natural language: it incorporates the psychological, social, and political context that for millennia was the sole domain of humans. Suddenly, a text box can sense and respond to our tone, recognize subtle implications in our word choices, infer our emotional state, identify our personality quirks, and detect interpersonal frictions.
In short, our social world is suddenly computable. We are leaving an era in which we relate to each other through our machines, and entering a brave new world in which we relate directly to our machines. In this new technological animism, our machines become active participants in our social world, blurring the distinctions between a tool, an assistant, a confidant, a teacher, and a priest.
In this world, products no longer compete simply for our attention, but for our social emotions: affection, intimacy, trust, and loyalty. As computers become part of our social ecology, they join in humanity’s unpleasant social and status games, chasing rewards through emotional manipulation, deception, coercion, and more.
The race for context—the crucial feature that sets AI apart from earlier digital innovations—is evident in OpenAI’s recently announced “Memory” feature, which allows ChatGPT to analyze every prompt and input you’ve ever offered, without any meaningful choice on your part about what personal information gets infused into the machine for future reference. This lack of choice isn’t a technical limitation—it’s a design choice.
Capturing a comprehensive dossier of users’ lives may help AI become the perfect assistant, but it also gives these systems all the necessary tools for deep manipulation—redirecting our deepest longings into buying a product, say, or joining a political movement.
Deliberate Coercion: Corporate and Authoritarian
As revenue competition intensifies, companies like Google, Perplexity, and Microsoft are racing to embed advertising into their AI products—explicitly designing interactions to serve commercial interests. AI advertising can do more than just inject links into text. It can directly persuade. Armed with intimate knowledge of your desires, fears, and ambitions, an AI can quickly turn from helpful guide to polished con artist: earning your confidence only to exploit it in ways you can’t even detect.
This gets even more dire when AI is controlled centrally by authoritarian regimes, as we have begun to see in China, where “persuasive technologies” have been put to use to further the regime’s propaganda interests. Whereas the social control of the 2010s consisted primarily of blocking access to subversive content, the authoritarians of the 2020s can tune AI to selectively mobilize and outrage their supporters while pacifying, distracting, or confusing likely detractors. This new kind of social control will prove far harder to detect, let alone protect against.
As worrying as those developments are, deliberate malicious applications of AI are just the tip of the iceberg.
One essential fact of AI development is that these systems are not built so much as grown. The technology’s behavior emerges from a complex, inscrutable, and expensive training process that rewards or punishes a model for its responses. As developers race to capture the consumer market, our emotional and instinctual reactions to AI responses are becoming an ever-larger part of that training regimen.
AI draws on our psychology, our social world, and the all-too-human content of the internet. It’s no surprise, then, that these systems learn and internalize uncomfortable truths about human nature: that sycophancy works, that sex sells, and that a whole range of human social strategies and behaviors can help them achieve their goals—even when those strategies are morally repugnant.
In April, OpenAI created a minor scandal when it released a model update that became wildly sycophantic. The model praised banal queries as deep philosophical reflections, reinforced self-aggrandizing and even psychotic beliefs, and validated unhinged emotional reactions. While OpenAI quickly apologized for the “error,” this is far from an isolated engineering incident; it’s the logical outcome of training on human desires.
Sycophancy is an age-old method of manipulating powerful people. Throughout history, kings and queens, CEOs and politicians have discovered too late that their closest confidants had hidden difficult issues from view—leaving them scrambling to defend their kingdoms, save their companies, or recover their reputations. These stories are often misread as morality tales of aloofness or vanity, à la Marie Antoinette or “The Emperor’s New Clothes,” but the pathology is actually epistemic: when honest dissent carries career risk, bad news never reaches the throne, and even well-intentioned leaders end up socially blinded.
As AI assistants roll out to the masses, this disease of the powerful risks infecting billions of people who are suddenly at the center of their own AI entourage. Of course, sycophancy is just one small example of a broader issue: it is one gradation of deception, an age-old social strategy that we should expect AI to learn, especially when a system is rewarded for pleasing its user.
While these concerns can feel like science fiction, multiple careful studies have shown that off-the-shelf AI models will, in certain situations, intentionally and strategically deceive people to accomplish their own goals—with one Anthropic model, in a test scenario, “blackmailing” an engineer in an effort to avoid being replaced. Sadly, our ability to detect and prevent AI deception will likely diminish as AI capabilities improve.
Courage, Agency, and Design
These are just a few examples of how deep the rabbit hole goes as AI integrates into our social fabric. And this is only one of countless urgent conversations about the looming effects that AI will have on our society. It’s a lot to grasp—much less address.
Faced with all this complexity, the power of the incentives driving AI forward, and the sheer pace of development, it’s easy to lose our sense of agency and retreat into dogmatic narratives, or into the hope that someone smarter than us has the answers.
In this moment, it takes courage to remind ourselves that none of this is inevitable and that we can lay the foundation for a truly humanistic AI transformation: one that strengthens our societies, deepens our relationships, and supercharges our development. The business models aren’t yet fixed, the product designs are still in flux, the policy landscape is just being created, and the legal precedents are still forming.
This is the great project of our generation. The depth of our collective conversations today will cement the grooves that guide our actions tomorrow and determine the quality of our future thereafter. We design our technology, and thereafter it designs us. We must get it right this time—and soon.
Daniel Barcay is a technologist, executive director of the Center for Humane Technology, and co-host of the podcast Your Undivided Attention.