Humans Suck at Seeing Into the Future
Breathless predictions about nuclear power put today’s AI “revolution” into perspective.
In 1945, humanity first released the massive power of nuclear fission, on the plains of the Jornada del Muerto in New Mexico. This was Trinity, the first nuclear explosion in history. The result was far from a controlled harnessing of energy, but it wasn’t meant to be; the Manhattan Project’s mandate was to produce bombs of unprecedented destruction, and in that it succeeded wildly. But thanks to further investigations into fission, largely undertaken to better understand the nature of the atom, six short years later nuclear power was harnessed to produce electricity for the first time. Three years after that, the American submarine USS Nautilus launched, and for the first time nuclear power was used for a sustained and practical purpose. In 1957, the Virginia-based power plant known as SM-1 began supplying power to the electrical grid, and the age of nuclear power was upon us.
Imagine if a particularly prescient person observed what was occurring. First explained theoretically by Lise Meitner and Otto Frisch, following Otto Hahn and Fritz Strassmann’s 1938 experiments, nuclear fission had gone from theory to commercial application in less than two decades. Nuclear had gone from tearing apart Hiroshima and Nagasaki to providing clean energy for the citizenry in a dozen years. Even without expert knowledge of nuclear power, our observer would be totally justified in believing that the technology would irrevocably change the world. Such a person would not have a sophisticated modern understanding of the importance of carbon release in the production of energy and its attendant effect on global temperatures, but they would surely still understand the value of dramatically reducing the amount of air pollution being released into the world.
We now know, of course, that the burning of fossil fuels releases carbon into the atmosphere, which then traps heat, gradually raising the world’s temperature, particularly at the poles. This change in climate presents many potential problems for humanity and the environment, both of which exist in an exquisite balance. Even the more restrained projections of the destructive potential of climate change demonstrate that its effects could be exceedingly costly for civilization. If given an education in the modern understanding of climate, our prescient observer might have noted that, widely and responsibly deployed, nuclear power could have helped prevent one of the biggest global challenges in human history.
If they grasped nuclear power’s capacity for severing the relationship between global supply lines and energy production, the consequences for their current era might have seemed even more vast. The story of the 20th century, to a remarkable extent, was the story of the fight for fossil fuels, the immense strategic, diplomatic, and espionage resources spent in the pursuit of convenient forms of portable energy. Wars were fought for oil, governments toppled, regions destabilized. All of these came at the expense of blood, treasure, and the serial violation of human rights by great powers. Quite literally, since the end of World War II there has been no question of greater international strategic importance than the question of who has access to fossil fuels—and yes, that holds true even considering the existence of nuclear weapons. Today, nothing changes a country’s international standing more considerably or more quickly than the discovery of oil. It turns ghost towns into boom towns, it props up dictators, it shifts the balance of world power. It’s the stuff that drives the world.
To pick one particular region, in 1953 the effort to secure British access to oil compelled the United States to help depose the established government of Iran and re-install the Shah, leading directly to the conditions that enabled a conservative Muslim revolutionary movement that still holds power in the country today; Iran would go on to engage in one of the bloodiest wars of the past half-century against fellow regional power Iraq, with the United States hedging its bets by sending weapons to both sides; saddled by war debt, Iraq would go on to throw its weight around by invading oil-rich Kuwait; this action was seen as imperiling Saudi Arabia, whose own theocratic government controls the second largest proven oil reserves in the world, and which invited American troops into the country for its defense; the presence of those troops in Islam’s holiest places inflamed a rich zealot named Osama bin Laden, who orchestrated an audacious terrorist attack against the United States; hungry for vengeance, the Americans invaded Afghanistan and, later, Iraq…. You get the idea. In a world where nuclear energy had become a dominant form of energy production, the path of recent history would have been radically different. It’s true that fissile materials are themselves natural resources that require extraction, and could spark conflict, but it’s also true that they are not nearly as rare as the popular imagination believes, and vastly smaller quantities are required to produce the same amount of energy as oil.
There was every reason for a nuclear revolution to happen. Nuclear energy is far safer than many people think, with only a small handful of adverse incidents in over 70 years of using fission to generate electricity. Indeed, the overall reliability of nuclear power plants is believed to exceed that of most other forms of energy generation. Initial costs for establishing plants can be steep, but not unreasonably so relative to building a conventional fossil fuel plant, and such costs can be recouped with nuclear power’s efficient energy generation. The land footprint of nuclear power plants is small, typically requiring something like one square mile, while wind and solar farms take up vast stretches of geography. The actual electricity-generation function of nuclear fission is zero-carbon, and while the plants themselves generate some carbon impacts, they’re a rounding error compared to burning gas or coal. Widespread nuclear use could have hastened the overall electrification of our society, which in addition to combating climate change tends to reduce maintenance costs; electric motors have far fewer moving parts than gas-powered engines, in cars and in most other contexts. Nuclear power would not have come at zero cost—nothing good does—but the potential benefits to society were immense, and in the latter half of the 20th century, we had every ability to grab them. And the truth is we have no idea just how different the world would look now, had the adoption of nuclear power taken what seemed to be its natural course.
Then the nuclear revolution just… didn’t happen.
A graph of the development of nuclear power, since its inception, looks like a hopeful thing at first. From its beginnings in the late 1950s, the number of plants and the amount of electricity they generate rise steadily. Europe, in particular, seemed poised to adopt nuclear at scale, with France being particularly aggressive in opening plants. In 1973, The New York Times declared a “building boom for nuclear power plants.” But, at around the 1990 mark, the growth in nuclear energy plateaus, and at some point in the mid-2000s, begins to fall. While some new plants are still being built, a projection of future closures and the paucity of new construction suggests a rapid decline in the use of the technology. There have been periodic green shoots, enough that, in 2008, the Times suggested we were entering an era of revival. But such optimism has mostly not panned out, and now renewables like solar and wind have more public support, more buzz, and higher government subsidies. As if to put a bow on things, Germany recently shut down its last nuclear power plant—during an energy crisis that has underlined the country’s historical dependence on Russian natural gas.
An informed observer of these developments would have been perfectly justified in assuming that, even under conservative estimates, the technology would result in massive changes in the lived experience of human life. And yet today nuclear power is largely a curio, and one with a bad reputation. Very misguided (but consistently effective) protestors have led a lot of people to reflexively oppose nuclear without knowing why, and that’s just in the abstract—in specific locations, even many people who are ostensibly in favor of nuclear power vociferously oppose new plants. Now people only want to talk about “renewables.” Today, only 10% of worldwide electricity generation comes from nuclear power. That isn’t nothing. But the technology remains a revolution that could have happened and didn’t. And I think it’s an object lesson in epistemological humility: we don’t really know what’s going to happen, today or in the future, about technology or anything else.
Now, when I have made this point privately in the past, I’ve gotten some version of this response: “but not developing widespread nuclear power was stupid! The people who oppose nuclear power are idiots!” To which I say, yeah, mostly. Nuclear energy is a no-brainer in a world of climate change. And we can still change course on expanding its use. But my point here is not to say anything in particular about nuclear power, other than this: sometimes things don’t happen when you have every reason to think they will. Sure, opposition by stupid people sank nuclear energy, not its safety or efficiency. But stupid people are the most powerful force in the world, and they will follow us into the future no matter what we do. They’re one of a myriad of chaos agents that bend the course of human history, unpredictably and in defiance of all of our extrapolation. Betting on nuclear power’s large-scale adoption would have been a very smart thing to do, back in 1957. It just also would have turned out to be a losing bet, against all sense. Sometimes history rolls snake eyes.
I’m saying all of this as a prologue to an argument about artificial intelligence. Recent developments in machine learning and “neural networks” have left our news media and our intellectual culture, let’s say, a little excited. The media is absolutely festooned with what I would call “AI maximalism.” The term is meant to denote both people who think artificial intelligence is going to save humanity from all of its problems and those who think it’s going to kill us all, and both relatively soon. A term that refers to both is necessary because both stem from the same impulses—the desire for an entirely new world, the common human yearning for revolutionary change, the tendency of our culture to reward hype. Every day, you can read a new article in a stuffy publication expounding on all the ways that AI is going to completely change the world; every day, you can read thousands of tweets by people certain that the AI-dominated future is not just coming but coming sooner than you think. There’s a weird self-denying aspect to all of this; we have a media conversation about AI that is as alarmist and sensationalistic as I can imagine, and yet many on social media routinely complain that we’re not taking AI seriously enough. This has the advantage of making them seem like wise Cassandras, but it defies all evidence. It’s genuinely unclear to me how the AI hype could grow more heated than it stands now.
You can read a lot of intricate arguments about what AI will and will not be capable of in the near future. Unfortunately, there’s about 1% careful engagement to 99% insane hype. (Actual headline from a professional publication: “AI Can Now Make You Immortal—But Should It?”) The trouble, in part, is that there’s just so little professional advantage in telling people to slow down and take a breath. The nature of contemporary journalism and social media ensures that “AI IS IN YOUR HOUSE WITH A KNIFE RIGHT NOW” will always, always outdraw “AI Might Have These Interesting Consequences, But Maybe Not.” What I can’t understand is why so few are adjusting to this reality by deciding to be more consciously cautious. Look at the current landscape. Which is more likely? That the hype now is too little, or too much?
The problem is that people mistake rapid development in a given technology for a rapid onset of predictable consequences. As nuclear power shows us, those aren’t nearly the same thing. What so many people who talk about AI seem not to understand is that the argument against AI maximalism is not an argument about technology. It’s an argument about the contingency of history. Human beings have a terrible track record when it comes to futurism, and there is no reason to believe that we’ve gotten better at it in the recent past. I could come up with a whole argument about the mathematical reality of chaos and the tendency of minor changes to make massive impacts on human events, but it’s easier simply to observe that we try and fail to predict the future, again and again, and we’re always wrong, and then we go on confidently making predictions again.
When I got to Purdue University in 2011, in pursuit of my Ph.D., I met a couple of very earnest grad students working with 3D printing, which was having something of a moment back in those days. And these guys—extremely intelligent, well-credentialed, brilliant in the technical capacities of their field—told me that, in ten or twenty years’ time, stores like Walmart would have to radically retool or close their doors, because everyone would have a large-scale 3D printer in their garage that would produce most of their physical goods. They very sincerely believed that to be true, and they were deeply versed in the reality of the technology. They weren’t alone. A dozen years or so later, and adoption of the small-scale hobbyist 3D printers that are commercially available remains anemic, owing to the limitations and frustrations associated with their use. Self-driving car hype was so intense seven or eight years ago that writers at serious publications could solemnly predict that it would soon be illegal to drive your own car. Today self-driving cars remain a niche of a niche, employed in only a few select locales because even mildly inclement weather can paralyze their systems. Just a few years ago, 5G cellular networks were predicted to have massive effects; today, even in those places where the required infrastructure has been deployed, very few people can point to any material change in their lives at all.
You can say that these are all minor examples, and maybe they are. But they point, again, to the simple reality that we’re bad at predicting the future, and that the incentives of hype in our information economy are now so intense that it’s effectively impossible to be wrong by telling people to calm down. It appears today that not even The New York Times or The New Yorker can resist the siren song of sensationalism, so I guess it falls to me to say: calm down. AI is not coming to save you from your ordinary life, not with utopia nor with apocalypse. You have to make peace with this life, here, in a world that will go on being more or less the world you know. I’m sorry to disappoint you.
Will AI change some things? Probably. I’m sure it will disrupt some industries, to use that tedious phrase. It already does some cool things. But the basic technological situation is this: though many people believe that we live in some technologically revolutionary period, the fact of the matter is that human scientific advancement peaked from around 1860 to around 1960 and has slowed considerably since; the only field where tremendous progress has been made is information science and communications; and what architects call the built environment (that is, the corporeal reality in which we actually live) has changed remarkably little in a half-century. And I think that dynamic persists with AI: it will likely transform computing but have far more limited influence on the real world. As the Silicon Valley people say, bits are easy, atoms are hard. Meanwhile, we will adapt to the social and cultural changes and preserve the disappointing grind that is modern life. Normal always adapts.
I can anticipate an objection to my analogy here: nuclear power requires a tremendous amount of physical infrastructure to deploy, while AI requires far less; nuclear power (thankfully) involves a great deal of regulation and permitting, while AI development at present requires none. Therefore, an AI takeover is more likely than a nuclear-powered future was in the 20th century.
I would respond by saying that, first, I think this line of reasoning underestimates just how vast the infrastructure powering AI really is, as these systems require massive amounts of computing power and thus large networks of server farms, which are expensive to build and run. But more to the point, I think this mindset misunderstands the point: the argument is not “nuclear power was likely and didn’t happen, so AI dominance is less likely.” The argument is “history is full of arbitrary turns and random occurrences, causality chokepoints, and chaos injections, and thus we should be far more humble about our predictions about the future.” AI looks inevitable right now. So have many things that haven’t happened.
Someday, people are going to look back at us and laugh, just as we look back and mock those 1950s futurists who predicted a nuclear power plant in every home by the end of the century. There is absolutely no prediction about AI that can be made with the same confidence as the prediction that people in the future—the near future—are going to look back in astonishment at how wrong our futurism was. The most reliable human prediction is that we will go on making bad predictions. The only reason we refuse to grapple with that certainty is because, frankly, our tendency to get lost in hubris is extraordinary. And that’s the greatest impediment we have to putting artificial intelligence and its consequences into their proper context, not bad media incentives or technological ignorance or our relentless drive to tell exciting stories, but the weight of our own egos.
Freddie deBoer is a writer and academic. He lives in Brooklyn.