
And here I was about to talk about how impressed I was with your history lesson (very nice analysis, btw), and then you had to serve up the stereotypical nuclear-bro pugnacity, wherein opposition to nuclear energy is chalked up to "people are stupid."

Sorry, I know this isn't the focus of your piece, but this triggers me. Not because I'm against nuclear energy (I'm not), but because this type of attitude doesn't help people trying to make the case for it.

Those who make this argument, that "Very misguided (but consistently effective) protestors have led a lot of people to reflexively oppose nuclear without knowing why" are being intentionally obtuse. People aren't scared of nuclear energy because of protestors. They're scared of it because they've seen what happens when things go wrong.

Nobody needed protestors to be horrified by Chernobyl, Three Mile Island, or Fukushima. Not to mention that we spent almost all of the remainder of the 20th Century worried about death as a result of weaponry based on the same technology – weaponry that we know horrifically and instantly killed hundreds of thousands of innocent Japanese civilians. Advocates of nuclear energy brush these objections away with abstract arguments about the number of deaths per unit of energy generated with nuclear technology, declaring it the "safest form of energy" and acting as if anyone who isn't immediately comforted by this is just an irrational ape holding back society with their defective monkey-brain.

My educational background is in mathematics, so I take comfort in statistics more than most people, but even I understand why people are uncomfortable with nuclear energy. It is entirely human to fixate on worst-case scenarios, regardless of how improbable. You can say that it's an irrational degree of concern about a rational possibility, but it's how our brains work, and even in the most certain of cases, trying to convince someone of your argument by calling them stupid for having reservations based on situations that *we've seen come to pass* is arguably just as stupid.

Not to mention, the nuclear argument may not even be the most certain of cases. I'm no expert on this, but I know that there is considerable ambiguity in estimates of deaths due to nuclear accidents. They are ridiculously low if you just count immediate deaths incontrovertibly caused by the accidents per se; they're potentially much higher if you count eventual deaths due to radiation exposure, but those are highly speculative. Furthermore, calculations based on the fact that there have been only a handful of large-scale nuclear incidents assume that same low frequency going forward, and that sort of extrapolation is tricky when you're talking about small numbers of freak accidents. Each such accident has a large potential for damage and loss of life; advocates of nuclear power can dismiss them by pointing out, in hindsight, what mistakes were made and saying that those are easily correctable, but that's the problem with freak accidents – there is always the potential for something to go wrong because people didn't think to pay enough attention to it until it caused a problem. And when the potential damage from any single accident is large enough that we are forced to rely on their rate staying consistently low, that is reason for concern.
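
To make the small-numbers point concrete, here is a minimal sketch of how wide the uncertainty band gets when a rate is estimated from only a handful of rare events. The accident count and reactor-years below are purely illustrative placeholders, not real figures:

```python
# Illustrative only: how uncertain is an accident *rate* estimated from
# a handful of events? The counts below are assumptions, not real data.
from scipy import stats

observed_accidents = 3      # hypothetical number of major accidents
reactor_years = 18_000      # hypothetical cumulative operating experience

# Exact (Garwood) 95% confidence interval for a Poisson rate
alpha = 0.05
lower = stats.chi2.ppf(alpha / 2, 2 * observed_accidents) / (2 * reactor_years)
upper = stats.chi2.ppf(1 - alpha / 2, 2 * (observed_accidents + 1)) / (2 * reactor_years)

print(f"point estimate: {observed_accidents / reactor_years:.2e} accidents per reactor-year")
print(f"95% CI:         {lower:.2e} to {upper:.2e}")
# With only 3 observed events, the upper bound is roughly 3x the point
# estimate -- a wide band that matters when each event can be catastrophic.
```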

Also, there are other things that can go wrong outside of full-scale meltdowns, such as subtle leaks of radioactive material into the atmosphere which can go unnoticed for significant periods of time. And this can happen at places other than nuclear plants, such as fuel reprocessing and fabrication centers. The more one relies on nuclear energy, the more need there will be to produce fissile materials, and the more risk we run of having such accidents – and that's not even considering the risk of *intentional* (mis)uses of such materials by terrorists or adversaries.

It should also be pointed out that nuclear energy production has undergone advances in recent years such that *modern* nuclear reactors address many of the concerns about traditional nuclear power plants. They're much safer, more environmentally friendly, and avoid the enormous cost of traditional plant construction. And that's a *huge* part of the argument for justifying renewed investment in them going forward. But by the same token, those improved safety assessments should not be projected backward onto older technology to imply that people were "stupid" for having serious concerns about its safety, or that they just opposed nuclear technology because a bunch of radicals convinced them it was bad without them understanding why. People know why they opposed nuclear energy, even if you want to argue that their rationale doesn't withstand statistical scrutiny.

Anyway, nice piece otherwise, and I think your overall point is well taken. The hype from recent advances has been absurd at times, generated largely by the same people who stand to profit from them, and has a lot to do with people being bedazzled by LLMs and their linguistic fluency while disregarding their serious limitations.

Sorry to dump all over you because of one paragraph, and I hope you see my criticisms as at least being in good faith.


"My educational background is in mathematics, so I take comfort in statistics more than most people, but even I understand why people are uncomfortable with nuclear energy."

How many people die annually from nuclear power plant accidents vs breathing in the emissions from coal-fired power plants?
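
For what it's worth, here's a back-of-the-envelope version of that comparison. The figures below are approximate ballpark values of the kind published by Our World in Data, not authoritative numbers, and worth checking against the source:

```python
# Rough, illustrative comparison of mortality per unit of electricity generated.
# The figures are approximate ballpark values of the kind published by
# Our World in Data (deaths per TWh, accidents plus air pollution);
# verify against the source before citing.
deaths_per_twh = {
    "coal":    24.6,   # approximate published estimate
    "nuclear": 0.03,   # approximate published estimate
}

ratio = deaths_per_twh["coal"] / deaths_per_twh["nuclear"]
print(f"On these estimates, coal is roughly {ratio:.0f}x deadlier per TWh than nuclear.")
```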


Chernobyl, a nearly criminal accident that happened in an obsolete reactor in a 3rd-world country, was a big mess and killed at least dozens directly, plus quite a few (an indeterminate number) through long-term cancers. Fukushima killed almost no one, though the extraordinary tsunami that triggered it killed tens of thousands. Three Mile Island killed nobody at all, except perhaps via mental stress induced by hysterical publicity. Just saying.


I believe I have been blessed with, or else have learned, the ability to visualize a future state. During my corporate career it helped me outperform my peers, who at best seemed barely able to characterize the present correctly and tended to be more reactionary. The downside is that I have often taken risks to achieve something, only to have it brought down by unforeseeable events that are generally the result of the majority being unable to see ahead, and thus reacting critically.

That is the challenge for all profound progress. It isn't the accuracy of the vision; it is the painful and laborious job of convincing the other 90% who cannot grasp what it would be like.

In my small liberal college town of about 90,000 residents when school is in session, the debate rages over growth, with the NIMBYs generally prevailing. The debate at the time was over densification of the core downtown... specifically an ordinance to restrict buildings to three stories or less. At one standing-room-only city planning commission session, I came prepared with a number of foam-board-mounted photos, and I asked for a show of hands in the room: "How many people have traveled to cities in old Europe and like the way their cities are designed?" Almost everyone raised their hands. Then I showed the photos of European cities with four- to six-story buildings lining the narrow streets. There was a lot of silence in the crowd. It made an impact.

These people were incapable of seeing a future state that met their aesthetic expectations and were thus afraid of, and opposed to, making any changes. It was only when presented with specific images that their brains could process the "what can be" opportunity.

That is the challenge with nuclear power. People are afraid, and yet there is really not enough effort going into presenting what has been developed and what will be developed, and how it will effectively eliminate the risk of any radioactive-emissions disaster. Plentiful, safe, and (it should be) cheaper nuclear power replacing oil, natural gas and coal power generation will effectively eliminate enough carbon emissions to meet all reasonable goals to combat global warming (even though I still think that project is a WEF globalist scam).

There is copious programming that feeds the fear of climate change, but a pittance of programming on what is being worked on to solve the problems. It is all fear and a demand for scarcity instead of hope and a vision of abundance.

The lack of this kind of help in moving the vision-less population toward a better understanding of what the future can be tells me that there are powerful people in control of the influence machinery with a vested interest in preventing it.

Or we are simply failing to do the hard work of presenting and explaining, in graphic form, what the future can be.


There's an alternate prediction that's stated in the article, which is "it will likely transform computing but have far more limited influence on the real world".

This prediction is just as fallible as any other prediction about AI, or nuclear, or anything else. There are plenty of predictions of the form "this won't do much" that were also laughably wrong.

Most of the AI safety arguments I've heard go something like "An AI takeover is plausible (not necessarily >50% likely! But not negligible), an AI takeover would be *so bad* that it's worth mitigating a small likelihood, let's try to mitigate it". I think that form of argument holds up for basically any other preventative action we think about; it seems pretty strange to say "sometimes we mispredict the future so there's no use in trying".
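
Spelled out as arithmetic, that argument has the following shape. The probability, harm, and cost numbers below are made-up placeholders chosen only to show the structure, not anyone's actual estimates:

```python
# Illustrative expected-value framing of the "small probability, huge harm"
# argument. All numbers are hypothetical placeholders chosen only to show
# the structure of the reasoning, not anyone's actual estimates.
p_takeover = 0.02            # "plausible but well under 50%" -- hypothetical
harm_if_takeover = 1e9       # severity, in arbitrary units -- hypothetical
cost_of_mitigation = 1e6     # cost of precautions, same units -- hypothetical

expected_harm = p_takeover * harm_if_takeover
print(f"expected harm if ignored: {expected_harm:.2e}")
print(f"cost of mitigation:       {cost_of_mitigation:.2e}")
print("mitigation worthwhile in expectation?", expected_harm > cost_of_mitigation)
```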

I think there are plenty of reasons to argue that an AI takeover is unlikely, but those all rely on taking on the argument directly. A blanket "predictions are hard" isn't really compelling.


"Will AI change some things? Probably. I’m sure it will disrupt some industries, to use that tedious phrase. It already does some cool things. But the basic technological situation is this: though many people believe that we live in some technologically revolutionary period, the fact of the matter is that human scientific advancement peaked from around 1860 to around 1960 and has slowed considerably since; . . ."

“The Internet’s impact on the economy has been no greater than the fax machine’s….ten years from now, the phrase “information economy” will sound silly."

-Paul Krugman, 1997.


In defense of Krugman’s quote, he was speaking at a time, during the ascendancy of the dotcom bubble, when many people were predicting the internet was leading to a new era of massive economic growth, which actually didn’t happen. And this was in the context of a larger point, no doubt still true, that the economy exists to produce “things.”


Huh?

“Quantifying the exact amount of economic growth attributable to the internet over the past twenty years is challenging, as it involves disentangling the effects of the internet from other factors. However, there's consensus among economists that the internet has had a significant positive impact on global economic growth.

A 2011 report from McKinsey Global Institute stated that the internet accounted for 21% of the GDP growth in mature economies over the preceding five years. In the US, the internet's contribution to annual GDP growth doubled from an average of about 11% in the late 1990s to 21% over 2006-2011.

A report from Boston Consulting Group in 2020 estimated that the internet economy in the G-20 countries was $4.2 trillion, if measured as a sector, making it the 8th largest in the world.

More recent research likely continues to show the substantial impact of the internet on the economy, especially given the rapid digital transformation prompted by the COVID-19 pandemic, but exact figures may vary. It's clear, though, that the internet has been a significant driver of economic growth in the past two decades.”


A 21% relative increase in growth rates is not large against claims people were making in the 90s that it would lead to “unprecedented” economic growth rates, which it did not. Economic growth was very strong in the late 90s, but statistics mostly attributed this to advancements in hardware manufacture. During this period, meanwhile, it was common for early-stage investors to ignore the fact that new dotcom companies had no clear route to being profitable, while valuing them in the millions based on their potential market share in a vastly expanded future internet economy. It took courage to go against this narrative when the entire media and pop culture were aligned to it.


You stated, ". . . the internet was leading to a new era of massive economic growth, which actually didn’t happen" supra. I just provided evidence that the adoption of the internet led to an increase of economic activity of 20% over a 5 year period. Now you claim something something about the dot-com boom being unprofitable despite the evidence of my claims. Do you even understand economics or are you just shit-poasting random sentences? I.e. do you understand the economic impacts or are you just an NPC spouting random nonsense? Please back up your assertions with facts.


Your reply is totally unhinged. I don’t even know what “shit-post” means. But I take your call for facts in good faith. Krugman addressed these claims himself in this recent post

https://www.nytimes.com/2023/04/04/opinion/internet-economy.html

In addition, I would recommend Shiller’s Irrational Exuberance (1st edition, 2000) for a detailed discussion of the internet euphoria of the times, which is a good source for understanding the context in which Krugman wrote his remarks.


The greatest human problem will always remain, as Freddie reminds us: "You have to make peace with this life, here, in a world that will go on being more or less the world you know."

To me, this is the greatest advertisement for meditation one could imagine. The past and the future are just thoughts. Embrace the present as it is, without needing to like/dislike. May you be happy and learn to love whatever future we encounter.


A lot of the technologies mentioned here (especially 5G and self-driving cars) have massive regulatory and bureaucratic barriers to forward movement. In other countries, there are already fully integrated 5G factories where each piece of machinery is connected to the whole system - check out the link below for one example from Korea. In the US, the military keeps squatting on the frequencies necessary for wide-scale 5G deployment.

The biggest areas of revolutionary change have been in sectors with little to no regulation - cellular devices, personal computers, and services available as apps through cellular devices.

AI is more like the latter technologies than the former - anyone can work on it, there are few physical constraints other than computing power, and thanks to cloud computing you can do massive AI experiments without having to get zoning approval for your data center. I don't think it's going to be as big as people are claiming, but AI is poised for huge leaps in development and functionality.

https://www.fiercewireless.com/private-wireless/samsung-launches-5g-network-construction-site-korea


Had to add, I just got this in my inbox - I remember a lot of hype about drones and how they would become ubiquitous. But apparently regulation and bureaucratic burdens have put a damper on that too.

https://petapixel.com/2023/07/27/it-just-isnt-worth-the-effort-to-put-a-drone-in-the-air-anymore/


Humans are very weak on predicting the future, but we are worse at studying and understanding history, and the whole concept of assessing and acting on risk eludes almost all of us completely. Is it a blinding glimpse of the obvious that predicting, studying, understanding, assessing and acting are all related?


deBoer is right about predicting details, but wrong on the big picture. Cell phones were predicted in about 1920 by Westinghouse, and he forgets there are now 57 nukes under construction. We have no idea about the timing or specific consequences of AI. But here's what we are almost surely right about, and there's a reason to start thinking early: figuring out what to do about it is more uncertain and slower than building AI.

Here's the critical point he misses.

In about 1960, Scientific American, not exactly the cutting edge, explained it clearly. It showed how computers could learn (and they do), predicted they would eventually become smart (that might take 100 years instead of the hyped 10), and pointed out that that would be a tipping point. They would then design the next generation better and faster than we could, which would then design the next ...

Obviously by then they will be in robots of unpredictable form, capable of ruling the world -- unless we program them not to.

Here's the problem with that -- Who is this "we" that will program them?

They will spread much more easily than nuclear technology, so a decade after we have them so will gangsters in N Korea and Pakistan. Regulations? Surely you're joking.

Global warming was predicted in 1896 by Svante Arrhenius, and I learned of it in high school sixty years ago, and the world still has no idea how to control it. And gas engines are not smarter than us and trying to escape.

I have a tiny hope that if people understand the danger soon, it will provide more motivation for international cooperation, since that is essential. If so, we just might stop this -- maybe a 1% chance.

My other hope, since high school, has been that smart AIs will figure out on their own that our aggressive natures are a bad design from the Stone Age and will choose to design themselves more sensibly. Then they might consider us interesting, just as we think elephants and cockatoos are, and put us in a nice continent-size zoo after disarming us.

deBoer claims that "the only field where tremendous progress has been made is information science and communications" [since about 1960]. (1) This is so ridiculous it should not have been published -- it forgets molecular biology and genetics, and (2) "information science" is exactly the basis of AI.

He's absolutely right that we can't predict any details. But he's just as wrong that we can't predict technology will continue to improve at an astounding rate.

The impact on humanity will be like nothing before if we end up in a zoo, and much worse if we don't.


Broad strokes are often predictable. My brother predicted in the mid-90s that “everything [all info] will be digitized,” and he was broadly right. My high school teacher predicted in 1990 that computing careers would be among the most lucrative. He was right. I read an early-2000s prediction by a knowledgeable writer in a national magazine (wish I could remember who and which) that everyone would eventually have a smartphone. Right again.


Three unrelated observations:

I share your position on nuclear power generation and wish you had strengthened that position by confronting one of the biggest arguments against it: that it also generates waste that’s going to remain dangerous practically forever. A breakthrough in hydrogen fusion or some other solution will be welcome, but we can’t tout it just yet.

This piece on the need to keep calm in thinking about AI works equally well as an argument against longtermism. The idea of trying to think for future generations ought to make us laugh without waiting for those generations to do so.

Your remark that the built environment has changed remarkably little in a half-century set me to thinking about the tailored environment. The suit worn by Cary Grant throughout North by Northwest (1959) apart from one quick trip to the cleaners wouldn’t look out of place on a Manhattan street today. There’ve been self-conscious departures along the way (Nehru jackets and the whole of the 1970s), but the basic business suit has persisted in its present form for more than 60 years. I can’t think of a comparable period in the rest of modern times.


This is a very important piece. There is an entire cult of futurism in business that I even participated in lightly. The lust for certainty about “the world ten years from now” is large at big firms.

Shell Oil had a different approach. Scenario planning, not predicting.

I wish more energy would be spent speeding up real-time social data collection and processing that would help us to more quickly understand the consequences of recent social change...and what is changing...and better adapt.

More energy on understanding the present, not imagining the future...
