Discussion about this post

Rick H:

Regarding p(doom), I’d add that there are four distinct AI risks, and those who discount p(doom) invariably address only one or two of them.

1. Alignment risk is the most frequently discussed. It is often dismissed, yet it comes with so many unknowables. As you said, it is still far from solved.

2. Bad actor risks are the clearest. But the greatest bad actor risk may not be a malign government, tech bro, corporate giant, terrorist organization, multinational crime group, hacker cooperative, or lone wolf. Isn't it more likely human hubris?

3. The true scope of unexpected-consequence risks is unknowable, but from the scale of the social, economic, and political changes already foreseeable, we can infer that it will be unprecedented.

4. But the risk of catastrophic preemptive conflict has received very little attention. It’s not just the risk that Putin was correct in 2017 when he said, “Whoever becomes the leader in this [AI] sphere will become the ruler of the world.” We must also add the risk that he was wrong, but that he, the CCP, some persuasive foreign-policy advisor, or the next Aum Shinrikyo will mistakenly believe it. Ironically, this least-discussed AI risk is the most amenable to prevention now.
