
It's way too soon to know if AI will do more good or more harm. We're just starting to figure out how to tilt the odds in favor of the former. But this could be a good time for a reminder that in the Fifties and Sixties, maybe even the Eighties, the idea that humanity could go three quarters of a century after Nagasaki without a single nuclear weapon being used anywhere in the world would have seemed naively utopian. Humans are capable of making it through dangers.

May 22, 2023

I tend to agree with Wayne Karol (May 20) that humanity's success -- so far -- in never again using nuclear weapons provides a hopeful precedent for dealing with the dangers of Artificial Intelligence.

On the one hand, I think the immediate dangers are overblown. Not only do we seem to be very far from true independent AI thought, but the alarmists keep gliding over the fact that the cybernetics need to be married to real-world weapons to actually pose an existential threat, or even do real physical damage. Think of the M-5 in the Star Trek original series episode, "The Ultimate Computer". Until AI has Enterprise-level power at its disposal, it's going to be mostly limited to creating more and more convincing disinformation.

On the other hand, I'm not very optimistic that AI can be tamed by giving it an ethical framework, even if such a thing is technically possible. Even Asimov's first law, that a robot cannot harm a human, or allow a human to be harmed through inaction, gets complicated. How do pro-life and pro-choice Americans find a meaning for "harm" that they're all happy with? Pro- and anti-death penalty advocates? Pope Francis, Xi Jinping, and Ali Khamenei? Keep the AI insulated from the weapons systems, because it's going to be a long, long debate.
