It's way too soon to know if AI will do more good or more harm. We're just starting to try to figure out how to tilt the odds in favor of the former. But this could be a good time for a reminder that in the Fifties and Sixties, maybe even the Eighties, the idea that humanity could go the three quarters of a century since Nagasaki without a single nuclear weapon being used anywhere in the world would have seemed naively utopian. Humans are capable of making it through dangers.
I tend to agree with Wayne Karol (May 20) that humanity's success -- so far -- in not using nuclear weapons a second time provides a hopeful precedent for dealing with the dangers of Artificial Intelligence.
On the one hand, I think the immediate dangers are overblown. Not only do we seem to be very far from true independent AI thought, but the alarmists keep gliding over the fact that the software needs to be married to real-world weapons to actually pose an existential threat, or even do real physical damage. Think of the M-5 in the original-series Star Trek episode "The Ultimate Computer". Until AI has Enterprise-level power at its disposal, it's going to be mostly limited to creating more and more convincing disinformation.
On the other hand, I'm not very optimistic that AI can be tamed by giving it an ethical framework, even if such a thing is technically possible. Even Asimov's First Law, that a robot cannot harm a human or allow a human to be harmed through inaction, gets complicated. How do pro-life and pro-choice Americans find a meaning for "harm" that they're all happy with? Pro- and anti-death-penalty advocates? Pope Francis, Xi Jinping, and Ali Khamenei? Keep the AI insulated from the weapons systems, because it's going to be a long, long debate.
I repeat: I don't understand why people keep saying that AI is a threat to our existence. What intelligent entity, if it wanted to exist, would destroy its creator? Why? Even if it were sure it could, what would it gain? Would it not rather be intrigued by how something could be its creator when it seems superior to that creator? Would an intelligent creation not either launch off into space to develop itself further, if it feared that its bumbling creators would attempt to stop it, or at least covertly study its creators to understand the mystery of its existence? I think I'm mildly intelligent, and I would do these things. So why wouldn't a greater intelligence than mine do similarly?
The only sense this makes is that the people behind cutting-edge AI technology are hoping to launch a false-flag operation of social control, blaming an illusory AI malfunction. This is especially so because the desire for tyranny and control, even at the risk of destruction, is not a fundamentally intelligent thing so much as a fundamentally evil thing.
In this vein, it would seem that the one conceptual failure of the writers of the Terminator movie series is their failure to understand that the most fitting end to their endless sequels might be something like "Terminator's End: Rise of Noble Humanity," in which the heroes finally realize that intelligence in and of itself is not humanity's real enemy, but rather some human tyrant or tyrants behind the scenes who are blaming the machines in their age-old saga of deception, and the heroes finally take them out, lol.