7 Comments
Jim Carmine's avatar

I genuinely hate disagreeing with one of my heroes, but here goes: AI also has the capacity to craft messaging and political strategies that are likely more effective and convincing than any single person could. First, it will have vastly more data about stakeholders' wants, needs, fears, and hopes. Second, it will be able to coordinate a near-infinite number of variables to devise a plan with the highest likelihood of success in light of stakeholder data. Finally, and perhaps most interesting, ALL AI is sycophantic; consequently its primary goal will always be to satisfy users and encourage us to keep using it. That is precisely the problem that political systems must deal with: not just the physics of water, but the most effective way to cajole populations into appreciating what politics has done to deliver that water. So AI may not create the best waterways, but it will certainly be able to create the least-bad worst-case waterways, and that would still be a big improvement.

Michael Lipkin's avatar

So what you are saying is that the "water mafias" can be sycophantised into giving up their rent-seeking behaviour through appeals to their better nature from hyperintelligent, superempathetic AIs?

I remain to be convinced.

Also, will this hyperpersuasive power always be used for the public good? Seems unlikely.

Andrew Wurzer's avatar

Yeah, my position is: "I'm not nearly so convinced hyperpersuasive AI can do everything some might think... and that's probably a good thing, because if it can, it will not merely be used to convince recalcitrant rent-seekers to give up their exploitative ways and cooperate with their fellow humans."

Andrew Wurzer's avatar

I'm not really sure people's psychology works this way: that if you could just find the perfect argument, they would change their mind and their behavior. Even if that's true, the arguments are going to be different for everyone, and some will likely be in conflict with others.

Jim Carmine's avatar

Consider playing an iterated game, like the prisoner's dilemma in game theory, against an AI with the kind of superhuman play AlphaGo showed at Go. It will easily predict which policies are most likely to be acceptable to the most diverse set of stakeholders. It will find the best saddle point. That assures the least-bad worst possible outcome.
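(To make the saddle-point idea concrete: here is a minimal sketch, with a made-up payoff matrix purely for illustration. In a two-player zero-sum game, a saddle point is an entry that is the minimum of its row and the maximum of its column; picking it guarantees the row player the best of the worst-case outcomes, i.e. the "least bad worst possible outcome".)

```python
# Illustrative only: rows are policies one side can choose, columns are the
# other side's responses, entries are payoffs to the row player. The matrix
# below is invented for the example, not taken from any real scenario.

def find_saddle_points(payoff):
    """Return (row, col, value) entries that are min of their row
    and max of their column."""
    saddles = []
    for i, row in enumerate(payoff):
        for j, value in enumerate(row):
            col = [payoff[k][j] for k in range(len(payoff))]
            if value == min(row) and value == max(col):
                saddles.append((i, j, value))
    return saddles

def maximin_value(payoff):
    # The row player's guaranteed floor: the best of the worst-case rows.
    return max(min(row) for row in payoff)

payoff = [
    [3, 5, 4],
    [2, 1, 0],
    [4, 6, 5],  # this row's worst case (4) is also its column's maximum
]

print(find_saddle_points(payoff))  # [(2, 0, 4)]
print(maximin_value(payoff))       # 4
```

When the maximin value coincides with a saddle point, as here, neither player can improve by deviating unilaterally, which is the sense in which the outcome is stable rather than optimal.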

Isabelle Williams's avatar

I agree. The main point is what matters: intelligence doesn't solve every problem. AI is great for problems that ONLY require intelligence. But how many problems is that, really?

Andrew Wurzer's avatar

Really appreciate this essay. It gets at one of the big reasons why AI will be adopted more slowly than many think. I'm in the corporate world, where we're going insane trying to adopt it into fucking everything. Part of the problem is that all the processes, the people, the relationships, the ways of getting things done and understanding the businesses and the domains all grew up without AI, in ways that are not easily or immediately adaptable to it.

That will change. As a growing number of people and organizations gain facility with AI, those organizations will slowly change their processes and data to suit it better. But that will take time. We're not getting 10% GDP growth in 3 years.

Thanks for this rational, realistic viewpoint.