
Will Humanity Survive AI?

Yascha Mounk and Gary Marcus discuss the strengths and the shortcomings of artificial intelligence.

Gary Marcus is an expert in artificial intelligence, a cognitive scientist and host of the podcast “Humans vs Machines with Gary Marcus.”

In this week's conversation, Yascha Mounk and Gary Marcus discuss the shortcomings of the dominant large language model (LLM) approach to artificial intelligence; why Marcus believes the AI industry is on the wrong path to developing superintelligent AI; and why he nonetheless believes that the eventual emergence of superior AI may pose a serious threat to humanity.

The views expressed are those of the speakers, not those of Persuasion. The transcript and conversation have been condensed and lightly edited for clarity.


Yascha Mounk: Let's jump straight in. Why should we be worried about rapid developments in the field of artificial intelligence?

Gary Marcus: We don't really know how to control the AI that we're building now or the AI that we might build later. That has many manifestations. In the first instance, we know that the current AI tools can be used by bad actors, and we know that they can accidentally generate falsehoods. We know that you can give them a request and they may or may not honor that request. And we also know that people are giving more and more power to these systems. It was one thing when they were laboratory curiosities as recently as a few months ago, but now people are putting in their personal data and their company data; they're attaching these things to heaven knows what, with new tools that allow people to connect them to anything on the internet, to write source code, or to directly access memory.

The current AI is actually mediocre compared to what we will build eventually. It's not reliable. It's not really capable of general-purpose reasoning, even though people might think it is. But mediocre AI connected to the information grid of the world poses a lot of risks. And then you put all of that in the hands of bad actors and you have more problems. And then you think about general intelligence that might, at some point, be smarter than us, and the fact that we have no clue how we're going to control that. There are a lot of reasons to worry, both short- and long-term.

Mounk: First, let's get into the debate about how good AI is and how good it's going to get. The jump from GPT-3 to GPT-3.5 has been enormous. We are clearly seeing rapid progress in the field. 

Is there reason to think that in 10 or 20 years AI is going to be better at some core human activities at which humans still currently outperform AI? Or do you think that there are barriers which are not obvious to a wider public that might be out of reach for a long time?

Marcus: I don't think anybody knows where we'll be in 20 years, but I do take a more skeptical view. These systems still make lots of silly mistakes. They still make stuff up. I said that when GPT-4 came out, it would be better than GPT-3, and people would be excited, but they would quickly start to find mistakes. They would quickly find that it has trouble with physical and psychological reasoning, that it hallucinates, that it can be used to generate misinformation, and that we can't count on it for medical advice. That all turned out to be true. There are ways in which these systems have improved, undoubtedly, but especially since we have no idea what's in the training set, we don't really have a good scientific understanding of those improvements. And the qualitative problems that have always been there remain. It's tempting to draw a simple graph, extrapolate, and just assume that the progress will continue. But my own view has long been that these systems are not doing what we need, which is to build cognitive models of the world, to reason over them, and to have common sense about the world. 

I don't think that the techniques we have now are actually going to solve these problems of stability, reliability, and truthfulness. I think that we will be more and more impressed by the plausibility of the text that they create, which makes them excellent tools for creating misinformation. But we won't be able to trust them anytime soon.

Mounk: Are these AI systems going to surpass humans at the ability to write a poem or novel that feels like it's at the pinnacle of human achievement, or produce a movie that is as entertaining as anything that Hollywood might produce? Now, those systems might still be very bad for the world. And that's an important question that I want to come to in a moment. But in terms of assessing the ways in which these forms of AI might be about to displace what humans think of as their role in the world (as happened with Deep Blue and chess in the 1990s), is that about to happen in all these other realms of human endeavor? 

Marcus: Well, I think it's important to separate the moral issues and how these systems are used from their capabilities. But I think truthfulness is part of the capabilities. I don't think that in 20 years we will be building architectures like the ones we're building now, because I think they're inherently flawed. So, for me, your question actually transforms into a different one, which is: will we make the new discoveries we need to get to AI that we can trust? To get AI whose outputs reliably connect to the world, we actually need a paradigm shift. When will we get to that paradigm shift? Nobody knows.

A parallel that I often think about is the early 1900s, when almost everybody thought that genes were made of protein. They were all trying to figure out which protein genes were made of, and it turned out genes are not made of protein. They're made of an acid, DNA. And it took almost 30 years for people to stop pursuing the wrong hypothesis. Right now, I think AI people are pursuing the wrong hypothesis. What happened in molecular biology is that Oswald Avery finally did the process-of-elimination experiment that needed to be done and showed that it was not a protein. That was in the mid-'40s. Then it wasn't long before Watson and Crick figured out the structure of DNA (with Rosalind Franklin's unwitting help) in 1953. And then things moved very, very fast. Science is self-correcting, but it can take a while. In my view, we are pursuing the wrong hypothesis. The particular thing that everybody's obsessed with right now is, I think, inherently limited.

Mounk: Explain to an audience that is mostly used to thinking about politics rather than technology how the current AI systems like ChatGPT work and what you see as their limitations.

Marcus: There's a model called Galactica. It preceded ChatGPT by a few weeks and used basically the same mechanisms, without the so-called "guardrails," and it was able to produce fluent text. It was a large language model. Someone asked it to use the words "Elon Musk" and "car crash" in the same sentence. And the system came up with "On March 18th of 2018, the Tesla CEO, Elon Musk, was involved in a fatal car collision." And then it went on to make clear that Elon Musk was the person who died in that collision. Well, we know that Elon Musk did not die in 2018. We have enormous amounts of evidence to show that he didn't. He's in the news every day. If there's anybody that we know is still alive, it's Elon Musk. 

Why does the system do this? There's lots of data in the training set. There are lots of sources it could consult. Well, we have a tendency to over-attribute intelligence to these systems, but they're really just doing one kind of thing. They're not generally intelligent; they're not clever enough to go check Wikipedia. What they do is glom bits of text together: bits of text in the database say that somebody died in a car crash in 2018, and that some of those somebodies were in Teslas, and there are bits of text that associate Elon Musk and Tesla. But the relationships between those words are not understood by the system. It doesn't understand that Musk's relation to Tesla is that he owns it. But the word "March" is plausibly followed by "18," and "18" is plausibly followed by "2018." The systems are ultimately just doing text prediction. They produce cohesive bits of text, but they don't fact-check them. They're just not doing it the way people do. That's very hard to swallow for people who have not thought about cognitive science. It's easy to see the answers and just assume that these devices are intelligent like we are, but they're not. They're text predictors.
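[To make the text-prediction point concrete, here is a deliberately toy sketch, not anything resembling a production system: a bigram model that only records which word tends to follow which word in a tiny invented corpus. Real large language models are neural networks trained on vastly more data, but the sketch shows how chaining locally plausible word transitions can yield a globally false claim.]

```python
# Toy bigram "language model": it tracks only which word tends to follow
# which, with no representation of facts, so it can chain plausible-looking
# fragments into a false statement.
import random
from collections import defaultdict

corpus = (
    "elon musk runs tesla . "
    "a tesla driver died in a fatal car crash in 2018 . "
    "elon musk was in the news in 2018 ."
).split()

# Count which words follow which in the corpus.
following = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev].append(nxt)

def generate(start: str, length: int = 12) -> str:
    word, out = start, [start]
    for _ in range(length):
        if word not in following:
            break
        word = random.choice(following[word])  # pick any plausible next word
        out.append(word)
    return " ".join(out)

print(generate("elon"))
# Possible output: "elon musk was in a fatal car crash in 2018 ."
# Each transition is locally plausible; the claim as a whole is false.
```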

I don't think anybody could get anything like a coherent novel out of one of these systems right now, certainly not a readable one. Maybe you could get 5,000 words, not quite a long New Yorker article, and even that would probably be a stretch for the systems right now. Most of what I've seen is more like 1,000 words, and these things quickly lose coherence because they don't have an internal model of what they're talking about. Eventually, that limit will grow. 

Mounk: Is there a reason to think that because they are just a form of predictive text they can't have this model of internal coherence? They just, in principle, aren't able to do that? Or is it possible that, 20 years from now, we'll just have even more data and processing power and they will have evolved in such a way that they are suddenly able to produce that kind of coherent novel?

Marcus: I think it's very unlikely that simply scaling the models, making them bigger, would solve these problems. Sam Altman, the CEO of OpenAI, said that he thought that we were getting about as far as we can with the current systems and that scaling is not the solution. 

Mounk: The second question I had is about the veracity point. As you're pointing out, the AI system doesn't understand anything, so it can't tell on its own whether what it generates is true. But it's supposed to generate things that are truthful. 

Wouldn't it be possible to add a tool in which the text it has produced is then checked against news reports and other sources you have in order to figure out whether or not its text was plausible?

Marcus: Your first tool is a large language model that predicts things. The second tool is a filter that decides what's true or not. But the second tool requires a new kind of technology. People are trying it now and they're getting into trouble, because they're trying to do it mostly with large language models themselves, and large language models don't really have good reading comprehension. 

Here's an example. Jonathan Turley is a lawyer. Another lawyer typed into ChatGPT something like "Tell me about some lawyers involved in sexual harassment." It spat out that Jonathan Turley was guilty of sexual harassment and that it happened in Alaska. And it was made up. It made up a reference to a Washington Post article that didn't exist. [Reporters] Pranshu Verma and Will Oremus tried it on Bing. And Bing repeated the lie. And it went one step deeper: it repeated this allegation, which was not true, and pointed to a source. And the source that it pointed to was Jonathan Turley's own op-ed. These systems don't really interpret anything; Bing took as support something that meant the exact opposite. This is what happens when you have text prediction itself doing the fact-checking.

What you need in order to have an AI system that can take a sentence and verify it against some database of the world is an AI system that does what classical AI tried to do and what large language models are failing to do, which is to be able to parse sentences into logical form and reason over them. I think this is theoretically possible but don't think it's possible with current tools. So then the question is, when do people recognize that and start building new tools?
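[A minimal sketch of the kind of second tool Marcus is describing, under invented assumptions: a generated claim is reduced to a structured triple and checked against a small trusted fact store before it is asserted. The fact store and the crude pattern-matching "parser" here are hypothetical stand-ins; building a robust version of this pipeline is exactly the open problem he identifies.]

```python
# Sketch: verify a generated claim against a hand-curated fact store.
# The triples and the matching logic are illustrative placeholders only.

FACTS = {
    ("elon musk", "died_in", "2018 car crash"): False,   # known to be false
    ("elon musk", "ceo_of", "tesla"): True,              # known to be true
}

def parse_claim(text: str):
    """Crude 'logical form' extraction: find a known triple mentioned in the text."""
    text = text.lower()
    for (subj, rel, obj) in FACTS:
        if subj in text and obj in text:
            return (subj, rel, obj)
    return None

def verify(generated_text: str) -> str:
    triple = parse_claim(generated_text)
    if triple is None:
        return "UNVERIFIABLE: no matching fact; do not assert as true."
    return "SUPPORTED" if FACTS[triple] else "CONTRADICTED: block or correct this claim."

print(verify("Elon Musk died in a 2018 car crash, reports say."))
# -> CONTRADICTED: block or correct this claim.
```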


Read More: “Our Deep Blue Obsolescence” by Sam Kahn


Mounk: Tell us about an alternative way to build AI. 

Marcus: We need something that combines old things in some kind of new way. Since the 1940s and '50s there have always been two approaches to AI. One of them is the symbol-manipulating approach that you see in classical logic, mathematics, and computer programming, where you have, basically, a mental algebra: you define functions and variables. And that's been a very successful approach. Almost all of the world's software is written with symbol manipulation, including some AI. Most of Google Search is set up that way (or it used to be, at least). All of your GPS navigation systems were classical: route planning, taking you from place A to place B, is symbolic. 

Then there was another approach, which struggled for a long time but is now doing great: the neural network approach, which said, "We'll build AI to be something like the human brain." That's really a gross oversimplification of the brain, but each of these two approaches has real-world applications. They both have problems. The classical approach is very cumbersome; you have to handwrite a lot of rules, at least in the ways that we know how to build it right now. Every new domain takes a lot of work. The neural network approach is much more flexible; you can throw it at a lot of problems. But where the classical approach was pretty good at truth and reasoning (that's what it was built around), the neural network approach doesn't really do that at all. 

What we need is to combine the strengths. We need to have the reasoning capacity and the ability to represent explicit information of symbolic AI. And we need the learning from lots of data that we get from neural networks. Nobody's really figured out how to combine the two—I think, in part, because there's been almost a holy war in the field between the people following these two approaches. There's a lot of bitterness on both sides. If we're going to get anywhere, we're going to need to build some kind of reconciliation between these two approaches. 
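[A cartoon, under invented assumptions, of the hybrid Marcus is calling for: a statistical component proposes candidate answers with confidence scores, and a symbolic component enforces a hard, hand-written rule before anything is accepted. Neither component here is real; the point is only the division of labor between learned proposals and explicit reasoning.]

```python
# Neuro-symbolic sketch: a learned proposer (stubbed out) plus a symbolic filter.

def neural_propose(question: str):
    # Stand-in for a learned model: candidate answers with confidence scores.
    return [
        ({"person": "Elon Musk", "died": 2018}, 0.62),
        ({"person": "Elon Musk", "died": None}, 0.38),
    ]

def symbolic_check(candidate: dict) -> bool:
    # Hand-written rule: someone known to be alive in year Y cannot have
    # died in year Y or earlier.
    KNOWN_ALIVE_AS_OF = {"Elon Musk": 2023}
    died = candidate.get("died")
    alive_as_of = KNOWN_ALIVE_AS_OF.get(candidate["person"])
    if died is not None and alive_as_of is not None and died <= alive_as_of:
        return False
    return True

def answer(question: str):
    # Take the highest-scoring candidate that survives the symbolic filter.
    for candidate, score in sorted(neural_propose(question), key=lambda x: -x[1]):
        if symbolic_check(candidate):
            return candidate
    return None

print(answer("Did Elon Musk die in a car crash?"))
# -> {'person': 'Elon Musk', 'died': None}
#    The higher-scored but impossible candidate is rejected by the rule.
```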

Mounk: There's a famous joke that you go to a restaurant and you complain that the food is terrible and the portions are small. I can't quite figure out whether what you're concerned about with AI is the fact that it is quite powerful and that it therefore allows bad actors to do worrying things and that we don't have a solution to the problem of making sure that AI remains a faithful tool of humanity; or that you're worried that because we are following the wrong scientific approach, we're not going to be able to produce really powerful AI at all. 

If you're telling me that the systems are really constrained, and as long as we keep barking up the same tree, we're never going to get to some form of artificial general intelligence, that sounds reassuring to me, but that seems to be what you're worried about. Help us puzzle through this bad-food-and-small-portions problem.

Marcus: I have two sets of worries. One is about the current AI, which I find to be mediocre, and the other is about future AI, which could be very powerful. Right now the problem is that we can't trust the AI that we have, and people tend to trust it anyway. The systems are way too stupid to have guardrails that would actually keep them from being used to generate misinformation. They can create the defamatory stuff I talked about. They're not smart enough not to tell someone to commit suicide. A little bit like teenagers, they are starting to be powerful, but they don't really know how to rein that power in. 

What happens if they really do get smart? That's a separate set of questions, and also a poorly answered set of questions. In both cases, it's really about control. One is about controlling a mediocre intelligence. And the other would be: "How do we control an intelligence that is smarter than us?" We're not there yet, but they're both really about control. We don't really have control of these things. It's not that we can't make machines orderly, predictable, and verifiable. The software that we use in airplanes now, for example, is formally verified. We know that it will work. Whereas large language models, these chatbots, are not formally verified to do anything. You never know what you're going to get from these systems. 
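[A rough illustration of the contrast Marcus draws, under simplified assumptions: classical software can be written to satisfy an explicit property and checked against it, while a sampling-based text generator offers no comparable contract. Real avionics verification uses formal proofs far beyond the simple checks shown here.]

```python
# Classical, specifiable code: the contract can be stated and checked.
def clamp_throttle(setting: float) -> float:
    """Contract: the output always lies in [0.0, 1.0]."""
    return max(0.0, min(1.0, setting))

for s in [-5.0, 0.0, 0.3, 1.0, 99.0]:
    assert 0.0 <= clamp_throttle(s) <= 1.0  # the property holds on every input tested

# A sampling-based generator: the same input can yield different outputs,
# and there is no comparable property we can state about what it will claim.
import random

def toy_text_generator(prompt: str) -> str:
    return prompt + " " + random.choice(["is safe.", "is on fire.", "does not exist."])

print(toy_text_generator("The bridge"))  # different runs, different claims
```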

I'm concerned right now about systems where you never know what you're going to get, particularly because they've been radically and quickly adopted. We have hundreds of millions of people using them. We don't really fully understand the downside risks. And we're giving them too much power. For example, in the last month people have been playing around with something called Auto-GPT, where an unreliable AI system calls another unreliable system, and they've set it up so that these systems have direct internet access, direct memory access, and source code access. From a cybersecurity perspective alone, that's a complete disaster waiting to happen: you have bots that aren't necessarily going to do what you want on any given trial, writing code that isn't necessarily going to be reliable. I talked to someone very high up at Microsoft recently who had worked in cybersecurity for a long time, and they've spent years trying to teach programmers how to follow certain conventions so that code will be safe and won't be hacked. These systems don't have the conceptual wherewithal to do that. They're not smart enough to say, "Well, I'm being used now in a phishing scheme, where people are trying to steal credentials." They'll happily comply. You could imagine a world where we'd say it's not legal to produce software that is going to be used at mass scale to steal people's credentials. But this software can be used for that. And it will be. There's no law to prevent it. Interpol is very frightened right now.

Mounk: Let's start with the problem of mediocrity. And then we'll move to the problem of security later in the conversation. All of that stuff sounds like bad-ish things are going to happen. None of that is trivial. And I don't want to make light of that. But that feels to me like it is relatively easily solvable. It all seems like the kind of stuff that people can figure out by human custom and regulation.

Is this a sort of transitional problem, where people are rushing to use these systems without thinking about it too much, and at some point we're going to realize that there's a very real problem and then fix it? Or is there a more profound threat here?

Marcus: I think it's transitional for the most part. But it's serious. If you have all of this unreliable software, you can get into situations where you have, for example, accidental nuclear interventions. Somebody decides they're going to use these tools to take over the stock market, and they make all of these bad things happen in order to drive the market this way or that; then we blame it on Russia, let's say (even though Russia, in this case, is not guilty), and then we go attack Russia. Things can get really bad really fast. There's also some kind of chaos risk because of the scale at which these things can be used. Maybe we get good at detecting those things. But right now, we have no regulation and no tools, and so the transition could be really tough. It could get really ugly before we get a good handle on these things. I think we probably do get a handle on them eventually. But we're in for a pretty rough ride for the next several years.

Right now, you trust humans to make the decisions. But then some fool hooks up a large language model that hallucinates things to the train network, and 500 trains go off of bridges. There are scenarios where humans get fooled by new kinds of things that machines suddenly can do. There are many such possible scenarios, and I think each of them, individually, is pretty unlikely. But you sum all of those risks up and it's enough to make me nervous. Let's put it that way.

Mounk: I want to make sure that we actually get to talk not about the risk of mediocrity, but about the risk of superiority. Let's say that people listen to you and figure out a way of combining these two different approaches to AI, the neural network approach and the classical approach. We break through those barriers and AI is no longer mediocre. It's capable of figuring out what's true and what's false, and it's capable of producing those 100,000-word novels that are beautiful and internally coherent. 

What kind of threat does that pose to humanity, and what do we need to do now in order to get ahead of it?

Marcus: What you left out, in some sense, is any notion of a moral or ethical module, Asimov's laws and things like that. If there is no such notion in these machines, then we're in trouble. People have these weird examples about paper clip maximizers (the machine will want to do nothing but make paper clips and will turn us all into paper clips), and I've always found them to be fanciful. But whether they're fanciful partly depends on whether the machines can, first of all, calculate the consequences of the actions that they might undertake, and, secondly, whether they care about the consequences of those actions for humanity. If you build a smarter artificial intelligence that can reason, it seems to me that one of the things it could reason about is ethical values: if your plan has the consequence of reducing the number of human beings, you had better have an awfully good reason for that and not do it capriciously (or, better yet, not do it at all). And if you had systems that are bound by law to do ethical reasoning, and are capable of doing ethical reasoning, then a lot of our worst fears might go away. But I don't think we have any clue at this moment how to do that. I don't think there's enough research on it. 

Nobody is really working hard enough on this problem right now. Unless the machines have some kind of ethical values, we are in trouble. If you have smarter machines that essentially function like sociopaths, that are indifferent to the consequences of their actions, and that are empowered to control our electrical grid or manipulate people, then you have a lot of potential problems. The manipulation part is really important. This is already a problem: these machines can tell people to do stuff, and they can say it in ways that convince people, and that opens up a lot of risks of harm.

Mounk: I guess I'm instinctively skeptical of the idea that some abstract reasoning ability will necessarily lead machines towards any moral boundaries; you would have to have a slightly strange set of assumptions about how logic necessarily leads to a certain set of moral values. We know that we as humans have not acted in moral ways. We think of ourselves as moral agents and are in important ways guided by values, but we also think that we have some kind of moral justification for acting in very dominating ways towards non-human animals. And humans have thought that they had all kinds of plausible justifications for acting in very dominating ways towards other humans. Even though we're pretty smart, and we're pretty good at means-ends reasoning and at accomplishing our goals, that didn't somehow stop us from acting in those ways. 

I think you'd have to have a pretty fanciful theory of morality to argue that a form of morality would spontaneously evolve in machines that always spares humans or takes the well-being of humans as a very, very weighty input.

Marcus: I'll put it this way. If we build super-smart machines, give them a lot of power, and they have no norms and values and laws, then we're in deep trouble. Whether we can solve that problem depends on whether we can do a really good version of a really hard problem that you just put your finger on. 

And if we can't, maybe we shouldn't be building those machines, right? Or not empower them and keep them in sandboxes. But it's not clear that we can keep them in sandboxes. And so these are serious problems. I'm not with Eliezer Yudkowsky, saying there's a 100% chance that all humans die. I think that partly depends on what the machines even want to do and on whether we can construct them with an adequate moral sense. But it's a real problem. 

And I am with Yudkowsky in saying that we don't really have a plan here. He has his followers, but most people kind of make fun of him. But nobody really has a plan. I think that the plan has to involve engaging with exactly the hard questions that you just talked about: how do you make a system that can behave ethically and that doesn't just license a lot of bad behavior? I think that the existence of systems of human laws that are reasonably functional gives us some hope that this might be a solvable problem, but it doesn't guarantee that it's a solvable problem. And the weaknesses of those human systems are likely to be inherited by machines. So we need a real plan here. I don't think we have one.


Read more: “Why AI Will Never Rival Human Creativity” by William Deresiewicz


Mounk: Going back to the problem of terrible food and too-small portions: if you're convinced that it's going to be incredibly difficult to govern these actually superior machines, then shouldn't you be cheering on your field going in the wrong direction? If you're right in your underlying beliefs about what will make for effective AI, aren't you actually posing a huge danger to democracy and to humanity? Shouldn't you be hoping that people keep going in the direction of ever more sophisticated, mediocre AI that gets a little bit better and a little bit better at writing fake op-eds but doesn't ever go beyond that?

Marcus: I worry about it. Sometimes I think we're going to wind up building better AI at some point no matter what I say, and that we should prepare for what we're going to do about it. I think that the concerns with over-empowered, mediocre AI are pretty serious and need to be dealt with no matter what. I signed the letter calling for a pause. I don't expect that it's going to happen. But I think that we as a society should be considering these things. I think we should be considering them even in conjunction with our competitors. But the geopolitical reality is probably that people will not. We have to prepare for that contingency as well. Sooner or later, we will get to artificial general intelligence, and we should be figuring out what we're going to do when we get there.

I'm not here to say I see a great solution to all of this. These are serious problems, and I'm worried about them. The best solution that I propose is to build an international agency, at least a little bit like the atomic energy agencies, in which there's international collaboration, scientists working on the problems, money to do scientific research to build the new tools that we need, and coordination on governance. That's a meta-solution. It's not a detailed, specific solution. But I think that the problems here are serious enough that we can't just assume that they're going to solve themselves. We have so little global coordination around what to do about any of this stuff. It's a mess right now. I can't guarantee that any of these solutions are going to work.

In one of Geoff Hinton's first televised interviews after he left Google, somebody asked, "So how do we answer this?" And he said, "I don't know. I'm trying to raise awareness of the problem." Hinton and I have historically disagreed completely about the role of symbolic AI, and we've had nasty disagreements. He has his "favorite Gary Marcus quote" on his web page, which tells you how much he doesn't like me. And yet, on this, we basically agree. I said that there's short-term risk, there's long-term risk, and we need international scientific collaboration. And Hinton said, "I pretty much agree with every word." So you have people on opposite sides of the spectrum regarding the issues we talked about earlier, both saying, "Yes, there's a lot to worry about, short-term and long-term. We need more scientists involved and we don't yet have a solution." We really need to wake up and start dealing with this right away.


Please do listen and spread the word about The Good Fight.

If you have not yet signed up for our podcast, please do so now by following this link on your phone.

Email: podcast@persuasion.community

Podcast production by John T. Williams and Brendan Ruberry.

Connect with us! Spotify | Apple | Google

Twitter: @Yascha_Mounk & @JoinPersuasion

YouTube: Yascha Mounk, Persuasion

LinkedIn: Persuasion Community
