IS AI AN EXISTENTIAL THREAT?
Many people were surprised by the March 2023 open letter from tech leaders requesting a moratorium on AI development. Three thought leaders debate whether or not AI is an existential threat to humanity.
Steve Paikin: In March of 2023, an open letter was released by technology leaders calling for a six-month pause on AI development. Elon Musk signed it, as did Apple co-founder Steve Wozniak. It said, in part:
AI systems with human-competitive intelligence can pose profound risks to society and humanity… Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever-more powerful digital minds that no one – not even their creators – can understand, predict or reliably control.
Soon after, Geoffrey Hinton, known as the Godfather of AI, announced his resignation from Google, saying: "Look at how it was five years ago and how it is now. Take the difference. Propagate it forwards. That's scary." Gillian, how would you characterize this moment in history for AI?
Gillian Hadfield: I think we are at a real inflection point in thinking about AI. As you pointed out, we are seeing tremendous advances. What we saw in the fall of 2022 with ChatGPT was exciting and new, and I think people are now saying, "Hey folks, maybe we're going a bit too fast. Maybe there's a lot more happening here than we've understood." I think that's what the pause letter is about.
SP: Pedro, what is your view on this?
Pedro Domingos: I don't think we're going too fast at all. In fact, I don't think we're going fast enough. If AI is going to do things like cure cancer, do we want to have the cure years from now or yesterday? What is the point of a six-month moratorium? To me, that letter is a piece of hysteria. The bigger worry for me – and most AI researchers – is not that AI will exterminate us; it's that a lot of harm will be done by putting in restrictions, regulations and moratoria that are not needed.
SP: Jérémie, what's your take?
Jérémie Harris: It's clear that we've taken some significant steps towards human-level AI – in the last three years in particular. So much so that we have many of the world's top AI researchers, including two of the field's three earliest pioneers, wondering aloud about it. I might not go that far personally, but this is being talked about through that lens. By throwing more data and processing power at these techniques, we might be able to achieve something like human-level or even superhuman AI. If that is even a ballpark possibility, we need to contemplate some fairly radical shifts in the risk landscape that society is exposed to by this technology.
There are many dimensions to consider, but a key one is malicious use. As these systems become more powerful, the destructive footprint of malicious actors that use them will only grow. We've already seen, for example, China using powerful AI systems to interfere in Taiwan's electoral process. We've seen cybersecurity breaches and malware being generated by people who don't even know how to code. Then, of course, there's the risk of catastrophic AI accidents, which folks like Geoff Hinton are flagging. Those two broad categories of risk are very significant.
SP: Gillian, let's circle back to Pedro's initial comment, that if you want a cure for cancer, you don't move slower, you move faster. What do you say to that?
GH: I actually agree. There are lots of potential benefits to be had and we absolutely want them to materialize. We want to continue doing our research and building the system. That's a really important point. But if you think about medical research, it takes place within a regulated structure. We have ways of testing it and there are clinical trials. We have ways of deciding which pharmaceuticals, medical devices and treatments to put out there. With AI, we're seeing such a leap in capability that the ways in which it transforms research and work have basically outstripped our existing regulatory environments – and we haven't yet built a system that ensures AI is working the way we want it to.
SP: Pedro, on the topic of searching for a cure for cancer, what about the notion that patient-protection regulations don't yet exist for AI, and therefore we ought to be careful?
PD: I think the analogy between AI, medicine and drug approval is mistaken. Each area faces problems of its own. The drug approval mechanisms that we have in place are costing lives, and even the Food and Drug Administration in the U.S. understands that it needs to change. So that is hardly a good model. Regulating AI is not like regulating drugs. It's more like regulating Quantum Mechanics or Mechanical Engineering. You can regulate the nuclear industry, cars or planes; you can regulate – and should and do regulate – specific applications of AI. But regulating AI itself doesn't make sense.
I think we definitely need to "ask questions before we shoot," make sure we understand the technology and figure out what needs to be regulated and what doesn't. I think what OpenAI has been doing is great. The best way to make AI safe is to put it in the hands of everybody, so everybody can find the bugs. We know this from Computer Science: The more complex the system, the more people need to look at it. What we don't need is to put this in the hands of a committee of regulators or experts to figure out what's wrong with AI. I think that's the wrong approach.
SP: Jérémie, do you think AI poses an existential risk to humanity?
JH: I think the argument that it does is backed by a lot more evidence than most people realize. It's not a coincidence that Geoff Hinton is on board here. It's not a coincidence that when you talk to folks at the world's leading AI labs – the ones that are building the world's most powerful AI systems, the GPT-4s – you hear people talking about the probability that this will amount to an existential risk.
One of the things these people are discussing is the concept of "power-seeking" in sufficiently advanced systems. That's one concern that Geoff Hinton put on the table, and it's been written up and studied empirically. I think it's something we should take seriously. Nothing is guaranteed. That's part of the unique challenge of this moment. We've never before made intelligent systems smarter than us. We've never lived in a world where those systems exist. So we have to deal with that uncertainty in the best way we can. Part of that entails consulting with folks who actually understand these systems and who are experts in technological safety.
SP: Gillian, what was your reaction when you heard Geoffrey Hinton's comments?
GH: I think Geoff was truly surprised by the advances he'd seen in the previous six months. Obviously, he's been tremendously
close to this. I think a number of people didn't realize that scaling up large language models and generative models would produce the kind of capabilities we're seeing. I've had lots of discussions with Geoff – he's on my Advisory Board at the Schwartz Reisman Institute – and this was an important moment for him as to the nature of the risk. If you listen to what he has to say, it's not "I know for certain that there is an existential risk," it's "There is so much uncertainty about the way these things behave that we should be studying that problem and not getting ahead of it." That is an important clarification.
SP: Pedro, when someone like Geoffrey Hinton rings the existential bell, does it not give you pause?
PD: I've known Geoff for a long time. He's a great researcher; but he's also an anarchist from way back and a bit other-worldly, and I think we need to be careful about overinterpreting what he says. People need to know that most AI researchers do not think AI poses an existential threat. But of course, it's interesting to try to understand why some people do believe that.
There are many researchers who don't think we'll ever get to human-level AI, which is quite possible. I do happen to think we'll get there, but it won't happen tomorrow. People need to understand that, number one, we are still very far from human-level AI. Geoff has said things like, "What if an AI wants to take over?" To which Meta's Head of AI Yann LeCun responded, "But AIs don't want anything." It's an algorithm; it's not something that we can't control.
When you use machine learning, the AI does something you can't predict, but it is doing it to optimize the objective functions that you have determined. That's where the debate needs to be: how to choose and optimize those functions. AI is for solving problems that are intractable, meaning it would take exponential time to solve them exactly; that's the technical definition of AI. But it's easy to check the solution.
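As an illustrative aside (not part of the conversation), here is a toy Python sketch of that "hard to solve, easy to check" asymmetry, using the classic subset-sum puzzle with made-up numbers: finding an answer can require searching exponentially many combinations, while checking a proposed answer takes a single pass.

```python
# Toy illustration of "hard to find, easy to check" (subset-sum).
# The numbers and target below are invented purely for demonstration.
from itertools import combinations

def find_subset(numbers, target):
    """Brute-force search: may try up to 2^n subsets (exponential time)."""
    for size in range(len(numbers) + 1):
        for subset in combinations(numbers, size):
            if sum(subset) == target:
                return subset
    return None

def check_subset(subset, numbers, target):
    """Verification is cheap: a membership test and a sum (roughly linear time)."""
    return all(x in numbers for x in subset) and sum(subset) == target

numbers = [3, 34, 4, 12, 5, 2]
answer = find_subset(numbers, target=9)                 # the slow part
print(answer, check_subset(answer, numbers, target=9))  # the fast part
```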
GH: I want to put the existential threat question in a broader lens, as well. It's true that malicious use is something to be concerned about, but I don't think the risks are around a "rogue AI" that develops its own goals and, Terminator-style, sets out to kill us all.
The thing I worry about is our complex systems – our financial systems, economic systems and social systems. I worry about the capacity for autonomous agents – which are out there already, by the way, trading on our financial markets and participating in our labour markets, reviewing candidates for jobs. When we start introducing more and more autonomous agents into the world, how do we make sure they don't wreak havoc on our systems?
SP: Jérémie, do you worry that we are entering Terminator territory?
JH: A couple things. First off, on the question of "These are just algorithms, what do they really want?" and so on, this is where the entire domain of power-seeking comes up. As I indicated, this is an entire subfield in AI safety that is well researched. These objections are very robustly addressed, at least in my opinion. This is a real thing. And the closer you get to the centres of expertise at the world's top labs – the Google DeepMinds, OpenAIs, the very labs building ChatGPT and the next-generation systems – the more you see the emphasis on this risk class.
A poll from a few months ago showed that 48 per cent of general AI researchers estimate a 10 per cent or greater probability of catastrophic risk from AI. Imagine if you were looking to get on a plane and 50 per cent of the engineers that built it said, "There is a 10 per cent chance that this plane is going to crash."
SP: Yann LeCun recently tweeted the following: "We can design AI systems to be both super-intelligent and submissive to humans. I always wonder why people assume that intelligent entities will necessarily want to dominate. That's just plain false, even with the human species." Jérémie, why do you assume that if AI does become more intelligent than us, it will automatically want to conquer us?
JH: This is actually not an assumption, it's an inference based on a body of evidence in the domain of power-seeking. Pioneering work at frontier labs suggests that the default path for these systems is to look for situations to occupy and position themselves in. Basically, the systems seek high optionality, because that is useful for whatever objective they might be trained or programmed to pursue. As AI systems get more intelligent, the concern is that they will start to recognize and act more and more on these incentives.
SP: While you were speaking, Pedro had a big smile on his face. What's behind that smile, Pedro?
PD: I just have a hard time taking all of this seriously. The people who tend to be most hysterical about AI are the ones who are furthest from actually using it in practice. There's room for a few of these people in academia; the world needs that. But once you start making important societal decisions based on them, you need to think twice.
I agree with Gillian that these Terminator concerns are taking attention away from the real risk we need to be talking about, which is the risk of malicious use. We are going to need something like "AI cops" to deal with AI criminals. We have to face the problem of AI in the hands of totalitarian regimes, and democracies need to start making better use of the technology.
The biggest problem with AI today, not tomorrow, is that of incompetent, stupid AI making consequential decisions that hurt people. The mantra of the people trying to control AI is that we need to restrict it and slow it down, but it's the opposite: Stupid AI is unsafe AI. The way to make AI safer is by making it smarter, which is precisely the opposite of what the moratorium letter is calling for.
GH: As indicated earlier, I signed that letter, and I signed it so that we would have these very conversations – not because I think it's essential that we stop progressing. I don't believe we're on a precipice; but I do think it's critical to think carefully about all this. These systems are being built almost exclusively inside private technology labs. I work with folks at OpenAI, so I know there is a lot of concern there about safety. They're making decisions internally about how to train AI so it "speaks better" to people, when to release it, to whom and what limits and guardrails should be put in place. These are things we should be deciding publicly and democratically, with expertise outside of the engineering expertise that is currently dominating this arena. As a social scientist, an economist and a legal scholar, I think about the legal and regulatory infrastructure that we need to build, and I don't see us paying enough attention to that set of questions.
SP: When the printing press was invented, we didn't know what impact it would have. We had to use it for a while in order to understand that. Same goes for the Internet, the cotton gin and the steam engine. Why is this technological moment any different from previous discoveries?
JH: Because we are currently on a trajectory to build something potentially smarter than ourselves. That may or may not happen, but if it does, we're going to find ourselves in a place where we just can't predict anything important around how the future is going to unfold.
Throughout history, human intellectual capacity has been the one constant. We're all born with biological thinking hardware, and that's all we've had to work with. But we just don't have a point of reference here, which is why it's so important to track this and think deeply about where it might go.
PD: It's true that AI introduces uncertainty because it is powerful and can be used for lots of different things, and we can't possibly predict them all. But that is good! The best technology is like the examples you gave earlier. Most of the best applications are things no one could have anticipated. What happens is, we all work to optimize good applications and to contain bad ones. AI is still subject to the laws of physics and the laws of computation, and to the sociology of sociotechnical systems. We're actually going to have a lot of control over it, even if there are aspects of it that we don't yet understand.
A good analogy here is a car. As a user, you don't feel an urgent need to understand exactly how the engine works just because it could blow up one day. That's for the mechanic. What you do need to know is where the steering wheel and pedals are, so you can drive it. And AI does have a "steering wheel" – it's called the objective function, and we need to understand how to "drive" that.
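As an illustrative aside (again, not part of the conversation), here is a minimal Python sketch of that "steering wheel" idea, using made-up data: the same learning loop, handed two different objective functions, is steered to very different answers.

```python
# A toy sketch of the "steering wheel" idea: one gradient-descent loop,
# steered by two different objective functions. The data and step size
# are invented for illustration; this is not any lab's actual code.
data = [1.0, 2.0, 2.0, 3.0, 100.0]   # one extreme outlier

def fit(objective_gradient, steps=5000, lr=0.01):
    """Plain gradient descent on a single parameter (a constant prediction)."""
    w = 0.0
    for _ in range(steps):
        w -= lr * objective_gradient(w)
    return w

# Objective 1: mean squared error -- its gradient pulls w toward the mean.
mse_grad = lambda w: sum(2 * (w - x) for x in data) / len(data)

# Objective 2: mean absolute error -- its gradient pulls w toward the median.
mae_grad = lambda w: sum((w > x) - (w < x) for x in data) / len(data)

print(fit(mse_grad))  # ~21.6: the outlier drags the answer up
print(fit(mae_grad))  # ~2.0: the outlier is largely ignored
```

Nothing about the loop changes, only the objective, yet the behaviour it converges to is quite different; that is why the choice of objective function is where so much of the practical control lies.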
I very much agree that we need more than just the technologists thinking about this. The limiting factor in the progress of AI is actually not on the technical side, it's around the human ability to use it and come to terms with it. That's what's going to set the right limits.
SP: Gillian, do you think this technological moment is different from previous ones?
GH: I do. What is critical here is the speed with which AI transforms entire markets. For example, in the legal domain, I've seen how, in just a few minutes, tools like ChatGPT can do what it would usually take a week for a lawyer to do. That is going to be very disruptive, and the potential scale is massive. Because it's a general purpose technology, it can and will show up everywhere.
To return to the analogy of the automobile, what I am concerned about is that we live in a world with copious regulation around how to build and drive automobiles. It took 50 years before we built all that regulatory structure. There was basically nothing in the beginning. And that's just one part of our economy. I don't think we can approach AI as we have with previous technologies and say, "Let's just put it out there, find out how it works and then figure out how to regulate it." I think we have to be much more proactive about it.
SP: There appears to be consensus that AI must be developed in a responsible way. How do we do that?
PD: We don't even know what a real AI is going to look like, because we don't have one yet. So trying to regulate in advance is almost guaranteed to be a mistake. What needs to happen is, the government and the regulatory organizations need to have their own AI, whose job it is to deal with the AIs of the Googles and the Amazons and so on.
There is no fixed, old-fashioned set of regulations that will work with AI. We need something as adaptive on the government side as it is on the corporate side, so AIs can talk to each other. This is already starting to happen in the financial markets, because there is no choice. There's a lot of bad activity going on and you've got to have the AI in place to deal with it.
GH: There are two things I want to pick up on. First, I agree that we're going to need AI to regulate AI. I actually think we need to build that as a competitive sector unto itself. But it's important to recognize that some of the building blocks that allow us to regulate other parts of the economy are not currently in place for AI. For example, one thing we can do right now is create a national registration body – a registry system so that we have eyes on where AIs are, how they've been trained and what they look like.
I think this should be a government function. Every corporation in the country has to register with a government agency. They have an address on record; they have the name of someone who is responsible. The government can say, "Okay, we know you're out there." We register our cars so we know where all the cars are. Right now we don't have that kind of visibility into AIs. So that's the starting point.
People would have to disclose basic pieces of information about the models to government – not publicly, not on the Internet. That would give us visibility into this as a collective. It would also provide us with the tools needed if a dangerous method of training or a dangerous capability emerges. We don't even have that basic infrastructure in place yet.
SP: Last word goes to Jérémie. How do we need to proceed?
JH: I think it's worth noting that the most advanced AI capabilities require giant processing-power budgets – on the order of hundreds of millions of dollars. The cost of OpenAI's latest model is in the hundreds of millions, and we're seeing that cost rise and rise. That immediately implies a bunch of counter-proliferation levers. And OpenAI has really led by example by inviting third parties to audit its AI models for behaviours like power-seeking and malicious capability. It would be great to see a lot more of that in the AI community.
Gillian Hadfield is a Professor of Law at the University of Toronto and Professor of Strategic Management at the Rotman School of Management. She is Director of the Schwartz Reisman Institute for Technology and Society at the UofT as well as AI Chair of the Canadian Institute for Advanced Research (CIFAR). Pedro Domingos is a Professor Emeritus of Computer Science and Engineering at the University of Washington and author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (Basic Books, 2018). Jérémie Harris is Co-founder of Gladstone AI and the author of Quantum Physics Made Me Do It: A Simple Guide to the Fundamental Nature of Everything (Viking, 2023).
This article is a condensed version of an interview from TVO's The Agenda, hosted by Steve Paikin. Video of the entire conversation is available on the TVO website: www.tvo.org/theagenda