Transcript: The Path Forward: Artificial Intelligence with Mustafa Suleyman

MS. PASSARIELLO: Hello, and welcome to Washington Post Live. I am Christina Passariello, The Washington Post Deputy Business Editor.
Today, in our Path Forward series, we're joined by Inflection AI CEO, Mustafa Suleyman. Welcome to The Washington Post Live, Mustafa.
MR. SULEYMAN: Great to be with you. Thanks for having me.
MS. PASSARIELLO: And congratulations on the release of your book this week, "The Coming Wave." So, let's get started. In your book, you essentially make the argument that, with AI, we can't live with it and we can't live without it.
Explain the paradox that we're in, sort of rushing towards AI everywhere, and the potential dystopia you see.
MR. SULEYMAN: Well, I think the first thing to try to accept is that this is a continuation of the fundamental process of science and technology over many, many centuries. And what that means is that these models will have a tendency to get easier to use and cheaper, and therefore to spread far more widely, just as every other technology in the history of our species has followed exactly that same pattern. People demand it, and so they want to get access to it all over the world. And if you start from that kind of factual basis, then you can start to think: okay, what are the ramifications of widespread proliferation of very capable AI systems?
MS. PASSARIELLO: So, tell us a little bit about how you see AI being used in the world right now. Explain the way that you see it transforming our society in terms--from the usefulness perspective.
MR. SULEYMAN: Right. Well, the way I think about it is that everything that we have produced as a species is the product of intelligence. Intelligence is the ability to process information and use that information to make predictions about the way that the world might unfold around you.
And once you've made a prediction, whether it's getting up and going to open a door or imagining how the stock market might unfold, you can use that prediction to make an intervention. You can invent something; create something; physically do something; say something to somebody else. And that is, at a high level, what these models are actually doing. They're learning from very large amounts of past data to make predictions about how the future unfolds and then, ultimately, to take actions based on those predictions, to try to intervene to adjust the course of some environment.
And you can obviously imagine that with a self-driving car, but you can also imagine it with an AI doctor that is able to read many millions of radiology scans, for example, to try to improve the accuracy of its diagnostics. In every setting, in every area of our lives, AIs are, for sure, going to make us radically more efficient, way more productive, save us vast amounts of time, and generally deliver really phenomenal value.
MS. PASSARIELLO: I mean, it's really fascinating because now we have access to these chatbots that can have conversations with us, provide information; they can, you know, produce images; they can produce voices. It's really come so far, so quickly to the public in the last, you know, less than a year. I want to go back to 2010 for a moment, because you cofounded DeepMind that year, to, quote, "replicate the very thing that makes us unique as a species: our intelligence."
Can you take us back to that time and help us recall what the AI context was. What was it like to imagine creating intelligence, then? And could you begin to see, at that point, where we are now in terms of AI?
MR. SULEYMAN: Well, AI was a very obscure field. It was really only a tiny group of academics that were thinking about these ideas, let alone actually working on them. And in fact, there weren't any companies at the time that were actually engineering AI. You know, now, this is a massive, you know, big tech industry, full of open-source developers. It's a huge, you know, ecosystem. At the time, it was very, very obscure. And my personal motivation for trying to start DeepMind was that I realized that if software could find more efficient ways to learn and to make good predictions, then we could use that to make the world a better place.
And I had come from working in local politics, in activism. I had started a nonprofit when I was younger. And I always believed that a life worth living was one where you really had a positive impact in the world. And I saw AIs as a way to accelerate that objective, and really see if we could actually use it to invent new knowledge. You know, knowledge is really the bedrock of our civilization. You know, we discover things and invent things and then share that information widely across the world. And over decades, it's taken up by other people; it evolves and adapts. And we just collectively improve the corpus of knowledge that helps us to live well as a civilization.
And now, I think we're on the cusp of having AIs, or intelligent tools, that can help create new knowledge, right, and give us all access to information and help us to make predictions, help us organize our lives. And I really think that this is going to end up being one of the most productive periods in the history of our species, if not the most productive.
MS. PASSARIELLO: And yet, you also identify these massive--I mean, just existential risks that come with that productivity.
Help us define what those are. And when did you begin to conceive of the risks involved, the ones that come alongside this AI?
MR. SULEYMAN: I mean, any new technology comes with serious downsides. You know, society always has to adapt to a new suite of risks, whether it was nuclear power or the steam engine or electricity, cars, trains, planes.
You know, I have a funny anecdote in my book that the first time people in the early 1800s saw a train on a railway track, a number of spectators, including the local MP at the time, in Liverpool, actually didn't realize that the train was moving towards them and that they needed to get off the tracks to be able to avoid being hit by the train. They ended up being killed on the day of the launch. And it's just kind of a remarkable insight--it's hard to even believe that's true, now, that a new technology could be so unfamiliar and so strange that we wouldn't even grasp the concept of it moving towards us and not being able to stop.
And you know, I think that's a kind of really interesting metaphor for all technologies. At first, they seem really threatening and really confusing and overwhelming. But, over time, we realize their limitations and we see the ways in which they're fallible and the ways in which we need to restrict and control them. And we ultimately end up harnessing the benefits whilst minimizing the downsides. I think there are lots of examples of this throughout history.
If you take aircraft flight, for example, you know, it's kind of an amazing thought to think that we all could get into a tin tube 40,000 feet in the sky and, you know, fly halfway around the world, and have it be a consistent, reliable, and safe way of getting around--one of the safest forms of transport. And that's because, you know, the industry as well as the regulators have spent the last, you know, six or seven decades putting in place regulations of all kinds, big and small, and adopting new best practices every time they see an error.
You know, there is an onboard flight recorder, the black box recorder, tracking absolutely everything that happens in real time on that aircraft, everything that the cockpit says, the pilot says, the staff says, all the telemetry and so on. And that information is then fed back into other competitors if there is a major incident and best practices need to be updated. So, there's plenty of precedent for good regulation, here. And I actually think we're at a moment where we need to sort of quell the alarm a little bit. I think people have got a little bit ahead of their skis in terms of some of the panic we're seeing around the consequences, here. And I also think that we really have to focus on the huge potential upsides.
MS. PASSARIELLO: But let us come back to the idea of--you know, you speak of the precedents with trains and with planes. Was there a moment for you, with AI, that was similar--[audio distortion]--was there a moment at which you thought you saw these risks of AI? Was it a launch or an incident that happened with AI?
MR. SULEYMAN: Well, I mean, I was aware of and thinking about the risks and the ethics of AI from the day we started the company. Our business plan read, "Building artificial general intelligence safely and ethically for the benefit of humanity." I mean, we were very aware that, you know, if you trained an AI to recursively self-improve, that is, in an uncontrolled way update its own code and try and improve itself without a human in the loop, then that could potentially cause, you know, really significant harm in the very, very long term.
So, that's always been something that we were conscious of and have been thinking about and advocating for throughout, you know, essentially my entire career. I mean, we've experimented with lots of different governance boards and oversight mechanisms, different legal charters. You know, we've really been thinking about, what is the new type of structure that will help to contain AI and help us make sure that we get as many of the benefits as possible.
I wouldn't say there was a specific moment, per se; but over the years, I've seen the incremental progress of many different algorithms and, added together, you know, we can now really start to see that that progress is becoming exponential. You know, the amount of compute that is used to train these models is growing by 10x per year, every year for the last decade. So, that's an unbelievable trajectory of more and more processing power going into training these models.
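[Editor's note: For scale, growth of 10x per year compounded over ten years works out to a factor of 10^10, that is, roughly a ten-billion-fold increase in the compute used to train these models over the decade.]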
And you know, I think we're all very conscious that that is a trajectory which is going to deliver huge benefits but, you know, is also going to increase risk.
MS. PASSARIELLO: [Audio distortion]--of that sort of 10x increase, at least in the past--in the past year.
Now, in the midst of calling out the risks of AI, you also founded this new AI company, called Inflection, which, as we noted in the opening video, recently released a chatbot named PI.
Tell us how it is that you are ringing the alarm on the risks of AI, and yet you are part of this arms race, this rush to create and release AI onto the world.
MR. SULEYMAN: I don't see it as an arms race. I see it as the natural development of science and technology. I mean, we are inventing tools and, as with any new tool, that does carry risk, and we just have to make sure that we manage and mitigate the risks. It doesn't mean that we shouldn't progress these technologies. They're going to deliver enormous benefits, and many, many millions of people will be developing these models around the world.
The important thing to remember is that these techniques are also available in open source about 12 to 18 months after they've been developed at the biggest AI labs in the world, like my own, Inflection AI. And so, you know, it's a kind of contradictory process in the sense that the biggest models are getting enormous and they're getting better and better. But at the same time, there are plenty of very small and capable models that are proliferating in the open source. And both of those types of power need to be checked and held accountable: the centralized power of big tech, which is also accelerating, but also the ability for anybody in the open-source community to potentially use these models for dangerous purposes.
For example, you know, these models are pretty good at teaching and coaching. So, you can imagine that the AI could potentially get good at teaching somebody how to make a bomb or how to manufacture a biological weapon, for example. And we've already seen evidence of this among some of the big labs, and have obviously reported it directly to the agencies. And we've actually put together a working group for being able to red team or investigate these models, adversarially attack them and demonstrate their weaknesses, and share those best practices with each of the other labs so we can all patch those gaps as quickly as possible.
But we, of course, can't be sure that the open-source efforts are going to be as good and as responsive as that in terms of mitigating some of the downsides. And that's one of the areas of concern that I've been raising.
MS. PASSARIELLO: And so, tell me how you're approaching this PI chatbot. What makes it different than, you know, ChatGPT, which, as we saw recently in a Pew survey, about a quarter of Americans have experimented with?
How is your approach different? What are you trying to deliver that's different?
MR. SULEYMAN: Well, we believe that everyone in the world is going to end up having a personal intelligence. Just as we all had a personal computer, the future of computing is intelligence. And you're going to want a personal AI that is accountable to you, controlled by you, on your side, you know, in your corner. And it's going to get to know you pretty closely. You know, you'll have really close conversations with it about your past and your history, about your work, about your plans for the weekend, about what you want to cook for dinner this evening. And it will help you to prioritize and plan and book things.
Think of it as a chief of staff or a personal assistant. Instead of having to search around on Google and spend lots of time on a ton of different websites, you'll just ask your AI, you know, what's the answer to this question; can you help me get this thing done; can you remind me to do this at this time?
And so, you know, it isn't going to be quite ready this year or next; but certainly, in the next two to three years, it's going to get really, really good at some of those very practical tasks.
So far, we've designed it to have high emotional intelligence. So, if you look at ChatGPT and others, they have high IQ, right? They're pretty factual. They can give you a pretty kind of dry, like, Wikipedia-like response in some ways, but very useful.
We focused on the conversational style. So, PI is very natural and fluent. It will ask you follow-up questions; it will clarify what you've said; it will reflect back what you've said to make you feel heard and understood; and it really offers support and companionship. I mean, it's also very factual and very knowledgeable. And up to this point, we have one of the best large language models in the world. We published a tech report a few months ago that showed that we are as good as GPT-3.5 on all of the public benchmarks in terms of factuality, groundedness, and accuracy.
So, you know, we're a very small startup but we are definitely punching above our weight and doing our best to build a really safe and high-quality AI.
MS. PASSARIELLO: I mean, it sounds very fascinating. One of the things that is quite a topic of discussion these days is how the chatbots are trained, and what information they are trained upon.
We've seen a lot of news organizations begin to restrict access of their information to ChatGPT because of, you know--
[Technical difficulties]
MR. SULEYMAN: Yeah, I mean, I think you're asking about the training. Sorry, I may have cut out there, briefly. But on the training side of things, you know, we do train our models on very large amounts of data that is available on the open web.
And so, you can think of training as a two-step process. The first part is where you basically ask the model to look at hundreds of billions of words that are available on the open web and have been published on websites. And it kind of learns an all-to-all connection between each word and all of the other words that it's seen. And that way it sort of is able to broaden its context. Given a sentence or a paragraph, it can see, you know, sort of all the adjacent words and paragraphs that often appear when it sees that constellation of words and sentences at any given moment.
And so, what it gets quite good at is being able to predict what is likely to come next. So, it hasn't been programmed in any specific, structured way other than just learning to predict the next word in a given sequence. So, when you ask it a question, all it's doing is just continuing the sequence of words that it thinks is most likely to appear just shortly after that.
And that's actually produced amazingly accurate and very, very useful models.
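[Editor's note: To make the "predict the next word" idea concrete, below is a minimal, hypothetical Python sketch, not Inflection's actual training pipeline. It builds a toy table of which word tends to follow which in a tiny corpus and then continues a prompt by repeatedly picking the most likely next word; real models learn these statistics over hundreds of billions of words with neural networks rather than simple counts.]

    from collections import Counter, defaultdict

    # Toy corpus standing in for the "hundreds of billions of words" on the open web.
    corpus = (
        "the model learns to predict the next word "
        "the model learns from large amounts of data "
        "the next word is chosen from the words seen before"
    ).split()

    # Count how often each word follows each other word (a simple bigram table).
    next_word_counts = defaultdict(Counter)
    for current, following in zip(corpus, corpus[1:]):
        next_word_counts[current][following] += 1

    def continue_sequence(prompt_word, length=5):
        """Continue a sequence by repeatedly picking the most likely next word."""
        words = [prompt_word]
        for _ in range(length):
            candidates = next_word_counts.get(words[-1])
            if not candidates:
                break
            words.append(candidates.most_common(1)[0][0])
        return " ".join(words)

    print(continue_sequence("the"))  # prints: the model learns to predict the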
MS. PASSARIELLO: Thank you, Mustafa. And yes, I apologize for my question freezing, there.
So, let me follow that up with a question about the film industry and publishing: book publishers are beginning to get more aggressive about limiting access to their content for training these models.
Do you think that that kind of information is fair use for training models or should, you know, the film industry, book publishers, news publishers be compensated for the information that is training these chatbots?
MR. SULEYMAN: Yeah, I think if it's available on the open web then, so far, in the history of the internet, that has been the default rule. If it is available, then anybody can use it for any purpose.
If it is, you know, sort of publisher, copyrighted material and it's a film or something like that, or a copyrighted book, then obviously that's a different story. So, I think that the existing regulation should just provide sufficient guidance, here. I'm not sure that, you know, there needs to be anything above and beyond what's already there.
I know that, you know, some of the news outlets have locked up their APIs because they don't want the AIs to be able to use that material. And in general, that seems just totally correct. I mean, you know, news outlets should just flag and signal where they don't want to be part of the training corpus.
MS. PASSARIELLO: Okay. That's interesting.
Speaking of the movie industry, we're going to go to an audience question. We have a question from David Ceci, or "Chichy" [phonetic]. This is a good Italian name--from New York who asks, "Some people refer to AI as 'plagiarism software.' How would you address the concerns voiced by striking WGA and SAG-AFTRA members about the AMPTP's attempts to use AI to replace writers' words and actors' images?"
And AMPTP is, of course, the Alliance of Motion Picture and Television Producers.
MR. SULEYMAN: Yeah, I mean, you know, I think that throughout history ideas evolve because people copy them, edit them, update them, and iterate and produce new versions of them.
So, I think this is the natural course of things. You know, if these AIs produce word-for-word material that they have ingested then, I agree, that becomes a problem. That would be monetizing something that was otherwise copyrighted and produced by somebody else.
But if it's really capturing the zeitgeist, capturing the essence of a style or a story arc, you know, that doesn't seem to me that different to what happens, you know, typically with most writers, directors, creators, musicians, filmmakers, entrepreneurs, investors, you know, financial trading strategists. Everybody is always constantly looking at what everybody else is doing and trying to copy it, update it, adapt it, improve it. That's the nature of, you know, our kind of memetic evolution.
So, you know, I think there's a difference there to be struck between the word-for-word copying and then really just capturing the essence of something.
MS. PASSARIELLO: All right. That's a fascinating debate.
I know that there is a lot of effort to begin to regulate AI in the U.S., and you were a part of the Biden administration's recent meeting on it. There is another meeting coming up, I believe, next week, in fact, that is being hosted by Chuck Schumer, where he's gathering AI representatives, including Elon Musk and Sam Altman.
And from what I understand, you are not part of that group that's meeting next week. So, if you were there, what do you think that lawmakers and executives should focus on?
MR. SULEYMAN: So, look, I think the good news is that lawmakers are moving faster than we've seen them move in any other situation previously. So, I think it's a good thing that, you know, regulators are really paying attention; they're alert to it; and they've learnt the lessons of the last sort of 10 or 15 years or so of not being quick enough, I think, to regulate and to oversee some of the past waves of technology in big tech.
I think in this situation, the key thing that we've got to focus on is making sure that the AIs are transparent and accountable. So, you should be able to understand, you know, what has this model been trained on. Which data was included versus excluded? I think that's an important signal and a lot of people will care about that. But I think you've also got to observe the outputs. So, we need ways of auditing, you know, when it makes mistakes, how frequently, and when it says things which have the potential to cause harm.
You know, so, I think if it is coaching somebody to break the law or if it is, like, encouraging and inciting violence, you know, those things are, I think, relatively measurable and really should be excluded and we have to be able to find ways to have independent regulators be able to test for those kinds of things.
I think the good news is that there's going to be a commercial incentive to drive towards safety, right? So, the big companies do not want to create experiences that are harmful or damaging, and you know, certainly don't want to create experiences that encourage people to break the law. And so, you know, we've actually got to just get good at observing the outputs of these models.
If you take a look at PI, our AI, for example, which you can try at PI.AI or on the app store, it's very, very safe. Take all of the existing jailbreak mechanisms and prompt hacks, all the ways that people have been developing over the last six to nine months or so to try to, you know, trick these models and undermine them: none of them work on PI. And you know, many people have tried. And it just shows you that if you try to lead with a "safety first" approach, you know, then you can make a lot of progress.
If you try to encourage PI to be racist or homophobic or sexist in some way, you know, then it's actually going to be pretty resistant. And it will do so in a really polite and respectful way. It won't judge you or belittle you. It won't just give you a stock phrase saying, "I can't talk about this," which is what a lot of the other AIs will do. But it will ask you, you know, why do you think that; where are those thoughts and feelings coming from; you know, what makes you worried about that? You know, can you imagine what it might feel like for the other side? So, it's deliberately trying to encourage respect and empathy in others.
And you know, I think that that's a demonstration that, as these models get bigger and we get better at understanding them, they actually get safer and more controllable. And that's an important story and, you know, people can verify that. It's not just my claim. You can actually go and test the experience that we've built and actually see if you can induce it to say something which is biased or harmful or toxic or potentially causes harm in any way.
And so, you know, I think that the incentives are aligned here because there's a commercial interest to try and do it. But at the same time, that's never going to be enough, and we've seen that in the past many times. So, we need regulators to be technical and proactive and competent. And really, I think we're starting to see that in the space, which is good.
MS. PASSARIELLO: Well, it's definitely really interesting to think about the role of regulators. And yet, we're at this point where the companies have so much power. They are moving so quickly.
You left Google at the beginning of last year, and in your book, you write that it was just painstakingly slow to get products launched. And when you were there, you were working on AI safety; you were leading the AI safety role. Now that we've seen Google come out with its own chatbot and really accelerate its AI programs this year, do you think that it is taking the appropriate safeguards?
Because we saw for a long time that people who raised alarms within Google around the risks of AI, such as Timnit Gebru and, last year, Blake Lemoine with the LaMDA program, were not listened to. So, with your experience at Google, how do you think about that balance between going fast but being cautious?
MR. SULEYMAN: Yeah, I think that's a great point. I mean, just to clarify, I did not lead AI safety at Google. I was one of the members of the team on LaMDA, the conversational AI, which you just mentioned. And I was also a VP of AI policy. So, I worked on the ethics and governance of AI at Google and at DeepMind for many years.
You know, I think that Google moved slowly mostly because it was, and is, a large bureaucracy and didn't feel threatened. And I think that these new language models undermine Google's core search business. So, Google has been saying for many, many years that its job is to provide access to the open web and, you know, encourage people to go and visit third-party websites. Whereas these AI models actually just give you a perfect answer instantly. You know, and unfortunately, Google's business model has actually led to it being quite difficult to find information on the web, right? You know, you type something in; you see ten blue links; you go to a webpage. You're inundated with ads. The webpage creator is incentivized to create long, complicated pages to make you stay on that page for as long as possible because the longer you stay, the more ad revenue they'll earn.
So, the entire ecosystem needs rethinking. It's not working for users and, you know, it's actually slowing down us getting access to good answers and good information.
And so, as these new sort of chatbots like PI have come up, they've shown that there's actually a much more efficient and helpful way to get access to information and take decisions. And I think that we're going to see huge adoption of those models very, very soon.
MS. PASSARIELLO: Well, Mustafa, that is all we have time for today, unfortunately. There is so much to think about from the role of regulators and companies, and the consumers who are going to be using and are using so many of these technology products.
Thank you so much for joining us today, Mustafa.
MR. SULEYMAN: Thanks very much. It's been great to be here. Thank you.
MS. PASSARIELLO: Thank you.
And thanks to all of you for joining us. To check out what interviews we have coming up, please head to WashingtonPostLive.com to find more information.
I'm Christina Passariello. Thanks again.
[End recorded session]