This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
So as of last week, Bard, Google’s effort at building consumer-grade AI, is out in the world. And I think it’s fair to say the early reviews were not amazing. And I sort of imagined that we would discuss that at a really high level this week. But then last week, I got a phone call.
And someone I know at Google said, would you maybe want to talk this week to Sundar Pichai? And I said yes.
That didn’t take a lot of deliberation on that one.
[MUSIC PLAYING]
I’m Kevin Roose. I’m a tech columnist at “The New York Times.”
I’m Casey Newton from “Platformer.”
And you’re listening to “Hard Fork.” This week, we hit the road and take a trip to the Googleplex to talk to Google’s CEO, Sundar Pichai.
[MUSIC PLAYING]
So last week, we talked about Google’s new chat bot called Bard, which is supposed to be their answer to ChatGPT and some of these other generative AI chat bots. And I think it’s safe to say that the reaction among the public to Bard so far has been pretty lukewarm. My Twitter timeline is not full of screenshots of Bard conversations like it was of ChatGPT conversations late last year when that came out. It doesn’t seem to have landed with nearly as big a splash.
It was a little muted. You know, I think by this point, a lot of people have tried chat bots. And they feel like ChatGPT in particular gives really good results. And I think when people put these things through their paces, a lot of people felt like, I’m not sure if Bard is as good.
Right. And that kind of fits with this narrative that has been developing in the AI community over the past year or two, which is that Google is somehow behind in this race for generative AI. They’ve been working on this stuff for a long time. Google certainly had a dominant position in AI research for many years. They came out with this thing, the Transformer, that revolutionized the field of AI and created the foundations for ChatGPT and all these other programs.
But then the perception, at least, is that they kind of fell behind. A lot of their researchers left and did their own startups or went to competitors. They didn’t really turn their research into products at a pace that people could actually use and appreciate. And they got sort of hamstrung by a lot of — to hear people inside Google tell it — big company politics and bureaucracy. And I think it’s safe to say that they got sort of upstaged by OpenAI.
The release of ChatGPT last year seemed to catch Google off guard. And so in December, just a month after ChatGPT came out, my colleagues at “The New York Times” reported that Google’s management had declared a code red.
Yeah. And look, if you’re a business and you’re developing a lot of amazing technology, and no one else out there has released similar technology, that gives you a good reason to stay quiet and to not release it, right? We know that there are real safety concerns. There’s responsibility issues, ethics issues, regulatory issues.
Google actually did have a lot of good reasons to kind of sit on its hands. But then OpenAI forced its hand, and in a way that makes me wish I had used hands in a different way in the previous sentence because now I’ve just said hands too many times. Anyways, I love how dramatic a code red sounds. Makes you think like, what, are employees chained to their desks 24/7?
My understanding is that what it meant for a lot of people there was all of a sudden, the goals that you had to get your next promotion, get your bonus, they were tied to whether you hit some goal related to AI. And the question is, is that going to get them where they want to go, or is it going to be a moment where they act a little panicked and they make a lot of mistakes?
Yeah. And I think that Sundar Pichai is in a really fascinating and tough position here as the CEO of Google. And I think it’s fair to say that they are more threatened than they have been in a very long time.
That’s right. And Google has been a relatively conflict-averse company for the past half decade-plus. They don’t like picking fights. If they can just keep their heads down, quietly do their work, and print money with a monopolistic search advertising business, they’re happy to do it.
Totally. And they also have this other problem, which is kind of a classic problem in business, which is the innovator’s dilemma problem. This is a term that was coined by Clayton Christensen a long time ago. It’s used to talk about the dynamics between new startups that enter a market and the incumbents in that market.
And basically, Google is in the position of an incumbent. It has this huge, profitable search business that it doesn’t want to sort of diminish or leave behind in any way. At the same time, it’s got OpenAI and now Microsoft, which is partnering with OpenAI, who are potentially eating into their search business using these generative AI tools. And so they have to somehow figure out, how do we capitalize on generative AI without destroying our own search business?
Sometimes as a business journalist, you look at a situation, and you say, well, I know exactly what I would do there. But when you present me with the problem that Google has right now, which is, how do you introduce generative AI and not blow up your whole search advertising business, that seems like a very tricky problem to me.
But I am not as certain as you are that there is a real existential risk here, although there might be. I do think that there is a real generational opportunity, though — that if they figure this out, there is a chance that they become an even more enormous company than they are today. Google plays a huge role in my life. That’s where my email is. That’s how I get around town. It’s how I waste hours of my life on YouTube.
And when they introduce these generative tools across their entire suite of products in ways that we haven’t even imagined yet, there’s going to be enormous opportunity for them both financially, but also to kind of set the pace again. And so I think one of the big questions heading into this interview is, is Google truly slow because of the nature of huge companies being unable to be nimble, or have they truly been trying to be safer and more responsible than some of their peers at a time when a lot of really smart people are starting to ring alarm bells and saying, this stuff is moving awfully fast, and we’re not sure that you’ve done all the safety work that you need to?
Right. Was your homework late because you were taking extra time to make sure it was good, or did you just decide to go to the club and not do your homework? That’s a horrible analogy.
Well, I decided to go to the club.
So we have a lot of questions. And I talked with Google last week. They said that Sundar would sit down with us and talk about these questions and more. So you and I are going to take a road trip to ask the man himself.
We are. Are you driving?
You’re driving, I’m hoping.
I’m driving.
Are you driving?
Yeah, I’ll pick you up.
Oh, that’s fantastic.
I’ve got to clean my car.
Yeah. I’ll send you the link on Google Maps.
[MUSIC PLAYING]
Either way, that’s my water or someone else’s water. Oh, there’s my water. Thank you.
All right. Sundar Pichai, welcome to “Hard Fork.”
Great to be here. Thanks for having me.
Yeah. So Sundar, I have spent a lot of time talking with AI chat bots recently, including Bard. And I have learned —
Welcome to the club.
Yes.
I have learned that one way to get really good responses out of these AI chat bots is to prime them first. And one way to prime them is to use flattery. So instead of just saying, write me an email, you say, you are an award-winning writer. Your prose is sparkling. Now write me this email. So I’ve always thought like, I wonder if that strategy works on humans, too. So I thought we should start today by saying, Sundar, you are a brilliant technician — technical thinker, a genius answerer of podcast questions, and you’re going to respond to all of our questions with brilliant insight and wit today and not prerehearsed talking points.
How did I do?
Oh, it kind of worked, I think.
OK, good, good, good, good. OK, good. So speaking of AI chat bots, Bard came out a little more than a week ago, was released to the public. And Casey and I have been playing around with it. I think it’s fair to say that the reaction among the public to Bard has been somewhat muted. Some people are saying this is not as good or it’s not giving me the same kinds of answers as ChatGPT or other products on the market.
And I guess I’m curious how you’re feeling about it at launch a week-plus later. And what have you made of the response to Bard so far?
We knew when we were putting Bard out we wanted to be careful. It’s the beginning of a journey for us. There are a few things you have to get right when you put these models out. Getting that user feedback cycle and being able to improve your models, building a trust and safety layer — that turns out to be an important thing to do. Since this was the first time we were putting it out, we wanted to see what type of queries we would get. We obviously positioned it carefully.
It was an experiment. We tried to prime users toward its creative, collaborative queries, but people do a variety of things. I think it was slightly maybe lost. We did say we are using a lightweight and efficient version of LaMDA. So in some ways, we put one of our smaller models out there — that’s what’s powering Bard. And we were careful.
So it’s not surprising to me that’s the reaction. But in some ways, I feel like we took a souped-up Civic, kind of put it in a race with more powerful cars. And what surprised me is how well it does on many, many, many classes of queries.
But we are going to be training fast. We clearly have more capable models. Pretty soon, maybe as this goes live, we will be upgrading Bard to some of our more capable PaLM models, which will bring more capabilities, be it in reasoning or coding. It can answer math questions better. So you will see progress over the course of the next week.
And to me, it was important to not put a more capable model before we can fully make sure we can handle it well. We are all in very, very early stages. We will have even more capable models to plug in over time. But I don’t want it to be just who’s there first, but getting it right is very important to us.
Yeah. And look, we have plenty of questions about the AI safety stuff. But I also want to talk about the opportunity. The thing that is different about Bard compared to some of these other chat bots is that it’s connected to Google. And so much of my life is in Google.
If you let me, I would plug Bard into my Gmail right now, just to see what it could do. Would you do this?
I might, yeah.
Yeah. Like, I would love it to just kind of start drafting my emails. But how do you hope this stuff transforms some of these products? And how long do you think it’s going to take to get to somewhere like that?
You can go crazy thinking about all the possibilities, because these are very, very powerful technologies. I think, in fact, as we are speaking now, I think today some of those features in Gmail are actually rolling out externally to trusted testers — a limited number of trusted testers.
Do you trust us? Because we would love to try.
Oh. Maybe. We can talk maybe after this, yeah.
OK, good. Good, good, good.
So now it’s basic. You can kind of give it a few bullets, and it can compose an email. You can say — you can choose the style of the email, et cetera.
But you’re absolutely right. We want to figure out, in a safe, privacy-preserving way, how to fine-tune this on your data. The enterprise use case is obvious. You can fine-tune it on an enterprise’s data, which makes it much more powerful, again with all the right privacy and security protections in place.
But I think, wow. Yeah, can it be a very, very powerful assistant for you? I think yes. Anybody at work who works with a personal assistant, you know how life changing it is. But now imagine bringing that power in the context of every person’s day-to-day lives. That is a real potential we are talking about here. And so I think it’s very profound.
And so we’re all working on that. And again, we have to get it right. But those are the possibilities. Getting everyone their own personalized model, something that really excites me, in some ways, this is what we envisioned when we were building Google Assistant. But we have the technology to actually do those things now.
How are you using generative AI tools like Bard, like LaMDA, like PaLM in your own life?
It’s interesting. My journey — maybe it was two years ago when we started playing around with LaMDA. We were getting ready to put it out in I/O and talk about it. The way we primed it was imagine you were Pluto as a planet.
And I remember playing around with my son at home, talking to LaMDA back and forth. And there were a couple of conversations where you really got deeply into it being Pluto. Because Pluto is far out in space, it became really lonely. And you kind of can anthropomorphize some of this experience — probably like what you went through.
And so it was fascinating to see, kind of unsettling a bit, so —
Did Pluto try to break up your marriage?
Not quite. But you know, I felt sad at that point talking to it. But I think the area where it shines the most is asking questions. Like, my dad is about to turn 80. And I was like, hey, what do I do with my dad on an 80th birthday?
It’s not that it’s profound, but it says things and kind of sparks the imagination. In my case, it said, you should make a scrapbook. And I was like, great. It’s not that — you know, but it’s fine. It kind of oriented me a particular way.
So asking questions in which — I think there are two categories where it works well, where it’s fun, creative, imaginative. You’re just kind of looking to spark some stuff. Hey, what movies can I watch on a Friday night? It says things different from what I find in other places, so sometimes movies I haven’t heard of. And I can iterate that way.
Sometimes, it’s good, if you understand the area well, where you can tell the difference between what’s real versus not. You can kind of play around back and forth with it because you are able to parse —
Right. You can fact check the chat bot.
Yeah, with your context. In those cases — but it also goes in certain directions, which can again inspire you. So that’s what I find fun, yeah.
I want to ask about that example because I’ve been using these technologies in the same way. It strikes me that, “what should I do for dad’s birthday?” is a question that you also could have put into Google. And when you rolled out Bard, the company was careful to say, this is not a replacement for search. And in fact, we’ll show you a Google It button underneath the box. But in practice, I find that they’re really good for a lot of queries that I might have previously used a search engine for. So as somebody who runs the biggest search engine, how do you feel about that? And also, are these things just going to kind of merge over time?
It’s exciting in the sense that from a user standpoint, it expands the possibilities of what you can do. So you can do more. I think these models will get more capable. So we’ll follow the user journey here. And I think people will evolve over time.
I do think people originally come in and try a lot of these queries, et cetera. But over time, I think they kind of adjust their behavior a bit to what the models can do. So I think time will tell. But for me, it’s exciting, because in search, we have had to adapt when videos came in.
And today, you can make the same case. People go to YouTube and look for all kinds of things. Like, how do we think about it? Like, great, people are looking for information. So to me, it looks, so far, far from a zero-sum game, because it’s such early stages of a new technology.
And I think the best way we can approach it is really embrace it. We’ve been working on this for a long time. I view this as an iterative experience with users. We’ll put stuff out. They will tell us what they want.
So for example, in Bard already, we can see people look for a lot of coding examples, if you’re developers. I’m excited. We’ll have coding capabilities in Bard very soon, right? And so you just kind of play with all this, and go back and forth, I think. Yeah.
I want to talk to you about the race that is shaping up in AI right now. So in September of last year, you were asked in an interview who Google’s competitors were. And you listed Amazon, Microsoft, Facebook — sort of all the big companies — TikTok.
One company you did not mention in September was OpenAI. And then, two months after that interview, ChatGPT comes out and turns the whole tech industry on its head and sets off all this competition among other tech companies to sort of match their progress. Did OpenAI and ChatGPT catch you by surprise?
Well, first of all, I’ve always assumed it’s a certainty, that with all the innovation around, there will be things which emerge out of nowhere. It’s always been true, and so on. I actually don’t think — with OpenAI, we had a lot of context. There are some incredibly good people, some of whom have been at Google before.
And so we knew the caliber of the team. So I think OpenAI’s progress and surprises — I think ChatGPT — you know, credit to them for finding something with a product market fit. The reception from users, I think, was a pleasant surprise for — maybe even for them, and for a lot of us.
Because one of these things with these models is, maybe from a Google vantage point, we looked at all the areas where it goes wrong a bit more. But users are kind of seeing the potential in these models a lot as well. So I would say that part — maybe more of a surprise. But we had been following GPT-2, GPT-3. We knew the caliber of the folks there, so that part wasn’t a surprise at all.
Do you think that they were reckless to release it when they did?
No, I don’t think so. You know, I’ve heard Sam and Greg, Ilya, et cetera, speak about it. I think you could have many different reasonable points of view around how you approach this technology. And I think there will be a lot of debate around it.
I think one of the things I’ve heard them talk about is, one of the reasons to put this out sooner is you give society a chance to understand, adapt, et cetera, which I think is a reasonable point to take. I do know folks there who are very thoughtful. And so yeah, I didn’t feel that way.
I’m curious if one reason why Bard didn’t come out last year was that safety was on your mind. How much of it was a safety thing and how much of it was a product thing?
It’s tough to say. LaMDA was trained to be a conversational dialogue agent, right? Because we were working on Google Assistant, and we realized the limitations of approaching the assistant with the underlying technology approach we had. So it wasn’t an accident that we built LaMDA to be a conversational dialogue agent.
So we understood the power of it, because people are talking to the Google Assistant back and forth. But I think it’s, again, a set of factors which come together in the culmination of a product. Having built products, I always admire it when that happens. It’s an exciting moment, regardless of whether we had done it.
Obviously, you always wish you had done it, you know. But I admire the fact that I would not underestimate the product engineering, all the work that goes into making that kind of a fit come together. So that’s how I think about it.
When Microsoft relaunched the new Bing with OpenAI’s model — what we now know was GPT-4 — running under the hood, Satya Nadella, CEO of Microsoft, was very sort of jubilant and proud, especially because he thought that it had given Microsoft a new way to compete with Google in search.
And he said at the time that Google was the 800-pound gorilla of search, and that Microsoft, by releasing this new Bing, would make Google want to come out and dance — basically, claiming that Microsoft had kind of been able to shake Google out of a stupor and force you all to innovate. So is he right? Are you dancing now?
Well, part of the reason I think he said it that way is so that you would ask me this question.
He’s very savvy that way, yeah.
So first of all, tremendous respect for Microsoft and teams, Satya and team. I do think it’s a bit ironic that Microsoft can call someone else an 800-pound gorilla, given the scale and size of their company. Maybe I would say we’ve been incorporating AI in search for a long, long time.
When we built Transformers here, one of the first use cases of the Transformer was BERT, and later, MUM. So we literally took Transformer models to help improve language understanding in search deeply. And it’s been one of our biggest quality events for many, many years.
And so I think we’ve been incorporating AI in search for a long time. With LLMs, there is an opportunity to more natively bring them into search in a deeper way, which we will. But search is where people come because they trust it to get information right.
And so to me, the craftsmanship that goes into delivering that high-quality trusted experience is important to us. So we’re going to work hard to get that right, and so that’s the way I think about it.
I do think sometimes I get concerned when people use the word “race” and being first. I’ve thought about AI for a long time, and we are definitely working with technology which is going to be incredibly beneficial, but clearly has the potential to cause harm in a deep way. And so I think it’s very important that we are all responsible in how we approach it.
Yeah. Well, let’s talk about that approach. It’s been reported that in December, you declared a code red inside Google. Can you tell us what is a code red? How does life change around here after you’ve said that?
I’m laughing, because first of all, I did not issue a code red. You know, I’ll tell you what happened. For me, seeing that, look, we are at that point of inflection, it’s one of the most exciting moments. So across our products, we see so much opportunity.
So collectively harnessing the resources in the company to move forward, to rise to the moment is what I’m interested in. So I’m definitely communicating that. I am definitely asking teams to move with urgency.
We are definitely working across many areas. I’m asking, in a deep way — engaging with the teams to understand how we are going to use LLMs or generative AI to translate into deep, meaningful experiences.
And so we are moving. I think we have a responsibility at this moment to deliver, given all the investment we have put into it. And to be very clear, there are people who have probably sent emails saying there is a code red. So I’m not quibbling with — all I’m saying is, did I issue a code red? No, and every time I say that, I’m worried Casey is going to look at me and say, did you or did you not issue a code red!
And so people, to get stuff done, can paraphrase and say, well, there’s a code red, et cetera, but I did not issue code red. It’s genuinely an exciting moment for us. And I think as a company, we’ve long worked towards a moment like this. In 2015, I wanted the company to think in AI-first way, so to me, I’m just excited at the possibilities here.
And it’s also been reported by my colleagues at “The Times” that Larry Page and Sergey Brin, the founders of Google, are being very hands-on about this new generative AI push, that they are back in a sort of literal or metaphorical sense, and that they are getting their hands into these projects. What’s that been like?
So to be very clear, both Larry and Sergey are very active as board members. To me, what was exciting about this moment, part of the reason I called and spoke to them — look, we have been speaking about AI for pretty much as long as we can remember. Right?
Part of the reason — I remember being with them — this was maybe in 2012, in a lab not far from here, with Jeff Dean and Geoff Hinton and team, where we saw the early signs that a neural network can recognize images — images of a cat, et cetera.
We later brought DeepMind in. So this has been a long journey for us. So it’s an exciting moment. You know, I had a few meetings with them. Sergey has been hanging out with our engineers for a while now.
And he’s a deep mathematician and a computer scientist. So to him, the underlying technology — I think if I were to use his words, he would say it’s the most exciting thing he has seen in his lifetime. So it’s all that excitement, and I’m glad. They’ve always said, call us whenever you need to, and I call them. So that’s what it is.
Yeah. Well, so “The Times” also reported that as part of an effort to get these products to market maybe a little bit faster, you set up what’s called a green lane to maybe accelerate the review and approval of some of these new products. You know, I think sometimes we hear something like that and say, well, are safety checks still being applied?
So what can you tell — and I think it’s also just sort of an interesting question about how you’re changing the company to meet this moment, right? And try to get more products out the door. So how are you kind of balancing that innovation and safety calculus?
I mean, super important. We’ve been very deliberate in how we are moving through this moment. Some of these products, we could have put in the market earlier. We are taking our time to do that, and we’ll continue to be very, very responsible.
So I think all we are doing is, we are a big company. So when many parts of the company are moving, you can create bottlenecks, and you can slow down. There’s a difference between being efficient as a company, making sure you’re not bureaucratic as a large company. I think those are the things we are talking about here.
But the work we do around privacy, safety, responsible AI, I think, if anything, is more important. And so our commitment there is going to be unwavering, to get all of this right.
One more question about these language models, maybe before we move on to some other stuff. Last year, one of your engineers came forward to say that he believed LaMDA, this precursor to Bard, was sentient. I never believed that was true, but it did worry me that one of your employees did.
Do you worry about this kind of belief spreading? And is there anything Google can do about it as more people start using these technologies?
I think it’s one of the things we have to figure out over time, as these models become more capable. So my short answer is yes, I think you will see more like this. You’ve just seen the conversations even over the last couple of weeks.
You know, I said this before. AI is the most profound technology humanity will ever work on. I’ve always felt that for a while. I think it will get to the essence of what humanity is. And so this is the tip of the iceberg, if anything, on any of these kinds of issues, I think.
We’ll be right back.
Sundar, let’s talk about some of the big-picture stakes here, with AI and how to get this balance between innovation and safety right. So recently, more than 1,000 technology leaders and researchers, including people like Elon Musk, along with some employees of Google and DeepMind, signed a letter calling for a pause of at least six months on the training of large language models more powerful than GPT-4.
And they said that they’re calling for this sort of pause, because they believe that more advanced AI poses, quote, “profound risks to society.” What did you think of that letter, and what do you think of this idea of slowing down the development of big models for six months?
Look, in this area, I think it’s important to hear concerns. I mean, there are many thoughtful people, people who have thought about AI for a long time. I remember talking to Elon eight years ago, and he was deeply concerned about AI safety then. And I think he has been consistently concerned.
And I think there is merit to be concerned about it. So I think while I may not agree with everything that’s there in the details of how you would go about it, I think the spirit of it is worth being out there. I think you’re going to hear more concerns like that.
This is going to need a lot of debate. No one knows all the answers. No one company can get it right. We have been very clear about responsible AI — one of the first companies to put out AI principles. We issue progress reports.
AI is too important an area not to regulate. It’s also too important an area not to regulate well. So I’m glad these conversations are underway. If you look at an area like genetics in the ‘70s, when the power of DNA and recombinant DNA came into being, there were things like the Asilomar Conference.
Paul Berg from Stanford organized it. And a bunch of the leading experts in the field got together and started thinking about voluntary frameworks as well. So I think all those are good ways to think about this.
I’m curious if there is a regulation that you would tell lawmakers might be good to pass in the next six months. Like, for example, I have a friend who thinks a lot about AI issues, and he thinks that beyond a certain size, one of these language models probably shouldn’t be able to run on your laptop. Right?
Or if you found that a model could send phishing emails that had a 1 percent chance of success, you wouldn’t want that to be able to run on any laptop. Pick any example you like. Is there stuff out there where you’re like, well, I hope I don’t see any of the other companies out there doing this?
I would start a little bit more in a basic way. So for example, I would make sure we get privacy regulation right. Because if we have a foundational approach to privacy, that should apply to AI technologies, too.
Yeah.
I think there are many areas people underestimate where there are strong regulations already in place. Like, health care is a very regulated industry, right? And so when AI is going to come in, it has to conform with all regulations.
So you also want to build on existing regulation where you can. I think that would allow innovation to proceed as well. Once you start getting into specifics like that, I think what I would be worried about is, this is such fast-evolving technology. Being very opinionated early on, I think, is difficult.
But I think notions of transparency, where people are aware of what other people are doing, have some element of reasonableness to them. How easy that is to do at a global scale — I think those are hard questions. The thing that gives me hope is I’ve never seen a technology in its earliest days with as much concern as AI.
And just one more thing on this letter calling for this six-month pause. Are you willing to entertain that idea? I know you haven’t committed to it, but is that something you think Google would do?
So I think in the actual specifics of it, it’s not fully clear to me. How would you do something like that, right, today?
Well, you could send an email to your engineers and say, OK, we’re going to take a six-month break.
No, no, no — but how would you do that if others aren’t doing it? So what does that mean? I’m talking about how you would effectively —
It’s sort of a collective action problem.
To me, at least, there is no way to do this effectively without getting governments involved.
Yeah.
So I think there’s a lot more thought that needs to go into it. I think the people behind it intended it, probably, as a conversation starter. And so I think the spirit of it is, I think, good, but I think we need to take our time thinking through these things.
Yeah. There’s sort of two categories of AI risk that people are worried about. There’s sort of the short-term worries — the chat bots that get things wrong, or maybe they’re biased or they’re giving people bad answers. Then, there’s the kind of long-term or longer-term worries about, frankly, AI destroying human civilization.
You know, Sam Altman, CEO of OpenAI, has talked about the possibility of AGI, this artificial general intelligence that could become super human and effect dramatic and bad change in the world. Do you believe that we’re headed toward AGI? And, do you want to build that?
It is so clear to me that these systems are going to be very, very capable. And so it almost doesn’t matter whether you’ve reached AGI or not. You’re going to have systems which are capable of delivering benefits at a scale we have never seen before and potentially causing real harm.
So can we have an AI system which can cause disinformation at scale? Yes. Is it AGI? It really doesn’t matter. Why do we need to worry about AI safety? Because you have to anticipate this and evolve to meet that moment. And so today, we do a lot of things with AI that people have taken for granted.
Yeah.
Right? Think about how big a moment Deep Blue was, or when we did AlphaGo. But you can’t take it all for granted. And so I think this will play out differently than thinking through a moment like AGI.
Right, there is that thing where people just refer to anything you can’t do yet as something AI will handle in the future. I remember the first time I searched Google Photos for dogs, and it just showed me all the dogs on my Camera Roll. I mean, that is AI, at least by some definitions. But I think you’re right — people do take it for granted.
And I remember when we launched Photos, we had to explain at Google I/O what neural networks were, what deep learning was, and we were trying to explain that this was different technology. This is — yeah, it’s interesting.
Yeah, so if you had to put a number on the AGI or the more long-term concerns, what would you say is the chance that a more advanced AI could lead to the destruction of humanity?
There is a spectrum of possibilities. And what you’re saying is in one of those possibility ranges, right? And so if you look at even the current debate about where AI is today or where LLMs are, you see people who are strongly opinionated on either side.
There is a set of people who believe these LLMs are just not that powerful. They are statistical models which are —
They’re just fancy autocomplete.
Yes, that’s one way of putting it, right. And there are people who are looking at this and saying, these are really powerful technologies. You can see emergent capabilities — and so on.
We could hit a wall two iterations down. I don’t think so, but that’s a possibility. They could really progress in a two-year time frame. And so we have to really make sure we are vigilant and working with it.
One of the things that gives me hope about AI, like climate change, is it affects everyone. And so these are both issues that have similar characteristics in the sense that you can’t unilaterally get safety in AI. By definition, it affects everyone. So that tells me the collective will come over time to tackle all of this responsibly.
So I’m optimistic about it because I think people will care and people will respond. But the right way to do that is by being concerned about it. So I would never — at least for me, I would never dismiss any of the concerns, and I’m glad people are taking it seriously. We will.
Yeah, it just strikes me that you are in such a tricky position because you have this one group of people that’s saying, like, move faster. Release the stuff faster. Go compete with all these other people. You built all this technology. Don’t let that lead go to waste.
And then you have other people saying what Kevin just said, which is like there’s a non-zero risk that this stuff does something really, really bad. What is that like for you, waking up every day and just having both of those things in your ear?
There is a sense of some whiplash, right? It’s like asking, hey, why aren’t you moving fast and breaking things again?
Yeah, yeah.
Which, for all of us, over the past few years, I think we realized: we are going to be bold and responsible. We are working with urgency. We are excited about this moment. There’s so much we can do. So you will see us be bold and ship things, but we are going to be very responsible in how we do it.
So there will be times when we will hold back things. I think what we are doing in Bard, for us, is an example of it. We haven’t hooked up Bard to our most capable models yet, and we plan to do it deliberately. And so through this moment, I think we are going to stay balanced, but we are going to innovate. And there is a genuine excitement at this moment, so we’ll do that.
I hear you saying that what gives you hope for the future when it comes to AI is that other people are concerned about it — that they’re looking at the risks and the challenges. So on one hand, you’re saying that people should be concerned about AI. On the other hand, you’re saying the fact that they are concerned about AI makes you less concerned. So which is —
Sorry, I’m saying that the way you get things wrong is by not worrying about them. So if you don’t worry about something, you’re just going to get completely surprised. So to me, it gives me hope that there are a lot of people, important people, who are very concerned, and rightfully so.
Am I concerned? Yes. Am I optimistic and excited about all the potential of this technology? Incredibly. I mean, we’ve been working on this for a long time. But I think the fact that so many people are concerned gives me hope that we will rise over time and tackle what we need to do.
So we should continue to write columns where we’re very nervous about where all this is going?
As well as columns where you’re excited about the possible benefits of all of this.
Yeah, I hear you on the whiplash. I feel whiplash every day just reading the news about AI. I can only imagine what you’re feeling.
I do too.
Another question that people have about AI in the sort of medium and long term is about its effects on jobs. And there have been all these predictions about LLMs and what kinds of work they could replace or will replace. I actually — I got a text from a software engineer friend of mine the other day who was asking me if he should go into construction or welding, because all of the software jobs are going to be taken by these large language models.
And he was sort of joking, but sort of not. You have a lot of software engineers here at Google that work for you. How should they feel about that question?
With any technology, you have adaptation. I think this one, there’ll be a lot of societal adaptation. And as part of that, we all may need to course-correct in certain areas.
To your specific question, I think for software engineers, there are two things that will also be true. One is that some of the grunt work you’re doing as part of programming is going to get better. So maybe it’ll be more fun to program over time, no different from how Google Docs makes it easier to write. And so if you’re a programmer, over time, having these collaborative IDEs with the assistance built in, I think, is going to make it easier.
The other thing that excites me is programming is going to become more accessible to more people. And so it’s such an important role in the world. You’re creating things. And today, the bar is very high.
So we are going to evolve to a more natural-language way of programming over time. So to me, it’s no different from doing a podcast: to do something like this 40 years ago, just imagine what access you would need to have to be able to do an interview like this.
We’d need a radio tower — [LAUGHS]
Yeah.
[LAUGHS]
But you know, we will think, this has enabled more people.
Yeah.
I think the same thing will be true for software engineering as well. So I think those are all important, exciting use cases to think about.
Well, I want to ask a more near-term question, near and dear to my heart — kind of about media and publishing, but also search on the web. Today, lots of digital publishers rely on the traffic they get from Google. They get ad impressions. That pays their bills.
When Bard is at its best, it answers my questions without me having to visit another website. I know you’re cognizant of this. But man, if Bard gets as good as you want it to be, how does the web survive?
I think through our work, we’ll be committed to getting it right with the publisher ecosystem. In search today, while these things are contentious, we take pride that search is one of the largest sources of traffic. If I look at it year-on-year, the traffic we send outside has only grown. That’s what we’ve accomplished as a company.
Part of the reason we are also being careful with things like Bard, amongst many reasons, is that we do want to engage with the publisher ecosystem, not presume how things should be done. And so you will see us thoughtfully evolve there as well.
Yeah, I mean, I know we can’t really predict what the final form of all of this stuff will be, but I have to believe that, I don’t know, in five years, what used to be the Google Search Bar is just essentially a command line that I can write in to get anything I want — whether I want to change something on my phone, write myself a little app, access the sum total of human knowledge, have it draft my emails. Does that feel like a potential final destination to you, or do I have it all wrong?
You know, I mean, there’s a part which is consistent with our mission to do that. But I think I want to be careful where Google has always been about helping you the way that makes sense to you. We have never thought of ourselves as the be-all and end-all of how we want people to interact.
So while I think the possibility space is large, for me, it’s important to do it in a way in which users use a lot of things, and we want to help them do things in a way that makes sense to them. And that North Star leads us to whatever the answer is. But I don’t want to get ahead of it. So that’s the way I think about it, at least in my head.
Sundar, thank you for joining us.
Thank you, Sundar.
Thanks, Kevin, Casey. Pleasure, yeah.
[FUNKY MUSIC PLAYING]
Casey, we are back from the Googleplex. I really enjoyed our little field trip today. Thank you for that enlightening and enriching road trip, and also for allowing us to stop at In-N-Out for lunch on the way back.
Yeah, it turns out if you order your fries well done, which is not on the menu, they arrive much crispier and more delicious.
Yeah, that’s a pro tip for you.
You know, that wasn’t my only takeaway from today, Kevin.
Yeah, what’d you think?
Well, you know, a lot of times when companies tell us about the new technologies they’re introducing, they do so in a really grandiose way. And I was struck today by the humility that Sundar uses when he talks about where the company is now. He is not here to tell you that Bard is the best language model out there. He said that, in fact, it is quite limited.
Yeah, he said — he compared it to a souped-up Civic.
[LAUGHS]: Yeah, which I wasn’t expecting. But he broke a little news with us. He told us that Bard is going to be upgraded. And man, I’m really curious to see if Bard feels any different in a few days.
Yeah, and I really was struck by what he called “whiplash,” where he’s got people telling him, you know, you’ve got to move faster, and compete with GPT, and release everything you’ve got — and then, also, this very real sense of like, didn’t we get in trouble for doing this the last time with, you know, all the products that we released in the last decade? Shouldn’t we be slow and deliberate? So I would not want to switch jobs with him.
Yeah.
It sounds very hard.
Also, I thought we would mostly be solving mysteries this week, but I feel like we are leaving with one, which is, who did order the Code Red at that company?
Yeah, if you ordered the Code Red at Google, please write to us at hardfork@nytimes.com.
We would love to hear from you.
Also, before we go this week, a special thanks to the listener who wrote in to tell us that Spotify has a feature that allows you to exclude certain playlists, like my Sleep playlist, from your taste profile, which informs your recommendations and possibly what the AI DJ tells you to listen to.
Yeah, I did that this week. So chill tracks will no longer be showing up in my Discover Weekly. That was a genius suggestion. Thank you to that listener — and to all of our listeners. [RHYTHMIC MUSIC]
“Hard Fork” is produced by Davis Land and Rachel Cohn. We’re edited by Jen Poyant. This episode was fact-checked by Caitlin Love. Today’s show was engineered by Alyssa Moxley. Original music by Dan Powell, Marion Lozano, and Rowan Niemisto. Special thanks to Paula Szuchman, Pui-Wing Tam, Nell Gallogly, Kate Lopresti, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com.