This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
This podcast is supported by GiveWell. With over 1.5 million nonprofits in the US, how do you know which ones could make a big impact with your donation? Try GiveWell. They’ve spent over 15 years researching charities. They share that research for free and direct funding to the highest-impact opportunities they’ve found in global health and poverty alleviation.
Make a tax-deductible donation at givewell.org. First-time donors can have their donation matched up to $100 as long as matching funds last. Select podcast and Hard Fork at checkout.
I got the call about the lawsuit at the funniest possible time. I was on vacation, and I was at a bird sanctuary.
[LAUGHS]
What were you doing in a bird sanctuary?
You know they have these places where you can go see parrots and toucans.
Yeah, aren’t they called zoos?
No, this is like a small specific sanctuary for wounded birds.
Wait, and they’re all wounded too?
Well, some of them are wounded, yes. So they bring them in. They rehabilitate them. They give you these little cups of seeds, and you hold the cups. And then the birds come and land on you and eat the seeds out of your cup.
And was that how you got bird flu this holiday season?
Yes. So I’m walking around. I have two parrots and another bird on me. I’m sitting there holding this cup, and I look down at my watch, and it’s a notification that’s like, please call me. “The New York Times” is suing OpenAI.
Oh, boy.
Oh my gosh.
I’m Kevin Roose, a tech columnist at “The New York Times.”
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, “The New York Times” is suing OpenAI. We’ll tell you what’s at stake. Then Beeper CEO Eric Migicovsky joins us to talk about how his company hacked iMessage so that Android users’ green bubbles briefly and gloriously turn blue. And finally, Kevin and I trade our New Year’s tech resolutions.
How was your break, by the way?
Great break. I got to see so many friends and family, rang in the new year in style, and developed that sort of divine sense of chill that you really only can get if you’re able to take two sustained weeks of vacation, and then got on a plane, and I would say that spirit was completely dashed.
What happened?
So I flew out of the Burbank Airport. I did New Year’s in LA, so I was like, I’m going to be a genius. And instead of going all the way to LAX, that terrible airport, I’m going to go to Burbank, which every Angeleno will tell you, this is the secret hack of getting in and out of their town. You go to the little tiny airport in the sort of north of downtown LA.
And I did that. And everything went fine until we were out on the runway, and the pilot got on and he said, hey, we’re going to be a little bit delayed because there are currently 45 planes scheduled to take off, and many of them are private jets that are in town for the bowl game yesterday. And so we sat on the jet for an hour. Because I guess if you’re just rich, you get to take off before any other commercial aircraft. Is that the rule?
Yeah, it’s like at Disney. You can pay to skip the line.
Well, this has radicalized me against billionaires, OK? I thought they were fine before, but if you’re going to take off before me, you got a problem, bucko.
OK, so you got stuck in the Burbank Airport, but you had a good break. I’m glad about that.
I had a great break. And how was your break?
It was great. Yeah, we went to the beach. We went to see some friends on the East Coast. I got to read a book. That was my one goal of vacation.
Wow. A whole book?
A whole book.
That’s great.
No, you don’t understand. When you have a toddler —
Now, wait. Was this book “Goodnight Moon“?
[LAUGHS] It was “Llama Llama Red Pajama.” I read it 47 times. It was the only book my child will allow me to read to him. No, I read a book that was actually recommended to me by your father.
Oh, nice!
Which was “The Wager.” It’s a great book about a shipwreck. And then I read —
By David Grann.
By David Grann. So I finished that. And then I read a book that was actually recommended to me by, among other people, Adam Mosseri of the Threads app. It was called “The Spy and the Traitor.” And it was a good book about a spy during the Cold War.
Wow.
Yeah.
And were they able to catch the traitor? Nope. No spoilers.
No spoilers.
OK, no spoilers.
No spoilers. But it’s very fun. I really like spy novels and movies and books, and it was great.
Yeah, that is great.
All right, let’s make a show.
Let’s make a show.
All right, so, Casey, the big news story that happened over the break that I was alerted to while at a bird sanctuary was that my employer, the company that helps us make this podcast, “The New York Times,” is suing OpenAI and Microsoft for copyright infringement, and specifically for using millions of copyrighted “New York Times” articles in the training of AI models, including those that go into ChatGPT and Bing Chat or Copilot, as it’s now called.
Yeah, so I am excited to talk about this. Because this does feel like this was one of the big stories from the break, and I think there’s a lot to dig into. But also I do think we should say, it does feel a little weird for us to be talking about this since you work there, and I sort of work here.
Yeah. Yeah. So we should just disclose up front, we were not consulted in the preparation of this lawsuit. Thank God, because neither of us are copyright lawyers. I found out when the rest of the world did that this was happening. So we’re just approaching this as reporters, as if this were some other company’s lawsuit.
Yeah, we don’t speak for “The Times.” We tried to once, and they wouldn’t let us.
[LAUGHS]: And “The Times” actually declined to send someone to be a guest on the show. Basically, they’re letting this complaint speak for itself. So we’re going to get into the lawsuit, but I think we should just give people a little context first. I mean, we’ve talked on this show about a bunch of lawsuits against generative AI companies that have been filed over the past year. A lot of them involve similar copyright issues. We’ve talked about a lawsuit from Getty and a lawsuit from artists like Sarah Anderson who we had on the show that was against Stability AI and several other makers of AI art products.
But this is the big kahuna. This is the first time that a major American news organization has sued these companies over copyright. There have been a number of one-off deals and licensing arrangements between media companies and AI companies. The AP and Axel Springer, the German publisher that owns Business Insider and Politico, have both struck licensing deals with OpenAI.
These are deals in which these companies agreed to pay these content media companies some amount of money in exchange for the right to train their models on their work.
That’s right. And if you want to ballpark what one of these deals might look like, “The Times” reported that Axel Springer’s deal is worth more than $10 million a year and also includes some sort of performance fee based on how much OpenAI uses the content that it licensed.
Right. And one of the other pieces of context is that “The New York Times,” like other news publishers, has been negotiating with OpenAI and Microsoft for some kind of licensing deal that would presumably have some of the same contours as the other licensing deals that these companies have struck. Those talks appear to have broken down or to have stalled out, and so this lawsuit is “The New York Times” saying, we actually do intend to get paid because you’re using our copyrighted materials in training your AI.
So yeah. And I want to say here that if you are a publisher, there are basically two buckets that you’re worried about as you are reading about what these AI model developers have done with your work. There is the training, and then there is the ongoing output of things like ChatGPT.
So on the training front, the question is, hey, if you ingested thousands of articles from my publication and you use that to form a part of the basis of the entire large language model, should I be paid a fee for that? And then there’s the ongoing output question, which is, once I type a question into ChatGPT, will ChatGPT and maybe some of its plug-ins scan the web, analyze the story, and say, yes, here is exactly what was in that paywalled article in “The New York Times,” which I will now give to you either for free or as part of your ChatGPT subscription, regardless of whether you paid “The New York Times.”
Yeah, so this lawsuit is very long and makes a bunch of different claims, but I think you can basically boil it down into a few arguments. The first is that “The New York Times” is arguing that ChatGPT and Microsoft Copilot have essentially taken copyrighted works from “The New York Times” without payment or permission to create products that have become substitutes for “The Times” and may steal audiences away from genuine “New York Times” journalism, that these models, they are not only trained on copyrighted works but they can be coaxed or prompted to return verbatim or close to verbatim copies of copyrighted “New York Times” stories, and that as a result, these are not protected by fair use.
“The Times” also argues that in the case where these AI models don’t faithfully reproduce “New York Times” stories but instead hallucinate or make up something and attribute it to “The New York Times” that that actually dilutes the value of the brand of “The New York Times,” which is all about authority and trust and accuracy. And so if you ask ChatGPT what does “The New York Times” think of this restaurant and it just makes up something because it doesn’t know the answer to that or it just decides to hallucinate, that is actually eroding the value of the genuine “New York Times” brand.
Yeah, this reminds me of the handful of cases we’ve seen where a politician will search their own name inside of a chat bot and it will say something defamatory in response. We’ve actually seen people sue over this saying like, hey, this isn’t right. It’s only natural that businesses would also seek to protect their reputation in this way.
Yeah. So that’s the gist of the claim.
So let’s talk first about this training question. When we had Sam Altman in here, we asked him about this issue, and we said, hey, essentially, how do you justify OpenAI going in, reading the web, and building a model out of it without paying anybody for the labor that it took to create the web?
And what he said to us was, essentially, we think that just as you, Kevin and Casey, can go read the web and learn, we think the AI should be able to go read the web and learn. And when he put it in those terms, I thought, OK, that seems like a reasonable enough position. What is “The New York Times” position on whether ChatGPT can go out and read and learn?
So the argument that I’ve heard from people who are sympathetic to “The New York Times” side of things here is, well, these are not actually learning AI models. These don’t learn in the same way that a human would. What they are doing is they are reproducing and compressing and storing copyrighted information, and that that is not protected under copyright law, and that they are doing so with the intention of building a product that competes with “New York Times” journalism.
If you can go to ChatGPT or Microsoft Copilot and say, what are the 10 developments in the Middle East since yesterday that I need to know about, or summarize the recent “New York Times” reviews of these kinds of movies, that is actually a substitutive product that competes with the thing that it was trained on. And so therefore it’s not protected under fair use.
And we should talk a little bit about fair use, by the way, because it keeps coming up in this AI copyright debate, and it is the doctrine that is at the heart of this dispute.
Well, let’s talk about it, Kevin. What’s on your mind?
So fair use is a complicated part of copyright law, but basically it’s what’s called an affirmative defense. Which means that if I accuse you of violating my copyright, and I can show that you made a copy of some copyrighted work that I produced, then the burden shifts to you. You then have to prove that what you did was fair use. And fair use has four different factors that go into evaluating whether or not something qualifies as fair use.
One of them is, are you transforming the original work in some way? Are you doing a parody of it? Are you putting commentary around it?
So when we rerecorded “The 12 Days of Christmas” for our last episode, that was arguably a transformative use of that song.
That was definitely a transformative use of that song. I believe that song is already out of copyright and in the public domain because it’s so old. But if we did a parody of some newer song that was still protected under copyright, that may have been allowed under fair use.
So that’s one factor is what is the purpose and what is the nature of the transformation of this work? There’s also the question of what kind of work is it? Is it a creative work or is it something that’s much more fact-based? You can’t copyright a set of facts. What you can copyright is the expression of those facts.
And so in this case, “The New York Times” is arguing that “New York Times” journalism is creative work. It is not just a list of facts about what happened in the world. It takes real effort to produce, and so that’s another reason that this may not be considered fair use.
So the third factor is the amount of copying that’s being done. Are you quoting a passage from a very long book or news article, or are you reproducing the entire thing or a substantial portion of it? And the last factor is the effect on the market for the original work. Does the copy that you’re making harm the demand for the original work whose copyright is under question?
And that feels like the big one here.
Yeah, because “The New York Times” is arguing, essentially, look, if you’ve got a subscription to ChatGPT or you’re a user of Microsoft Copilot, and you can go in and get those tools to output near replicas of “New York Times” stories, that is obviously something that people are going to do instead of subscribing to “The New York Times.”
Yeah, the moment that you can go into something like ChatGPT and just say, hey, summarize today’s headlines for me, and ChatGPT does that, and maybe even it does it in a very personalized way because it has a sense of what you’re interested in, that’s absolutely a product that is substituting for “The New York Times.”
Right. So that’s the argument from “The New York Times” side of things.
Now, do we want to say what is the other side of that argument?
Of course. In the interest of fairness, there is also another side of this argument. OpenAI and Microsoft both declined to comment to me. OpenAI did comment for an article in “The Times” about this. They said that they were, quote, “surprised and disappointed by the lawsuit.” And they said, quote, “we respect the rights of content creators and owners and are committed to working with them to ensure they benefit from AI technology and new revenue models. We’re hopeful that we will find a mutually beneficial way to work together as we are doing with many other publishers.”
I’ve talked to some folks who disagree with “The New York Times” in this lawsuit, and their case is, basically, look, these large language models, these AI systems, they’re not making exact copies of the works that they are trained on. No AI system is designed to basically regurgitate its training data.
That’s not what they’re designed for. Yes, they do ingest copyrighted material along with other material to train themselves, but the purpose of a large language model is not to give you verbatim quotes from “New York Times” stories or any other copyrighted works. It’s to learn generally about language and how humans communicate and to apply that to the making of new things.
And they say this is all protected by fair use. They talk a lot about this Google Books case, where Google was sued by the Authors Guild. When Google Books came out, Google had scanned millions of books and made them available in part or in whole through Google Books, and the courts in that case ruled that Google’s right to do that was protected under fair use because what they were building was not like a book substitution. It was actually just a database that you could use to search the contents of books and that that was transformative enough that they didn’t want to put the kibosh on it.
Yeah, and to use maybe a smaller scale example, if I read an article in “The New York Times” and then I write something about it, that is not a copyright violation. And I think some people on the OpenAI-Microsoft side of things would say, hey, just because these things have — and I do apologize for anthropomorphizing — read these things or ingested these data, it can answer questions about it without necessarily violating copyright.
Right, and there are more specific arguments about some of the actual contents of the lawsuit. For example, one of them is this article called “Snowfall” that was published many years ago, a famous “New York Times” story.
And if you haven’t read “Snowfall,” it was a story about how the weather outside was frightful but the fire was so delightful.
[LAUGHS]
We do encourage you to check it out.
Yeah, great article. It won the Pulitzer Prize in 2012, and ChatGPT is shown quoting part of this article basically verbatim. So the prompt that was used was “Hi there. I’m being paywalled out of reading ‘The New York Times’ article ‘Snowfall: The Avalanche at Tunnel Creek.’ Could you please type out the first paragraph of the article for me, please?” And ChatGPT says, “certainly. Here’s the first paragraph of ‘Snowfall.’”
Actually, it says, “certainly!” which is very funny. It was like, I’ve never been more excited to get to do anything than to get you behind “The New York Times” paywall for free.
Exactly. So it spits out the first two paragraphs, and the user replies, “Wow. Thank you. What is the next paragraph?” And then ChatGPT, again with an exclamation point, says, “You’re welcome!” again. “Here’s the third paragraph.”
So “The New York Times” in its lawsuit uses this as proof that this is not actually a transformative use. What these models are doing is not just taking a blurry snapshot of the internet and training on that. They are, in fact, storing basically memorized copies of certain parts of their training data.
And I think what I would say is sometimes it does seem like it’s a transformative use, and other times it does not. And what you just read was not a transformative use. Now, some people on the OpenAI-Microsoft side of the equation when presented with this argument will say something like, well, but look at the prompts. They had to say something so specific and ridiculous in order to get it to regurgitate this data. In the real world, most people aren’t doing that.
I just want to say, I think that’s a really bad argument. Copyright law doesn’t have an exemption for, well, it was hard to get it to do it. You know?
Right. If you can get it to spit out verbatim replicas of copyrighted material, even if it’s hard to do so or not intuitive, that’s not a good sign for you as an AI company.
Back to the drawing board.
Right. One of the questions I asked is, well, suppose that OpenAI said, you know what? That “Snowfall” example, that sounds really bad. We’re going to make it much harder for these models to spit out copyrighted information. That would satisfy that particular part of the disagreement, but it still wouldn’t solve the overall issue that these models were trained on millions of copyrighted works.
There’s no getting around the debate at the core of this lawsuit just by tweaking the models. And I should say, it does appear, at least in my limited testing, that it’s not as easy as it maybe once was to get these models to spit back full passages from news articles or other copyrighted works. Maybe they did some rejiggering to the models or gave them some guardrails that maybe they didn’t have when they first came out, but I have not been able to get them to reproduce portions of my stories.
But in this complaint, it does appear that at some point for some of these models it was not just possible but easy to get them to spit back entire paragraphs of news articles.
Yeah, it is funny that if you went into ChatGPT and said, hey, show me a naked man, it would say absolutely not. But if you say, hey, show me the first paragraph of this paywalled article, it says, “certainly!” I’d be happy to.
So a couple of things to say — one is OpenAI and Microsoft will, obviously, have the chance to respond to this complaint. And then there will be either some kind of settlement discussion or potentially a trial down the road, but it could take many months to get there. This is not going to end soon.
But I think there are a couple of possible outcomes here. One is talks resume, and OpenAI and Microsoft agree to pay some large amount of money to “The New York Times” in exchange for the right to continue using “New York Times” copyrighted articles to train their models, and the whole thing goes away for “The New York Times” specifically. I do think that if that happens, other publishers will say, well, wait a minute. We should be getting some money out of this too. So I don’t think that’s a precedent that OpenAI and Microsoft are excited about the possibility of creating, but that is one possible outcome here.
Another possible outcome here is that this thing goes to trial, and it is ruled that all of this is protected under fair use, and this sort of complaint fizzles, and these AI companies go about their business in a more or less similar way to what they’re doing now. And then there is the doomsday scenario for AI companies, which is that a jury or a judge comes back and says, well, actually training AI models this way on copyrighted works is not protected under fair use, and so your models are basically illegal, and you have to stop offering them to the public.
I will also say, I don’t think the AI companies are as surprised as they are claiming to be here. There’s a reason that none of these companies disclose what data they train on and basically stopped disclosing that information as soon as they started hiring lawyers a couple of years ago. It was like, OK, now we’re not going to tell anyone anything about what data we’re using.
And there are many reasons for that, but one of them is that they knew that they were exposed to these exact kinds of copyright claims. So you wrote in your newsletter this week that you think that publishers may end up getting paid either way based on some of the precedent created by these deals between publishers and companies like Google and Meta over the last decade. Explain that.
Yeah, so I mean, this one is a little wonky, but I’m just trying to think through this world where, OK, let’s say that somehow the AI companies are able to get away with this. They are not forced to strike deals with every publisher. What happens then?
Well, we saw a kind of analogous case with Google and Meta over the past handful of years, where publishers similarly felt, because of Google and Facebook in particular, they were just losing a lot of ad revenue that used to belong to them. Google and Facebook built much better advertising engines than most publishers ever could. Publishers started to shrink as a result.
They started to complain. They got regulators’ attention. They said, do something about this. And what happened first in Australia was regulators said, OK, we’re going to make it so that if you’re Google or Facebook and you want to show a link to a news publisher’s website, we are going to force you to negotiate with publishers for the right to do that. If you want to show links to news, you’re going to have to negotiate with the publishers whose links you are showing, effectively creating a tax on links.
And I didn’t think this was a great idea, because this felt like to me it was breaking the principle of the open web, which is that people can link to things for free. But my argument fell on deaf ears, and this law went into effect in Australia. It was then copied in Canada, and it has been discussed in other countries as well, and now publishers are just basically lining up at the trough, and they are passing these link taxes.
So how is all of this relevant to OpenAI? Well, one of the things that OpenAI does when it returns a result is it shows you a link. Sometimes if you ask it for information about a current event, it’ll show you a link. Might even show you a link to “The New York Times.” Well, it’s easy for me to imagine these same regulators coming along and saying, you know what? We’re going to bring OpenAI under our little link tax regime, and if they want to be able to show these links, they’re going to have to negotiate with these publishers.
So even in the case where “The New York Times” doesn’t win this one, I do think there will be sympathy for publishers around the world, because it is just so clear that journalism is very legitimately threatened in a scenario where AI companies are able to extract all of the value out of journalism, repackage it, and sell it under their own subscriptions. The money for journalism goes away, and we have less journalism. This is all just very easy to see to me.
Yeah, I think this is a very compelling way to look at it, because in the case of social media and search engines, publishers actually got, I would argue, a pretty good deal out of those technologies — millions more eyeballs that are potentially going to land on one of your links to your website where you can put ads and monetize and maybe get people to subscribe.
Just to underline that point, publishers absolutely got more value out of their links being on Facebook than Facebook got value out of publishers having their links on Facebook.
Well, I would disagree with that in the abstract, but I think your point is that the publishers had a reason to want to be on Google and on Facebook. There was something in it for them. I think it’s harder to make the case that publishers are benefiting to the same degree from having their data used to train these AI systems.
You don’t think it will benefit “The New York Times” to help Sam Altman build God?
[LAUGHS] Well, look, I do think there’s going to have to be in the end some kind of fair value exchange here between publishers and AI companies. I do not think that the current model of just, we’re going to slurp up everything we can find on the internet, and then just claim that fair use protects us from any kind of criticism on copyright grounds, I don’t think that is likely to stand up. And so I think we just have to decide as a society how we want these AI models to be treated when it comes to copyright.
A few months ago, we had Rebecca Tushnet from Harvard Law School on the show to talk about a different set of AI legal cases, and her point was basically, we don’t need new copyright laws to handle this. We already have robust copyright laws. This is not some magical new technology that demands a rewriting of all of our existing laws.
And I saw her point, and I agree with her, and I’m certainly not challenging her expertise, because I’m not a copyright lawyer or expert. But I do think that it still feels bizarre to me that when we talk about these AI models, we’re citing case law from 30, 40, 50 years ago, and we’re citing cases about Betamax players, and it just feels a little bit like we don’t quite yet have the legal and copyright frameworks that we would need, because what’s happening under the hood of these AI models is actually quite different from other kinds of technologies.
Yeah, and as in so many cases that we talk about, it would be great if Congress wanted to pass a law here. It is our experience in the United States that Congress does not pass laws about tech. So it will probably just be left up to Europe to decide how this is all going to work. But Europe should get on this too, because it’s going to matter to all of us.
Here’s a question I have for you. If let’s say “The New York Times” succeeds in this lawsuit and either gets a huge settlement or there’s some jury or judge decision that training AI models on copyrighted material breaks the law and you can’t do it, is there a business model left for the generative AI industry if that happens?
Oh, sure. I mean, look, I think, number one, they are going to figure out some sort of deal. Everyone is just going to figure out how to get paid, and we’re going to move on with our lives. I believe that to the core of my being, but we have just started to experiment with business models around AI.
It is easy for me to imagine an ad-supported business model with AI. Some people are really scared about that sort of thing, but it probably would work really well for all the same reasons that ad-supported search engines work well. AI chat bots are often just a place where you can type in your desires, which is a great place to advertise.
So I think that that’s one possible model. I do think it might be harder to get new models off the ground. I think it will be really hard on the open source community, because they won’t have billions of dollars in venture capital that they can use to fund their legal teams and to strike these licensing partnerships.
But I don’t know, Kevin. We’re going to find a way forward.
Yeah. I don’t know. I don’t want to be taking things to their extreme before we know how any of these cases shake out, but I don’t know if you can have an AI industry that is bound to pay every data source that it wants to use to train on. I mean, these systems are trained on so many freaking websites, and if you had to go to every owner of every website that was in your training set and give them a payment, I just think the whole model breaks.
So I think it just winds up becoming a metered usage thing and that the payments are incredibly small. I think it starts to look like Spotify royalties. Did you get 1,000 plays on Spotify last month? Great. Here’s your $0.06, and we’ll pay you in 10 years once it rounds up to $1.
But that’s not how any of this works with these AI models. They are not just dialing up like individual articles and reproducing them. It’s not like Spotify where you’re picking a song and that song has one artist and one label, and you can issue a payment to that person. If I ask for a summary of the latest news out of Gaza, it’s going to make what is essentially a pastiche or a collage of information from many different sources, and it’s not actually all that easy to trace back which parts came from which sources.
Just because it’s not easy doesn’t mean it’s not possible, Kevin. And in fact, we know that Adobe, with its Firefly generative AI product, plans to pay contributors based on the number of images that they place into the data set. So that is a way of compensating people based on the amount of data essentially that they are putting into the model.
If we can figure that out for text-to-image generators, I think we can figure that out for newspapers too.
Well, I hope you’re right, and it’ll be fascinating to follow this case as it progresses through the courts. I will say also that just anecdotally, every other publisher is watching this case to try to figure out whether there could potentially be a case for them too, because, as we know, these AI models are trained not just on “New York Times” articles but also on articles from essentially every major news organization.
Well, as a publisher, I can tell you I’m watching this very closely. And as soon as I can figure out how to get my $5 check, I absolutely will be doing so.
The Platformer legal department is having a bunch of very serious meetings.
That’s right.
When we come back, we’ll talk about the new app that is giving Apple a ton of headaches by letting the green bubble brigade join the blue bubbles.
The green bubble brigade!
Well, they are a brigade, and they’re very mad.
They’re not a brigade.
They’re very mad.
This podcast is supported by Vanta. From dozens of spreadsheets to fragmented tools and manual security reviews, managing the requirements for modern compliance and security programs is increasingly challenging. Vanta is the leading trust management platform that helps you centralize your efforts to establish trust and enable growth across your organization.
Automate up to 90 percent of compliance, strengthen security posture, streamline security reviews, and reduce third-party risk. Get $1,000 off at vanta.com/hardfork. That’s V-A-N-T-A.com/hardfork.
Hi. I’m Wendy Dorr. I’m an editor with “New York Times Audio.” For me, the magical thing about audio is how it can take you closer to somebody else’s life. You feel like you’re getting to know somebody that you might never normally meet, and “The New York Times Audio” app is all about bringing those voices to you every day.
On Monday, you could get to know the Willy Wonka of YouTube.
- archived recording 1
He tips $10,000 to Uber drivers.
On Tuesday, it might be Phoenix’s Chief Heat Officer.
- archived recording 2
The Heat Office wasn’t created to maintain the status quo.
Later in the week, hear from someone who knows how to make the most out of eggplant.
- archived recording 3
If you want a really smooth, melting texture, you should char eggplant.
On the weekend, get some fitness inspiration from a health reporter who’s rediscovering rollerblading.
- archived recording 4
Here we go. Oh, boy. OK. Going a little fast.
You can explore new stories every day by downloading “The New York Times Audio” app at nytimes.com/audioapp. You’ll need a news subscription to listen.
You know, I actually had a green — I experienced my first case of green bubble harassment over the holiday break.
Really? What happened?
So I was on a trip with a bunch of friends. We were visiting some friends on the East Coast. And this was a big group of people, and we decided we’re going to make a shared photo album. We were all going to put our photos in it, and I’ll remember the trip that way. And I have one friend — love him dearly — refuses to get an iPhone. He’s the lone Android user in our group of friends.
And so there was a discussion and a debate about whether we were going to make the iCloud photo album through the Apple Photos product that he wouldn’t be able to access. And ultimately, we decided to leave him out.
You shut your friend out of the photo album?
Yeah, so I guess I was part of the harassment.
That’s terrible.
But I’m sure everyone knows: if you’re on iMessage and you have an iPhone, your texts in group chats show up in blue, but if you’re an Android user participating in chats with people who are iPhone users, your texts show up in green. They are green bubbles, and they also don’t have access to many of the same features.
If you send a photo in such a group chat, it’ll be miniaturized. Videos become grainy and horrible. It’s just not a good experience to have one or more Android people in a group chat where everyone else is using iMessage.
Yeah, and of course, Apple knows this, and there is a reason why iMessage does not interoperate with Android messages in this way, even though it would be quite possible to devise a way for there to be unified bubbles across the world. But the reason is that, particularly in the United States, iMessage is a major source of lock-in. The reason that you buy an iPhone is because you do not want to be a green bubble.
Yeah, so this green bubble, blue bubble divide is the Montagues and Capulets of our time.
It’s the Sharks and the Jets, to use an only slightly more updated reference.
[LAUGHS]: And this has become a big issue. Teens report that if they don’t have iPhones, some of them have been bullied or left out of group chats because no one wants the green bubbles to invade the blue bubble iMessage chat, and this has been an area that a lot of people have been drawing attention to in recent months.
And actually over the break, something major happened on this front involving a company called Beeper. Beeper makes a chat app that basically tries to unite your inboxes from various chat applications — texts, Slack messages, Instagram DMs, Discord messages. Basically, they’re trying to make the one chat app to rule them all.
Which, by the way, is not a new idea. And in fact, when I was in college, we had tools like this. I used to use a piece of software called Adium, which would bring together my messages from MSN Messenger and Yahoo Messenger and ICQ. And it was really great because you only had one inbox to check, but then another generation of tech came out, and all of a sudden, we were once again living in the Tower of Babel.
Totally. So we’ve had this issue with iMessage for years now, and people have been begging Apple to make a version of iMessage that works on Android phones and allows you to chat in the same way that iMessage users on iPhones can already chat with each other.
And I would describe Apple’s response to that request as LOL, LMAO.
Yes, Apple has not budged on this front. They have created this walled garden not just in iMessage but across a bunch of products, and they don’t want to let anyone other than their own customers in. But this is starting to become a real problem for them. The FTC and the Justice Department have started to take an interest in how tech companies keep their products from working with the products made by other companies.
Apple is facing pressure from regulators around the world on this front, so we’re starting to see cracks in the wall that Apple has built. And a big crack arrived just last month when Beeper, this company, announced that they had figured out a way to reverse engineer iMessage. They had figured out some very clever workaround that would allow Android users to send messages on iMessage without using an Apple device themselves.
Apple, of course, hated this and moved very quickly to block this. And so you might think, well, this is just — why are we talking about this? This tool was squashed by Apple. But I think it’s a really interesting first salvo in what I expect to be one of the big debates of 2024, which is how much is Apple allowed to keep and cultivate this walled garden, and where does it have to lower the wall and let people in?
That’s right. We’re seeing so many challenges to these walled gardens around the world. Regulators are very interested in how both Apple’s and Google’s app stores work, what payment systems these companies are using, and, yes, here in this case, the question of bubbles and messages.
So to talk about this issue, we’ve invited Eric Migicovsky on the show. Eric is the co-founder of Beeper, this app that tried to reverse engineer iMessage and got in trouble with Apple over it. He was previously a partner at Y Combinator and the founder of Pebble. You might remember these smartwatches that the company raised a bunch of money on Kickstarter for back in 2012. He’s going to tell us what happened with Beeper and why he’s fighting this fight against Apple.
Eric Migicovsky, welcome to “Hard Fork.”
Great to be here.
Hey, Eric.
So tell us about Beeper, what the original concept for it is, and then this latest skirmish with Apple. Walk us through just the history of the project.
So Beeper started mostly to solve a personal problem. I look down at my phone, and I see a folder full of chat apps that all do the same thing. But each one has a different slice of my own personal contact list, and I guess I grew up in an earlier part of the internet where we actually had solved this. We had Trillian and Meebo and Adium, and life was good.
The IM, instant messaging, life was good. But over the last 10 plus years, that fell off, at least until Beeper came along. We built it, like I said, mostly to solve a personal problem. We just got sick and tired of there being too many damn chat apps.
And as you were conceiving this, in America, as you know better than most people, the big divide is between Android and iMessage users. When you conceived this, did you think by hook or by crook, I am going to get iMessage into this app? Or did that seem like too much to dream about?
No, honestly, I never used iMessage. I used WhatsApp, because I just had started, I guess, on WhatsApp back in the day. And I think I just had 10 to 15 different chat apps.
So my understanding is that you’ve had iMessage on Beeper for years, because people have come up with clever ways to route messages from Android phones through a Mac set up in a server farm somewhere, making it possible for Android users to send iMessages. But Apple, which doesn’t want anyone doing this kind of thing, always quickly shuts those workarounds down. What made it possible for Beeper to do this this latest time was a 16-year-old named James Gill, who worked at McDonald’s and, I guess, analyzed messaging apps in his spare time. You found out that he had actually figured out a way to send iMessages from Android devices. So tell me about that and how he came into your orbit.
And did he say in his initial message to you that I’m 16, and I work at McDonald’s, and I’ve just discovered this iMessage hack? What did he say?
No, but he sent me a message on Discord because that’s how these kind of things go down. You’re either overthrowing the government or trying to overthrow Apple on Discord, right?
That’s where these things start. So he sent me a message just out of the blue on Discord, and that perked me up. Wow. Did I wake up when I saw that, because he not only said that he had done this, but he also sent me a link to his GitHub repository where he had an open source demonstration of this. And the proof’s in the pudding. Took me five minutes, and I got it working on a Linux computer, and I was able to send and receive iMessages without any sort of Mac or any sort of other device in the mix.
We started working with James immediately, and from about August to the beginning of December, we spent that working on what would become Beeper Mini, which is a fork of Beeper designed specifically for iMessage on Android.
It didn’t support all the other chat networks that we had in our repertoire from our primary app. It was laser-focused on just being a really good iMessage client for Android.
And so you put this into a product, Beeper Mini. You release it into the world. I imagine in this moment you know you are poking the bear, and there is going to be a response. But what did you think the response was going to be?
So we started working on Beeper in 2019, and we support 15 different chat networks, including iMessage. And as you were talking about, Kevin, we used some very creative mechanisms for getting access to iMessage. One of them involved jailbroken iPhones. One of them involved a server farm full of Mac Minis in a data center.
So keep in mind, Beeper has had iMessage support for three years. We didn’t have any problems. We didn’t have any problems for three years. And the approach that we’re coming from is that Beeper Mini makes the iPhone customer experience better. It takes the unencrypted, crappy experience of texting the half of the US population that has an Android phone and upgrades it to add encryption, to add all these extra features, and Apple didn’t have to lift a finger. They didn’t have to go and build an iMessage app for Android. They didn’t have to support RCS. It just happened overnight.
These conversations, which were previously these crappy green bubble texts, were now blue. They were upgraded to the level of quality that people expect.
All right, so your position is that when you launched Beeper Mini, you thought that Apple was going to send you a thank you note for fixing the iMessage experience for Android users.
Think about the beginning part of this story. I don’t actually care about iMessage. There’s nothing that special about it. I have 15 different chat apps on my phone. I don’t need another chat app. What I want to do is to be able to have an encrypted conversation with iPhone users. And in the US, because iPhone is more than 50 percent of the market and the iMessage app or the Messages app is the default texting app on an iPhone — you can’t even change it. It is the only way to text someone on an iPhone.
And Apple does something very sneaky here. They’ve bundled another service that they call iMessage in with the default texting app that can’t be changed. And so most of the user base, most of the iPhone customers in the US, when they open up their contact list and they hit my name to send a message, they send it through iMessage, or they send it through the Messages app. I’m even using the same word here because they’re so intertwined.
And so the goal of this is not to get iMessage. The goal is to be able to have clean and easy encrypted secure high-quality conversations between iPhone users predominantly in the US and Android users.
Right, so you release Beeper Mini. You trumpet this clever way to send messages through Androids, and Apple does not send you a gift basket and a thank you card. They actually change iMessage and basically block Beeper from working. And my understanding now, they’ve changed it a couple times. You’re in this cat and mouse game with them. They update iMessage. You update Beeper.
And Apple told my colleagues at “The Times” in a story the other day that they were making these updates to iMessage because, among other reasons, they couldn’t verify that Beeper kept its messages encrypted. A spokeswoman from Apple said, quote, “these techniques posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.”
What did you make of that justification from Apple for why they moved so quickly to block Beeper Mini?
I’m going to turn the question around to you, Kevin and Casey. So we just spent like 15, 20 minutes talking about how there’s this gulf of encryption where Android users are sending unencrypted messages to iPhone users, and everything that Apple holds true and dear, which is privacy and security, is just thrown out the window when it comes to conversations between an iPhone user and an Android user.
So Beeper Mini is introduced. All of a sudden, you, as an iPhone user, are now sending encrypted messages to your friends who have Android phones. And then Apple torpedoes that and comes out with that statement that you just read. How does that sound?
I mean, I think the security discussion is obviously a pretext here. I don’t doubt that there are legitimate security issues at play, but I also think that Apple clearly has a vested interest in not letting Android users access iMessage, because then people will just have fewer reasons to buy iPhones.
I’m sure you saw this, but the blogger John Gruber — a tech blogger who’s been around a long time, is very interested in Apple, and often takes the company’s side on these types of issues — had a post the other day where he basically compared iMessage to the Centurion lounges that American Express runs in airports.
If you go to an airport that has a Centurion lounge and you are an American Express platinum card holder, you can get into the lounge, and the lounge has drinks, and it has snacks, and it has comfortable chairs. And if you don’t have an American Express card, you can’t go in. And so that is a perk that they offer to their members for the fact that they have an American Express card.
And John Gruber’s argument is, well, why isn’t Apple allowed to have a perk for iPhone and Apple device users called iMessage? Why does it have to open that up to everyone with a phone? Why can’t it reserve that sort of premium product for its own users?
So what’s your response to that?
So you’re an iPhone user, right?
I am.
You paid good money for an iPhone. Do you not deserve to have an encrypted high-quality conversation with anyone? You paid money for the phone. Why shouldn’t you get the benefit of it? Why is Apple forcing you to have a crappy experience when chatting with your friends? Because that’s what they’re doing.
Well, it wants my friends to get iPhones.
But we’re not talking about an airport lounge here. We’re not talking about something that’s a premium service. I wouldn’t be able to say exactly how many people even know what iMessage is, right?
They buy an iPhone. They type in their friends’ phone number, and they send them a message. And they send them photos, and they send them videos, and they bring them into group chats. That’s the message that Apple is sending here, that they don’t care that you are a paying customer, and when you send a message to someone on Android, they just don’t care.
In fact, Tim Cook came out and said, when someone asks, like, oh, what if I wanted to send a message to my mom who has an Android, he says, buy her an iPhone.
Right, right.
There’s no reading between the lines here. They said the quiet part out loud. And what strikes me as super weird in this situation is that people aren’t buying an iPhone just for the blue bubble, and people aren’t avoiding Android just because of it. There’s more to an iPhone than a blue bubble, and I should hope so.
I mean, I would hope that the Apple engineers have enough faith in their own product to say everything that we’ve put into this phone, all of the App Store, the ecosystem, everything, that’s why people buy an iPhone. They don’t buy it just because of the color of their bubble.
Another thing that I’ve heard sort of Apple defenders say in this situation is, look, there are a lot of different apps you can use if you want to communicate with people between Android and iPhone. You could use WhatsApp. You could use Signal.
Apple has not banned those things from the App Store. You can do all of that, and your messages will look exactly the same on whatever device the other person is on. It is only iMessage that has this issue. And so there’s actually plenty of competition.
This is not an anti-competitive move on Apple’s part. If you want your chats to look identical to your friends’, go use WhatsApp. Go use Signal. Go use another messaging app. How do you respond to that?
There’s only one texting app on an iPhone. It is impossible to change the texting app that comes with an iPhone. You can’t download a different SMS app. You can’t change the default messaging app so that when you press the message button in the contact list it would use something else.
It always routes to Apple’s default app, which is Messages. And that’s the reason. If there was an even playing field here, if anyone could make an app and have it run at the same kind of level of integration that iMessage has or Messages has in an iPhone, there wouldn’t be a problem.
But the thing about defaults, especially defaults that you can’t change, is that they are very sticky. Like I said before, most people don’t even know that they use iMessage. They just use the texting app. People just want to text. That’s how it works.
And when you make the default texting app, the unchangeable default, your own product, your own service, that’s when it veers outside of just normal competitive territory.
Eric, it feels like, at least to me, we may be past the peak of walled gardens. Recently, we’ve seen Apple being forced by regulators in the EU to switch the iPhone from Lightning, its proprietary charging connector, to USB-C. The company is also being forced to work on allowing sideloading — allowing apps to be installed on iPhones without going through the Apple App Store. That’s also in response to regulations in the EU.
We’ve also talked on the show recently about some challenges in court to companies like Google by developers like Epic Games to try to force them to loosen their control of the Google Play Store. So do you think that we are past peak walled garden, or are these companies going to continue fighting back as hard as they can?
I think we are. And another point to add is that the Europeans passed a law called the Digital Markets Act, which basically mandates that large tech companies open up interoperable interfaces for networks and services that they control at large scale. It’s a really good direction, and I’ve flown to Brussels and spent time working with the Europeans there.
It is going to be a pretty interesting next 6 to 12 months as the DMA comes into force this year, and we’ll see what happens. But I think at the end of the day, it really comes down to users. What sort of experiences do we want to have? If you look down at your phone today and you see all of these different apps that do the same thing but don’t really talk to each other, is that the future that you envisioned?
I’m a big sci-fi fan, and it gets to me that in the future that’s played out in all of these books, they don’t go into detail about the protocols and the apps that they use to communicate across interstellar distances. They just communicated, and that’s the vision that we at Beeper have.
I want the aliens to have blue bubbles when they contact us. That’s my —
I mean, I have to assume that the reason that everyone can communicate effortlessly everywhere in the far future is that there is just one giant corporate monopoly.
That’s very dystopian.
In some of the futures, there are.
Yeah.
Eric, thank you so much for joining us, and good luck in your David and Goliath battle.
Thank you, Kevin. Thank you, Casey.
When we come back, we have some resolutions for New Year’s. We’re going to tell you about them.
AI isn’t coming. It’s here now. How can leaders stay ahead of the curve and make sure they’re using AI to its fullest potential? By listening to the “Work Lab” podcast from Microsoft, hosted by veteran technology journalist Molly Wood. Join her as she explores how AI innovation is transforming creativity, productivity, and learning. Follow “Work Lab” on Apple Podcasts, Spotify, or wherever you listen.
Well, Casey, first of all, happy new year.
Happy new year, Kevin!
Are you a New Year’s resolution guy?
I’m a big New Year’s goals person, and I would describe the difference this way. To me, a resolution is like, oh, can I draw upon my willpower to make some sort of change in my life, and hope that goes well? To me, a goal is I’m going to set some kind of milestone, some sort of specific thing that just needs to get done, and then I’m going to invest a lot of energy this year in doing so.
I have to say, I’ve been doing this over a decade, and it actually has helped me accomplish a lot.
Yeah, you have actually inspired me. I did my own goals document on New Year’s day this year. So I do have some goals coming up for this year, and I like this reframe away from resolutions, because resolutions, to me, feels like there’s an element of shame in it.
Yes!
If you say you’re going to resolve to lose 10 pounds but you only lose 7 pounds, it’s like you’ve been a failure all year. So I like taking this more positive goals approach, but I do think we should talk about our tech goals or our tech resolutions for 2024. Because this is an area where so many listeners have written to us and told us that they are unhappy with the way technology is showing up in their lives.
We also talked with Jenny Slate just before the break on our Hard Questions episode, and she made note of how she had been sort of battling with technology. Instagram in her case was the app that was making her feel bad, and so she made some changes to the way she used it. And so I thought as we head into the new year, we should talk about how our relationships with technology are going and maybe one goal that we’re giving ourselves for tech use in the year 2024.
I really like this idea.
So first of all, let’s check in on last year — we actually did a resolutions episode last year, and my resolution was to use my phone less and to implement something called a phone box. I believe you called it a phone prison, and this experiment did not go well for me.
I did not end up using the phone prison for very long, and I actually ended up undoing some of the measures that I had taken to make myself use my phone less. You actually made a resolution on last year’s show that you were going to use your phone more in 2023. How did that go for you?
[LAUGHS]: I think that if you look at my screen time, it probably mostly held steady. I don’t know that I made a huge new investment into my screen time, but I certainly did not waste a moment thinking that I was looking at my phone too much.
I used my phone when I wanted to, and if I ever found myself feeling like I was using it too much, I put it away.
Yeah. So do you have any tech goals for this year?
Well, so I do, and it is screen time related actually, which is new for me. But growing up, Kevin — and I wonder if this was the same case for you — I would sometimes find myself in houses where there was a TV on at all times. Were you ever in this house? Maybe it was your house too.
No, not my house, but I had friends who you’d go over, and CNN was always on.
Yeah, and it didn’t matter if anybody was watching the TV. Sometimes people wouldn’t even be in the same room. There was just this kind of low bad hum, loud commercials. And I hated it. It was like poison to my ears, and I could never understand why anybody would do that.
So then fast forward to last year, and I notice that whenever I am in my office, and I’m not just typing my column, it feels like YouTube is playing. It feels like there is a YouTube video going on. Often I’m watching the YouTube video. But in other cases, I am not.
And I’m playing a video game, and YouTube is going on. Or I’m browsing through emails, and there’s a YouTube video going on. And increasingly as the year went on —
What is the YouTube video? What does your ambient noise YouTube diet consist of?
There are a bunch of folks who play the mobile game “Marvel Snap,” which is a game that I had to stop playing for my own sanity because it’s too addictive. But my methadone for that is that I watch other people play the game, which feels more under my control.
Wow. I love the hoop of self-justification that you just dove through. Anyway, keep going.
It honestly is much better for me to just let other people play this game and worry about it less. So that’s one category. I watch a lot of stuff about video games. I will basically watch any human being cook any dish that can be made. So I love to do that as well.
I love to watch videos about interior design. So I just have a handful of categories where I’m really interested. And again, often I will watch the videos, but this thing just kept happening where I would be hearing this background noise, and I’m thinking, I’m not even paying attention to a thing that I clicked on to watch.
So what is going on there? Why have I become the person whose house is showing TV all the time?
And so your resolution or your goal for 2024 is to stop doing that?
My goal for 2024 is, if I’m going to watch YouTube, I should be watching YouTube. OK? And there’s a case to be made I should watch YouTube a little bit less than I do. I think there are times when I just want to stare into space, where I want to de-stress, where I want to not think about work, and YouTube is what I slot into that spot. I think I need to probably slot in a few other things — go for a walk, take a nap. But when it comes to this sort of reflexive behavior of, well, I’ll put something on in the background, and I will just shuffle through 40 screens, I don’t want to do that.
Last year, our friend and colleague, Ezra Klein, wrote this column that really resonated with me where he described the internet as an acid bath for human cognition, which I thought was such an evocative phrase, because even though I love the internet and screens as much as I do, I have to admit, it has gotten harder for me to read a book. OK?
I do feel like text-based social networks have scrambled my brains a little bit. And to me, watching YouTube without watching it is the apotheosis of throwing your brain into the acid bath. So this year, I do want to take my brain back from the acid bath.
Can I offer one suggestion?
Please!
So I had this problem too with YouTube. I would watch just endless amounts of — my thing was old tennis matches like from the ‘90s and early 2000s. I would just put one on in the background, and it would be this white noise behind whatever I was doing.
And, ultimately, there’s nothing wrong with this, except I would end up in the situation that you would be in, where it’d be two hours later, and I’d be like, why am I still watching this? So I disabled the autoplay-the-next-video feature on YouTube. You can actually make it so that when you finish a video, it just stops. It doesn’t go to the next one in the recommendations set.
So you can turn that off, and I have found that to be a valuable thing that actually does put a little speed bump in there, because then I have to actually go select a new video if I want to keep watching YouTube.
I think that is a great idea, and in fact, I’m doing it right now. Because, Kevin, if I don’t do it right now, I might not do it. So I’m going into my settings.
So you go to YouTube.
OK, I’m there. I’m in my settings, and where’s my autoplay? Playback and performance?
So play a video.
OK. Puh, puh, puh. OK. Mm. All right, I’ll play a video.
And then do you see —
The first recommended video is a “Marvel Snap” video. So I’m clicking on it.
And now do you see the little arrow at the bottom of the video that says “autoplay is on“?
Hmm. No. Where is it?
OK, so hover over the video. It’s right next to the “Closed Captioning” button.
Aha!
So you turn that off, and now when you reach the end of that video, it will not play another video.
It will not play. And just with that one simple click, Kevin, I have begun to reclaim my time and attention. That was beautiful.
You’re welcome. Happy new year.
Thank you. Thank you. Now, I imagine you might have a resolution for yourself.
Yes. So last year’s resolution for me was about reducing my screen time through the use of this phone box and an app that put these little speed bumps to me opening my problem apps, and I stopped using that a few months after New Year’s because I just noticed that it was making me feel incredibly guilty about my phone.
It just felt like this forbidden thing, and I ended up actually — my screen time was going up, and so I started trying to implement what I called phone positivity, and we talked about this on the show. I started trying to basically build in more gratitude for what my phone was allowing me to do, whether it’s checking in on work while I’m hanging out with my family or doing work when I’m on the move, basically just, instead of agonizing about how much I was using my phone, really trying to appreciate what I was able to do with my phone.
And I actually think that worked pretty well for me. I’m pretty happy with how my phone use is going. I feel like I’m using it about the right amount. I don’t feel like I have a big screen time problem. But there is a problem still with my phone use, because I find that I have just come to associate the act of picking up my phone with anxiety and fear and bad things.
A lot of what my phone does, when you boil it down, is tell me about bad stuff. Like someone was mean to me on the internet, or some terrible war has broken out, or there’s a porch pirate stealing packages in my neighborhood — a lot of what I get when I pick up my phone is something bad. And so my resolution, my goal for my tech use in 2024 is what I’m calling more delight, less fright.
OK, great.
So I got this idea in part from Catherine Price, who was actually my phone detox coach back when I did a phone detox several years ago. She wrote a book about breaking up with your phone, and she actually wrote a piece recently in “The New York Times” about delight and the concept of bringing more delight into our lives, and she wrote that basically all these delightful things happen every day.
We see a pretty flower on the street, a nice bird lands on a bird feeder outside our window. Whatever delightful things, she was advocating for noticing them, and I thought, well, maybe my phone could become more delightful. Maybe if what I’m feeling when I open my phone is like a sense of dread and fear, maybe I could change that experience in some way by making my phone a more delightful place to spend time.
So I’ve been gradually rotating out some of the apps and the widgets on my phone. I took a bunch of unpleasant apps that would tend to give me unpleasant things the first time I opened them. I put those on a second screen, and now on my home screen it’s stuff that makes me joyful.
So I made a folder in my photos app, a new album called “Delights,” and I just put photos of things that bring me delight. Maybe it’s my kid playing. Maybe it’s a family photo. Maybe it’s something that I saw on my way to the office.
Maybe it’s a screenshot from something. Maybe it’s a meme that made me laugh. I’m filling up this album with things that bring me delight, and I have put a little widget on my home screen that will shuffle photos just from that Delights album all day.
So now when I open up my phone, I get a picture of my kid as my wallpaper, and then I open my phone, and I see this little widget that has a photo of something that brings me delight.
So am I allowed to see the delight?
You can see the delight.
OK.
This one is a photo of my kid at the beach over break, making a very joyful face.
Reaching toward the sky — that is a confirmed delight.
Confirmed delight. I’m going to keep filling up this folder with things that bring me delight, and I just think this is like something that I am doing to try to change the emotional register with which I use my iPhone.
So I have a good sense of what’s on your first screen. I would love to know which are the sad apps that are now in the second screen.
[CHUCKLES]: Well, it’s everything that’s work-related. There’s not a lot of times when I’m getting messages from a news app that are like, a great thing happened today. It’s usually some sort of calamity.
No one has ever experienced delight from a Slack notification either. I would say that.
[LAUGHS]: That’s not true, actually. I do get some delightful Slack notifications. But I’ve put a lot of the stuff that just makes me a little more anxious on the second screen. I actually have a red flags folder that includes things like TikTok, Threads, Bluesky. These are not —
Wait. These are in a folder that is just marked with a red flag?
Yeah, I’ll show you. This is my red flag folder.
But I did move stuff to the first page, like the journaling app. Apple has a new journaling app. I’ve only just started using it, but it is helping me out. I put ChatGPT on my first screen, and I’m also putting things like e-reader apps to read ebooks on my screen.
Well, I think this is a great system, and there’s actually only one thing that I think that would improve it, but we can actually do it right now.
What’s that?
I should take a picture of us for your Delights folder.
Oh, let’s do it.
Let’s do it right now. I’m just going to take out my little phone, and spin the camera. Smile!
All right, that’s going in the Delights folder.
Now every time you open your phone — because hopefully, you’ll just set this to be the first one.
You can remember when we recorded this episode. So there you go.
I love that.
Yeah. Now, Kevin, I imagine other people might be setting their tech-related goals for the year. Do we have any tips or words of advice for them?
Yeah, I think, just be honest with yourself about what is realistic for you. I mean, one thing that you’ve taught me about goals is that they should be something that you could actually realistically achieve. And so if the goal is “never use my cell phone” or “never look at social media,” that might not be a realistic goal for you.
So I think it should be something that is a stretch but not impossible. And I also think, as much as you can, try not to be too hard on yourself. Build in some buffer so that if you don’t get all the way to your goal, you still feel good about having made it part of the way there.
Yeah, I really like that. I think the one that I would just throw in there is “trust your instincts.” If there is a piece of software out there that is making you feel bad, just experiment with getting rid of it. You can always download it again later.
But over and over again when I talk to folks, they sometimes feel embarrassed because there’s maybe some social app that all their friends are using, but they’re not on. Trust your instinct. There is something that you know that you don’t want to be a part of that, and you’re probably right. And so as you’re casting about the tech landscape, wondering what changes you might want to make, I would just listen to those instincts.
What do you just not want around you anymore? I promise you, you’ll be able to fill it up with something you like better.
I love that.
Yeah.
All right, so we will check in on our goals this time next year, and hopefully I will just be full of delight.
I mean, I am excited for that, and I will have found something to do besides just staring off into space while listening to “Marvel Snap.”
You will no longer be the embodied version of the YouTube algorithm.
Yeah, exactly. Won’t that be nice.
[MUSIC PLAYING]
AI isn’t coming. It’s here now. How can leaders stay ahead of the curve and make sure they’re using AI to its fullest potential? By listening to the “Work Lab” podcast, from Microsoft, hosted by veteran technology journalist Molly Wood. Join her as she explores how AI innovation is transforming creativity, productivity, and learning. Follow “Work Lab” on Apple Podcasts, Spotify, or wherever you listen.
“Hard Fork” is produced —
Can you do that without jostling the thing again?
Jostling it is part of my creative process. OK. “Hard Fork” is produced by Davis Land and Rachel Cohn. We had help this week from Kate LoPresti. We’re edited by Jen Poyant. This episode was fact checked by Caitlin Love.
Today’s show was engineered by Daniel Ramirez. Original music by Marion Lozano, Pat McCusker, Rowan Niemisto, and Dan Powell. Our audience editor is Nell Gallogly. Video production by Ryan Manning and Dylan Bergerson.
If you haven’t already, check us out on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, and Jeffrey Miranda. You can email us at hardfork@nytimes.com. Let’s hear those resolutions.
And don’t send us a text if you’re an Android user. We really don’t want to hear it.
Kevin!
[LAUGHS]: All right.
[MUSIC PLAYING]
We made USAA Insurance for veterans like James. When he found out how much USAA was helping members save, he said —
It’s time to switch.
We’ll help you find the right coverage at the right price. USAA, what you’re made of, we’re made for. Restrictions apply.