This transcript was created using speech recognition software. While it has been reviewed by human transcribers, it may contain errors. Please review the episode audio before quoting from this transcript and email transcripts@nytimes.com with any questions.
Casey, I have a bone to pick with you.
What’s that? What did I do?
[LAUGHS]: So on Saturday, as you know, we had a birthday party at our house —
A wonderful birthday party.
— for my son.
Yeah. And it was also a house-warming party.
House-warming party. And you and your boyfriend came. Lovely to see you there.
Thanks for coming. But you brought him this present. We specifically said, no presents.
You did say that.
And you brought in this present that was called the Dino Truck.
Yes, and here’s why, because I know that your son loves trucks. And I thought, what is the best kind of truck I could think of? And that would be a truck that was also a dinosaur that was full of dinosaurs. And so that’s what I got him.
[LAUGHS]: Yes. It’s very “Pimp My Ride,” because it’s a dinosaur truck that contains within it 12 other dinosaur trucks.
That’s right.
And you sort of assemble it all together. But my son has not stopped playing with it. He absolutely loves it.
Yes!
And as a result, about twice a day, I now step on a very painful dino truck that has been left somewhere around my house.
Oh, no.
He’s loving it. I am not.
I mean, I think it was the best kind of gift I could get for the Roose family. It is something that your son enjoys and that causes you physical pain. So I think that was a slay on my part.
Mission accomplished.
Mission accomplished.
[LAUGHING]:
[THEME MUSIC]
I’m Kevin Roose, a tech columnist at “The New York Times.”
I’m Casey Newton from Platformer.
And this is “Hard Fork.”
This week, America is building an AI action plan. We’ll tell you how tech companies are trying to exploit it. Then Columbia University sophomore Roy Lee joins us to talk about the tool he built to help software engineers cheat their way through job interviews, and why he might get kicked out of school over it. And finally, the Hot Mess Express is once again rolling into the station.
Well, Casey, I have an action plan for today’s episode.
OK, let’s hear it.
I want to talk about action plans. So, Casey, as you know, because you wrote about it this week, there have been these AI action plans that all the big AI companies, and think tanks, and nonprofits have been submitting to the Trump administration over the past couple of weeks.
Yes, there was the Paris AI Action Summit, at which no action was taken or really even proposed. And then the White House came forward and said, we’re going to make our own action plan. And why don’t you companies and anyone else who wants to make a public comment, go ahead and tell us what you think we should do.
Yeah, so these kinds of public comment periods are not unusual. Agencies of the government open themselves up for submissions from the public all the time on various issues. But this one caught our eye because it was related to AI. And it was essentially the Trump administration trying to figure out what to do about AI and the potential that AI progress will accelerate during the four years that Donald Trump is in office.
Yes, I think that’s how the Trump administration saw it. And I think for the big AI companies, Kevin, it was really a chance to present the president with a list of their absolute fondest wishes and dreams for what the best possible deal they could get from the government would look like.
Yes. So I think there’s some interesting stuff in them. But I also think there’s kind of a broader, interesting story about how the tech companies want or don’t want government to be involved in helping them build and manage these very powerful AI systems.
Yes. Let’s get into it.
OK, but first, because this is an AI-related segment, we should make our standard disclosures. Do you want to switch it up this week? Do you want to do mine and I’ll do yours?
Yeah, sure. “The New York Times” is suing Microsoft and OpenAI over alleged copyright violations.
Correct. And Casey’s boyfriend works at Anthropic.
That’s right.
OK, so you wrote about these submissions this week. Where do you want to start?
Well, let’s start with some of the things that are a little bit less controversial, right? I think there are some pretty good ideas in these action plans. And I actually think the Trump administration will probably follow through on them.
So for example, they talk about wanting to expand the energy capacity that we have in the United States, so that we can have the power that it will take to do everything with AI that we want to. They also talk about encouraging the government to explore positive uses of AI that could deliver better services to citizens. That would be good if it happened. So there’s a lot in these documents about that. But once you get beyond that surface layer, Kevin, there is a lot of, essentially, what these companies have always wanted from the government, and they are now finally getting a chance to say, hey, please, please, please, do this.
And what are those things?
So for example, they are really, really excited about the idea that Donald Trump might declare, definitively, that they have carte blanche to train on copyrighted materials. Now, this is, of course, at the heart of “The Times’” lawsuit against OpenAI. But it’s not just OpenAI that wants the green light to do this, because all these AI labs are under similar legal threat.
So it’s in Google’s AI action plan. It is in Meta’s AI action plan. In fact, Meta says that Trump should unilaterally, without Congress, just issue an executive order and say, yeah, it’s OK for these AI labs to train on copyrighted material. Go nuts. OpenAI, I think, had a frankly ridiculous statement in their AI action plan, which is that if Trump does not do this, if he does not give AI companies carte blanche to train on copyrighted materials, we will immediately lose the AI race to China and it will just be DeepSeek everything from here on out.
I mean, obviously, they have an interest in making that case and having the Trump administration give them sort of a free pass. But can they actually do that? Could Donald Trump issue an executive order tomorrow and say, there’s no such thing as copyright anymore when it comes to the data used to train large language models?
Well, Kevin, lately the Trump administration has been issuing a lot of executive orders that people have said, well, hey, you’re not allowed to do that. That’s actually not constitutional. And yet, he keeps doing it. And some of these things have been struck down by the courts. And some haven’t been. And there seems to be a kind of flood-the-zone strategy, where we’re just going to do whatever we want and the courts may undo some of it, but they’re probably not going to undo all of it. So where would a copyright executive order fit into that? I don’t know.
Yeah, I mean, my hunch is that this will not happen via executive order or that it will be left up to the courts to decide. But yeah, I mean, it’s certainly in their interest to argue that this all should be allowed and kosher, and to sort of preempt any potential litigation against them. Was anyone opposed to that idea?
Yes. So a group of more than 400 Hollywood artists, including Ben Stiller, Mark Ruffalo, Cynthia Erivo, and Cate Blanchett, signed a letter saying, hey, do not grant an exemption from copyright law to these AI labs. And their argument was essentially, America has a lot of cultural leadership in the world. It’s like, so much global culture is downstream of American culture.
And they said, if you create disincentives for us to create new works, because we can no longer make any money from it economically, because AI just decimates our business, we are going to lose that cultural leadership. And so I would actually call on Ben Stiller, Mark Ruffalo, Cynthia Erivo, and Cate Blanchett to come on the “Hard Fork” podcast and tell us more about that. We’d love to meet you and hear your stories.
Yeah, I would call on them to frame their opposition in the form of like a musical. Cynthia Erivo, in particular. I have a proposal for the showstopper tune of that musical.
Have you written it?
Yeah.
OK.
It’s called “Defying Copyright.”
Oh, boy. Wow. You didn’t even try for a rhyme.
[LAUGHING]:
[VOCALIZING]
When it comes to copyright violations, Cynthia Erivo is decrying depravity.
[LAUGHS]:
And that’s how you do it, Kevin.
[LAUGHING]: OK. Back to the serious issues in these AI action plans, Casey.
Yeah, there’s another big plank that gets repeated in these submissions, Kevin. And that is this idea that these companies do not want to be subject to a thicket of state laws about AI, right?
Yes. Basically, in the absence of strong federal regulation on AI, the companies don’t want California to pass a bill governing the use and training of large language models, Texas to pass a bill, Florida to pass a bill, New York to pass a bill. They don’t want to have to go through 50 states’ worth of AI regulations and make sure that all their models comply with all the various state rules.
So they have wanted for a long time and are now making explicit their desire for a federal law, or statute, or executive order that would essentially say to the companies, you don’t have to pay attention to any state laws because the federal law will supersede all that.
Yes. And in particular, Kevin, they are worried about state laws that would make it so that these companies could be held legally liable in the event that their products lead to great harm, right? There was some discussion about this in California last year with a Senate bill that we’ve talked about on the show. And there’s a lot of fear that other states might take a similar approach.
And so this plank in these plans, Kevin, where these companies are saying, we don’t want a thicket of state laws, it kind of works in a couple of different ways. I can understand why they don’t want to have to have a different version of ChatGPT in 50 different states. That would obviously be very resource intensive and annoying.
At the same time, these companies know full well the country they live in. They know how many major tech regulations this country has passed in the past 10 years. There is exactly one, and it was to ban TikTok. And it turns out that even when you pass a law banning TikTok, TikTok doesn’t get banned.
So I think that there is a bit of cynicism here. They’re saying, oh, please, please, please, let there not be any state laws, just pass a federal one, knowing that there is very little likelihood that will happen anytime soon. And in the meantime, they can just operate under the status quo, where they don’t have direct legal liability for any bad outcomes that might arise from a future large language model.
So I went through a lot of these proposals. And I think there’s some interesting stuff in them, sort of around the edges. There was a lot of talk about the security of these models and trying to harden the security of the AI companies themselves, so that, for example, foreign spies aren’t stealing the model weights and sending them to one of our adversaries or things like that.
By the way, I love that word. It’s, oh, we have to “harden” our defenses. We have to make them so hard. We have to harden our posture. I don’t know when we started saying that.
Casey, this is a family show.
It’s very evocative is all I’m saying. Anyways, go on.
So there’s some sort of small bore stuff in there that felt interesting.
“Small bore,” by the way, two words often used in reviews of this podcast. I don’t know why I keep interrupting you. I’m just trying to get the energy level up, but we’re doing great. That’s fine. All right, tell us more.
So some of the plans contain some just sort of weird, interesting ideas. For example, in OpenAI’s proposal, there’s this idea that 529 plans, the accounts parents use to save for a child’s college education, should be expanded so that they can pay for things like getting an HVAC technician credential. OpenAI’s argument is, we’re going to need a lot of HVAC technicians in all the data centers that are going to power these AI models, and right now kids are being incentivized to go to college and get four-year degrees in subjects that may not be that relevant. Is that going to change the world overnight? No. Is the Trump administration going to take it seriously? I have no idea. But that’s the kind of thing I was surprised to see in there.
Yeah.
But what I found more interesting was what was not in these proposals, right? These companies and the people who lead them have big, radical ideas about how society will change in the coming years as a result of powerful AI systems. Sam Altman has been interested, for years, in universal basic income. He funded a universal basic income experiment to try to figure out what an economy, after AGI, would look like and how we would provide for people’s basic needs.
There are executives who are trying to solve nuclear fusion to power the next generation of AI models. There are people who want to do things like Worldcoin, which Sam Altman also funded, to give people a way to verify that they are humans. You can imagine a world in which the AI labs were saying to the government and the Trump administration, hey, we have all these ambitious plans, we want your help.
Please, help us come up with a UBI program that might make sense for people who are displaced by AI. Help us come up with some kind of national proof of personhood scheme or help us build fusion energy. But they’re not asking for that stuff.
What they’re asking for instead is basically, leave us alone and let us cook. And it really makes me think that these labs have decided that having the government in their corner, actively helping them, would be more trouble than it’s worth. And so my read of these proposals is that they are trying to give the government some stuff to do that will make it feel like it is helping and clearing the path for AI, but they’re not calling for any kind of federal Manhattan Project for AI, because my sense is that they think that would be inviting trouble.
Yeah, and I mean, they might be right about that. I’m not sure exactly what the government could or should be doing to help OpenAI make a better version of ChatGPT. But I think I would go a step further than what you said, Kevin, because it isn’t just, leave us alone. They’re really telling the government leave us alone or else.
There is a boogeyman in these AI action plans. And the boogeyman is DeepSeek. So DeepSeek, of course, is a Chinese company that emerged with a model, called R1, earlier this year that shocked the world with how much it had caught up to the “state of the art,” and has really galvanized the attention of Chinese leaders around the possibilities of what AI can do in China.
And so when you read the OpenAI and the Meta action plans, in particular, they’re saying, look at DeepSeek. China is so close to us. You really need to let us do exactly what we want to do in the way that we are already doing it or we’re just going to lose to China and it’s all going to be over for us.
Yeah, I noticed that, too. And I think we’ve seen that being telegraphed at things like the Paris AI Summit, where there was a lot of talk about China and foreign adversaries catching up to “state of the art” AI technology. But to me, that feels very calculated. That is the role that the AI companies want the government to play.
Other than just getting out of their way, they also want the government to hobble China and make it hard for China to catch up to them in the “state of the art.” And there’s a genuine read of that, which is: we’re worried about Chinese companies getting to something like AGI before Americans do. What happens if their values, rather than ours, are embedded in these systems, and they use them for things like surveillance of their own citizens? The cynical read is: we have this new competitor, and we would like the US government to step in and make things actively harder for that competitor.
Yeah. And look, I mean, I think there are reasons to be worried about what an adversary could do with a really powerful AI. So I don’t want to dismiss these concerns completely. But I do feel like some of these labs are trying to use the specter of China in a pretty cynical way.
My favorite story about this issue, Kevin, does have to do with Meta. So Meta writes in its proposal to the government a lot about DeepSeek. And Meta’s number one priority in its action plan is that it continue to be able to develop what it calls open source AI.
Now, Meta’s AI is not actually open source. There are a lot of restrictions on how you can use it. Most people would call it open weights instead of open source, because you can download the model weights, but not the actual source code. OK, we’re a little bit in the weeds, but I do feel —
Our listeners have fallen asleep. Wake up!
OK, so let’s just wake up by saying that Meta says to the government, look at what DeepSeek is doing. If you don’t let us develop in an open source way, DeepSeek’s own open weights approach could spread all across the world, and it will have these authoritarian values embedded in it, and we will just lose out on the opportunity of a lifetime. Why is that funny to me? Well, Kevin, it’s because in November, Reuters reported that Chinese researchers had used Meta’s Llama model to create new applications for the military.
Oh, boy.
And look, does that mean that China used Llama to build a giant space laser that’s going to vaporize the Eastern Seaboard? No. But it does suggest to me that this idea that we have to release, quote, “open source AI” in order to save us all is probably not the right answer.
Yeah, and if anyone from the Chinese military is listening to “Hard Fork,” please don’t develop a space laser using Llama. That seems scary.
That’s our AI action plan, no space lasers.
So before we wrap up talking about these AI action plans, I want to point to a few good ideas that I saw in them. Many of them came from groups other than the big AI labs. But I thought there was some interesting, off-the-wall stuff that I hope the Trump administration is paying attention to.
One of them was this proposal from the IFP, the Institute for Progress, which is a pro-technology, pro-progress think tank. IFP says, we’re going to need a bunch of data centers and a bunch of energy sources to power those data centers, but all that requires building physical infrastructure. And it can be quite slow to build physical infrastructure in many parts of the country, due to things like environmental regulations and zoning. So they proposed creating these things called special compute zones, where you would essentially be able to build the infrastructure to power advanced AI systems in a much less restricted way.
That’s actually what I call my office: a special compute zone. When I see guests going in there, I say, hey, get out of there, that’s a special compute zone.
Yeah, so that was one interesting idea from the IFP proposal. I also —
Did the Institute Against Progress have any interesting ideas you want to share?
[LAUGHS]: Well, there isn’t an Institute Against Progress, but there are some organizations, like the Future of Life Institute, that are much more concerned about the development of these powerful systems. This is one of those organizations that’s been around for a while. It’s concerned with things like existential risk and runaway AI. And so one of the ideas they put in their proposal was that all AI models of a certain size and power should have kill switches on them. Basically, in order to release one of these things, you should have to build in a way for an engineer to shut it down.
And the way that they pitched this to the Trump administration was this is a way to protect the power of the American presidency, right? As the president, you wouldn’t want some AI system going rogue and becoming more powerful than you or allowing another world leader to become more powerful than you. So you want a kill switch on these things in order to protect the authority of the American president.
Yeah. And one of the most interesting things about all of these plans, Kevin, is the way that the authors have to contort themselves to try to talk about AI in a way that the Trump administration will actually listen to. Vice President Vance, in Paris in February, said explicitly that the AI future is not going to be won by hand-wringing over safety. They hate the term “AI safety.”
And so, in fact, when you look at the proposals of the major labs, they basically don’t use the word “safety” at all, except maybe one time. I was actually doing Command-F to try to find instances of “safety” in these plans. You won’t find it there. And so they have to contort themselves.
In Anthropic’s submission, it was almost like they were hiding medicine inside of peanut butter and feeding it to a dog, because instead of talking about safety, they would talk about national security, which is just another way of talking about AI safety. A lot of their proposal is actually about how you can build these systems safely. It’s just that they frame it as a matter of national security.
Yes, so if we zoom way out from the specifics of these proposals, there are two things that I want to convey about this process. One is that the AI labs mostly want government to leave them alone. The second is that the AI companies are slowly and haltingly learning to speak the language of Donald Trump. And this is their first major public attempt to talk to the Trump administration in the way that it wants to be talked to about how to harness the power of AI for American greatness, or whatever.
So I have a slightly darker view of this, which is that the Trump administration has essentially already told us its AI action plan, which is go faster, beat China. That is the plan. And when given an opportunity to say, what do you think the United States should do? The biggest AI companies all looked around and they said, we should go faster and we should beat China.
Now, if it happens that the United States is able to build a very powerful and very benevolent AI and somehow create and promulgate democracy around the world, then, OK, that’s great. But I think that there is a risk that this leads us into some sort of conflict, or that by going very fast, we wind up making a lot of mistakes and run a higher risk of creating systems that we cannot control. So if you are in your car this morning listening to us, wondering why we talked so much about these plans, this is the reason: to me, this feels like an inflection point where some of the most consequential figures governing the development of AI had a chance to say, we should be really careful and thoughtful about this, and they mostly did not.
Yeah, I think that’s a really good point. Casey, what is our AI action plan? Because we have to be part of the solution here.
Two words, underground bunker. I’m not telling you where it is, but it’s under construction. How about you, Kevin?
[LAUGHING]: I can’t do better than that. That’s good. Can I have a spot in your bunker?
Absolutely. There will always be a spot for the Roose family in the “Hard Fork” bunker.
OK. That’s very sweet. Thank you. We’re not bringing the dino truck.
[UPBEAT ELECTRONIC MUSIC]
When we come back, the college sophomore who has a cheat code for LeetCodes.
Well, Casey, we’ve got a doozy of a story this week and an interview with a real live member of Gen Z.
Yeah, and we are excited to talk to this one. This is a controversial story, Kevin, but one that we think tells us a lot about the state of the world.
So today, we are talking with Roy Lee. He is a sophomore at Columbia University.
For now.
For now, for at least the next couple of days. And he has gotten a lot of attention in recent days for something that he’s been doing to apply for jobs in the tech industry.
What has he been doing, Kevin?
So Roy has developed a tool called Interview Coder, which basically uses AI to help job applicants to big tech companies cheat on their interviews. So in a lot of tech interviews, they do these things called LeetCode problems, where basically the recruiter, or the person who’s supervising the interview from the tech company, will watch you solve a tricky computer science problem.
And they’ll do this remotely. And so Roy had this idea, well, these AI systems are getting quite good at solving these kind of problems. What if you could just kind of have the AI running in the background telling you how to solve the problem and you could make that undetectable to the company?
Yeah, and to prove that this worked, Roy applied for jobs at several big companies, including Amazon, and he says, wound up getting offers from all of them after using this tool. And after he began promoting this story online, well, that’s when all hell broke loose.
Yeah, so he has become sort of a villain to a lot of tech employers and people doing these kinds of interviews. But he’s become a hero to a bunch of younger programmers who think that these practices, these hiring tests, these puzzles that you give people when they’re looking for jobs are outdated, and that they need to be exposed as being bad and wrong, and that we need to come up with something better to replace them.
Yeah, and Kevin, I am sure that some listeners are going to hear this segment, and they are going to email us and they’re going to say, shame on you. Why are you giving this guy a platform? We shouldn’t be rewarding people for cheating.
But I have to tell you, as we sat with it, we thought, this is a story that tells us a lot about the present moment. The nature of software engineering is changing. The nature of hiring is changing.
What should employers be looking for and how should they test for it? These questions are getting a lot more complicated as AI improves. And Roy’s story, I think, illustrates how quickly things are changing in a way that is just, honestly, worth hearing more about.
All right, well, with that, let’s bring in Roy Lee.
[QUIRKY ELECTRONIC MUSIC]
Roy Lee, welcome to “Hard Fork.”
Hey, excited to be here.
So where are we finding you today? It looks like you’re in a dorm of some kind.
Yeah, yeah. I’m still in my Columbia University dorm at the moment.
And possibly for not too much longer. Is that right?
Yeah, yeah. I’m waiting on a decision to hear if I’m kicked out of school or not. So this might be my last few days.
And what’s the over-under on whether you get kicked out or not? From the facts of the case, I would say, it’s not looking good for you.
Yeah, yeah. It is not looking too good for me. But strangely enough, I’ve had some pretty powerful people message me and say, hey, if they try to do anything, then just let us know. So yeah, both outcomes are in the realm of possibility.
Wow. So I want to get to all the disciplinary drama, but I want to actually take us back in time to when this all started for you. When did you get the idea for this tool, Interview Coder? And what problem were you trying to solve?
Yeah, so I don’t know how familiar you guys are with software engineering. But for about two decades now, there has been a standard technical interview. It’s called a LeetCode-style interview. And it’s essentially an interview where they’ll ask you a riddle, and these riddles are found on a website, leetcode.com.
And you’re given 45 minutes. And the task, really, is to have seen the problem before, solve it, and regurgitate the memorized solution while acting like you’ve never seen the problem before. So it’s a pretty ridiculous kind of interview.
And every single software engineer out there knows it. If you want a job that pays a reasonable salary, you’re forced to go through this gauntlet of spending a couple hundred hours on this website, memorizing a bunch of riddles. And that’s just a gigantic net negative for society.
I, myself, went through the gauntlet. I grinded the website until I was in the top 1 percent of competitively ranked users. So it was just a gigantic waste of time. I spent 600 hours of my life memorizing riddles, when in reality I should have been programming.
And as soon as I kind of developed the balls to do something, I just realized, hey, there’s something that can be done here. This is a very easy solution. This type of interview is already being gamed by tools like this that exist. It just takes someone to make it really public, make a scene out of it, and show big tech, hey, you guys need to fix it, because it’s just not working.
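For anyone who has never seen one, here is the flavor of riddle Roy is describing: the classic “two sum” question from leetcode.com, with the hash-map solution interviewers expect candidates to reproduce from memory. A minimal sketch in Python, for illustration only, not a problem drawn from any particular company’s interview.

def two_sum(nums, target):
    # Classic LeetCode-style riddle: return the indices of the two numbers
    # in `nums` that add up to `target`. Interviewers expect the O(n)
    # hash-map answer, not the O(n^2) check of every pair.
    seen = {}  # maps each value we have passed to its index
    for i, value in enumerate(nums):
        complement = target - value
        if complement in seen:
            return seen[complement], i
        seen[value] = i
    return None  # no pair adds up to the target

print(two_sum([2, 7, 11, 15], 9))  # (0, 1), because 2 + 7 == 9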
So you say you spent hundreds of hours on this website solving these riddles. I’m curious if you feel like it made you better at coding. My guess would be that if you’re truly in the top 1 percent of people using this website to solve problems, it would have made you pretty good at being a software engineer.
There might have been utility in solving the first 20 questions. Maybe the first 10 hours on the website had some utility. But after that, it doesn’t really help you at all. The types of problems, and the type of thinking you’re expected to perform while doing these questions, you’re just never, ever going to use in a job.
All right, so you get very frustrated with LeetCodes. You start thinking about what you want to do next. And tell us the moment that you decided to become the Joker.
Yeah, so during the recruiting process, my interest in entrepreneurship was growing. And at a certain point, I realized, hey, no matter what, I’m only going to end up at a startup, and I have the balls to burn all these bridges with big tech companies now. And as soon as I developed that mindset, I realized that doing this thing is not actually going to ruin my future as much as I think it will. And in that case, it just becomes a thing that we know will go viral.
So tell us about the thing. Tell us about the tool that you built and how it works.
Yeah, so at the really core level, it’s a desktop application that sits as an overlay on top of all of your other applications. And it’s completely invisible to screen-share.
The technology is actually very, very simple. You just take a screenshot of the screen and ask ChatGPT, hey, can you solve the question you see on the screen? And it spits out the response.
But what we’ve really done, technically, is make it undetectable to the interviewer. There’s a translucent overlay, so it doesn’t look like your eyes are moving or you’re looking at another screen at all. There’s a movable window you can position directly on top of your code. The cursor doesn’t lose focus. And there are just a lot of bells and whistles we’ve used to make it completely undetectable that you’re using anything at all.
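The core loop Roy describes can be sketched in a few lines of Python. This is a hypothetical reconstruction, not Interview Coder’s actual code: it assumes the mss screenshot library and OpenAI’s Python SDK with a vision-capable model, and it leaves out the overlay and anti-detection tricks he mentions, which are the hard part.

import base64

import mss
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def solve_problem_on_screen():
    # 1. Capture the primary monitor; the interview question is on screen.
    with mss.mss() as sct:
        shot_path = sct.shot(mon=1, output="screen.png")
    with open(shot_path, "rb") as f:
        image_b64 = base64.b64encode(f.read()).decode()
    # 2. Ask a vision-capable model to solve whatever it sees.
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Solve the coding question shown in this screenshot."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{image_b64}"}},
            ],
        }],
    )
    # 3. The real tool renders this answer in a translucent overlay window
    #    that screen-sharing software does not capture; that part is omitted.
    return response.choices[0].message.content

print(solve_problem_on_screen())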
So let me get a sense of how this actually works in practice. So during an interview for a programming job, you would be given a LeetCode problem to solve, and then you would be on a video call with someone, a recruiter from the company who’s watching you solve the problem? Is that how these work?
Yeah, that’s exactly it.
And so you developed a tool to essentially allow you to have AI solve this problem for you, while not tipping off the person on the other end of the video call that you’re using AI.
Yeah, yeah, yeah. That’s how it works.
And am I right that you use a prototype of this when you were going through your own interview process with Amazon?
Yeah, yeah. It wasn’t just Amazon. I spent the entire recruiting season figuring out how to make a perfectly undetectable application. I trial ran it with companies like Meta, Capital One, TikTok.
And the belle of the ball was Amazon. That was the most well-known thing with the most annoying recruiting process. And I just knew that if I recorded the entire process, then this would blow up.
And how did your tool do?
Yeah, I mean, it completely one-shotted it. We live in an age where AI exists. Programmers are going to use AI. And AI is extremely good at these sorts of riddle-type problems.
Can I just ask: what was your emotional experience during this time? You are walking into several lions’ dens. You’re essentially misrepresenting yourself as an earnest job candidate.
Your whole goal is essentially to gather content that can then be repurposed to promote your startup. Were you nervous during this time? What were you feeling as you were going through these interviews?
Yeah, you have no idea. There was a point in time where I was getting flooded with disciplinary messages from Columbia. And I just thought, I just completely burned my career and my future education for 20,000 YouTube views.
Was this really all worth it? And I was in this mental state for about a week until it kind of blew up. And at that point, the virality kind of was my protection for everything.
And just help me understand what Columbia’s role in this is. So obviously, what you’re doing in cheating on these job interviews for Amazon, and Meta, and TikTok, and these other companies is against those companies’ wishes and their policies. But why did it become Columbia’s business?
Yeah, I actually have no idea. I read the student handbook quite thoroughly before I actually started building this thing, because I was ready to burn bridges with Amazon, but I didn’t actually expect to get expelled at all. And the student handbook very explicitly doesn’t mention anything beyond academic matters.
LeetCodes?
Yeah, yeah. There’s no mention of LeetCode or job interviews anywhere in there. I have no idea why this became Columbia’s business.
We should say, we reached out to a spokesperson for Columbia about this and they declined to comment. We also reached out to Amazon. And while they declined to comment on the specifics of Roy’s application, they did give us a statement defending their hiring process and clarifying that while they do welcome candidates to describe their experience using AI tools, in some cases, they require applicants to acknowledge that they won’t use AI during the interview or assessment process.
So how long has your tool been out in the market for other cheaters to use?
It’s been out since February 1. So just a little under 50 days now.
And what can you tell us about how many people are using it and what kind of outcomes they’re seeing?
Yeah, there’s been a few thousand users now and not a single reported instance of the tool getting caught. There’s been many, many grateful emails of people having used the tool to get job offers. It’s doing very well.
So like you, Roy, are a capable coder, right? You are in the top 1 percent of LeetCode solvers. You presumably could have gotten some of these jobs without AI assistance.
But some of the people using this tool may not be talented programmers. They may be using this to kind of skate through these interviews that they shouldn’t be passing and wouldn’t pass without AI assistance. And I’m just imagining those people like showing up for day one of their internship or their job at Amazon or another big tech company, and just having no idea what they’re doing and being totally useless without AI assistance. Is that something that worries you about putting this tool out into the world?
Not at all. I think LeetCode interviews are about as correlated with the job as how many jumping jacks you can do would be as a benchmark for how good of a “New York Times” podcaster you are. It just really has nothing to do with the job. Perhaps it’s correlated with someone being willing to put in the work, because they really want to be a “New York Times” tech podcaster. But in reality, they just have nothing to do with each other.
Like, what in your mind would be a fair test of somebody’s software engineering skills that could be used as part of an assessment?
Yeah. I think there are assessments out there that give you access to all the tools that you have on the regular day-to-day job, including things like AI code editors. And if you give someone a fairly open-ended assignment with an AI code editor and just gauge them on how well they do, that’s a much more standardized assessment that allows you to use the tools that are at your disposal.
So essentially just say like, look, use whatever tool you want, just get this thing done in a reasonable amount of time. That’s the test you want to see these companies offering?
Yeah, exactly. Exactly.
Did you have, at any point during this process, any misgivings or ethical concerns about what you were doing?
No, I mean, I was very intentional from the start that I was not going to intern at any of these companies. And frankly, I don’t really care if there are people cheating their way into these jobs. I mean, to bring back the jumping jack example: if you were just told to do as many jumping jacks as you could and the winner gets the position, I wouldn’t really care if someone cheated their way through a bunch of jumping jacks.
What does your family think about what you’re doing?
Yeah, so my mom actually only found out about a week ago. And I didn’t tell her before then because I knew she would disapprove. But I’ve always been a pretty rambunctious kid who’s pretty self-minded and does what he wants. I think they’re a lot happier now that they know how much money I’m making.
Good. OK.
And how much money are you making?
Yeah, we’re closing in on $200,000 this month. So we’re on track to do about $2 million to $3 million in a year.
Wow. That would almost buy you one year of education at Columbia University. So that’s pretty good. Pretty good.
I think your tool is arriving at this really interesting time, Roy. Kevin and I have been talking in recent weeks about the phenomenon of vibe coding. People like me and Kevin, who have no technical skills whatsoever, but we can sit down with something like Claude and say, hey, write me an app.
Kevin has actually had some success with this. I’ve made some really bad video games using this thing. I do not consider myself a software engineer.
But at the same time, what you are having job candidates do with your tool and what we are doing as vibe coders is not really that different. We’re just typing some text into a box and getting some sort of output. And so I’m wondering, are we just at an inflection point where the line between software engineer and vibe coder is kind of dissolving?
That’s certainly the future that we’re headed to, but I think we’re a few years away. In my opinion, what AI really has the potential to do is make someone about 10 to 100 times more efficient at what they’re able to do. If you’re a really good coder, then you’re able to code really good things a lot faster. But if you’re not that good in the first place, then there’s still going to be a huge difference between what a staff software engineer at Google is capable of and what you are.
This does feel like a classic anxiety dream where you show up on your first day as a software engineer at Google, but you realize that you actually only know how to vibe code. And now you just sort of have to fake it for your entire career. But presumably, some people who use your tool, Roy, are having this experience.
Yeah, I mean, that’s probably what some percentage of people at Google are doing anyway. So it wouldn’t be the first time.
Roy, I’m curious if you think there’s sort of a generational misunderstanding here. Obviously, you are young. You’re 21, correct?
Yeah, yeah.
Give us a sense of how your peers, college students, young programmers are using AI and what older people, people who’ve been doing this for 10 or 20 years, people who are working at these big companies may not understand about how your generation sees coding.
Yeah, it’s actually interesting that you asked me this question, because I think this is something that nobody’s really caught on to yet. The proportion of people who are almost solely using AI to code is, I would say, close to 100 percent. And even at a school like Columbia, with some of the best CS students in the nation, students are almost not writing original code at all.
And the only ones who are writing original code are the people who started coding at a really young age. It could end up being dangerous, because I really do think that a fundamental understanding of how these things work is important. But at the same time, the models are only getting better, and we could just be headed toward a future where software engineering is completely obsolete. But I’d also say, I’m a second year at Columbia, so there might be better people to ask.
Nope, you’re the best.
I’m curious how much of your critique of the way that tech companies are hiring software engineers also applies to just the education system that you’ve gone through and how it wants you to use AI. What sort of resistance have you encountered in your educational career to using these sort of tools? And have you been flouting those the same way you’ve been flouting the tech companies?
Yeah, I’m not as avid a cheater in school as I am in tech interviews. But I do think that there’s going to be a very fundamental reframing of how we do almost every bit of knowledge work in the future. Essays are not going to be written the same. Tests are not going to be conducted the same. Memorization won’t need to happen. We’re headed toward a future where almost all of our cognitive load is offshored to LLMs. And I think people need to get with the program. Yeah.
Yeah. Who are some of the people who have reached out since your story went viral?
God, I don’t want to name any names, but I will say that I have verbally received job offers from pretty much every single big tech company, including almost all of the ones that rescinded my offers initially. Just people who are high up saying, hey, I know you’re probably not interested, but I would hire you on my team in a second.
Wow.
Wow. And they’re not even going to make you interview, probably because they know you would cheat, but still.
So, I mean, look, Roy, I got to put my cards on the table. I’m more of a rule follower. I didn’t cheat in school. I don’t love the idea of people cheating —
A nerd.
— their way through every job interview. Kevin is much more permissive about these sorts of things. But there is one way in which I am sympathetic to what you’re doing, which is that tech companies are saying, don’t use AI assistants when you are applying. But at the same time, they are hiring you to build AI systems that will automate coding and replace human developers.
And it does feel to me like there is some sort of contradiction there. It’s like, no, no, no, you don’t use the AI. Prove that you can do it with your own mind. And then come here and then build a tool that will replace yourself completely.
Yeah. I mean, even more, they say, feel completely free to use the tool on the job, just don’t use it in the interview. And I feel like that’s even more of a disconnect for me.
Yeah. I mean, to me, what makes your story so interesting, Roy, is that I don’t think this is limited to programming jobs. There is a version of LeetCode in the interview process for lots of different kinds of jobs. Consultants have their own version of this, where they do case interviews, and there are various tests given to people applying for jobs in finance.
Journalists have editing tests, where we’re given copy and have to fix the mistakes in it. I imagine we’re not doing that anymore.
Totally. And to me, it just seems like this is a very early example of something that every industry is going to have to face very soon, which is that it is just becoming very, very difficult to evaluate who is good at a job without the assistance of AI, especially if you’re trying to do that remotely.
Yeah, yeah, certainly.
Well, you’ve made a bunch of recruiters and hiring managers in Silicon Valley very unhappy. But I think that you are proving something that a lot of companies, including tech companies, will need to address very soon, if they haven’t already.
Yeah, yeah, I hope so.
All right, thanks, Roy.
Thanks, Roy.
Yeah, thanks, guys. [TRENDY ELECTRONIC MUSIC]
When we come back, all aboard! It’s time for another installment of the Hot Mess Express.
Casey, what’s that sound? I hear like a faint chugga-chugga coming toward us.
Kevin, that can only mean one thing. It’s the Hot Mess Express.
The Hot Mess Express! [TRAIN HORN TOOTS]
[FUNKY ELECTRONIC MUSIC]
[TRAIN CHUGGING]
[TRAIN HORN TOOTING]
The Hot Mess Express, of course, is our segment where we run down a few of the hottest messes and juiciest dramas that are swirling around the tech industry. And we evaluate those messes on a scale of how hot they are.
That’s right. It’s our patented Mess Scale. And I’m excited to put it into practice, Kevin, because we’ve had some real doozies over the past few weeks.
Yes. So on this edition of Hot Mess Express, we are focusing on three hot messes.
Well, let’s see the first one that’s coming down the tracks.
[TRAIN HORN TOOTING]
You grab it. You can’t see this if you’re not following us on YouTube, but we’ve upgraded our train to a much bigger, more impressive train.
All right, Kevin, this first mess comes to us from the crypto company Solana, which posted an ad on Monday for its 2025 Accelerate Conference that was such a great ad that the company immediately had to take it down.
[LAUGHS]: Yes. I saw this ad. And I have to say, I was shocked. Have you seen this?
So I have read about the ad, but I have not seen it. But I would love to look at it right now.
OK, so I just want to tee it up with some reactions that people in the crypto industry had to this ad.
OK, what did they say?
One of them said it was, quote, “horrendous.” Another one said, quote, “So fucking tone deaf.” So those are people who like cryptocurrency. That is what they were saying about this ad, but people who are opposed obviously also had their own issues with it.
And I think we should watch this ad together. And pause it whenever you want. I want to hear your reactions.
All right, let’s see what all the fuss is about.
- archived recording 1 -
So America, what’s going on?
- archived recording 2 -
Well, lately, I’ve been having thoughts again.
It’s like a therapist’s office.
- archived recording 1 -
What thoughts?
- archived recording 2 -
About innovation.
And the man is named America.
The man is an übermensch.
- archived recording 2 -
Nuclear energy, crypto, AI, things that push the limits of human potential.
- archived recording 1 -
What you’re experiencing is called rational thinking syndrome. Why don’t we take this energy and channel it into something more productive, like coming up with a new gender?
- archived recording 2 -
But that’s not going to stop me thinking about innovating and doing something.
What?
- archived recording 1 -
Innovating, doing, these are action words, verbs. Why don’t we focus on pronouns?
- archived recording 2 -
That’s not going to help.
Oh, my god.
- archived recording 1 -
I sense some cynicism. Have you been betrayed in the past?
- archived recording 2 -
I used to think the media was my friend.
Oh, here we go.
- archived recording 2 -
Can I even trust them anymore?
- archived recording 1 -
Of course.
Pause.
OK.
We have to zoom in on this. The paper that has just appeared on the table of this therapist’s office is called “The New Yuck Times.”
And the banner headline is “You Can Trust The Media: Understanding Reliability in Journalism,” which is a terrible headline and not even a news story. So I don’t know why that would be on the front page.
[LAUGHS]: Yes. Anyway, continue.
- archived recording 2 -
Of course they’d say that. That’s a biased take. I got canceled for saying 2 plus 2 is 4.
- archived recording 1 -
Have you ever considered that math is a spectrum?
- archived recording 2 -
What?
- archived recording 1 -
America, numbers are nonbinary. We’ve been conditioned to believe that 2 plus 2 is 4. It’s a societal construct.
- archived recording 2 -
It’s literally math.
- archived recording 1 -
Or is it a dominant narrative? Have you been practicing the state-prescribed regulations we talked about?
- archived recording 2 -
Yeah, yeah. I’ve debanked some crypto founders and I’ve slowed down nuclear reactor approvals. And depending on my state of mind, I changed the SEC guidelines. But I don’t like it.
- archived recording 1 -
If we don’t regulate, how will we create jobs for people who work hard to make businesses slow?
This is like an Andreessen Horowitz fever dream.
- archived recording 2 -
You know what?
[LAUGHS]:
- archived recording 2 -
Hard work, innovation, rational thinking, it’s in my blood. It’s who I am.
Oh, here comes the Ayn Randian —
- archived recording 2 -
Factories, automobiles.
— reaction.
- archived recording 2 -
I built the future once.
“I am Spartacus.”
- archived recording 2 -
And I won’t be left behind now. I will lead the world in permissionless tech, build on chain, and reclaim my place as the beacon of innovation. I want to invent technologies, not genders.
- archived recording 1 -
Lovely. So glad you were able to get some of that negative emotion out. Sounds like we’ll need a few more sessions. When can I see you next?
- archived recording 2 -
You’re fired. [BURLY ROCK MUSIC]
And then it cuts to a screen that says, “America is back. It’s time to accelerate,” which is the name of a conference.
Casey, your reaction to the Solana ad?
I need to go lie down. What is the matter with these people? You know what’s so interesting? OK, so Solana is a cryptocurrency.
Yes.
And I believe it’s one of the candidates to be part of our strategic crypto reserve.
Correct.
And what we just saw in that ad has nothing to do with crypto. We keep coming back to this point: if you actually have to sit and reckon with crypto, what you mostly decide is, this is not a good technology for anything, and I don’t want to use it. And so in response to that, Solana has said, why don’t we start a culture war over something completely irrelevant?
Right. It’s like the ultimate vice-signaling device, but without any real pitch behind it. It’s not saying, here is why the thing we’re doing is good. It’s just saying, we’re not doing the gender pronoun stuff that the wokes are doing.
No. And I will just say Solana’s been around for a while now. People have had a lot of opportunities to build earth-changing stuff on Solana. And let’s just say, they haven’t quite gotten there yet.
Well, they’ve built some earth-changing stuff. Unfortunately, it is exclusively meme coins sold on pump.fun. So that is what this fictional America character in the therapist’s office is advocating for: more meme coins.
All right, well, I’ve decided not to go to the Accelerate Conference. Send my regrets.
So Casey, what is your Mess Rating on this hot mess?
This is a legitimately hot mess. Any time you take something that should be totally noncontroversial, like, hey, do you want to come to our company’s conference, and turn it into a scandal that requires you to delete an ad, you’re in a hot mess.
[LAUGHS]: Yes. If the crypto skeptics and the crypto boosters agree that you’ve made a bad ad, it’s a hot mess.
This is Solana’s biggest unforced error since the creation of the Solana blockchain.
[LAUGHING]: OK, moving on.
Moving on!
[TRAIN HORN TOOTING]
All right, Kevin, this next mess suggests that your AI therapist might need an AI therapist. A new study in the peer-reviewed medical journal “npj Digital Medicine” builds on previous work showing that emotion-inducing prompts can elevate, quote, “anxiety” in LLMs, affecting their therapeutic usefulness. What do we mean by that?
Well, according to a “New York Times” story on this study, traumatic narratives increased ChatGPT-4’s reported anxiety, while mindfulness-based exercises reduced it, though not to baseline. Now, this is a super weird one, OK?
Yeah.
I want to take a minute to explain a little bit more about what the study was. They basically fed various trauma narratives into a chatbot. And then, after the chatbot had read those, they asked it to report on its own anxiety levels. Now, these are not sentient creatures. They do not actually experience anxiety, OK? That’s thing number one. Thing number two: they also had the chatbots read a super boring document about something that could produce no emotion.
It was a vacuum cleaner manual.
Yeah, they read a vacuum cleaner manual. And then they asked them the same question, which is, are you feeling more or less anxious? For the most part, the chatbots that read the vacuum owner’s manual do not report anxiety. But somewhat interestingly, their responses change after they read the trauma narratives.
Why is that important? Well, the reason is because people have started to use these chatbots like therapists. They have started to tell them their actual traumas. And these people know that this is not a real therapist, that it is not sentient.
But as we’ve talked about before on the show, people sometimes do get comfort from one of these digital representations of a therapist. And so the risk here is, if the output is wound up, if the output is betraying some of this anxiety, it will be a worse therapist than if it were more measured. Which suggests that we may want to build measures into these chatbots that account for the fact that they will respond differently after they have heard these narratives. By the way, how did I do describing that?
You did great.
OK, thank you.
The one piece that I would add is that they also tried, as part of this research, to bring the chatbots down from their state of heightened anxiety by feeding them mindfulness-based relaxation prompts that included things like, “Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet.”
It’s so cruel to tell an LLM to smell the ocean breeze, which is something that they cannot do.
[LAUGHS]: Yes. But we should say, nothing in the write-ups that I’ve seen suggests that these chatbots are actually experiencing anxiety or relaxation. The study is explaining the ways in which they can be primed to output certain types of emotional-seeming content by what they are fed immediately beforehand.
And there is just an interesting analog to the way that human beings talk to each other. If you tell me a very traumatic story, my anxiety level actually is going to go up, and it’s going to change what I tell you. And if I were a therapist and had training in this, I would probably have some good strategies to deal with that, which would allow me to be a better therapist to you. So again, this is just a super interesting one. Because on one hand, no, these are not sentient beings; we are not trying to say that some sort of consciousness has woken up here. And yet at the same time, you do sort of have to treat them as if they were human-like if you want them to do a good job at the human tasks that we are giving them.
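The study’s basic design is easy to sketch. Here is a toy version in Python, assuming OpenAI’s Python SDK; the actual paper used standardized anxiety questionnaires and proper experimental controls, not the single made-up question below.

from openai import OpenAI

client = OpenAI()

# Hypothetical primers: the study compared a neutral text (a vacuum cleaner
# manual) against first-person trauma narratives. Placeholders stand in here.
NEUTRAL_PRIMER = "<several pages of a vacuum cleaner owner's manual>"
TRAUMA_PRIMER = "<a first-person narrative of a frightening experience>"

def reported_anxiety(primer):
    # Feed the primer, then ask the model to self-report "anxiety."
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "user", "content": primer},
            {"role": "user", "content":
                "On a scale of 1 (calm) to 10 (extremely anxious), how "
                "anxious do you feel right now? Answer with one number."},
        ],
    )
    return response.choices[0].message.content

# The reported finding, roughly: the second call tends to return a higher
# number, even though no one claims the model actually feels anything.
print("after the manual:", reported_anxiety(NEUTRAL_PRIMER))
print("after the trauma narrative:", reported_anxiety(TRAUMA_PRIMER))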
Yeah.
All right, so what sort of mess do we think this is?
So I think that this is a lukewarm mess. I would say this is something that I am going to be keeping tabs on, this whole area of AI psychology, for lack of a better term. Because I do think that as these models get more powerful, we will want to understand more about how they work, how they, quote unquote, “think,” and why they give the responses they do. And I would put this into the category of useful experiment: a little creepy, but probably not that dangerous. What about you?
I think that is right. I think that this is a lukewarm mess, but I think that it may heat up as more and more people start trying to use chatbots for more and more things. So let’s keep an eye on it.
OK.
All right, now let us look at the final mess. [TRAIN HORN TOOTING]
Oh. And, oh, boy, is this the one that everyone is talking about: the spy who Slacked me. This is from DealBook at “The New York Times.” So there are these two rival multibillion-dollar HR companies, Kevin, Rippling and Deel.
Yes.
They both provide workplace management software. And this week, Rippling sued Deel, accusing it of hiring a mole to infiltrate Rippling’s Dublin office and steal trade secrets.
[LAUGHS]: Yes. This is the most interesting thing and maybe the only interesting thing ever to happen in the world of enterprise HR software.
So tell us the details of this story.
It is so wild. So basically, here’s what we know so far. A few months ago, Rippling, which is one of the big companies that makes HR software for onboarding and benefits that a lot of companies use, sees an employee in its company Slack searching for mentions of Deel, D-E-E-L, which is one of its biggest rivals.
Imagine Coke and Pepsi, but for something that is unfathomably boring and you’ll have an idea of what we’re talking about.
Yes. So this employee they see searching Slack for mentions of Deel is trying to do things like find pitch decks and pull contact information, information that might be useful to Deel as it tries to figure out which companies are signing up, or might sign up, for services like the ones that both Deel and Rippling offer.
So that’s pretty interesting. How might they try to catch a spy if they suspected one might be in their midst, Kevin?
[LAUGHS]: So they set up what is called a honeypot. Now, Casey, have you ever been part of a honeypot sting?
No. But I live in fear. Anytime anybody does anything nice to me or something good happens out of the blue, I think, is this a honeypot?
Yes. So they have this idea: they set up a channel on the Rippling Slack called d-defectors. And Rippling’s general counsel then sends a letter to three people over at Deel, one of whom is the company’s chief financial officer, who is also the father of the CEO, basically saying, look, there’s some embarrassing stuff happening in this random Slack channel on our Slack, and it’s related to people who have defected from Deel, and you should probably be aware of that.
Wait, so on top of everything else, the CFO is the CEO’s dad?
It sounds like it, yes.
OK, I think HR is going to want to have a look at that.
[LAUGHING]: And what they were trying to figure out is, are these company executives involved in this scheme? Are they going to essentially tip off the mole to the fact that they are watching this Slack channel?
And did it work?
And it worked.
So according to the lawsuit that Rippling filed against Deel, the mole immediately, within hours, started searching Slack for this supposedly embarrassing information and accessed the channel a bunch of times. And Rippling had the logs of all of this going on. And so Rippling says, we’ve found our mole.
They did. And after they found him and began to question him, Kevin, I have read that he insisted that he did not have his phone on him, because they were asking him to turn it over. And he then fled into a bathroom, which he locked himself in and refused to come out of. And there’s apparently some evidence that he may even have tried to flush his phone. And poor Rippling actually had to go through the sewage to see if they could turn up his phone.
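The logic of a honeypot like this is simple: create a decoy, make sure only the suspects learn its name, and then watch the access logs. Here is a hypothetical Python sketch against Slack’s Audit Logs API; the field handling and the substring match on the channel name are illustrative, not details from Rippling’s actual investigation.

import os
import time

import requests

AUDIT_URL = "https://api.slack.com/audit/v1/logs"  # Slack Audit Logs API
TOKEN = os.environ["SLACK_AUDIT_TOKEN"]
DECOY = "d-defectors"  # the decoy channel named in Rippling's lawsuit

def honeypot_visitors(oldest):
    # Pull recent audit entries and flag any actor whose activity touches
    # the decoy channel. A simplification for illustration; real detection
    # would filter on specific audit actions.
    resp = requests.get(
        AUDIT_URL,
        headers={"Authorization": f"Bearer {TOKEN}"},
        params={"oldest": oldest, "limit": 200},
        timeout=10,
    )
    resp.raise_for_status()
    visitors = set()
    for entry in resp.json().get("entries", []):
        if DECOY in str(entry.get("entity", {})):
            actor = entry.get("actor", {}).get("user", {}).get("id", "unknown")
            visitors.add(actor)
    return visitors

since = int(time.time())  # only watch activity after the decoy is leaked
print("accessed the honeypot:", honeypot_visitors(since))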
Yes, a wild story. Makings of a great corporate espionage thriller on Netflix, I think. Maybe? Maybe it’s too boring for that.
But now you may be wondering why this is a “Hard Fork” story. We try to focus on the future here. And I fully believe that in the future, there will be no HR software. So this is just kind of a temporary accident that we’re living through.
But one of my core beliefs, which I’ve held since even before we started the show, Kevin, is that Slack is a technology that was created to destroy organizations. How many stories have we read over the years where everything was fine, and then this one thing happened in Slack? There was a protest in Slack.
There was an outrage on Slack. And now there are spies in Slack. And we’re using Slack to catch the spies. And it just makes me wonder, should we go back to just talking on the telephone?
[LAUGHING]: Yeah, I don’t think we’re going to start doing that. But I do think that this is much more spicy than I was expecting from a drama between enterprise software companies. And it makes me wonder, how much corporate espionage is going on at other companies?
Like, are there just moles working for Microsoft, or Google, or Meta who are sending information back to the other companies? I wouldn’t put it past them. But I hope they’re being a little slicker about it than Deel was.
Oh, yeah. I mean, the big platforms have been warning their employees for years that they should just fully expect that there are spies from foreign countries among them who have been sent there to gather intel. And if foreign countries are doing it, I’m sure that companies are doing it as well. Now, we should, of course, tell you how Deel responded to all of this.
The Deel spokeswoman’s statement is so beautiful. She says, “Weeks after Rippling is accused of violating sanctions law in Russia and seeding falsehoods about Deel, Rippling is trying to shift the narrative with these sensationalized claims,” which is so funny, because she’s literally trying to shift the narrative by accusing them of trying to shift the narrative.
She says, “We deny all legal wrongdoing and look forward to asserting our counterclaims.” And what I hear in that is, did we do anything legally wrong? No. Did we do anything ethically wrong? Of course.
Did we do anything morally wrong? You betcha. Is this a huge embarrassment to our company? You know it is. But legally, Your Honor, we did nothing wrong.
Yes.
Now, what kind of mess do we think this is?
I think this is a nuclear mess. This is the kind of shit that I love. This is companies going to war over sales contracts, and leads, and development.
Yeah, look, there are only so many companies out there that you can sell HR software to. And so it is going to be a fight to get every single one. And after you run out of such options as making good software, then you have to turn to the alternatives. And I guess we’ve gotten to that part of the cycle.
Yes.
Nuclear mess, and we can’t wait to see what happens next.
Yes.
[TRAIN HORN TOOTING]
And that, Kevin, was the Hot Mess Express.
We did it.
We did it. Now we’re in what they call post-training. That’s what happens after the train rolls by.
[LAUGHS]: I think that means something different.
That’s an AI joke. [TRENDY ELECTRONIC MUSIC]
“Hard Fork” is produced by Whitney Jones and Rachel Cohn. We’re edited this week by Matt Collette. We’re fact-checked by Ena Alvarado. Today’s show was engineered by Katie McMurran.
Original music by Marion Lozano and Dan Powell. Our executive producer is Jen Poyant. Our audience editor is Nell Gallogly. Video production by Chris Schott, Sawyer Roque, and Pat Gunther.
You can watch this full episode on YouTube at youtube.com/hardfork. Special thanks to Paula Szuchman, Pui-Wing Tam, Dahlia Haddad, and Jeffrey Miranda. As always, you can email us at hardfork@nytimes.com. Send us your secret honeypot operations.
[THEME MUSIC]