It’s easy to fear that the machines are taking over: Companies like IBM and the British telecommunications company BT have cited artificial intelligence as a reason for reducing head count, and new tools like ChatGPT and DALL-E make it possible for anyone to experience the extraordinary abilities of artificial intelligence for themselves. One recent study by researchers at OpenAI (the start-up behind ChatGPT) and the University of Pennsylvania concluded that for about 80 percent of jobs, at least 10 percent of tasks could be automated using the technology behind such tools.
“Everybody I talk to, supersmart people, doctors, lawyers, C.E.O.s, other economists, your brain just first goes to, ‘Oh, how can generative A.I. replace this thing that humans are doing?’” said Erik Brynjolfsson, a professor at the Stanford Institute for Human-Centered AI.
But that’s not the only option, he said. “The other thing that I wish people would do more of is think about what new things could be done now that was never done before. Obviously that’s a much harder question.” It is also, he added, “where most of the value is.”
How technology makers design, business leaders use and policymakers regulate A.I. tools will determine how generative A.I. ultimately affects jobs, Brynjolfsson and other economists say. And not all the choices are necessarily bleak for workers.
A.I. can complement human labor rather than replace it. Plenty of companies use A.I. to automate call centers, for instance. But a Fortune 500 company that provides business software has instead used a ChatGPT-style tool to give its workers live suggestions for how to respond to customers. In a study, Brynjolfsson and his co-authors compared the call center employees who used the tool with those who didn’t. They found that the tool boosted productivity by 14 percent on average, with most of the gains made by low-skilled workers. Customer sentiment was also higher, and employee turnover lower, in the group that used the tool.
David Autor, a professor of economics at the Massachusetts Institute of Technology, said that A.I. could potentially be used to deliver “expertise on tap” in jobs like health care delivery, software development, law, and skilled repair. “That offers an opportunity to enable more workers to do valuable work that relies on some of that expertise,” he said.
Workers can focus on different tasks. As A.T.M.s automated the tasks of dispensing cash and taking deposits, the number of bank tellers increased, according to an analysis by James Bessen, a researcher at the Boston University School of Law. This was partly because bank branches, needing fewer workers, became cheaper to open — and banks opened more of them. But banks also changed the job description. After A.T.M.s, tellers focused less on counting cash and more on building relationships with customers, to whom they sold products like credit cards. Few jobs can be completely automated by generative A.I., but using an A.I. tool for some tasks may free up workers to expand their work on tasks that can’t be automated.
New technology can lead to new jobs. Farming employed nearly 42 percent of the work force in 1900, but because of automation and advances in technology, it accounted for just 2 percent by 2000. The huge reduction in farming jobs didn’t result in widespread unemployment. Instead, technology created a lot of new jobs. A farmer in the early 20th century would not have imagined computer coding, genetic engineering or trucking. In an analysis that used census data, Autor and his co-authors found that 60 percent of current occupational specialties did not exist 80 years ago.
Of course, there’s no guarantee that workers will be qualified for new jobs, or that they’ll be good jobs. And none of this just happens, said Daron Acemoglu, an economics professor at M.I.T. and a co-author of “Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.”
“If we make the right choices, then we do create new types of jobs, which is crucial for wage growth and also for truly reaping the productivity benefits,” Acemoglu said. “But if we do not make the right choices, much less of this can happen.” — Sarah Kessler
IN CASE YOU MISSED IT
Martha’s model behavior. The lifestyle entrepreneur Martha Stewart this week became the oldest person to be featured on the cover of Sports Illustrated’s swimsuit issue. Stewart, 81, told The Times that it was a “large challenge” to have the confidence to pose but that two months of Pilates had helped. She isn’t the first person over 60 to have the distinction: Maye Musk, the mother of Elon Musk, graced the cover last year at the age of 74.
TikTok block. Montana became the first state to ban the Chinese-owned short-video app, barring app stores from offering TikTok within its borders starting Jan. 1. The ban is expected to be difficult to enforce, and TikTok users in the state have sued the state government, saying the measure violates their First Amendment rights. The lawsuit offers a glimpse of the potential blowback if the federal government tries to block TikTok nationwide.
Banker blame game. Greg Becker, the ex-C.E.O. of Silicon Valley Bank, blamed “rumors and misconceptions” for a run on deposits in his first public comments since the lender collapsed in March. Becker and former top executives of the failed Signature Bank also told a Senate committee investigating their role in the collapse of the banks that they would not give back millions of dollars in pay.
A brief history of tech C.E.O.s seeking constraints
When OpenAI’s chief executive, Sam Altman, testified in Congress this week and called for regulation of generative artificial intelligence, some lawmakers hailed it as a “historic” move. In fact, asking lawmakers for new rules is a move straight out of the tech industry playbook. Silicon Valley’s most powerful executives have long gone to Washington to demonstrate their commitment to rules in an attempt to shape them, all while unleashing some of the world’s most powerful and transformative technologies without pause.
One reason: A federal rule is much easier to manage than different regulations in different states, Bruce Mehlman, a political consultant and former technology policy official in the Bush administration, told DealBook. Clearer regulations also give investors more confidence in a sector, he added.
The strategy sounds sensible, but if history is a useful guide, the reality can be messier than the rhetoric:
- In December 2021, Sam Bankman-Fried, founder of the failed crypto exchange FTX, was one of six executives to testify about digital assets in the House and call for regulatory clarity. His company had just submitted a proposal for a “unified joint regime,” he told lawmakers. A year later, Bankman-Fried’s businesses were bankrupt, and he was facing criminal fraud and illegal campaign contribution charges.
- In 2019, the Facebook founder Mark Zuckerberg wrote an opinion piece in The Washington Post, “The Internet Needs New Rules,” citing failures in content moderation, election integrity, privacy and data management at the company. Two years later, independent researchers found that misinformation was more rampant on the platform than in 2016, even though the company had spent billions trying to stamp it out.
- In 2018, the Apple chief Tim Cook said he was generally averse to regulation but supported stricter data privacy rules, saying, “It’s time for a set of people to think about what can be done.” But to maintain its business in China, one of its biggest markets, Apple has largely ceded control of customer data to the Chinese government as part of its requirements to operate there.
Buzzword of the week: ‘Algospeak’
Platforms like TikTok, Facebook, Instagram and Twitter use algorithms to identify and moderate problematic content. To evade these digital moderators and allow free exchange about taboo topics, a linguistic code has developed. It’s called “algospeak.”
“A linguistic arms race is raging online — and it isn’t clear who’s winning,” writes Roger J. Kreuz, a psychology professor at the University of Memphis. Posts about sensitive issues like politics, sex or suicide can be flagged by algorithms and taken down, leading to the use of creative misspellings and stand-ins, like “seggs” and “mascara” for sex, “unalive” for death and “cornucopia” for homophobia. There is a long history of responding to prohibitions with code, Kreuz notes, such as 19th-century Cockney rhyming slang in England or “Aesopian,” an allegorical language used to circumvent censorship in Tsarist Russia.
Algorithms aren’t the only ones that fail to pick up on the code. The euphemisms and misspellings are particularly ubiquitous among marginalized communities, but the hidden language sometimes eludes human readers, too, leading to fraught miscommunications online. In February, the celebrity Julia Fox found herself in an awkward exchange with a victim of sexual assault after misunderstanding a post about “mascara” and had to issue a public apology for responding inappropriately to what she thought was a discussion about makeup.
Thanks for reading!
We’d like your feedback. Please email thoughts and suggestions to dealbook@nytimes.com.