In a sign that the tech industry’s next big boom is picking up steam, Nvidia on Wednesday predicted rapid growth in the already rabid demand for the chips it makes to build artificial intelligence systems.
The Silicon Valley company’s products, called graphics processing units, or GPUs, are used to create the vast majority of A.I. systems, including the popular ChatGPT chatbot. Tech companies ranging from start-ups to the industry’s giants are fighting to get their hands on them.
Nvidia said heavy demand from cloud computing services and other customers for chips to power A.I. systems caused revenue for its second quarter, which ended in July, to jump 101 percent from a year earlier, to $13.5 billion, while profit surged more than ninefold to nearly $6.2 billion.
That was even better than what Nvidia had projected in late May, when its $11 billion revenue estimate for the quarter stunned Wall Street and helped push Nvidia’s market value above $1 trillion for the first time.
Nvidia’s prediction and lofty market cap became symbols of the growing exuberance surrounding A.I., which is transforming many computing systems and the way they are programmed. They also sharply raised interest in what Nvidia would say next about chip demand in its current quarter, which ends in October.
Nvidia projected third-quarter sales of $16 billion, nearly triple the level of a year earlier and $3.7 billion more than analysts’ average expectations of around $12.3 billion.
The financial performance of chip makers is often considered a harbinger for the rest of the tech industry, and Nvidia’s strong results could reignite enthusiasm for tech stocks on Wall Street. Other tech companies like Google and Microsoft are spending billions and making little on A.I., but Nvidia is cashing in.
Jensen Huang, Nvidia’s chief executive, said major cloud services and other companies were investing to bring Nvidia’s A.I. technology to every industry. “The number of applications is just quite spectacular,” he said in an interview after a conference call with analysts. “Every single data center will be accelerated.”
Nvidia’s share price was up more than 9 percent in after-hours trading.
Until recently, Nvidia got the biggest share of its revenue from sales of GPUs for rendering images in video games. But A.I. researchers started using those chips in 2012 for tasks such as machine learning, a trend that Nvidia exploited over the years by enhancing its GPUs and building software to reduce the labor required of A.I. programmers.
Chip sales for data centers, where most A.I. training is accomplished, are now the company’s biggest business. Revenue from that business grew 171 percent to $10.3 billion in the second quarter, Nvidia said.
Patrick Moorhead, an analyst at Moor Insights & Strategy, said the rush to add generative A.I. capability had become a fundamental imperative for corporate chiefs and boards of directors. Nvidia’s only limitation at the moment, he said, is its struggle to supply enough chips — a gap that may create opportunities for major chip companies such as Intel and Advanced Micro Devices and start-ups such as Groq.
Nvidia’s roaring sales contrasted sharply with the fortunes of some of its chip industry peers, which have been hurt by soft demand for personal computers and data center servers used for general-purpose tasks. Intel said in late July that second-quarter revenue fell 15 percent, though the results were better than Wall Street had expected. Revenue at Advanced Micro Devices fell 18 percent in the same period.
Some analysts believe that spending on A.I.-specific hardware, such as Nvidia’s chips and systems that use them, is drawing money away from spending on other data center infrastructure. IDC, a market research firm, estimates that cloud services will increase their spending on server systems for A.I. by 68 percent over the next five years.
While Google, Amazon, Meta, IBM and others have also produced A.I. chips, Nvidia today accounts for more than 70 percent of A.I. chip sales and holds an even bigger position in training generative A.I. models, according to the research firm Omdia.
Demand is particularly heavy for the H100, a new GPU made by Nvidia for A.I. applications, which began shipping in September. Large and small companies have been scrambling to find supplies of the chips, which are fabricated in an advanced production process and require equally sophisticated packaging that combines GPUs with special memory chips.
Nvidia’s ability to increase deliveries of the H100 is largely linked to actions by Taiwan Semiconductor Manufacturing Company, which handles the packaging as well as fabricating the GPUs.
Industry executives expect the shortage of H100s to extend throughout 2024, a problem for A.I. start-ups and cloud services hoping to sell computing services that exploit the new GPUs.
Mr. Huang said the company was working diligently with its production partners to get more chips to market, including working with other companies to supplement TSMC’s packaging capability. “Supply will substantially increase the rest of this year and next year,” he said.