OpenAI and Nvidia, the two darlings of AI hype and longtime partners, seem to be having a bit of a falling-out.
At the center of this rift is a $100 billion Nvidia investment in OpenAI announced in September 2025. As part of the deal, Nvidia would build 10 gigawatts of AI data centers for OpenAI and invest $100 billion in the company in 10 installments, one as each gigawatt comes online. In turn, OpenAI is reportedly planning to use the billions of dollars in Nvidia investment to lease Nvidia chips.
At the time, the investment sparked worries about circular dealmaking in the AI industry and an intricately woven web of financial dependencies that could signal instability, echoing the dotcom bubble. That is, if even one cog is faulty and demand doesn’t pan out as expected, it could create a domino effect that takes the whole system down.
In the September announcement, the companies said that the first gigawatt of computing power would come online in the second half of 2026 and that any other details would be finalized in the coming weeks. But in an Nvidia SEC filing from November, the OpenAI investment was still characterized as just “a letter of intent with an opportunity to invest.”
Flash forward a couple of months, and a Wall Street Journal report from last week claims that the talks have still not progressed beyond the early stages and that Nvidia CEO Jensen Huang has been privately criticizing a so-called lack of discipline in OpenAI’s business approach. Huang has reportedly spent the last few months privately emphasizing to industry associates that the $100 billion agreement was nonbinding and not finalized.
Following that report, Huang tried to reassure reporters in Taipei, Taiwan, by praising OpenAI and saying that Nvidia will “absolutely be involved” in the company’s latest funding round ahead of a rumored IPO later this year. Huang described the planned investment as “probably the largest investment we’ve ever made,” but when asked if it would be over $100 billion, he said, “No, no, nothing like that.”
But that was not enough to quell investor fears, because another anonymously sourced report dropped a few days later. Turns out, OpenAI is not happy with the speed at which Nvidia chips can compute inference for some ChatGPT requests, and has been looking for alternative chip providers (such as startups Cerebras and Groq) to take on 10% of its inference needs, according to a Reuters report on Tuesday.
The report also claims that OpenAI has blamed some of its AI coding assistant Codex’s weaknesses on the Nvidia hardware.
In response, it was now OpenAI executives’ turn to praise Nvidia. CEO Sam Altman took to X to say that Nvidia makes “the best AI chips in the world,” and infrastructure executive Sachin Katti said that Nvidia is OpenAI’s “most important partner for both training and inference.”
But it seems that inference and its hefty memory requirements have been weighing heavily on Nvidia lately as well. The importance of inference has been outgrowing that of training as models mature. The agentic AI hype has also increased the amount of data managed by an AI system during the inference stage, further pushing the importance of memory.
To account for this, Nvidia bought Groq (no, not Grok), the AI chips startup reportedly eyed by OpenAI, in its largest purchase ever. Then, last month, Nvidia unveiled its new Rubin platform, with a presentation that boasted inference and memory bandwidth wins.
Google ups the ante
Reportedly, at the center of Nvidia’s and OpenAI’s fears about each other is increasing competition, posed particularly by Google.
Late last year, Google became an even fiercer competitor to both leading AI developer OpenAI and top hardware infrastructure giant Nvidia.
First came Google’s tensor processing units (TPUs), custom AI chips designed for inference that, for some tasks, are deemed better than the GPUs in which Nvidia’s offerings dominate. Google’s TPUs are not only used by its own AI models, but are also deployed by OpenAI competitor Anthropic and potentially Meta.
According to the Wall Street Journal report from last week, Huang is also worried about the competition both Google and Anthropic pose to OpenAI’s market dominance. Huang reportedly fears that if OpenAI falls behind, it could impact Nvidia’s sales because the company is one of the chipmaker’s largest customers.
OpenAI had to declare “code red” in December, just a few weeks after Google’s latest release, Gemini 3, was widely considered to outperform ChatGPT. Meanwhile, OpenAI has also been making significant efforts to scale Codex to beat competitor Anthropic’s highly popular coding agent Claude Code.
If investor fears are indeed realized, the deal doesn’t go through as planned, and OpenAI is unable to pay for its towering financial commitments, then the implications would go far beyond just OpenAI and Nvidia. That’s because both companies sit at the center of an intricate, tangled web of AI dealmaking, with numerous multibillion-dollar deals among a handful of companies, including a $300 billion OpenAI-Oracle cloud deal even bigger than the Nvidia commitment. These deals have been a considerable boon for the American economy, and if one deal goes down, it could take everything else with it.