OpenAI and Broadcom have announced they are jointly building and deploying 10 gigawatts of custom artificial intelligence (AI) accelerators as part of a broader effort across the industry to scale AI infrastructure. While the companies have been working together for 18 months, they’re now going public with plans to develop and deploy racks of OpenAI-designed chips starting late next year. In recent weeks, OpenAI has also announced major deals with Nvidia, Oracle, and Advanced Micro Devices (AMD).
“These things have gotten so complex you need the whole thing,” OpenAI CEO Sam Altman said in a podcast with OpenAI and Broadcom executives that the companies released along with the news.
OpenAI states that it will design the accelerators and systems, which will be developed and deployed in partnership with Broadcom. “By designing its own chips and systems, OpenAI can embed what it’s learned from developing frontier models and products directly into the hardware, unlocking new levels of capability and intelligence. The racks, scaled entirely with Ethernet and other connectivity solutions from Broadcom, will meet surging global demand for AI, with deployments across OpenAI’s facilities and partner data centers,” the company says.
Altman also said the Broadcom deal provides “a gigantic amount of computing infrastructure to serve the needs of the world to use advanced intelligence.” He added: “We can get huge efficiency gains, and that will lead to much better performance, faster models, cheaper models — all of that.”
Hock Tan, president and CEO of Broadcom, said that the company’s collaboration with OpenAI “signifies a pivotal moment in the pursuit of artificial general intelligence.”
“OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI,” Tan added.
Broadcom has been one of the biggest beneficiaries of the generative AI boom, with hyperscalers snapping up its custom AI chips, which the company calls XPUs. While Broadcom does not name its large web-scale customers, analysts said as early as last year that its first three were Google, Meta, and TikTok parent ByteDance.
Altman said that 10 gigawatts is only the beginning. “Even though it’s vastly more than the world has today, we expect that very high-quality intelligence delivered very fast and at a very low price — the world will absorb it super fast and just find incredible new things to use it for,” he said. The company currently operates on just over two gigawatts of compute capacity.
OpenAI has announced roughly 33 gigawatts of compute commitments over the past three weeks across partnerships with Nvidia, Oracle, AMD, and Broadcom. “If we had 30 gigawatts today with today’s quality of models,” Altman said, “I think you would still saturate that relatively quickly in terms of what people would do.”

