OpenAI inks 10-gigawatt chip pact with Broadcom to power next-gen AI tools
OpenAI has announced that it has signed a multiyear agreement with Broadcom to collaborate on custom chips and networking equipment. The two companies plan to add 10 gigawatts of custom AI accelerators, which will be designed by OpenAI and deployed in partnership with Broadcom. Deployment of racks of AI accelerators and networking systems is set to begin in the second half of 2026 and be completed by the end of 2029.
The new racks, which will be scaled entirely with Ethernet and other connectivity solutions from Broadcom, will be used to meet the increasing global demand for AI and will be deployed across OpenAI's own facilities and partner data centers.
The ChatGPT maker says that by designing its own chips and systems, it can embed what it has learned from developing frontier models and other products directly into the hardware and unlock new levels of capability and intelligence.
OpenAI CEO Sam Altman, while talking about the new partnership, said, “Partnering with Broadcom is a critical step in building the infrastructure needed to unlock AI’s potential and deliver real benefits for people and businesses.”
“Developing our own accelerators adds to the broader ecosystem of partners all building the capacity required to push the frontier of AI to provide benefits to all humanity.”
“OpenAI has been in the forefront of the AI revolution since the ChatGPT moment, and we are thrilled to co-develop and deploy 10 gigawatts of next generation accelerators and network systems to pave the way for the future of AI,” Hock Tan, President and CEO of Broadcom, noted.
OpenAI's growing compute partnerships
OpenAI, which now has over 800 million weekly active users, has been signing multiple compute deals to meet the surging demand for its AI services. Last month, Nvidia announced an investment of up to $100 billion in OpenAI to deploy 10 gigawatts of AI data center infrastructure. The partnership involves OpenAI deploying Nvidia's future Vera Rubin chips, with the first deployment set for 2026.
Meanwhile, just last week, OpenAI signed a multiyear deal with chipmaker AMD to deploy billions of dollars' worth of AMD GPUs for its next-generation AI infrastructure. Under that agreement, OpenAI will deploy six gigawatts' worth of AMD GPUs, starting with a one-gigawatt deployment of AMD Instinct MI450 GPUs in the second half of 2026.