OpenAI has signed a multibillion-dollar deal with Broadcom to purchase 10 gigawatts of custom-designed AI chips, marking the latest and potentially the largest commitment in the company’s infrastructure spending spree.
The deal brings OpenAI’s total chip and data center commitments to over 26 gigawatts of computing capacity, equivalent to the output of 26 nuclear reactors, with total costs potentially exceeding $1.5 trillion.
The Broadcom chips represent OpenAI’s first internally designed AI processors, co-developed with Broadcom specifically for running ChatGPT and other OpenAI models.
Unlike previous deals with Nvidia and AMD, this agreement focuses on inference workloads, the computation involved in responding to user requests, rather than on training new models.
The Deal Structure and Custom Silicon
OpenAI and Broadcom have collaborated for 18 months to develop custom chips optimized for OpenAI’s specific AI models. Sam Altman, OpenAI’s CEO, announced the deal via podcast (OpenAI Podcast – Episode 8), describing the infrastructure buildout as “the biggest joint industrial project in human history.”

The custom chip approach differs fundamentally from OpenAI’s deals with Nvidia and AMD, where the company purchases existing GPU architectures. Custom silicon allows optimization for specific workloads but requires longer development cycles and larger upfront investment.
Why inference chips matter: As AI deployment scales, inference workloads, which run trained models to respond to user queries, are becoming the dominant computing cost. Training a model like GPT-4 is expensive but happens once.
The math: training GPT-4 reportedly cost over $100 million in compute, according to Sam Altman, and GPT-5’s training run presumably cost even more.
But running ChatGPT for hundreds of millions of daily users costs far more over time. If custom inference chips deliver 2-3x better performance-per-watt than general-purpose GPUs, they could reduce OpenAI’s operating costs by hundreds of millions annually once deployed at scale.
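A quick back-of-the-envelope sketch shows how that efficiency gain translates to dollars. Every input here is an illustrative assumption, not a figure disclosed by OpenAI or Broadcom:

```python
# Rough estimate of annual power-cost savings from more efficient
# inference silicon. All inputs below are illustrative assumptions.

initial_deployment_gw = 1.0   # assumed first-wave capacity, not the full 10GW
utilization = 0.5             # assumed average load factor
hours_per_year = 24 * 365
price_per_mwh = 80.0          # assumed industrial electricity rate (USD)
efficiency_gain = 2.5         # midpoint of the claimed 2-3x gain

# Energy to serve the workload on general-purpose GPUs
baseline_mwh = initial_deployment_gw * 1_000 * hours_per_year * utilization

# Same workload on custom chips with 2.5x better performance-per-watt
custom_mwh = baseline_mwh / efficiency_gain

savings = (baseline_mwh - custom_mwh) * price_per_mwh
print(f"Estimated annual power savings: ${savings / 1e6:.0f} million")
# -> roughly $210 million per gigawatt deployed, before cooling overhead
```

Scaled up to the full 10GW Broadcom fleet, the same arithmetic pushes potential savings into the billions, which is the economic case for custom inference silicon.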
Broadcom CEO Hock Tan positioned AI infrastructure as a critical utility: “This is like the railroad or the internet. [AI] is becoming a critical utility over time for 8 billion people globally. But it cannot be done with just one party, it needs a lot of partnerships and collaboration across an ecosystem.”
The Mounting Infrastructure Commitments
OpenAI has now signed chip deals totaling over 26 gigawatts of computing capacity across multiple vendors:
- Nvidia: 10GW deployment, $100 billion deal announced in September
- AMD: 6GW deployment, up to 10% equity stake in AMD through conditional warrants
- Broadcom: 10GW of custom chips, financial terms undisclosed
- Oracle: $300 billion cloud infrastructure deal over five years
At current deployment costs of approximately $50 billion per gigawatt ($35 billion for chips, $15 billion for infrastructure), the 26GW commitment translates to roughly $1.3 trillion just for the chip and data center buildout. Combined with the Oracle deal, total infrastructure spending approaches $1.6 trillion.
For context, OpenAI’s 2024 revenue was approximately $3.7 billion with losses exceeding $5 billion. The company’s current valuation sits around $300 billion. Infrastructure commitments now exceed that valuation by roughly 5x and annual revenue by over 400x.
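The arithmetic behind those figures is straightforward to check. A minimal sketch, using only the numbers cited in this article:

```python
# Tallying OpenAI's infrastructure commitments from the figures above.
# All amounts are in billions of dollars.

cost_per_gw = 35 + 15        # $35B for chips + $15B for infrastructure
committed_gw = 26

chip_buildout = cost_per_gw * committed_gw   # chip and data center buildout
total = chip_buildout + 300                  # plus Oracle's $300B cloud deal

valuation = 300              # OpenAI's current valuation
revenue_2024 = 3.7           # 2024 revenue

print(f"Chip/data-center buildout: ${chip_buildout / 1000:.1f} trillion")
print(f"Total with Oracle: ${total / 1000:.1f} trillion")
print(f"Commitments vs. valuation: {total / valuation:.0f}x")
print(f"Commitments vs. revenue: {total / revenue_2024:.0f}x")
```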
The Economics: How Does This Get Funded?
The financing question looms over every new deal announcement. OpenAI isn’t just writing trillion-dollar checks. These deals are multi-year commitments structured as vendor financing, equity swaps, and supply agreements.
Nvidia’s deal: Combined $100 billion investment with long-term chip supply, with Nvidia taking an equity stake in OpenAI. The structure means Nvidia’s investment partially flows back through chip purchases.
AMD’s deal: OpenAI receives warrants to acquire up to 10% of AMD conditioned on deployment milestones and AMD’s stock performance. This converts chip purchases into equity acquisition, reducing immediate cash requirements.
Broadcom’s deal: Unlike Nvidia and AMD, Broadcom did not offer financial incentives or equity arrangements. OpenAI pays market rates but expects costs to decline as competition increases and manufacturing scales.
The technical angle that matters: custom chip development with Broadcom may cost less over time than buying Nvidia GPUs, but it requires massive upfront investment and commits OpenAI to Broadcom’s roadmap. If the custom chips underperform or development slips, OpenAI has limited alternatives for those 10GW of capacity.
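A toy break-even model illustrates the tradeoff. The development cost and per-chip prices below are hypothetical assumptions, chosen only to show the shape of the calculation:

```python
# Toy break-even model: custom silicon vs. buying merchant GPUs.
# All numbers are hypothetical assumptions for illustration.

nre_cost = 5e9                # assumed one-time design/development cost (USD)
merchant_chip_cost = 30_000   # assumed per-GPU purchase price (USD)
custom_chip_cost = 15_000     # assumed per-ASIC cost at volume (USD)

savings_per_chip = merchant_chip_cost - custom_chip_cost
break_even_units = nre_cost / savings_per_chip
print(f"Break-even at ~{break_even_units:,.0f} chips")
# -> ~333,333 chips: custom silicon only pays off at hyperscale volume
```

The shape of the result is the point: the upfront cost only amortizes at hyperscale volume, which is why few companies beyond the likes of OpenAI, Google, and Meta attempt custom silicon at all.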
Strategic Implications: Reducing Nvidia Dependence
The Broadcom and AMD deals serve a clear purpose: reduce OpenAI’s reliance on Nvidia, which has maintained 88% discrete GPU market share and near-monopoly pricing power in AI accelerators.
Altman has repeatedly stated that chip shortages constrained ChatGPT development and new model releases. By diversifying across Nvidia, AMD, and custom Broadcom silicon, OpenAI aims to:
- Control costs: Competition between vendors should reduce per-chip pricing
- Ensure supply: Multiple vendors reduce risk of capacity constraints from any single supplier
- Optimize performance: Custom chips can deliver better efficiency for specific workloads
However, this strategy creates new complexity. Managing three different chip architectures requires maintaining separate software stacks, optimization pipelines, and deployment infrastructure. The operational overhead of multivendor AI infrastructure is substantial.
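A minimal sketch of what that overhead looks like in software, with entirely hypothetical class names that do not reflect OpenAI’s actual stack: every accelerator family needs its own implementation behind a common interface, and every optimization must be built and validated per backend.

```python
# Hypothetical sketch of a multi-vendor hardware abstraction layer.
# None of these names reflect OpenAI's actual software stack.

from abc import ABC, abstractmethod

class InferenceBackend(ABC):
    """Common interface every accelerator family must implement."""

    @abstractmethod
    def run(self, model: str, tokens: list[int]) -> list[int]:
        """Execute one inference request and return output tokens."""

class NvidiaBackend(InferenceBackend):
    def run(self, model: str, tokens: list[int]) -> list[int]:
        raise NotImplementedError("dispatch to CUDA kernels")

class AmdBackend(InferenceBackend):
    def run(self, model: str, tokens: list[int]) -> list[int]:
        raise NotImplementedError("dispatch to ROCm kernels")

class CustomAsicBackend(InferenceBackend):
    def run(self, model: str, tokens: list[int]) -> list[int]:
        raise NotImplementedError("dispatch to the custom chip's runtime")

# Each backend carries its own compiler, kernel library, profiler, and
# test matrix: the operational overhead described above.
```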
The Broader AI Chip Ecosystem
OpenAI’s deals have reshaped the AI chip market in weeks. AMD’s stock surged 35% following the OpenAI partnership announcement. Broadcom shares jumped 8% after this deal was revealed. Meanwhile, traditional chip competitors are scrambling to secure their positions.
Intel’s Panther Lake processors, launching in 2026, will include enhanced AI capabilities and improved graphics, but Intel has yet to secure comparable hyperscale AI infrastructure deals. The company recently received a $5 billion investment from Nvidia, but that partnership focuses on foundry services rather than AI chip sales.
The circular nature of these relationships is striking: Nvidia invests in OpenAI, which buys Nvidia chips. OpenAI invests in AMD (via warrants), which competes with Nvidia. Broadcom co-develops custom chips with OpenAI while also supplying networking infrastructure to Nvidia data centers.
Every major AI player is simultaneously customer, supplier, competitor, and investor.
Broadcom’s custom chips target inference workloads specifically, reflecting where AI computing economics are heading. Training large language models requires massive parallel compute, Nvidia’s current strength. But as models mature and deployment scales, inference becomes the dominant cost.
This is why Broadcom’s involvement matters. The company specializes in custom silicon for hyperscale customers; it already supplies custom chips to Google, Meta, and others for their AI infrastructure. Broadcom’s expertise in high-performance networking and custom ASIC design makes it a credible partner for large-scale inference deployments.
Timeline and Deployment Challenges
The Financial Times first reported in September that OpenAI was working with Broadcom on custom chip development. Mass production is expected to begin next year, with initial deployments likely in 2026.
But deploying 26 gigawatts of computing capacity requires building massive new data centers, potentially dozens of facilities globally. Each gigawatt-scale data center needs:
- Land acquisition and permitting
- Power infrastructure (substation connections, backup generators)
- Cooling systems capable of handling tens of megawatts of heat
- Network connectivity with hundreds of gigabits of bandwidth
- Physical security and operational staff
The construction timeline for a single gigawatt-scale data center typically spans 2-3 years. Deploying 26GW across multiple sites will take most of the decade, even with aggressive parallel construction.
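A simple scheduling model, using assumed site sizes and construction capacity, shows why the buildout stretches across the decade:

```python
# Rough schedule model for standing up 26 GW of capacity. Site size,
# build time, and parallelism are assumptions, not announced plans.

import math

total_gw = 26
gw_per_site = 1          # assumed gigawatt-scale site size
build_years = 2.5        # midpoint of the typical 2-3 year build
parallel_sites = 8       # assumed cap on simultaneous construction

sites = math.ceil(total_gw / gw_per_site)    # 26 sites
waves = math.ceil(sites / parallel_sites)    # construction waves
print(f"~{waves * build_years:.0f} years with {parallel_sites} parallel builds")
# -> ~10 years: "most of the decade" even with aggressive parallelism
```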
OpenAI’s Power Move
OpenAI has committed to purchasing over 26 gigawatts of AI computing capacity from Nvidia, AMD, and Broadcom, with total infrastructure spending potentially exceeding $1.5 trillion when including Oracle’s cloud deal.
The Broadcom partnership introduces custom inference chips co-designed specifically for OpenAI’s models, marking the company’s first move into proprietary silicon.
The strategic logic is clear: diversify chip supply, reduce Nvidia dependence, control costs, and optimize performance for inference workloads that will dominate AI economics as deployment scales.
The execution risk is equally clear: financing $1.5 trillion in infrastructure on $3.7 billion in annual revenue requires sustained exponential growth in AI demand and OpenAI’s ability to monetize it.
The deals also deepen the circular interconnections in the AI ecosystem. Nvidia invests in OpenAI and Intel. OpenAI invests in AMD and buys chips from all three, plus Broadcom. Oracle builds data centers using chips from these vendors. Everyone’s growth depends on everyone else’s success and on the underlying assumption that AI compute demand will support trillion-dollar infrastructure buildouts.
Whether that assumption proves correct will determine if OpenAI’s chip spending spree becomes the foundation for dominant AI infrastructure or an overcommitted bet on demand that never fully materializes. The first answer arrives in 2026 when initial deployments go live and OpenAI must demonstrate that its infrastructure investments translate to sustainable revenue growth.