Leading AI company reaches $40B compute costs, 6GW power, $200B capex

On Track · Economic Impact · 50% confidence
Predicted: End of 2026 · Updated: 2026-04-02 · Source: AI 2027, page 9, KEY METRICS 2026 dashboard (Late 2026: AI Takes Some Jobs)
OpenBrain compute costs: $40B annual (2026) · OpenBrain power requirement: 6GW peak · OpenBrain capital expenditure: $200B total cost of ownership of active compute

What AI 2027 Predicted

The scenario’s KEY METRICS 2026 dashboard includes three infrastructure metrics for “OpenBrain” (the scenario’s stand-in for the leading AI company). Together they describe the operational scale of a single dominant AI lab by the end of 2026:

  • $40B annual compute costs — the direct cost of running and maintaining their compute fleet
  • 6GW peak power — the maximum power draw of all their datacenter infrastructure
  • $200B capital expenditure — the total cost of ownership of their active compute hardware

These figures sit alongside other KEY METRICS (the separately tracked $45B revenue, $1T global capex, 38GW global power, and 2.5% US power share). While the global metrics are tracked as separate predictions, these company-specific figures represent the scenario’s view of how concentrated AI infrastructure becomes.

How We Track This

  • Quarterly earnings and capex disclosures from Alphabet, Microsoft, Amazon, Meta (the hyperscalers hosting AI workloads)
  • OpenAI infrastructure plans and spending (Stargate project, partnership with Microsoft/SoftBank)
  • Anthropic infrastructure partnerships (Amazon/AWS, Google Cloud)
  • Industry analyst estimates (SemiAnalysis, Futurum Group, CTVC)
  • Datacenter power tracking databases and utility filings
  • Individual datacenter project announcements with MW/GW capacity

Current Evidence

Compute Costs ($40B target)

No single AI company has publicly disclosed $40B in annual compute costs. Proxy estimates:

  • OpenAI reportedly spends heavily on compute via Microsoft Azure, but specific figures are not public. Revenue was roughly $20B ARR at the end of 2025, and costs are believed to substantially exceed revenue.
  • Major hyperscalers’ total AI-related compute spending is in the $50-100B range combined, but this serves many customers, not a single company’s own research/deployment.
  • The $40B figure implies a single company spending roughly the combined compute budgets of OpenAI and Anthropic on its own operations, a scale no company had reached as of 2025.
  • OpenAI, however, now projects $46B in total compute costs for 2026 ($32B training + $14.1B inference), which would exceed the $40B threshold if realized.
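The arithmetic behind that last bullet is simple enough to check directly (a quick sketch using only the figures quoted above):

```python
# OpenAI's projected 2026 compute costs, as cited above (figures in $B).
training = 32.0
inference = 14.1
total = training + inference  # ~46.1, reported as "$46B"

target = 40.0  # AI 2027's annual compute-cost metric for "OpenBrain"
print(f"Projected ${total:.1f}B vs. ${target:.0f}B target; exceeds: {total > target}")
```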

Power (6GW target)

  • Individual modern AI datacenter campuses consume 500MW to 1GW
  • No single AI company operates 6GW of peak AI-dedicated power as of early 2026
  • Alphabet plans $175-185B capex in 2026, which could support multi-GW scale, but this serves all of Google’s operations, not AI R&D alone
  • The CTVC database tracks 294 datacenter projects totaling 73.6GW planned capacity across all companies — but most are not yet operational
  • Estimate: the largest single-company AI power footprint is likely in the 1-3GW range for leading hyperscalers, well short of 6GW

Capital Expenditure ($200B target)

The $200B capex figure for a single company is the most aggressive of the three targets:

  • Amazon: ~$200B projected capex for 2026 (most but not all for datacenters; serves AWS broadly, not just AI)
  • Alphabet: $175-185B capex for 2026
  • Microsoft: tracking toward $120B+
  • Meta: $115-135B
  • Combined Big 5: $660-690B total in 2026

Individual company capex is approaching the $200B range, but this includes all infrastructure (not just AI compute ownership), and serves millions of customers — not the single-company-own-compute scenario AI 2027 describes.
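Tallying the guidance above makes the gap concrete (a quick sketch; the range midpoints are my simplification, and the remainder of the quoted $660-690B combined figure comes from hyperscalers not itemized here):

```python
# 2026 capex guidance from the section above, in $B (midpoints of quoted ranges).
capex_2026 = {
    "Amazon": 200.0,
    "Alphabet": (175 + 185) / 2,   # guided $175-185B
    "Microsoft": 120.0,            # "tracking toward $120B+"
    "Meta": (115 + 135) / 2,       # guided $115-135B
}

subtotal = sum(capex_2026.values())
print(f"Four-company subtotal: ${subtotal:.0f}B")

# Gap vs. the reported "Big 5" combined range of $660-690B
gap_low, gap_high = 660 - subtotal, 690 - subtotal
print(f"Implied remaining Big 5 spend: ${gap_low:.0f}-{gap_high:.0f}B")
```

Only Amazon's number brushes the $200B single-company target, and as noted it covers the whole AWS business rather than one lab's own compute.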

Counterevidence & Limitations

  • The “OpenBrain” figures describe a fictional company that is both the leading AI researcher and a massive infrastructure operator. No real-world analog exists — OpenAI relies on Microsoft’s infrastructure, Anthropic on AWS/Google Cloud. The prediction implicitly assumes vertical integration at a scale not yet seen.
  • Hyperscaler capex is approaching $200B per company, but this serves entire cloud businesses, not a single AI research operation
  • The 6GW figure is plausible for a hyperscaler’s total AI footprint by late 2026 but represents a much larger infrastructure than any current AI-focused operation
  • Power grid constraints remain a binding limitation — many announced datacenter projects face 2-4 year interconnection queues
  • Comparing AI-only costs vs. total company costs is inherently ambiguous; the source doesn’t fully distinguish between OpenBrain’s AI research compute and its inference/deployment compute

What Would Change Our Assessment

  • Upgrade to On Track: A single company (or tightly coupled partnership like OpenAI+Microsoft) discloses or is credibly estimated to operate $20B+ in AI compute costs, 3GW+ AI power, and $100B+ AI-dedicated capex
  • Upgrade to Confirmed: Verified figures at or near $40B compute / 6GW power / $200B capex for a single entity’s AI operations
  • Downgrade to Behind: By end of 2026, the largest single-company AI infrastructure remains below $15B compute / 2GW power / $80B capex
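These bands can be written out as a small rubric. The sketch below is illustrative only: `assess` is a hypothetical helper, and it applies the three criteria jointly, whereas the tracker also moves individual sub-metrics (e.g. compute costs) between statuses:

```python
def assess(compute_b: float, power_gw: float, capex_b: float) -> str:
    """Map observed single-entity AI infrastructure metrics to a status,
    using the bands from 'What Would Change Our Assessment'."""
    # Confirmed: figures at or near the full $40B / 6GW / $200B targets
    if compute_b >= 40 and power_gw >= 6 and capex_b >= 200:
        return "Confirmed"
    # On Track: $20B+ compute, 3GW+ power, $100B+ AI-dedicated capex
    if compute_b >= 20 and power_gw >= 3 and capex_b >= 100:
        return "On Track"
    # Behind: below $15B / 2GW / $80B by end of 2026
    if compute_b < 15 and power_gw < 2 and capex_b < 80:
        return "Behind"
    return "Emerging"  # mixed signals fall between the bands

# Example: projected $46B compute, but power and capex still unverified
print(assess(46, 2.5, 90))  # mixed signals -> "Emerging"
```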

Update History

Date · Update
2025-01 · Microsoft announces $80B capex for FY2025. Stargate Project announced at the White House — OpenAI, SoftBank, Oracle target $500B over 4 years with $100B immediate deployment. DeepSeek R1 released at claimed $5.6M training cost; Nvidia loses $589B in market cap.
2025-03 · OpenAI signs $11.9B five-year deal with CoreWeave. OpenAI closes $40B funding round at $300B valuation — largest private tech fundraise in history.
2025-05 · Q1 2025 earnings: Alphabet $17.2B, Meta ~$19.4B, Amazon $24.3B, Microsoft $21.4B quarterly capex. All confirm AI infrastructure acceleration. CoreWeave expands OpenAI agreement by $4B (cumulative ~$16B).
2025-07 · OpenAI and Oracle announce 4.5GW Stargate expansion exceeding $300B over 5 years. Alphabet raises 2025 capex guidance to $85B. Amazon reports Q2 capex of $31.4B, implying full-year ~$125B+.
2025-09 · Anthropic closes $13B Series F at $183B valuation. First Stargate data center operational in Abilene, TX (200MW+). CoreWeave expands OpenAI deal by another $6.5B. Five new Stargate sites announced with combined ~7GW planned capacity.
2025-10 · AMD and OpenAI announce 6GW GPU partnership (~$90B cumulative). OpenAI completes for-profit restructuring. Q3 capex: Alphabet $24B, Amazon $34.2B, Meta $19.4B, Microsoft $34.9B (+74% YoY).
2025-11 · Sam Altman states OpenAI has ~$20B ARR and ~$1.4T in datacenter commitments over 8 years (~30GW capacity). This marked OpenAI’s peak stated ambition before later revision.
2025-12 · Analysts project hyperscaler capex exceeding $600B in 2026 (+36% over 2025). SoftBank completes $22.5B second tranche to OpenAI. Michigan regulators approve 1.4GW for Stargate. Full-year 2025 capex: Amazon ~$131.8B, Alphabet $91.4B, Meta $72.2B.
2026-01 · Meta guides 2026 capex at $115-135B (nearly double 2025). Stock surges 10.4%.
2026-02 · Alphabet guides 2026 capex at $175-185B (nearly double its $91.4B in 2025). Amazon guides $200B for 2026 (+53%). Big 5 combined: $660-690B. OpenAI revises compute spending to ~$600B by 2030 (down from $1.4T) and projects $46B total compute costs for 2026 ($32B training + $14.1B inference), exceeding the $40B threshold. Anthropic closes $30B Series G at $380B valuation.
2026-03 · OpenAI closes $122B funding at $852B valuation. Stargate program now encompasses ~10GW planned capacity. Assessment: $40B compute costs likely met (OpenAI projects $46B). 6GW exceeded in planned capacity, but operational capacity is harder to verify. $200B single-company capex approached by Amazon but serves the entire AWS business. Status: Emerging → On Track for compute costs; power and capex remain Emerging. Confidence 0.40 → 0.50.