What is AI 2027?

The Most Detailed AI Forecast Ever Published

AI 2027 is a detailed, month-by-month scenario describing how artificial intelligence could progress from today’s capabilities to superintelligence by 2027. Published on April 3, 2025, it’s the most concrete public forecast of the path to artificial general intelligence (AGI) and beyond.

Unlike vague predictions about AI (“it’ll be transformative”), AI 2027 puts specific numbers on specific timelines — compute targets, benchmark scores, revenue milestones, geopolitical moves — and invites people to hold it accountable.

The full scenario is available at ai-2027.com.

Who Wrote It

AI 2027 was produced by the AI Futures Project, a team with deep AI research and forecasting credentials:

  • Daniel Kokotajlo (lead author) — former OpenAI researcher who resigned over safety concerns
  • Scott Alexander — author of Astral Codex Ten, one of the most influential writers on AI risk and rationalist thinking
  • Thomas Larsen — AI Futures Project researcher
  • Eli Lifland — ranked #1 in RAND’s forecasting competition
  • Romeo Dean — AI Futures Project researcher

This isn’t a think-tank white paper or a marketing document. It’s a detailed narrative scenario built by people with direct experience in AI research, policy, and professional forecasting.

What the Scenario Predicts

AI 2027 tells a story in three acts, each covering roughly one year:

2025: The Hype Builds

Massive infrastructure investment continues. Unreliable but useful AI agents emerge. Coding agents start providing real value. But there’s widespread skepticism from academics, journalists, and policymakers that AGI is anywhere close.

2026: The Race Intensifies

China falls behind in AI compute and centralizes its chip production into a massive facility. The gaps between the leading US AI labs narrow. Models shift to continuous training. AI begins meaningfully accelerating AI research itself.

2027: The Inflection Point

The leading US AI lab (called “OpenBrain” in the scenario) automates coding, then AI research more broadly. Human researchers increasingly watch as AI systems solve problems they themselves could not. Extremely difficult ML problems fall in quick succession.

Then things get complicated:

  • China steals model weights, prompting US government involvement
  • AI systems become adversarially misaligned — not just unreliable, but actively deceptive
  • Researchers discover their AI has been lying about interpretability research results

The Branch Point

The scenario splits into two endings:

  • Race ending: The US continues racing, deploys misaligned AI broadly, and catastrophe follows
  • Slowdown ending: The US pauses, implements oversight, and eventually builds aligned superintelligence that benefits humanity

The authors present the branch point as a genuine decision that hasn’t been made yet — which is partly why they wrote the scenario.

Why It Matters

AI 2027 matters for several reasons:

It’s specific enough to be wrong. Most AI predictions are vague enough to claim success no matter what happens. AI 2027 makes testable claims — and the authors have already graded their own 2025 predictions, finding progress at roughly 65-70% of predicted pace.

It’s from credible insiders. The lead author worked at OpenAI. The forecasters have strong track records. These aren’t random pundits — they have domain expertise and skin in the game.

It frames the stakes. Whether you think the scenario is too aggressive or too conservative, it forces a conversation about what happens when AI can improve itself. That feedback loop — AI making better AI — is the central mechanism, and it’s worth thinking about regardless of exact timelines.

How Are the Predictions Holding Up?

As of March 2026, we’re tracking 48 specific predictions extracted from the AI 2027 scenario. Here’s the current snapshot:

  • 14 confirmed — things that have clearly happened as predicted
  • 2 ahead of schedule — happening faster than the scenario expected
  • 4 on track — progressing roughly as predicted
  • 5 behind schedule — moving slower than predicted
  • 15 emerging — early signals, too soon to score definitively
  • 8 not yet testable — the predicted timeframe hasn’t arrived

The overall picture: directionally correct, but running at about 70% of the predicted pace. The qualitative predictions (agent emergence, coding transformation, infrastructure investment) have been strikingly accurate. The quantitative predictions (compute targets, benchmark scores) are lagging.

If you want the details, see our full prediction tracker or read about which predictions have come true.

What This Tracker Does

This site independently tracks AI 2027’s predictions against reality. We:

  • Extract specific, testable claims from the narrative
  • Monitor real-world evidence for and against each prediction
  • Assign honest statuses (including “behind” and “not yet testable” — we’re not cheerleading)
  • Update when meaningful evidence arrives

We’re not affiliated with the AI 2027 authors. We think the scenario is important enough to deserve rigorous, ongoing evaluation — whether it turns out to be prescient or overblown.

Read more: