Public unaware of best AI capabilities
OpenBrain 'responsibly' elects not to release its most capable model publicly yet (page 10); very few people have access to the newest capabilities (page 16).
What AI 2027 Predicted
The scenario argues that as AI approaches superintelligence, the public will be “months behind” internal capabilities at frontier labs. This isn’t framed as a conspiracy but as a structural feature: labs develop models internally before releasing them publicly, safety testing introduces delays, and government classification may further restrict disclosure. The gap between what’s possible inside the labs and what the public can access widens over time.
How We Track This
We monitor:
- Time gap between known internal model capabilities and public releases (a minimal measurement sketch follows this list)
- Government classification or restriction of AI capabilities
- Lab disclosures about internal-only models and tools
- Whistleblower reports and leaks about undisclosed capabilities
- Academic and policy analysis of the “adaptation buffer” between internal and public AI
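
To make the first metric operational, here is a minimal sketch in Python. The milestone records, dates, and field layout are hypothetical placeholders of our own invention, not tracked data; the point is only how the lag is computed, including the lower-bound treatment of models that are known internally but not yet released.

```python
from datetime import date

# Hypothetical milestone records: (model, date capability was known internally,
# public release date or None if still unreleased). All values are placeholders.
milestones = [
    ("model-a", date(2025, 11, 1), date(2026, 2, 15)),
    ("model-b", date(2026, 1, 10), None),  # known internally, not yet public
]

def lag_days(internal: date, public: date | None, today: date) -> int:
    """Days between internal capability and public availability.

    For an unreleased model the lag is measured up to `today`,
    so the result is a lower bound that grows until release.
    """
    return ((public or today) - internal).days

today = date(2026, 3, 30)
for name, internal, public in milestones:
    status = "released" if public else "unreleased (lower bound)"
    print(f"{name}: {lag_days(internal, public, today)} days, {status}")
```

On these placeholder dates, model-a shows a 106-day internal-to-public gap, while model-b reports a 79-day (and still growing) lower bound. The "weeks rather than months" threshold in the assessment criteria below is evaluated against exactly this kind of gap.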
Current Evidence
Multiple indicators suggest this dynamic is already in motion:
Growing capability gap: AI Frontiers published an article (Aug 2025) documenting that “AI systems with capabilities considerably ahead of what the public can access remain hidden inside corporate labs,” calling this hidden frontier “America’s greatest technological advantage — and a serious, overlooked vulnerability.”
Internal deployment gap: A January 2026 arXiv paper (“Internal Deployment Gaps in AI Regulation,” 2601.08005) formally analyzed how “frontier AI regulations primarily focus on systems deployed to external users” while “high-stakes applications can occur internally”: companies deploy highly capable systems internally to accelerate R&D, handle sensitive data, and speed up business processes, outside public scrutiny.
Security and classification pressure: The Anthropic-DOD confrontation (Feb–Mar 2026) demonstrated that the government is increasingly interested in controlling AI lab outputs. The International AI Safety Report 2026 noted that “large evidence gaps remain” in understanding AI capabilities, partly because labs control disclosure.
Lab opacity: The AI Futures grading noted “high uncertainty due to compute obscurity” — labs are increasingly unwilling to disclose training compute, model architectures, and capability evaluations. No lab publicly discloses its full internal model lineup or capability assessments.
AISI report: The UK AI Safety Institute’s Frontier AI Trends Report documents the “adaptation buffer” between when capabilities are known internally and when they become publicly usable.
Sources:
- The Hidden AI Frontier — AI Frontiers
- Internal Deployment Gaps in AI Regulation — arXiv
- International AI Safety Report 2026
- Frontier AI Trends Report — AISI
Counterevidence & Limitations
- Open-source models (DeepSeek, Llama) partially counteract secrecy by providing public access to near-frontier capabilities
- The gap could be narrowing in some areas as competition drives faster public releases
- There’s no concrete evidence of government-mandated AI classification in the US (yet)
- Some capability gaps are routine (every tech company tests products internally before release) rather than strategic secrecy
- The prediction is inherently hard to evaluate: by definition, we cannot directly measure capabilities that have not been disclosed
What Would Change Our Assessment
- Upgrade to “on-track”: Evidence of multi-month gaps between internal capability milestones and public releases; government-imposed disclosure restrictions
- Upgrade to “confirmed”: Clear documentation of major capabilities withheld from public access for strategic/security reasons; formal classification of AI capabilities
- Downgrade: If competitive pressure continues to drive rapid public releases, keeping the gap to weeks rather than months
Update History
| Date | Update |
|---|---|
| 2026-03 | Structural forces toward secrecy visible — competitive pressure, safety concerns, and government relationships all incentivize capability withholding. Academic analysis documents growing gap between internal and public-facing capabilities, but no confirmed multi-month withholding yet. |
| 2026-03-30 | Anthropic data leak (March 26) revealed “Claude Mythos” (also called “Capybara”), a model Anthropic describes as a “step change,” already in testing with early-access customers, above even the Opus tier, and carrying “unprecedented cybersecurity risks.” This is concrete evidence of multi-month withheld capabilities: the model has been in development and testing while the public knows only Opus 4.6. Fortune and SiliconANGLE reporting corroborates the leak. Confidence adjusted 0.70 → 0.75. |