High-bandwidth non-text reasoning (neuralese) deployed

Emerging · Model Capability · 35% confidence
Predicted: Early 2027 · Updated: 2026-03-13 · Source: ai-2027.com, March 2027: Algorithmic Breakthroughs
One such breakthrough is augmenting the AI's text-based scratchpad (chain of thought) with a higher-bandwidth thought process (neuralese recurrence and memory).

What AI 2027 Predicted

The scenario describes a major algorithmic breakthrough occurring around early 2027: augmenting text-based chain-of-thought reasoning with a higher-bandwidth “neuralese” thought process. This involves recurrence and memory mechanisms that allow models to reason in latent space rather than through token-by-token text generation. The concept implies models could process information more efficiently by thinking in a compressed, non-linguistic representation — dramatically increasing effective reasoning depth and breadth.

How We Track This

We monitor:

  • Academic research on latent-space reasoning and non-text chain-of-thought
  • Recurrence architectures applied to transformers (state-space models, recurrent elements)
  • Frontier lab publications on reasoning efficiency beyond text-based CoT
  • Deployment of models with non-text reasoning capabilities
  • Papers on “continuous thought” or “thinking in latent space”

Current Evidence

Research activity in this area is substantial, though no frontier model has deployed neuralese-style reasoning in production:

COCONUT (Continuous Chain of Thought): A December 2024 paper from Meta demonstrated training LLMs to “reason in a continuous latent space,” where internal hidden states replace explicit text tokens as the reasoning medium. The approach enables breadth-first search-like reasoning patterns and showed promising results on logical reasoning tasks.
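The core mechanism COCONUT describes can be sketched in a few lines. This is a toy illustration, not the paper's implementation: the function names, the two-dimensional "hidden states," and the arithmetic are all invented for clarity. The one faithful idea is the loop structure, in which the model's final hidden state is fed back as the next input embedding instead of being decoded into a token.

```python
# Toy sketch of a COCONUT-style latent reasoning loop (all names and
# numbers are illustrative assumptions, not the paper's API).

def embed(token):
    # Toy embedding: map a token id to a small vector (assumption).
    return [float(token), float(token) / 2.0]

def forward(hidden_states):
    # Toy stand-in for one transformer pass; returns a final hidden
    # state. A real model would attend over the whole sequence.
    last = hidden_states[-1]
    return [0.5 * last[0] + 0.1, 0.5 * last[1] + 0.1]

def latent_reasoning(prompt_tokens, num_thoughts):
    """Run num_thoughts latent steps: each step's output hidden state
    becomes the next step's input embedding, so no intermediate
    reasoning is ever decoded into text tokens."""
    states = [embed(t) for t in prompt_tokens]
    for _ in range(num_thoughts):
        h = forward(states)
        states.append(h)  # continuous "thought", never verbalized
    return states

trace = latent_reasoning([1, 2, 3], num_thoughts=4)
```

The contrast with ordinary chain of thought is the append step: a text-based CoT model would sample a token from `h`, re-embed it, and lose whatever information the sampling discarded, whereas here the full-precision state carries forward.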

RELAY Framework: A 2025 paper introduced RELAY (REasoning through Loop Alignment iteratively), a two-stage framework for improving auto-regressive model performance on long reasoning tasks through iterative latent-space processing.

Academic discussion: LessWrong and research communities have extensively discussed neuralese concepts, with researchers noting that the approach “greatly increases the serial depth of computation” possible for models. However, most characterize deployed neuralese as “largely theoretical” for frontier production systems.

The key distinction: current chain-of-thought reasoning (as in o1, o3, Claude’s extended thinking) still operates in token space. These are text-based scratchpads — exactly what AI 2027 says neuralese would augment. The research signals are real but the deployment timeline of early 2027 appears aggressive.

Counterevidence & Limitations

  • No frontier lab has announced deploying latent-space reasoning in production models
  • Current reasoning models (o3, Claude extended thinking) achieve strong results with text-based CoT, reducing the urgency of switching paradigms
  • The academic work is promising but small-scale; scaling neuralese-style reasoning to frontier model size is unproven
  • It’s possible that labs are working on this internally without public disclosure, making the “emerging” status uncertain in either direction
  • Interpretability concerns may slow adoption — text-based CoT is inspectable, neuralese is not

What Would Change Our Assessment

  • Upgrade to “on-track”: A frontier lab announces or leaks development of latent-space reasoning for production deployment
  • Upgrade to “confirmed”: A shipped model demonstrably uses non-text intermediate reasoning
  • Downgrade to “behind”: No frontier model uses anything beyond text-based CoT by mid-2027
  • Status stays “emerging”: As long as research progresses but production deployment remains unconfirmed

Update History

Date · Update
2026-03 · Active research area: COCONUT and RELAY papers demonstrate non-text reasoning pathways. No production deployment yet. Predicted timeline of early 2027 appears aggressive given the current state of research.