Models shift to continuous/iterative training
> By this point 'finishes training' is a bit of a misnomer; models are frequently updated to newer versions trained on additional data or partially re-trained to patch some weaknesses.
What AI 2027 Predicted
The scenario describes a shift away from discrete model releases toward continuous, iterative training. Rather than training a model from scratch and releasing it as a finished product, labs would frequently update models — training on additional data, patching weaknesses, and releasing incremental versions. The term “finishes training” becomes a misnomer.
How We Track This
We monitor:
- Frequency of model version updates from major labs (e.g., GPT-4o → GPT-5 → GPT-5.1 → GPT-5.2 → GPT-5.4); see the interval-tracking sketch after this list
- Lab announcements describing iterative or continuous training paradigms
- Versioning patterns suggesting ongoing refinement rather than fresh training runs
- API model deprecation schedules showing the pace of updates
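One lightweight way to operationalize the first item is to compute the gaps between dated releases. The sketch below (Python) is illustrative, not an official tracking tool: it uses only dates stated in or derivable from this page’s update history, omits entries without an exact date (e.g., GPT-5’s mid-2025 launch and GPT-5.2), and marks the two derived dates as inferences.

```python
from datetime import date
from statistics import median

# Release dates taken from this page's update history. Entries without an
# exact date (e.g., GPT-5, GPT-5.2) are omitted; dates marked "inferred"
# are derived from the "six days later / six days after" phrasing below.
releases = {
    "OpenAI": [
        ("GPT-4.1", date(2025, 4, 14)),
        ("GPT-5.1", date(2025, 11, 12)),
        ("GPT-5.1-Codex-Max", date(2025, 11, 18)),
    ],
    "Anthropic": [
        ("Claude Opus 4", date(2025, 5, 22)),
        ("Claude Opus 4.5", date(2025, 11, 30)),  # inferred
    ],
    "Google": [
        ("Gemini 2.5 Flash", date(2025, 5, 20)),
        ("Gemini 3", date(2025, 11, 24)),  # inferred
    ],
}

for lab, models in releases.items():
    # Days elapsed between each consecutive pair of dated releases
    gaps = [(b - a).days for (_, a), (_, b) in zip(models, models[1:])]
    print(f"{lab}: gaps between releases (days) = {gaps}, median = {median(gaps)}")
```

A shrinking median gap across successive snapshots is the signal this page tracks; API deprecation schedules could be folded into the same dataset in the same way.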
Current Evidence
The evidence for this prediction is strong. OpenAI’s GPT-5 family demonstrates this pattern clearly: GPT-5 launched in mid-2025, followed by GPT-5.1, GPT-5.2, GPT-5.2-Codex, and GPT-5.4 in rapid succession — each appearing to build on the same base model with iterative improvements. The AI Futures Project’s own grading assessed this as “correct,” noting that “GPT-4o → GPT-5 → GPT-5.1 appear to be continuations of same base model.”
Anthropic has followed a similar pattern with Claude model refreshes (Claude 3.5 Sonnet → Claude 3.7 Sonnet → Claude Opus 4 → Claude Opus 4.5 → Claude Opus 4.6), with intermediate updates and capability patches. Google has also adopted rapid iteration with Gemini model versions.
The pattern appears broadly adopted across the industry: model versioning now resembles software release cycles more than discrete research publications.
Sources:
- GPT-5 is here — OpenAI
- Introducing GPT-5.2 — OpenAI
- Introducing GPT-5.2-Codex — OpenAI
- Grading AI 2027’s 2025 Predictions — AI Futures Project
Counterevidence & Limitations
- It’s unclear whether iterative updates involve actual continued pretraining of the base model, post-training refinements (fine-tuning, RLHF), or deployment-side changes such as prompt and scaffolding updates
- The distinction between “continuous training” (as AI 2027 envisions: weight updates from new data) and “continuous deployment” (same base model, improved scaffolding) matters for the deeper claim; the sketch after this list illustrates the difference
- Some version bumps may be more about marketing than fundamental model changes
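To make the first two bullets concrete, here is a minimal sketch of the distinction, with a toy PyTorch model standing in for an LLM (the placeholder data and objective are ours, not any lab’s actual pipeline): continued training changes the released weights, while continuous deployment leaves them untouched.

```python
import torch
from torch import nn, optim

# Toy stand-in for a released base model (assumption: a small Linear layer
# in place of a full LLM checkpoint).
model = nn.Linear(16, 16)
released = {k: v.clone() for k, v in model.state_dict().items()}

# "Continuous training" in the AI 2027 sense: resume gradient steps on
# newly collected data, so the released weights actually change.
opt = optim.SGD(model.parameters(), lr=1e-2)
new_data = torch.randn(32, 16)        # placeholder for fresh training data
loss = model(new_data).pow(2).mean()  # placeholder objective
loss.backward()
opt.step()

changed = any(
    not torch.equal(released[k], v) for k, v in model.state_dict().items()
)
print("base weights changed:", changed)  # True: weights were updated

# "Continuous deployment", by contrast, leaves state_dict() identical and
# only swaps the prompts, tools, or routing around the frozen model.
```

Version bumps of the second kind would satisfy the surface prediction without confirming the underlying training paradigm, which is why this distinction drives the downgrade risk below.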
What Would Change Our Assessment
- Already confirmed. Would strengthen further if labs explicitly described online/continual pretraining paradigms
- Downgrade risk: If evidence emerges that version updates are purely superficial post-training changes with no base model modification (unlikely given observed capability jumps)
Update History
| Date | Update |
|---|---|
| 2025-04 | OpenAI releases GPT-4.1, GPT-4.1-mini, and GPT-4.1-nano simultaneously (April 14) while deprecating GPT-4.5 Preview. Shipping multiple model variants at different capability/cost points together signals that iterative, continuous development is replacing discrete “generation” releases. |
| 2025-05 | Claude Opus 4 and Sonnet 4 released together (May 22). Google Gemini 2.5 Flash at I/O (May 20). Two frontier labs shipped major updates within days of each other. |
| 2025-11 | GPT-5.1 (Nov 12), GPT-5.1-Codex-Max (Nov 18), Gemini 3 six days later, Claude Opus 4.5 six days after that. Four major updates from three labs in under a month. The concept of distinct “model releases” is blurring into continuous iteration. |
| 2025-12 | GPT-5.2 released, the sixth major GPT version in ~8 months. The “Code Red” response to Gemini 3 makes explicit that labs iterate continuously in response to competitive signals; Claude’s release cadence follows a similar pattern. |
| 2026-03 | Continuous/iterative training now standard practice at frontier labs. AI Futures Project graded this prediction as correct. |