About AI Forecast Tracker
What We Do
Everyone predicts AI will change the world. We track the specific claims, score them with evidence, and show you who's actually right.
Each topic on our site represents a specific, time-bound question about AI's impact. We assess the probability using evidence from multiple expert sources, track new evidence as it emerges, and let you weigh in with your own prediction.
Our Methodology
Our probability assessments are inspired by Philip Tetlock's book Superforecasting: The Art and Science of Prediction — the foundational work on how to make better forecasts using structured, evidence-based methods. Key principles we apply:
- Numeric probabilities — every assessment gets a specific number (5-95%), not vague words
- Bayesian updating — we adjust probabilities incrementally as new evidence arrives
- Pre-mortems — for every prediction, we ask “what if this is completely wrong?”
- Base rates — we start with historical frequencies before adjusting with specific evidence
- Source calibration — we track who gets predictions right and weight their input accordingly
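Two of the principles above, base rates and Bayesian updating, can be sketched in a few lines. This is an illustrative toy, not our production tooling; the function name and all the numbers are hypothetical:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Return the posterior P(H | E) via Bayes' rule.

    prior            -- starting probability (e.g. a historical base rate)
    p_e_given_h      -- how likely the evidence is if the claim is true
    p_e_given_not_h  -- how likely the evidence is if the claim is false
    """
    numerator = p_e_given_h * prior
    return numerator / (numerator + p_e_given_not_h * (1 - prior))

# Hypothetical example: start from a 20% base rate, then observe
# evidence that is twice as likely if the claim is true.
prior = 0.20
posterior = bayes_update(prior, p_e_given_h=0.6, p_e_given_not_h=0.3)
print(round(posterior, 3))  # 0.333
```

Note the incremental step: one piece of moderately informative evidence moves 20% to about 33%, not to certainty, which is exactly the kind of gradual adjustment the bullet list describes.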
Source Attribution
We never put words in experts' mouths. When we show a source's prediction, we quote their actual claim verbatim. Any probability we assign is clearly labeled as our assessment of their claim, not something they stated.
How Probabilities Work
| Range | Label |
|---|---|
| 5-15% | Very unlikely |
| 20-30% | Unlikely |
| 35-45% | Roughly even odds |
| 50-60% | More likely than not |
| 65-75% | Likely |
| 80-90% | Very likely |
| 95% | Near certain |
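The table above can be read as a lookup from an assessed percentage to a label. Here is a minimal sketch of that mapping; the `BANDS` structure and `label_for` helper are ours for illustration, and snapping the gaps between bands to the nearest band edge is an assumption, not a documented rule:

```python
# Bands mirror the table above: (low %, high %, label).
BANDS = [
    (5, 15, "Very unlikely"),
    (20, 30, "Unlikely"),
    (35, 45, "Roughly even odds"),
    (50, 60, "More likely than not"),
    (65, 75, "Likely"),
    (80, 90, "Very likely"),
    (95, 95, "Near certain"),
]

def label_for(pct: int) -> str:
    """Return the label whose band contains pct, or the nearest band."""
    def distance(band):
        lo, hi, _ = band
        if lo <= pct <= hi:
            return 0
        return min(abs(pct - lo), abs(pct - hi))
    return min(BANDS, key=distance)[2]

print(label_for(40))  # Roughly even odds
print(label_for(85))  # Very likely
```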
How We Research
Our research uses the latest AI deep research capabilities — including Google Gemini Deep Research and Anthropic Claude's extended thinking — to synthesize information across hundreds of sources, verify claims, and cross-reference data points. These tools let us process and analyze far more evidence than manual research alone would allow, while every conclusion is reviewed and validated by a human.
Who's Behind This
AI Forecast Tracker is an independent research project created by me, Ondrej Malacka, an Engineering Director based near Brno, Czechia. I got tired of AI predictions that were never specific enough to be wrong — so I started tracking them with numbers, evidence, and clear resolution criteria. The project combines expert analysis with crowd forecasting to separate signal from hype.