AI Forecast Tracker

Nobody Will Believe Anything They See Online by 2028

Will public trust in online media (news, video, images) fall below 25% in major democracy surveys by end of 2028?

When nobody can tell what's real online, it doesn't just affect media — it affects elections, courts, markets, and your ability to trust what you see.

Target: Dec 2028 (1,030 days until resolution)
Assessed Probability
50%
Even odds
Based on 3 expert predictions and 4 evidence items
We're entering what experts call a 'collapse of trust' era — but not for the reason you might think. The real threat isn't that AI makes convincing fakes. It's the 'liar's dividend': when anyone can generate photorealistic video of anything, real footage becomes dismissible as 'probably AI.' UNESCO calls this a 'crisis of knowing.' Venezuela's 2026 political crisis was amplified by deepfakes that made it impossible to determine what was real, and AI voice cloning has already been used to impersonate CEOs for fraudulent wire transfers.

The acceleration matters here too: inference costs falling roughly 200x per year mean deepfake generation becomes nearly free. Anyone with a laptop can now produce content that would have required a Hollywood studio five years ago.

But trust in media was already at historic lows before AI — Gallup has tracked this decline for decades. AI accelerates the fall; it didn't cause it. Content provenance standards (C2PA) are emerging, but adoption is slow.

Scenarios

Current value: Gallup: trust in mass media at 31% (2025) — already near historic lows

S-curve position: Late decline phase — trust has been falling for years, AI adds acceleration

Bear Case

Below 20% (AI deepfakes trigger major democratic crisis, no effective countermeasures)

Base Case

25-30% (continued decline, accelerated by AI but not catastrophically)

Bull Case

Trust stabilizes above 30% (content provenance standards work, regulation effective)

How We'll Know

What we measure
Public trust in online media as measured by major survey organizations (Gallup, Edelman, Reuters Institute)
Confirmed if
Trust in online media falls below 25% in 2+ major democracy surveys by end 2028
Refuted if
Trust in online media remains above 35% in major surveys, OR shows improvement
Data sources
  • Gallup media trust survey
  • Edelman Trust Barometer
  • Reuters Institute Digital News Report
  • Pew Research media trust data
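The resolution rule above is mechanical enough to sketch in code. This is a minimal illustration only: the `resolve` function and the survey readings are hypothetical, and the "shows improvement" branch of the refutation criterion is omitted because it needs a baseline year to compare against.

```python
# Hypothetical sketch of this forecast's resolution criteria.
# Survey names and values are illustrative placeholders, not real 2028 data.

def resolve(readings: dict[str, float]) -> str:
    """readings maps survey name -> trust-in-online-media percentage."""
    below_25 = [s for s, v in readings.items() if v < 25.0]
    if len(below_25) >= 2:
        return "confirmed"    # below 25% in 2+ major surveys
    if all(v > 35.0 for v in readings.values()):
        return "refuted"      # trust remains above 35% in every survey
    return "unresolved"       # mixed readings: neither criterion met

example = {"Gallup": 24.0, "Edelman": 23.5, "Reuters Institute": 29.0}
print(resolve(example))  # -> confirmed (two surveys below 25%)
```

Note the gap the criteria leave open: readings between 25% and 35% confirm nothing and refute nothing, so the question could plausibly end 2028 unresolved.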

Evidence Trail

Evidence For

  • Mar 7, 2026

NBC: experts warn of a 'collapse of trust online.' UNESCO: deepfakes creating a 'crisis of knowing.' Venezuela 2026: AI deepfakes undermined political accountability. AI voice cloning used for CEO impersonation fraud. Gallup: media trust already at 31%. Detection consistently lags generation capability.
    → Probability: 40%

  • Mar 7, 2026

Deepfake files projected at 8M in 2025 (16x since 2023). 40% of all biometric fraud attempts now use deepfakes. Human detection accuracy on high-quality deepfakes is only 24.5%. Deloitte: GenAI-enabled fraud could reach $40B by 2027. Italian police froze ~€1M after an AI-cloned voice of the defence minister was used to target business leaders. Canadian election: 5.86% of election images were deepfakes. Audio deepfake detectors reach 88.9% accuracy but drop 45-50% on newer adversarial deepfakes. Inference cost collapse makes generation nearly free.
    → Probability: 45%

  • Mar 9, 2026

xAI's Grok generated ~3M sexualized images in 11 days. During the 2026 Middle East conflict, AI deepfakes fueled misinformation on both sides — fabricated videos gained millions of views. The White House released an AI propaganda video mixing real military footage with Hollywood/anime imagery. The EU AI Act mandates watermarking for AI content (Aug 2026). Open-weight models enable offline generation of phishing, deepfakes, and malware beyond any safety guardrails.
    → Probability: 50%

Evidence Against

  • Mar 7, 2026

Trust was already declining pre-AI — this may be a continuation, not an acceleration. Content provenance standards (C2PA) are gaining traction. Regulation is responding: South Korea's AI Basic Act, China's Deep Synthesis rules, the EU AI Act. People developed literacy for photoshopped images and may adapt to AI the same way. And 25% is a very low threshold.

How Our View Evolved

  • Mar 9, 2026 — 45% → 50%

    Grok 3M deepfake images in 11 days. State-level AI propaganda (White House video). AI deepfakes in Middle East conflict. Voice cloning fraud in Italy. Real-world harm incidents accelerating.

  • Mar 8, 2026 — Initial assessment: 45%

    Baseline — initial published assessment

What Experts Say

Gartner

Technology research and advisory firm

Track record: 7/10
30% of enterprises will abandon facial verification by 2026 due to AI-generated deepfakes
Feb 2024 | report
We assess this claim at 55% — slightly more likely than not

Deloitte Center for Financial Services

Financial services research arm of Deloitte

Track record: 7/10
GenAI-enabled fraud could reach $40 billion by 2027
May 2024 | report
We assess this claim at 50% — even odds

Siwei Lyu

Professor, University at Buffalo; deepfake detection researcher

Track record: 7/10
2026 is the year everyday people will be fooled by deepfakes
Dec 2025 | interview
We assess this claim at 70% — likely

What Could Go Wrong

Content provenance technology (C2PA, watermarking) works well enough to maintain baseline trust. Regulation effectively limits deepfake distribution. People develop 'AI literacy' similar to how they learned to spot Photoshop. The decline was already happening — AI doesn't accelerate it meaningfully beyond the existing trend.
