Nobody Will Believe Anything They See Online by 2028
Will public trust in online media (news, video, images) fall below 25% in major democracy surveys by end of 2028?
When nobody can tell what's real online, it doesn't just affect media — it affects elections, courts, markets, and your ability to trust what you see.
Scenarios
- Current value: Gallup puts trust in mass media at 31% (2025), already near historic lows
- S-curve position: late decline phase; trust has been falling for years, and AI adds acceleration
- Below 20% (AI deepfakes trigger major democratic crisis, no effective countermeasures)
- 25-30% (continued decline, accelerated by AI but not catastrophically)
- Trust stabilizes above 30% (content provenance standards work, regulation effective)
How We'll Know
- What we measure: public trust in online media as measured by major survey organizations (Gallup, Edelman, Reuters Institute)
- Confirmed if: trust in online media falls below 25% in 2+ major democracy surveys by end of 2028
- Refuted if: trust in online media remains above 35% in major surveys, or shows improvement
- Data sources:
  - Gallup media trust survey
  - Edelman Trust Barometer
  - Reuters Institute Digital News Report
  - Pew Research media trust data
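The resolution criteria above can be sketched as a simple check. This is a minimal illustration, not an official resolution script: the function name, inputs, and the simplification that any single survey above 35% counts toward refutation are all assumptions for the sketch.

```python
def resolve(trust_by_survey: dict[str, float]) -> str:
    """Apply the stated resolution criteria to end-of-2028 survey readings.

    trust_by_survey maps a survey name to its reported trust
    percentage in online media (hypothetical inputs).
    """
    below_25 = [s for s, v in trust_by_survey.items() if v < 25.0]
    above_35 = [s for s, v in trust_by_survey.items() if v > 35.0]
    if len(below_25) >= 2:
        return "confirmed"   # below 25% in 2+ major surveys
    if above_35:
        return "refuted"     # remains above 35% (simplified: any survey)
    return "ambiguous"       # lands between the two thresholds


# Hypothetical 2028 readings:
readings = {"Gallup": 24.0, "Edelman": 23.5, "Reuters": 28.0}
print(resolve(readings))  # two surveys below 25% -> "confirmed"
```

Readings between 25% and 35% resolve as "ambiguous", which mirrors the gap the criteria deliberately leave between the confirmation and refutation thresholds.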
Evidence Trail
Evidence For
- Mar 7, 2026
NBC: experts warn of 'collapse of trust online.' UNESCO: deepfakes creating 'crisis of knowing.' Venezuela 2026: AI deepfakes undermined political accountability. AI voice cloning used for CEO impersonation fraud. Gallup: media trust already at 31%. Detection consistently lags generation capability. → Probability: 40%
- Mar 7, 2026
Deepfake files projected at 8M in 2025 (16x since 2023). 40% of all biometric fraud attempts now use deepfakes. Human detection accuracy on high-quality deepfakes is only 24.5%. Deloitte: GenAI-enabled fraud could reach $40B by 2027. Italian police froze ~€1M after an AI-cloned voice of the defence minister was used to target business leaders. Canadian election: 5.86% of election images were deepfakes. Audio deepfake detectors reach 88.9% accuracy but drop 45-50% on newer adversarial deepfakes. Collapsing inference costs make generation nearly free. → Probability: 45%
- Mar 9, 2026
xAI's Grok generated ~3M sexualized images in 11 days. During the 2026 Middle East conflict, AI deepfakes fueled misinformation on both sides; fabricated videos gained millions of views. The White House released an AI propaganda video mixing real military footage with Hollywood/anime imagery. The EU AI Act mandates watermarking for AI content (Aug 2026). Open-weight models enable offline generation of phishing, deepfakes, and malware beyond any safety guardrails. → Probability: 50%
Evidence Against
- Mar 7, 2026
Trust was already declining pre-AI — this may be continuation, not acceleration. Content provenance standards (C2PA) gaining traction. Regulation responding: South Korea AI Basic Act, China Deep Synthesis rules, EU AI Act. People developed literacy for photoshopped images — may adapt to AI. 25% is a very low threshold.
How Our View Evolved
- Mar 9, 2026: 45% → 50%
Grok 3M deepfake images in 11 days. State-level AI propaganda (White House video). AI deepfakes in Middle East conflict. Voice cloning fraud in Italy. Real-world harm incidents accelerating.
- Mar 8, 2026: Initial assessment: 45%
Baseline: initial published assessment
What Experts Say
Gartner
Technology research and advisory firm
“30% of enterprises will abandon facial verification by 2026 due to AI-generated deepfakes”
Deloitte Center for Financial Services
Financial services research arm of Deloitte
“GenAI-enabled fraud could reach $40 billion by 2027”
Siwei Lyu
Professor, University at Buffalo; deepfake detection researcher
“2026 is the year everyday people will be fooled by deepfakes”