AI and the Industrialization of Brainwashing

Manos Tsagkias

18 January 2026
Keywords: outreach

As Bill Gates outlines how 2026 could shape decades ahead in education, health, energy, AI, and philanthropy [0], I’d add one more footnote:

The danger of AI enabling a worldwide brainwashing machine.

(And no—I don’t mean that ChatGPT, Gemini, or any single model suddenly “goes evil.” The risk is more subtle and far more systemic.)

The combination of multimodal AI (text, audio, and video, including deepfakes), engagement-driven social media, precise cross-platform tracking of our interests, and the quirks of human decision-making (Ariely’s Predictably Irrational: relativity in choice, the decoy effect, emotion, the endowment effect) creates the technical conditions for highly potent, personalized content capable of steering public opinion. The capability already exists; what remains uncertain is the scale of its eventual impact.

There is precedent. The Economist reports that a growing number of governments and political actors hire private firms to generate content designed to influence audiences [1]. This activity is not confined by borders: an organization in Greece could pay a company to push a message to 40-year-olds in Paraguay. With generative AI, the same message can be endlessly adapted—one version for liberals, another for conservatives—each optimized to fit a group’s filter bubble and amplified by networks of coordinated or fake accounts.

AI makes this scalable across demographics and languages at near-zero marginal cost. A narrative can be tailored and launched worldwide within hours. If the final message is too far from today’s consensus, intermediate versions can be released gradually, shifting opinion step by step, much like morphing one image into another. Over months or years, the progression can become impossible to trace, especially given how quickly social media buries the past.

This has the potential to become a global propaganda machine. Unlike older systems controlled by a single state, anyone with enough money can now buy influence. Democracy, foreign policy, even social cohesion risk being shaped by whoever funds the loudest narrative. To me, this is one of the most immediate ways AI can be weaponized against society.

Researchers caution that fears about AI-driven misinformation can be overstated and that evidence of large, decisive persuasion effects is still limited [4]. Yet even modest shifts, when repeated across millions of people and coordinated across platforms, could meaningfully distort democratic discourse. The risk is not magic persuasion—it is industrialization: volume, speed, personalization, and persistence.

Platforms have little incentive to stop it, and even when they try, they often struggle [2, 5]. Engagement remains the core business model, and engaging content is not the same as truthful content. Regulation could help, but shadow operators will persist, much as state-linked paramilitary groups have in the past.

So what can we do? One path is strong, transparent fact-checking tools integrated directly into social apps—ideally mandated by regulation [3]. Newsrooms and NGOs are already experimenting with AI-assisted verification, though these tools remain uneven, especially in smaller languages [6]. They won’t be perfect, but imperfect defenses are better than none.

Without safeguards, we may see history rhyme: populist propaganda that once led the world to catastrophe will again find a frictionless path to every screen on Earth.


References

[0] The Year Ahead 2026 – Gates Notes

[1] A growing number of governments are spreading disinformation online – The Economist

[2] Israel-Hamas Conflict Was a Test for Musk’s X, and It Failed – Bloomberg

[3] Can We Develop Herd Immunity to Internet Propaganda? – Der Spiegel

[4] Misinformation Reloaded: fears about the impact of generative AI are overblown – Harvard Kennedy School Misinformation Review

[5] Social Media Platforms Were Not Ready for Hamas Misinformation – CSIS

[6] Generative AI is already helping fact-checkers, but less useful for small languages – Reuters Institute, Oxford

More on online propaganda from the Oxford Internet Institute.