How Facebook Enables Dictators

In an age where truth is under siege and democracy teeters under the weight of manipulated narratives, one of the greatest threats to freedom isn’t found in a government building or foreign battlefield, but inside the algorithms of Meta.

Earlier this month, Meta, the parent company of Facebook, quietly notified users of a sweeping policy change: it will now use your public posts, photos, captions, and comments to train its generative AI systems, including Meta AI and its powerful new LLaMA 4 model. Most users likely skimmed past the notification, unaware that their online lives were about to become training fuel for artificial intelligence models capable of spreading misinformation faster and more persuasively than any propaganda machine in history.

This isn’t a tech story. It’s a democracy story. And it should terrify every citizen who values truth, agency, and freedom.

The Weaponization of Personal Expression

Meta’s policy effectively allows it to turn your everyday expressions, like your humor, grief, joy, anger, political opinions, and cultural references, into data points for machines that can simulate human language with chilling accuracy. The company assures us that only “public” information will be used, but in an age of digital permanence, what is truly private anymore?

Worse still, Meta continues this data extraction even if you object. If someone else mentions you, tags you, or shares an image of you, you’re still swept into the training set. This isn’t consent. It’s coercion by design.

But the deeper problem is what these models are being trained to do.

AI as a Megaphone for Authoritarian Propaganda

In my research paper on LLMs and their susceptibility to disinformation, Meta’s LLaMA 4 ranked worst among leading AI models at identifying and rejecting Russian propaganda. In testing, the model repeatedly accepted falsehoods pushed by Kremlin disinformation campaigns as fact, from lies about Ukraine’s sovereignty to conspiracy theories about Western governments.

This isn’t a bug. It’s a crisis.

AI systems trained on massive pools of unfiltered public data, riddled with falsehoods, conspiracies, and extremist rhetoric, begin to mirror the digital noise of our information landscape. And when those systems become tools in the hands of powerful actors, whether rogue states, political operatives, or attention-hungry influencers, they don’t correct the record. They amplify the chaos.

The implications for liberal democracies are existential. Propaganda no longer needs to be broadcast in broken English from fringe websites. It can now emerge in perfect, empathetic, believable language, crafted by AI, shaped by your data, and targeted to your vulnerabilities.

Psychological Profiling at Scale

Perhaps most disturbing is the emergence of psychological profiling as an engine of AI training. Every comment, like, and caption is a thread in the tapestry of your digital identity. These threads are now being harvested to train systems that understand not just what people say, but how they think.

What makes you angry. What scares you. What keeps you scrolling.

This data, your data, is shaping models that can craft emotionally charged narratives, simulate convincing personalities, and deliver tailored content engineered to trigger emotional reactions and override critical thinking.

The same tools used to sell you sneakers or streaming services can now be used to erode your belief in elections, institutions, and each other. In the wrong hands, this isn’t marketing. It’s psy-ops.

The Collapse of Consent and Accountability

Meta claims you have the right to object. But this “opt-out” system is a façade. It assumes users have the time, awareness, and understanding to navigate an intentionally opaque process. Most won’t. And even if they do, the protection is weak.

If you appear in a public photo? Still used. Mentioned in a friend’s post? Still used. Tagged in a meme? Still used.

This is surveillance capitalism at its most extractive: a system where your life is mined for value but you have no meaningful control over the results. And unlike traditional surveillance, which is hidden, this is sold as a feature.

Freedom Eroded by a Thousand Data Points

Let us be crystal clear: this is not a debate about convenience or innovation. This is a battle over whether democratic societies can survive the unchecked power of unaccountable tech giants who harvest human experience and hand the resulting weapon to anyone who can wield it.

Meta is not a neutral player. By refusing to take responsibility for the behavior of its AI, and by building systems that absorb rather than resist manipulation, it is functionally enabling the enemies of democracy, foreign and domestic.

If LLaMA 4 cannot reliably distinguish between fact and authoritarian fiction, it is not just flawed. It is dangerous.

What We Demand

  1. A full moratorium on using public data for AI training without explicit opt-in consent.
  2. Transparent auditing of AI outputs for susceptibility to propaganda and misinformation.
  3. A public accountability framework for companies whose AI systems influence political discourse.
  4. Independent oversight, not self-regulation, of how tech companies interact with democratic institutions.
  5. Immediate safeguards to prevent authoritarian regimes from exploiting these tools.

That’s why we previously wrote two related articles: “Critical Thinking Is Not Enough, We Need Verified Facts” and “We Need An Information Authority to Defend Truth.”

Final Word: This Is About Survival

If this seems alarmist, good. Alarm is appropriate.

We are witnessing the construction of a psychological weapon aimed inward, built with our own words, emotions, and identities. And it is being handed to the highest bidder, whether that’s an advertiser, an algorithm, or an autocrat.

Our freedoms depend not only on what we can do, but what others can do to us. In the age of AI, that equation is changing faster than we can respond. We cannot afford to wait.

The battle for democracy will not be fought in parliaments or polling places alone. It will be fought in code, data, and algorithms. Right now, Meta is building the battlefield, and it’s doing so with your help, whether you know it or not.