
The Fake News Factory

We are entering a new era of social media where everyone lives in digital silos and the cost of spreading falsehoods is next to nothing.

KIA ORA. IT'S WEDNESDAY, AND THIS IS DIGITAL POLITICS. I'm Mark Scott, and this week's edition comes to you from New Zealand. I'm taking a couple of weeks off, so the next newsletter (for paying subscribers) will hit inboxes on July 14.

I'm trying something different this week.

Ahead of the 2024 global megacycle of elections, I had the idea of explaining the digital tactics that have become all too common in how politicians get elected, from Pakistan and Portugal to the United Kingdom and the United States.

Life, however, got in the way. (The best I did was this package around artificial intelligence, disinformation and elections.) So, I'm taking another crack at explaining how we all now live in the Fake News Factory.

Let's get started:


The democratization of online tools and tactics

THE LAST DECADE REPRESENTED the second generation of social media. It was an era when the shine had significantly come off Facebook and Twitter (now X). It was a time of repeated whistleblower revelations that tech giants knew their content algorithms were pushing people toward polarizing and extremist content. It was a time when politicians seriously commercialized these platforms, bombarding would-be voters with billions of dollars in collective ad buys.

That era is now over. It's not that Facebook and YouTube are no longer important. They are — especially YouTube, which has transformed itself into a global rival to traditional television, upending the advertising industry and fundamentally reshaping how anyone under 30 consumes video content. But where the 2015-2025 period was primarily defined by the dominance of a small number of Silicon Valley platforms, we're now in an era where fringe platforms, niche podcasts and the likes of vertical dramas have divided people into small online communities that rarely interact with each other.

This was happening before 2025. But we have reached an inflection point in how the online information ecosystem works. It has shattered into a million pieces, with people gravitating toward like-minded individuals on siloed platforms. There is no longer a collective set of facts or events that forms the foundation for society. Instead, most of us seek out opinions that already reflect our worldview, often demonizing those we disagree with in an "othering" that only fuels polarization, misunderstanding and, potentially, offline harm.

And you thought this would be an uplifting newsletter.

Thanks for reading the free monthly version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.

Here's what paid subscribers read in June:
— Debunking popular misconceptions around platform governance; The demise of the open, interoperable internet is upon us; How oversight over AI has drastically slowed since 2023. More here.
— Internal fighting among Big Tech giants has hobbled any pushback against antitrust enforcement; It's time to rethink our approach to tackling foreign interference; Tracking Europe's decade-long push to combat online disinformation. More here.
— Why the G7 has always been a nothing-burger on tech policy; You should keep an eye on 'digital public infrastructure' in the battle around tech sovereignty; The United Kingdom's expanding online safety investigations. More here.
— The US is sending seriously mixed messages about its approach to tech policy; How the UK became the second place in the world to mandate social media data access; Artificial intelligence will upend how we consume news online. More here.

This fragmentation in how people consume online content has made it next to impossible for foreign interference operations to flourish like they once did. See, there's a positive point. Even two years ago, the Russians could flood the zone with Kremlin talking points and receive a significant bump in online interactions. The Chinese never went in for that sort of thing — though they have progressively targeted Western audiences, mostly with overt propaganda in support of the Chinese Communist Party.

Now, such efforts are almost certainly doomed to fail. The siloing of social media usage has been married with a demand for authenticity — that sense of belonging and insider knowledge that can only come from deep roots in communities that can smell out an imposter from a mile off. That authenticity is something foreign (covert) campaigns routinely fail at. State-backed operations don't know the insider lingo; they don't have the long-standing credibility built up over months or years; and they don't have the personal ties required to embed fully in a balkanized social media landscape.

But where state-backed actors remain a threat is in the amplification of existing domestic influencers, often via automated botnets and other AI-powered tools aimed at juicing social media giants' recommender systems. The companies say they are on top of these covert tactics. But every time there's a massive global political event (or local election), Kremlin-backed narratives keep popping up in people's feeds — often via local influencers whose views just happen to align with Moscow's. These individuals are mostly not connected with Russia. But they have likely received a boost from Kremlin-aligned groups seeking to spread those messages to the widest audience possible.

It's about domestic, not foreign

IN TRUTH, STATE-BACKED ACTORS are a very public sideshow to the main event driving ongoing toxicity within the information environment: domestic actors. Be they influencers, scammers, politically aligned media or, ahem, politicians, they are the key instigators of much of the current toxicity. Many of these domestic players see some form of benefit from spreading harm, falsehoods and, in some cases, illegality online. That, it should be added, is then amplified by social media platforms' algorithms, which have been programmed to entice people to stay on these networks, often by promoting the most divisive content possible.
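To make that incentive concrete, here's a toy sketch (entirely hypothetical weights and numbers, not any platform's actual ranking code) of a feed ordered purely by predicted engagement. Nothing in the objective mentions divisiveness, yet because outrage reliably earns more clicks and attention, it rises to the top anyway:

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    predicted_clicks: float  # model's estimated click probability
    predicted_dwell: float   # estimated seconds of attention

def engagement_score(post: Post) -> float:
    # Optimize only for time-on-platform; divisiveness never appears
    # in the objective, but it correlates with both inputs.
    return 0.6 * post.predicted_clicks + 0.4 * post.predicted_dwell

posts = [
    Post("Local bake sale this weekend", 0.02, 4.0),
    Post("THEY are destroying everything you love", 0.11, 19.0),
    Post("City council approves new bus route", 0.03, 6.0),
]

# The outrage post tops the feed without anyone explicitly choosing it.
for post in sorted(posts, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):5.2f}  {post.text}")
```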

Such a dynamic has been around for years. It isn't a left- or right-wing issue — though repeated studies have shown that conservative social media users promote more falsehoods than their liberal counterparts. It's a basic fact that domestic social media users know their audiences better than foreign influence campaigns do, and that they have greater credibility with siloed local audiences than Russia, China or Iran.

What has shifted, though, is the ability for almost anyone to run a domestic influence campaign — or, you know, a mainstream political campaign — as if they had the resources of the Kremlin-backed Internet Research Agency. Over the last five years, the toolkit required to skew social media has become readily accessible and significantly cheaper than it once was. That has been spurred on even more by the rapid growth of AI-enabled tools (more on that below). But everything from a Bangladesh-based bot farm to a Philippines-based dark-arts public relations firm has now become an off-the-shelf product that can be bought via a few clicks on a public-facing website.

This shift has not gone unnoticed by criminals. In 2025, the highest volume of attacks in the (Western) information environment now comes from those seeking to dupe social media users out of money — and not to alter their political allegiances. Yes, the impact on politics can have significantly bigger effects. But the rise of "financial disinformation" (frauds and scams promoted on social media) has reached epidemic proportions.

Collectively, such digital efforts to swindle people out of money now cost billions of dollars a year, and even that is likely a significant underestimate. It's also directly linked to a crime (aka fraud) when scammers buy social media adverts to convince people to sign up for Ponzi and other get-rich-quick schemes. I did a quick search, via Meta's ad library in six different countries, for such financial scams, and found a prolific amount of advertising promoting such disinformation. Some of it was blatantly illegal, some of it was not (I'm not linking to it to avoid amplification). But the fact such scam artists are openly flouting the law should be a worry for us all.
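For readers who want to replicate that kind of sweep, here's a minimal sketch of querying Meta's Ad Library API via its ads_archive endpoint. The token, search phrase and country list below are placeholders, and note the public API is largely limited to political and issue ads, so a broader scam search like mine may require the web interface instead:

```python
import requests

ACCESS_TOKEN = "YOUR_TOKEN"  # placeholder; requires Meta developer access
COUNTRIES = ["US", "GB", "DE", "FR", "BR", "IN"]  # six illustrative markets

def search_ads(term: str, country: str) -> list[dict]:
    """Return ads reaching one country whose creative text matches `term`."""
    resp = requests.get(
        "https://graph.facebook.com/v19.0/ads_archive",
        params={
            "search_terms": term,
            "ad_reached_countries": f'["{country}"]',
            "ad_type": "ALL",
            "fields": "page_name,ad_creative_bodies,ad_delivery_start_time",
            "limit": 100,
            "access_token": ACCESS_TOKEN,
        },
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json().get("data", [])

for country in COUNTRIES:
    hits = search_ads("guaranteed returns", country)  # illustrative scam phrase
    print(country, len(hits))
```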

This democratization of disinformation has only gone from bad to worse with AI tools. Be it voice-cloning technology to spoof a victim, AI-generated images attacking a political opponent or next-generation video software that creates falsehoods from scratch within minutes, the cost of generating toxicity, hate and polarization is now almost zero. Yes, these tools can also generate joy, laughter and entertainment. But the last six months have seen a rapid rise in AI-generated slop that is quickly moving from easy to detect to indistinguishable from the real thing.

Trust me, I'm a regulator

THIS YEAR MARKS THE FIRST TIME ON RECORD that several countries' online safety rulebooks are in full operation. Yes, Australia got things started almost five years ago. But with the European Union's Digital Services Act and the UK's Online Safety Act, the Western world has the first signs of what a well-resourced regulatory environment looks like when it comes to keeping people safe online.

Sigh.

It's not that the European Commission and Ofcom (disclaimer: I sit on an independent advisory committee at the British regulator, so anything I say here is in a personal capacity) aren't doing their best. They are. It's just that both are fighting a 2020 war against perceived threats within the online information environment, and haven't kept pace with the fast-evolving tactics, some of which I outlined above.

To a degree, the time lag is understandable. Regulators are always going to be behind the curve on the latest threats. Both agencies are still staffing up and learning the ropes of their new rulebooks. How successful either the EU or the UK will be in making their online worlds safer for citizens won't be known for at least five years.

But there have been some serious mistakes, especially from the European Commission. Let's leave aside the political nature of the first investigations under the Digital Services Act. And let's leave aside the internal bureaucratic infighting that was always going to arise from such a powerful — and well-resourced — piece of legislation.

For me, the biggest error was how Ursula von der Leyen framed the new rules as almost exclusively a means of combating Russian interference. That was done primarily to secure her second term as European Commission president. But the characterization of the Digital Services Act as an all-powerful mechanism to thwart the Kremlin's covert influence operations has continued well into this year — most notably in the two presidential elections in Romania.

Let's be clear. These online safety rules are many things. But, at their heart, they are wonky, bureaucratic and cumbersome mandatory requirements for platforms to abide by their own internal policies against illegal content. They are not about Russian disinformation. And they certainly are not about censorship.

Weaponization and unknown unknowns

And that takes me to the final big concern within the Fake News Factory: the weaponization of online safety rules. Since 2016, there have been those within the US who have pushed back hard against platforms' efforts to quell illegal and abusive content. That has spiraled into conspiratorial claims that a "Censorship Industrial Complex" — made up of governments, social media giants and outsiders — is trying to illegally silence predominantly right-wing voices, often via new online safety legislation.

US President Donald Trump's administration has made it clear what it thinks of these rules — and has pushed back hard. It has threatened retaliatory tariffs against countries with online safety rules on the books. It has threatened to bar from entering the country anyone who allegedly tries to censor Americans. It has accused both the UK and the EU of infringing on Americans' First Amendment rights.

These attacks against what are, essentially, legal commitments obligating companies to live by their own internal rules — and to demonstrate that they have done so — are now part of the conversation in other Western countries. That includes (mostly) right-wing lawmakers across Europe seeking to weaken these online safety rules, accusing others of censoring conservative viewpoints and mimicking many of the long-standing talking points from their US counterparts.

It's true, particularly during the pandemic, that social media companies made content moderation decisions with imperfect facts. Some posts were unfairly removed or downranked as these firms responded, in real time, to government efforts to amplify scientifically accurate information. But the rise of conspiracy theories insinuating a mass censoring of online voices just wasn't borne out by the evidence at hand. And that remained true even after repeated reports from the US House of Representatives select subcommittee on the weaponization of the federal government.

If there were evidence of such abuse, I would be the first to champion those findings. But as we enter the second half of the year, there is one core fact underpinning everything I've written so far: no one has a clue about what happens on these platforms.

Long-time Digital Politics readers will have heard me go on about this for months — and, to be fair, it's part of my day job to look into this issue. But how the complex recommender system algorithms interact with people's individual posts, paid-for advertising and wider efforts to influence people online remains a black box. What I have outlined above, for instance, is based on my own research, what I understand anecdotally about how these platforms work and discussions with policymakers, tech executives and other experts.

The Fake News Factory is my own imagining of how the current online information ecosystem interacts with and shapes the world around us. But without better awareness of the inner workings of these platforms — via mandatory requirements that these firms open themselves up to independent scrutiny, transparency and accountability — that imagining will remain incomplete, at best.

We are entering a new generation of social media marked by limited awareness, mass balkanization and an increasing politicization of what should be the clear objective of keeping everyone safe online. How long this era will stick around is anyone's guess. But, for now, the Fake News Factory remains as strong as ever.


What I'm reading

— The Organization for Economic Cooperation and Development analyzed the so-called age assurance policies from 50 online services — most of which did not have checks in place. More here.

— The team at the DSA Observatory did a deep dive into how individuals, non-profit organizations and consumer groups can bring private enforcement actions under the EU's Digital Services Act.

— The UK's Competition and Markets Authority laid out its rationale for why it had designated Google as having so-called "strategic market status" under the country's new digital antitrust rules. More here.

— OpenAI submitted recommendations to the upcoming US AI Action Plan. The words "freedom" and "PRC" are mentioned repeatedly throughout. More here.

— Researchers at USC Annenberg looked at how the media covered the negative side of social media/technology, and found that the companies are rarely blamed. More here.