
Platform governance: myths vs reality

There's an increasingly political battle raging over what happens on social media worldwide. Here's what is really going on in 2025.

WELCOME BACK TO ANOTHER DIGITAL POLITICS. I'm Mark Scott, and I've joined a yearlong taskforce to help the United Kingdom figure out how to implement its upcoming social media data access rules. We're looking for a PostDoc to help with that work — check that out here and spread the word (or apply!)

— There are a lot of misconceptions about nascent rules to oversee social media. Let's unpack what is really going on in 2025.

— The way the internet currently operates is under threat. That should worry us all.

— The pace of AI oversight has slowed significantly over the last two years. Here's a chart that explains why.

Let's get started.


MANY OF US SPEND A LOT OF TIME ON SOCIAL MEDIA. There's no shade in that. It's just the truth. And despite all the great things that TikTok, Instagram and YouTube can do, there are also some serious downsides. That includes: automated algorithmic recommender systems designed to keep us scrolling; masses of coordinated spam and inauthentic behavior; hate speech, terrorist content, and child sexual abuse material; foreign interference, most notably targeted at elections.

The thing with social media is that it's a tool. One that can be used for both good and bad. But as we hurtle toward the middle of 2025, we're living in a bizarro world where there are a lot of half-truths and falsehoods about regulator- and platform-led efforts to quell the bad stuff online. Ironically, these platform governance efforts are designed to weed out half-truths and falsehoods before they cause real-world harm.

Without getting too binary, the dividing line falls between those who believe any form of platform governance equates to illegal attempts to quell people's legitimate free speech rights and those who believe platforms must do more to stop the spread of harmful content online. The split has taken on an increasingly geopolitical turn after the Donald Trump administration made repeated claims that international platform governance regimes — in places like the European Union, United Kingdom and Brazil — were unfairly targeting US citizens and firms.

The truth is a lot more mundane than that. In reality, the likes of the EU's Digital Services Act and UK's Online Safety Act (disclaimer: I sit on an independent board advising the UK's regulator, so anything I say here is done so in a personal capacity) are almost exclusively transparency efforts to 1) get social media companies to explain how they go about their business; 2) allow regulators to hold these firms to account for their internal policies; and 3) provide an opportunity for social media users to seek redress if they believe their content has been mishandled.

Thanks for reading the free monthly version of Digital Politics. Paid subscribers receive at least one newsletter a week. If that sounds like your jam, please sign up here.

Here's what paid subscribers read in May:
— Here are three short-term ways to boost transatlantic ties on tech; the long-term effects of annulling Romania's presidential election because of alleged online hijinks; the democratic world's media ecosystems are degrading quickly. More here.
— How Washington is giving up its role as a leader within international tech policymaking; how Brussels has embraced its inner "MAGA" to promote a more muscular approach on digital to the world. More here.
— Western policymakers are shifting away from greater oversight of AI; the India-Pakistan conflict demonstrates how online falsehoods still affect offline conflict; companies are more responsible than governments/users for combating online hate speech. More here.
— The tide toward digital sovereignty will lead to increased digital barriers between countries; Romania's presidential election was rife with online shenanigans; the cost of using the most advanced AI models has fallen 280-fold. More here.
— Europe and the US are making it easier for AI companies to operate without oversight; transatlantic data flows are still on shaky ground; digital sovereignty may lead to a 4.5 percent reduction in the global economy. More here.

That has led to a series of uber-wonky risk assessments and independent audits, under the EU's platform governance regime (see here), that outline — in too much detail, frankly — all the machinations social media companies go through to comply with their existing terms of service. Only if these firms do not live up to these internal checks and other now legally-mandated platform governance rules will regulators step in. London is expected to carry out its more targeted risk assessment requirements in the coming months.

So far, only X, formerly Twitter, has been issued a so-called statement of objections, or charge sheet, under the Digital Services Act. I have no inside knowledge of when a final decision will come. But, given that Brussels announced its preliminary findings last summer, I would expect X to face a fine and series of remedies by September. If that does happen, expect pushback from Washington over claims that Brussels is running a protectionist, free speech-bashing racket to promote its own interests because it can't create its own global tech champions. Sigh.

Such accusations miss the point. They miss the point because if you scratch just a little below the surface, existing platform governance rules are mostly failing. What regulators are quickly finding out is that social media companies are complex machines — ones that, as of 2025, are taking a more adversarial approach to how outsiders question how they function. That means regulatory agencies, many of which have newly-minted, but as yet untested, powers, are finding themselves with too few resources to really get under the hood to mitigate potential harm.

What that has led to — as we saw in Romania's aborted first-round presidential election last year — is an often kneejerk reaction by politicians eager to use platform governance regimes in ways they were never intended to be used.

No, these online safety regimes are not about limiting people's speech. No, they are not about combating (Russian) foreign interference. No, they are not about hobbling a growing economic adversary (read: the United States) via hefty fines. They should be viewed as a legally-mandated reminder to social media giants that they need to abide by their own internal content policies.

In reality, the current state of play for most Western platform governance rules is middling, at best. Most regulators are still underpowered to do their jobs. Much of the actual oversight is not yet in place. And the growing adversarial stance of many social media platforms means that much of the goodwill necessary to conduct routine, daily oversight is waning quickly (at least in public).

Before I get angry emails, it's also true we are still very early into these untested regimes. No one has ever tried to institute mandatory oversight for such globe-spanning platforms before. It was never going to be easy, simple or without failure. I would argue you need to view "success" over a 3-5 year time horizon — not via a first volley of fines coming out from Brussels, London or elsewhere.

What I do worry about, however, is that for most citizens in countries with these platform governance rules, there is little, if any, evidence that their online experiences have improved under such greater oversight. Confronted with a growing minority who claim such regulation is censorship, the failure to show average voters any meaningful upside to these rules leaves an opening for those who would weaponize any form of platform governance as an illegal threat to fundamental free speech rights.

I have been critical of online safety rules — including the growing desire to implement so-called age verification technology to ensure minors do not get access to certain types of content online. But when you take a hard look at what the likes of Brussels and Canberra are doing on platform governance, there's just no way to view that work as illegal censorship. At best, it's a faulty effort to require global tech companies to be transparent, equitable and forthcoming about how their existing internal safeguards are designed and implemented to keep users safe online.

Contrast that with a number of US state-based rules aimed at reducing access to online content — often for somewhat valid reasons — and it's hard not to feel troubled.

Currently, 17 US states have implemented age verification rules to stop children from accessing porn sites within their borders. That is a valid choice for policymakers to make. But it also means, given the draconian nature of age verification that requires everyone to prove they are of age before accessing such material, these sites have also become inaccessible to adults — in a way, I would argue, that is an unfair restriction of their rights to access information.

It's weird to find myself in the position of defending pornography. I make no moral judgement on such material. But I do worry that in an effort to protect children from this material, US policymakers — and, now, many of their international counterparts — are entering a platform governance quagmire that gives lawmakers too much say over what can, and can't, be accessed over the internet.

At their imperfect best, platform governance rules are a tool to hold social media companies to account for what they have promised to do to keep people safe online. It is about transparency and accountability — and nothing more.

But this shift toward verifying who can access content via the internet is something that we should all be concerned about. It is an overreach — often done in the name of protecting children — that veers from the basic tenets of platform transparency toward the creation of a growing list, designed by politicians, of content that people cannot access online.


Chart of the Week

REGULAR DIGITAL POLITICS READERS will already be aware of how policymakers' interest in setting new guardrails for artificial intelligence has significantly dwindled in recent months.

But researchers at Stanford and Harvard devised this handy timeline for global AI governance that shows a steady stream of national, regional and international efforts to corral AI — until 2025.

What has come, so far this year, is primarily White House executive orders, including one that revoked Washington's previous pledge to promote AI safeguards.

The blue bars equate to "soft laws," the yellow bars to "hard law proposed," the green bars to "hard law passed" and the red bars to "hard law revoked."

Source: Sacha Alanoca, et al.

The open internet is in jeopardy

FOR THOSE OF YOU WHO AREN'T GENERATION X (or maybe Elderly Millennials), you won't remember a time before the internet was omnipresent. But the digitally-focused social and economic boom over the last 30 years — as well as some serious problems created by technology — has been predicated on a few basic underlying principles. Those date back to the early days of the world wide web, and focus on: 1) a decentralized and bottom-up oversight of the online world; and 2) a multi-stakeholder model that includes companies, civil society groups, academics and governments working together.

This two-pronged strategy allowed for a rapid growth of internet usage, the creation of international protocols to ensure underlying infrastructure could communicate with one another, and the ability for anyone (caveat: with the technical and financial resources) to participate in what had been, in many ways, a golden era of digitally-powered economic and social gain.

That structure, however, is now in serious jeopardy.

When government officials, industry executives and civil society groups meet in Lillestrøm, Norway, for the annual United Nations Internet Governance Forum later this month, the future of such a decentralized, inclusive model for how the internet will develop over the next decade will be in question. I get it. It's hard to get excited or animated about the inner workings of internet governance policy. But stick with me.

Ever since China became a serious player in the world of technology twenty(ish) years ago, Beijing and its authoritarian allies have been pushing for greater government control over how the internet works. That makes sense. If you're a state that has a say over almost all aspects of citizens' lives, then giving them carte blanche to surf the web — often finding information about how repressive these regimes actually are — is not going to work.

In response, the US has led the pushback against such government control. In part, that is an economic choice to support American tech firms that have been the main beneficiaries of a free, interoperable internet. But it also plays to Washington's long-standing pledge to promote human rights and democratic values worldwide — including via the internet.

That status quo is going through a major transition. China (and, to a lesser extent, Russia) has been successful in wooing other countries, many from the Global Majority, to its cause. That's why the UN's recent Convention against Cybercrime was such a worry. It placed countries, for the first time, in the driving seat on internet governance policy, a shift that many fear will become the new normal.

At the same time, the US is forgoing its traditional leadership position in promoting a more open, multi-stakeholder approach to internet governance. Much is still up in the air with how the Trump 2.0 administration will approach these topics. But initial negotiations — often in opaque UN gatherings most of us have never heard of — linked to new internet standards have seen Washington pull back from its vocal backing of how the current internet model functions.

This shift will not be felt overnight. But, gradually, countries are taking on a stronger governance position in the world of technology that is fundamentally different from how these networks developed over decades. Many politicians want greater control over digital at a time when local citizens are becoming more digitally-aware.

Without the US leading the charge to keep the internet open, interoperable and multi-stakeholder, it's likely China will have an easier time pushing its top-down approach in the years to come. That would be a mistake — one that will be felt across the world unless democracies (beyond the US) start to advocate publicly for a return to how the internet has worked for generations.


What I'm reading

— The US State Department chastised European countries for failing to uphold their democratic principles, including related to online speech. More here.

— The Institute for Strategic Dialogue goes deep on what the "manosphere" is and its impact on online culture. More here.

— The Canadian Centre for Child Protection produced a podcast series about Project Arachnid that detects and removes child sexual abuse images. More here.

— Australia's eSafety Commissioner wrote a position paper on so-called "immersive technologies" and their potential harms. More here.

— Researchers unpack the difficulties in accessing social media data via the EU's Digital Services Act. More here.