
The year ahead: what I'm watching (part one)


WELCOME TO DIGITAL POLITICS. I'm Mark Scott, and this is how I spent my Saturday. I got up, had breakfast, and spent more than four hours waiting to buy Oasis tickets. And then, after all that, Ticketmaster's algorithm raised the prices through the roof. In honor of that, I give you this.

As I start my new job this week, I've been thinking a lot about what's ahead for the rest of 2024 when it comes to digital policymaking. My mind works by theme, not country/region, so I've broken down what I'll be watching over the next four months.

Couple of caveats. This is an ever-changing list, and one that skews more toward the developed world than so-called Global Majority countries. It's also not complete. If you think I've missed something, let me know at digitalpolitics (at) protonmail (dot) com.

Let's get started.


Social Media

THE EUROPEAN UNION'S POLITICAL MACHINERY is starting to whir back into gear after the summer break, and the bloc's content regulators are in for a busy period. The European Commission already has enough investigations and charges linked to its Digital Services Act to keep the boffins working overtime until December. Add to that: ongoing political efforts, by some, to push Europe's social media rules in specific directions before Ursula von der Leyen, the incoming Commission president, doles out top assignments for her next five-year tenure in the Berlaymont building.

Yet these glitzy public shows of strength are not where the true action will take place. Much of the detail required for Europe's new rules (known, in the lingo, as Delegated Acts) has yet to be published, and several of these acts are expected in the coming months. National EU regulators (again, in the parlance, referred to as Digital Services Coordinators) also must hire scores of experts; devise how to work with each other on cross-border cases; and unpick the regulatory reality from what Brussels created with its untested Digital Services Act.

In short, there's a lot of detail still to be hammered out at a time when von der Leyen wants to show Europe has its act together on digital. Add in ongoing foreign interference campaigns and potentially new charges against certain Big Tech firms (my bet: Meta), and Brussels' policing of social media will soon look like building the plane in mid-flight.

Jumping across the English Channel, the United Kingdom equally has work to do on its own rules, known as the Online Safety Act. London went a slightly different route than Brussels with its social media rules. But it is expected to publish so-called 'codes of practice,' or specific guidance on how to comply with the new provisions, by the end of the year.

What's complicating this effort, though, is a recent spate of racist violence — often spurred on via social media posts — that has renewed local politicians' desire to revamp the Online Safety Act, even before it has truly got underway. Let's be clear: social media did not instigate this summer's riots in Britain. But the country's regulator, the Office of Communications, or Ofcom, is under mounting pressure to act.

Two countries worth tracking are Canada and Australia. Ottawa may finally move ahead with its (long-troubled) Online Harms Act, while Canberra is currently reviewing that country's Online Safety Act. Nothing is certain. But any legislative movement from these countries would add to the pressure within democratic countries for greater checks on social media.

And that takes us, finally, to the United States. For those who have followed my career, I remain massively skeptical Washington will do anything on digital policymaking, no matter who wins the White House in November.

Within the Beltway, social media and content moderation have become culture war issues — something that has left the US vulnerable to both foreign and domestic threats ahead of the upcoming nationwide vote. Don't expect Congress to act. Federal agencies like the Federal Election Commission have also dragged their feet on enforcing existing rules on, say, online influencers who take money from political candidates to promote campaigns online.

As with almost all digital issues in the US, my interest lies where actual laws are made: the courts (and, to a lesser degree, US state legislatures).

Last week, for instance, a federal appeals court released an opinion arguing TikTok could be held legally liable for harmful content posted on its network because of how the company's complex algorithm served up such material to users. That would potentially undermine a nearly 30-year-old law, known colloquially as Section 230, that gives these networks immunity from what others post on their sites.

If the appeals court's view holds up (and that's a big 'if'), then it may open the floodgates to lawsuits from social media users against Big Tech firms over accusations that the companies helped inflict harm on those individuals.

Thanks for reading Digital Politics. As a reminder, the first month of this newsletter will be free. If you like what you read, please sign up here — I'm offering a discounted subscription between now and Sept 30. Happy reading!

Privacy

WE'RE STAYING IN WASHINGTON WHERE all eyes (at least in the data protection world) are focused on whether the Kids Online Safety Act, or KOSA, will become law before the end of 2024. That bill and another piece of legislation, known as the Children and Teens' Online Privacy Protection Act, or COPPA 2.0, made it through the US Senate over the summer.

If you want to go deeper, here are good overviews (KOSA and COPPA 2.0). The basic gist is that companies would be banned from targeting advertising at children and would not be able to collect minors' data without consent. Social media companies would also owe a so-called 'duty of care' to under-18s using their services. An example: designing algorithms so that harmful content, like material promoting eating disorders, is not shown to kids. It mirrors, to a degree, similar requirements already in force in the UK.

Time is running out on both privacy-centric bills, especially as the US House of Representatives has not picked up the mantle on either proposal. It's very unlikely that either the Democrats or the Republicans will win both the House and the Senate in November, so any path forward is expected to include a mad-dash effort, possibly in the so-called lame-duck session after the election.

My best guess: we won't see either bill become law this year, and Washington will again prove it remains second-tier on any form of digital policymaking.

On the other side of the Atlantic, two battles are underway within Europe when it comes to privacy, though they are linked.

For years, many have grumbled the bloc's General Data Protection Regulation, or GDPR, has punched below its weight. They look at how the rules are enforced — in a system that gives Ireland and Luxembourg massive clout over Big Tech firms, mostly because these countries are where the companies are headquartered — and say Dublin and Luxembourg are just not doing enough.

I would point them, as a counterpoint, to the billions of dollars in collective fines doled out by these two jurisdictions in recent years. But it is true that Europe's privacy revamp, now six years old, has not been the Big Tech slayer many had hoped for.

Expect that battle (between those who want tougher enforcement and those who prefer the status quo) to again surface when the new European Commission takes over in mid-November. A recent Commission report flagged issues with how GDPR had been enforced, and many privacy campaigners — let alone disgruntled national regulators — would like to overhaul the current enforcement system.

Their goal: to centralize cases against the likes of Meta, TikTok and Amazon, so that Brussels, not Dublin or Luxembourg, runs the show. Yet Commission officials acknowledge there is little appetite to enter yearslong negotiations to redo Europe's privacy laws, meaning this fight will be more bark than bite.

And that takes us to the second front: the European Data Protection Board. This pan-EU group of national regulators has played host to bitter battles (if that's possible?) between domestic enforcers scrapping over who has the final say in enforcing the bloc's rules. That push has been led by a number of campaigning agencies (looking at you, German data protection authorities) that don't think Ireland and Luxembourg are doing enough.

In Europe's labyrinthine privacy enforcement structure, these two national capitals, technically, have final say over fines and remedies. But there's a (super complex) appeals process baked into the European Data Protection Board that gives that body's secretariat a lot of power to settle such disputes. In case after case, these officials have sided with a more aggressive approach to enforcing Europe's privacy standards.

That doesn't mean it will be smooth sailing. Meta, for instance, is now suing the European Data Protection Board (here and here) over claims this pan-EU group overstepped its powers. It's unclear how successful the company will be. But the question remains: who gets the final say on enforcing arguably the Western world's most important data protection regime?


Chart of the Week

Freedom House, a nonprofit organization, ranked how countries approached digital issues related to domestic elections. The darker the red, the worse the ranking.


Artificial Intelligence

AND SO WE COME TO AI. The first item on my list is not the most important policy topic linked to this (overhyped) technology. But both the US and the European Commission (on behalf of the EU) will sign the Council of Europe's Framework Convention on Artificial Intelligence in Vilnius on Sept. 5.

Many in civil society grumble the proposals have been gelded (example: massive carve-outs for national security and defense) to the point of making them useless. But the framework does represent the first legally binding international treaty aimed at corralling harms associated with AI. It certainly will not be the last attempt.

In Europe, officials and companies are still scrambling to figure out how the bloc's Artificial Intelligence Act is going to work. Add into the mix: a somewhat connected 'AI Pact,' or set of voluntary commitments that Brussels wants firms to sign up to before much of the AI Act actually comes into force over the next two years.

Companies have been asked to sign up by mid-September ahead of a flashy ceremony planned for the end of the month. The non-binding pledges include: 1) put processes in place to ward against harm; 2) ensure humans oversee technology that will fall under the AI Act; and 3) include markers that let users know when they are shown AI-generated content.

The Commission's AI Office, a short-term effort to get ahead of the thorny AI enforcement questions, still needs to hire experts and find ways to work better with its international counterparts. European countries, too, need to figure out which (over-stretched) local regulators will be tasked with part of the AI Act's oversight.

In the US, much will depend on who wins the White House, given how differently Donald Trump and Kamala Harris have defined their approaches on AI. Legislative efforts by some in the US Senate also must be taken with a pinch of salt, given Washington's meager digital legislative track record.

For me, the $64 million question instead relates to the future of the so-called US AI Safety Institute, an agency within the US Department of Commerce tasked with promoting public safety and scientific rigor on questions about AI safety. The body was first announced at the UK's AI Safety Summit last year and follows in the footsteps of a British entity with the same name, whose remit equally involves overseeing the latest AI models.

But whereas the Brits gave their body an initial budget of roughly $130 million (albeit one that will likely be slashed, due to that country's economic woes), its American counterpart is still struggling for money. Over the summer, the US Congress earmarked $10 million for the AI Safety Institute — a tiny fraction of what is being spent, privately, on AI research across the US.

Despite the lack of funding, the agency's workload is not slowing down. Last month, it signed agreements with both Anthropic and OpenAI to run pre-release tests on the companies' future AI models in the name of public safety. Doing that properly will take well-paid and knowledgeable personnel who can get under the hood of arguably some of the most complex large language models out there.

For those who believe AI represents an existential risk to humanity (caveat: my take is that this view is still overblown), a cash-strapped institute that could soon be culled under a Trump 2.0 administration is just not a good look.

How well the agency handles its first outings with OpenAI and Anthropic, respectively, will be a key indicator of whether such so-called co-regulatory oversight of complex AI systems — in the name of public safety — is a model fit for purpose.

Stay tuned for the second half of my 2024 watchlist, which will drop into your mailboxes on Sept 3. My goal is to keep Digital Politics to a maximum of 2,000 words, hence the double newsletter this week to cover all the topics. If you want to sign up, click here. You can also email me at digitalpolitics (at) protonmail (dot) com.