
The AI election global boomerang

WELCOME BACK TO DIGITAL POLITICS. I'm Mark Scott, and I am an unashamed coffee snob. So, as we shift from summer to the fall, I've already heard the four words that I dread this time of year: pumpkin spice latte season.

Some logistics from me. I'm going to be in Brussels Sept. 19-20, then again on Oct. 8-9. Followed by Washington (most likely the second half of November), and Berlin on Nov. 20-21. If you're around and want to meet, ping me here or here.

Let's get started.


Global Majority, AI and Elections

A JAILED POLITICIAN RUNS TO BECOME HIS country's next leader via AI-generated election rallies and speeches. A deceased lawmaker urges voters to support his son in another country's election through videos created with artificial intelligence tools. An alleged war criminal rebrands himself as a cuddly cartoon — powered by images created with AI services — on his way to winning his country's presidency. These are all clear examples of how artificial intelligence played a significant role in recent elections in Pakistan, India and Indonesia, respectively.

At a time when the over-hyped threat of AI undermining this year's megacycle of elections worldwide is (hopefully) subsiding, it's time to drill down on how this emerging technology is affecting political campaigns globally. For me, one thing is clear: the real innovation when it comes to AI and politics is happening in the so-called Global Majority countries — and not in places like France, the United Kingdom and, in November, the United States.

Sure, we have seen sporadic — and well-publicized — efforts to use ChatGPT and its rivals to woo voters in these developed countries. We've even seen efforts to skew elections via so-called deepfake videos and audio clips, most starkly in Slovakia's parliamentary elections late last year. But let's be honest: many of these efforts were quickly debunked. Political campaigners also tell me that even the wonkier AI use cases (involving analyses of voter databases via bespoke large language models) are still about two years away from being any good. In short: so far, AI has not been a decisive factor in these countries.

Yet that hasn't stopped election after election from Southeast Asia to South America from showing clear signs that AI tools can play a significant — mostly public — role in political campaigning. First, a disclaimer: I'm not an expert in the domestic political situation of any of these countries. But from discussions with scores of digital rights campaigners, election officials and some politicians, what has become clear is that many campaigns in Global Majority countries have basically skipped one, if not two, generations of political technology to become a true hotbed for AI innovation.

To use an imperfect metaphor: it's similar to how these countries leapfrogged desktop computing, going from so-called 'dumb phones' straight to high-speed internet-enabled smartphones within a few years. It's a breakneck transition that has left many heads spinning. Next up: expect many of these AI tactics to boomerang back into the West.

"I think there is a lot of concern because it's uncharted terrain," Janet Love, vice-chairperson of South Africa's Election Commission, told me when I asked about her agency's approach to quelling AI-powered online disinformation. Thankfully, that country's election in May didn't see much of that nastiness (although there was a fair share of online falsehoods and gender-focused abuse). Still, Love knows we're at the start of this trend, not the end of it. "All of us are feeling a huge need for greater capacity and expertise," she added.

The low cost of AI tools is certainly a factor. If you can subscribe to many of these services for less than $30 a month, it's a no-brainer to use them when you need reams of politically motivated images, videos and audio to blast at would-be supporters online. And while many of the Big Tech firms behind these tools have made public statements about stopping harmful uses of their technology, (voluntary) oversight of those pledges is patchy, at best.

Without seeking to diminish the importance of Global Majority countries, these markets are often too small and too obscure to get the necessary attention required to stop AI harm from happening. It's one thing for Kamala Harris' campaign to use AI tools to woo voters — something that would garner global attention. It's something else when it's, say, Moldova, which holds a presidential election and referendum on joining the European Union in late October.

For Nighat Dad, executive director of Digital Rights Foundation, a nonprofit organization in Pakistan, the main takeaway from how AI was used in that country's election in February was that it made clear how successful the technology could be in winning over supporters. "It had a huge impact," she told me. "It influenced people's minds." Now, she added, all political parties rely on artificial intelligence to create both public-facing messages and covert efforts to harm opponents. She has even seen it shift toward gender-based violence, with local female actors having their images used to create realistic deepfake pornography.

"An after-effect of generative AI in our elections is that a lot of women in the public eye have become victims of deepfakes," said Dad. Still, in Pakistan, few are willing to talk about such dangers. "No one is talking about how (AI) will affect humans," she added. "They only want to focus on the opportunities."

Thanks for reading Digital Politics. As a reminder, the first month of this newsletter will be free. If you like what you read, please sign up here — I'm offering a discounted subscription between now and Sept. 30. Happy reading!

Chart of the Week

People worldwide were asked whether social media represented a positive or negative force for democracy in their countries.

Those in Global Majority countries, on average, had a more optimistic view of social media, while those aged 40 and older, globally, held a more pessimistic outlook on the technology.

Source: Pew Research Center https://shorturl.at/K3C44

How to Really Combat Disinformation

IT'S NOT EVERY DAY THAT LAWS ABOUT money laundering and trademark infringement are used to take down a widespread Russian influence operation. But when US Attorney General Merrick Garland took to the stage last week, that's exactly what happened. In one fell swoop, the US government seized 32 websites that had pretended to be the likes of the Washington Post and Fox News to peddle Moscow's disinformation and propaganda to unknowing Americans.

In conjunction, the US Treasury Department also sanctioned senior Russian officials — including Margarita Simonyan, the head of state-backed media outlet RT — for their role in the deception. It's not the first time Washington has taken such steps. But the designation is aimed at making it more difficult for those individuals to participate in the global order (it's arguable whether these efforts have teeth).

This marks the latest chapter in the so-called 'Doppelganger' campaign, which dates back to at least 2022 and involves Russian-linked groups using fake versions of mainstream news outlets to dupe online users from the UK and Germany to France and, now, the US. In this latest episode, two Russian citizens were also indicted for funneling $10 million into a Tennessee-based company that created scores of pro-Russia social media content — without disclosing Moscow's involvement. Since November 2023, this clandestine campaign has posted almost 2,000 videos that received roughly 16 million views on YouTube, according to the US government. The company also peddled Kremlin propaganda on TikTok, X and Instagram. For a deeper dive, my colleagues have you covered.

There's a lot to unpack here. But Washington's sting is not the only game in town. Earlier this year, the EU also took action to quell this Russian influence operation. But instead of using decades-old trademark legislation, the 27-country bloc turned to its new social media rules, known as the Digital Services Act. It also didn't go directly after those running such covert campaigns, including the scores of fake websites that pretended to be top-tier European media outlets like Der Spiegel and Le Monde. In Europe, the investigation's target was Meta, which may have infringed the Continent's digital rules by allowing Russian operatives to use its platforms to spread disinformation and deceptive advertising.

It feels odd that, when confronted with a known influence operation, Europe went after the amplifier (Meta) and not the creator (Russia). But the one-two punch between Washington and Brussels offers a stark example of how differently the two sides of the Atlantic approach the influence threat.

In the US, law enforcement shut down websites, sanctioned individuals and indicted culprits. It was a one-off hit against what is likely a larger operation, with officials using the powers on hand to whack one mole (out of many). In Europe, the focus was more systemic, forgoing dawn raids on Russian spooks to pinpoint how the Kremlin's propaganda reached social media users. Meta contends (rightly) that these efforts barely resonated with everyday people. But for EU enforcers, the question is about the tech giant's role in how such disinformation was spread, not about why these fake websites were able to spring up in the first place.

There are a lot of reasons why Europe took this approach. For one, not every EU country views Russia as a threat, so coordinating between national law enforcement agencies can be tricky. Brussels has already directly sanctioned Kremlin-backed media outlets like RT and Sputnik. But the bloc doesn't have the collective (investigatory) resources of the US Department of Justice and Treasury Department to hit Moscow's agents where it hurts, and EU law enforcement agencies arguably don't have the same global reach as their American counterparts.

I try to be balanced about how Washington and Brussels approach such issues. But it's hard to look at US authorities' recent actions and not conclude that, in the short term, their all-guns-blazing approach — based on decades-old legislation — is a better tactic for fighting Russian online interference than what European officials are trying to do.

In days, Kremlin-backed fake websites were shut down, people were indicted, and leading Russian officials were put on sanctions lists. In Europe, despite the months-long investigation into Meta, 'Doppelganger' sites are still active, and their messages are still spread on social media (mostly on X, formerly Twitter). No one has yet been criminally indicted.

For me, the differing takes on Doppelganger illustrate how Brussels and Washington could work together on combating these influence operations. US authorities have the power to take immediate action on the underlying threat, while EU officials can take a more long-term systemic approach to quelling how such disinformation is spread online.

That would involve hard-nosed American law enforcement agencies working with as-yet-untested European social media enforcers, each coming from different legal traditions. But such a joined-up strategy could be more effective than working alone.


I'm still playing around with the format of this newsletter, so please bear with me. If you want to sign up, click here. I'm also thinking about setting up a WhatsApp community group if that would be useful to you. You can email me at digitalpolitics (at) protonmail (dot) com.