
Australia's digital rulemaking


G'DAY. THIS IS DIGITAL POLITICS. I'm Mark Scott, and when it comes to playing the long game, it turns out that Spain (and, most notably, the Spanish Armada) is in a class of its own. Bien hecho, compadres.

Let's get started:

— We're going deep on Australia's approach to digital policymaking this week, particularly when it comes to online platforms, gender-based digital violence and the obligations that social media giants must uphold for their users.

— That includes two interviews. One is with Michelle Rowland, Australia's minister for communications. The other is with Julie Inman Grant, who heads the eSafety Commissioner, the country's online safety regulator.

— Also, despite almost no evidence of AI being used at scale ahead of the upcoming elections in the United States, Americans are still freaking out about the emerging technology.


Punching above its weight

AUSTRALIA OFFERS A PRIME EXAMPLE OF HOW a relatively small country can punch above its weight on digital rulemaking. That's not to knock the 26 million proud Australians (some of whom are readers of this newsletter). It's just that the global digital policymaking discussion often centers on the European Union, United States and China. Almost everyone else — including reams of countries in the so-called Global Majority — gets left out.

But Australia and its current center-left prime minister, Anthony Albanese, are making a name for themselves. The country had already pushed through a deal, under the previous government, that allowed local publishers to be paid when their content appeared on social media. That model, similar to what already existed in Europe, has now been copied in Canada — much to the annoyance of Big Tech.

But it's Albanese's plans, announced on Sept. 10, to set a mandatory age limit for accessing social media that caught my attention. Sure, technically, people are supposed to be at least 13 years old before signing up to Instagram and TikTok (does anyone even use Facebook anymore?). But we all know that's a pretty low fence to climb, despite companies' recent efforts to keep those pesky kids off social media.

You have to see these latest plans in a wider context. By the end of October, a review is expected to be submitted on the country's Online Safety Act, or rules dating back to 2021 that tackle online harms like terrorist content and child sexual abuse material. The Act also mandates specific industry codes of conduct that are now being copied in both the EU's and the United Kingdom's separate legislation. In short, for a pretty small market, Australia has had an outsized impact. It offers a glimpse, for other smaller countries, of what happens when you combine online safety rules with a regulator willing to push those laws as far as possible.

When I met Michelle Rowland, the politician in charge of the country's online safety regime, at Australia's embassy in Brussels earlier this year, the former telecommunications lawyer was in full saleswoman mode. "In a short space of time, the breadth and the scale of new and emerging harms is quite startling," she told me, surrounded by a massive folder of briefing notes and seated across an exceedingly large boardroom table. "Whilst these massive organizations, with turnovers bigger than the GDP of some nation states, have considerable market power, we have a regulatory regime that recognizes market power, and where it is being inappropriately exercised."

Rowland was finishing up a whistle-stop tour of London and Brussels — two capitals whose separate efforts, the United Kingdom's Online Safety Act and the EU's Digital Services Act, both borrow from and diverge from what Australia is trying to do. "We don't have the EU's member states. There's only one minister for communications in Australia, at the federal level," she said when I asked how Australia's social media rules compared with their international counterparts. "The UK has taken a long period of time to get (its rules) through. We've had ours for some time now." Her basic gist: Australia's approach is more centralized than the EU's, and further ahead than the UK's on implementation.

The Australian politician was adamant that all regimes shared a similar ethos. "I do see the EU as being, again, a balancing of the need to address those harms, but also, the principles that our democracies hold, including freedom of speech and political expression are very important," she added. All lawmakers now fear accusations of creating a so-called "Ministry of Truth" where dissenting voices are censored online. Personally, I see those accusations as more of a political trick to tilt the social media scales in favor of more hyper-partisan users.

Rowland would not be drawn on where the review of Australia's rules would lead. And, to be clear, our conversation happened before Albanese announced his new age-focused social media proposals this month. But heftier penalties for companies that fail to meet their legal obligations are one option. "Firstly on fines, it's one of the issues that we're examining in the OSA review," she said bluntly. "We will complete this review in this term of government. This term notionally concludes around May next year, so we do have time frames for that (aka: new legislation.)"

The core question is this: does she believe social media companies are doing enough, under the current Australian rules, to keep people safe online? "More needs to be done by platforms to keep Australians safe," Rowland answered when I put that to her. "These platforms know their consumers. They know their users better than anyone. Our philosophy, in this area, is that online safety needs to be a collective responsibility between governments, regulators, civil society and the industry."

Thanks for reading Digital Politics. As a reminder, the first month of this newsletter will be free. If you like what you read, please sign up here — I'm offering a discounted subscription until Sept. 30. Happy reading!

Chart of the Week

With just over a month to go before the US elections, a sizable minority of Americans polled believe artificial intelligence will be used for "bad," while almost two-thirds of those individuals remain concerned AI will be weaponized to create and share false information.

As of Sept. 30, little if any evidence exists that AI-generated political content in the US has had a meaningful impact on voters' decisions.

Source: Pew Research Center https://shorturl.at/PE8lf

The Aussie regulator on the front line

JULIE INMAN GRANT IS A CLASSIC POACHER-TURNED-GAMEKEEPER. The American worked for both Microsoft and Twitter (before the Elon Musk era) before jumping ship, in 2017, to head a new Australian agency, the eSafety Commissioner. In recent months, the Boston University graduate picked a fight with Musk over how X handled online falsehoods linked to a deadly knife attack in Sydney. Her agency lost. Inman Grant had previously fined the social media company over $400,000 for failing to explain how it was handling child sexual abuse material. X has appealed.

But let's not dwell on X. When I spoke to Inman Grant, it was soon after deepfake pornographic images of Taylor Swift started doing the rounds online in early 2024. My view on this is clear: deepfake porn is political violence. But I had called the head of Australia's eSafety Commissioner because she has championed the need for a serious approach to gender-based online violence for years. Her agency also deals with some of the nastiest online content out there. That includes so-called 'sextortion,' particularly targeted at teenage boys; horrific online child sex abuse, sometimes beamed live via mainstream video streaming services; and direct attacks on people across the country via online tools and social media that have led to real-life harm.

"It's sexualized, it's violent, it's rape threats," she told me when I asked her what types of issues, particularly those directed at women online, had come across her desk. "It's about killing your children. It's about your appearance and your weight and your supposed virtue. Gender double standards that still hold currency in our society."

One thing the eSafety Commissioner doesn't yet have is serious fining capabilities to hold companies to account. Yes, Inman Grant's agency has investigatory and legal powers to force social media companies, internet service providers and others peddling blatant harm to be more open about how they handle such awful material. But to maximize the regulator's reach, the American former tech executive has not been shy about naming and shaming companies directly connected to enabling this digital abuse.

It's a potential lesson for others. Yes, Australia's online safety regime has been around for longer than almost everyone else's rules. But when confronted with an immediate 'no' from companies asked to be more transparent about their practices, it's sometimes helpful to speak loudly about where you can effect change — all while holding out for potentially greater enforcement powers when the review of the Online Safety Act is finally published. My takeaway: companies care a lot about the court of public opinion.

"I don't think just responding to harms after they happen is going to be sufficient, particularly given how powerful these tools are, and how quickly they have proliferated," Inman Grant said, in reference to how AI-generated awfulness has started to percolate to the surface. "We're gonna have to look much more ex ante, and have the companies prove to us that they're building in safety protections that are efficacious and effective throughout the model."

That should ring alarm bells with social media and AI companies alike. So-called ex ante rulemaking is the cause célèbre among digital enforcers. It allows agencies to step in before harm actually happens and demand firms show how they are protecting their users. The goal: to stop bad practices before they take hold, rather than enforcing after the damage is done.

Just a reminder: I'm offering discounted subscriptions to Digital Politics until Sept. 30. If you like what you read, please sign up here. If you want to chat, I'm on digitalpolitics@protonmail.com.

"We're never going to arrest or regulate our way out of this," she added. "But we do need to send a message that governments aren't going to accept this, and I think where we'll probably end up going with our Online Safety Act review is looking at more ex ante and being able to test the digital guardrails that these companies are putting in place."

Often, digital policy can get lost in translation. The focus on reducing harms and promoting online safety can obscure the fact that real people can be hurt, in the real world, by what happens online. During our conversation, Inman Grant recalled multiple instances when people had turned to her agency for urgent help. The cases involved former partners weaponizing intimate images online; teenagers being duped into handing over increasing amounts of graphic images of themselves; and individuals based outside Australia creating reams of deepfake pornography of people inside the country, without their consent.

"What we're seeing in terms of their ability to respond to systemic issues, like repeat hacking of accounts and the sexual extortion, suggests to us that they really aren't picking up the signals," the American said when I asked how companies were responding to such cases.

Inman Grant has not been immune. Someone also created deepfake pornographic images of her in an attempt to silence her voice online. "It's creepy that the guy was that fixated that he would spend time creating a deepfake, trawling the internet for as many images of me as possible," she added.


They said what now?

"By focusing only on the most expensive and large-scale models, SB 1047 establishes a regulatory framework that could give the public a false sense of security about controlling this fast-moving technology," California Governor Gavin Newsom wrote after he vetoed local legislation, known as SB 1047, that would have placed sweeping safety safeguards on advanced AI systems.

"Smaller, specialized models may emerge as equally or even more dangerous than the models targeted by SB 1047 — at the potential expense of curtailing the very innovation that fuels advancement in favor of the public good."


What I'm reading

— Meta's Oversight Board gives an overview of how AI can be used to make content moderation more equitable and accurate. More here.

— The US State Department and several American tech companies announced more than $100 million in collective funding to help Global Majority countries take advantage of AI. More here.

— The United Nations and the Organization for Economic Cooperation and Development said they would work together on AI governance issues. More here.

— Ireland's data protection regulator fined Meta just over $100 million for privacy violations related to how the company failed to store and protect users' passwords. More here.

— The Institute for Strategic Dialogue analyzed how major platforms had prepared to combat harm and other problems ahead of the upcoming US election. More here.

— The European Commission said more than a hundred companies had signed voluntary pledges, known as the AI Pact, ahead of the bloc's mandatory AI Act coming into full force in 2026. More here.