This past week, Australia said it would fine X (formerly Twitter) for failing to provide information about its efforts to combat child exploitation. According to Australia's eSafety Commission, the social media platform told officials that its automated detection of abusive material had declined after Elon Musk bought the company.[1]
The fine totals 610,000 AUD, which is about 384,000 USD.
Australia’s eSafety Commission Fine
Australia has a national law that requires platforms to disclose what they're doing to fight child exploitation on their services. X did not comply with that law, according to officials. The commission had sent legal notices to X, Google, TikTok, Twitch, and Discord in February requesting details about each company's measures for detecting and removing child sexual abuse content.
Julie Inman Grant, Australia’s commissioner in charge of online safety, said, “Companies can make empty statements like ‘Child exploitation is our top priority,’ so what we’re saying is show us. This is important not only in terms of deterrence in the types of defiance we are seeing from the companies but because this information is in the public interest.”[2]
X’s Child Protection Measures Have Declined
Elon Musk purchased Twitter for $44 billion in October 2022. He has since rebranded Twitter as X and loosened the platform’s content moderation rules in favor of “free speech.” Although the company has told the public that it was suspending hundreds of thousands of accounts for sharing abusive material, such content still persists on the platform.
X told Australian officials that its detection of child abuse material on the platform had fallen from 90% to 75% in the three months after Musk bought the company.[3] Detection rates have since improved, but have not fully recovered.
Both Google and X failed to address all of the regulator’s questions. Google received a formal warning, but X’s noncompliance was deemed more serious, which is why it was fined.
Tech companies take a range of approaches to detecting and eradicating child sexual abuse material, including automated scanning tools. Some companies use these tools only in certain circumstances yet claim to respond to abuse reports within minutes; others may take hours.
X can appeal the fine. Lucinda Longcroft, the director of government affairs and public policy for Google, said that “Protecting children on our platforms is the most important work we do. We remain committed to these efforts and collaborating constructively and in good faith with the safety commissioner, government, and industry on the shared goal of keeping Australians safer online.”
X claimed to have a “zero-tolerance policy” regarding sexual abuse material and a “commitment” to finding and removing such content on its platform. The company said it uses automated software, bolstered by experts working in 12 languages, to detect abusive content.
When asked whether children may be targeted for grooming on the platform, X said that “Children are not our target customer, and our service is not overwhelmingly used by children.” However, X’s chief executive Linda Yaccarino has said that Generation Z is the company’s fastest-growing demographic, with 200 million teenagers and young adults using the platform each month.[4]
X Restored an Account Promoting Child Abuse
In July, an influencer account with half a million followers shared an image of child abuse material on X.[5] According to the poster, the image was intended to draw attention to child exploitation. The post gained 3 million views and 8,000 retweets before it was taken down and the account was suspended.
According to Musk, only staff on the child exploitation team had seen the tweet, and they deleted it. He then restored the account after pressure from other users.
The Asia-Pacific head of global affairs at X, Kathleen Reen, said that “any content that features or promotes that content and abuse is prohibited and will be immediately removed and their accounts permanently suspended.”
Senator David Shoebridge asked X how such a policy aligned with its recent actions regarding the restored account. X’s head of global affairs, Nick Pickles, said that permanent suspension was one option, but that the company had also considered cases where content is shared on an “outrage basis.”
In other words, people sometimes share such content out of outrage because they want to raise awareness. According to X, “there are circumstances where someone shares content but, under review, [we] decide the appropriate remediation is to remove the content but not the user.”
Politicians across the political spectrum did not take that explanation lightly. Labor senator Helen Polley said, “There is no excuse, whether you’re posting something through outrage, which to me just is not logical, that your account should not be permanently suspended. You can see why we don’t have a lot of faith and trust in what you’re telling us here today.”[6]
Key Takeaways
This fine from Australia’s eSafety Commission is just one step toward holding tech giants accountable for moderating the content on their platforms and protecting vulnerable members of society, particularly children.
Sources: