Last month, a Facebook page in Slovakia shared a long-debunked claim that Ukraine’s president had secretly purchased a vacation home in Egypt under his mother-in-law’s name.
Then, a widely shared post that began on Telegram suggested that a parliamentary candidate in the coming election had died from the COVID-19 vaccine – a completely false claim. A political extremist posted a photo of refugees in Slovakia that was doctored to include an African man with a machete.
Disinformation like this has always been a problem, but it has ramped up as Slovakia heads toward an election this coming week. A new European Union law, however, could force the world’s social media platforms to combat it or face fines of up to 6% of a company’s global revenue.
The law – the Digital Services Act – is intended to force social media giants to adopt new policies and practices to address the disinformation that appears on their platforms and goes viral through their algorithms. Its effects could extend beyond Europe, changing how the companies treat users in other countries as well.
The Rise of Disinformation
The machinery of government moves slowly. Years in the making, the Digital Services Act reflects the alarm and unease across Europe over the unfettered flow of disinformation – and its power over public opinion and democracy.
Europe’s effort differs from that of the United States, which is mired in a debate over whether political or legal action is even warranted in shaping how platforms regulate their users’ content. In fact, a federal appeals court recently ruled that the Biden administration violated the First Amendment’s guarantee of free speech by pressing social media companies to remove content.
While the debate is less contentious in Europe, the new law is poised for a major clash with Elon Musk, the owner of X (formerly Twitter). Musk withdrew from a voluntary code of conduct this year, but X will be forced to comply with the new law within the EU market.
The law leaves no stone unturned, but enforcing its rules on some of the world’s richest and most powerful companies is no less challenging for it, particularly when it comes to disinformation on an open forum.
The Challenge with Regulating Social Media
Speech on social media is not shielded by constitutional free-speech protections. Users on each platform are beholden to the company’s policies and community guidelines, but within those rules most users are free to post their views and their own versions of the truth.
For a law to truly control disinformation, regulators would have to establish that a platform had systemic problems that allowed disinformation to flourish and that those problems caused harm – largely untested legal territory.
The European Union’s landmark data privacy law, the General Data Protection Regulation (GDPR), adopted in 2018, has been difficult to enforce. In May, regulators imposed its harshest penalty yet, fining Meta 1.2 billion euros (about $1.3 billion). Meta has appealed, so the litigation could drag on for years.
Still, fines can be an effective motivator for these companies. Dominika Hajdu, the director of the Center for Democracy and Resilience at Globsec, a research organization in Bratislava, said that fines would force platforms to do more in a unified but diverse market spanning an array of countries and languages. They would have to maintain multinational teams to monitor each country – a hard sell unless the companies face hefty penalties for noncompliance.
Currently, the law applies to 19 platforms with more than 45 million users in the EU, including social media giants, shopping and app platforms like Amazon and Apple’s App Store, and search engines like Google. It defines broad categories of harmful or illegal content rather than specific themes or subject areas.
Essentially, social media companies are obliged to give users more comprehensive protections and more information about how their algorithms recommend content – recommendations users can opt out of. The law also calls for an end to advertising targeted at children.
The companies must also submit to independent audits and make their decisions to remove data or content public.
The Threat to Slovakia’s Election
The election in Slovakia is the first in Europe since the Digital Services Act went into effect, making it a useful test of the law’s sweeping ambitions. Other elections are scheduled across the region in the coming months and in 2024.
Officials and experts described the law as a “warning shot” to these platforms about the scrutiny they will undergo, particularly related to the spread of Russian disinformation on major social media sites following the Russian invasion of Ukraine in 2022.
Since the war began, engagement with Russian-aligned content has risen by nearly 90% on YouTube and has doubled on TikTok. Social media platforms have served as an ideal engine for waging an information war, posing risks to public safety and civic discourse.
In the weeks since the law took effect, researchers have documented instances of misinformation, hate speech, or incitement to violence, many of which stem from pro-Kremlin accounts.
Content moderation has challenged social media companies since their inception, and the stakes for democracy and public wellbeing have only grown. Rather than policing individual pieces of content, the Digital Services Act relies on reporting and auditing to drive more robust, systemic protections for users and the public – which could be the start of big changes ahead.