Advocacy Groups Claim Tech Firms Have Failed to Safeguard Elections from Disinformation and Hate Speech

A quarter of the way into the biggest election year in living memory, tech companies are failing the test. Such is the charge leveled by at least 160 rights groups across 55 countries, which are collectively calling on tech platforms to urgently adopt greater measures to safeguard people and elections amid rampant online disinformation and hate speech.

“Despite our and many others’ engagement, tech companies have failed to implement adequate measures to protect people and democratic processes from tech harms that include disinformation, hate speech, and influence operations that ruin lives and undermine democratic integrity,” reads the organizations’ letter, shared exclusively with TIME by the Global Coalition for Tech Justice, a consortium of civil society groups, activists, and experts. “In fact, tech platforms have apparently reduced their investments in platform safety and have restricted data access, even as they continue to profit from hate-filled ads and disinformation.”

In July, the coalition reached out to leading tech companies, among them Meta (which owns Facebook and Instagram), Google (which owns YouTube), TikTok, and X (formerly known as Twitter), and asked them to establish transparent, country-specific plans for the upcoming election year, in which more than half of the world’s population would be going to the polls across some 65 countries. But those calls were largely ignored, says Mona Shtaya, the campaigns and partnerships manager at Digital Action, the convenor of the Global Coalition for Tech Justice. She notes that while many of these firms have published statements on their approach to the election year, they are often vague and lack country-specific details, such as the number of content moderators per country, language, and dialect. Crucially, some appeared to disproportionately focus on the U.S. elections.

“Because they are legally and politically accountable in the U.S., they are taking stricter measures to protect people and their democratic rights in the U.S.,” says Shtaya. “But in the rest of the world, there are different contexts that could lead to the spread of disinformation, misinformation, hateful content, gender-based violence, or smear campaigns against certain political parties or even vulnerable communities.”

When reached for comment, TikTok pointed TIME to a statement on its plans to protect election integrity, as well as separate posts on its plans for elections in individual countries. Google similarly pointed to its published statements on several upcoming contests. Meta noted that it has “provided extensive public information about our preparations for elections in major countries around the world,” including in statements on a number of forthcoming elections.

X did not respond to requests for comment. 

Tech platforms have long had a reputation for underinvesting in content moderation in non-English languages, sometimes to dangerous effect. In India, where elections kick off this week, anti-Muslim hate speech under the country’s Hindu nationalist government has given way to real-world violence. The risks of such violence notwithstanding, observers warn that anti-Muslim and misogynistic hate speech continues to run rampant on social media platforms. In South Africa, which goes to the polls next month, online xenophobia has translated into hostility targeting migrant workers, asylum seekers, and refugees, a trend that observers say social media platforms have done little to curb. Indeed, in an experiment conducted last year by the Cape Town-based human-rights organization Legal Resources Centre and the international NGO Global Witness, 10 non-English advertisements were approved by Facebook, TikTok, and YouTube despite the ads violating the platforms’ own policies on hate speech.

Rather than invest in more extensive content moderation, the Global Coalition for Tech Justice contends, tech platforms are doing just the opposite. “In the past year, Meta, Twitter, and YouTube have collectively removed 17 policies aimed at guarding against hate speech and disinformation,” Shtaya says, referencing research by the non-profit media watchdog Free Press. She adds that all three companies have had layoffs, with some cuts directly affecting teams responsible for platform safety and election integrity.

Just last month, Meta announced its decision to shut down CrowdTangle, an analytics tool widely used by journalists and researchers to track misinformation and other viral content on Facebook and Instagram. The tool will cease to function on Aug. 14, 2024, less than three months before the U.S. presidential election. The Mozilla Foundation and 140 other civil society organizations (including several that signed onto the Global Coalition for Tech Justice letter) have publicly opposed the decision, which they deemed “a direct threat to our ability to safeguard the integrity of elections.”

Perhaps the biggest concern surrounding this year’s elections is the threat posed by AI-generated disinformation, which has already proven capable of producing fake images, audio, and video with alarming believability. Political deepfakes have already cropped up in elections in Slovakia (where AI-generated audio purported to show a top candidate boasting about rigging the election, which he would go on to lose) and Pakistan (where a recording of a candidate was altered to tell voters to boycott the vote). That they’ll feature in the upcoming U.S. presidential contest is almost a given: Last year, former President and presumptive Republican presidential nominee Donald Trump shared a manipulated video using AI voice-cloning of CNN host Anderson Cooper. More recently, a robocall purportedly recorded by President Biden (it was, in fact, an AI-generated impersonation of him) attempted to discourage voters from participating in the New Hampshire Democratic presidential primary just days before the vote. (A political consultant who confessed to being behind the hoax said he was trying to warn the country about the perils of AI.)

This isn’t the first time tech companies have been called out for their lack of preparedness. Just last week, a coalition of more than 200 civil-society organizations, researchers, and journalists sent an open letter to the top executives of a dozen social media platforms, calling on them to take “swift action” to combat AI-driven disinformation and to reinforce content moderation, civil-society oversight tools, and other election integrity policies. Until these platforms respond to such calls, it’s unlikely to be the last.