Denver – Colorado U.S. Senator Michael Bennet, a member of the Senate Intelligence and Rules Committees, which have oversight over U.S. elections, urged the leaders of Alphabet, Meta, TikTok, and X to properly prepare for 2024’s global “Year of Democracy,” when over 2 billion people will go to the polls. Bennet’s letter comes after a troubling wave of layoffs across the social media sector that has jeopardized election safety and content moderation efforts.
“The dangers your platforms pose to elections are not new – users deployed deepfakes and digitally-altered content in previous contests – but now, artificial intelligence (AI) models are poised to exacerbate risks to both the democratic process and political stability. The proliferation of sophisticated AI tools has reduced earlier barriers to entry by allowing almost anyone to generate alarmingly realistic images, video, and audio,” Bennet wrote.
Bennet requested information on the platforms’ election-related policies, content moderation teams – including the languages covered and the number of moderators on full-time or part-time contracts – and tools adopted to identify AI-generated content.
“Democracy’s promise – that people rule themselves – is fragile,” Bennet continued. “Disinformation and misinformation poison democratic discourse by muddying the distinction between fact and fiction. Your platforms should strengthen democracy, not undermine it.”
On Monday, Bennet raised these concerns in the U.S. Senate Select Committee on Intelligence, and on Tuesday, in the U.S. Senate Committee on Rules and Administration. Bennet has pushed for stronger standards to stop the spread of deceptive content online. Bennet was the first senator to propose creating an expert federal body to regulate digital platforms with his Digital Platform Commission Act. In October 2023, Bennet wrote to leading social media platforms to urge them to stop the spread of false and misleading content related to the ongoing conflict between Israel and Hamas. In June 2023, Bennet called on major technology companies to identify and label AI-generated content.
The text of the letter is available below:
Dear Mr. Musk, Mr. Zuckerberg, Mr. Chew, and Mr. Pichai:
With over 70 countries holding elections and more than 2 billion people casting ballots this year, 2024 is the “year of democracy.” Australia, Belgium, Croatia, the European Union, Finland, Ghana, Iceland, India, Lithuania, Namibia, Mexico, Moldova, Mongolia, Panama, Romania, Senegal, South Africa, the United Kingdom, and the United States are expected to hold major electoral contests this year. On Monday, I heard from the heads of the U.S. Intelligence Community that the Russian, Chinese, and Iranian governments may attempt to interfere in U.S. elections. As these and other actors threaten peoples’ right to exercise popular sovereignty, your platforms continue to allow users to distribute fabricated content, discredit electoral integrity, and deepen social distrust.
The dangers your platforms pose to elections are not new – users deployed deepfakes and digitally-altered content in previous contests – but now, artificial intelligence (AI) models are poised to exacerbate risks to both the democratic process and political stability. The proliferation of sophisticated AI tools has reduced earlier barriers to entry by allowing almost anyone to generate alarmingly realistic images, video, and audio.
Recent history demonstrates how rapidly bad actors have adopted generative AI models – and how inadequately your platforms have combated them. In September 2023, the research firm NewsGuard revealed that a network of TikTok accounts used AI-generated voiceovers to spread conspiracy theories; their videos received over 330 million views. In January, New Hampshire voters received an AI-generated robocall of President Joe Biden urging them to refrain from voting in the state’s primary. More than 100 AI-generated videos of U.K. Prime Minister Rishi Sunak were recently promoted on Facebook. In Indonesia – the world’s third-largest democracy – AI-generated videos were quickly adopted by political campaigns and disseminated widely. Evidence suggests that users are wielding these tools to threaten elections around the world – in the U.K., India, Nigeria, Sudan, Ethiopia, Slovakia, and beyond. As people go to the polls in record numbers, you have a responsibility to prevent the misuse of AI tools on your platforms.
Last year’s Slovakian elections offered a window into one possible future. Days before the election – when the media was prohibited from reporting on the contest – an AI-generated audio clip that falsely depicted opposition candidate Michal Šimečka conspiring to purchase votes and rig the outcome flooded the Internet. The entirely fabricated content confused the electorate and undermined Slovakians’ confidence in their political system. Šimečka lost to his Russia-friendly rival Robert Fico.
This spring, the European Union (EU) will hold parliamentary elections in every member state. The stakes are exceptionally high as Russian President Vladimir Putin continues to wage his illegal war in Ukraine, right on the EU’s borders. The bloc is taking steps to combat foreign influence campaigns, and recently passed new legislation granting the EU’s executive arm greater authority to set and enforce rules for digital services. This more muscular approach may insulate European electorates from the worst forms of information warfare.
Beyond your failures to effectively moderate misleading AI-generated content, your platforms also remain unable to stop more traditional forms of false content. China-linked actors used malicious information campaigns to undermine Taiwan’s January elections. Facebook allowed the spread of disinformation campaigns that accused Taiwan and the United States of collaborating to create bioweapons, while TikTok permitted coordinated Chinese-language content critical of President-elect William Lai’s Democratic Progressive Party to proliferate across its platform.
In Mexico, the Associated Press recently uncovered around 40 fake online outlets spreading falsehoods on social media, including that former Mexico City Mayor and current presidential candidate Claudia Sheinbaum was born in Bulgaria, which would make her ineligible for the presidency. In India, the world’s largest democracy, the country’s dominant social media platforms – including Meta-owned WhatsApp – have a long track record of amplifying misleading and false content. Political actors that fan ethnic resentment for their own benefit have found easy access to disinformation networks on your platforms.
American adversaries – including Russia, China, and Iran – amplified disinformation during the 2022 midterm elections. While there is no indication that they compromised electoral systems or meddled in the vote, these governments are able to sow distrust and stoke baseless suspicion by manipulating the conversation online. The Senate Select Committee on Intelligence’s recent hearing on Worldwide Threats underscored the continued danger your platforms pose, highlighting how state propaganda has been allowed to propagate online.
Democracy’s promise – that people rule themselves – is fragile. Disinformation and misinformation poison democratic discourse by muddying the distinction between fact and fiction. Your platforms should strengthen democracy, not undermine it.
I urge you to take immediate and concerted steps to combat the spread of false content, protect the integrity of this year’s elections, increase the resources devoted to content moderation in languages other than English, and improve your approach to transparency. To that end, I request responses to the following questions by April 12, 2024:
- What reviews have you undertaken of your past election-related policies to identify their effectiveness, reliability, and ease of enforcement?
  - Have these reviews been published publicly? If not, explain why.
  - Have these reviews been assessed by election administration and election integrity experts? If not, explain why.
- What, if any, new policies have you developed and/or implemented to regulate the distribution of AI-generated content?
  - Have these policies been developed or implemented with input from election administration and election integrity experts? If not, explain why.
- What, if any, new tools have you developed and/or implemented to detect AI-generated content?
- What, if any, new policies have you put in place to prepare for the 2024 elections in the United States?
  - How many content moderators assigned to the U.S. market do you currently employ in languages other than English?
  - If any, please list the languages and the number of content moderators associated with each.
  - Of these, please provide a breakdown between full-time employees and contractors.
- What, if any, new policies have you put in place to prepare for the 2024 Indian election?
  - How many content moderators do you currently employ in Assamese, Bengali, Gujarati, Hindi, Kannada, Kashmiri, Konkani, Malayalam, Manipuri, Marathi, Nepali, Oriya, Punjabi, Sanskrit, Sindhi, Tamil, Telugu, Urdu, Bodo, Santhali, Maithili, and Dogri?
  - Of these, please provide a breakdown between full-time employees and contractors.
- What, if any, new policies have you put in place to prepare for the 2024 European Parliament elections?
  - Please provide the number of content moderators you have assigned to each official EU language.
- For each country or electoral union where your platforms currently operate, aside from those listed above, please provide the following:
  - How many unique election policies have you put into place?
  - How have you publicized these policies?
  - For each country, please list the number of content moderators associated with each language, with a breakdown between full-time employees and contractors.
Sincerely,
Michael F. Bennet
United States Senator
###