
WASHINGTON – With under 100 days until the U.S. presidential election, Senate Intelligence Committee Chairman Mark R. Warner (D-VA) today shared responses from tech companies about their efforts to crack down on malicious uses of AI and released the statement below on the responses’ ramifications for the election and beyond. In February, a group of technology companies (including generative AI vendors, social media platforms, chipmakers, and research firms) signed the Munich Tech Accord to Combat Deceptive Use of AI in 2024 Elections, a high-level roadmap for a variety of new initiatives, investments, and interventions that could improve the information ecosystem surrounding this year’s elections. In May, Sen. Warner pushed for specific answers about the actions that companies are taking to make good on the Tech Accord, including how it applies to combating misuse of generative AI products outside the election context.

“I appreciate the thoughtful engagement from the signatories of the Munich Tech Accord. Their responses indicated promising avenues for collaboration, information-sharing, and standards development, but also illuminated areas for significant improvement.

“While many of the companies indicated that they have clear policies against a wide range of misuses, and have undertaken red-teaming and other pre-deployment testing measures, there is a very concerning lack of specificity and resourcing on enforcement of those policies. Additionally, companies offered little indication of detailed and sustained efforts to engage local media, civic institutions, and election officials and equip them with resources to identify and address misuse of generative AI tools in their communities. Leading social media platforms and gen-AI vendors have commendably posted resources to their websites and have had extensive engagement with legislative and regulatory bodies at the national level, but the failure modes of this technology require sustained relationship-building with local institutions.

“I’m disappointed that few of the companies provided users with clear reporting channels and remediation mechanisms against impersonation-based misuses. Generative AI tools are already harming vulnerable communities – including seniors, who are often victims of financial fraud, and teens, who are vulnerable to appalling acts of non-consensual image generation and extortion.

“Lastly – and perhaps most relevant ahead of the 2024 Presidential Election – I am deeply concerned by the lack of robust and standardized information-sharing mechanisms within the ecosystem. With the election less than 100 days away, we must prioritize real action and robust communication to systematically catalogue harmful AI-generated content. While this technology offers significant promise, generative AI still poses a grave threat to the integrity of our elections, and I’m laser-focused on continuing to work with public and private partners to get ahead of these real and credible threats.”

Responses by each of the companies are available here: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Google, IBM, Intuit, LG, McAfee, Microsoft, Meta, OpenAI, Snap, Stability AI, TikTok, Trend, True Media, Truepic, and X. Gen, Inflection, NetApp, and Nota did not provide responses.

Ahead of the 2024 election, Sen. Warner has repeatedly raised the alarm about the potential for AI tools to be used to create and disseminate convincing misinformation aimed at influencing election results. Last week, he issued a statement on the most recent election security update from the Director of National Intelligence. He has also held open hearings in the Intelligence Committee on this critical issue.

###