
WASHINGTON – With under six months until the U.S. general election, Intelligence Committee Chairman Mark R. Warner (D-VA) today pushed tech companies to follow up on commitments made at the Munich Security Conference and take concrete measures to combat malicious misuses of generative artificial intelligence (AI) that could impact elections. In February, a group of AI companies signed the Tech Accord to Combat Deceptive Use of AI in 2024 Elections, a high-level roadmap for a variety of new initiatives, investments, and interventions that could improve the information ecosystem surrounding this year’s elections. Following that initial agreement, Sen. Warner is pushing for specific answers about the actions that companies are taking to make good on the Tech Accord. 

“Against the backdrop of the worldwide proliferation of malign influence activity – with an ever-growing range of malign actors embracing social media and wider digital communications technologies to undermine trust in public institutions, markets, democratic systems, and the free press – generative AI (and related media-manipulation) tools can impact the volume, velocity, and believability of deceptive election information,” Sen. Warner wrote.

This year, elections are taking place in over 40 countries representing over 4 billion people, while AI companies are simultaneously releasing a range of powerful and untested new tools that have the potential to rapidly spread believable misinformation and that are open to abuse by a range of bad actors. While the Tech Accord represented a positive, public-facing first step toward recognizing and addressing this novel challenge, Sen. Warner is pushing for effective, durable protections to ensure that malign actors can’t use AI to craft misinformation campaigns and that such content can’t spread on social media platforms. To that end, he posed a series of questions seeking specific information on the actions that companies are taking to prevent the creation and rapid spread of AI-enabled disinformation and election deception.

“While high-level, the commitments your company announced in conjunction with the Tech Accord offer a clear roadmap for a variety of new initiatives, investments, and interventions that can materially enhance the information ecosystem surrounding this year’s election contests. To that end, I am interested in learning more about the specific measures your company is taking to implement the Tech Accord. While the public pledge demonstrated your company’s willingness to constructively engage on this front, ultimately the impact of the Tech Accord will be measured in the efficacy – and durability – of the initiatives and protection measures you adopt,” Sen. Warner continued.

The letter concludes by pointing out that several of the proposed measures to combat malicious misuse in elections would also help address adjacent misuses of AI technology, including the creation of non-consensual intimate imagery, child sexual abuse material, and online bullying and harassment campaigns. Sen. Warner has consistently called attention to these and other potential misuses and pushed AI companies to act on them. On Wednesday, Sen. Warner will host a public Intelligence Committee hearing where leaders from the FBI, CISA, and ODNI will provide updates on threats to the 2024 election.

Sen. Warner sent letters to every signatory of the Tech Accord: Adobe, Amazon, Anthropic, Arm, ElevenLabs, Gen, GitHub, Google, IBM, Inflection, Intuit, LG, LinkedIn, McAfee, Meta, Microsoft, NetApp, Nota, OpenAI, Snap, Stability AI, TikTok, Trend, TrueMedia, Truepic, and X.

A copy of every letter is available here and one example is included below:

Earlier this year, I joined to amplify and applaud your company’s commitment to advance election integrity worldwide through the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. As generative artificial intelligence (AI) products proliferate for both commercial and general users, a multi-stakeholder approach is needed to ensure that industry, governments, and civil society adequately anticipate – and counteract – misuse of these products in ways that cause harm to vulnerable communities, public trust, and democratic institutions. The release of a range of powerful new AI tools – many enabled or directly offered by your [company/organization] – coincides with an unprecedented number of elections worldwide. As memorialized during the Munich Security Conference, elections have occurred – or will occur – in over 40 countries worldwide, with more than four billion global citizens exercising their franchise. Since the signing of the Tech Accord on February 16th, the first round of India’s elections has already concluded. European Parliament elections will take place in early June and – as primary contests are already well underway – the U.S. general election will take place on November 5th.

While policymakers worldwide have begun the process of developing measures to ensure that generative AI technologies (and related media manipulation tools) serve the public interest, the private sector can – particularly in collaboration with civil society – dramatically shape the usage and wider impact of these technologies through proactive measures. Against the backdrop of the worldwide proliferation of malign influence activity – with an ever-growing range of malign actors embracing social media and wider digital communications technologies to undermine trust in public institutions, markets, democratic systems, and the free press – generative AI (and related media-manipulation) tools can impact the volume, velocity, and believability of deceptive election information.

While high-level, the commitments your company announced in conjunction with the Tech Accord offer a clear roadmap for a variety of new initiatives, investments, and interventions that can materially enhance the information ecosystem surrounding this year’s election contests. To that end, I am interested in learning more about the specific measures your company is taking to implement the Tech Accord. While the public pledge demonstrated your company’s willingness to constructively engage on this front, ultimately the impact of the Tech Accord will be measured in the efficacy – and durability – of the initiatives and protection measures you adopt. Indeed, many of these measures will be vital in addressing adjacent misuses of generative AI products, such as the creation of non-consensual intimate imagery, child sexual abuse material, or content generated for online harassment and bullying campaigns. I request that you provide answers to the following questions no later than May 24, 2024.

  1. What steps is your company taking to attach content credentials, and other relevant provenance signals, to any media created using your products? To the extent that your product is incorporated in a downstream product offered by a third party, do license terms or other terms of use stipulate the adoption of such measures? To the extent you distribute content generated by others, does your company attach labels to content that you assess – based on either internal classifiers or credible third-party reports – to be machine-generated or machine-manipulated?
  2. What specific public engagement and education initiatives have you initiated in countries holding elections this year? What has the engagement rate been thus far, and what proactive steps are you undertaking to raise user awareness of the availability of new tools hosted by your platform?
  3. What specific resources has your company provided for independent media and civil society organizations to assist in their efforts to verify media, generate authenticated media, and educate the public?
  4. What has been your company’s engagement with candidates and election officials with respect to anticipating misuse of your products, as well as the effective utilization of content credentialing or other media authentication tools for their public communications? 
  5. Has your company worked to develop widely available detection tools and methods to identify, catalogue, and/or continuously track the distribution of machine-generated or machine-manipulated content?
  6. (To the extent your company offers social media or other content distribution platforms) What kinds of internal classifiers and detection measures are you developing to identify machine-generated or machine-manipulated content? To what extent do these measures depend on collaboration or contributions from generative AI vendors?
  7. (To the extent your company offers social media or other content distribution platforms) What mechanisms has your platform implemented to enable victims of impersonation campaigns to report content that may violate your Terms of Service? Do you maintain separate reporting tools for public figures?
  8. (To the extent your company offers generative AI products) What mechanisms has your platform implemented to enable victims of impersonation campaigns that may have relied on your models to report activity that may violate your Terms of Service? 
  9. (To the extent your company offers social media or other content distribution platforms) What is the current status of information sharing between platforms on detecting machine-generated or machine-manipulated content that may be used for malicious ends (such as election disinformation, non-consensual intimate imagery, online harassment, etc.)? Will your company commit to participation in a common database of violative content?

Thank you for your attention to these important matters, and I look forward to your response.

###