As the world faces a record-breaking election year, tech companies have agreed to cooperate in cracking down on malicious AI use.
In a move to safeguard the integrity of global elections, major technology companies have united to combat the deceptive use of artificial intelligence (AI). This accord was signed at the Munich Security Conference, with notable participants including Adobe, Google, Microsoft, OpenAI, Snap Inc., and Meta.
The Threat of Deceptive AI
AI has the potential to be a powerful tool for spreading false information and disrupting essential systems. Malicious actors could, for example, use AI to generate and spread fake, disparaging content about election candidates.
The accord responds, among other incidents, to a recent episode in which AI-generated robocalls mimicked President Joe Biden's voice to discourage voters from participating in New Hampshire's primary election. The FCC has since ruled that AI-generated voices in robocalls are unlawful. However, audio deepfakes on social media and in campaign advertisements remain largely unregulated.
This accord seeks to manage the risks arising from deceptive AI election content created through publicly accessible, large-scale platforms or open foundation models.
The Accord’s Seven Principal Goals
The accord sets out seven principal goals:
Prevention: Researching, investing in, and deploying precautions to limit the risks of deliberately Deceptive AI Election Content being generated.
Provenance: Attaching provenance signals to identify the origin of content where appropriate and technically feasible.
Detection: Attempting to detect Deceptive AI Election Content or authenticated content, including with methods such as reading provenance signals across platforms.
Responsive Protection: Providing swift and proportionate responses to incidents involving the creation and dissemination of Deceptive AI Election Content.
Evaluation: Undertaking collective efforts to evaluate and learn from the experiences and outcomes of dealing with Deceptive AI Election Content.
Public Awareness: Engaging in shared efforts to educate the public about media literacy best practices, in particular regarding Deceptive AI Election Content.
Resilience: Supporting efforts to develop and make available defensive tools and resources, such as AI literacy and other public programs, AI-based solutions (including open-source tools where appropriate), or contextual features, to help protect public debate, defend the integrity of the democratic process, and build whole-of-society resilience against the use of Deceptive AI Election Content.
Tech Companies’ Response
At the Munich conference, Kent Walker, President of Global Affairs at Google, highlighted that democracy rests on safe and secure elections. "Google has been supporting election integrity for years, and today's accord reflects an industry-side commitment against AI-generated election misinformation that erodes trust," Walker said. "We can't let digital abuse threaten AI's generational opportunity to improve our economies, create new jobs, and drive progress in health and science."
"We are committed to doing our part as technology companies, while acknowledging that the deceptive use of AI is not only a technical challenge but a political, social, and ethical issue, and we hope others will similarly commit to action across society," the signatories said. "We affirm that the protection of electoral integrity and public trust is a shared responsibility and a common good that transcends partisan interests and national borders."
The Future of Elections
The year 2024 will bring more elections to more people than any year in history, with more than 40 countries and over four billion people choosing their leaders and representatives at the ballot box. This accord represents a crucial step toward advancing election integrity, increasing societal resilience, and fostering trustworthy technology practices.