There's been more and more attention placed on the use of generative AI apps and services to create deepfake images and information. That certainly came to a head a few weeks ago when AI-created images of pop singer Taylor Swift flooded the X social network. Some reports claim the images were made by Microsoft's AI image generator Designer.
As 2024 is also an election year for the office of the US President, there's even more concern that AI deepfake images could be used to negatively influence votes in that election as well as others. Today, a large number of tech companies announced they will abide by a new accord stating they will use their resources to combat the use of AI in deceptive election efforts.
The agreement, which was announced at the Munich Security Conference, is called the AI Elections Accord. The companies that are on board with this agreement, at this time, are:
- Adobe
- Amazon
- Anthropic
- Arm
- ElevenLabs
- IBM
- Inflection AI
- McAfee
- Meta
- Microsoft
- Nota
- OpenAI
- Snap Inc.
- Stability AI
- TikTok
- Trend Micro
- Truepic
- X
The press release (in PDF format) announcing the accord states that the above companies have agreed to follow these commitments to combat deepfake election efforts:
- Developing and implementing technology to mitigate risks related to Deceptive AI Election content, including open-source tools where appropriate
- Assessing models in scope of this accord to understand the risks they may present regarding Deceptive AI Election Content
- Seeking to detect the distribution of this content on their platforms
- Seeking to appropriately address this content detected on their platforms
- Fostering cross-industry resilience to deceptive AI election content
- Providing transparency to the public regarding how the company addresses it
- Continuing to engage with a diverse set of global civil society organizations and academics
- Supporting efforts to foster public awareness, media literacy, and all-of-society resilience
Microsoft President Brad Smith was among the company executives quoted in the press release. He stated:
As society embraces the benefits of AI, we have a responsibility to help ensure these tools don’t become weaponized in elections. AI didn’t create election deception, but we must ensure it doesn’t help deception flourish.
An example of election deepfakes happened a few weeks ago, as robocalls with an AI-generated voice of US President Joe Biden urged callers not to vote in the New Hampshire primary. The calls were later found to be created by a Texas-based company.