Today, the 2024 US presidential election officially gets underway with the caucuses in Iowa. Generative AI company OpenAI has chosen today to outline how it plans to prevent groups from using its tools, like ChatGPT and DALL-E, to create and distribute "deepfakes" and other false information that could be used to disrupt the US presidential election, as well as elections around the world this year.
In a blog post, OpenAI stated it will make sure its tools are used for "accurate voting information, enforcing measured policies, and improving transparency". It added:
We have a cross-functional effort dedicated to election work, bringing together expertise from our safety systems, threat intelligence, legal, engineering, and policy teams to quickly investigate and address potential abuse.
OpenAI says it will not allow its tools to be used for any political campaigns or lobbying purposes. It also states that it will not allow chatbots to be created with its tools and services that are designed to simulate chatting with real candidates or government groups.
The company says it will block the use of its tools in efforts to misrepresent how people should vote in the election or to discourage people from voting. It will also let people report possible violations to OpenAI.
In addition, OpenAI is working on tools to help spot AI-created deepfake images made via DALL-E. A new tool, called a "provenance classifier," is currently under development and is supposed to help identify AI-made images "even where images have been subject to common types of modifications." A group of testers, including journalists, will be able to try out this tool "soon".
A recent report claimed that Microsoft's Copilot chatbot, which uses technology developed by OpenAI, frequently offered false answers to questions about some 2023 elections. In November, Microsoft announced plans for a new tool that would help political parties show that images they create, such as ads and videos, are genuine and have not been altered by AI tools.