The White House has announced that it has secured "voluntary commitments" from several major AI model developers and data providers, including OpenAI and Microsoft, aimed at reducing the amount of image-based sexual abuse material generated with AI.
The White House said that Adobe, Anthropic, Cohere, Common Crawl, Microsoft, and OpenAI had committed to responsibly sourcing their datasets and safeguarding them from image-based sexual abuse.
Adobe, Anthropic, Cohere, Microsoft, and OpenAI have also committed to using feedback loops and iterative stress-testing to guard against their models outputting image-based sexual abuse. All of those companies except Cohere have also pledged to remove nude images from AI training datasets.
The Biden-Harris Administration also announced the following measures were being implemented by other big tech firms:
- Cash App and Square are curbing payment services for companies producing, soliciting, or publishing image-based sexual abuse, including through additional investments into resources, systems, and partnerships to detect and mitigate payments for image-based sexual abuse.
- Cash App and Square commit to expanding participation in industry groups and initiatives that support signal sharing, helping to detect sextortion and other known forms of image-based sexual abuse and to limit payment services for them.
- Google continues to take actions across its platforms to address image-based sexual abuse, including updates in July to its search engine to further combat non-consensual intimate images.
- GitHub, a Microsoft company, has updated its policies to prohibit the sharing of software tools that are designed for, or that encourage, promote, support, or suggest in any way, the use of synthetic or manipulated media to create non-consensual intimate imagery.
- Microsoft is partnering with StopNCII.org to pilot efforts to detect and delist duplicates of survivor-reported non-consensual intimate imagery in Bing’s search results; developing new public service announcements to promote trusted, authoritative resources about image-based sexual abuse for victims and survivors; and continuing to demote low-quality content across its search engine.
- Meta continues to prohibit the promotion on its platforms of applications or services that generate image-based sexual abuse, has incorporated solutions like StopNCII and TakeItDown directly into its reporting systems, and announced in July that it had removed around 63,000 Instagram accounts that were attempting to run financial sextortion scams. Meta also recently expanded its existing partnership with the Tech Coalition to include sharing signals about sextortion activity via the Lantern program, helping to disrupt this criminal activity across the wider internet.
- Snap Inc. commits to strengthening reporting processes and promoting resources for survivors of image-based sexual abuse through in-app tools and on its websites.
To ensure that companies continue to pay attention to this issue, the White House said civil society organizations, including the Center for Democracy and Technology, are planning to set up a multi-stakeholder working group that will help to identify interventions to prevent and mitigate harms from image-based sexual abuse.
Furthermore, some companies have committed to "Safety by Design" guidelines outlined by Thorn and All Tech Is Human that aim to fight the misuse of generative AI in the proliferation of child sexual abuse material. Thorn will be publishing its first round of transparency reports later this year, which lawmakers will be able to refer to if they need to put more pressure on tech companies.
Source: The White House