The growing use of generative AI has raised concerns that hostile individuals and groups could exploit the technology to threaten the safety of children. Today, Microsoft and Google pledged to adopt new child safety commitments for their respective generative AI services.
Both companies, along with others, developed these new commitments in collaboration with two organizations: Thorn, a non-profit dedicated to fighting child sexual abuse, and All Tech Is Human, a group founded to help build a "responsible tech ecosystem."
Microsoft's blog post stated that it will develop generative AI models that are not trained on datasets containing child sexual abuse or exploitation material. It will also help safeguard those models from that kind of content after they are released.
The blog post added:
Today’s commitment marks a significant step forward in preventing the misuse of AI technologies to create or spread child sexual abuse material (AIG-CSAM) and other forms of sexual harm against children. This collective action underscores the tech industry’s approach to child safety, demonstrating a shared commitment to ethical innovation and the well-being of the most vulnerable members of society.
Google's blog post on the subject covers some of the same ground as Microsoft's in terms of child safety commitments for its own AI services. It added that it has a dedicated team for finding content that indicates a child might be in danger. In addition, Google says it works to find and remove content related to child sexual abuse and exploitation using "a combination of hash-matching technology, artificial intelligence classifiers, and human reviews."
Finally, Google announced its support for several bills in the US Congress aimed at protecting children from abuse and exploitation.