Summary
A coalition of top generative AI developers, including Google and Meta, has vowed to enforce guardrails to prevent the creation and spread of child sexual abuse material (CSAM). The non-profit Thorn, which convened the group, advocates for "Safety by Design" principles in generative AI development, warning that deepfake child pornography has increased with the availability of AI models. The developers pledged to follow principles designed to prevent their technology from being used to create child pornography, and other companies, including Microsoft and Amazon, also signed onto the pledge. Metaphysic, known for digitizing the likenesses of Hollywood stars, emphasized responsibility in AI development, while OpenAI and Meta stated their commitment to upholding safety measures; other members of the coalition did not immediately respond. The Internet Watch Foundation warns that AI-generated child abuse material could become an overwhelming presence online.
Key Points
1. A coalition of top generative AI developers, including Google, Meta, and OpenAI, has pledged to enforce guardrails around the emerging technology to fight the spread of child sexual abuse material (CSAM).
2. Thorn, a non-profit organization founded by actors Demi Moore and Ashton Kutcher, along with All Tech is Human, brought together the group of developers to advocate for "Safety by Design" principles in generative AI development to prevent the creation of CSAM.
3. The developers committed to following the principles outlined in Thorn's report to prevent their technology from being used to create child pornography, including responsibly sourcing training datasets, incorporating feedback loops, designing with adversarial misuse in mind, and responsibly hosting AI models.