Google is committed to a bold, responsible, and collaborative approach to AI that maximizes the technology’s benefits while minimizing potential harm. A key priority is protecting against, and responding to, the new and unique risks of child sexual abuse and exploitation (CSAE) that generative AI might pose.
We invest heavily in fighting child sexual abuse and exploitation online and employ a combination of automated detection tools and specially trained reviewers working around the clock to deter, detect, remove, and report content on our platforms that is illegal or violates our policies. This includes technology-facilitated CSAE.
Collaborating with partners
Our efforts to combat AI-facilitated CSAE are part of our broader end-to-end commitment to develop and deploy generative AI responsibly. We joined other technology companies in committing to Thorn's Safety by Design principles, which focus on embedding protections against AI-facilitated CSAE from the start. As part of our commitment, we agreed to share how we are incorporating the principles into our responsible AI work. This paper highlights some examples of how we are implementing the principles across the Develop, Deploy, and Maintain phases of Thorn’s framework.
Together, we can build generative AI that is safer by design and contributes to a more secure online environment for everyone, especially children.
Download the full whitepaper to learn more about Responsible AI and CSAE online.