AI Literacy Hub and Digital Well-being
Helping kids and teens safely explore online
We are committed to helping ensure children and their families have access to age-appropriate, privacy-preserving experiences across our platforms. Our efforts are guided by our three essential pillars: building safe products that help children learn and grow, respecting families’ unique relationships with technology, and empowering youth to safely learn and explore online.
Our policy priorities to ensure safer digital experiences for kids and teens
Generative AI safety guardrails
Advances in AI present new opportunities to equip learners with the skills they need for success. We develop, build, and train generative AI models that proactively address safety risks for children and teens, including industry-leading protections against CSAM, and specific guardrails for generative AI tools used by younger audiences.
Age assurance
At Google and YouTube, we remain focused on the safety and wellbeing of our youngest users. We've continually invested in technology, policies, and literacy resources to better protect kids and teens across our platforms. In addition to developing automatic safeguards across our products and enhanced protection for users under 18, we believe that good legislative models can help hold companies accountable for promoting safety and privacy, while enabling access to richer experiences for children and teens. Across our services, age assurance helps us implement the right protections for the right users, so we can treat minors like minors and adults like adults.
Respecting family choice
We recognize that every family has a unique relationship with technology, which is why we develop digital tools that prioritize individual choice and foster healthy digital habits. By collaborating with educators, child development experts, and parents, we ensure our parental controls are both intuitive and effective, allowing families to set the boundaries that work best for them as they learn, grow, and play.
Empowering youth exploration
Digital tools and experiences are an essential part of children’s and teens’ lives. When age-appropriate products that align with children’s and teens’ developmental stages and needs are used effectively, they help young people learn, connect, grow, and prepare for the future. We created Be Internet Awesome, a program designed in partnership with child development and safety experts that covers topics like how to spot scams. Over the years, the program has reached more than 100 million children globally.
Proactive AI safeguards for youth
We ensure our generative AI tools for youth are supervised and secure, integrating safety, security, and ethical considerations throughout our development lifecycle. Our teams conduct specialized adversarial testing and red-teaming to probe AI models for vulnerabilities and build strong safeguards against content that could pose physical or psychological harm. To strengthen those safeguards, we perform automated evaluations and safety fine-tuning to help prevent young users from forming unhealthy personal attachments to generative AI tools and products.
Added protection for minors in Gemini Apps
Towards Responsible Development of Generative AI for Education
Our work with learners and educators to translate high-level principles from learning science into pragmatic benchmarks for educational generative AI models.
Gemini for Education
A hub for educators with access to AI models that help with lesson planning and personalized lesson support.
Empowering youth to safely learn and explore
Digital experiences are a foundational part of children’s and teens’ everyday lives. We support evidence-based regulation that protects kids and teens in the digital world, not from it. With appropriate safeguards, we can empower young people to use digital tools safely and help them learn, connect, grow, and prepare for the future. This includes proposals that establish industry-wide standards for parental controls, limit access to specific content, require product development input from independent child safety experts, and implement privacy-preserving age-estimation tools.
To make the most of the Internet, kids need to be prepared to make smart decisions. We developed Be Internet Awesome to empower kids with tools and education to confidently and safely explore, grow, and play online. The program includes a curriculum for educators teaching online safety in the classroom, with downloadable lesson plans that have received the ISTE Seal of Alignment and classroom activities that bring the fundamental lessons to life, alongside an AI Literacy Guide and the Interland game. We worked with experts in digital safety to ensure that every element of the program addresses what families and educators need to know.
Be Internet Awesome
Tools and education for kids that help them confidently and safely explore, grow, and play online.
Protecting teens in the digital world, not from the digital world
An overview of YouTube’s philosophy on youth safety, focusing on building age-appropriate product experiences and promoting digital wellbeing.
YouTube — Our Youth Principles
Five core principles guiding YouTube's work on creating a safer and more enriching environment for young people.
Family controls
A resource for parents and guardians with digital tools and account management settings to help keep their children and teens safer online.
Advancing age-appropriate safety and oversight
We design products that prioritize the developmental stages of children and teens, holistically weighing considerations like safety, mental wellbeing, privacy, and agency. To protect younger users from physical or psychological harm when interacting with generative AI, we employ specialized development processes, such as child safety sprints focused on adversarial testing, guided by third-party child development experts. We also apply content filters and policies that implement strict safeguards to handle sensitive queries and prevent generation of harmful content.
When you access our products with a school account, such as Gemini for Education, we provide robust tools to administrators, including the ability to control access and get insights. Furthermore, schools own their content and Google contractually commits to processing data as instructed. Your content is not reviewed by humans, and is not used to train Google’s general large language models (LLMs) without your permission.
YouTube
Every family’s relationship with media and technology is different, so we offer options for you to decide what’s best for yours.
Gemini for Education
A hub for educators with access to AI models that help with lesson planning and personalized lesson support.
Safer digital learning with Google for Education
Our rigorous privacy protocols and data security standards for protecting student information and maintaining compliance in educational environments.
Generative AI in Google Workspace Privacy Hub
This article is intended to help users understand how we use their data and keep it secure when using Google Workspace with Gemini.
Our approach
Google is committed to a bold, responsible, and collaborative approach to AI that maximizes the technology’s benefits while minimizing potential harm. A key priority is protecting against, and responding to, new and unique risks for potential child sexual abuse and exploitation (CSAE) that generative AI might pose. We’re committed to tackling CSAE, and we prohibit storing, sharing, or creating child sexual abuse material (CSAM) on our platforms and services. Our industry-leading automated detections and reviews are designed to proactively find and remove CSAM from our platforms, as detailed in our Transparency Report.
Progress update: Responsible AI and Child Sexual Abuse and Exploitation Online
Our detailed report on leveraging our industry-leading automated detections and reviews to combat AI-facilitated child sexual abuse material.
Protecting Children
Our central hub providing a high-level overview of Google’s approach to combating child sexual abuse and exploitation online, including our commitments, tools, and partnerships.
Advancing risk-based age assurance and advertising restrictions
A good understanding of user age can help online services deliver age-appropriate experiences. Methods that determine the age of users across services, however, can intrude on privacy, requiring more data collection and use than necessary. That’s why we do not collect additional user data to operate our age-estimation model. Where required, we believe age assurance should be risk-based, preserving young users’ access to information and services and respecting their privacy.
To further support user privacy, we enforce a strict ban across our platforms on personalized advertising targeted to anyone under the age of 18, and we encourage policy that requires other digital providers to adhere to similar rules. Across Google platforms, ads cannot be targeted based on a minor user’s age, gender, or personal interests. We also provide additional age-appropriate protections, including SafeSearch filters and YouTube digital wellbeing tools such as take-a-break and bedtime reminders.
Legislative framework to protect children and teens online
Our proactive policy proposal and legal framework to ensure safe, age-appropriate digital environments for younger users.
Preparing teens for a digital future
Digital tools and experiences are a foundational part of children’s and teens’ everyday lives. Over the years, we have seen how innovative technologies like AI and access to high-quality, diverse content can yield enormous benefits. We believe that the appropriate safeguards can empower young people and help them learn, connect, grow, and prepare for the future.
Protecting teens in the digital world, not from the digital world
An overview of YouTube’s philosophy on youth safety, focusing on building age-appropriate product experiences and promoting digital wellbeing.
Partnerships
Studies, reports, and whitepapers
- Youth Legislative Framework: Google’s policy perspective on protecting children and teens online. (Google)
- Transparency Report (CSAM Protections): Data-driven insights into how Google fights exploitation online. (Google)
- Teaching Responsible Use of AI: A resource for educators and parents to promote AI literacy. (Google)
- Building content recommendations to meet the unique needs of teens and pre-teens. (YouTube Youth & Families Advisory Committee)
- We’ve partnered with Thorn to establish and develop best practices for mitigating CSAE risks and to support the development of safety standards. (Thorn)