Towards Responsible Development of Generative AI for Education
Supporting education from classroom to career
AI can help unlock human potential by providing access to deeper, more personalized learning experiences that enable people to follow their curiosity. We work in close partnership with the education community to develop AI products grounded in learning science and designed to improve learning outcomes.
Our policy priorities for enhancing education
- AI that is grounded in the science of learning
- Tools built with and for teachers
- Enabling deeper understanding for learners
- Our commitment to safer digital learning
AI that is grounded in the science of learning
Our mission has always been to enable access to information. With AI, we are now helping people turn that information into knowledge. As the foundation for all of our AI learning tools, LearnLM (now integrated into Gemini) is grounded in pedagogical principles and guided in collaboration with experts in education, including institutions like Columbia Teachers College, Arizona State University, NYU Tisch, and Khan Academy, reflecting years of partnership to research and improve how AI can support effective learning practices.
Tools built with and for teachers
For educators, AI can serve as a teaching assistant that lightens their workload and enables new approaches, ultimately freeing up more time for the essential human aspects of teaching. It is critical that AI-powered learning tools are developed and supervised by experts and educators to ensure they reflect learners’ needs and provide effective, evenly distributed pedagogical benefits.
Enabling deeper understanding for learners
Advances in AI present new opportunities to equip learners with the skills they need for success. While the internet removed barriers to accessing information, AI increases our ability to understand and apply that information. This leap from passive consumption to active, deep understanding is a profound change that has the potential to significantly improve education. AI can expand teaching and learning by modifying content to be more engaging and interactive, simplifying complex topics, and enabling deeply personalized learning at scale.
Our commitment to safer digital learning
Our commitment to safety is underpinned by extensive testing and continuous expert consultation. We conduct rigorous user and adversarial testing designed to identify vulnerabilities and build strong safeguards against harmful content and bad actors. While no technology can eliminate risk entirely, the education sector and edtech vendors can work together to adopt best practices and build a safe, secure, and comprehensive approach to protecting students, staff, and communities.
Advancing model integrity and accountability
In an effort to reduce hallucination, we continue to train models to use trusted sources and verify their outputs. Challenges still remain in determining which sources are trustworthy and how to handle subjective questions. To be an effective educational tool, AI must be able to do both, moving beyond simple reinforcement so it can challenge a student’s misconceptions and correct inaccurate statements rather than act as an uncritical mirror that reinforces them. This is an area of deep importance to Google. We have made meaningful progress with each new model release, and we continue to drive the industry forward with new, comprehensive benchmarks that evaluate the ability of LLMs to generate factually accurate and sufficiently detailed answers to user queries.
DataGemma: Using real-world data to address AI hallucinations
An overview of new grounding techniques that connect LLMs to extensive statistical databases to ensure factual accuracy.
FACTS Grounding: A new benchmark for evaluating the factuality of large language models
Our technical deep-dive into new benchmarks designed to evaluate and improve the factuality of model responses.
Re-evaluating Factual Consistency Evaluation
A critical look at the methodology used to measure how reliably AI summarizes information without introducing errors.
Enabling institutional oversight for active learning
We’re committed to ensuring that AI serves as a supervised platform for safe student learning. AI education tools like Gemini for Education are designed to support established pedagogical principles and give teachers and administrators direct control over how, when, and where AI is used, allowing them to turn these systems on and off within classrooms. As educational institutions grapple with this topic, Google is exploring tools that can support schools as they adjust their curricula and assessments. We hope our experimentation in these areas, such as developing AI tools that can help scale oral assessments or enable students to show their work, can be beneficial.
What do AI chatbots really mean for students and cheating?
A Stanford-led analysis of how AI is reshaping academic honesty and the evolving nature of student assessment.
Teaching Responsible Use of AI
Our guide for educators to help integrate AI literacy into the classroom in a way that emphasizes ethical use and critical thinking.
Partnerships
Studies, reports, and whitepapers
- Bridging the human-AI knowledge gap (PNAS)
- Towards an AI-Augmented Textbook (Google)
- What do AI chatbots really mean for students and cheating? (Stanford)
- Towards Responsible Development of Generative AI for Education: An Evaluation-Driven Approach (Google DeepMind)
- Evaluating Gemini in an Arena for Learning (Google DeepMind)
- AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking (MDPI)