Advancing bold and responsible approaches to AI

We aim to unlock the benefits of AI that can help solve society’s biggest challenges, while minimizing problems by developing safeguards and working with partners.


Building helpful and safer AI

We believe a responsible approach to AI requires a collective effort, which is why we work with NGOs, industry partners, academics, ethicists, and other experts at every stage of product development. In 2018, we were one of the first companies to publish a set of AI Principles, which guide Google teams on the responsible development and use of AI. But while self-regulation is vital, it is not enough.

Balanced, fact-based guidance from governments, academia, and civil society is also needed to establish boundaries, like policies and regulation that can help promote progress while reducing risks of abuse.



We share expertise and resources with other organizations to help achieve collective goals


We are constantly seeking out external viewpoints on public policy challenges


How does AI improve the helpfulness of our products?

We apply our AI advances to enhance and multiply the usefulness and value of our core products and services. Billions of people use Google AI when they engage with Google Search, Google Photos, Google Maps, and Google Workspace; hardware devices like Pixel and Nest; and accessibility applications like Android Voice Access, Live Transcribe, and Project Relate. We’re excited about the potential for AI to continue to make our products even more useful and transformative.

What does it mean to pursue AI responsibly?

We believe AI responsibility is not just about avoiding risks, but about using AI to help improve people's lives and address social and scientific challenges. In 2018, we became one of the first companies to issue AI Principles, and we built guardrails by stating applications we will not pursue. Our AI Principles offer a framework to guide our decisions on research, product design, and development, as well as ways to think about solving the numerous design, engineering, and operational challenges associated with any emerging technology.

But, as we know, issuing principles is one thing; applying them is another. This report offers our most comprehensive look at how we put the AI Principles into practice. We believe a formalized governance structure to support the implementation of our AI Principles, along with rigorous testing and ethics reviews, is necessary to put the principles into practice. See more about how we pursue Responsible AI.

How can we work better together?

No single company can advance this approach alone. AI responsibility is a collaborative exercise that requires bringing multiple perspectives to the table to help ensure balance.

That’s why we’re committed to working in partnership with others to get AI right. Over the years, we’ve built communities of researchers and academics dedicated to creating standards and guidance for responsible AI development. We collaborate with university researchers at institutions around the world, and to support international standards and shared best practices, we’ve contributed to the ISO/IEC Joint Technical Committee’s standardization program on artificial intelligence.

How should policymakers tackle AI governance?

AI will be critical to our scientific, geopolitical, and economic future, enabling current and future generations to live in a more prosperous, healthy, secure, and sustainable world. Governments, the private sector, educational institutions, and other stakeholders must work together to capitalize on AI’s benefits, while simultaneously managing risks.

Tackling these challenges will again require a multi-stakeholder approach to governance. Some of these challenges will be more appropriately addressed by standards and shared best practices, while others will require regulation; for example, requiring high-risk AI systems to undergo expert risk assessments tailored to specific applications. Other challenges will require fundamental research, in partnership with communities and civil society, to better understand potential harms and mitigations. International alignment will also be essential to develop common policy approaches that reflect democratic values and avoid fragmentation.

Our Policy Agenda for Responsible Progress in Artificial Intelligence outlines specific policy recommendations for governments around the world to realize the opportunity presented by AI, promote responsibility and reduce the risk of misuse, and enhance global security.