Advancing bold and responsible approaches to AI
We aim to unlock the benefits of AI that can help solve society’s biggest challenges, while minimizing potential harms by developing safeguards and working with partners.
OVERVIEW
Building helpful and safer AI
We believe a responsible approach to AI requires a collective effort, which is why we work with NGOs, industry partners, academics, ethicists, and other experts at every stage of product development. In 2018, we were one of the first companies to publish a set of AI Principles, which guide Google teams on the responsible development and use of AI. But while self-regulation is vital, it is not enough.
Balanced, fact-based guidance from governments, academia, and civil society is also needed to establish boundaries, such as policies and regulations that promote progress while reducing the risk of abuse.
Learn more about our latest work at AI.google