Educators and librarians have been doing this work for a long time: preparing students to find, evaluate, and understand information effectively as new media and technologies emerge. While fact-checking organizations, journalists, and information literacy experts have developed best practices for discerning the trustworthiness of information, many people feel unequipped to apply these tactics when they need them. In fact, seven in ten (70%) respondents to a 2023 study led by the Poynter Institute’s digital media initiative MediaWise reported that they did not feel totally or very confident in their ability to tell whether online images are authentic and reliable.
“Is this AI-generated?” is not the same question as “Is this trustworthy?” Though the two can overlap, additional context is often needed to empower people to make informed decisions about trustworthiness. For example, an image may not be AI-generated but may still be taken out of context or manipulated with photo-editing software. There are broader credibility questions about an image’s context and provenance that should be asked, such as: Is this image being used in the right context? Where did this information come from? Has it been edited? What is the perspective or incentive of the person sharing it? Is there a bigger picture to consider? And, even more challenging, in many cases the question “Is this true?” is complicated and does not have a clear answer.
People come to Google to verify information they see elsewhere, whether it’s a text message from a family member or something shared on social media. Our approach to helping them find more information is twofold. First, we build our products from the ground up with quality in mind: when people come to Google, our products are designed to surface reliable information where it is available. Second, we believe people should have access to easy-to-use tools that provide the context they need to answer the question “Can I trust this?” for themselves. We do this by building tools that leverage the best of our technology to help people understand the credibility and context of something they’re seeing online, and these tools and features don’t require any advanced technical skills to use.
We don’t build these products based on what we alone think will work. There will always be new ways for people to create and consume content, so we know we must keep learning from and listening to information literacy experts and the people who use our products in order to continuously evolve and improve. We learn from the research of scholars (some of which is included in this paper), as well as from new best practices in the field. This helps us understand the best ways to approach this ever-evolving challenge within our products and ensures that our tools help people strengthen their own information literacy skills.
Our work in this space is far from done. At the speed of technological change, something that works today may not be sufficient next year, and we understand that technological solutions alone are not sufficient either. We’re committed to working with our users, partners, experts, and all those interested in making it easier for people to decide what to trust online, so that we can continuously evolve and update our approach in this space, and we are eager to learn from the discussions that the findings highlighted in this paper may spark.