We’re committed to supporting democratic processes around the world

For over two decades, Google and YouTube have been committed to providing timely, high-quality information that helps voters understand, navigate, and participate in democratic processes.

Surfacing high-quality information to voters

As people head to the polls, many will turn to AI products alongside the products they’ve historically relied on to find information. On Google Search, we prioritize surfacing authoritative election information directly within results and responses. For example, when people search for topics like “how to vote,” we surface information about voting requirements, key dates and more, with direct links to official resources. In some instances, AI-powered features can provide richer context from multiple helpful sources.

When people ask questions in the Gemini app, they can use the Double check feature to get helpful context about the response they are seeing. They can then click through to Google Search and evaluate whether high-quality sources and information support what they’re seeing.

Our AI features are deeply integrated with our core Search ranking and safety systems. To protect the integrity of information, we constantly update our ranking protections and leverage AI-based tools like SpamBrain to defend against low-quality content across our platforms.

Our commitment to information integrity extends to YouTube, where our systems are designed to surface high-quality election news from reliable sources in both search results and recommendations.

Improving transparency by providing context about content across our platforms

As we continue to bring AI to more products, we are focused on helping people better understand how a particular piece of content was created and modified over time.

It is key to Google’s mission to provide users with context so they can make informed decisions about the content they encounter online, including how that content was created. For example, provenance technology can help explain whether a photo was taken with a camera, edited by software or produced by generative AI. This kind of information helps our users make more informed decisions about the content they’re engaging with and builds media literacy and trust.

That is why we have implemented SynthID, our state-of-the-art imperceptible watermarking technology, across Google’s AI-generated content, including text, audio, images, and video. Users can now verify whether an image, video, or audio clip was generated or edited by Google AI directly in the Gemini app: upload the file and ask a question such as “Was this created by Google AI?” or “Is this AI generated?” We are working with industry partners to expand the use of SynthID technology beyond Google so that even more AI-generated content is watermarked. This is a significant milestone for content transparency.
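SynthID itself is proprietary, but the general statistical idea behind text watermarking can be shown in a toy sketch: bias generation toward a keyed "green" subset of the vocabulary, then detect the watermark later by measuring how often tokens land in that subset. Everything below (the made-up vocabulary, the uniform stand-in for a language model, the 50% green fraction) is an illustrative assumption, not Google's actual scheme.

```python
import hashlib
import random

# Toy vocabulary standing in for a real model's token set (assumption).
VOCAB = [f"tok{i}" for i in range(1000)]

def green_set(prev_token: str, key: str, frac: float = 0.5) -> set:
    # Deterministically pick a "green" subset of the vocabulary,
    # seeded by a secret key and the previous token.
    seed = int.from_bytes(
        hashlib.sha256((key + prev_token).encode()).digest()[:8], "big")
    rng = random.Random(seed)
    return set(rng.sample(VOCAB, int(len(VOCAB) * frac)))

def generate_watermarked(key: str, length: int, seed: int = 0) -> list:
    # Stand-in for a language model: sample uniformly, but only from
    # the green set, so the output is statistically biased toward it.
    rng = random.Random(seed)
    out = ["<s>"]
    for _ in range(length):
        out.append(rng.choice(sorted(green_set(out[-1], key))))
    return out[1:]

def green_fraction(tokens: list, key: str) -> float:
    # Detection: count how many tokens fall in the green set keyed by
    # their predecessor. Watermarked text scores near 1.0; unrelated
    # text scores near the base green fraction (0.5 here).
    hits, prev = 0, "<s>"
    for tok in tokens:
        if tok in green_set(prev, key):
            hits += 1
        prev = tok
    return hits / len(tokens)
```

A real watermark is far subtler, biasing sampling probabilities rather than hard-restricting them, so quality is preserved, but detection follows the same principle: a keyed statistic that only the watermarked distribution inflates.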

To increase transparency across the web, we have also helped to develop the Coalition for Content Provenance and Authenticity (C2PA) and Content Credentials. This technology provides a secure record of a file’s origin and edit history, enabling users to see if content was captured by a camera, modified by software, or generated by AI. By establishing this technical lineage, we aim to help users make more informed decisions about the content they encounter.
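Content Credentials use certificate-based signatures in a standardized manifest format. The tamper-evident idea behind them, a chain of signed claims recording an asset's origin and each subsequent edit, can be sketched minimally; here an HMAC stands in for real certificate signatures, and the record format is illustrative rather than the actual C2PA specification.

```python
import hashlib
import hmac
import json

def add_claim(history: list, action: str, asset_bytes: bytes, key: bytes) -> list:
    # Append a claim binding the action and the current asset hash to
    # everything that came before it, then sign the whole record.
    claim = {
        "action": action,                                 # e.g. "captured", "edited"
        "asset_sha256": hashlib.sha256(asset_bytes).hexdigest(),
        "prev": history[-1]["sig"] if history else None,  # chain to prior claim
    }
    payload = json.dumps(claim, sort_keys=True).encode()
    claim["sig"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return history + [claim]

def verify_history(history: list, key: bytes) -> bool:
    # Recompute every signature and chain link; tampering with any
    # earlier claim breaks every later link.
    prev_sig = None
    for claim in history:
        body = {k: v for k, v in claim.items() if k != "sig"}
        if body["prev"] != prev_sig:
            return False
        payload = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(claim["sig"], expected):
            return False
        prev_sig = claim["sig"]
    return True
```

Because each claim signs the previous claim's signature, rewriting the "captured" step invalidates the "edited" step too; that chaining is what lets a viewer trust the full edit history, not just the latest state.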

Safeguarding our platforms from abuse

Everyone is subject to our policies and community guidelines. Our policies apply to all forms of content, including content surrounding elections — regardless of the political viewpoints expressed, the language the content is in, or how the content is generated.

Content that misleads people on voting requirements or key dates is not allowed on YouTube. We have specific YouTube election misinformation policies that prohibit certain types of content relating to free and fair democratic elections, like voter suppression, false claims about candidate eligibility, and incitement to interfere with democratic processes.

Using AI to improve detection and enforcement across our products

For over a decade, we’ve used machine learning and AI to identify and remove content that violates our policies — including YouTube’s Community Guidelines related to misleading information around elections. Now, we’re deploying the world’s most advanced models to build even faster and more adaptable enforcement systems to combat abuse at scale. These advancements allow our Trust & Safety teams to scale our long-standing threat detection and defense systems and respond to emerging challenges to information integrity with unprecedented speed.

Expanding likeness detection to civic leaders and journalists

On YouTube, we have started to expand our likeness detection tools to government officials, journalists, and political candidates. This tool looks for a participant's likeness in AI-generated content, and if a match is found — like a deepfake of their face — the individual can review the content and request its removal under our privacy guidelines.

On Search, if you find private or sensitive information about yourself, you can request that it be removed from search results.

Building responsible AI content and advertising policies

One of the ways Google supports election integrity is through election ads transparency. Advertisers are required to complete our election ads verification process and comply with our political content policies and our ads policies as a whole.

When it comes to AI, we were the first tech company to require election advertisers to prominently disclose when their ads include realistic synthetic content that’s been digitally altered or generated, including by AI tools. These policies build on our ads political content policies and longstanding policies against using manipulated media, such as deepfakes, to mislead people.

As more advertisers and creators leverage the power and opportunity of AI, we want to make sure we continue to provide people with the transparency and information they need to make informed decisions. By using AI tools to categorize and identify problematic content, we can enforce our policies around harmful election content at greater speed, scale, and precision.

Partnering with election entities to provide best-in-class resources

We recognize the heightened cybersecurity risks associated with elections, and we have forged partnerships and developed tools to safeguard against potential threats. Our Advanced Protection Program – our strongest set of cyber protections – is particularly recommended for elected officials, candidates, campaign workers, journalists, election workers and other high-risk individuals, and is available at no cost. Through our longstanding partnerships with Defending Digital Campaigns (DDC) in the US and the International Foundation for Electoral Systems (IFES) globally, we provide campaigns and election management bodies with the security tools they need to stay safe online, including tools to rapidly configure Google Workspace's security features.

Our Google Threat Intelligence team helps identify, monitor and tackle emerging threats, ranging from coordinated influence operations to cyber espionage campaigns against high-risk entities. For example, on any given day, the team is tracking more than 270 targeted or government-backed attacker groups from more than 50 countries, consistently publishing findings to keep the public and private sectors vigilant and informed. The team also helps organizations build holistic election security programs and harden their defenses with comprehensive tools, ranging from proactive compromise assessment services to threat intelligence tracking of information operations.

Finally, we’re partnering with our industry peers, governments, and civil society to address existing and emerging AI challenges. We’ve joined and established a number of partner initiatives to expand and share knowledge, identify ways to mitigate emerging risks, prevent abuse, provide more transparency, and further the responsible development of AI.

In closing

Democracy thrives on authoritative, reliable information. At Google, supporting elections is a core part of our responsibility to our users and society. We’ll continue to build on our efforts to support the integrity of democratic elections around the world: deliberately, responsibly, and in partnership with others.


Resources

Read more about our work to support elections around the world