TRUSTWORTHY INFORMATION & CONTENT

Protecting access to trustworthy information online

OVERVIEW

When users come to Google, they expect helpful and accurate information. To meet this need, we take a comprehensive approach to content responsibility: elevating authoritative sources, providing critical context to users, and setting clear, transparent rules for what is allowed on our platforms.


Our policy priorities for ensuring users’ access to trustworthy online information and content

  - Combating harmful content and disinformation
  - Making it easier to determine the origins and history of content
  - Empowering users through information literacy


Combating harmful content and disinformation

We work to protect everyday users from harmful content. Making our products safer for everyone is core to the work of many different teams across Google and YouTube. When it comes to the information and content on our platforms, we take seriously our responsibility to safeguard the people and businesses using our products, and we do so with clear, transparent policies and processes.


Making it easier to determine the origins and history of content

As we continue to bring AI to more products and services to help fuel creativity and productivity, we are focused on helping people better understand how a particular piece of content was created and modified over time. We believe it’s crucial that people have access to information about the source and history — the provenance — of digital content, and we are investing heavily in tools and innovative solutions to provide it.


Empowering users through information literacy

We help users make informed decisions by providing them with context and creating digital tools designed to verify the authenticity and accuracy of online media, video, and reporting. We're also increasingly focusing our information literacy efforts on our own tools, using new and emerging AI technologies to provide essential transparency in the rapidly evolving landscape of generative media.


The Google Exec Reinventing Search in the AI Era

On the latest episode of the WSJ’s Bold Names podcast, Liz Reid, VP, head of Google Search, speaks to WSJ’s Christopher Mims and Tim Higgins about transforming search for the age of AI.

FAQs

How we’re addressing the rise of deepfakes and misinformation
Our approach to smart content regulation
How we’re unlocking AI’s benefits for creative industries
Our approach to training AI models
How we’re addressing the rise of deepfakes and misinformation

Establishing transparency standards for AI-generated content

We believe AI holds incredible potential, but unlocking it responsibly means putting safeguards in place and developing tools that help people identify synthetic content. To help promote authenticity in the AI era, we’ve invested in content provenance technologies. These include content credentials, which use cryptographically signed metadata to securely convey the origin and editing history of media files, and SynthID, which includes state-of-the-art watermarking tools so AI-generated content can be easily and reliably identified.
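As a loose illustration of the signed-metadata idea behind content credentials, the toy sketch below signs a provenance record and rejects it after tampering. It uses a shared-secret HMAC purely for simplicity; real content credentials (per the C2PA specification) use public-key signatures and a standardized manifest format, and none of the field names here are part of that format.

```python
import hashlib
import hmac
import json

SECRET = b"demo-key"  # stand-in; real content credentials use public-key signatures

def sign_metadata(metadata: dict) -> dict:
    """Attach a signature over canonicalized provenance metadata."""
    payload = json.dumps(metadata, sort_keys=True).encode()
    sig = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return {"metadata": metadata, "signature": sig}

def verify_metadata(credential: dict) -> bool:
    """Recompute the signature; any edit to the metadata invalidates it."""
    payload = json.dumps(credential["metadata"], sort_keys=True).encode()
    expected = hmac.new(SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, credential["signature"])

# Hypothetical provenance record for a media file.
cred = sign_metadata({"source": "camera-model-x", "edits": ["crop"]})
assert verify_metadata(cred)

cred["metadata"]["edits"].append("ai-inpainting")  # tampering breaks verification
assert not verify_metadata(cred)
```

The point of the sketch is the property, not the mechanism: because the signature covers the full editing history, a consumer can detect any undisclosed modification to the record.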

SynthID 
Google DeepMind’s tool for watermarking and identifying AI-generated content to increase digital transparency.

About this image
Search features that provide essential context on an image’s history, metadata, and AI-labeling to help users verify visual content.

Our approach to smart content regulation

A balanced framework for evolving content challenges

A smart regulatory framework is essential to addressing harmful content appropriately. Our practices are informed by four key principles, which form the basis for an effective regulatory framework:

  1. Shared Responsibility: Tackling illegal content is a societal challenge, one in which companies, governments, civil society, and users all have a role to play. In some cases, content may not be clearly illegal, either because the facts are uncertain or because the legal outcome depends on a difficult balancing act. In turn, courts have an essential role to play in fact-finding and reaching legal conclusions on which platforms can rely.
  2. Rule of law and creating legal clarity: It’s important to clearly define what platforms can do to fulfill their legal responsibilities, including removal obligations. An online platform that takes other voluntary steps to address illegal content should not be penalized.
  3. Flexibility to accommodate new technology: While laws should accommodate relevant differences between platforms, given the fast-evolving nature of the sector, laws should be written in ways that address the underlying issue rather than focusing on existing technologies or mandating specific technological fixes. 
  4. Fairness and transparency: Laws should support companies’ efforts to be transparent about their content removals through transparency reports, appropriate notices, and appeals processes that balance different policy goals at stake.

Google's Approach to Content Regulation
Our core principles for balancing freedom of expression with necessary removal of harmful content.

How we’re unlocking AI’s benefits for creative industries

Driving collaboration between developers and content publishers

AI is a shared opportunity, with the potential to expand the realms of science, commerce, and creativity. We're committed to working with all the stakeholders in the ecosystem to create a shared framework where both creators' rights and innovation flourish. Through tools like YouTube's Likeness ID and our advocacy for passage of the NO FAKES Act, we're working to establish protections against the unauthorized public use of AI-generated likenesses (digital replicas).

Expanding likeness detection to civic leaders and journalists
We’re expanding likeness detection to a pilot group of government officials, journalists, and political candidates.

A practical approach to creative content and AI training
Our policy recommendations for balancing the interests of creators and AI developers by acquiring content responsibly and lawfully, such as by giving websites the ability to opt out of having content or information on their sites used for AI training and by exploring new types of partnerships.

YouTube Copyright and Rights Management
YouTube’s guide to legal frameworks and digital tools for creators to protect and manage their intellectual property.

We support the NO FAKES Act
Our endorsement of federal legislation to protect creators from unauthorized AI-generated likenesses.

Our approach to training AI models

The role of public information in developing AI and the importance of training responsibly

We train our models primarily on available data from the open internet, which we believe is a transformative and fair use that enables innovation. We believe that training on publicly available information enables the development of responsible transformative technology while respecting the rights of creators, and we advocate for maintaining balanced copyright frameworks to ensure the U.S. remains a global leader in this field.

Fair Use for AI Innovation
Why the legal doctrine of fair use is essential for training the next generation of AI models and fostering global innovation.


Our Approach to Protecting AI Training Data

Partnerships to improve our AI products
To help improve our AI capabilities, we engage in partnerships that include the delivery of content in a few key areas: closed and offline datasets, enhanced metadata and signals, and real-time structured factual information for verification purposes.

Web Publisher Controls
We launched Google-Extended, a control that web publishers can use to manage whether their sites help improve Gemini and Vertex AI generative APIs, including future generations of models that power those products.
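For illustration, Google-Extended is honored as a user-agent token in a site's robots.txt file, so a publisher who wants to keep a site out of this training while remaining crawlable for Search could add a rule like the following (the path shown is a placeholder):

```
User-agent: Google-Extended
Disallow: /
```

Because this is a distinct token from Googlebot, blocking Google-Extended does not affect how the site is crawled or ranked in Search.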

Partnerships