How AI Can Reverse the Defender’s Dilemma
Strengthening security and protecting critical infrastructure
We face a steady stream of evolving threats. Today, AI helps us detect and fix vulnerabilities in seconds, enabling cybersecurity professionals to act more nimbly and proactively than before. By applying this technology at scale, we can bolster the security ecosystems of our schools, hospitals, and electric grids.
Our policy priorities for advancing best-in-class security
- AI for cyberdefense
- Secure AI development
- Post-quantum cryptography
- Integrity across supply chains
AI for cyberdefense
With AI, cybersecurity professionals can now deploy preemptive security upgrades, scan software code for problems while it’s being written, and analyze vast datasets at lightning speed to spot malicious files and proactively contain threats. This is our once-in-a-generation moment to change the dynamics of cyberspace for the better — a chance for profound transformation, not incremental gains.
Our commitment to secure AI development
At Google, we build our tools to ensure that protections against AI threats are “always on” by default. But maximizing AI’s benefits for cybersecurity while safeguarding against its risks requires us to work together — with experts, everyday citizens, private companies, and policymakers — to help develop, deploy, and govern AI in a way that keeps people and their data safe and secure.
Post-quantum cryptography
Future quantum computers hold promise to solve once-impossible problems in drug discovery, materials science, energy, and beyond. That power comes with an urgent responsibility as we prepare for the profound security implications of the post-quantum era.
Maintaining integrity across supply chains
From working to eliminate common vulnerabilities at the source to categorizing and tracking all supply chain assets, code, and training data, we utilize our Secure AI Framework (SAIF) to ensure that every element of the AI lifecycle is monitored for unauthorized use. And as a core partner to governments, we’ve scaled our internal standards to secure national supply chains against sophisticated global threats.
Visibility and context on the threats that matter most
We’re moving beyond reactive intelligence — using threat intelligence data to build rigorous testing frameworks and using AI agents to find vulnerabilities before adversaries do. Google’s Threat Intelligence Group (GTIG) focuses on identifying, analyzing, mitigating, and eliminating entire classes of cyber threats against Alphabet, our users, and our customers.
A Framework for Evaluating Emerging Cyberattack Capabilities of AI
An in-depth report on the methodology for testing whether LLMs could be misused to assist in developing cyberattacks.
From Naptime to Big Sleep: Using Large Language Models to Catch Vulnerabilities in Real-World Code
Our report on Big Sleep, a Google agentic model that can autonomously identify vulnerabilities with human oversight before they can be exploited.
Google’s Threat Analysis Group
Regular intelligence updates on state-sponsored hacking, information operations, and high-level cyber espionage.
Incorporating security from design to deployment
"Secure by Default" and "Secure by Design" are often used interchangeably, but they actually represent distinct approaches to building secure systems. Secure by Default focuses on ensuring that the system's out-of-the-box default settings are set to a secure mode. Secure by Design is a proactive approach that emphasizes incorporating security considerations throughout the entire software development lifecycle. It's about anticipating potential threats and vulnerabilities early on and making design choices that mitigate those risks.
Secure by Design at Google
A technical overview of our security-first architecture that embeds protection into the core of every product.
An Overview of Google’s Commitment to Secure by Design
Our policy proposal aligns Google’s security practices with international cybersecurity standards and government recommendations.
Strengthening defenses through products and policy
We combine built-in technological protections in our products with acceptable use policies to prevent, detect, and respond to fraud and scams. As we continue to scale our industry-leading practices, we help keep users safe online through proactive partnerships with key organizations, experts fighting fraud, and awareness-raising initiatives.
AI-powered tools enable us to scale and speed up our ability to detect and respond to threats, such as stopping spam calls in Android and blocking >99.9% of phishing emails in Gmail. We further empower users with safety tips and insights on how to spot scams in the wild and report fraud through transparent channels — helping people stay safer online every day.
Google has signed the Industry Accord Against Online Scams and Fraud
Google joins online industry partners to unify our collective capabilities, share threat intelligence, and coordinate defenses against global scammers.
Tackling Scams & Fraud Together
Our whitepaper detailing our approach to fighting scams and our principles for doing so effectively, together.
Legal action and new legislation to fight scammers
Our policy actions to dismantle sophisticated fraud networks and keep people safe.
Leveraging Gemini models to detect sophisticated scam messages
A technical brief on how on-device AI models can better detect subtle conversational patterns used in sophisticated baiting scams.
Our framework to secure machine learning (ML) models
Google has a long history of driving responsible AI and cybersecurity development. Our Secure AI Framework is distilled from the body of experience and best practices we’ve developed and implemented, and reflects our approach to building ML and AI-powered apps with responsive, sustainable, and scalable protections for security and privacy. We will continue to evolve and build SAIF to address new risks, changing landscapes, and advancements in AI, enabling secure AI deployment for governments, businesses, and organizations.
Google’s secure AI framework (SAIF)
Our conceptual framework to help public and private sectors securely deploy AI systems while mitigating unique model-based risks.
A multi-layered approach to defend against security risks
Protecting AI models against attacks requires “defense-in-depth” — using multiple layers of protection to constrain potential harm while preserving maximum utility. We also develop our latest AI models and agents with increased protections via automated red teaming, helping us spot weak points before bad actors can exploit them.
Responsible AI Progress Report
A 2026 update on our adherence to our AI Principles, showing how we continue to advocate for safety, fairness, and accountability in AI.
Advancing Gemini’s Security Safeguards
How we implement technical guardrails and red-teaming processes to keep our most advanced models secure.
GTIG AI Threat Tracker: Distillation, Experimentation, and (Continued) Integration of AI for Adversarial Use
This February 2026 report offers an update on advances in threat actors’ use of AI tools and the mitigation actions taken with Google DeepMind.
Partnerships and perspectives
- Oasis Open: Introducing the Coalition for Secure AI, an OASIS Open Project
- The UK National Cyber Security Centre & the U.S. Cybersecurity and Infrastructure Security Agency: Guidelines for secure AI system development
- U.S. Cybersecurity and Infrastructure Security Agency & 17 U.S. and international partners: Secure by Design: Shifting the Balance of Cybersecurity Risk
- Information Week: 10 reasons for optimism about cybersecurity
- FIDO Alliance: An inflection point in the journey to passwordless
- Aspen Institute Financial Security Program: United We Stand: A National Strategy to Prevent Scams
- R Street: The Transformative Role of AI in Cybersecurity
Studies, reports, and whitepapers
- Lessons from Defending Gemini Against Prompt Injections (Google DeepMind)
- Our approach to Protecting AI Training Data (Google Research)
- An Approach to Technical AGI Safety and Security (Google DeepMind)
- Adversarial Misuse of Generative AI (Google Cloud)
- Cybersecurity Forecast 2026 Report (Google Cloud)
- Leveraging AI through the Global Signal Exchange to tackle scams (Google)
- Transparency Report (Google)
- AI Risk and Resilience: A Mandiant Special Report (Mandiant Consulting)