Strengthening Cybersecurity in an AI-Powered Future
Overview
Partnering for a safer web
AI promises immense opportunity for cybersecurity – blocking attacks sooner and resolving threats faster – but it also creates new risks. As soon as a new technology is introduced, threat actors and cyber criminals look for ways to exploit it. Getting security right will be a critical requirement for the safe adoption of AI. Society cannot achieve the promise of AI without a strong, trustworthy foundation.
Digital Futures Project with support from Google.org
As part of the Digital Futures Project, Google.org established a fund to provide grants to leading think tanks, academic institutions, and private sector stakeholders around the world to facilitate dialogue and inquiry into this important technology. Researchers' views are independent and intended to advance public understanding of these issues. Google does not endorse any specific proposals or recommendations.
Security experts – from government, the private sector, academia, and civil society – are exploring how the rapid advancement and public availability of new AI tools are reshaping the cybersecurity landscape.
To help policymakers and other stakeholders understand how we can create a cyber future where defenses are more efficient, defenders collaborate more effectively, and AI tools are more resilient against security threats, the independent research conducted by Digital Futures Project grantees, funded by Google.org, explores two key priorities:
- Ensuring that AI development shifts power away from cyber attackers and towards cyber defenders; and
- Facilitating collaboration to strengthen the security ecosystem.
While AI holds immense promise, particularly in cybersecurity applications, to harness its potential we must understand its current benefits, anticipate its advancement, comprehend the risks it can pose, and address the threats to its security.
— Haiman Wong, Amy Chang, and Brandon Pugh, R Street.
Ensuring that AI development shifts power away from cyber attackers and towards cyber defenders
Research shows that AI is already being used to streamline and even automate threat detection and response, helping security teams identify potential vulnerabilities through “red-teaming” and simulated attacks, and enhancing security analysis. It also shows how future AI developments are expected to significantly increase the speed and scale of these tools. Importantly, the research identifies actions that can be taken to protect against AI risks, including establishing clear accountability and oversight frameworks for AI systems, investing in AI security research, promoting collaboration to address AI security challenges, and more.
The actions that governments, companies, and organizations take today will lay the foundation that determines who benefits more from this emerging capability—attackers or defenders.
— Global Cybersecurity Group, Aspen Institute.
Facilitating collaboration to strengthen the security ecosystem
Because governments, businesses, and other institutions have different risk tolerance levels, researchers point out that it’s difficult to land on a one-size-fits-all solution to cyber governance. Government, industry, and academia will have to work together to find the appropriate balance between innovation and security. The research examines how risk-based frameworks, voluntary safeguards, and legal standards can each be part of an approach that advances AI innovation while strengthening the security ecosystem.