Security

Talking Trust, Resiliency, and AI at Aspen Security Forum 2024

Jul 23, 2024

Kent Walker, President, Global Affairs at Google & Alphabet

The following was originally posted to LinkedIn by Kent Walker, President, Global Affairs at Google & Alphabet on July 23, 2024.

At the 15th annual Aspen Security Forum last week, I had the great pleasure of joining Sir Jeremy Fleming, Jon Huntsman Jr., and Anne Neuberger on stage for a discussion about securing trust in the digital economy.

Steve Clemons moderated our discussion and was keen to find out whether AI will be a net positive for global security, or something that will make the threat landscape all the more perilous.

My perspective was that we live in a highly connected world in which we are all only as strong as the weakest link. But with the right foundations in place, AI has the very real potential to tilt the cybersecurity balance from attackers to defenders.

Building a Security Foundation for AI

At Google, we keep more people safe online than anyone else. But as we look ahead to AI, that’s not enough: We also want to do our part to ensure everyone up and down the supply chain is adhering to a robust set of standards.

That’s a major reason we worked with other leading companies to launch the Coalition for Secure AI (CoSAI) last week. Hosted by the OASIS Open global standards body, CoSAI is an open-source initiative designed to share tools and best practices for creating Secure-by-Design AI systems.

We have made significant investments in digital infrastructure, digital connectivity (through the most reliable and resilient subsea network), and digital security, as well as in cutting-edge AI. But we recognize that secure AI requires participation by the wider ecosystem, and we look forward to working with CoSAI as well as governments to make advances in the field.

What’s At Stake

Last year at Aspen we said the countries most likely to lead in competitiveness and AI cyber-defense would be the ones able to deploy and innovate with AI boldly and responsibly.

This year I was asked how democracies are doing on that front: are we still leading on AI capabilities and fundamental AI research? The answer is yes, mostly, but in some places there are signs our lead is not secure.

And given AI’s potential to deliver between $17 trillion and $25 trillion annually to the global economy by 2030 (an amount comparable to the entire US GDP), it’s a lead we can’t afford to lose.

So what do we do?

We can start by approaching AI with the same urgency as the space race of the 1960s. Remember that in 1964 the federal government spent twice as much as the private sector on R&D. By 2020 that had reversed, with the private sector spending four times what the federal government does.

While we don’t need government subsidies, we do need pro-innovation regulatory frameworks to accelerate medicine and drug discovery, advance the frontiers of materials science, and promote progress in quantum science.

Final Thoughts

During the conversation, Anne Neuberger rightly pointed out that digital security has no borders: actions in one part of the world can impact lives worldwide.

For AI, that suggests the need to align our frameworks whenever possible. Much like the internet, AI is a general-purpose technology that will touch everyone, everywhere.

We look forward to working with others to bake security and resilience in from the start, increase access to tools and training, and, ultimately, help address some of humanity’s biggest challenges.

You can catch up on our full discussion here: