Kent Walker (Google) and Dr. Arati Prabhakar (OSTP, the White House) speaking at the Aspen Security Forum.

On the Ground: Key Takeaways from the 2023 Aspen Security Forum

Executing on the promise of AI will require cross-sector collaboration.

The 14th annual Aspen Security Forum took place in Aspen, Colorado on July 18 – 21.

America’s premier national security and foreign policy conference, the event brought together leaders in government, business, academia, and media for discussions about AI and geopolitical opportunities and risks “over the horizon.”


Kent Walker (Google) speaking at a panel on “The Promise and Peril of AI.”

AI will have a transformative impact on security and online safety

Safety and security offer some of the most exciting, transformational applications of AI.

Royal Hansen, Google’s VP of Privacy, Safety, and Security, pointed out that security analysts at Google have already used AI to streamline manual aspects of their jobs, helping increase their efficiency. Likewise, Laurie Richardson, Google’s VP of Trust and Safety, pointed to how AI has helped Google keep users safe from spam and malware.

Rob Silvers, Under Secretary for Strategy, Policy, and Plans at the Department of Homeland Security, also underscored the importance of collaboration when it comes to AI and security. For example, the Cyber Safety Review Board is a joint effort in which public and private partners (including Google) conduct after-action reviews of the most significant cyber incidents in the United States and then publish recommendations to the broader community.


Leaders from Google and the Department of Homeland Security discussing AI.

We don’t need to start from scratch on regulation

While there’s much about AI that feels new and unique, multiple participants noted that previous experiences with once-new technologies are critical to guiding how we develop and create guardrails for AI.

Regulation, too, has precedents to draw from. Dr. Arati Prabhakar, Director of the Office of Science and Technology Policy at the White House, pointed out that, when it comes to crimes like fraud, relevant laws are already in place to protect citizens. The more significant concern, she said, is society’s ability to enforce and regulate against the harms and threats as AI accelerates and changes how people can do things.

“[AI has] very broad applications and it turns out that a lot of the harms we are concerned about happen to already be illegal,” she said.

We talk about the notion of innovating globally and responsibly, and doing that together, doing that in a way that is inclusive and brings in lots of different views.
Kent Walker, Google’s President of Global Affairs

Working together is critical

Since AI is such a wide-ranging technology, developing it responsibly will require a robust collaborative effort.

“Ultimately, this has to be a system of governance,” said Kent Walker, Google’s President of Global Affairs. Such a system, he said, would require internal and external collaboration among a wide array of groups, including technology companies, government regulators, and international bodies.

In the end, realizing the promise of AI while minimizing its risks is about scaling the technology in a sound, responsible way, together.

For other Google perspectives on the future of AI, read about our commitment to advance bold and responsible AI together, and our AI principles. You can also watch most of the panels and events from this year’s Aspen Security Forum here.
