Responsible AI

On the Ground: Key Takeaways from the 2023 Aspen Security Forum

Executing on the promise of AI will require cross-sector collaboration.

Aug 16, 2023

The 14th annual Aspen Security Forum took place in Aspen, Colorado, from July 18–21.

America’s premier national security and foreign policy conference, the event brought together leaders in government, business, academia, and media for discussions about AI, geopolitical opportunities, and risks “over the horizon.”


AI will have a transformative impact on security and online safety

Safety and security offer some of the most exciting, transformative applications of AI.

Royal Hansen, Google’s VP of Security, pointed out that security analysts at Google have already used AI to streamline manual aspects of their jobs, which has helped increase their efficiency. Likewise, Laurie Richardson, Google’s VP of Trust and Safety, pointed to how AI has helped Google keep users safe from spam and malware.

Rob Silvers, Under Secretary of the Office for Strategy, Policy and Planning at the Department of Homeland Security, also underscored the importance of collaboration when it comes to AI and security. For example, the Cyber Safety Review Board is a joint effort in which public and private partners (including Google) conduct after-action reviews of the most significant cyber incidents in the United States and then publish recommendations for the broader community.


We don’t need to start from scratch on regulation

While there’s much about AI that feels new and unique, multiple participants noted that previous experiences with once-new technologies are critical to guiding how we develop and create guardrails for AI.

Regulation, too, has precedents to draw from. Dr. Arati Prabhakar, Director of the Office of Science and Technology Policy at the White House, pointed out that, when it comes to crimes like fraud, relevant laws are already in place to protect citizens. The more significant concern, she said, is society’s ability to enforce existing laws and regulate against harms and threats as AI accelerates and changes what people can do.

“[AI has] very broad applications and it turns out that a lot of the harms we are concerned about happen to already be illegal,” she said.

“We talk about the notion of innovating globally and responsibly, and doing that together, doing that in a way that is inclusive and brings in lots of different views.”

- Kent Walker, Google’s President of Global Affairs

Working together is critical

Since AI is such a wide-ranging technology, developing it responsibly will require a robust collaborative effort.

“Ultimately, this has to be a system of governance,” said Kent Walker, Google’s President of Global Affairs. Such a system, he said, would require both internal and external collaboration among a wide array of groups, including technology companies, government regulators, and international bodies.

In the end, safeguarding the promise of AI while minimizing its risks means scaling the technology in a sound, responsible way, together.