On the Ground: Takeaways from the Technology Policy Institute’s Aspen Forum
Policymakers, industry leaders, and academics came together to discuss how to govern the digital future

Policymakers, industry leaders, and academics came together at the Technology Policy Institute’s (TPI) Aspen Forum, one of the premier tech policy events, to debate today’s biggest questions around tech policy and regulation. On the Ground summarizes three big takeaways from the sessions.
AI tools are meant to reduce our workload, but creating them responsibly is a herculean undertaking
With AI’s popularity skyrocketing, those creating the tools and models must account for both its short-term and long-term uses to ensure the tools are accessible, beneficial, and safe.
“I like to joke and say it's a little bit like raising a child,” said David Graff, Google’s Vice President of Global Policy and Standards. He added that Google must “think about how [we] constantly train and then reinforce and tune” AI models to truly understand their complexities, variables, and outcomes.
David explained that to meaningfully reduce human workloads, AI tools need to solve challenging math word problems, answer questions in new languages, and leverage chain-of-thought prompting to express their reasoning in words.
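To make that last point concrete, here is a minimal, hypothetical sketch of chain-of-thought prompting in Python: the prompt bundles a worked math word problem whose reasoning is spelled out step by step, nudging a model to show its own steps on a new question before giving a final answer. The example questions and the build_cot_prompt helper are illustrative assumptions, not anything presented at the forum or used at Google.

```python
# Hypothetical illustration of chain-of-thought prompting: include one worked
# example with explicit intermediate steps, then append the new question.

COT_EXAMPLE = """Q: A train travels 60 km in the first hour and 90 km in the
second hour. How far does it travel in total?
A: Let's think step by step.
Step 1: Distance in the first hour is 60 km.
Step 2: Distance in the second hour is 90 km.
Step 3: Total distance is 60 + 90 = 150 km.
The answer is 150 km."""


def build_cot_prompt(question: str) -> str:
    """Return a chain-of-thought prompt: worked example plus the new question."""
    return (
        f"{COT_EXAMPLE}\n\n"
        f"Q: {question}\n"
        "A: Let's think step by step."
    )


if __name__ == "__main__":
    # The resulting string could be sent to any large language model endpoint;
    # the model is then expected to reply with its reasoning steps, not just a
    # final number.
    print(build_cot_prompt(
        "A bakery sells 12 loaves in the morning and twice as many in the "
        "afternoon. How many loaves does it sell in a day?"
    ))
```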
AI tools are challenging how we think about ownership and copyright
The hot-button issues of content protection and copyright in a modern world, where tools such as generative AI play a role in creating content, were high on the agenda.
Should AI models have free access to copyrighted materials to use as training data? What about authorship? Who owns the materials that AI created or contributed to? Panelists discussed some of these questions during the session “What are the major AI policy questions.”
When profit is on the table, “people will figure out some set of contracts or arrangements or practices,” said Hal Varian, Google’s Chief Economist. Observing the similarities between this moment in time and the launch of YouTube in 2005, Hal expressed confidence that over time, this area “will see a normalization.”
The economic impact of new technology is seen through the productivity paradox, not GDP
How to quantify the economic impact of a technology that can’t be measured by gross domestic product (GDP) is summed up in what Hal Varian called technology’s “productivity paradox”: AI tools serve as a companion, assisting with tasks that are already taking place and making it easier to do what we already do, but that value is hard to capture in strict GDP terms.
The evolution of photography and smartphone cameras provides a good example of the challenge of quantifying technology’s economic impact. Where it was once easy to pinpoint photography’s monetary footprint, by accounting for the cost of developing film and prints, for example, the advent of digital and smartphone cameras ushered in a new reality and economic dynamic for the industry.
“Most photos are shared, not sold, so they don't show up in GDP,” Hal said. Still, the impact has been undeniable—and even more so as smartphones developed to the point where other age-old tools (like alarm clocks, maps, music players, and flashlights) became unnecessary.
The true impact of AI can be seen not in GDP, but in the employee experience: lower-skilled workers are using AI to upskill themselves and create broader professional opportunities.
When it comes to unlocking the potential of machine learning and AI technology, different levels of regulation around the world have created what Hal described as “a natural experiment at a global level.” These differing regulatory approaches will shape how responsible AI develops.
For other Google perspectives on the future of AI, see previous On the Ground articles: “On the Ground: Key Takeaways from the 2023 Aspen Security Forum” and “On the Ground: Debating the Digital Future Forum with 3 questions about navigating the AI frontier.”