Responsible AI

What’s Driving Optimism around AI in 2024

Lila Ibrahim and Alexandra Reeve Givens share reasons to be optimistic about AI in 2024.

Feb 15, 2024 9 min read

In this first edition of the Future Together Q&A Series, we sat down with Alexandra Reeve Givens, CEO of the Center for Democracy & Technology, and Lila Ibrahim, Chief Operating Officer, Google DeepMind. They discuss reasons why they are optimistic about AI, how policymakers can engage to develop responsible AI policy, and what they hope to see from international cooperation around AI governance.

Future Together is part of the Digital Futures Project (DFP), a Google initiative that brings together diverse voices across sectors, creates a space for balanced policy conversations, and informs global policy debates on responsible AI policy solutions.

Q: 2023 was a massive year for AI, and we don't see things slowing down. Thinking about 2024, what are you most optimistic about?

Alexandra Reeve Givens: 2024 will be the year where we move from high-level principles about AI governance into action. The EU’s AI Act will push companies and regulators to address hard questions about implementation, and in the U.S. the Biden Executive Order will see government agencies issue detailed guidance about their own uses of AI, as well as private sector uses. As an advocate, I’m optimistic that this is the year for advancing more detailed, actionable norms to ensure this transformative innovation flourishes responsibly.


Lila Ibrahim: I’m also fascinated by what we’ll see this coming year and beyond. I've spent close to three decades working at the intersection of technology and its impact on society. It’s clear to me that we are just scratching the surface of what is possible with AI. We’re working to crack seemingly impossible, foundational and long-term challenges that benefit people’s lives around the world.

That’s why last year we delivered major product and scientific breakthroughs - from discovering new and potentially transformational materials with GNoME, to classifying genetic mutations as benign or harmful with AlphaMissense, to predicting extreme weather events with GraphCast.

We ended the year announcing Gemini, our largest, most capable and most flexible model. In 2024, I’m excited to see the creative ways people use Gemini to boost their productivity and creativity. Gemini is especially good at explaining reasoning in complex subjects like math and physics. Tools like this can help learners, educators and school communities unlock potential in ways we can’t even imagine yet. I’m looking forward to using these tools with my own children at home.

Q. What are some of the key AI policy challenges and opportunities that you expect to see in the year ahead?

ARG: Moving from principles to policy is harder than it looks. We can all agree on core values, like AI systems shouldn’t be biased. But what does that mean in practice for AI developers and deployers? What expectations should be enshrined in law, and how do we create accountability? Where should liability fall between AI developers, deployers, users, and the hybrid models in between – and how does that change depending on use case?

LI: Fulfilling the potential of AI and managing risks is not something any of us can do alone. Governments, industry, and civil society will need to continue to partner in these efforts and share knowledge and expertise. Given the truly transformational benefits of AI for public services such as healthcare and education, it’s critical we get this right.

ARG: Policymakers are trying to write laws that will stand the test of time across different applications of a constantly evolving technology. The bipartisanship and rigor with which policymakers are approaching this work is encouraging.

LI: As you know, we're just getting started in tackling some of these. Last year we saw a flurry of new governance initiatives and institutions, from the UK and US AI Safety Institutes to the industry-led Frontier Model Forum. In 2024, we will see these entities further define their goals and ways of working, including how to collaborate with each other. For example, I’m excited to see the Frontier Model Forum work with the new AI Safety Institutes to make progress on evaluation methods, deepening our understanding of model capabilities, and advancing frontier AI safety to the benefit of everyone.


Q: How can policymakers partner together with civil society and the creators of AI tools to develop responsible AI policy?

LI: I think to ensure the safe development and deployment of AI models, we need to see an inclusive multi-stakeholder approach to governance, which brings together the perspectives of governments, companies, academics, and civil society.

ARG: I agree. We need smart minds working together to develop effective policy. Importantly, legislation isn’t the only path we should prioritize. Because legislation takes time, and because laws typically leave room for interpretation as to what they require, now is a vital time for industry and civil society to work together on best practices for responsible AI development and deployment.

LI: On Alexandra’s point about legislation, policymakers need to be deliberate about inclusion. That means involving professional communities early on to determine how AI can best be used in their sectors, and engaging with groups that are typically not at the table: inviting them to share knowledge, and funding them to contribute to the work on how AI is introduced or regulated.

ARG: To expand on that thought, right now too many companies are working in silos, rapidly crafting new usage or development policies as they’re faced with each new challenge. That approach may have been understandable in 2023’s hectic news cycle, but it isn’t sustainable. I hope 2024 is a year of more sophisticated multi-stakeholder engagement on what best practices and policies look like, with mechanisms for transparency and accountability. This will build trust and shape consensus norms even as legislative and policy efforts develop.

LI: We're already seeing signs of this multi-stakeholder approach. The AI Safety Summit the UK Government hosted in November, with US Vice President Kamala Harris, intentionally pulled in diverse perspectives, with representatives from organizations such as the African Commission on Human and Peoples’ Rights, the Center for Democracy & Technology, and the Partnership on AI.

The question before us now is how we build on that engagement to make sure that all communities are involved in deciding what our future with AI could look like.


Q. International coordination on AI governance will be essential for unlocking the benefits of AI while ensuring the technology is developed and used responsibly. Is there one thing you’re hoping to see from civil society, governments and multilateral institutions as they work toward global frameworks to help govern AI?

LI: In 2023 we witnessed a surge of international cooperation around AI governance. We’re delighted that in the next 12 months, South Korea and France will host Summits similar to the one in the UK. We also have the G7 Code of Conduct, the US’ Executive Order on AI, and the UN’s High-level Advisory Body on AI. These collective efforts are laying a solid foundation for cross-border collaboration on AI regulation.

As we move forward, we’ll need to focus on global standards both for how to enable AI to tackle our biggest challenges and for its responsible development and use. The White House has set a baseline for how governments should approach the uptake of AI for broad benefit, as well as the kinds of responsibilities that companies have. We also welcome the opportunity for further engagement on the EU’s AI Act. Now we have to build on this momentum by reaching broader international consensus on these commitments.

ARG: To that point, the UN’s High-Level AI Advisory Body has an excellent interim report on what international coordination could look like. For me, clear areas for progress include identifying risks and mitigations; harmonizing (which needn’t mean copying) regulatory approaches; supporting standards development; and driving a more even distribution of resources to regions around the world. There are important questions about where such functions should reside, including the role of new versus existing institutions – but progress on some of these elements can start now, and already is.

Q. What do we need to get right in the coming year for AI to have the greatest benefit to society?

ARG: We’re in a beautiful moment of policy optimism for AI. Around the world, voices from industry, civil society, and government alike are expressing the need for effective governance solutions – and working with urgency to create them. What a loss it would be if this transformational technology becomes yet another casualty of political stalemate. This is the year to make good on the calls for governance, with actionable progress by governments and companies alike.

LI: AI has the world’s attention. Now, it needs to earn the world’s trust. That means working to build relationships with the public and private sectors, civil society, academia and the general public, so that people trust it enough to use it. It also means developing AI systems and forms of governance that make sure this technology will benefit all of society.

For the benefits of AI to materialize in an equitable way, we need to bring a diversity of perspectives into the development and deployment of AI. So we will keep building on projects that promote public education, inclusion and awareness.

To take one example, at Google DeepMind we are working with groups like Raspberry Pi to develop our ‘Experience AI’ program, which gives teachers co-designed, adaptable lesson plans to promote AI literacy among secondary school students. Through this program, we hope to reach over 100,000 young people aged 11-14, with a focus on students from underrepresented groups. We hope to keep building trust in AI by helping people to understand how it is made, how it affects their lives, and how they can use AI to help shape their future – and the world’s.

Alexandra Reeve Givens is the CEO of the Center for Democracy & Technology, a nonpartisan, nonprofit organization based in Washington D.C. and Brussels that works to protect human rights and democratic values in the digital age. She is a frequent public commentator on the responsible design, use and governance of emerging technologies. In 2023, she participated in the U.S. Senate’s AI Insight Forums and testified before three other Congressional committees; she also served as a civil society delegate to the UK AI Safety Summit, the U.S.-EU Trade & Technology Council’s Lulea meeting, and the technology track of the Summit for Democracy.

Lila Ibrahim is Chief Operating Officer of Google DeepMind, overseeing how the organization operates, builds responsibly, and engages with the external world. Lila has helped establish, scale and shape the values of multiple global technology organizations over the past three decades, through engineering and business leadership roles at Intel, Kleiner Perkins Caufield & Byers, and Coursera.