Oregon

How researchers are using open-source AI to decode an entire forest’s soundtrack to improve species survival

An interview with Matthew Weldy, a PhD student at Oregon State University College of Forestry who’s using Google AI to study threatened wildlife.

As a PhD student in the Pacific Northwest, I’ve seen, up close, the tension caused by balancing timber extraction and conservation of forest species. Back in 1994, when strong evidence showed the decline of spotted owl populations, the Clinton administration established a task force to develop the Northwest Forest Plan, which balanced the competing needs of timber harvest and the conservation of old-growth forests. This was crucial because, at that point, much of the Pacific Northwest’s old-growth forest had already been logged, leaving very few patches for species that depend entirely on it for their lifecycle.

A critical component of that plan was monitoring these threatened and endangered species—a labor-intensive process that involved field biologists driving out into the woods at night, playing audio recordings of spotted owl calls, and listening for an owl calling back. In that initial phase, biologists tracked demographics like survival rates, population growth rates, and how many young were hatched every year. The program is now approaching 30 years of monitoring these populations.

In 2017, we began exploring the use of an emerging technology—acoustic recorders—that allowed us to start collecting acoustic data at scale in a non-invasive way. Every day, each field recorder would collect four hours of sound at dawn and four hours at dusk. The problem is that you end up with an immense pile of audio data that is completely infeasible for someone to listen to from start to finish. You also have a needle-in-the-haystack problem: a really rare species calls infrequently, and its few recordings are buried somewhere in that mass of audio.
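To make that scale concrete, here is a rough back-of-envelope sketch. The recorder count and deployment length are illustrative assumptions, not the study’s actual numbers; only the eight hours per recorder per day comes from the description above.

```python
# Back-of-envelope audio volume for an assumed deployment (illustrative only).
HOURS_PER_DAY = 4 + 4       # four hours at dawn plus four at dusk
RECORDERS = 1_000           # assumed number of field recorders
DAYS_DEPLOYED = 120         # assumed length of one field season, in days

total_hours = HOURS_PER_DAY * RECORDERS * DAYS_DEPLOYED
years_of_listening = total_hours / (365 * 24)
print(f"{total_hours:,} hours of audio ≈ {years_of_listening:.0f} years of nonstop listening")
# -> 960,000 hours of audio ≈ 110 years of nonstop listening
```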

Initially, we focused on developing computational tools to detect spotted owls and other low-frequency vocalizing species. In 2020, Google developed a large-scale avian classification model called Perch. Google’s open-source model has been trained on a huge, global dataset and has learned a lot about distinguishing different types of bird sounds.

One of the benefits of using Google’s open-source AI is that we can feed our own audio into this model, which can then quickly extract feature embeddings—a kind of fingerprint of the sound—that we can use to make predictions about other bird species we're interested in.

Matthew Weldy, PhD student at Oregon State University College of Forestry
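To show what that workflow can look like in practice, here is a minimal sketch of pulling embeddings out of the open-source Perch model and fitting a small classifier on top of them. The model handle, the `infer_tf` call, the 32 kHz sample rate, the 5-second window, and the placeholder audio and labels are all assumptions made for illustration rather than a description of the team’s actual pipeline; the Perch repository (github.com/google-research/perch) documents the real interface.

```python
# Minimal sketch: extract Perch embeddings ("fingerprints") from audio windows
# and fit a lightweight classifier for a species of interest.
# Assumptions: model handle, infer_tf signature, 32 kHz input, 5-second windows.
import numpy as np
import tensorflow_hub as hub
from sklearn.linear_model import LogisticRegression

SAMPLE_RATE = 32_000     # assumed model input rate
WINDOW_SECONDS = 5       # assumed analysis window length

# Load the open-source bird vocalization model (handle is an assumption).
model = hub.load("https://tfhub.dev/google/bird-vocalization-classifier/4")

def embed(window: np.ndarray) -> np.ndarray:
    """Return the embedding vector for one audio window."""
    # The model outputs logits over its training species plus a general-purpose
    # embedding; for a species it wasn't trained on, we keep only the embedding.
    _logits, embedding = model.infer_tf(window[np.newaxis, :])
    return embedding.numpy()[0]

# Placeholder data: in practice these would be labeled 5-second windows cut
# from the field recordings (target species present / absent).
rng = np.random.default_rng(0)
windows = rng.normal(size=(8, SAMPLE_RATE * WINDOW_SECONDS)).astype(np.float32)
labels = np.array([0, 1, 0, 1, 0, 1, 0, 1])

features = np.stack([embed(w) for w in windows])
classifier = LogisticRegression(max_iter=1000).fit(features, labels)

# New audio can now be scored window by window to flag likely detections.
scores = classifier.predict_proba(features)[:, 1]
```

Because the expensive, general-purpose learning is already baked into the pretrained model, the lightweight per-species classifier on top can be retrained quickly whenever new labeled examples become available.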

Every year, we collect enough audio data that if you were to play it end to end, it would last longer than the United States has been around. Using Google's machine learning bird classifier, Perch, we’re able to process all of our acoustic data—millions of hours—in less than two months.

Perch is enabling more real-time engagement with the forest, allowing us to process data at a much wider sampling scale and a much finer temporal scale than we’ve ever been able to before, which gives land managers better information for data-driven decision-making and policy.

Matthew Weldy, PhD student at Oregon State University College of Forestry
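As a rough sanity check on that comparison, here is the arithmetic; the current year is the only assumption added to the quote above.

```python
# Roughly how many hours has the United States existed? (1776 to the mid-2020s)
YEARS_SINCE_1776 = 2025 - 1776        # assumed current year
HOURS_PER_YEAR = 365.25 * 24          # ~8,766 hours
us_age_hours = YEARS_SINCE_1776 * HOURS_PER_YEAR
print(f"{us_age_hours:,.0f} hours")   # ~2,183,000 hours
# So yearly audio totaling "millions of hours" really does outlast the country's history.
```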

In the next few months, we’ll have an update on these bird populations that will inform the next round of revisions to the Northwest Forest Plan. There are a number of projects where we’re using Perch, like the HJ Andrews data collection, led by Oregon State University, where we’ve been doing long-term research for 42 years on topics like how old forests provide important buffers against climate change.

We’re just now moving into a period where AI and automated approaches to monitoring biodiversity are becoming really important. One of the biggest reasons is that many species are in decline, and there are simply not enough people to conduct surveys for all of them at once. Sensor-based technologies like acoustic recorders can empower conservation and monitoring efforts because they capture data for many species at once.

As the footprint of AI in conservation and ecology continues to grow, we are developing better classifiers and gaining better insights into how ecological communities function. The capacity for large-scale, rapid analysis that AI provides is no longer just promising—it’s an essential ally in the race to understand and protect our planet’s biodiversity amid mounting environmental pressures.
