Think Sonoma County, and picturesque valleys and vineyards come to mind. But the locale is also home to rich and remarkable biodiversity. And Soundscapes to Landscapes, an initiative to monitor biodiversity in the county, aims to document just that.
Over the past five years, from mid-spring to late summer here in California’s wine country, the initiative has collected a massive amount of sound data by placing acoustic recorders at 1,300 locations across the county. The project, run by Sonoma State University, the conservation NGO Point Blue Conservation Science, and several other partners, equipped volunteer citizen scientists with recorders and worked with private landowners to collect audio, which was then processed and classified using artificial intelligence technology.
“The idea is to detect individual species, or to find information that tells you something new about the types of sounds there,” Leonardo Salas, a quantitative ecologist at Point Blue Conservation Science, told Mongabay in a video interview. “On that basis, we can characterize entire environments.”
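Pipelines of this kind typically convert raw audio into spectrograms before a model scores them for target species. As a loose illustration only, not the project's actual system, a toy detector might flag clips whose spectral energy concentrates in a species' typical frequency band:

```python
import numpy as np
from scipy.signal import spectrogram

def band_energy_detector(audio, sr, f_lo, f_hi, threshold):
    """Toy stand-in for an AI species classifier: flag a clip if the
    fraction of spectral energy in [f_lo, f_hi] Hz exceeds a threshold."""
    freqs, _, sxx = spectrogram(audio, fs=sr)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    fraction = sxx[band].sum() / sxx.sum()
    return fraction > threshold, fraction

# Synthetic one-second clip: a 3 kHz "song" over quiet broadband noise.
sr = 22050
t = np.linspace(0, 1, sr, endpoint=False)
clip = np.sin(2 * np.pi * 3000 * t) + 0.1 * np.random.default_rng(0).normal(size=sr)

# The tone sits inside the 2.5-3.5 kHz band, so this clip is flagged.
detected, frac = band_energy_detector(clip, sr, 2500, 3500, threshold=0.5)
```

Real classifiers are trained neural networks rather than energy thresholds, which is why, as Salas notes below, they can still confuse acoustically similar sounds.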
The methodology has been effective in tracking changes in ecosystems and studying patterns of wildlife. Before the 2017 California wildfires, Soundscapes to Landscapes had placed audio recorders in a park. When examining the data after the fires, the team discovered a “predominance” of lazuli buntings (Passerina amoena), a songbird that had never been seen or heard in the park before the fires. Initially, the citizen scientist monitoring the park thought it was a flaw in the AI models. But he later concluded that the songbirds prefer burned areas and may have flown in after the fires, helping the team understand how the fires were changing the park’s ecosystem.
Audio data has been used to track, study and conserve wildlife for decades. In recent years, bioacoustics has gained prominence as a non-invasive way to study wildlife. It can be used to study entire landscapes and detect species, as Salas’s team does, as well as to understand animal behavior and communication patterns.
The ability of audio recorders to collect large amounts of data can make them more efficient than traditional camera trapping and remote tracking methods. A 2020 study published in the journal Methods in Ecology and Evolution found passive acoustic monitoring to be “a powerful species monitoring tool,” detecting wild chimpanzees (Pan troglodytes) in Tanzania five times faster than visual methods. Another study, published in the journal Ecological Indicators in 2019, compared acoustic recorders with camera traps, finding the advantage of the former lay in their “superior detection areas, which were 100 to 7,000 times larger than those of camera traps.”
However, larger coverage areas mean larger amounts of data to analyze, making sound data analysis labor-intensive. Technological innovations such as artificial intelligence and machine learning have made the process easier. But conservationists say the technology still has a long way to go before audio data processing becomes faster and easier.
Salas says the AI models commonly used by Soundscapes to Landscapes expose these technological gaps. For example, in the past, models have mistaken the sound of a motorcycle engine for the cooing of a pigeon species, and confused the chatter of children with the calls of a quail. “There is tremendous capacity to monitor wildlife using sound data, but the technology is not there yet,” he says. “My concern is [whether] it could happen soon enough so we can keep up with how the planet is changing.”
Darren Proppe, who has used audio data to study songbirds in Texas for years, says he’s “skeptical of AI without any human ground-truthing.” Human intervention, he says, is needed not only to spot errors, but to raise bigger questions that automated analysis can’t answer.
“If I’m just looking for the presence or absence of a bird, a cougar, or an insect, sounds can confirm that,” Proppe, director of the Wild Basin Creative Research Center at St. Edward’s University in Texas, told Mongabay in a video interview. “But the bigger question would be: what are you missing? And people are really going to have to do some checking to make sure they aren’t being misled.”
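One common way to combine automated detectors with the kind of human checking Proppe describes is confidence-based triage: accept only the detections a model is most sure about and queue the uncertain ones for a reviewer. The sketch below is purely illustrative, with a hypothetical detection format and thresholds not taken from any project mentioned here:

```python
# Hypothetical triage of detector output: auto-accept confident hits,
# queue borderline ones for a human reviewer, discard the rest.
def triage(detections, accept_at=0.9, review_at=0.5):
    accepted, review_queue = [], []
    for species, confidence in detections:
        if confidence >= accept_at:
            accepted.append(species)
        elif confidence >= review_at:
            review_queue.append((species, confidence))
    return accepted, review_queue

detections = [("quail", 0.95), ("pigeon", 0.62), ("gunshot", 0.41)]
accepted, review_queue = triage(detections)
```

In this toy run, only the high-confidence quail detection is accepted automatically; the pigeon goes to human review, and the low-confidence gunshot is dropped. Where the thresholds sit determines how much labor the humans inherit.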
Access to low-cost, real-time monitoring and data transfer is another concern when it comes to processing bioacoustic data.
It’s a problem Daniela Hedwig knows all too well. As director of Cornell University’s Elephant Listening Project, she and her team have been listening to African forest elephants (Loxodonta cyclotis) that roam the rainforests of Central Africa. As a keystone species, the elephants play a vital role in maintaining and shaping the structure of the forest. The data collected by the project will be passed on to governments, who can use it to identify sites for conservation activities. The project is also collecting data that helps track poaching activities by detecting gunshots in the audio. But the inability to perform real-time monitoring, combined with inefficiencies in automated detectors, makes the process slow and laborious.
The data is collected from the recorders every four months, after which it takes Hedwig’s team nearly three weeks to go through and analyze the audio, which can often amount to as much as 8 terabytes, roughly 1,100 hours of 4K-quality video streamed on Netflix. “The reason is that the detectors are not perfect, and we have to go through each detection, look at it and decide whether it was really a gunshot or not,” Hedwig told Mongabay in a video interview.
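For scale, that comparison works out roughly as follows, assuming Netflix 4K streaming uses about 7 gigabytes per hour (a commonly cited estimate, not a figure from the article):

```python
# Rough arithmetic behind the "8 TB is about 1,100 hours of 4K video" comparison,
# assuming ~7 GB per hour for 4K streaming.
tb_collected = 8
gb_per_hour_4k = 7
hours = tb_collected * 1000 / gb_per_hour_4k  # roughly 1,100 hours
```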
Overcoming these challenges and integrating real-time monitoring, Hedwig says, will further advance bioacoustics technology. Given the enormous interest the field has received recently, she says she is optimistic.
“Imagine anti-poaching units sitting in their control room, and they can get real-time information about a poacher and say, ‘Hey, we need to send people and catch them,’” Hedwig says. “That’s going to be the big game changer.”