Wageningen University and Research

Because sound travels far underwater, it is a privileged means of communication for many marine species. Unravelling these communications can help monitor populations, detect the presence of species, and estimate fish abundance and diversity in ecosystems. With the recent development of autonomous recording units, large amounts of data have become available, to the point where it is very costly, and sometimes no longer feasible, to have them analysed by acousticians. Recent progress in data science and the development of AI have started to enable automatic acoustic data processing. However, the scarcity of reference sounds and annotated data has impaired the development of automatic detection and identification of underwater sounds.

To fully utilize the potential of the information contained in underwater soundscapes, we developed an AI model that automatically detects sound events in audio recordings collected with hydrophones. The approach was divided into two steps. First, obtaining large volumes of annotated data from different environments using a few-shot learning framework adapted for bioacoustic detection. Second, training a robust deep learning model to automatically detect sounds in long recordings. This model is being integrated into an embedded system to perform real-time detection in situ. In the near future, this will allow different sensors to be combined in the biodiversity sensing box project to improve the box's ecological assessment capabilities, for example by triggering the sampling of environmental DNA when a species of interest is detected.
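The article does not specify the model architecture, but the few-shot detection step it describes is commonly implemented with prototype-based classification: a class "prototype" is computed from a handful of labeled example calls, and new audio windows are flagged when their embeddings fall close to that prototype. The sketch below illustrates this idea with NumPy on toy embedding vectors; the function names, the 4-dimensional embeddings, and the distance threshold are all illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

def prototype(support_embeddings):
    """Mean embedding of the few labeled support examples (the 'shots')."""
    return np.mean(support_embeddings, axis=0)

def detect(query_embeddings, proto, threshold):
    """Flag windows whose embedding lies within `threshold` of the prototype."""
    dists = np.linalg.norm(query_embeddings - proto, axis=1)
    return dists < threshold

# Toy data: 4-D vectors standing in for learned audio embeddings.
rng = np.random.default_rng(0)
support = rng.normal(loc=1.0, scale=0.1, size=(5, 4))   # 5 shots of the target call
queries = np.vstack([
    rng.normal(loc=1.0, scale=0.1, size=(3, 4)),        # 3 windows containing the call
    rng.normal(loc=-1.0, scale=0.1, size=(3, 4)),       # 3 background windows
])

proto = prototype(support)
hits = detect(queries, proto, threshold=1.0)

# A positive detection could then trigger a downstream action on the
# sensing box, such as starting an eDNA sampler (hypothetical hook).
if hits.any():
    print("species detected: trigger eDNA sampling")
```

In a real deployment the embeddings would come from a trained neural network applied to spectrogram windows, and the threshold would be calibrated on held-out annotated data rather than chosen by hand.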

Using sound to control autonomous biodiversity monitoring | NLAS