Wageningen University and Research
""

Many marine species rely on sound to communicate and to navigate their environment. Researchers record these sounds to monitor the marine environment, but the sheer volume of data makes traditional manual analysis impractical. To address this, researchers have developed an AI model that automatically detects events in the sound streams captured by hydrophones. The deep learning system was trained on annotated data obtained through a few-shot learning framework. Integrating the model into an embedded system enables real-time, in situ detection, which enhances biodiversity assessments and facilitates targeted environmental DNA sampling.
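
The few-shot detection idea can be illustrated with a prototype-matching sketch: a class prototype is built from a handful of annotated example clips, and windows of a long hydrophone recording are flagged when their embedding lies close to that prototype. The code below is a minimal illustration under assumed parameters (window length, similarity threshold, hand-crafted band-energy features standing in for a learned embedding); none of the function names or values come from the project.

```python
"""Illustrative sketch (not the project's code): few-shot detection of a target
sound in a long recording by matching windows against a prototype built from a
few annotated examples. All names and parameters here are hypothetical."""
import numpy as np


def band_energy_features(signal, sr, n_fft=1024, hop=512, n_bands=32):
    """Crude log-spaced band energies per frame (stand-in for learned features)."""
    n_frames = 1 + (len(signal) - n_fft) // hop
    window = np.hanning(n_fft)
    spec = np.abs(np.stack([
        np.fft.rfft(signal[i * hop:i * hop + n_fft] * window)
        for i in range(n_frames)
    ]))                                            # shape: (frames, n_fft//2 + 1)
    edges = np.unique(np.geomspace(1, spec.shape[1] - 1, n_bands + 1).astype(int))
    bands = np.stack([spec[:, lo:hi].mean(axis=1)
                      for lo, hi in zip(edges[:-1], edges[1:])], axis=1)
    return np.log(bands + 1e-8)                    # shape: (frames, n_actual_bands)


def embed(features):
    """Embed a window as its L2-normalised mean feature vector."""
    v = features.mean(axis=0)
    return v / (np.linalg.norm(v) + 1e-8)


def few_shot_detect(recording, examples, sr, win_s=1.0, threshold=0.9):
    """Slide a window over `recording` and flag windows whose embedding is close
    (cosine similarity >= threshold) to the prototype of the few example clips."""
    prototype = np.mean([embed(band_energy_features(e, sr)) for e in examples], axis=0)
    prototype /= np.linalg.norm(prototype) + 1e-8

    win = int(win_s * sr)
    detections = []
    for start in range(0, len(recording) - win + 1, win // 2):
        chunk = recording[start:start + win]
        score = float(embed(band_energy_features(chunk, sr)) @ prototype)
        if score >= threshold:
            detections.append((start / sr, (start + win) / sr, score))
    return detections  # list of (start_s, end_s, similarity)
```

In the project itself the embedding would come from a trained deep network rather than hand-crafted band energies, and the matching runs on an embedded device next to the hydrophone; the sketch only shows the prototype-matching principle that makes detection possible from roughly five annotated examples.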

Successful field tests, including deployments in estuaries, have demonstrated the model's effectiveness in monitoring species such as harbour seals and migratory fish. Ongoing efforts aim to further refine the model and to integrate it with other sensing technologies, such as plankton microscopes and acoustic receivers, to enable more comprehensive ecological monitoring studies.

Presentations

WIAS Science Day 2024, oral presentation: Detecting animal sounds faster in long recordings using 5 examples: Few-shot learning on fish sounds

5th World Ecoacoustic Congress, Madrid, communication (format TBD): Detecting animal sounds faster in long recordings using 5 examples: Few-shot learning on fish sounds

Bordoux, V. (2024). Adaptable input length using model trained on waveform. Technical report, DCASE 2024 Challenge. https://dcase.community/documents/challenge2024/technical_reports/DCASE2024_Bordoux_66_5.pdf
