Results 1 - 3 of 3
1.
PLoS Comput Biol; 20(8): e1012329, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39110762

ABSTRACT

Our understanding of bird song, a model system for animal communication and the neurobiology of learning, depends critically on making reliable, validated comparisons between the complex multidimensional syllables used in songs. However, most assessments of song similarity are based on human inspection of spectrograms or on computational methods developed from human intuitions. Using a novel automated operant conditioning system, we collected a large corpus of zebra finches' (Taeniopygia guttata) decisions about song syllable similarity. We used this dataset to compare and externally validate similarity algorithms in widely used, publicly available software (Raven, Sound Analysis Pro, Luscinia). Although these methods all perform better than chance, they do not closely emulate the avian assessments. We then introduce a novel deep learning method, trained on such avian decisions, that produces perceptual similarity judgements. We find that this new method outperforms the established methods in accuracy and more closely approaches the avian assessments. Inconsistent (hence ambiguous) decisions are common in animal behavioural data; we show that a modification of the deep learning training that accommodates them leads to the strongest performance. We argue that this approach is the best way to validate methods for comparing song similarity, that our dataset can be used to validate novel methods, and that the general approach can easily be extended to other species.
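
A minimal sketch of the kind of training modification the abstract describes: an embedding network scores the similarity of a syllable pair, and repeated but inconsistent bird decisions for the same pair become soft targets rather than hard 0/1 labels. The architecture, tensor shapes, and names (SyllableEncoder, similarity_logit, soft_target) are illustrative assumptions, not the paper's actual model.

# Hypothetical sketch (Python / PyTorch): pairwise similarity training with
# soft targets derived from inconsistent avian judgements.
import torch
import torch.nn as nn

class SyllableEncoder(nn.Module):
    """Maps a (1, n_mels, n_frames) spectrogram patch to an embedding."""
    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, embed_dim)

    def forward(self, x):
        return self.fc(self.conv(x).flatten(1))

def similarity_logit(encoder, spec_a, spec_b, scale: float = 5.0):
    """Cosine similarity of the two embeddings, scaled to act as a logit."""
    za = nn.functional.normalize(encoder(spec_a), dim=1)
    zb = nn.functional.normalize(encoder(spec_b), dim=1)
    return scale * (za * zb).sum(dim=1)

encoder = SyllableEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

# Toy batch: 8 syllable pairs. soft_target is the fraction of trials in which
# the birds judged the pair "similar" (e.g. 0.7 = 7 of 10 trials), so ambiguous
# pairs pull the prediction toward intermediate values instead of being discarded.
spec_a = torch.randn(8, 1, 64, 128)
spec_b = torch.randn(8, 1, 64, 128)
soft_target = torch.rand(8)

logits = similarity_logit(encoder, spec_a, spec_b)
loss = loss_fn(logits, soft_target)
loss.backward()
optimizer.step()

With a binary cross-entropy loss, a soft target near 0.5 simply pulls the predicted similarity toward indecision, which is one straightforward way to let ambiguous judgements shape training without throwing them away.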


Subject(s)
Deep Learning, Finches, Animal Vocalization, Animals, Animal Vocalization/physiology, Finches/physiology, Algorithms, Computational Biology/methods, Judgment/physiology, Male, Sound Spectrography/methods, Operant Conditioning/physiology, Humans
2.
Science; 385(6705): 138-140, 2024 Jul 12.
Article in English | MEDLINE | ID: mdl-38991079

ABSTRACT

Bioacoustics and artificial intelligence facilitate ecological studies of animal populations.


Subject(s)
Artificial Intelligence, Biodiversity, Biological Extinction, Animals, Biological Monitoring/methods
3.
Sensors (Basel); 24(7), 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38610256

ABSTRACT

The ongoing biodiversity crisis, driven by factors such as land-use change and global warming, emphasizes the need for effective ecological monitoring methods. Acoustic monitoring of biodiversity has emerged as an important tool. Detecting human voices in soundscape monitoring projects is useful both for analyzing human disturbance and for privacy filtering. Despite significant strides in deep learning in recent years, deploying large neural networks on compact devices remains challenging because of memory and latency constraints. Our approach leverages knowledge distillation to design efficient, lightweight student models for speech detection in bioacoustics. In particular, we used the MobileNetV3-Small-Pi model to create compact yet effective student architectures and compared them against the larger EcoVAD teacher model, a well-regarded voice detection architecture in eco-acoustic monitoring. The comparative analysis examined various configurations of the MobileNetV3-Small-Pi-derived student models to identify the best-performing one, and we evaluated different distillation techniques to determine the most effective method for model selection. Our findings show that the distilled models perform comparably to the EcoVAD teacher model, indicating a promising route around the computational barriers to real-time ecological monitoring.
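
The abstract does not give the exact loss, but response-based knowledge distillation is commonly formulated as a temperature-softened match to the teacher's outputs combined with the ordinary hard-label loss. The sketch below illustrates that standard formulation; the stand-in teacher and student modules, tensor shapes, and hyperparameter values are assumptions, not the EcoVAD or MobileNetV3-Small-Pi code.

# Illustrative sketch (Python / PyTorch): response-based knowledge distillation
# for a binary speech / no-speech detector.
import torch
import torch.nn as nn
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature: float = 4.0, alpha: float = 0.5):
    """alpha weights the soft (teacher) term against the hard-label term."""
    soft_targets = F.softmax(teacher_logits / temperature, dim=1)
    soft_preds = F.log_softmax(student_logits / temperature, dim=1)
    # The KL term is scaled by T^2 so its gradient magnitude stays comparable
    # to the cross-entropy term as the temperature changes.
    soft_loss = F.kl_div(soft_preds, soft_targets,
                         reduction="batchmean") * temperature ** 2
    hard_loss = F.cross_entropy(student_logits, labels)
    return alpha * soft_loss + (1.0 - alpha) * hard_loss

# Toy batch: 16 mel-spectrogram frames, 2 classes (speech / no speech).
# These linear stand-ins take the place of the real teacher and student networks.
teacher = nn.Sequential(nn.Flatten(), nn.Linear(64 * 100, 2))
student = nn.Sequential(nn.Flatten(), nn.Linear(64 * 100, 2))
x = torch.randn(16, 1, 64, 100)
labels = torch.randint(0, 2, (16,))

with torch.no_grad():              # teacher is frozen; only the student learns
    t_logits = teacher(x)
loss = distillation_loss(student(x), t_logits, labels)
loss.backward()

In practice the frozen teacher would be the EcoVAD network and the student a MobileNetV3-Small-Pi variant; the temperature and alpha are the kind of knobs a comparison of distillation techniques would sweep.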


Subject(s)
Speech, Voice, Humans, Acoustics, Biodiversity, Knowledge