1.
PLoS Comput Biol; 19(4): e1010325, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37053268

ABSTRACT

Despite the accumulation of data and studies, deciphering animal vocal communication remains challenging. In most cases, researchers must deal with the sparse recordings composing Small, Unbalanced, Noisy, but Genuine (SUNG) datasets. SUNG datasets are characterized by a limited number of recordings, most often noisy, and unbalanced across individuals or vocalization categories. SUNG datasets therefore offer a valuable but inevitably distorted view of communication systems. Adopting best practices in their analysis is essential to extract the available information effectively and to draw reliable conclusions. Here we show that recent advances in machine learning, applied to a SUNG dataset, succeed in unraveling the complex vocal repertoire of the bonobo, and we propose a workflow that can be effective with other animal species. We implement acoustic parameterization in three feature spaces and run a Supervised Uniform Manifold Approximation and Projection (S-UMAP) to evaluate how call types and individual signatures cluster in the bonobo acoustic space. We then implement three classification algorithms (Support Vector Machine, xgboost, neural networks) and their combination to explore the structure and variability of bonobo calls, as well as the robustness of the individual signature they encode. We underscore how classification performance is affected by the feature set and identify the most informative features. In addition, we highlight the need to address data leakage when evaluating classification performance, to avoid misleading interpretations. Our results identify several practical approaches that generalize to other animal communication systems. To improve the reliability and replicability of vocal communication studies with SUNG datasets, we thus recommend: i) comparing several acoustic parameterizations; ii) visualizing the dataset with supervised UMAP to examine the species' acoustic space; iii) adopting Support Vector Machines as the baseline classification approach; iv) explicitly evaluating data leakage and, where necessary, implementing a mitigation strategy.
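
A minimal sketch (not the authors' code) of the kind of pipeline the abstract outlines: supervised UMAP to visualize the acoustic space, a Support Vector Machine as the baseline classifier, and group-aware cross-validation as one way to mitigate leakage of individual identity between training and test folds. The feature matrix, label names, and parameter values below are illustrative placeholders, not the published dataset or settings.

    # Python sketch under the assumptions stated above.
    import numpy as np
    import umap                                  # pip install umap-learn
    from sklearn.svm import SVC
    from sklearn.preprocessing import StandardScaler
    from sklearn.pipeline import make_pipeline
    from sklearn.model_selection import GroupKFold, cross_val_score

    rng = np.random.default_rng(0)

    # Toy stand-in for an acoustic parameterization: n_calls x n_features
    # (e.g. spectral feature summaries); call_type and caller_id are labels.
    n_calls, n_features = 300, 20
    X = rng.normal(size=(n_calls, n_features))
    call_type = rng.integers(0, 5, size=n_calls)     # hypothetical 5 call types
    caller_id = rng.integers(0, 10, size=n_calls)    # hypothetical 10 individuals

    # Supervised UMAP: passing y makes the embedding respect call-type labels.
    embedding = umap.UMAP(n_neighbors=15, random_state=0).fit_transform(X, y=call_type)
    print("UMAP embedding shape:", embedding.shape)

    # SVM baseline, evaluated with GroupKFold so that all calls from a given
    # individual stay in the same fold (reduces individual-level data leakage).
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
    scores = cross_val_score(clf, X, call_type, groups=caller_id,
                             cv=GroupKFold(n_splits=5))
    print("call-type accuracy per fold:", scores.round(2))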


Subject(s)
Algorithms , Pan paniscus , Animals , Workflow , Reproducibility of Results , Neural Networks, Computer
2.
Sci Rep; 6: 22046, 2016 Feb 25.
Article in English | MEDLINE | ID: mdl-26911199

ABSTRACT

Long-term social recognition is vital for species with complex social networks, in which familiar individuals can encounter one another after long periods of separation. For non-human primates living in dense forest environments, visual access to one another is often limited, and recognition of social partners over distance largely depends on vocal communication. Vocal recognition after years of separation has never been reported in any great ape species, despite their complex societies and advanced social intelligence. Here we show that bonobos, Pan paniscus, demonstrate reliable vocal recognition of social partners even after five years of separation. We experimentally tested bonobos' responses to the calls of previous group members that had been transferred between captive groups. Despite the long separations, subjects responded more intensely to familiar voices than to calls from unknown individuals: the first experimental evidence that bonobos can identify individuals from their vocalisations even years after their last encounter. Our study also suggests that bonobos may cease to discriminate between familiar and unfamiliar individuals after a period of eight years, indicating that voice representations, or interest in them, may be limited in time in this species.


Subject(s)
Pan paniscus , Recognition, Psychology , Voice , Animals , Behavior, Animal , Female , Male , Social Behavior , Time Factors