Results 1 - 3 of 3

1.
Behav Ecol; 35(1): arad093, 2024.
Article in English | MEDLINE | ID: mdl-38193012

ABSTRACT

Geographic differences in vocalizations provide strong evidence for animal culture, with patterns likely arising from generations of social learning and transmission. Most studies on the evolution of avian vocal variation have focused on the fixed-repertoire, territorial songs of passerine birds. Studying vocal communication in open-ended learners, and in contexts where vocalizations serve other functions, is therefore necessary for a more comprehensive understanding of vocal dialect evolution. Parrots are open-ended vocal production learners that use vocalizations for social contact and coordination. Geographic variation in parrot vocalizations typically takes one of two forms: distinct regional variants, known as dialects, or variation graded with geographic distance, known as clinal variation. In this study, we recorded monk parakeets (Myiopsitta monachus) at multiple spatial scales (i.e., parks and cities) across their European invasive range. We then compared calls using a multilevel Bayesian model and a sensitivity analysis, a novel approach that allowed us to compare vocalizations explicitly at multiple spatial scales. We found support for founder effects and/or cultural drift at the city level, consistent with passive cultural processes producing large-scale dialect differences. We did not find a strong signal of dialect or clinal differences between parks within cities, suggesting that birds did not actively converge on a group-level signal, as the group membership hypothesis would predict. We demonstrate the robustness of our findings and offer an explanation that unifies the results of prior monk parakeet vocalization studies.
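
The abstract gives no code, but the structure it describes (calls nested in parks nested in cities) maps naturally onto a varying-intercepts multilevel model. The sketch below is a hypothetical Python/PyMC illustration, not the authors' implementation; the data file and column names (calls.csv, feature, city, park) are invented placeholders for a per-call acoustic feature and its spatial grouping.

    # Hypothetical sketch, not the authors' code: varying intercepts for
    # city and for park, fit to one acoustic feature per call.
    import pandas as pd
    import pymc as pm

    df = pd.read_csv("calls.csv")            # assumed columns: feature, city, park
    city_idx = pd.Categorical(df["city"]).codes
    park_idx = pd.Categorical(df["park"]).codes

    with pm.Model() as model:
        mu = pm.Normal("mu", 0.0, 1.0)       # grand mean of the feature
        # Hyperpriors: how much variance each spatial scale absorbs
        sigma_city = pm.HalfNormal("sigma_city", 1.0)
        sigma_park = pm.HalfNormal("sigma_park", 1.0)
        # One intercept per city and per park
        a_city = pm.Normal("a_city", 0.0, sigma_city, shape=city_idx.max() + 1)
        a_park = pm.Normal("a_park", 0.0, sigma_park, shape=park_idx.max() + 1)
        sigma = pm.HalfNormal("sigma", 1.0)  # residual call-to-call noise
        pm.Normal("obs",
                  mu=mu + a_city[city_idx] + a_park[park_idx],
                  sigma=sigma,
                  observed=df["feature"].to_numpy())
        idata = pm.sample()

If the posterior for sigma_city is large while sigma_park sits near zero, most vocal variation lies at the city scale, mirroring the city-level dialect signal reported above.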

2.
Ecol Evol; 14(5): e11384, 2024 May.
Article in English | MEDLINE | ID: mdl-38799392

ABSTRACT

To better understand how vocalisations are used during interactions among multiple individuals, studies increasingly deploy on-board devices with a microphone on each animal. The resulting recordings are extremely challenging to analyse, since microphone clocks drift non-linearly and each device records the vocalisations of non-focal individuals as well as noise. Here we address these issues with callsync, an R package designed to align recordings, detect vocalisations and assign them to the caller, trace the fundamental frequency, filter out noise, and perform basic analysis on the resulting clips. We present a case study in which the pipeline is applied to a dataset of six captive cockatiels (Nymphicus hollandicus) wearing backpack microphones. Recordings initially drifted by ~2 min but were aligned to within ~2 s by our package. Using callsync, we detected and assigned 2101 calls across three multi-hour recording sessions. Two sessions had loud beep markers in the background, designed to aid the manual alignment process; the third contained no obvious markers, demonstrating that markers are not necessary to obtain optimal alignment. We then used a function that traces the fundamental frequency and applied spectrographic cross-correlation, illustrating a possible analytical pipeline in which vocal similarity is assessed visually. The callsync package can be used to go from raw recordings to a clean dataset of features. It is designed to be modular, allowing users to replace functions as they wish. We also discuss the challenges that might be faced at each step and how the available literature can provide alternatives.
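
callsync itself is an R package and this abstract does not list its function names, so the snippet below does not reproduce its API. It is a minimal Python sketch of the core alignment idea only: estimating the offset between two drifting recordings by cross-correlating their amplitude envelopes. The file paths, the soundfile dependency, and the WIN_S frame length are assumptions.

    # Minimal sketch of envelope cross-correlation alignment (NOT the
    # callsync API, which is R): find the lag that best matches two tracks.
    import numpy as np
    from scipy.signal import correlate, correlation_lags
    import soundfile as sf   # assumed available for reading WAV files

    WIN_S = 0.05             # envelope frame length in seconds (assumption)

    def envelope(x, sr):
        """Coarse amplitude envelope: rectify, then average short frames."""
        if x.ndim > 1:       # mix multi-channel audio down to mono
            x = x.mean(axis=1)
        win = int(sr * WIN_S)
        n = len(x) // win
        return np.abs(x[: n * win]).reshape(n, win).mean(axis=1)

    def estimate_offset(path_a, path_b):
        """Lag of recording B relative to recording A, in seconds."""
        a, sr_a = sf.read(path_a)
        b, sr_b = sf.read(path_b)
        assert sr_a == sr_b, "resample first if sample rates differ"
        ea, eb = envelope(a, sr_a), envelope(b, sr_b)
        cc = correlate(ea - ea.mean(), eb - eb.mean(), mode="full")
        lags = correlation_lags(len(ea), len(eb), mode="full")
        return lags[np.argmax(cc)] * WIN_S

    # Non-linear clock drift can be approximated by estimating offsets on
    # short chunks (e.g., every few minutes) and interpolating between them.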

3.
Sci Rep; 12(1): 21966, 2022 Dec 19.
Article in English | MEDLINE | ID: mdl-36535999

ABSTRACT

Bioacoustic research spans a wide range of biological questions and applications, relying on the identification of target species or smaller acoustic units, such as distinct call types. However, manually identifying the signal of interest is time-intensive and error-prone, and becomes unfeasible with large data volumes. Machine-driven algorithms are therefore increasingly applied to a variety of bioacoustic signal identification challenges. Nevertheless, biologists still face major difficulties in transferring existing animal- and/or scenario-specific machine learning approaches to their own datasets and scientific questions. This study presents ANIMAL-SPOT, an animal-independent, open-source deep learning framework, along with a detailed user guide. Three signal identification tasks commonly encountered in bioacoustics research were investigated: (1) target signal vs. background noise detection, (2) species classification, and (3) call type categorization. ANIMAL-SPOT successfully segmented human-annotated target signals in data representing 10 distinct animal species and 1 additional genus, yielding a mean test accuracy of 97.9% and an average area under the ROC curve (AUC) of 95.9% when predicting on unseen recordings. Moreover, an average segmentation accuracy and F1-score of 95.4% were achieved on the publicly available BirdVox-Full-Night data corpus. In addition, multi-class species and call type classification reached 96.6% and 92.7% accuracy on unseen test data, and 95.2% and 88.4% on excerpts from previous animal-specific machine-based detection work. Furthermore, an Unweighted Average Recall (UAR) of 89.3% outperformed the multi-species classification baseline system of the ComParE 2021 Primate Sub-Challenge. Besides being animal-independent, ANIMAL-SPOT relies on neither expert knowledge nor special computing resources, making deep-learning-based bioacoustic signal identification accessible to a broad audience.
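
As a rough illustration of task (1), target signal vs. background noise detection, the following Python/PyTorch sketch defines a small classifier over fixed-size spectrogram windows. This is not the ANIMAL-SPOT architecture; the layer sizes, input shape, and class count are placeholder assumptions.

    # Illustrative sketch only: a tiny CNN that labels spectrogram windows
    # as target signal vs. background noise (task 1 in the abstract).
    import torch
    import torch.nn as nn

    class SpectrogramClassifier(nn.Module):
        def __init__(self, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(1),   # tolerant of window size changes
            )
            self.head = nn.Linear(32, n_classes)

        def forward(self, x):              # x: (batch, 1, n_mels, n_frames)
            return self.head(self.features(x).flatten(1))

    # Sliding such windows over a long recording and thresholding the
    # softmax output yields a crude signal-vs-noise segmentation.
    model = SpectrogramClassifier()
    logits = model(torch.randn(8, 1, 64, 128))  # dummy batch of mel windows
    probs = logits.softmax(dim=1)               # per-window class probabilities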


Subjects
Deep Learning, Animals, Humans, Machine Learning, Algorithms, Acoustics, Area Under Curve