Results 1 - 4 of 4
1.
Eur J Neurosci ; 60(3): 4244-4253, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38816916

ABSTRACT

Studying ultrasonic vocalizations (USVs) plays a crucial role in understanding animal communication, particularly in ethology and neuropharmacology. Because communication is closely tied to social behaviour, USV analysis is a valid assay for behavioural readout and monitoring in this context. This paper investigates ultrasonic communication in mice treated with Cannabis sativa oil (CS mice), which has been shown to have a prosocial effect on mouse behaviour, compared with control mice (vehicle-treated, VH mice). To conduct this study, we created a dataset by recording audio-video files, annotating the time that test mice spent engaging in social activities, and categorizing the types of emitted USVs. The analysis covered the frequency of individual sounds as well as longer sequences of consecutive syllables (patterns). The primary goal was to examine the extent and nature of the diversity in ultrasonic communication patterns emitted by the two groups of mice. We observed statistically significant differences between the two groups for every pattern length considered. In addition, the study examined specific behaviours to ascertain whether the dissimilarities in ultrasonic communication between CS and VH mice are more pronounced or more subtle within distinct behavioural contexts. Our findings suggest that while USV communication does differ between the two groups, the degree of this diversity may vary depending on the specific behaviour being observed.
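The pattern analysis described in this abstract (frequencies of single syllables and of sequences of consecutive syllables) can be reproduced with a simple n-gram count over annotated call sequences. Below is a minimal Python sketch, assuming each recording has already been transcribed as an ordered list of syllable labels; the label names and the toy group sequences are hypothetical and not taken from the paper.

```python
from collections import Counter
from itertools import islice

def count_patterns(syllables, max_len=3):
    """Count single syllables and consecutive-syllable patterns up to max_len.

    syllables: ordered list of call labels for one recording,
               e.g. ["flat", "chevron", "complex", ...] (hypothetical labels).
    Returns {pattern_length: Counter of label tuples}.
    """
    counts = {}
    for n in range(1, max_len + 1):
        grams = zip(*(islice(syllables, i, None) for i in range(n)))
        counts[n] = Counter(grams)
    return counts

# Compare pattern frequencies between the two treatment groups (toy sequences).
cs_counts = count_patterns(["flat", "chevron", "flat", "complex", "flat"])
vh_counts = count_patterns(["flat", "flat", "short", "flat", "short"])
print(cs_counts[2].most_common(3))
print(vh_counts[2].most_common(3))
```

The per-length counts can then be compared between groups with a standard frequency test, which is where the reported pattern-length differences would come from.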


Subjects
Plant Oils , Vocalization, Animal , Animals , Mice , Vocalization, Animal/drug effects , Vocalization, Animal/physiology , Male , Plant Oils/pharmacology , Cannabis , Ultrasonics , Social Behavior , Behavior, Animal/drug effects , Behavior, Animal/physiology
2.
Sci Rep ; 13(1): 11238, 2023 07 11.
Article in English | MEDLINE | ID: mdl-37433808

ABSTRACT

Ultrasonic vocalization (USV) analysis is a fundamental tool for studying animal communication. It can be used for the behavioural investigation of mice in ethological studies and in the fields of neuroscience and neuropharmacology. USVs are usually recorded with a microphone sensitive to ultrasound frequencies and then processed by dedicated software, which helps the operator identify and characterize different families of calls. Recently, many systems have been proposed for automatically performing both the detection and the classification of USVs. Segmentation is the crucial step of this framework, since the quality of the subsequent call processing depends strictly on how accurately each call has been detected. In this paper, we investigate the performance of three supervised deep learning methods for automated USV segmentation: an Auto-Encoder Neural Network (AE), a U-NET Neural Network (UNET) and a Recurrent Neural Network (RNN). The proposed models receive as input the spectrogram associated with the recorded audio track and return as output the regions in which USV calls have been detected. To evaluate the models, we built a dataset by recording several audio tracks and manually segmenting the corresponding USV spectrograms generated with the Avisoft software, thereby producing the ground truth (GT) used for training. All three architectures achieved precision and recall scores exceeding [Formula: see text], with UNET and AE achieving values above [Formula: see text], surpassing the other state-of-the-art methods considered for comparison in this study. The evaluation was also extended to an external dataset, where UNET again exhibited the highest performance. We suggest that our experimental results may represent a valuable benchmark for future work.
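To illustrate the general pipeline described here (spectrogram in, detected call regions out), the following is a minimal Python sketch. It is not the AE, UNET or RNN architecture evaluated in the paper, only a compact encoder-decoder stand-in; the sampling rate, STFT parameters and synthetic audio are placeholder assumptions.

```python
import numpy as np
import torch
import torch.nn as nn
from scipy.signal import spectrogram

# Synthetic stand-in for a recorded ultrasound track; in practice the audio
# would be loaded from a high-sample-rate recording (e.g. with scipy.io.wavfile).
rate = 250_000
audio = np.random.randn(rate)                                 # 1 s of noise as placeholder
freqs, times, spec = spectrogram(audio, fs=rate, nperseg=512, noverlap=256)
spec_db = 10 * np.log10(spec + 1e-10)                         # log-power spectrogram
spec_db = spec_db[: spec_db.shape[0] // 2 * 2, : spec_db.shape[1] // 2 * 2]  # even dims for pooling

class TinySegmenter(nn.Module):
    """Minimal encoder-decoder mapping a spectrogram to a per-pixel call mask."""
    def __init__(self):
        super().__init__()
        self.encode = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decode = nn.Sequential(
            nn.Upsample(scale_factor=2),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),                              # logits: call vs. background
        )
    def forward(self, x):
        return self.decode(self.encode(x))

model = TinySegmenter()
x = torch.tensor(spec_db, dtype=torch.float32)[None, None]   # (batch, channel, freq, time)
mask_logits = model(x)                                        # trained with BCE against the GT masks
call_mask = torch.sigmoid(mask_logits) > 0.5                  # detected USV regions
```

Thresholded time-frequency masks like `call_mask` can then be reduced along the frequency axis to obtain call onset/offset times for downstream classification.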


Subjects
Deep Learning , Animals , Mice , Algorithms , Neural Networks, Computer , Software , Animal Communication
3.
PLoS One ; 16(1): e0244636, 2021.
Article in English | MEDLINE | ID: mdl-33465075

ABSTRACT

Analysis of ultrasonic vocalizations (USVs) is a well-recognized tool for investigating animal communication. It can be used for the behavioural phenotyping of murine models of different disorders. USVs are usually recorded with a microphone sensitive to ultrasound frequencies and analyzed with dedicated software. Different call typologies exist, and each ultrasonic call can be classified manually, but such qualitative analysis is highly time-consuming. In this framework, we proposed and evaluated a set of supervised learning methods for automatic USV classification, which offers a sustainable and standardized procedure for analysing ultrasonic communication in depth. We used manually built datasets obtained by segmenting the USV audio tracks analyzed with the Avisoft software and then labelling each call into 10 representative classes. For the automatic classification task, we designed a Convolutional Neural Network trained on the spectrogram images associated with the segmented audio files. In addition, we tested other supervised learning algorithms, such as Support Vector Machines, Random Forests and Multilayer Perceptrons, exploiting informative numerical features extracted from the spectrograms. The results showed that using the whole time/frequency information of the spectrogram leads to significantly higher performance than using a subset of numerical features. In the authors' opinion, the experimental results may represent a valuable benchmark for future work in this research field.
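A minimal Python (PyTorch) sketch of the spectrogram-image classification step follows, assuming the calls have already been segmented and rendered as fixed-size grayscale images. The network below is a generic small CNN, not the architecture designed in the paper, and the image size and batch are placeholders.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 10  # the 10 representative call classes used in the paper

class CallClassifier(nn.Module):
    """Small CNN that assigns a fixed-size spectrogram image to one of the call classes."""
    def __init__(self, num_classes=NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d((4, 4)),
        )
        self.classifier = nn.Linear(32 * 4 * 4, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# One toy training step on random data standing in for spectrogram crops.
model = CallClassifier()
batch = torch.randn(8, 1, 128, 128)              # 8 grayscale spectrogram images (size is a placeholder)
labels = torch.randint(0, NUM_CLASSES, (8,))
loss = nn.CrossEntropyLoss()(model(batch), labels)
loss.backward()
```

The classical baselines mentioned in the abstract (SVM, Random Forest, MLP) would instead consume a fixed-length vector of numerical features per call, which is what the CNN's use of the full time/frequency image is compared against.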


Subjects
Machine Learning , Mice/physiology , Vocalization, Animal , Animal Communication , Animals , Neural Networks, Computer , Support Vector Machine , Ultrasonic Waves , Ultrasonics
4.
Data Brief ; 24: 103881, 2019 Jun.
Article in English | MEDLINE | ID: mdl-31008162

ABSTRACT

The FASSEG repository is composed of four subsets containing face images useful for training and testing automatic methods for the task of face segmentation. Three subsets, namely frontal01, frontal02, and frontal03, are specifically built for frontal face segmentation. Frontal01 contains 70 original RGB images and the corresponding roughly labelled ground-truth masks. Frontal02 contains the same image data with high-precision labelled ground-truth masks. Frontal03 consists of 150 annotated face masks of twins captured in various orientations, illumination conditions and facial expressions. The last subset, namely multipose01, contains more than 200 faces in multiple poses and the corresponding ground-truth masks. For all face images, the ground-truth masks are labelled with six classes (mouth, nose, eyes, hair, skin, and background).
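For working with the six-class ground-truth masks, a minimal loading sketch in Python is shown below. The RGB colour coding and the file path are assumptions made for illustration and are not taken from the data descriptor; the actual label colours are documented in the repository.

```python
import numpy as np
from PIL import Image

# Hypothetical RGB coding of the six FASSEG classes; the real colours may differ.
CLASS_COLOURS = np.array([
    [255,   0,   0],   # mouth
    [  0, 255,   0],   # nose
    [  0,   0, 255],   # eyes
    [255, 255,   0],   # hair
    [255,   0, 255],   # skin
    [  0,   0,   0],   # background
])

def mask_to_labels(mask_path):
    """Convert an RGB ground-truth mask into a 2-D array of class indices (0..5)."""
    mask = np.asarray(Image.open(mask_path).convert("RGB"), dtype=np.int64)
    # Assign every pixel to the nearest class colour (robust to small labelling noise).
    dists = np.linalg.norm(mask[:, :, None, :] - CLASS_COLOURS[None, None], axis=-1)
    return dists.argmin(axis=-1)

# Example usage (the path is a placeholder):
# labels = mask_to_labels("frontal02/labels/001.bmp")
```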
