Results 1 - 5 of 5
1.
J Anim Ecol; 92(8): 1560-1574, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37165474

ABSTRACT

Studying animal behaviour allows us to understand how different species and individuals navigate their physical and social worlds. Video coding of behaviour is considered a gold standard: it allows researchers to extract rich, nuanced behavioural datasets, to validate their reliability, and to replicate research. However, in practice, videos are only useful if data can be efficiently extracted. Manually locating relevant footage in 10,000s of hours is extremely time-consuming, as is the manual coding of animal behaviour, which requires extensive training to achieve reliability. Machine learning approaches automate the recognition of patterns within data, considerably reducing the time taken to extract data and improving reliability. However, tracking visual information to recognise nuanced behaviour is a challenging problem and, to date, the tracking and pose-estimation tools used to detect behaviour have typically been applied where the visual environment is highly controlled. Animal behaviour researchers are interested in applying these tools to the study of wild animals, but it is not clear to what extent doing so is currently possible, or which tools are most suited to particular problems. To address this gap in knowledge, we describe the new tools available in this rapidly evolving landscape, suggest guidance for tool selection, provide a worked demonstration of the use of machine learning to track movement in video data of wild apes, and make our base models available for use. We use a pose-estimation tool, DeepLabCut, to demonstrate successful training of two pilot models on an extremely challenging pose-estimation and tracking problem: multi-animal tracking of wild forest-living chimpanzees and bonobos across behavioural contexts from hand-held video footage.
With DeepWild we show that, without requiring specific expertise in machine learning, pose estimation and movement tracking of free-living wild primates in visually complex environments is an attainable goal for behavioural researchers.
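Pose-estimation tools such as DeepLabCut output per-frame keypoint coordinates with a confidence score for each detection; a common downstream step is turning those tracks into movement measures. A minimal sketch of that step (the keypoint data, frame rate, and confidence threshold below are illustrative assumptions, not part of DeepWild):

```python
import numpy as np

def track_speed(xy, confidence, fps=30.0, min_conf=0.6):
    """Per-frame speed of one tracked keypoint.

    xy         : (n_frames, 2) array of x, y pixel coordinates
    confidence : (n_frames,) detection confidence in [0, 1]
    Frames below min_conf are masked to NaN before differencing,
    so unreliable detections do not create spurious movement.
    """
    xy = xy.astype(float).copy()
    xy[confidence < min_conf] = np.nan
    # Euclidean displacement between consecutive frames, in pixels
    step = np.linalg.norm(np.diff(xy, axis=0), axis=1)
    return step * fps  # pixels per second

# Toy track: a keypoint moving 3 px per frame along x at 30 fps.
xy = np.column_stack([np.arange(0, 30, 3.0), np.zeros(10)])
conf = np.ones(10)
print(track_speed(xy, conf))  # nine entries, each 90.0 px/s
```

Masking low-confidence frames before differencing matters in wild footage, where occlusion by foliage or conspecifics produces unreliable detections that would otherwise register as movement.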




Subjects
Pan paniscus, Pan troglodytes, Animals, Reproducibility of Results, Animals, Wild, Movement
2.
Brain; 144(10): 2979-2984, 2021 Nov 29.
Article in English | MEDLINE | ID: mdl-34750604

ABSTRACT

Theoretical accounts of developmental stuttering implicate dysfunctional cortico-striatal-thalamo-cortical motor loops through the putamen. However, the analysis of conventional MRI brain scans in individuals who stutter has failed to yield strong support for this theory in terms of reliable differences in the structure or function of the basal ganglia. Here, we performed quantitative mapping of brain tissue, which can be used to measure iron content alongside markers sensitive to myelin and thereby offers particular sensitivity to the measurement of iron-rich structures such as the basal ganglia. Analysis of these quantitative maps in 41 men and women who stutter and 32 individuals who are typically fluent revealed significant group differences in maps of R2*, indicative of higher iron content in individuals who stutter in the left putamen and in left hemisphere cortical regions important for speech motor control. Higher iron levels in brain tissue in individuals who stutter could reflect elevated dopamine levels or lysosomal dysfunction, both of which are implicated in stuttering. This study represents the first use of these quantitative measures in developmental stuttering and provides new evidence of microstructural differences in the basal ganglia and connected frontal cortical regions.
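For context, R2* (= 1/T2*) maps of the kind analysed here are conventionally estimated voxel-wise from multi-echo gradient-echo data by fitting a monoexponential decay S(TE) = S0·exp(−R2*·TE); higher iron content shortens T2* and raises R2*. A log-linear least-squares sketch (the echo times and signal values are simulated, not data from this study):

```python
import numpy as np

def fit_r2star(signal, te):
    """Estimate R2* (1/s) and S0 from multi-echo magnitude data.

    Fits log(S) = log(S0) - R2* * TE by ordinary least squares.
    signal : (n_echoes,) magnitude values, all > 0
    te     : (n_echoes,) echo times in seconds
    """
    slope, intercept = np.polyfit(te, np.log(signal), 1)
    return -slope, np.exp(intercept)

# Toy voxel: S0 = 1000, R2* = 40 1/s, echoes every 5 ms from 5-30 ms.
te = np.arange(5, 35, 5) * 1e-3
signal = 1000 * np.exp(-40 * te)
r2star, s0 = fit_r2star(signal, te)
print(round(r2star, 1), round(s0))  # 40.0 1000
```

In practice a weighted or nonlinear fit is often preferred at low signal-to-noise ratios, since log-transforming amplifies noise at late echoes.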


Subjects
Brain Mapping/methods, Frontal Lobe/metabolism, Iron/metabolism, Nerve Net/metabolism, Putamen/metabolism, Stuttering/metabolism, Adult, Basal Ganglia/diagnostic imaging, Basal Ganglia/metabolism, Cohort Studies, Female, Frontal Lobe/diagnostic imaging, Humans, Male, Middle Aged, Nerve Net/diagnostic imaging, Putamen/diagnostic imaging, Stuttering/diagnostic imaging, Young Adult
3.
J Commun Disord; 97: 106213, 2022.
Article in English | MEDLINE | ID: mdl-35397388

ABSTRACT

INTRODUCTION: Most previous articulatory studies of stuttering have focused on the fluent speech of people who stutter. However, to better understand what causes the actual moments of stuttering, it is necessary to probe articulatory behaviors during stuttered speech. We examined the supralaryngeal articulatory characteristics of stuttered speech using real-time structural magnetic resonance imaging (RT-MRI), investigating how articulatory gestures differ across stuttered and fluent speech of the same speaker. METHODS: Vocal tract movements of an adult man who stutters were recorded with RT-MRI during a pseudoword reading task. Four regions of interest (ROIs) were defined on the RT-MRI image sequences around the lips, tongue tip, tongue body, and velum. The variation of pixel intensity in each ROI over time provided an estimate of the movement of these four articulators. RESULTS: All disfluencies occurred on syllable-initial consonants. Three articulatory patterns were identified. Pattern 1 showed smooth gestural formation and release, as in fluent speech. Patterns 2 and 3 showed delayed release of gestures due to articulator fixation or oscillation, respectively. Blocks and prolongations corresponded to either pattern 1 or 2; repetitions corresponded to pattern 3 or a mix of patterns. Gestures for disfluent consonants typically exhibited a greater constriction than fluent gestures, which was rarely corrected during disfluencies. Gestures for the upcoming vowel were initiated and executed during these consonant disfluencies, achieving a tongue body position similar to the fluent counterpart. CONCLUSION: Different perceptual types of disfluencies did not necessarily result from distinct articulatory patterns, highlighting the importance of collecting articulatory data on stuttering. Disfluencies on syllable-initial consonants were related to the delayed release and overshoot of consonant gestures, rather than to delayed initiation of vowel gestures.
This suggests that stuttering does not arise from problems with planning the vowel gestures, but rather with releasing the overly constricted consonant gestures.
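The ROI measure described in the Methods, mean pixel intensity in a small region tracked over frames as a proxy for articulator movement, can be sketched as follows (the array shapes and ROI bounds are hypothetical, not the study's actual values):

```python
import numpy as np

def roi_timeseries(frames, roi):
    """Mean pixel intensity of one ROI across an RT-MRI sequence.

    frames : (n_frames, height, width) array of image intensities
    roi    : (row_start, row_stop, col_start, col_stop) slice bounds
    """
    r0, r1, c0, c1 = roi
    return frames[:, r0:r1, c0:c1].mean(axis=(1, 2))

# Toy sequence: bright "tissue" enters the ROI from frame 2 onwards.
frames = np.zeros((4, 8, 8))
frames[2:, 2:4, 2:4] = 1.0
print(roi_timeseries(frames, (2, 4, 2, 4)))  # [0. 0. 1. 1.]
```

When bright tissue such as the tongue tip moves into an ROI, the region's mean intensity rises, so the intensity trace approximates the articulator's constriction gesture without explicit segmentation.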


Subjects
Stuttering, Adult, Gestures, Humans, Magnetic Resonance Imaging, Male, Speech, Speech Production Measurement
4.
J Speech Lang Hear Res; 64(7): 2438-2452, 2021 Jul 16.
Article in English | MEDLINE | ID: mdl-34157239

ABSTRACT

Purpose: People who stutter (PWS) have more unstable speech motor systems than people who are typically fluent (PWTF). Here, we used real-time magnetic resonance imaging (MRI) of the vocal tract to assess the variability and duration of movements of different articulators in PWS and PWTF during fluent speech production. Method: The vocal tracts of 28 adults with moderate to severe stuttering and 20 PWTF were scanned using MRI while they repeated simple and complex pseudowords. Midsagittal images of the vocal tract from lips to larynx were reconstructed at 33.3 frames per second. For each participant, we measured the variability and duration of movements across multiple repetitions of the pseudowords in three selected articulators: the lips, tongue body, and velum. Results: PWS showed significantly greater speech movement variability than PWTF during fluent repetitions of pseudowords. The group difference was most evident for measurements of lip aperture using these stimuli, as reported previously, but here we report that movements of the tongue body and velum were also affected during the same utterances. Variability was not affected by phonological complexity. Speech movement variability was unrelated to stuttering severity within the PWS group. PWS also showed longer speech movement durations relative to PWTF for fluent repetitions of multisyllabic pseudowords, and this group difference was even more evident as complexity increased. Conclusions: Using real-time MRI of the vocal tract, we found that PWS produced more variable movements than PWTF even during fluent productions of simple pseudowords. PWS also took longer to produce multisyllabic words relative to PWTF, particularly when the words were more complex. This indicates general, trait-level differences in the control of the articulators between PWS and PWTF. Supplemental Material: https://doi.org/10.23641/asha.14782092.
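Movement variability of this kind is commonly quantified with a spatiotemporal-index-style measure: amplitude-normalise and time-normalise each repetition's trajectory, then sum the standard deviations across repetitions at each normalised time point. A sketch of that general technique (this illustrates the standard computation, not necessarily this study's exact analysis pipeline):

```python
import numpy as np

def spatiotemporal_index(trajectories, n_points=50):
    """Variability of repeated movement trajectories.

    trajectories : list of 1-D arrays, one per repetition (any length),
                   e.g. lip aperture over time for each token
    Each trajectory is z-scored (amplitude-normalised) and linearly
    resampled to n_points (time-normalised); the index is the sum of
    across-repetition standard deviations at each time point.
    """
    norm = []
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        z = (traj - traj.mean()) / traj.std()
        t_new = np.linspace(0.0, 1.0, n_points)
        t_old = np.linspace(0.0, 1.0, len(traj))
        norm.append(np.interp(t_new, t_old, z))
    return float(np.vstack(norm).std(axis=0).sum())

# Identical repetitions have zero variability by construction.
base = np.sin(np.linspace(0, np.pi, 100))
print(spatiotemporal_index([base, base, base]))  # 0.0
```

Because each trajectory is normalised before comparison, the index captures instability of the movement pattern itself rather than differences in overall amplitude or duration.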


Subjects
Speech, Stuttering, Adult, Humans, Magnetic Resonance Imaging, Movement, Speech Production Measurement, Stuttering/diagnostic imaging
5.
Neuropsychologia; 146: 107568, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32687836

ABSTRACT

Transcranial direct current stimulation (tDCS) modulates cortical excitability in a polarity-specific way and, when used in combination with a behavioural task, can alter performance. tDCS therefore has potential as an adjunct to therapies designed to treat disorders affecting speech, including, but not limited to, acquired aphasias and developmental stuttering. For this reason, it is important to conduct studies evaluating its effectiveness and the parameters optimal for stimulation. Here, we aimed to evaluate the effects of bi-hemispheric tDCS over speech motor cortex on performance of a complex speech motor learning task, namely the repetition of tongue twisters. A previous study in older participants showed that tDCS could modulate performance on a similar task. To further understand the effects of tDCS, we also measured the excitability of the speech motor cortex before and after stimulation. Three groups of 20 healthy young controls received: (i) anodal tDCS to the left IFG/LipM1 and cathodal tDCS to the right-hemisphere homologue; (ii) cathodal tDCS over the left and anodal over the right; or (iii) sham stimulation. Participants heard and repeated novel tongue twisters and matched simple sentences before, during, and 10 min after the stimulation. tDCS at 1 mA was delivered concurrently with task performance for 13 min. Motor excitability was measured using transcranial magnetic stimulation to elicit motor-evoked potentials in the lip before and immediately after tDCS. The study was double-blind, randomized, and sham-controlled; the design and analysis were pre-registered. Performance on the task improved from baseline to after stimulation but was not significantly modulated by tDCS. Similarly, a small decrease in motor excitability was seen in all three stimulation groups but did not differ among them and was unrelated to task performance.
Bayesian analyses provide substantial evidence in support of the null hypotheses in both cases, namely that tongue twister performance and motor excitability were not affected by tDCS. We discuss our findings in the context of the previous positive results for a similar task. We conclude that tDCS may be most effective when brain function is sub-optimal due to age-related declines or pathology. Further study is required to determine why tDCS failed to modulate excitability in the speech motor cortex in the expected ways.
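A Bayesian test of a null group difference, as reported here, can be approximated from the BIC values of a null model (one common mean) and an alternative model (separate group means), using BF01 ≈ exp((BIC_alt − BIC_null)/2). The sketch below applies that standard BIC approximation to invented data; it is not the study's pre-registered analysis:

```python
import numpy as np

def bic_bayes_factor(groups):
    """BF01 (evidence for the null) via a BIC approximation.

    groups : list of 1-D arrays, one per condition, each with some spread.
    Null model: one common mean; alternative: one mean per group;
    both assume Gaussian noise with a shared ML variance.
    BF01 is approximated as exp((BIC_alt - BIC_null) / 2).
    """
    pooled = np.concatenate(groups)
    n = len(pooled)

    def bic(rss, k):
        # Gaussian ML log-likelihood reduces to -n/2 * log(rss/n) + const.
        return n * np.log(rss / n) + k * np.log(n)

    rss_null = np.sum((pooled - pooled.mean()) ** 2)
    rss_alt = sum(np.sum((g - g.mean()) ** 2) for g in groups)
    k_null, k_alt = 2, len(groups) + 1  # group means + shared variance
    return float(np.exp((bic(rss_alt, k_alt) - bic(rss_null, k_null)) / 2))

# Identical groups: the alternative gains no fit, so the null is favoured.
data = np.array([1.0, 2.0, 3.0, 4.0])
print(round(bic_bayes_factor([data, data, data]), 6))  # 12.0
```

BF01 greater than 1 favours the null; values above roughly 3 are conventionally read as substantial evidence, matching the qualitative conclusion reported in the abstract.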


Subjects
Evoked Potentials, Motor, Learning/physiology, Motor Cortex/physiology, Speech/physiology, Transcranial Direct Current Stimulation, Adolescent, Adult, Bayes Theorem, Female, Healthy Volunteers, Humans, Male, Transcranial Magnetic Stimulation, Young Adult