Results 1 - 6 of 6
1.
Cortex ; 169: 309-325, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37981441

ABSTRACT

Agrammatic or asyntactic comprehension is a common language impairment in aphasia. We considered three possible hypotheses about the underlying cause of this deficit, namely problems in syntactic processing, over-reliance on semantics, and a deficit in cognitive control. We tested four individuals with asyntactic comprehension on syntax-semantics conflict sentences (e.g., The robber handcuffed the cop), where semantic cues pushed towards a different interpretation from syntax. Two of the four participants performed above chance on such sentences, indicating that not all agrammatic individuals are impaired in structure-based interpretation. We collected additional eyetracking measures from the other two participants, who performed at chance on the conflict sentences. These measures suggested distinct underlying processing profiles in the two individuals. Cognitive assessments further suggested that one participant might have performed poorly due to a linguistic cognitive control impairment, while the other had difficulty due to over-reliance on semantics. Together, the results highlight the importance of multimodal measures for teasing apart aphasic individuals' underlying deficits. They corroborate findings from neurotypical adults by showing that semantics can strongly influence comprehension and that cognitive control could be relevant for choosing between competing sentence interpretations. They extend previous findings by demonstrating variability between individuals with aphasia: cognitive control might be especially relevant for patients who are not overly reliant on semantics. Clinically, the identification of distinct underlying problems in different individuals suggests that different treatment paths may be warranted for individuals who look similar on behavioral assessments.


Subject(s)
Broca Aphasia, Comprehension, Adult, Humans, Language, Semantics, Linguistics
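
The above-chance versus at-chance distinction drives the grouping of participants in this study. Purely as an illustration (not the authors' analysis), a one-tailed binomial test is one way performance can be compared against chance; the trial count, number of correct responses, and chance level of 0.5 below are hypothetical.

```python
# Illustrative sketch: testing whether accuracy on syntax-semantics conflict
# sentences exceeds chance (assumed 0.5 here, e.g., a two-alternative
# agent/patient decision). Not the authors' analysis code.
from scipy.stats import binomtest

n_correct = 32   # hypothetical number of correct responses
n_trials = 40    # hypothetical number of conflict-sentence trials

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
# A significant one-tailed p-value would indicate above-chance,
# structure-based interpretation of the conflict sentences.
```
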
2.
Behav Res Methods ; 2023 Aug 21.
Article in English | MEDLINE | ID: mdl-37604959

ABSTRACT

Mouth and facial movements are part and parcel of face-to-face communication. The primary way of assessing their role in speech perception has been by manipulating their presence (e.g., by blurring the area of a speaker's lips) or by looking at how informative different mouth patterns are for the corresponding phonemes (or visemes; e.g., /b/ is visually more salient than /g/). However, moving beyond informativeness of single phonemes is challenging due to coarticulation and language variations (to name just a few factors). Here, we present mouth and facial informativeness (MaFI) for words, i.e., how visually informative words are based on their corresponding mouth and facial movements. MaFI was quantified for 2276 English words, varying in length, frequency, and age of acquisition, using phonological distance between a word and participants' speechreading guesses. The results showed that the MaFI norms capture the dynamic nature of mouth and facial movements per word well, with words containing phonemes with roundness and frontness features, as well as visemes characterized by lower lip tuck, lip rounding, and lip closure, being visually more informative. We also showed that the more of these features there are in a word, the more informative it is based on mouth and facial movements. Finally, we demonstrated that the MaFI norms generalize across different variants of the English language. The norms are freely accessible via Open Science Framework ( https://osf.io/mna8j/ ) and can benefit any language researcher using audiovisual stimuli (e.g., to control for the effect of speech-linked mouth and facial movements).
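
The norms rest on a phonological distance between each target word and participants' speechreading guesses. The sketch below illustrates the general idea with a normalised edit distance over phoneme sequences; the phoneme transcriptions, the example word, and the exact scoring are assumptions, not the published MaFI procedure.

```python
# Illustrative sketch of a MaFI-style score: mean normalised phonological
# (edit) distance between a target word and speechreading guesses, with
# smaller distances meaning visually more informative mouth movements.

def edit_distance(a: list[str], b: list[str]) -> int:
    """Plain Levenshtein distance over phoneme sequences."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (pa != pb))  # substitution
    return dp[-1]

def mafi_score(target: list[str], guesses: list[list[str]]) -> float:
    """Mean normalised distance; lower = guesses closer to the target."""
    dists = [edit_distance(target, g) / max(len(target), len(g))
             for g in guesses]
    return sum(dists) / len(dists)

# Hypothetical example: target /b ih g/ ("big") and three speechreading guesses.
print(mafi_score(["b", "ih", "g"],
                 [["b", "ih", "g"], ["p", "ih", "g"], ["m", "ih", "t"]]))
```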

3.
Cortex ; 165: 86-100, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37271014

ABSTRACT

Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 neurotypical matched control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette) with audio being clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into our understanding of the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.


Subject(s)
Aphasia, Speech Perception, Humans, Speech, Comprehension, Aphasia/etiology, Aphasia/pathology, Temporal Lobe/pathology
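
The degraded condition used 6-band noise-vocoding. The following sketch shows the standard recipe for that kind of degradation (band-pass filtering, envelope extraction, envelope-modulated noise); the band edges, filter settings, and file name are assumptions and may differ from the stimuli actually used.

```python
# Illustrative sketch of noise vocoding with 6 frequency bands.
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal: np.ndarray, fs: int, n_bands: int = 6,
                 lo: float = 100.0, hi: float = 8000.0) -> np.ndarray:
    edges = np.geomspace(lo, hi, n_bands + 1)        # log-spaced band edges
    noise = np.random.randn(len(signal))
    out = np.zeros_like(signal, dtype=float)
    for f1, f2 in zip(edges[:-1], edges[1:]):
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)              # band-limited speech
        env = np.abs(hilbert(band))                  # amplitude envelope
        out += env * sosfiltfilt(sos, noise)         # envelope-modulated noise
    return out / np.max(np.abs(out))                 # normalise amplitude

# Usage with a hypothetical recording:
# fs, audio = scipy.io.wavfile.read("word.wav")
# degraded = noise_vocode(audio.astype(float), fs)
```
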
4.
Psychon Bull Rev ; 29(2): 600-612, 2022 Apr.
Article in English | MEDLINE | ID: mdl-34671936

ABSTRACT

Human face-to-face communication is multimodal: it comprises speech as well as visual cues, such as articulatory and limb gestures. In the current study, we assess how iconic gestures and mouth movements influence audiovisual word recognition. We presented video clips of an actress uttering single words accompanied, or not, by more or less informative iconic gestures. For each word we also measured the informativeness of the mouth movements from a separate lipreading task. We manipulated whether gestures were congruent or incongruent with the speech, and whether the words were audible or noise vocoded. The task was to decide whether the speech from the video matched a previously seen picture. We found that congruent iconic gestures aided word recognition, especially in the noise-vocoded condition, and the effect was larger (in terms of reaction times) for more informative gestures. Moreover, more informative mouth movements facilitated performance in challenging listening conditions when the speech was accompanied by gestures (either congruent or incongruent), suggesting an enhancement when both cues are present relative to just one. We also observed a trend whereby more informative mouth movements sped up word recognition across clarity conditions, but only when gestures were absent. We conclude that listeners use and dynamically weight the informativeness of gestures and mouth movements available during face-to-face communication.


Subject(s)
Gestures, Speech Perception, Comprehension, Humans, Lipreading, Speech
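
The design crosses gesture congruence with speech clarity within participants and analyses reaction times. As a hedged illustration of how such a 2 x 2 repeated-measures analysis could be set up (not the authors' pipeline), the sketch below uses statsmodels' AnovaRM with hypothetical column and file names.

```python
# Illustrative sketch of a 2 x 2 within-subject ANOVA on reaction times.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# The file is assumed to hold one mean RT per participant and condition:
# subject | gesture (congruent/incongruent) | clarity (clear/vocoded) | rt
df = pd.read_csv("word_recognition_rts.csv")   # hypothetical file

model = AnovaRM(df, depvar="rt", subject="subject",
                within=["gesture", "clarity"]).fit()
print(model)   # main effects and the gesture x clarity interaction
```
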
5.
Cortex ; 133: 309-327, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33161278

ABSTRACT

Hand gestures, imagistically related to the content of speech, are ubiquitous in face-to-face communication. Here we used lesion-symptom mapping to investigate how people with aphasia (PWA) process speech accompanied by gestures. Twenty-nine PWA and 15 matched controls were shown a picture of an object/action and then a video-clip of a speaker producing speech and/or gestures in one of the following combinations: speech-only, gesture-only, congruent speech-gesture, and incongruent speech-gesture. Participants' task was to indicate, in different blocks, whether the picture and the word matched (speech task), or whether the picture and the gesture matched (gesture task). Multivariate lesion analysis with Support Vector Regression Lesion-Symptom Mapping (SVR-LSM) showed that the benefit from congruent speech-gesture pairings was associated with 1) lesioned voxels in anterior fronto-temporal regions including inferior frontal gyrus (IFG), and sparing of posterior temporal cortex and lateral temporal-occipital regions (pTC/LTO) for the speech task, and 2) conversely, lesions to pTC/LTO and sparing of anterior regions for the gesture task. The two tasks did not share overlapping voxels. Costs from incongruent speech-gesture pairings were associated with lesioned voxels in these same anterior (for the speech task) and posterior (for the gesture task) regions, but, crucially, also with voxels shared between tasks in superior temporal gyrus (STG) and middle temporal gyrus (MTG), including the anterior temporal lobe. These results suggest that IFG and pTC/LTO contribute to extracting semantic information from speech and gesture, respectively; however, they are not causally involved in integrating information from the two modalities. In contrast, regions in anterior STG/MTG are associated with performance in both tasks and may thus be critical to speech-gesture integration. These conclusions are further supported by associations between performance in the experimental tasks and performance in tests assessing lexical-semantic processing and gesture recognition.


Subject(s)
Comprehension, Stroke, Brain Mapping, Gestures, Humans, Magnetic Resonance Imaging, Speech, Stroke/complications, Stroke/diagnostic imaging, Temporal Lobe
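
A minimal sketch of the SVR-LSM idea referred to above, assuming binarised lesion maps and a single behavioural score per patient: a linear support vector regression relates voxel-wise lesion status to the score, and the resulting voxel weights are inspected. Real SVR-LSM analyses additionally control for lesion volume and use permutation-based thresholding, which this toy version omits.

```python
# Illustrative sketch of lesion-symptom mapping with linear SVR.
import numpy as np
from sklearn.svm import SVR

n_patients, n_voxels = 29, 5000             # hypothetical dimensions
rng = np.random.default_rng(0)
lesions = rng.integers(0, 2, size=(n_patients, n_voxels))  # 1 = lesioned voxel
benefit = rng.normal(size=n_patients)       # placeholder behavioural scores

svr = SVR(kernel="linear", C=30.0).fit(lesions, benefit)
voxel_weights = svr.coef_.ravel()           # one weight per voxel

# Voxels with strongly negative weights are those whose damage is associated
# with a reduced speech-gesture benefit; significance would normally be
# assessed by permuting the behavioural scores and re-fitting the model.
print(voxel_weights[:10])
```
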
6.
Article in English | MEDLINE | ID: mdl-33154182

ABSTRACT

OBJECTIVE: The efficacy of spoken language comprehension therapies for persons with aphasia remains equivocal. We investigated the efficacy of a self-led therapy app, 'Listen-In', and examined the relation between brain structure and therapy response. METHODS: A cross-over randomised repeated measures trial with five testing time points (12-week intervals), conducted at the university or participants' homes, captured baseline (T1), therapy (T2-T4) and maintenance (T5) effects. Participants with chronic poststroke aphasia and spoken language comprehension impairments completed consecutive Listen-In and standard care blocks (both 12 weeks with order randomised). Repeated measures analyses of variance compared change in spoken language comprehension on two co-primary outcomes over therapy versus standard care. Three structural MRI scans (T2-T4) for each participant (subgroup, n=25) were analysed using cross-sectional and longitudinal voxel-based morphometry. RESULTS: Thirty-five participants completed, on average, 85 hours (IQR=70-100) of Listen-In (therapy first, n=18). The first study-specific co-primary outcome (Auditory Comprehension Test (ACT)) showed large and significant improvements for trained spoken words over therapy versus standard care (11%, Cohen's d=1.12). Gains were largely maintained at 12 and 24 weeks. There were no therapy effects on the second standardised co-primary outcome (Comprehensive Aphasia Test: Spoken Words and Sentences). Change on ACT trained words was associated with volume of pre-therapy right hemisphere white matter and post-therapy grey matter tissue density changes in bilateral temporal lobes. CONCLUSIONS: Individuals with chronic aphasia can improve their spoken word comprehension many years after stroke. Results contribute to hemispheric debates implicating the right hemisphere in therapy-driven language recovery. Listen-In will soon be available on Google Play. TRIAL REGISTRATION NUMBER: NCT02540889.
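
The trained-word effect is reported as Cohen's d = 1.12 for change over therapy versus standard care. One way such a paired effect size can be computed is sketched below; the change scores are hypothetical and the paired-difference formulation is an assumption about the exact calculation used in the trial.

```python
# Illustrative sketch of a paired Cohen's d for therapy vs standard care change.
import numpy as np

def cohens_d_paired(change_therapy: np.ndarray,
                    change_standard_care: np.ndarray) -> float:
    """d for the within-participant difference between the two change scores."""
    diff = change_therapy - change_standard_care
    return diff.mean() / diff.std(ddof=1)

# Hypothetical ACT change scores (percentage points) for a few participants:
therapy = np.array([12.0, 9.5, 14.0, 8.0, 11.5])
standard = np.array([1.0, -0.5, 2.0, 0.5, 1.5])
print(f"Cohen's d = {cohens_d_paired(therapy, standard):.2f}")
```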
