Results 1 - 6 of 6
1.
Psychophysiology ; 61(1): e14435, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37691098

ABSTRACT

Predictive processing theories, which model the brain as a "prediction machine", explain a wide range of cognitive functions, including learning, perception, and action. Furthermore, it is increasingly accepted that aberrant prediction tendencies play a crucial role in psychiatric disorders. Given this explanatory value for clinical psychiatry, prediction tendencies are often implicitly conceptualized as individual traits, or as tendencies that generalize across situations. As this has not yet been explicitly shown, in the current study we quantify to what extent the individual tendency to anticipate sensory features of high probability generalizes across modalities. Using magnetoencephalography (MEG), we recorded brain activity while participants were presented with a sequence of four different (either visual or auditory) stimuli, which changed according to predefined transitional probabilities at two entropy levels: ordered vs. random. Our results show that, at the group level, under conditions of low entropy, stimulus features of high probability are preactivated in the auditory but not in the visual modality. Crucially, the magnitude of the individual tendency to predict sensory events does not appear to correlate between the two modalities. Furthermore, reliability statistics indicate poor internal consistency, suggesting that the measures from the different modalities are unlikely to reflect a single, common cognitive process. In sum, our findings suggest that the quantification and interpretation of individual prediction tendencies cannot be generalized across modalities.
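The sequence design described above (four stimuli changing according to predefined transitional probabilities at two entropy levels) can be sketched as follows. This is a minimal illustration, not the study's actual design: the probability values, function names, and the cyclic successor structure are all assumptions.

```python
import numpy as np

def make_transition_matrix(n_stimuli=4, p_dominant=0.75):
    """Transition matrix in which each stimulus is followed by one
    successor with high probability (low-entropy, 'ordered' condition);
    the remaining probability mass is spread over the other stimuli."""
    p_rest = (1.0 - p_dominant) / (n_stimuli - 1)
    T = np.full((n_stimuli, n_stimuli), p_rest)
    for i in range(n_stimuli):
        T[i, (i + 1) % n_stimuli] = p_dominant  # hypothetical cyclic order
    return T

def generate_sequence(T, length, rng):
    """Sample a stimulus sequence from transition matrix T."""
    seq = [rng.integers(T.shape[0])]
    for _ in range(length - 1):
        seq.append(rng.choice(T.shape[0], p=T[seq[-1]]))
    return np.array(seq)

rng = np.random.default_rng(0)
ordered = make_transition_matrix(p_dominant=0.75)   # low entropy
random_T = make_transition_matrix(p_dominant=0.25)  # uniform rows: high entropy
seq = generate_sequence(ordered, 1000, rng)
```

With `p_dominant` equal to `1 / n_stimuli`, every row becomes uniform, so the same generator covers both the ordered and the random condition.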


Subjects
Auditory Perception, Visual Perception, Humans, Reproducibility of Results, Brain, Magnetoencephalography, Acoustic Stimulation
2.
PLoS One ; 17(9): e0275585, 2022.
Article in English | MEDLINE | ID: mdl-36178907

ABSTRACT

Visual input is crucial for understanding speech under noisy conditions, but there are hardly any tools to assess the individual ability to lipread. With this study, we aimed to (1) investigate how linguistic characteristics of the language on the one hand, and hearing impairment on the other, affect lipreading abilities, and (2) provide a tool to assess lipreading abilities in German speakers. 170 participants (22 prelingually deaf) completed the online assessment, which consisted of a subjective hearing impairment scale and silent videos in which different item categories (numbers, words, and sentences) were spoken. The task for our participants was to recognize the spoken stimuli by visual inspection alone. We used different versions of one test and investigated the impact of item category, word frequency in the spoken language, articulation, sentence frequency in the spoken language, sentence length, and differences between speakers on the recognition score. We found an effect of item category, articulation, sentence frequency, and sentence length on the recognition score. With respect to hearing impairment, we found that higher subjective hearing impairment is associated with higher test scores. We did not find any evidence that prelingually deaf individuals show enhanced lipreading skills over people with postlingually acquired hearing impairment. However, we observed an interaction with education only in the prelingually deaf, not in the population with postlingually acquired hearing loss. This suggests that different factors contribute to enhanced lipreading abilities depending on the onset of hearing impairment (prelingual vs. postlingual). Overall, lipreading skills vary strongly in the general population, independent of hearing impairment. Based on our findings, we constructed a new and efficient lipreading assessment tool (SaLT) that can be used to test behavioral lipreading abilities in the German-speaking population.
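A per-category recognition score of the kind analyzed above might be computed as the proportion of correctly identified items. The exact scoring rule used for SaLT is not specified here, so the case-insensitive exact-match criterion and the trial layout below are assumptions for illustration only.

```python
from collections import defaultdict

def recognition_scores(trials):
    """Proportion of correctly recognized items per item category.
    Each trial is a (category, target, response) triple; matching is
    case-insensitive exact string match (a simplification)."""
    hits = defaultdict(int)
    totals = defaultdict(int)
    for category, target, response in trials:
        totals[category] += 1
        if response.strip().lower() == target.strip().lower():
            hits[category] += 1
    return {c: hits[c] / totals[c] for c in totals}

# hypothetical German-item trials
trials = [
    ("numbers", "sieben", "sieben"),
    ("numbers", "drei", "zwei"),
    ("words", "Haus", "haus"),
]
scores = recognition_scores(trials)
```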


Subjects
Deafness, Hearing Loss, Speech Perception, Humans, Language, Linguistics, Lipreading, Speech, Visual Perception
4.
Neuroimage ; 252: 119044, 2022 05 15.
Article in English | MEDLINE | ID: mdl-35240298

ABSTRACT

Multisensory integration enables stimulus representation even when the sensory input in a single modality is weak. In the context of speech, when confronted with a degraded acoustic signal, congruent visual inputs promote comprehension. When this input is masked, speech comprehension consequently becomes more difficult. However, it remains unclear which levels of speech processing are affected, and under which circumstances, when the mouth area is occluded. To answer this question, we conducted an audiovisual (AV) multi-speaker experiment using naturalistic speech. In half of the trials, the target speaker wore a (surgical) face mask while we measured the brain activity of normal-hearing participants via magnetoencephalography (MEG). In half of the trials, we additionally added a distractor speaker in order to create an ecologically difficult listening situation. A decoding model was trained on the clear AV speech and used to reconstruct crucial speech features in each condition. We found significant main effects of face masks on the reconstruction of acoustic features, such as the speech envelope and spectral speech features (i.e., pitch and formant frequencies), while the reconstruction of higher-level features of speech segmentation (phoneme and word onsets) was especially impaired by masks in difficult listening situations. As we used surgical face masks, which have only mild effects on speech acoustics, we interpret our findings as the result of the missing visual input. Our findings extend previous behavioural results by demonstrating the complex contextual effects of occluding relevant visual information on speech processing.
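Backward (stimulus-reconstruction) decoding models of the kind described above are commonly fit as time-lagged ridge regressions from the sensor data to the speech feature. The following is a minimal NumPy sketch under that assumption, using toy data rather than the study's MEG recordings; lag range, regularization, and all names are illustrative.

```python
import numpy as np

def lagged_design(meg, lags):
    """Stack time-lagged copies of each sensor as regressors.
    meg: array of shape (n_times, n_sensors)."""
    n_times, n_sensors = meg.shape
    X = np.zeros((n_times, n_sensors * len(lags)))
    for j, lag in enumerate(lags):
        X[:, j * n_sensors:(j + 1) * n_sensors] = np.roll(meg, lag, axis=0)
    return X

def fit_backward_model(meg, envelope, lags, alpha=1.0):
    """Ridge-regularized backward model: map lagged MEG to envelope."""
    X = lagged_design(meg, lags)
    XtX = X.T @ X + alpha * np.eye(X.shape[1])
    return np.linalg.solve(XtX, X.T @ envelope)

def reconstruct(meg, w, lags):
    return lagged_design(meg, lags) @ w

# toy demo: an 'envelope' linearly mixed into 10 noisy sensors
rng = np.random.default_rng(1)
env = rng.standard_normal(500)
meg = np.outer(env, rng.standard_normal(10)) + 0.1 * rng.standard_normal((500, 10))
lags = [0, 1, 2]
w = fit_backward_model(meg, env, lags, alpha=1.0)
rec = reconstruct(meg, w, lags)
r = np.corrcoef(rec, env)[0, 1]  # reconstruction accuracy
```

Reconstruction accuracy is then typically compared across conditions (mask vs. no mask, with vs. without distractor) as in the study's analysis.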


Subjects
Speech Perception, Speech, Acoustic Stimulation, Acoustics, Humans, Mouth, Visual Perception
5.
Cereb Cortex ; 32(21): 4818-4833, 2022 10 20.
Article in English | MEDLINE | ID: mdl-35062025

ABSTRACT

The integration of visual and auditory cues is crucial for successful processing of speech, especially under adverse conditions. Recent reports have shown that when participants watch muted videos of speakers, the phonological information about the acoustic speech envelope, which is associated with but independent from the speakers' lip movements, is tracked by the visual cortex. However, the speech signal also carries richer acoustic details, for example about the fundamental frequency and the resonant frequencies, whose visuo-phonological transformation could aid speech processing. Here, we investigated the neural basis of the visuo-phonological transformation of these more fine-grained acoustic details and assessed how it changes as a function of age. We recorded whole-head magnetoencephalographic (MEG) data while participants watched silent normal (i.e., natural) and reversed videos of a speaker and paid attention to their lip movements. We found that the visual cortex is able to track the unheard natural modulations of resonant frequencies (or formants) and of the pitch (or fundamental frequency) linked to lip movements. Importantly, only the processing of natural unheard formants decreases significantly with age, in the visual and also in the cingulate cortex. This is not the case for the processing of the unheard speech envelope, the fundamental frequency, or the purely visual information carried by lip movements. These results show that unheard spectral fine details (along with the unheard acoustic envelope) are transformed from a merely visual to a phonological representation. Aging especially affects the ability to derive spectral dynamics at formant frequencies. As listening in noisy environments should capitalize on the ability to track spectral fine details, our results provide a novel focus on compensatory processes in such challenging situations.
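Cortical tracking of an unheard acoustic feature, such as a formant time course, is often quantified as spectral coherence between the feature and the cortical signal. A toy sketch with simulated data: the 4 Hz modulation rate, signal amplitudes, and noise levels are assumptions purely for illustration, not values from the study.

```python
import numpy as np
from scipy.signal import coherence

fs = 200.0                      # sampling rate in Hz
t = np.arange(0, 30, 1 / fs)    # 30 s of data
rng = np.random.default_rng(2)

# hypothetical formant modulation around 4 Hz, plus noise
formant = np.sin(2 * np.pi * 4 * t) + 0.5 * rng.standard_normal(t.size)
# simulated 'visual cortex' signal tracking the same modulation
meg_vis = 0.8 * np.sin(2 * np.pi * 4 * t + 0.3) + rng.standard_normal(t.size)

# Welch coherence spectrum between feature and cortical signal
f, Cxy = coherence(formant, meg_vis, fs=fs, nperseg=512)
peak_freq = f[np.argmax(Cxy)]   # should sit near the shared 4 Hz rhythm
```

An age effect like the one reported above would then appear as a decline of such coherence values with participant age, specifically for the formant feature.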


Subjects
Speech Perception, Humans, Acoustic Stimulation, Lip, Speech, Movement
6.
Cortex ; 137: 179-193, 2021 04.
Article in English | MEDLINE | ID: mdl-33636631

ABSTRACT

Continuously prioritizing behaviourally relevant information from the environment for improved stimulus processing is a crucial function of attention. In the current MEG study, we investigated how the ongoing oscillatory activity of both sensory and non-sensory brain regions is differentially impacted by attentional focus. Low-frequency phase alignment of neural activity in primary sensory areas with respect to attended/ignored features has been suggested to support top-down prioritization. However, phase adjustment in frontoparietal regions has not been widely studied, despite the general implication of these regions in the top-down selection of information. To investigate this, participants performed an established intermodal selective attention task in which low-frequency auditory (1.6 Hz) and visual (1.8 Hz) stimuli were presented simultaneously. We instructed them to attend either to the auditory or to the visual stimuli and to detect targets while ignoring the other stimulus stream. As expected, the strongest phase adjustment was observed in primary sensory regions for auditory and for visual stimulation, independent of attentional focus. We found greater differences in phase locking between attended and ignored stimulation for the visual modality. Interestingly, auditory temporal regions showed small but significant attention-dependent neural entrainment even for visual stimulation. Extending findings from invasive recordings in non-human primates, we demonstrate an effect of attentional focus on the phase of the entrained oscillations in auditory and visual cortex, which may be driven by phase-locked increases of induced power. While sensory areas adjusted the phase at the respective stimulation frequencies, attentional focus adjusted the peak frequencies in non-sensory areas. Spatially, these areas show a striking overlap with core regions of the dorsal attention network and the frontoparietal network. This suggests that these areas prioritize the attended modality by optimally exploiting the temporal structure of stimulation. Overall, our study complements and extends previous work by showing a differential effect of attentional focus on entrained oscillations (or phase adjustment) in primary sensory areas and frontoparietal areas.
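Phase adjustment to rhythmic stimulation is commonly quantified with inter-trial phase coherence (ITPC): the length of the mean unit phase vector across trials at the stimulation frequency. A minimal sketch at the 1.6 Hz auditory rate mentioned above, using simulated trials; trial counts, noise level, and sampling rate are assumptions.

```python
import numpy as np

def itpc(trials, freq, fs):
    """Inter-trial phase coherence at a single frequency.
    trials: (n_trials, n_times). Returns a value in [0, 1];
    1 means identical phase on every trial."""
    n_times = trials.shape[1]
    t = np.arange(n_times) / fs
    # single-frequency Fourier coefficient for each trial
    coeffs = trials @ np.exp(-2j * np.pi * freq * t)
    return np.abs(np.mean(coeffs / np.abs(coeffs)))

fs, n_times, n_trials = 100, 500, 40   # 5 s trials
t = np.arange(n_times) / fs
rng = np.random.default_rng(3)

# entrained condition: consistent 1.6 Hz phase across trials, plus noise
entrained = np.array([np.sin(2 * np.pi * 1.6 * t + 0.2)
                      + rng.standard_normal(n_times) for _ in range(n_trials)])
# control: random 1.6 Hz phase on every trial
jittered = np.array([np.sin(2 * np.pi * 1.6 * t + rng.uniform(0, 2 * np.pi))
                     + rng.standard_normal(n_times) for _ in range(n_trials)])

itpc_locked = itpc(entrained, 1.6, fs)
itpc_random = itpc(jittered, 1.6, fs)
```

Comparing such ITPC values between attend-auditory and attend-visual conditions, per region, is one way the attention-dependent phase effects described above can be tested.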


Subjects
Frontal Lobe, Visual Cortex, Acoustic Stimulation, Auditory Perception, Brain, Brain Mapping, Photic Stimulation, Visual Perception