Results 1 - 15 of 15
1.
J Headache Pain ; 21(1): 56, 2020 May 24.
Article in English | MEDLINE | ID: mdl-32448118

ABSTRACT

BACKGROUND: Vestibular symptoms and balance changes are common in patients with migraine, especially in those with aura and chronic migraine. However, it is not known whether the balance changes are determined by the presence of vestibular symptoms or by the migraine subdiagnosis. Therefore, the aim of this study was to verify whether the migraine subdiagnosis and/or the presence of vestibular symptoms can predict balance dysfunction in migraineurs. METHODS: The study included 49 women diagnosed with migraine with aura, 53 without aura, 51 with chronic migraine, and 54 headache-free women. All participants answered a structured questionnaire regarding migraine features and the presence of vestibular symptoms, such as dizziness/vertigo. The participants performed the Modified Sensory Organization Test on an AMTI© force plate. The data were analysed using a linear mixed-effects regression model. RESULTS: The presence of vestibular symptoms did not predict postural sway, but the subdiagnosis was a significant predictor of postural sway. Patients with migraine with aura exhibited more sway than migraine patients without aura when the surface was unstable. Additionally, we found large effect sizes (ES > 0.79) for postural sway differences between patients with chronic migraine or with aura compared to controls or migraine without aura, suggesting that these results are clinically relevant. CONCLUSIONS: The subdiagnosis of migraine, rather than the presence of vestibular symptoms, can predict the postural control impairments observed in migraineurs. This lends support to the notion that balance instability is related to the presence of aura and to migraine chronicity, and that it should be considered even in patients without vestibular symptoms.


Subjects
Migraine Disorders/diagnosis, Migraine Disorders/physiopathology, Postural Balance/physiology, Vestibular Diseases/diagnosis, Vestibular Diseases/physiopathology, Adult, Cross-Sectional Studies, Female, Humans, Middle Aged, Migraine Disorders/epidemiology, Predictive Value of Tests, Surveys and Questionnaires, Vertigo/diagnosis, Vertigo/epidemiology, Vertigo/physiopathology, Vestibular Diseases/epidemiology, Young Adult
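The linear mixed-effects regression named in the abstract above can be sketched roughly as follows: postural sway modeled as a function of migraine subdiagnosis and test condition, with a random intercept per participant. All group labels, effect magnitudes, and variable names are illustrative assumptions, not the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_subjects, n_conditions = 40, 4          # hypothetical design
groups = rng.choice(["control", "no_aura", "aura", "chronic"], n_subjects)
group_effect = {"control": 0.0, "no_aura": 0.2, "aura": 0.8, "chronic": 0.9}

rows = []
for subj in range(n_subjects):
    subj_offset = rng.normal(0, 0.5)      # random intercept per participant
    for cond in range(n_conditions):      # e.g., increasingly unstable surfaces
        sway = (2.0 + group_effect[groups[subj]] + 0.3 * cond
                + subj_offset + rng.normal(0, 0.3))
        rows.append({"subject": subj, "group": groups[subj],
                     "condition": cond, "sway": sway})
df = pd.DataFrame(rows)

# Random-intercept model: sway ~ group + condition, grouped by subject.
model = smf.mixedlm("sway ~ C(group, Treatment('control')) + condition",
                    df, groups=df["subject"])
fit = model.fit()
print(fit.summary())
```

The fixed-effect coefficients for each subdiagnosis relative to the control group play the role of the group differences reported in the abstract.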
2.
J Acoust Soc Am ; 144(4): 2178, 2018 10.
Article in English | MEDLINE | ID: mdl-30404485

ABSTRACT

Cocktail parties pose a difficult yet solvable problem for the auditory system. Previous work has shown that the cocktail-party problem is considerably easier when all sounds in the target stream are spoken by the same talker (the voice-continuity benefit). The present study investigated the contributions of two of the most salient voice features-glottal-pulse rate (GPR) and vocal-tract length (VTL)-to the voice-continuity benefit. Twenty young, normal-hearing listeners participated in two experiments. On each trial, listeners heard concurrent sequences of spoken digits from three different spatial locations and reported the digits coming from a target location. Critically, across conditions, GPR and VTL either remained constant or varied across target digits. Additionally, across experiments, the target location either remained constant (Experiment 1) or varied (Experiment 2) within a trial. In Experiment 1, listeners benefited from continuity in either voice feature, but VTL continuity was more helpful than GPR continuity. In Experiment 2, spatial discontinuity greatly hindered listeners' abilities to exploit continuity in GPR and VTL. The present results suggest that selective attention benefits from continuity in target voice features and that VTL and GPR play different roles for perceptual grouping and stream segregation in the cocktail party.

3.
Neuroimage ; 91: 375-85, 2014 May 01.
Article in English | MEDLINE | ID: mdl-24434677

ABSTRACT

Understanding speech from different speakers is a sophisticated process, particularly because the same acoustic parameters convey important information about both the speech message and the person speaking. How the human brain accomplishes speech recognition under such conditions is unknown. One view is that speaker information is discarded at early processing stages and not used for understanding the speech message. An alternative view is that speaker information is exploited to improve speech recognition. Consistent with the latter view, previous research identified functional interactions between the left- and the right-hemispheric superior temporal sulcus/gyrus, which process speech- and speaker-specific vocal tract parameters, respectively. Vocal tract parameters are one of the two major acoustic features that determine both speaker identity and speech message (phonemes). Here, using functional magnetic resonance imaging (fMRI), we show that a similar interaction exists for glottal fold parameters between the left and right Heschl's gyri. Glottal fold parameters are the other main acoustic feature that determines speaker identity and speech message (linguistic prosody). The findings suggest that interactions between left- and right-hemispheric areas are specific to the processing of different acoustic features of speech and speaker, and that they represent a general neural mechanism when understanding speech from different speakers.


Subjects
Brain/physiology, Recognition (Psychology)/physiology, Speech/physiology, Adult, Female, Functional Laterality/physiology, Glottis/anatomy & histology, Glottis/physiology, Humans, Computer-Assisted Image Processing, Individuality, Magnetic Resonance Imaging, Male, Oxygen/blood, Psycholinguistics, Vocal Cords/anatomy & histology, Vocal Cords/physiology, Young Adult
4.
Neuroimage ; 102 Pt 2: 332-44, 2014 Nov 15.
Article in English | MEDLINE | ID: mdl-25087482

ABSTRACT

Hemispheric specialization for linguistic prosody is a controversial issue. While it is commonly assumed that linguistic prosody and emotional prosody are preferentially processed in the right hemisphere, neuropsychological work directly comparing processes of linguistic prosody and emotional prosody suggests a predominant role of the left hemisphere for linguistic prosody processing. Here, we used two functional magnetic resonance imaging (fMRI) experiments to clarify the role of left and right hemispheres in the neural processing of linguistic prosody. In the first experiment, we sought to confirm previous findings showing that linguistic prosody processing compared to other speech-related processes predominantly involves the right hemisphere. Unlike previous studies, we controlled for stimulus influences by employing a prosody and speech task using the same speech material. The second experiment was designed to investigate whether a left-hemispheric involvement in linguistic prosody processing is specific to contrasts between linguistic prosody and emotional prosody or whether it also occurs when linguistic prosody is contrasted against other non-linguistic processes (i.e., speaker recognition). Prosody and speaker tasks were performed on the same stimulus material. In both experiments, linguistic prosody processing was associated with activity in temporal, frontal, parietal and cerebellar regions. Activation in temporo-frontal regions showed differential lateralization depending on whether the control task required recognition of speech or speaker: recognition of linguistic prosody predominantly involved right temporo-frontal areas when it was contrasted against speech recognition; when contrasted against speaker recognition, recognition of linguistic prosody predominantly involved left temporo-frontal areas. The results show that linguistic prosody processing involves functions of both hemispheres and suggest that recognition of linguistic prosody is based on an inter-hemispheric mechanism which exploits both a right-hemispheric sensitivity to pitch information and a left-hemispheric dominance in speech processing.


Subjects
Brain/physiology, Functional Laterality, Recognition (Psychology)/physiology, Speech Perception/physiology, Speech/physiology, Adult, Brain Mapping, Emotions/physiology, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult
5.
Article in English | MEDLINE | ID: mdl-37998307

ABSTRACT

Psychosocial support programs have been increasingly implemented to protect asylum seekers' wellbeing, though how and why these interventions work is not yet fully understood. This study first uses questionnaires to examine how self-efficacy, satisfaction of basic psychological needs, and adaptive stress may influence wellbeing for a group of asylum-seeking mothers attending a community-based psychosocial program called Welcome Haven. Second, we explore mothers' experiences attending the Welcome Haven program through qualitative interviews. Analysis reveals the importance of relatedness as a predictor of wellbeing as well as the mediating role of adaptive stress between need satisfaction and wellbeing. Further, attending Welcome Haven is associated with reduced adaptive stress and increased wellbeing, which correspond with the thematic analysis showing that attendance at the workshops fostered a sense of belonging through connection with other asylum seekers and service providers as well as empowerment through access to information and self-expression. The results point to the importance of community-based support that addresses adaptive stress and the promotion of social connection as key determinants of wellbeing. Nonetheless, the centrality of pervasive structural stressors asylum seekers experience during resettlement also cautions that relief offered by interventions may be insufficient in the face of ongoing systemic inequality and marginalization.


Subjects
Mothers, Refugees, Female, Humans, Qualitative Research, Surveys and Questionnaires, Refugees/psychology, Personal Autonomy
6.
Neuroimage ; 54(3): 2340-9, 2011 Feb 01.
Article in English | MEDLINE | ID: mdl-20965256

ABSTRACT

The onset of motion in an otherwise continuous sound elicits a prominent auditory evoked potential, the so-called motion onset response (MOR). The MOR has recently been shown to be modulated by stimulus-dependent factors, such as velocity, while the possible role of task-dependent factors has remained unclear. Here, the effect of spatial attention on the MOR was investigated in 19 listeners. In each trial, the subject initially heard a free-field sound, consisting of a stationary period and a subsequent period of motion. Then, two successive stationary test tones were presented that differed in location and pitch. Subjects either judged whether or not the starting and final positions of the preceded motion matched the positions of the two test tones ('motion-focused condition'), or whether or not the test tones were identical in pitch, irrespective of the preceded motion stimulus ('baseline condition'). These two tasks were presented in separate experimental blocks. The performance level in both tasks was similar. However, especially later portions of the MOR were significantly increased in amplitude when auditory motion was task-relevant. Cortical source localization indicated that this extra activation originated in dorsofrontal areas that have been proposed to be part of the dorsal auditory processing stream. These results support the assumption that auditory motion processing is based on a complex interaction of both stimulus-specific and attentional processes.


Subjects
Attention/physiology, Auditory Perception/physiology, Cerebral Cortex/physiology, Electroencephalography, Motion Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Auditory Pathways/physiology, Statistical Data Interpretation, Female, Humans, Computer-Assisted Image Processing, Male, Psychomotor Performance/physiology, Space Perception/physiology, Young Adult
7.
Cognition ; 215: 104780, 2021 10.
Article in English | MEDLINE | ID: mdl-34298232

ABSTRACT

Familiar and unfamiliar voice perception are often understood as being distinct from each other. For identity perception, theoretical work has proposed that listeners use acoustic information in different ways to perceive identity from familiar and unfamiliar voices: Unfamiliar voices are thought to be processed based on close comparisons of acoustic properties, while familiar voices are processed based on diagnostic acoustic features that activate a stored person-specific representation of that voice. To date no empirical study has directly examined whether and how familiar and unfamiliar listeners differ in their use of acoustic information for identity perception. Here, we tested this theoretical claim by linking listeners' judgements in voice identity tasks to complex acoustic representation - spectral similarity of the heard voice recordings. Participants (N = 177) who were either familiar or unfamiliar with a set of voices completed an identity discrimination task (Experiment 1) or an identity sorting task (Experiment 2). In both experiments, identity judgements for familiar and unfamiliar voices were guided by spectral similarity: Pairs of recordings with greater acoustic similarity were more likely to be perceived as belonging to the same voice identity. However, while there were no differences in how familiar and unfamiliar listeners used acoustic information for identity discrimination, differences were apparent for identity sorting. Our study therefore challenges proposals that view familiar and unfamiliar voice perception as being at all times distinct. Instead, our data suggest a critical role of the listening situation in which familiar and unfamiliar voices are evaluated, thus characterising voice identity perception as a highly dynamic process in which listeners opportunistically make use of any kind of information they can access.


Subjects
Speech Perception, Voice, Acoustics, Auditory Perception, Humans, Recognition (Psychology)
8.
R Soc Open Sci ; 8(11): 210881, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34804567

ABSTRACT

Acoustic noise is pervasive in human environments. Some individuals are more tolerant to noise than others. We demonstrate the explanatory potential of Big-5 personality traits neuroticism (being emotionally unstable) and extraversion (being enthusiastic, outgoing) on subjective self-report and objective psycho-acoustic metrics of hearing in noise in two samples (total N = 1103). Under statistical control for demographics and in agreement with pre-registered hypotheses, lower neuroticism and higher extraversion independently explained superior self-reported noise resistance, speech-hearing ability and acceptable background noise levels. Surprisingly, objective speech-in-noise recognition instead increased with higher levels of neuroticism. In turn, the bias in subjectively overrating one's own hearing in noise decreases with higher neuroticism but increases with higher extraversion. Of benefit to currently underspecified frameworks of hearing in noise and tailored audiological treatments, these results show that personality explains inter-individual differences in coping with acoustic noise, which is a ubiquitous source of distraction and a health hazard.

9.
iScience ; 24(4): 102345, 2021 Apr 23.
Article in English | MEDLINE | ID: mdl-33870139

ABSTRACT

Slow neurobiological rhythms, such as the circadian secretion of glucocorticoid (GC) hormones, modulate a variety of body functions. Whether and how endocrine fluctuations also exert an influence on perceptual abilities is largely uncharted. Here, we show that phasic increases in GC availability prove beneficial to auditory discrimination. In an age-varying sample of N = 68 healthy human participants, we characterize the covariation of saliva cortisol with perceptual sensitivity in an auditory pitch discrimination task at five time points across the sleep-wake cycle. First, momentary saliva cortisol levels were captured well by the time relative to wake-up and overall sleep duration. Second, within individuals, higher cortisol levels just prior to behavioral testing predicted better pitch discrimination ability, expressed as a steepened psychometric curve. This effect of GCs held under a set of statistical controls. Our results pave the way for more in-depth studies on neuroendocrinological determinants of sensory encoding and perception.
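The abstract's outcome measure, a "steepened psychometric curve", can be illustrated by fitting a logistic psychometric function to pitch-discrimination responses. The function form, parameter values, and simulated data below are assumptions for illustration, not the study's data or analysis code.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(x, threshold, slope):
    """P('higher' response) as a function of pitch difference;
    a steeper slope indicates better discrimination sensitivity."""
    return 1.0 / (1.0 + np.exp(-slope * (x - threshold)))

rng = np.random.default_rng(1)
pitch_diff = np.linspace(-2.0, 2.0, 9)    # hypothetical semitone differences
n_trials = 200                            # simulated trials per difference
p_true = psychometric(pitch_diff, 0.0, 3.0)
prop_higher = rng.binomial(n_trials, p_true) / n_trials

# Fit the curve; the estimated slope is the per-session sensitivity
# measure that could then be related to momentary cortisol levels.
(est_threshold, est_slope), _ = curve_fit(psychometric, pitch_diff,
                                          prop_higher, p0=[0.0, 1.0])
```

In a design like the one described, one such slope would be estimated per testing time point and entered into a within-participant model against cortisol.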

10.
Neuropsychologia ; 146: 107505, 2020 09.
Article in English | MEDLINE | ID: mdl-32485200

ABSTRACT

Recent research posits that the cognitive system samples target stimuli in a rhythmic fashion, characterized by target detection fluctuating at frequencies of ~3-8 Hz. Besides prioritized encoding of targets, a key cognitive function is the protection of working memory from distractor intrusion. Here, we test to which degree the vulnerability of working memory to distraction is rhythmic. In an Irrelevant-Speech Task, N = 23 human participants had to retain the serial order of nine numbers in working memory while being distracted by task-irrelevant speech with variable temporal onsets. The magnitude of the distractor-evoked N1 component in the event-related potential as well as behavioural recall accuracy, both measures of memory distraction, were periodically modulated by distractor onset time in approximately 2-4 cycles per second (Hz). Critically, an underlying 2.5-Hz rhythm explained variation in both measures of distraction such that stronger phasic distractor encoding mediated lower phasic memory recall accuracy. In a behavioural follow-up experiment, we tested whether these results would replicate in a task design without rhythmic presentation of target items. Participants (N = 6 with on average >2500 trials, each) retained two line-figures in memory while being distracted by acoustic noise of varying onset across trials. In agreement with the main experiment, the temporal onset of the distractor periodically modulated memory performance. These results suggest that during working memory retention, the human cognitive system implements distractor suppression in a temporally dynamic fashion, reflected in ~400-ms long cycles of high versus low distractibility.


Subjects
Attention, Short-Term Memory, Evoked Potentials, Humans, Mental Recall, Reaction Time
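The rhythm analysis described in the abstract above can be sketched as a fixed-frequency sinusoidal fit: testing whether recall accuracy varies with distractor onset time at an assumed 2.5 Hz (a 400-ms cycle) by regressing accuracy onto sine and cosine regressors. Data and parameter values are hypothetical.

```python
import numpy as np

freq = 2.5                                # assumed modulation frequency, Hz
onsets = np.linspace(0.0, 1.2, 25)        # hypothetical distractor onsets, s
rng = np.random.default_rng(2)
true_amp, true_phase = 0.06, 0.8
accuracy = (0.75 + true_amp * np.cos(2 * np.pi * freq * onsets - true_phase)
            + rng.normal(0, 0.01, onsets.size))

# Least-squares fit on cosine/sine regressors recovers the amplitude and
# phase of the periodic modulation at the assumed frequency.
X = np.column_stack([np.cos(2 * np.pi * freq * onsets),
                     np.sin(2 * np.pi * freq * onsets),
                     np.ones_like(onsets)])
coef, *_ = np.linalg.lstsq(X, accuracy, rcond=None)
amp = np.hypot(coef[0], coef[1])          # modulation depth of accuracy
phase = np.arctan2(coef[1], coef[0])      # phase of high/low distractibility
```

Scanning such a fit over candidate frequencies (rather than fixing 2.5 Hz) is one common way to identify the dominant behavioral rhythm.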
11.
Atten Percept Psychophys ; 81(4): 1108-1118, 2019 May.
Article in English | MEDLINE | ID: mdl-30993655

ABSTRACT

When one is listening, familiarity with an attended talker's voice improves speech comprehension. Here, we instead investigated the effect of familiarity with a distracting talker. In an irrelevant-speech task, we assessed listeners' working memory for the serial order of spoken digits when a task-irrelevant, distracting sentence was produced by either a familiar or an unfamiliar talker (with rare omissions of the task-irrelevant sentence). We tested two groups of listeners using the same experimental procedure. The first group were undergraduate psychology students (N = 66) who had attended an introductory statistics course. Critically, each student had been taught by one of two course instructors, whose voices served as the familiar and unfamiliar task-irrelevant talkers. The second group of listeners were family members and friends (N = 20) who had known either one of the two talkers for more than 10 years. Students, but not family members and friends, made more errors when the task-irrelevant talker was familiar versus unfamiliar. Interestingly, the effect of talker familiarity was not modulated by the presence of task-irrelevant speech: Students experienced stronger working memory disruption by a familiar talker, irrespective of whether they heard a task-irrelevant sentence during memory retention or merely expected it. While previous work has shown that familiarity with an attended talker benefits speech comprehension, our findings indicate that familiarity with an ignored talker disrupts working memory for target speech. The absence of this effect in family members and friends suggests that the degree of familiarity modulates the memory disruption.


Subjects
Acoustic Stimulation/psychology, Short-Term Memory/physiology, Recognition (Psychology)/physiology, Speech Perception/physiology, Task Performance and Analysis, Adolescent, Adult, Aged, Comprehension, Female, Hearing, Humans, Language, Male, Middle Aged, Voice, Young Adult
12.
Cortex ; 117: 122-134, 2019 08.
Article in English | MEDLINE | ID: mdl-30974320

ABSTRACT

Speech prosody, the variation in sentence melody and rhythm, plays a crucial role in sentence comprehension. Specifically, changes in intonational pitch along a sentence can affect our understanding of who did what to whom. To date, it remains unclear how the brain processes this particular use of intonation and which brain regions are involved. In particular, one central matter of debate concerns the lateralisation of intonation processing. To study the role of intonation in sentence comprehension, we designed a functional magnetic resonance imaging (MRI) experiment in which participants listened to spoken sentences. Critically, the interpretation of these sentences depended on either intonational or grammatical cues. Our results showed stronger functional activity in the left inferior frontal gyrus (IFG) when the intonational cue was crucial for sentence comprehension compared to when it was not. When instead a grammatical cue was crucial for sentence comprehension, we found involvement of an overlapping region in the left IFG, as well as in a posterior temporal region. A further analysis revealed that the lateralisation of intonation processing depends on its role in syntactic processing: activity in the IFG was lateralised to the left hemisphere when intonation was the only source of information to comprehend the sentence. In contrast, activity in the IFG was right-lateralised when intonation did not contribute to sentence comprehension. Together, these results emphasise the key role of the left IFG in sentence comprehension, showing the importance of this region when intonation establishes sentence structure. Furthermore, our results provide evidence for the theory that the lateralisation of prosodic processing is modulated by its linguistic role.


Subjects
Comprehension/physiology, Functional Laterality/physiology, Prefrontal Cortex/diagnostic imaging, Speech Perception/physiology, Adult, Female, Functional Neuroimaging, Humans, Language, Magnetic Resonance Imaging, Male, Young Adult
13.
Front Psychol ; 8: 1584, 2017.
Article in English | MEDLINE | ID: mdl-28959226

ABSTRACT

Previous studies have shown that listeners are better able to understand speech when they are familiar with the talker's voice. In most of these studies, talker familiarity was ensured by explicit voice training; that is, listeners learned to identify the familiar talkers. In the real world, however, the characteristics of familiar talkers are learned incidentally, through communication. The present study investigated whether speech comprehension benefits from implicit voice training; that is, through exposure to talkers' voices without listeners explicitly trying to identify them. During four training sessions, listeners heard short sentences containing a single verb (e.g., "he writes"), spoken by one talker. The sentences were mixed with noise, and listeners identified the verb within each sentence while their speech-reception thresholds (SRT) were measured. In a final test session, listeners performed the same task, but this time they heard different sentences spoken by the familiar talker and three unfamiliar talkers. Familiar and unfamiliar talkers were counterbalanced across listeners. Half of the listeners performed a test session in which the four talkers were presented in separate blocks (blocked paradigm). For the other half, talkers varied randomly from trial to trial (interleaved paradigm). The results showed that listeners had lower SRT when the speech was produced by the familiar talker than the unfamiliar talkers. The type of talker presentation (blocked vs. interleaved) had no effect on this familiarity benefit. These findings suggest that listeners implicitly learn talker-specific information during a speech-comprehension task, and exploit this information to improve the comprehension of novel speech material from familiar talkers.

14.
Front Hum Neurosci ; 10: 551, 2016.
Article in English | MEDLINE | ID: mdl-27877120

ABSTRACT

Sensitivity to regularities plays a crucial role in the acquisition of various linguistic features from spoken language input. Artificial grammar learning paradigms explore pattern recognition abilities in a set of structured sequences (i.e., of syllables or letters). In the present study, we investigated the functional underpinnings of learning phonological regularities in auditorily presented syllable sequences. While previous neuroimaging studies either focused on functional differences between the processing of correct vs. incorrect sequences or between different levels of sequence complexity, here the focus is on the neural foundation of the actual learning success. During functional magnetic resonance imaging (fMRI), participants were exposed to a set of syllable sequences with an underlying phonological rule system, known to ensure performance differences between participants. We expected that successful learning and rule application would require phonological segmentation and phoneme comparison. As an outcome of four alternating learning and test fMRI sessions, participants split into successful learners and non-learners. Relative to non-learners, successful learners showed increased task-related activity in a fronto-parietal network of brain areas encompassing the left lateral premotor cortex as well as bilateral superior and inferior parietal cortices during both learning and rule application. These areas were previously associated with phonological segmentation, phoneme comparison, and verbal working memory. Based on these activity patterns and the phonological strategies for rule acquisition and application, we argue that successful learning and processing of complex phonological rules in our paradigm is mediated via a fronto-parietal network for phonological processes.

15.
Curr Biol ; 24(19): 2348-53, 2014 Oct 06.
Article in English | MEDLINE | ID: mdl-25264258

ABSTRACT

Recognizing other individuals is an essential skill in humans and in other species. Over the last decade, it has become increasingly clear that person-identity recognition abilities are highly variable. Roughly 2% of the population has developmental prosopagnosia, a congenital deficit in recognizing others by their faces. It is currently unclear whether developmental phonagnosia, a deficit in recognizing others by their voices, is equally prevalent, or even whether it actually exists. Here, we aimed to identify cases of developmental phonagnosia. We collected more than 1,000 data sets from self-selected German individuals by using a web-based screening test that was designed to assess their voice-recognition abilities. We then examined potentially phonagnosic individuals by using a comprehensive laboratory test battery. We found two novel cases of phonagnosia: AS, a 32-year-old female, and SP, a 32-year-old male; both are otherwise healthy academics, have normal hearing, and show no pathological abnormalities in brain structure. The two cases have comparable patterns of impairments: both performed at least 2 SDs below the level of matched controls on tests that required learning new voices, judging the familiarity of famous voices, and discriminating pitch differences between voices. In both cases, only voice-identity processing per se was affected: face recognition, speech intelligibility, emotion recognition, and musical ability were all comparable to controls. The findings confirm the existence of developmental phonagnosia as a modality-specific impairment and allow a first rough prevalence estimate.


Subjects
Agnosia/diagnosis, Auditory Perception, Voice, Adolescent, Adult, Aged, Agnosia/epidemiology, Agnosia/genetics, Agnosia/pathology, Female, Germany/epidemiology, Humans, Male, Middle Aged, Recognition (Psychology), Young Adult