Results 1 - 6 of 6
1.
Neuroimage; 263: 119647, 2022 Nov.
Article in English | MEDLINE | ID: mdl-36162634

ABSTRACT

Recognising a speaker's identity by the sound of their voice is important for successful interaction. This skill depends on our ability to discriminate minute variations in the acoustics of the vocal signal. Performance on voice identity assessments varies widely across the population. The neural underpinnings of this ability and its individual differences, however, remain poorly understood. Here we provide critical tests of a theoretical framework for the neural processing stages of voice identity and address how individual differences in identity discrimination mediate activation in this neural network. We scanned 40 individuals on an fMRI adaptation task involving voices drawn from morphed continua between two personally familiar identities. Analyses dissociated neuronal effects induced by repetition of acoustically similar morphs from those induced by a switch in perceived identity. Activation in temporal voice-sensitive areas decreased with acoustic similarity between consecutive stimuli. This repetition suppression effect was mediated by performance on an independent voice assessment, highlighting an important functional role of adaptive coding in voice expertise. Bilateral anterior insulae and medial frontal gyri responded to a switch in perceived voice identity compared to an acoustically equidistant switch within identity. Our results support a multistep model of voice identity perception.
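
As an illustration of the individual-differences logic described above (not the authors' actual pipeline), the Python sketch below estimates a per-subject repetition-suppression slope, i.e. how strongly a hypothetical ROI response decreases with the acoustic similarity of consecutive stimuli, and then correlates those slopes with scores from an independent voice assessment. All arrays, values, and variable names are hypothetical placeholders.

import numpy as np
from scipy import stats

# Hypothetical data, not the study's: one row per subject.
# similarity[s, t]: acoustic similarity between trial t and the preceding morph
# bold[s, t]: response estimate in a temporal voice-sensitive ROI for trial t
# voice_score[s]: score on the independent voice assessment
rng = np.random.default_rng(0)
n_subjects, n_trials = 40, 120
similarity = rng.uniform(0, 1, size=(n_subjects, n_trials))
voice_score = rng.normal(size=n_subjects)
bold = -0.5 * similarity + rng.normal(scale=1.0, size=(n_subjects, n_trials))

# Per-subject repetition suppression: slope of ROI response on acoustic similarity
# (more negative = stronger suppression for acoustically similar repetitions).
slopes = np.array([
    stats.linregress(similarity[s], bold[s]).slope for s in range(n_subjects)
])

# Across subjects: does voice-assessment performance relate to suppression strength?
r, p = stats.pearsonr(voice_score, slopes)
print(f"suppression slope vs. behaviour: r = {r:.2f}, p = {p:.3f}")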


Subject(s)
Acoustics, Central Auditory Diseases, Cognition, Voice Recognition, Humans, Acoustic Stimulation, Cognition/physiology, Magnetic Resonance Imaging, Prefrontal Cortex/physiology, Voice Recognition/physiology, Central Auditory Diseases/physiopathology, Male, Female, Adolescent, Young Adult, Adult, Nerve Net/physiology
2.
Behav Res Methods; 50(6): 2184-2192, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29124718

ABSTRACT

Recognising the identity of conspecifics is an important yet highly variable skill. Approximately 2% of the population suffers from a socially debilitating deficit in face recognition. More recently, the existence of a similar deficit in voice perception (phonagnosia) has emerged. Face perception tests have been readily available for years, advancing our understanding of the mechanisms underlying face perception. In contrast, voice perception has received less attention, and the construction of standardized voice perception tests has been neglected. Here we report the construction of the first standardized test of voice perception ability. Participants make a same/different identity decision after hearing two voice samples. Item Response Theory guided item selection to ensure that the test discriminates across a range of abilities. The test provides a starting point for the systematic exploration of the cognitive and neural mechanisms underlying voice perception. With high test-retest reliability (r = .86) and a short assessment duration (~10 min), the test examines individual abilities reliably and quickly, and therefore also has potential for use in developmental and neuropsychological populations.
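
The sketch below illustrates, on entirely hypothetical data, the kind of psychometric checks mentioned in the abstract: a corrected item-total correlation as a rough stand-in for IRT-based discrimination screening, and a Pearson correlation for test-retest reliability. The item counts, cut-off, and simulated retest are placeholders and do not reproduce the published test construction.

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical binary responses (1 = correct): 60 participants x 80 candidate items,
# generated from a simple 1PL-style model so that items carry real signal.
ability = rng.normal(size=(60, 1))
difficulty = rng.normal(size=(1, 80))
p = 1 / (1 + np.exp(-(ability - difficulty)))
responses = (rng.random((60, 80)) < p).astype(int)

# Rough stand-in for IRT discrimination: corrected item-total correlation.
# Items with low values would be candidates for removal from the final test.
total = responses.sum(axis=1)
discrimination = np.array([
    stats.pointbiserialr(responses[:, i], total - responses[:, i])[0]
    for i in range(responses.shape[1])
])
keep = discrimination > 0.2          # arbitrary cut-off, for illustration only

# Test-retest reliability: correlate summed scores on the kept items across sessions.
session1 = responses[:, keep].sum(axis=1)
session2 = session1 + rng.integers(-3, 4, size=60)   # stand-in for a retest session
r, _ = stats.pearsonr(session1, session2)
print(f"{keep.sum()} items retained, test-retest r = {r:.2f}")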


Subject(s)
Agnosia/diagnosis, Speech Perception, Female, Humans, Male, Predictive Value of Tests, Reproducibility of Results, Young Adult
3.
Acta Neurochir (Wien); 156(6): 1237-43, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24150189

ABSTRACT

BACKGROUND: Brain tumor surgery involves considerable technical and personnel effort. The required interactions between the surgeon and technical components such as neuronavigation, surgical instruments and intraoperative imaging are complex and demand innovative training solutions and standardized evaluation methods. Phantom-based training systems could usefully complement existing surgical education and training. METHODS: A prototype of a phantom-based training system was developed, intended for standardized training of important aspects of brain tumor surgery based on real patient data. The head phantom is a three-part construction comprising a reusable base and adapter as well as an exchangeable single-use module. Training covers surgical planning of the optimal access path, setup of the navigation system including registration of the head phantom, and navigated craniotomy with real instruments. Instrument tracking during the simulation and predefined access paths form the basis for objective training feedback. RESULTS: The prototype was evaluated in a pilot study by assistant physicians at different levels of training, who performed a complete simulation and a final assessment using an evaluation questionnaire. Analysis of the questionnaire rated the phantom construction and the materials used as "good". The learning effect for navigated planning was rated "very good", as was its effect of increasing safety for the surgeon before planning and performing craniotomies independently on patients. CONCLUSIONS: The training system represents a promising approach for the future training of neurosurgeons. It aims to improve surgical skill training by providing a more realistic simulation in a risk-free environment. Hence, it could help to bridge the gap between theoretical and practical training, with the potential to benefit both physicians and patients.


Asunto(s)
Neoplasias Encefálicas/cirugía , Maniquíes , Neuronavegación/educación , Neurocirugia/educación , Cirugía Asistida por Computador/educación , Ecoencefalografía , Humanos , Imagen por Resonancia Magnética , Modelos Anatómicos , Proyectos Piloto , Programas Informáticos
4.
Cognition; 210: 104582, 2021 May.
Article in English | MEDLINE | ID: mdl-33450447

ABSTRACT

There are remarkable individual differences in the ability to recognise individuals by the sound of their voice. Theoretically, this ability is thought to depend on the coding accuracy of voices in a low-dimensional "voice-space". Here we were interested in how adaptive coding of voice identity relates to this variability in skill. In two adaptation experiments we explored, first, whether the size of the aftereffect to two familiar vocal identities can predict voice perception ability and, second, whether this effect stems from general auditory skill (e.g., discrimination ability for tuning and tempo). Experiment 1 demonstrated that contrastive aftereffect sizes for voice identity predicted voice perception ability. In Experiment 2, we replicated this finding and further established that the effect is unrelated to general auditory abilities or to the general adaptability of listeners. Our results highlight the important functional role of adaptive coding in voice expertise and suggest that human voice perception is a highly specialised and distinct auditory ability.
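
A minimal sketch of the analysis idea, assuming hypothetical data rather than the published measures: compute each listener's contrastive aftereffect as the difference in "identity B" responses after adapting to voice A versus voice B, then correlate that aftereffect size with an independent measure of voice perception ability.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n_listeners = 36

# Hypothetical proportions of "identity B" responses to ambiguous morphs,
# measured after adapting to voice A and after adapting to voice B.
p_b_after_adapt_a = rng.uniform(0.5, 0.9, n_listeners)  # contrastive shift towards B
p_b_after_adapt_b = rng.uniform(0.1, 0.5, n_listeners)  # contrastive shift towards A

# Contrastive aftereffect size: difference between the two adaptation conditions.
aftereffect = p_b_after_adapt_a - p_b_after_adapt_b

# Hypothetical voice perception ability scores from an independent test.
ability = 0.5 * aftereffect + rng.normal(scale=0.1, size=n_listeners)

# Does a larger aftereffect go with better voice perception ability?
r, p = stats.pearsonr(aftereffect, ability)
print(f"aftereffect size vs. ability: r = {r:.2f}, p = {p:.3f}")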


Subject(s)
Individuality, Voice, Physiological Adaptation, Auditory Perception, Humans, Perception
5.
Q J Exp Psychol (Hove); 72(7): 1657-1666, 2019 Jul.
Article in English | MEDLINE | ID: mdl-30269658

ABSTRACT

Recent models of voice perception propose a hierarchy of steps leading from a more general, "low-level" acoustic analysis of the voice signal to a voice-specific, "higher-level" analysis. We aimed to engage two of these stages: first, a more general detection task in which voices had to be identified amid environmental sounds, and, second, a more voice-specific task requiring a same/different decision about unfamiliar speaker pairs (Bangor Voice Matching Test [BVMT]). We explored how vulnerable voice recognition is to interfering distractor voices, and whether performance on the aforementioned tasks could predict resistance to such interference. In addition, we manipulated the similarity of distractor voices to explore the impact of distractor similarity on recognition accuracy. We found moderate correlations between voice detection ability and resistance to distraction (r = .44), and between the BVMT and resistance to distraction (r = .57). A hierarchical regression revealed both tasks as significant predictors of the ability to tolerate distractors (R² = .36). The first stage of the regression (BVMT as sole predictor) already explained 32% of the variance. Descriptively, the "higher-level" BVMT was a better predictor (β = .47) than the more general detection task (β = .25), although further analysis revealed no significant difference between the two beta weights. Furthermore, distractor similarity did not affect performance on the distractor task. Overall, our findings suggest that specific stages of the voice perception process can be targeted selectively. This could help to explore different stages of voice perception and their contributions to specific auditory abilities, possibly also in forensic and clinical settings.
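
As a sketch of the hierarchical regression described above, using statsmodels and entirely hypothetical data: the first step enters the BVMT score alone, the second step adds the detection task, and the gain in R² reflects the detection task's additional contribution. Variable names and simulated effect sizes are placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 70

# Hypothetical z-scored task scores; none of these values come from the study.
df = pd.DataFrame({
    "bvmt": rng.normal(size=n),        # voice matching (BVMT) score
    "detection": rng.normal(size=n),   # voice detection task score
})
df["distraction"] = 0.5 * df["bvmt"] + 0.25 * df["detection"] + rng.normal(scale=0.8, size=n)

# Step 1: BVMT as the sole predictor of resistance to distraction.
m1 = sm.OLS(df["distraction"], sm.add_constant(df[["bvmt"]])).fit()

# Step 2: add the detection task; the gain in R-squared is its added contribution.
m2 = sm.OLS(df["distraction"], sm.add_constant(df[["bvmt", "detection"]])).fit()

print(f"step 1 R2 = {m1.rsquared:.2f}")
print(f"step 2 R2 = {m2.rsquared:.2f} (delta = {m2.rsquared - m1.rsquared:.2f})")
print(m2.params)   # fitted coefficients for intercept, bvmt, detection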


Asunto(s)
Percepción Auditiva , Fonación , Habla , Voz , Estimulación Acústica , Adulto , Femenino , Humanos , Masculino , Medición de la Producción del Habla
6.
PLoS One; 10(11): e0143151, 2015.
Article in English | MEDLINE | ID: mdl-26588847

ABSTRACT

Recognition of personally familiar voices benefits from the concurrent presentation of the corresponding speakers' faces. This audiovisual-integration effect is most pronounced for voices combined with dynamic articulating faces. However, it is unclear whether learning unfamiliar voices also benefits from audiovisual face-voice integration or is instead hampered by the attentional capture of faces, i.e., "face overshadowing". In six study-test cycles we compared the recognition of newly learned voices following unimodal voice learning versus bimodal face-voice learning with either static (Exp. 1) or dynamic articulating faces (Exp. 2). Voice recognition accuracy increased significantly for bimodal learning across study-test cycles while remaining stable for unimodal learning, reflected in numerical costs of bimodal relative to unimodal voice learning in the first two study-test cycles and benefits in the last two cycles. This pattern was independent of whether faces were static images (Exp. 1) or dynamic videos (Exp. 2). In both experiments, slower reaction times to voices previously studied with faces, compared with voices studied alone, may result from visual search for faces during memory retrieval. A general decrease in reaction times across study-test cycles suggests facilitated recognition with more speaker repetitions. Overall, our data suggest two simultaneous and opposing mechanisms during bimodal face-voice learning: while attentional capture of faces may initially impede voice learning, audiovisual integration may facilitate it thereafter.
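
The cost-then-benefit pattern across study-test cycles can be summarised with a simple aggregation, sketched below on simulated trial-level data (condition labels, cycle counts, and accuracy values are hypothetical): mean accuracy per cycle and learning condition, plus the bimodal-minus-unimodal difference.

import numpy as np
import pandas as pd

rng = np.random.default_rng(4)

# Hypothetical trial-level data: 6 study-test cycles x 2 learning conditions x 50 trials.
cycles = np.repeat(np.arange(1, 7), 2 * 50)
condition = np.tile(np.repeat(np.array(["voice_only", "face_voice"]), 50), 6)
# Simulated hit probabilities: bimodal learning starts lower but improves across cycles.
p_correct = np.where(
    condition == "face_voice",
    0.55 + 0.05 * (cycles - 1),
    0.68 + 0.005 * (cycles - 1),
)
correct = rng.random(len(cycles)) < p_correct

df = pd.DataFrame({"cycle": cycles, "condition": condition, "correct": correct})

# Mean recognition accuracy per study-test cycle and learning condition.
acc = df.groupby(["cycle", "condition"])["correct"].mean().unstack()

# Bimodal minus unimodal: negative = face-overshadowing cost, positive = benefit.
acc["face_voice_minus_voice_only"] = acc["face_voice"] - acc["voice_only"]
print(acc.round(2))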


Subject(s)
Mental Recall/physiology, Physiological Pattern Recognition/physiology, Visual Pattern Recognition/physiology, Recognition (Psychology), Speech Perception/physiology, Acoustic Stimulation, Adult, Attention/physiology, Discrimination Learning/physiology, Face, Female, Humans, Male, Photic Stimulation, Reaction Time, Time Factors, Voice