Results 1 - 20 of 175
1.
Appl Psychophysiol Biofeedback; 49(3): 457-471, 2024 Sep.
Article in English | MEDLINE | ID: mdl-38739182

ABSTRACT

Neurofeedback training (NFT) is a promising adjuvant intervention method. The desynchronization of the mu rhythm (8-13 Hz) in the electroencephalogram (EEG) over centro-parietal areas is known as a valid indicator of mirror neuron system (MNS) activation, which has been associated with social skills. Still, the effect of neurofeedback training on the MNS remains to be thoroughly investigated. The present study examined the possible impact of NFT with a mu suppression training protocol, encompassing 15 NFT sessions (45 min each), on 16 healthy neurotypical participants. In separate pre- and post-training sessions, 64-channel EEG was recorded while participants (1) observed videos with various types of movements (including complex goal-directed hand movements and social interaction scenes) and (2) performed the "Reading the Mind in the Eyes Test" (RMET). EEG source reconstruction analysis revealed statistically significant mu suppression during hand movement observation across MNS-attributed fronto-parietal areas after NFT. The frequency analysis showed no significant mu suppression after NFT, although numerical mu suppression was visible in a majority of participants during goal-directed hand movement observation. At the behavioral level, RMET accuracy scores did not suggest an effect of NFT on the ability to interpret subtle emotional expressions, although RMET response times were reduced after NFT. In conclusion, the present study provided preliminary and partial evidence that mu suppression NFT can induce mu suppression in MNS-attributed areas. More powerful experimental designs and longer training may be necessary to induce substantial and consistent mu suppression, particularly during the observation of social scenarios.


Subject(s)
Electroencephalography; Mirror Neurons; Neurofeedback; Humans; Mirror Neurons/physiology; Pilot Projects; Neurofeedback/methods; Male; Female; Adult; Young Adult; Brain Waves/physiology
2.
Cogn Emot; 37(4): 731-747, 2023.
Article in English | MEDLINE | ID: mdl-37104118

ABSTRACT

Research into voice perception benefits from manipulation software that affords experimental control over the acoustic expression of social signals such as vocal emotions. Today, parameter-specific voice morphing allows precise control of the emotional quality expressed by single vocal parameters, such as fundamental frequency (F0) and timbre. However, potential side effects, in particular reduced naturalness, could limit the ecological validity of speech stimuli. To address this for the domain of emotion perception, we collected ratings of perceived naturalness and emotionality on voice morphs expressing different emotions through either F0 or Timbre only. In two experiments, we compared two different morphing approaches, using either neutral voices or emotional averages as emotionally non-informative reference stimuli. As expected, parameter-specific voice morphing reduced perceived naturalness. However, perceived naturalness of F0 and Timbre morphs was comparable when averaged emotions served as the reference, potentially making this approach more suitable for future research. Crucially, there was no relationship between ratings of emotionality and naturalness, suggesting that the perception of emotion was not substantially affected by a reduction of voice naturalness. While these findings advocate parameter-specific voice morphing as a suitable tool for research on vocal emotion perception, we hold that great care should be taken to produce ecologically valid stimuli.


Subject(s)
Speech Perception; Voice; Humans; Emotions
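The parameter-specific morphing described above can be illustrated with a minimal sketch: a single acoustic parameter (here a hypothetical F0 contour, in Hz) is linearly interpolated between an emotionally non-informative reference and an emotional target, while all other parameters would be held at the reference level. The contour values and weights below are illustrative, not taken from the study.

```python
import numpy as np

def morph_parameter(ref, target, weight):
    """Linearly interpolate one acoustic parameter between a
    non-informative reference contour and an emotional target contour.
    weight = 0.0 reproduces the reference, 1.0 the full emotional value."""
    ref = np.asarray(ref, dtype=float)
    target = np.asarray(target, dtype=float)
    return (1.0 - weight) * ref + weight * target

# Illustrative F0 contours (Hz): neutral reference vs. a 'happy' target.
neutral_f0 = np.array([200.0, 210.0, 205.0])
happy_f0 = np.array([260.0, 280.0, 240.0])
half_morph = morph_parameter(neutral_f0, happy_f0, 0.5)  # F0-only morph at 50%
```

Real morphing tools such as TANDEM-STRAIGHT interpolate frame-aligned spectral representations rather than raw contours; this sketch only conveys the idea of varying one parameter while keeping others non-informative.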
3.
Behav Res Methods; 2023 Oct 11.
Article in English | MEDLINE | ID: mdl-37821750

ABSTRACT

We describe JAVMEPS, an audiovisual (AV) database for emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistically induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords) and (C2) with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant (CI) users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing individuals showed good classification performance (McorrAV = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above the chance level of .14 for auditory-only stimuli, with best rates for surprise (.31) and anger (.30). We anticipate JAVMEPS to become a useful open resource for research into auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also help fill a gap in research on the dynamic audiovisual integration of emotion perception via behavioral or neurophysiological recordings.

4.
Behav Res Methods; 55(3): 1352-1371, 2023 Apr.
Article in English | MEDLINE | ID: mdl-35648317

ABSTRACT

The ability to recognize someone's voice spans a broad spectrum, with phonagnosia at the low end and super-recognition at the high end. Yet there is no standardized test to measure an individual's ability to learn and recognize newly learned voices from samples with speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 22-min test based on item response theory and applicable across languages. The JVLMT consists of three phases in which participants (1) become familiarized with eight speakers, (2) revise the learned voices, and (3) perform a 3AFC recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with various levels of difficulty. Test scores are based on 22 items which were selected and validated in two online studies with 232 and 454 participants, respectively. Mean accuracy in the JVLMT is 0.51 (SD = .18) with an empirical (marginal) reliability of 0.66. Correlational analyses showed high and moderate convergent validity with the Bangor Voice Matching Test (BVMT) and the Glasgow Voice Memory Test (GVMT), respectively, and high discriminant validity with a digit span test. Four participants with potential super-recognition abilities and seven with potential phonagnosia were identified, performing at least 2 SDs above or below the mean, respectively. The JVLMT is a promising research and diagnostic screening tool to detect both impairments in voice recognition and super-recognition abilities.


Subject(s)
Speech Perception; Voice; Humans; Reproducibility of Results; Voice/physiology; Speech; Learning/physiology; Recognition, Psychology/physiology; Speech Perception/physiology
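The item response theory underpinning the JVLMT can be sketched with the standard two-parameter logistic (2PL) model, under which items are characterized by difficulty and discrimination; the parameter values below are illustrative, not the test's calibrated values.

```python
import math

def p_correct_2pl(theta, a, b):
    """2PL item response function: probability of a correct response
    for ability theta, item discrimination a, and item difficulty b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# An examinee whose ability equals the item difficulty responds
# correctly with probability 0.5, regardless of discrimination.
p_at_difficulty = p_correct_2pl(theta=0.0, a=1.5, b=0.0)
```

Because the JVLMT uses a 3AFC task, chance performance is 1/3, so a fuller account might add a guessing parameter c = 1/3 (the 3PL model); the 2PL form is shown for simplicity.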
5.
Cogn Neuropsychol; 39(3-4): 196-207, 2022.
Article in English | MEDLINE | ID: mdl-36202621

ABSTRACT

Most findings to date suggest preserved voice recognition in prosopagnosia (except in cases with bilateral lesions). Here we report a follow-up examination of M.T., who suffers from acquired prosopagnosia following a large unilateral right-hemispheric lesion in frontal, parietal, and anterior temporal areas excluding core ventral occipitotemporal face areas. Twenty-three years after initial testing, we reassessed face and object recognition skills [Henke, K., Schweinberger, S. R., Grigo, A., Klos, T., & Sommer, W. (1998). Specificity of face recognition: Recognition of exemplars of non-face objects in prosopagnosia. Cortex, 34(2), 289-296; Schweinberger, S. R., Klos, T., & Sommer, W. (1995). Covert face recognition in prosopagnosia: A dissociable function? Cortex, 31(3), 517-529] and additionally studied voice recognition. Confirming the persistence of deficits, M.T. exhibited substantial impairments in famous face recognition and memory for learned faces, but preserved face matching and object recognition skills. Critically, he showed substantially impaired voice recognition skills. These findings are consistent with the ideas that (i) prosopagnosia after right anterior temporal lesions can persist over periods of more than 20 years, and that (ii) such lesions can be associated with both facial and vocal deficits in person recognition.


Subject(s)
Prosopagnosia; Stroke; Follow-Up Studies; Humans; Magnetic Resonance Imaging; Prosopagnosia/pathology; Temporal Lobe
6.
Ear Hear; 43(4): 1178-1188, 2022.
Article in English | MEDLINE | ID: mdl-34999594

ABSTRACT

OBJECTIVES: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. DESIGN: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. RESULTS: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. CONCLUSIONS: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.


Subject(s)
Cochlear Implantation; Cochlear Implants; Music; Speech Perception; Acoustic Stimulation; Auditory Perception; Emotions; Humans; Quality of Life
7.
Sensors (Basel); 22(19), 2022 Oct 06.
Article in English | MEDLINE | ID: mdl-36236658

ABSTRACT

Vocal emotion recognition (VER) in natural speech, often referred to as speech emotion recognition (SER), remains challenging for both humans and computers. Applied fields including clinical diagnosis and intervention, social interaction research, and human-computer interaction (HCI) increasingly benefit from efficient VER algorithms. Various feature sets have been used with machine-learning (ML) algorithms for discrete emotion classification, but there is no consensus on which low-level descriptors and classifiers are optimal. Therefore, we aimed to compare the performance of ML algorithms across several different feature sets. Concretely, seven ML algorithms were compared on the Berlin Database of Emotional Speech: Multilayer Perceptron Neural Network (MLP), J48 Decision Tree (DT), Support Vector Machine with Sequential Minimal Optimization (SMO), Random Forest (RF), k-Nearest Neighbor (KNN), Simple Logistic Regression (LOG), and Multinomial Logistic Regression (MLR), with 10-fold cross-validation, using four openSMILE feature sets (i.e., IS-09, emobase, GeMAPS, and eGeMAPS). Results indicated that SMO, MLP, and LOG show better performance (reaching accuracies of 87.85%, 84.00%, and 83.74%, respectively) compared to RF, DT, MLR, and KNN (with minimum accuracies of 73.46%, 53.08%, 70.65%, and 58.69%, respectively). Overall, the emobase feature set performed best. We discuss the implications of these findings for applications in diagnosis, intervention, and HCI.


Subject(s)
Machine Learning; Speech; Algorithms; Emotions; Humans; Neural Networks, Computer; Support Vector Machine
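The evaluation pipeline described above (feature sets × classifiers under 10-fold cross-validation) can be sketched with a toy stand-in: random features replace openSMILE descriptors, and a plain k-nearest-neighbor classifier stands in for the seven algorithms compared; nothing here reproduces the reported accuracies.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for openSMILE descriptors: 100 clips x 8 features, 4 emotions.
X = rng.normal(size=(100, 8))
y = np.repeat(np.arange(4), 25)
X += y[:, None] * 0.8  # shift class means so classes are separable

def knn_predict(X_train, y_train, X_test, k=5):
    """Plain k-nearest-neighbor classification (majority vote)."""
    d = ((X_test[:, None, :] - X_train[None, :, :]) ** 2).sum(axis=-1)
    nearest = np.argsort(d, axis=1)[:, :k]
    return np.array([np.bincount(y_train[row]).argmax() for row in nearest])

# 10-fold cross-validation, as in the evaluation described above.
folds = np.array_split(rng.permutation(len(y)), 10)
accuracies = []
for test_idx in folds:
    train_mask = np.ones(len(y), dtype=bool)
    train_mask[test_idx] = False
    preds = knn_predict(X[train_mask], y[train_mask], X[test_idx])
    accuracies.append(float((preds == y[test_idx]).mean()))
mean_accuracy = float(np.mean(accuracies))
```

In practice, toolkits such as Weka (used for SMO, J48, etc.) or scikit-learn would supply the actual classifiers; the point here is only the fold structure: each fold serves once as a held-out test set, and accuracies are averaged across folds.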
8.
Psychol Res; 84(6): 1485-1494, 2020 Sep.
Article in English | MEDLINE | ID: mdl-30864002

ABSTRACT

The use of signs as a major means of communication affects other functions, such as spatial processing. Intriguingly, this is true even for functions less obviously linked to language processing: signers outperform non-signers in face recognition tasks, potentially as a result of a lifelong focus on the mouth region for speechreading. Against this background, we hypothesized that the processing of emotional faces is altered in persons who mostly use signs for communication (henceforth, deaf signers). Whereas the mouth region is more crucial for recognizing happiness, the eye region matters more for recognizing anger. Using morphed faces, we created facial composites in which either the upper or lower half of an emotional face was kept neutral while the other half varied in the intensity of the expressed emotion, being either happy or angry. As expected, deaf signers were more accurate at recognizing happy faces than non-signers. The reverse effect was found for angry faces. These differences between groups were most pronounced for facial expressions of low intensity. We conclude that the lifelong focus on the mouth region in deaf signers leads to more sensitive processing of happy faces, especially when expressions are relatively subtle.


Subject(s)
Deafness/psychology; Facial Expression; Recognition, Psychology; Sign Language; Emotions; Female; Humans; Male; Middle Aged
9.
Behav Res Methods; 52(3): 990-1007, 2020 Jun.
Article in English | MEDLINE | ID: mdl-31637667

ABSTRACT

Here we describe the Jena Speaker Set (JESS), a free database of unfamiliar adult voice stimuli comprising voices from 61 young (18-25 years) and 59 old (60-81 years) female and male speakers uttering various sentences, syllables, read text, semi-spontaneous speech, and vowels. Listeners rated two voice samples (short sentences) per speaker for attractiveness, likeability, two measures of distinctiveness ("deviation"-based [DEV] and "voice in the crowd"-based [VITC]), regional accent, and age. Interrater reliability was high, with Cronbach's α between .82 and .99. Young voices were generally rated as more attractive than old voices, but particularly so when male listeners judged female voices. Moreover, young female voices were rated as more likeable than both young male and old female voices. Young voices were judged to be less distinctive than old voices according to the DEV measure, with no differences on the VITC measure. In age ratings, listeners almost perfectly discriminated young from old voices; additionally, young female voices were perceived as younger than young male voices. Correlations between the rating dimensions demonstrated (among other things) that DEV-based distinctiveness was strongly negatively correlated with rated attractiveness and likeability. By contrast, VITC-based distinctiveness was uncorrelated with rated attractiveness and likeability in young voices, although a moderate negative correlation was observed for old voices. Overall, the present results demonstrate systematic effects of vocal age and gender on voice-based impressions and inform the selection of suitable voice stimuli for further research into voice perception, learning, and memory.


Subject(s)
Speech Perception; Voice; Adolescent; Adult; Aged; Aged, 80 and over; Female; Humans; Male; Memory; Middle Aged; Reproducibility of Results; Speech; Young Adult
10.
J Vis; 19(5): 17, 2019 May 01.
Article in English | MEDLINE | ID: mdl-31100133

ABSTRACT

The continuous flash suppression (CFS) task can be used to investigate what limits our capacity to become aware of visual stimuli. In this task, a stream of rapidly changing mask images presented to one eye initially suppresses awareness of a static target image presented to the other eye. Several factors may determine the breakthrough time from mask suppression, one of which is the overlap in the representation of the target/mask categories in higher visual cortex. This hypothesis is based on certain object categories (e.g., faces) being more effective in blocking awareness of other categories (e.g., buildings) than other combinations (e.g., cars/chairs). Previous work found mask effectiveness to be correlated with the high-level representational similarity of the category pair. As the cortical representations of hands and tools overlap, these categories are ideal for testing this hypothesis further, as well as for examining alternative explanations. For our CFS experiments, we predicted longer breakthrough times for hands/tools compared to other pairs due to the reported cortical overlap. In contrast, across three experiments, participants were generally faster at detecting targets masked by hands or tools compared to other mask categories. Exploring low-level explanations, we found that the category average for edges (e.g., hands have less detail compared to cars) was the best predictor for the data. This low-level bottleneck could not completely account for the specific category patterns and the hand/tool effects, suggesting that object category-specific limits occur at several levels. Given these findings, it is important that low-level bottlenecks for visual awareness are considered when testing higher-level hypotheses.


Subject(s)
Awareness/physiology; Form Perception/physiology; Hand/physiology; Visual Cortex/physiology; Adolescent; Adult; Female; Humans; Male; Photic Stimulation/methods; Young Adult
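The low-level "edge" predictor discussed above can be approximated with a simple gradient-based measure. The function below is an illustrative stand-in (not the study's actual metric) for comparing average edge content across mask-category images.

```python
import numpy as np

def edge_density(img):
    """Mean gradient magnitude of a grayscale image (values in [0, 1]),
    a crude proxy for the 'category average for edges' predictor."""
    gy, gx = np.gradient(img.astype(float))
    return float(np.hypot(gx, gy).mean())

# Illustrative comparison: a uniform image has zero edge density,
# while an image with a luminance step does not.
flat = np.zeros((32, 32))
step = np.hstack([np.zeros((32, 16)), np.ones((32, 16))])
```

Averaging such a measure over many exemplars per category would yield one edge score per category, which could then be related to breakthrough times, in the spirit of the low-level analysis described above.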
11.
Sensors (Basel); 19(21), 2019 Nov 04.
Article in English | MEDLINE | ID: mdl-31689906

ABSTRACT

The prevalence of autism spectrum disorders (ASD) has increased strongly over the past decades, and so has the demand for adequate behavioral assessment and support for persons affected by ASD. Here we provide a review of original research that used sensor technology for objective assessment of social behavior, either to assist the assessment of autism or to support intervention for people with autism. Considering rapid technological progress, we focus (1) on studies published within the last 10 years (2009-2019), (2) on contact- and irritation-free sensor technology that does not constrain natural movement and interaction, and (3) on sensory input from the face, the voice, or body movements. We conclude that sensor technology has already demonstrated great potential for improving both behavioral assessment and interventions in autism spectrum disorders. We also discuss selected examples of recent theoretical questions related to the understanding of psychological changes and potentials in autism. In addition to its applied potential, we argue that sensor technology, when implemented by appropriate interdisciplinary teams, may even contribute to such theoretical issues in understanding autism.


Subject(s)
Autism Spectrum Disorder/psychology; Electronic Data Processing; Social Behavior; Cognition/physiology; Fixation, Ocular/physiology; Humans; Voice
12.
Eur J Neurosci; 48(5): 2259-2271, 2018 Sep.
Article in English | MEDLINE | ID: mdl-30107052

ABSTRACT

Seeing a face being touched in spatial and temporal synchrony with one's own face produces a bias in self-recognition, whereby the other face becomes more likely to be perceived as the self. The present study employed event-related potentials to explore whether this enfacement effect reflects initial face encoding, enhanced distinctiveness of the enfaced face, modified self-identity representations, or even later processing stages associated with the emotional processing of faces. Participants were stroked in synchrony or asynchrony with an unfamiliar face they observed on a monitor in front of them, in a situation approximating a mirror image. Subsequently, event-related potentials were recorded during the presentation of (a) the previously synchronously stimulated face, (b) the asynchronously stimulated face, (c) the observer's own face, (d) filler faces, and (e) a to-be-detected target face, which required a response. Observers reported a consistent enfacement illusion after synchronous stimulation. Importantly, the synchronously stimulated face elicited more prominent N170 and P200 responses than the asynchronously stimulated face. By contrast, similar N250 and P300 responses were observed in these conditions. These results suggest that enfacement modulates early neural correlates of face encoding and facial prototypicality, rather than identity self-representations and associated emotional processes.


Subject(s)
Evoked Potentials/physiology; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Touch Perception/physiology; Adult; Face; Facial Recognition/physiology; Female; Humans; Illusions/physiology; Male; Physical Stimulation/methods; Self Concept; Touch/physiology; Young Adult
13.
Perception; 47(2): 185-196, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29165025

ABSTRACT

The role of second-order configuration, that is, metric distances between individual features, in familiar face recognition has been the subject of debate. Recent reports suggest that better face recognition abilities coincide with a weaker reliance on shape information for face recognition. We examined contributions of second-order configuration to familiar face repetition priming by manipulating metric distances between facial features. S1 primes were familiar faces presented either unaltered; with increased or decreased interocular distance; with increased or decreased distance between nose and mouth; or as a different familiar face (unprimed). Participants performed a familiarity decision task on familiar and unfamiliar S2 targets and completed a test battery consisting of three face identity processing tests. Accuracies, reaction times, and inverse efficiency scores were assessed for the priming experiment, and potential priming costs in inverse efficiency scores were correlated with test battery scores. Overall, priming was found, and priming effects were reduced only by primes with interocular distance distortions. Correlational data showed that better face recognition skills coincided with a weaker reliance on second-order configurations. Our findings (a) suggest an importance of interocular, but not nose-to-mouth, distances for familiar face recognition and (b) show that good face recognizers are less sensitive to second-order configuration.


Subject(s)
Facial Recognition/physiology; Recognition, Psychology/physiology; Implicit Memory/physiology; Adult; Female; Humans; Individuality; Male; Young Adult
14.
Neuroimage; 155: 1-9, 2017 Jul 15.
Article in English | MEDLINE | ID: mdl-28438667

ABSTRACT

The face perception system flexibly adjusts its neural responses to current face exposure, inducing aftereffects in the perception of subsequent faces. For instance, adaptation to expanded faces makes undistorted faces appear compressed, and adaptation to compressed faces makes undistorted faces appear expanded. Such distortion aftereffects have been proposed to result from renormalization, in which the visual system constantly updates a prototype according to the adaptors' characteristics and evaluates subsequent faces relative to that. However, although consequences of adaptation are easily observed in behavioral aftereffects, it has proven difficult to observe renormalization during adaptation itself. Here we directly measured brain responses during adaptation to establish a neural correlate of renormalization. Given that the face-evoked occipito-temporal P2 event-related brain potential has been found to increase with face prototypicality, we reasoned that the adaptor-elicited P2 could serve as an electrophysiological indicator for renormalization. Participants adapted to sequences of four distorted (compressed or expanded) or undistorted faces, followed by a slightly distorted test face, which they had to classify as undistorted or distorted. We analysed ERPs evoked by each of the adaptors and found that P2 (but not N170) amplitudes evoked by consecutive adaptor faces exhibited an electrophysiological pattern of renormalization during adaptation to distorted faces: P2 amplitudes evoked by both compressed and expanded adaptors significantly increased towards asymptotic levels as adaptation proceeded. P2 amplitudes were smallest for the first adaptor, significantly larger for the second, and yet larger for the third adaptor. We conclude that the sensitivity of the occipito-temporal P2 to the perceived deviation of a face from the norm makes this component an excellent tool to study adaptation-induced renormalization.


Subject(s)
Brain/physiology; Figural Aftereffect/physiology; Pattern Recognition, Visual/physiology; Adaptation, Physiological/physiology; Adult; Electroencephalography; Evoked Potentials/physiology; Face; Female; Humans; Male; Young Adult
15.
Cogn Affect Behav Neurosci; 17(1): 185-197, 2017 Feb.
Article in English | MEDLINE | ID: mdl-27718208

ABSTRACT

Recent findings show benefits for the learning and subsequent recognition of faces caricatured in shape or texture, but there is little evidence on whether this caricature learning advantage generalizes to the recognition of veridical counterparts at test. Moreover, it has been reported that texture information contributes relatively more, at the expense of shape information, to familiar compared to unfamiliar face recognition. The aim of this study was to examine whether veridical faces are recognized better when they were learned as caricatures than when they were learned as veridicals, what we call a caricature generalization benefit. Photorealistic facial stimuli derived from a 3-D camera system were caricatured selectively in either shape or texture by 50%. Faces were learned across different images either as veridicals, shape caricatures, or texture caricatures. At test, all learned and novel faces were presented as previously unseen frontal veridicals, and participants performed an old-new task. We assessed accuracies, reaction times, and face-sensitive event-related potentials (ERPs). Faces learned as caricatures were recognized more accurately than faces learned as veridicals. At learning, N250 and LPC were largest for shape caricatures, suggesting encoding advantages of distinctive facial shape. At test, the LPC was largest for faces that had been learned as texture caricatures, indicating the importance of texture for familiar face recognition. Overall, our findings demonstrate that caricature learning advantages can generalize to and, importantly, improve recognition of veridical versions of faces.


Subject(s)
Brain/physiology; Face; Generalization, Psychological/physiology; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Adolescent; Adult; Analysis of Variance; Electroencephalography; Evoked Potentials; Female; Humans; Imaging, Three-Dimensional; Male; Neuropsychological Tests; Photic Stimulation; Reaction Time; Young Adult
16.
Cogn Affect Behav Neurosci; 15(1): 180-94, 2015 Mar.
Article in English | MEDLINE | ID: mdl-24934133

ABSTRACT

Participants are more accurate at remembering faces from their own relative to a different age group (the own-age bias, or OAB). A recent socio-cognitive account has suggested that differential allocation of attention to old versus young faces underlies this phenomenon. Critically, empirical evidence for a direct relationship between attention to own- versus other-age faces and the OAB in memory is lacking. To fill this gap, we tested the role of attention in three different experimental paradigms and additionally analyzed event-related brain potentials (ERPs). In Experiment 1, we compared the learning of old and young faces during focused versus divided attention, but revealed similar OABs in subsequent memory for both attention conditions. Similarly, manipulating attention during learning did not differentially affect the ERPs elicited by young versus old faces. In Experiment 2, we examined repetition effects on the N250r ERP component, an index of face recognition, from task-irrelevant old and young faces presented under varying attentional load. Independent of load, the N250r effects were comparable for both age categories. Finally, in Experiment 3 we measured the N2pc as an index of attentional selection of old versus young target faces in a visual search task. The N2pc was not significantly different for young versus old target search conditions, suggesting similar orienting of attention to either face age group. Overall, we propose that the OAB in memory is largely unrelated to early attentional processes. Our findings therefore contrast with the predictions of socio-cognitive accounts of own-group biases in recognition memory and are more easily reconciled with expertise-based models.


Subject(s)
Attention/physiology; Evoked Potentials/physiology; Face; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Adolescent; Adult; Age Factors; Electroencephalography; Female; Humans; Male; Social Perception; Young Adult
17.
Cereb Cortex; 24(3): 826-35, 2014 Mar.
Article in English | MEDLINE | ID: mdl-23172775

ABSTRACT

Participants are more accurate at remembering faces of their own relative to another ethnic group (own-race bias, ORB). This phenomenon has been explained by reduced perceptual expertise, or alternatively, by the categorization of other-race faces into social out-groups and reduced effort to individuate such faces. We examined event-related potential (ERP) correlates of the ORB, testing recognition memory for Asian and Caucasian faces in Caucasian and Asian participants. Both groups demonstrated a significant ORB in recognition memory. ERPs revealed more negative N170 amplitudes for other-race faces in both groups, probably reflecting more effortful structural encoding. Importantly, the ethnicity effect in left-hemispheric N170 during learning correlated significantly with the behavioral ORB. Similarly, in the subsequent N250, both groups demonstrated more negative amplitudes for other-race faces, and during test phases, this effect correlated significantly with the ORB. We suggest that ethnicity effects in the N170 reflect an early categorization of other-race faces into a social out-group, resulting in less efficient encoding and thus decreased memory. Moreover, ethnicity effects in the N250 may represent the "tagging" of other-race faces as perceptually salient, which hampers the recognition of these faces.


Subject(s)
Bias; Brain/physiology; Evoked Potentials, Visual/physiology; Face; Pattern Recognition, Visual/physiology; Recognition, Psychology/physiology; Adult; Analysis of Variance; Asian People; Brain Mapping; Electroencephalography; Female; Humans; Male; Racial Groups; Reaction Time; Statistics as Topic; Young Adult
18.
J Acoust Soc Am ; 138(2): 1180-93, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26328731

ABSTRACT

Prior adaptation to male (or female) voices causes androgynous voices to be perceived as more female (or male). Using a selective adaptation paradigm, the authors investigated the relative impact of the vocal fold vibration rate (F0) and timbre (operationally defined in this paper as the characteristics that differentiate two voices of the same F0 and loudness) on this basic voice gender aftereffect. TANDEM-STRAIGHT was used to morph between 10 pairs of male and female speakers uttering 2 different vowel-consonant-vowel sequences (20 continua). Adaptor stimuli had one parameter (either F0 or timbre) set at a clearly male or female level, while the other parameter was set at an androgynous level, as determined by an independent set of listeners. Compared to a control adaptation condition (in which both F0 and timbre were clearly male or female), aftereffects were clearly reduced in both F0 and timbre adaptation conditions. Critically, larger aftereffects were found after timbre adaptation (comprising androgynous F0) compared to F0 adaptation (comprising androgynous timbre). Together these results suggest that timbre plays a larger role than F0 in voice gender adaptation. Finally, the authors found some evidence that individual differences among listeners reflect in part pre-experimental contact with male and female voices.
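The adaptor logic in this abstract — one parameter (F0 or timbre) fixed at a clearly male or female level while the other is held androgynous — can be illustrated with a toy linear interpolation of F0 along a male-to-female continuum. This is only a schematic sketch: TANDEM-STRAIGHT performs far more sophisticated spectral morphing, and all values below are invented:

```python
def morph_f0(male_f0: float, female_f0: float, w: float) -> float:
    """Interpolate F0 between a male (w=0) and a female (w=1) voice."""
    return male_f0 * (1 - w) + female_f0 * w

# Hypothetical endpoint F0 values (Hz) for one speaker pair.
male_f0, female_f0 = 110.0, 220.0

# A seven-step gender continuum from clearly male to clearly female.
continuum = [morph_f0(male_f0, female_f0, i / 6) for i in range(7)]

# An "F0 adaptor" in the study's design: F0 clearly male (w=0),
# while timbre (not modeled here) would be held androgynous (w=0.5).
adaptor_f0 = morph_f0(male_f0, female_f0, 0.0)
androgynous_f0 = morph_f0(male_f0, female_f0, 0.5)
print(adaptor_f0, androgynous_f0)  # 110.0 165.0
```

Test stimuli near the androgynous midpoint of such a continuum are the ones whose perceived gender shifts after adaptation.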


Subject(s)
Gender Identity , Sex Characteristics , Speech Perception/physiology , Transfer, Psychology/physiology , Voice Quality , Acoustic Stimulation , Adolescent , Adult , Female , Humans , Male , Normal Distribution , Phonetics , Young Adult
19.
Neuroimage ; 92: 90-105, 2014 May 15.
Article in English | MEDLINE | ID: mdl-24531047

ABSTRACT

A current debate in memory research is whether and how the access to source information depends not only on recollection, but on fluency-based processes as well. In three experiments, we used event-related brain potentials (ERPs) to examine influences of fluency on source memory for famous names. At test, names were presented visually throughout, whereas visual or auditory presentation was used at learning. In Experiment 1, source decisions following old/new judgments were more accurate for repeated relative to non-repeated visually and auditorily learned names. ERPs were more positive between 300 and 600 ms for visually learned as compared to both auditorily learned and new names, resembling an N400 priming effect. In Experiment 2, we omitted the old/new decision to more directly test fast-acting fluency effects on source memory. We observed more accurate source judgments for repeated versus non-repeated visually learned names, but no such effect for repeated versus non-repeated auditorily learned names. Again, an N400 effect (300-600 ms) differentiated between visually and auditorily learned names. Importantly, this effect occurred for correct source decisions only. We interpret it as indexing fluency arising from within-modality priming of visually learned names at test. This idea was further supported in Experiment 3, which revealed an analogous pattern of results in older adults, consistent with the assumption of spared fluency processes in older age. In sum, our findings suggest that fluency affects person-related source memory via within-modality repetition priming in both younger and older adults.
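The N400-like priming effect reported in this abstract is a mean-amplitude difference in a 300-600 ms post-stimulus window between conditions. A schematic sketch of that computation on synthetic single-trial data; the array shapes, sampling rate, and amplitude values are hypothetical, not the study's data:

```python
import numpy as np

def mean_amplitude(epochs: np.ndarray, times: np.ndarray,
                   t_min: float, t_max: float) -> float:
    """Mean ERP amplitude (µV) in a time window, averaged over trials."""
    mask = (times >= t_min) & (times < t_max)
    return float(epochs[:, mask].mean())

# Hypothetical epochs: trials x samples, 1 kHz sampling, -100..700 ms.
rng = np.random.default_rng(0)
times = np.arange(-100, 700)  # ms
visual = rng.normal(2.0, 1.0, (40, times.size))    # visually learned names
auditory = rng.normal(0.5, 1.0, (40, times.size))  # auditorily learned names

# More positive amplitudes 300-600 ms for within-modality (visual) repeats.
effect = (mean_amplitude(visual, times, 300, 600)
          - mean_amplitude(auditory, times, 300, 600))
print(effect > 0)
```

In practice the same window-mean would be computed per participant and condition before statistical testing.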


Subject(s)
Aging/physiology , Cerebral Cortex/physiology , Evoked Potentials/physiology , Memory, Episodic , Mental Recall/physiology , Names , Pattern Recognition, Physiological/physiology , Brain Mapping , Female , Humans , Male , Young Adult
20.
Neuroimage ; 102 Pt 2: 736-47, 2014 Nov 15.
Article in English | MEDLINE | ID: mdl-25173417

ABSTRACT

Spatially caricatured faces were recently shown to benefit face learning (Schulz et al., 2012a). Moreover, spatial information may be particularly important for encoding unfamiliar faces, but less so for recognizing familiar faces (Kaufmann et al., 2013). To directly test the possibility of a major role of reflectance information for the recognition of familiar faces, we compared effects of selective photorealistic caricaturing in either shape or reflectance on face learning and recognition. Participants learned 3D-photographed faces across different viewpoints, and different images were presented at learning and test. At test, performance benefits for both types of caricatures were modulated by familiarity: Benefits for learned faces were substantially larger for reflectance caricatures, whereas benefits for novel faces were numerically larger for shape caricatures. ERPs confirmed a consistent reduction of the occipitotemporal P200 (200-240 ms) by shape caricaturing, whereas the most prominent effect of reflectance caricaturing was seen in an enhanced posterior N250 (240-400 ms), a component that has been related to the activation of acquired face representations. Our results suggest that performance benefits for face learning caused by distinctive spatial versus reflectance information are mediated by different neural processes with different timing and support a prominent role of reflectance for the recognition of learned faces.
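Caricaturing of the kind used in this abstract is commonly modeled as scaling a face's deviation from an average (norm) face by a factor k > 1, applied selectively to shape coordinates or to reflectance values. A toy sketch; the vectors and the value of k are hypothetical:

```python
import numpy as np

def caricature(face: np.ndarray, norm: np.ndarray, k: float = 1.5) -> np.ndarray:
    """Exaggerate a face's deviation from the average (norm) face by factor k."""
    return norm + k * (face - norm)

# Hypothetical toy "face": a short vector of shape coordinates
# (the same operation applies to reflectance values).
norm_face = np.array([0.0, 0.0, 0.0])
face = np.array([1.0, -2.0, 0.5])

shape_caricature = caricature(face, norm_face, k=1.5)
print(shape_caricature)  # every deviation from the norm scaled by 1.5
```

Selective caricaturing, as compared here, applies this scaling to one dimension (shape or reflectance) while leaving the other at its veridical level (k = 1).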


Subject(s)
Evoked Potentials/physiology , Face , Learning/physiology , Pattern Recognition, Visual/physiology , Adolescent , Adult , Caricatures as Topic , Female , Humans , Male , Young Adult