Results 1 - 20 of 31
1.
Cereb Cortex ; 33(7): 3621-3635, 2023 03 21.
Article in English | MEDLINE | ID: mdl-36045002

ABSTRACT

Neurons, even in the earliest sensory regions of cortex, are subject to a great deal of contextual influence from both within- and across-modality connections. Recent work has shown that primary sensory areas can respond to and, in some cases, discriminate stimuli that are not of their target modality: for example, primary somatosensory cortex (SI) discriminates visual images of graspable objects. In the present work, we investigated whether SI would discriminate sounds depicting hand-object interactions (e.g. bouncing a ball). In a rapid event-related functional magnetic resonance imaging experiment, participants listened attentively to sounds from 3 categories: hand-object interactions, and control categories of pure tones and animal vocalizations, while performing a one-back repetition detection task. Multivoxel pattern analysis revealed significant decoding of hand-object interaction sounds within SI, but not for either control category. Crucially, in the hand-sensitive voxels defined from an independent tactile localizer, decoding accuracies were significantly higher for hand-object interactions compared to pure tones in left SI. Our findings indicate that simply hearing sounds depicting familiar hand-object interactions elicits different patterns of activity in SI, despite the complete absence of tactile stimulation. These results highlight the rich contextual information that can be transmitted across sensory modalities even to primary sensory areas.
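The leave-one-run-out multivoxel decoding described above can be sketched with a toy nearest-centroid classifier. This is only an illustration: the abstract does not specify the classifier, and the runs, categories, pattern dimensions, and noise levels below are all hypothetical.

```python
import random

def nearest_centroid_decode(runs):
    """Leave-one-run-out decoding of category from voxel patterns.

    `runs` is a list of runs; each run maps category -> voxel pattern
    (list of floats). Returns the fraction of held-out patterns whose
    nearest training centroid carries the correct category label.
    """
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = total = 0
    for i, test_run in enumerate(runs):
        train = [r for j, r in enumerate(runs) if j != i]
        # Average the training patterns per category to form centroids.
        centroids = {}
        for cat in test_run:
            pats = [r[cat] for r in train]
            centroids[cat] = [sum(v) / len(v) for v in zip(*pats)]
        for cat, pattern in test_run.items():
            guess = min(centroids, key=lambda c: dist(pattern, centroids[c]))
            correct += guess == cat
            total += 1
    return correct / total

# Toy data: 6 runs, 2 sound categories with distinct mean patterns plus noise.
rng = random.Random(0)
means = {"hand_object": [1.0, 0.0, 0.5], "pure_tone": [0.0, 1.0, 0.5]}
runs = [{c: [m + rng.gauss(0, 0.3) for m in means[c]] for c in means}
        for _ in range(6)]
acc = nearest_centroid_decode(runs)
```

With well-separated toy patterns, cross-validated accuracy lands far above the 50% chance level, which is the kind of effect the MVPA in the study tests for.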


Subject(s)
Hand, Somatosensory Cortex, Animals, Somatosensory Cortex/diagnostic imaging, Somatosensory Cortex/physiology, Touch/physiology, Neurons/physiology, Magnetic Resonance Imaging, Brain Mapping
2.
Neuroimage ; 258: 119347, 2022 09.
Article in English | MEDLINE | ID: mdl-35660460

ABSTRACT

The reproducibility crisis in neuroimaging, particularly in the case of underpowered studies, has raised doubts about our ability to reproduce, replicate and generalize findings. In response, suggested guidelines and principles for neuroscientists, known as Good Scientific Practice, have emerged for conducting more reliable research. Still, every study remains almost unique in its combination of analytical and statistical approaches. While this is understandable given the diversity of designs and brain-data recordings, it also represents a striking obstacle to reproducibility. Here, we propose a non-parametric permutation-based statistical framework, primarily designed for neurophysiological data, to perform group-level inferences on non-negative measures of information, encompassing metrics from information theory, machine learning, and measures of distance. The framework supports both fixed- and random-effect models to adapt to inter-individual and inter-session variability. Using numerical simulations, we compared the accuracy of both group models in retrieving the ground truth, together with test- and cluster-wise corrections for multiple comparisons. We then reproduced and extended existing results using both spatially uniform MEG and non-uniform intracranial neurophysiological data. We showed how the framework can be used to extract stereotypical task- and behavior-related effects across the population, covering scales from the local level of brain regions and inter-areal functional connectivity to measures summarizing network properties. We also present an open-source Python toolbox called Frites that implements the proposed statistical pipeline using information-theoretic metrics such as single-trial functional connectivity estimations for the extraction of cognitive brain networks. Taken together, we believe this framework deserves careful attention, as its robustness and flexibility could be the starting point toward standardizing statistical approaches.
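A minimal sketch of group-level permutation inference on a non-negative information measure, in the spirit of the framework above. It assumes per-subject null distributions obtained by recomputing the measure after shuffling trial labels; the subject count, effect size, and noise levels below are invented for illustration, and this omits the cluster-wise corrections the paper also covers.

```python
import random

def group_permutation_pvalue(observed, null_per_subject, n_perm=2000, seed=0):
    """Group-level p-value for a non-negative measure (e.g. mutual information).

    `observed` holds one value per subject; `null_per_subject[s]` is a list of
    values for subject s recomputed after shuffling that subject's trial
    labels. The null distribution of the group mean is built by drawing one
    null value per subject and averaging (random-effects style); the p-value
    is the fraction of null group means reaching the observed group mean.
    """
    rng = random.Random(seed)
    obs_mean = sum(observed) / len(observed)
    exceed = 0
    for _ in range(n_perm):
        draw = [rng.choice(nulls) for nulls in null_per_subject]
        exceed += (sum(draw) / len(draw)) >= obs_mean
    return (exceed + 1) / (n_perm + 1)

# Toy example: 10 subjects with a real effect (~0.5 bits) vs chance-level nulls.
rng = random.Random(1)
observed = [0.5 + rng.gauss(0, 0.05) for _ in range(10)]
nulls = [[abs(rng.gauss(0, 0.05)) for _ in range(200)] for _ in range(10)]
p = group_permutation_pvalue(observed, nulls)
```

Because the measure is non-negative, the test is one-sided by construction; the `+1` terms give the standard bias-corrected permutation p-value.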


Subject(s)
Brain Mapping, Brain, Brain/physiology, Brain Mapping/methods, Cognition, Humans, Neuroimaging/methods, Reproducibility of Results
3.
Hum Brain Mapp ; 38(3): 1541-1573, 2017 03.
Article in English | MEDLINE | ID: mdl-27860095

ABSTRACT

We begin by reviewing the statistical framework of information theory as applicable to neuroimaging data analysis. A major factor hindering wider adoption of this framework in neuroimaging is the difficulty of estimating information theoretic quantities in practice. We present a novel estimation technique that combines the statistical theory of copulas with the closed form solution for the entropy of Gaussian variables. This results in a general, computationally efficient, flexible, and robust multivariate statistical framework that provides effect sizes on a common meaningful scale, allows for unified treatment of discrete, continuous, unidimensional and multidimensional variables, and enables direct comparisons of representations from behavioral and brain responses across any recording modality. We validate the use of this estimate as a statistical test within a neuroimaging context, considering both discrete stimulus classes and continuous stimulus features. We also present examples of analyses facilitated by these developments, including application of multivariate analyses to MEG planar magnetic field gradients, and pairwise temporal interactions in evoked EEG responses. We show the benefit of considering the instantaneous temporal derivative together with the raw values of M/EEG signals as a multivariate response, how we can separately quantify modulations of amplitude and direction for vector quantities, and how we can measure the emergence of novel information over time in evoked responses. Open-source Matlab and Python code implementing the new methods accompanies this article. Hum Brain Mapp 38:1541-1573, 2017. © 2016 Wiley Periodicals, Inc.


Subject(s)
Brain Mapping, Brain/diagnostic imaging, Brain/physiology, Information Theory, Neuroimaging/methods, Normal Distribution, Computer Simulation, Electroencephalography, Entropy, Humans, Sensitivity and Specificity
4.
Proc Natl Acad Sci U S A ; 111(38): 13795-8, 2014 Sep 23.
Article in English | MEDLINE | ID: mdl-25201950

ABSTRACT

The influence of language familiarity upon speaker identification is well established, to such an extent that it has been argued that "Human voice recognition depends on language ability" [Perrachione TK, Del Tufo SN, Gabrieli JDE (2011) Science 333(6042):595]. However, 7-mo-old infants discriminate speakers of their mother tongue better than they do foreign speakers [Johnson EK, Westrek E, Nazzi T, Cutler A (2011) Dev Sci 14(5):1002-1011] despite their limited speech comprehension abilities, suggesting that speaker discrimination may rely on familiarity with the sound structure of one's native language rather than the ability to comprehend speech. To test this hypothesis, we asked Chinese and English adult participants to rate speaker dissimilarity in pairs of sentences in English or Mandarin that were first time-reversed to render them unintelligible. Even in these conditions a language-familiarity effect was observed: Both Chinese and English listeners rated pairs of native-language speakers as more dissimilar than foreign-language speakers, despite their inability to understand the material. Our data indicate that the language familiarity effect is not based on comprehension but rather on familiarity with the phonology of one's native language. This effect may stem from a mechanism analogous to the "other-race" effect in face recognition.


Subject(s)
Comprehension/physiology, Language, Speech Intelligibility/physiology, Speech Perception/physiology, Adult, Female, Humans, Male
5.
Exp Brain Res ; 234(4): 1145-58, 2016 Apr.
Article in English | MEDLINE | ID: mdl-26790425

ABSTRACT

Skilled interactions with sounding objects, such as drumming, rely on resolving the uncertainty in the acoustical and tactual feedback signals generated by vibrating objects. Uncertainty may arise from mis-estimation of the objects' geometry-independent mechanical properties, such as surface stiffness. How multisensory information feeds back into the fine-tuning of sound-generating actions remains unexplored. Participants (percussionists, non-percussion musicians, or non-musicians) held a stylus and learned to control their wrist velocity while repeatedly striking a virtual sounding object whose surface stiffness was under computer control. Sensory feedback was manipulated by perturbing the surface stiffness specified by audition and haptics in a congruent or incongruent manner. The compensatory changes in striking velocity were measured as the motor effects of the sensory perturbations, and sensory dominance was quantified by the asymmetry of congruency effects across audition and haptics. A pronounced dominance of haptics over audition suggested a superior utility of somatosensation developed through long-term experience with object exploration. Large interindividual differences in the motor effects of haptic perturbation potentially arose from a differential reliance on the type of tactual prediction error for which participants tend to compensate: vibrotactile force versus object deformation. Musical experience did not have much of an effect beyond a slightly greater reliance on object deformation in mallet percussionists. The bias toward haptics in the presence of crossmodal perturbations was greater when participants appeared to rely on object deformation feedback, suggesting a weaker association between haptically sensed object deformation and the acoustical structure of concomitant sound during everyday experience of actions upon objects.


Subject(s)
Acoustic Stimulation/methods, Auditory Perception/physiology, Movement/physiology, Wrist/physiology, Adolescent, Adult, Female, Humans, Male, Physical Stimulation/methods, Young Adult
6.
J Acoust Soc Am ; 138(1): 457-66, 2015 Jul.
Article in English | MEDLINE | ID: mdl-26233044

ABSTRACT

Dynamic information in acoustical signals produced by bouncing objects is often used by listeners to predict the objects' future behavior (e.g., hitting a ball). This study examined factors that affect the accuracy of motor responses to sounds of real-world dynamic events. In experiment 1, listeners heard 2-5 bounces from a tennis ball, ping-pong ball, basketball, or wiffle ball, and tapped to indicate the time of the next bounce in the series. Across ball types and numbers of bounces, listeners were extremely accurate in predicting the correct bounce time (CT), with a mean prediction error of only 2.58% of the CT. Predictions based on a physical model of bouncing events indicated that listeners relied primarily on temporal cues when estimating the timing of the next bounce, and to a lesser extent on loudness and spectral cues. In experiment 2, the timing of each bounce pattern was altered to correspond to the bounce timing pattern of another ball, producing stimuli with contradictory acoustic cues. Nevertheless, listeners remained highly accurate in their estimates of bounce timing. This suggests that listeners can adapt their estimates of bouncing-object timing based on the acoustic cues that provide the most veridical information about dynamic aspects of object behavior.
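The temporal regularity that such a physical model exploits can be sketched simply: for an idealized bouncing ball, impact velocity scales by the coefficient of restitution at each bounce, and flight time is proportional to impact velocity, so each inter-bounce interval is a constant factor times the previous one. A toy predictor under that assumption (the bounce times and restitution factor below are hypothetical, not taken from the study's stimuli):

```python
def predict_next_bounce(bounce_times):
    """Predict the next bounce time assuming idealized bounce dynamics:
    impact velocity scales by a constant restitution factor e at each
    bounce, and flight time is proportional to impact velocity, so each
    inter-bounce interval is e times the previous one."""
    gaps = [b - a for a, b in zip(bounce_times, bounce_times[1:])]
    ratios = [g2 / g1 for g1, g2 in zip(gaps, gaps[1:])]
    e = sum(ratios) / len(ratios)  # estimate e from successive gap ratios
    return bounce_times[-1] + e * gaps[-1]

# Hypothetical bounce times (seconds): first gap 0.40 s, e = 0.8.
times = [0.0, 0.40, 0.72, 0.976]
next_bounce = predict_next_bounce(times)
```

This uses only the temporal cues; a fuller model would also weigh loudness and spectral cues, which the study found listeners use to a lesser extent.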


Subject(s)
Psychological Anticipation/physiology, Physiological Pattern Recognition/physiology, Sound, Time Perception/physiology, Acoustic Stimulation, Acoustics, Adolescent, Cues (Psychology), Female, Humans, Male, Psychological Models, Motion (Physics), Sound Localization, Sports Equipment, Time Factors, Young Adult
7.
Cereb Cortex ; 23(9): 2025-37, 2013 Sep.
Article in English | MEDLINE | ID: mdl-22802575

ABSTRACT

The human brain is thought to process auditory objects along a hierarchical temporal "what" stream that progressively abstracts object information from the low-level structure (e.g., loudness) as processing proceeds along the middle-to-anterior direction. Empirical demonstrations of abstract object encoding, independent of low-level structure, have relied on speech stimuli, and non-speech studies of object-category encoding (e.g., human vocalizations) often lack a systematic assessment of low-level information (e.g., vocalizations are highly harmonic). It is currently unknown whether abstract encoding constitutes a general functional principle that operates for auditory objects other than speech. We combined multivariate analyses of functional imaging data with an accurate analysis of the low-level acoustical information to examine the abstract encoding of non-speech categories. We observed abstract encoding of the living and human-action sound categories in the fine-grained spatial distribution of activity in the middle-to-posterior temporal cortex (e.g., planum temporale). Abstract encoding of auditory objects appears to extend to non-speech biological sounds and to operate in regions other than the anterior temporal lobe. Neural processes for the abstract encoding of auditory objects might have facilitated the emergence of speech categories in our ancestors.


Subject(s)
Auditory Perception/physiology, Cerebral Cortex/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging, Male, Temporal Lobe/physiology, Young Adult
8.
Nat Neurosci ; 26(4): 664-672, 2023 04.
Article in English | MEDLINE | ID: mdl-36928634

ABSTRACT

Recognizing sounds implicates the cerebral transformation of input waveforms into semantic representations. Although past research identified the superior temporal gyrus (STG) as a crucial cortical region, the computational fingerprint of these cerebral transformations remains poorly characterized. Here, we exploit a model comparison framework and contrasted the ability of acoustic, semantic (continuous and categorical) and sound-to-event deep neural network representation models to predict perceived sound dissimilarity and 7 T human auditory cortex functional magnetic resonance imaging responses. We confirm that spectrotemporal modulations predict early auditory cortex (Heschl's gyrus) responses, and that auditory dimensions (for example, loudness, periodicity) predict STG responses and perceived dissimilarity. Sound-to-event deep neural networks predict Heschl's gyrus responses similar to acoustic models but, notably, they outperform all competing models at predicting both STG responses and perceived dissimilarity. Our findings indicate that STG entails intermediate acoustic-to-semantic sound representations that neither acoustic nor semantic models can account for. These representations are compositional in nature and relevant to behavior.


Subject(s)
Auditory Cortex, Semantics, Humans, Acoustic Stimulation/methods, Auditory Cortex/physiology, Acoustics, Magnetic Resonance Imaging, Auditory Perception/physiology, Brain Mapping/methods
9.
J Acoust Soc Am ; 132(6): 4002-12, 2012 Dec.
Article in English | MEDLINE | ID: mdl-23231129

ABSTRACT

The overall goal of the research presented here is to better understand how players evaluate violins, within the wider context of finding relationships between measurable vibrational properties of instruments and their perceived qualities. In this study, we examined how reliably skilled musicians evaluate the qualities of a violin. In a first experiment, violinists were allowed to play a set of different violins freely and were then asked to rank the instruments by preference. Results showed that players were self-consistent, but a large amount of inter-individual variability was present. A second experiment was then conducted to investigate the origin of inter-individual differences in the preference for violins and to measure the extent to which different attributes of the instrument influence preference. Again, results showed large inter-individual variations in the preference for violins, as well as in assessments of various characteristics of the instruments. Despite the significant lack of agreement in preference and the variability in how different criteria are evaluated between individuals, violin players tend to agree on the relevance of sound "richness" and, to a lesser extent, "dynamic range" for determining preference.


Subject(s)
Auditory Perception, Judgment, Music, Acoustic Stimulation, Acoustics, Adult, Aged, Female, Humans, Male, Middle Aged, Observer Variation, Reproducibility of Results, Sound, Vibration, Young Adult
10.
J Acoust Soc Am ; 131(5): 4002-12, 2012 May.
Article in English | MEDLINE | ID: mdl-22559373

ABSTRACT

Locomotion generates multisensory information about walked-upon objects. How perceptual systems use such information to get to know the environment remains unexplored. The ability to identify solid (e.g., marble) and aggregate (e.g., gravel) walked-upon materials was investigated in auditory, haptic or audio-haptic conditions, and in a kinesthetic condition where tactile information was perturbed with a vibromechanical noise. Overall, identification performance was better than chance in all experimental conditions and for both solids and the better identified aggregates. Despite large mechanical differences between the response of solids and aggregates to locomotion, for both material categories discrimination was at its worst in the auditory and kinesthetic conditions and at its best in the haptic and audio-haptic conditions. An analysis of the dominance of sensory information in the audio-haptic context supported a focus on the most accurate modality, haptics, but only for the identification of solid materials. When identifying aggregates, response biases appeared to produce a focus on the least accurate modality--kinesthesia. When walking on loose materials such as gravel, individuals do not perceive surfaces by focusing on the most accurate modality, but by focusing on the modality that would most promptly signal postural instabilities.


Subject(s)
Auditory Perception/physiology, Discrimination (Psychology)/physiology, Touch Perception/physiology, Walking/physiology, Acoustic Stimulation, Adult, Bias, Construction Materials, Humans, Male, Perceptual Masking/physiology, Photic Stimulation, Vibration, Wood
11.
Front Neurosci ; 16: 921489, 2022.
Article in English | MEDLINE | ID: mdl-36148146

ABSTRACT

We use functional Magnetic Resonance Imaging (fMRI) to explore synchronized neural responses between observers of audiovisual presentation of a string quartet performance during free viewing. Audio presentation was accompanied by visual presentation of the string quartet as stick figures observed from a static viewpoint. Brain data from 18 musical novices were obtained during audiovisual presentation of a 116 s performance of the allegro of String Quartet, No. 14 in D minor by Schubert played by the 'Quartetto di Cremona.' These data were analyzed using intersubject correlation (ISC). Results showed extensive ISC in auditory and visual areas as well as parietal cortex, frontal cortex and subcortical areas including the medial geniculate and basal ganglia (putamen). These results from a single fixed viewpoint of multiple musicians are greater than previous reports of ISC from unstructured group activity but are broadly consistent with related research that used ISC to explore listening to music or watching solo dance. A feature analysis examining the relationship between brain activity and physical features of the auditory and visual signals yielded findings of a large proportion of activity related to auditory and visual processing, particularly in the superior temporal gyrus (STG) as well as midbrain areas. Motor areas were also involved, potentially as a result of watching motion from the stick figure display of musicians in the string quartet. These results reveal involvement of areas such as the putamen in processing complex musical performance and highlight the potential of using brief naturalistic stimuli to localize distinct brain areas and elucidate potential mechanisms underlying multisensory integration.

12.
Front Psychol ; 13: 964209, 2022.
Article in English | MEDLINE | ID: mdl-36312201

ABSTRACT

Taxonomies and ontologies for the characterization of everyday sounds have been developed in several research fields, including auditory cognition, soundscape research, artificial hearing, sound design, and medicine. Here, we surveyed 36 such knowledge organization systems, which we identified through a systematic literature search. To evaluate the semantic domains covered by these systems within a homogeneous framework, we introduced a comprehensive set of verbal sound descriptors (sound source properties; attributes of sensation; sound signal descriptors; onomatopoeias; music genres), which we used to manually label the surveyed descriptor classes. We reveal that most taxonomies and ontologies were developed to characterize higher-level semantic relations between sound sources, in terms of the sound-generating objects and actions involved (what/how) or in terms of the environmental context (where). This indicates the current lack of a comprehensive ontology of everyday sounds that simultaneously covers all semantic aspects of the relation between sounds. Such an ontology may have a wide range of applications and purposes, ranging from extending our scientific knowledge of auditory processes in the real world to developing artificial hearing systems.

13.
J Acoust Soc Am ; 130(5): 2902-16, 2011 Nov.
Article in English | MEDLINE | ID: mdl-22087919

ABSTRACT

The analysis of musical signals to extract audio descriptors that can potentially characterize their timbre has been disparate and often too focused on a particular small set of sounds. The Timbre Toolbox provides a comprehensive set of descriptors that can be useful in perceptual research, as well as in music information retrieval and machine-learning approaches to content-based retrieval in large sound databases. Sound events are first analyzed in terms of various input representations (short-term Fourier transform, harmonic sinusoidal components, an auditory model based on the equivalent rectangular bandwidth concept, the energy envelope). A large number of audio descriptors are then derived from each of these representations to capture temporal, spectral, spectrotemporal, and energetic properties of the sound events. Some descriptors are global, providing a single value for the whole sound event, whereas others are time-varying. Robust descriptive statistics are used to characterize the time-varying descriptors. To examine the information redundancy across audio descriptors, correlational analysis followed by hierarchical clustering is performed. This analysis suggests ten classes of relatively independent audio descriptors, showing that the Timbre Toolbox is a multidimensional instrument for the measurement of the acoustical structure of complex sound signals.


Subject(s)
Acoustics, Theoretical Models, Music, Computer-Assisted Signal Processing, Software, Cluster Analysis, Fourier Analysis, Programming Languages, Sound Spectrography, Time Factors
14.
Nat Hum Behav ; 5(9): 1203-1213, 2021 09.
Article in English | MEDLINE | ID: mdl-33707658

ABSTRACT

Long-standing affective science theories conceive the perception of emotional stimuli either as discrete categories (for example, an angry voice) or continuous dimensional attributes (for example, an intense and negative vocal emotion). Which position provides a better account is still widely debated. Here we contrast the positions to account for acoustics-independent perceptual and cerebral representational geometry of perceived voice emotions. We combined multimodal imaging of the cerebral response to heard vocal stimuli (using functional magnetic resonance imaging and magneto-encephalography) with post-scanning behavioural assessment of voice emotion perception. By using representational similarity analysis, we find that categories prevail in perceptual and early (less than 200 ms) frontotemporal cerebral representational geometries and that dimensions impinge predominantly on a later limbic-temporal network (at 240 ms and after 500 ms). These results reconcile the two opposing views by reframing the perception of emotions as the interplay of cerebral networks with different representational dynamics that emphasize either categories or dimensions.


Subject(s)
Arousal/physiology, Emotions/physiology, Speech Perception/physiology, Acoustic Stimulation/methods, Anger, Humans, Voice/physiology
15.
Curr Biol ; 31(21): 4839-4844.e4, 2021 11 08.
Article in English | MEDLINE | ID: mdl-34506729

ABSTRACT

How the evolution of speech has transformed the human auditory cortex compared to other primates remains largely unknown. While primary auditory cortex is organized largely similarly in humans and macaques,1 the picture is much less clear at higher levels of the anterior auditory pathway,2 particularly regarding the processing of conspecific vocalizations (CVs). A "voice region" similar to the human voice-selective areas3,4 has been identified in the macaque right anterior temporal lobe with functional MRI;5 however, its anatomical localization, seemingly inconsistent with that of the human temporal voice areas (TVAs), has suggested a "repositioning of the voice area" in recent human evolution.6 Here we report a functional homology in the cerebral processing of vocalizations by macaques and humans, using comparative fMRI and a condition-rich auditory stimulation paradigm. We find that the anterior temporal lobe of both species possesses cortical voice areas that are bilateral and not only prefer conspecific vocalizations but also implement a representational geometry categorizing them apart from all other sounds in a species-specific but homologous manner. These results reveal a more similar functional organization of higher-level auditory cortex in macaques and humans than currently known.


Subject(s)
Auditory Cortex, Acoustic Stimulation, Animals, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Humans, Macaca, Magnetic Resonance Imaging, Primates, Animal Vocalization/physiology
16.
Brain Cogn ; 73(1): 7-19, 2010 Jun.
Article in English | MEDLINE | ID: mdl-20188452

ABSTRACT

The neurocognitive processing of environmental sounds and linguistic stimuli shares common semantic resources and can lead to the activation of motor programs for the generation of the passively heard sound or speech. We investigated the extent to which the cognition of environmental sounds, like that of language, relies on symbolic mental representations independent of the acoustic input. In a hierarchical sorting task, we found that evaluation of nonliving sounds is consistently biased toward a focus on acoustical information. However, the evaluation of living sounds focuses spontaneously on sound-independent semantic information, but can rely on acoustical information after exposure to a context consisting of nonliving sounds. We interpret these results as support for a robust iconic processing strategy for nonliving sounds and a flexible symbolic processing strategy for living sounds.


Subject(s)
Cognition, Language, Physiological Pattern Recognition, Recognition (Psychology), Sound, Acoustic Stimulation, Classification, Female, Humans, Male, Psycholinguistics, Reference Values, Set (Psychology), Young Adult
17.
J Acoust Soc Am ; 128(3): 1401-13, 2010 Sep.
Article in English | MEDLINE | ID: mdl-20815474

ABSTRACT

Sounds convey information about the materials composing an object. Stimuli were synthesized using a computer model of impacted plates that varied their material properties: viscoelastic and thermoelastic damping and wave velocity (related to elasticity and mass density). The range of damping properties represented a continuum between materials with predominant viscoelastic and thermoelastic damping (glass and aluminum, respectively). The perceptual structure of the sounds was inferred from multidimensional scaling of dissimilarity judgments and from their categorization as glass or aluminum. Dissimilarity ratings revealed dimensions that were closely related to mechanical properties: a wave-velocity-related dimension associated with pitch and a damping-related dimension associated with timbre and duration. When asked to categorize sounds, however, listeners ignored the cues related to wave velocity and focused on cues related to damping. In both dissimilarity-rating and identification experiments, the results were independent of the material of the mallet striking the plate (rubber or wood). Listeners thus appear to select acoustical information that is reliable for a given perceptual task. Because the frequency changes responsible for detecting changes in wave velocity can also be due to changes in geometry, they are not as reliable for material identification as are damping cues.


Subject(s)
Acoustics/instrumentation, Auditory Perception, Cues (Psychology), Psychoacoustics, Sound, Acoustic Stimulation, Adult, Elasticity, Equipment Design, Female, Humans, Male, Theoretical Models, Motion (Physics), Pressure, Sound Spectrography, Time Factors, Vibration, Viscosity, Young Adult
18.
Cognition ; 200: 104249, 2020 07.
Article in English | MEDLINE | ID: mdl-32413547

ABSTRACT

Affective vocalisations such as screams and laughs can convey strong emotional content without verbal information. Previous research using morphed vocalisations (e.g. 25% fear/75% anger) has revealed categorical perception of emotion in voices, showing sudden shifts at emotion category boundaries. However, it is currently unknown how further modulation of vocalisations beyond the veridical emotion (e.g. 125% fear) affects perception. Caricatured facial expressions produce emotions that are perceived as more intense and distinctive, with faster recognition relative to the original and anti-caricatured (e.g. 75% fear) emotions, but a similar effect using vocal caricatures has not been previously examined. Furthermore, caricatures can play a key role in assessing how distinctiveness is identified, in particular by evaluating accounts of emotion perception with reference to prototypes (distance from the central stimulus) and exemplars (density of the stimulus space). Stimuli consisted of four emotions (anger, disgust, fear, and pleasure) morphed at 25% intervals between a neutral expression and each emotion from 25% to 125%, and between each pair of emotions. Emotion perception was assessed using emotion intensity ratings, valence and arousal ratings, speeded categorisation and paired similarity ratings. We report two key findings: 1) across tasks, there was a strongly linear effect of caricaturing, with caricatured emotions (125%) perceived as higher in emotion intensity and arousal, and recognised faster compared to the original emotion (100%) and anti-caricatures (25%-75%); 2) our results reveal evidence for a unique contribution of a prototype-based account in emotion recognition. We show for the first time that vocal caricature effects are comparable to those found previously with facial caricatures. 
The set of caricatured vocalisations provided opens a promising line of research for investigating vocal affect perception and emotion-processing deficits in clinical populations.
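Morphing and caricaturing of this kind amount to linear interpolation and extrapolation along a feature trajectory from a neutral expression to the veridical emotion. A schematic sketch (the three acoustic features and their values are purely illustrative; actual voice morphing operates on much richer representations):

```python
def caricature(neutral, emotion, alpha):
    """Morph between a neutral and an emotional feature vector.

    alpha = 0.0 reproduces neutral, 1.0 the original emotion,
    0.25-0.75 gives anti-caricatures, and 1.25 a caricature that
    extrapolates past the veridical emotion along the same trajectory."""
    return [n + alpha * (e - n) for n, e in zip(neutral, emotion)]

# Hypothetical 3-feature vectors (e.g. pitch in Hz, intensity in dB,
# spectral tilt in dB/octave) -- invented for illustration.
neutral = [200.0, 60.0, -6.0]
fear = [300.0, 72.0, -2.0]
anti = caricature(neutral, fear, 0.75)   # anti-caricature
full = caricature(neutral, fear, 1.00)   # veridical emotion
cari = caricature(neutral, fear, 1.25)   # caricature
```

The study's finding is that perceived intensity, arousal, and recognition speed scale roughly linearly with alpha across this 25%-125% range.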


Subject(s)
Social Perception, Voice, Anger, Emotions, Facial Expression, Humans
19.
IEEE Trans Haptics ; 10(1): 113-122, 2017.
Article in English | MEDLINE | ID: mdl-27390182

ABSTRACT

An experiment was conducted to study the effects of force produced by active touch on vibrotactile perceptual thresholds. The task consisted of pressing the fingertip against a flat rigid surface that provided either sinusoidal or broadband vibration. Three force levels were considered, ranging from a light touch to a hard press. Finger contact areas were measured during the experiment, showing a positive correlation with the respective applied forces. Significant effects on thresholds were found for vibration type and force level. Moreover, possibly due to the concurrent effect of large (unconstrained) finger contact areas, active pressing forces, and long-duration stimuli, the measured perceptual thresholds are considerably lower than those previously reported in the literature.


Subject(s)
Fingers/physiology, Sensory Thresholds/physiology, Touch/physiology, Humans, Mechanical Phenomena, Vibration
20.
Elife ; 6, 2017 06 07.
Article in English | MEDLINE | ID: mdl-28590903

ABSTRACT

Seeing a speaker's face enhances speech intelligibility in adverse environments. We investigated the underlying network mechanisms by quantifying local speech representations and directed connectivity in MEG data obtained while human participants listened to speech of varying acoustic SNR and visual context. During high acoustic SNR, speech encoding by temporally entrained brain activity was strong in temporal and inferior frontal cortex, while during low SNR, strong entrainment emerged in premotor and superior frontal cortex. These changes in local encoding were accompanied by changes in directed connectivity along the ventral stream and the auditory-premotor axis. Importantly, the behavioral benefit arising from seeing the speaker's face was not predicted by changes in local encoding but rather by enhanced functional connectivity between temporal and inferior frontal cortex. Our results demonstrate a role of auditory-frontal interactions in visual speech representations and suggest that functional connectivity along the ventral pathway facilitates speech comprehension in multisensory environments.


Subject(s)
Auditory Perception, Frontal Lobe/physiology, Speech Perception, Temporal Lobe/physiology, Visual Perception, Adolescent, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Young Adult