Results 1 - 20 of 130
1.
Brain Cogn ; 178: 106180, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38815526

ABSTRACT

Our ability to merge information from different senses into a unified percept is a crucial perceptual process for efficient interaction with our multisensory environment. Yet, the developmental process underlying how the brain implements multisensory integration (MSI) remains poorly understood. This cross-sectional study aims to characterize the developmental trajectory of neural responses to audiovisual events in 131 individuals aged 3 months to 30 years. Electroencephalography (EEG) was recorded during a passive task including simple auditory, visual, and audiovisual stimuli. In addition to examining age-related variations in MSI responses, we investigated Event-Related Potentials (ERPs) elicited by auditory and visual stimulation alone, both to depict the typical developmental trajectory of unisensory processing from infancy to adulthood within our sample and to contextualize the maturation of MSI relative to unisensory development. Comparing the neural response to audiovisual stimuli with the sum of the unisensory responses revealed signs of MSI in the ERPs, specifically between the P2 and N2 components (P2 effect). Furthermore, adult-like MSI responses emerged relatively late in development, around 8 years of age. The automatic integration of simple audiovisual stimuli is thus a protracted developmental process that emerges during childhood and continues to mature during adolescence, with ERP latencies decreasing with age.
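As a rough illustration of the additive criterion used in this study (comparing the audiovisual response with the sum of the unisensory responses), here is a minimal Python sketch; the ERP arrays and the size of the effect are simulated placeholders, not the study's data.

```python
import numpy as np

def msi_difference(erp_av, erp_a, erp_v):
    """Super- (or sub-) additive MSI effect at each channel/time point."""
    return erp_av - (erp_a + erp_v)

# Hypothetical trial-averaged ERPs: (n_channels, n_times).
rng = np.random.default_rng(0)
erp_a = rng.normal(size=(64, 500))    # auditory-only ERP
erp_v = rng.normal(size=(64, 500))    # visual-only ERP
erp_av = erp_a + erp_v + 0.5          # audiovisual ERP with a built-in MSI effect
msi = msi_difference(erp_av, erp_a, erp_v)
print(msi.mean())  # non-zero values index integration beyond summation
```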


Subject(s)
Acoustic Stimulation , Auditory Perception , Electroencephalography , Evoked Potentials , Photic Stimulation , Visual Perception , Humans , Adult , Female , Male , Infant , Electroencephalography/methods , Auditory Perception/physiology , Visual Perception/physiology , Adolescent , Child , Child, Preschool , Young Adult , Evoked Potentials/physiology , Photic Stimulation/methods , Cross-Sectional Studies , Acoustic Stimulation/methods , Brain/physiology
2.
Emotion ; 2024 Feb 26.
Article in English | MEDLINE | ID: mdl-38407120

ABSTRACT

The ability to reliably discriminate vocal expressions of emotion is crucial for successful social interactions. This process is arguably even more crucial for blind individuals, who cannot extract social information from faces and bodies and therefore chiefly rely on voices to infer the emotional state of their interlocutors. Blind individuals have demonstrated superior abilities in several aspects of auditory perception, but research on their ability to discriminate vocal features is still scarce and has yielded mixed results. Here, we used a gating psychophysical paradigm to test whether early blind people differ from individually matched sighted controls in the recognition of emotional expressions. We presented segments of nonlinguistic emotional vocalizations of increasing duration (100-400 ms), portraying five basic emotions (fear, happiness, sadness, disgust, and anger), and asked participants to perform an explicit emotion categorization task. We then calculated sensitivity indices and confusion patterns from their performance. Surprisingly, blind people showed lower performance than controls in discriminating specific vocal emotions: the sighted group was better at discriminating angry and fearful expressions, with no between-group differences for the other emotions. This result supports the view that vision plays a calibrating role specifically for threat-related emotions. (PsycInfo Database Record (c) 2024 APA, all rights reserved).
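For readers unfamiliar with the sensitivity indices mentioned above, here is a minimal sketch of a d' computation with a standard log-linear correction; the trial counts are invented for illustration and do not come from the study.

```python
from scipy.stats import norm

def dprime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' with a log-linear correction for extreme rates."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# e.g., one participant's "fear" responses at the shortest (100 ms) gate
print(dprime(hits=18, misses=12, false_alarms=10, correct_rejections=80))
```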

3.
Curr Biol ; 34(1): 46-55.e4, 2024 01 08.
Article in English | MEDLINE | ID: mdl-38096819

ABSTRACT

Voices are the most relevant social sounds for humans and therefore have crucial adaptive value in development. Neuroimaging studies in adults have demonstrated the existence of regions in the superior temporal sulcus that respond preferentially to voices. Yet, whether voices represent a functionally specific category in the young infant's mind is largely unknown. We developed a highly sensitive paradigm relying on fast periodic auditory stimulation (FPAS) combined with scalp electroencephalography (EEG) to demonstrate that the infant brain implements a reliable preferential response to voices early in life. Twenty-three 4-month-old infants listened to sequences containing non-vocal sounds from different categories presented at 3.33 Hz, with highly heterogeneous vocal sounds appearing every third stimulus (1.11 Hz). We were able to isolate a voice-selective response over temporal regions, and individual voice-selective responses were found in most infants within only a few minutes of stimulation. This selective response was significantly reduced for the same frequency-scrambled sounds, indicating that voice selectivity is not simply driven by the envelope and the spectral content of the sounds. Such a robust selective response to voices as early as 4 months of age suggests that the infant brain is endowed with the ability to rapidly develop a functional selectivity to this socially relevant category of sounds.
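The FPAS design described above can be sketched as follows; this is an assumed reconstruction of the sequence logic (filenames and counts are placeholders), not the authors' stimulation code.

```python
import random

# Base rate 3.33 Hz; every third stimulus is vocal, so the voice-selective
# response is expected at 3.33 / 3 = 1.11 Hz.
voices = [f"voice_{i}.wav" for i in range(24)]      # placeholder filenames
nonvocal = [f"object_{i}.wav" for i in range(48)]   # placeholder filenames

def build_sequence(n_stimuli=120):
    seq = []
    for i in range(n_stimuli):
        pool = voices if i % 3 == 2 else nonvocal   # every 3rd stimulus vocal
        seq.append(random.choice(pool))
    return seq

sequence = build_sequence()
soa = 1 / 3.33  # ~300 ms stimulus onset asynchrony
```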


Subject(s)
Auditory Perception , Voice , Adult , Infant , Humans , Auditory Perception/physiology , Brain/physiology , Temporal Lobe/physiology , Acoustic Stimulation , Brain Mapping
4.
Brain Topogr ; 36(6): 854-869, 2023 11.
Article in English | MEDLINE | ID: mdl-37639111

ABSTRACT

Seamlessly extracting emotional information from voices is crucial for efficient interpersonal communication. However, it remains unclear how the brain categorizes vocal expressions of emotion beyond the processing of their acoustic features. In this study, we developed a new approach combining electroencephalographic (EEG) recordings in humans with a frequency-tagging paradigm to 'tag' automatic neural responses to specific categories of emotion expression. Participants were presented with a periodic stream of heterogeneous non-verbal emotional vocalizations belonging to five emotion categories (anger, disgust, fear, happiness, and sadness) at 2.5 Hz (stimulus length of 350 ms with a 50 ms silent gap between stimuli). Importantly, and unknown to the participants, a specific emotion category appeared at a target presentation rate of 0.83 Hz, which would elicit an additional response in the EEG spectrum only if the brain discriminates the target emotion category from the other categories and generalizes across heterogeneous exemplars of it. Stimuli were matched across emotion categories for harmonicity-to-noise ratio, spectral center of gravity, and pitch. Additionally, participants were presented with a scrambled version of the stimuli with identical spectral content and periodicity but disrupted intelligibility. The two types of sequences had comparable envelopes and comparable early auditory peripheral processing, as estimated by simulating the cochlear response. In addition to responses at the general presentation frequency (2.5 Hz) in both intact and scrambled sequences, we observed a greater peak in the EEG spectrum at the target emotion presentation rate (0.83 Hz) and its harmonics for the intact compared with the scrambled sequences. This greater response at the target frequency, together with our stimulus-matching procedure, suggests that the categorical brain response elicited by a specific emotion is at least partially independent of the low-level acoustic features of the sounds. Moreover, responses at the presentation rates of fearful and happy vocalizations showed different topographies and temporal dynamics, suggesting that different discrete emotions are represented differently in the brain. Our paradigm reveals the brain's ability to categorize non-verbal vocal emotion expressions objectively (at a predefined frequency of interest), automatically, rapidly (within a few minutes of recording), and robustly (with a high signal-to-noise ratio), making it a useful tool for studying vocal emotion processing and auditory categorization in general, including in populations where behavioral assessments are more challenging.
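A common way to quantify such a frequency-tagged response is the signal-to-noise ratio of the target bin relative to neighboring bins; the sketch below assumes this standard approach (the spectrum here is random noise, for illustration only).

```python
import numpy as np

def snr_at_frequency(spectrum, freqs, target, n_neighbors=10, skip=1):
    """Amplitude at the target bin divided by the mean of surrounding bins,
    skipping the bins immediately adjacent to the target."""
    idx = int(np.argmin(np.abs(freqs - target)))
    lo = spectrum[idx - skip - n_neighbors: idx - skip]
    hi = spectrum[idx + skip + 1: idx + skip + 1 + n_neighbors]
    return spectrum[idx] / np.mean(np.concatenate([lo, hi]))

fs, n = 512, 512 * 60                        # 60 s of simulated EEG
freqs = np.fft.rfftfreq(n, d=1 / fs)
spectrum = np.abs(np.fft.rfft(np.random.randn(n)))
print(snr_at_frequency(spectrum, freqs, target=0.83))  # target emotion rate
```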


Subject(s)
Brain , Emotions , Humans , Emotions/physiology , Brain/physiology , Anger , Happiness , Fear
5.
Brain Lang ; 243: 105298, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37399687

ABSTRACT

Dual Coding Theories (DCT) suggest that meaning is represented in the brain by a double code: a language-derived code in the Anterior Temporal Lobe (ATL) and a sensory-derived code in perceptual and motor regions. Concrete concepts should activate both codes, while abstract ones rely solely on the linguistic code. To test these hypotheses, in the present magnetoencephalography (MEG) experiment participants judged whether visually presented words relate to the senses while we recorded brain responses to abstract and concrete semantic components derived from 65 independently rated semantic features. Results showed early involvement of anterior-temporal and inferior-frontal brain areas in encoding both abstract and concrete semantic information. At later stages, occipital and occipito-temporal regions showed greater responses to concrete than to abstract features. These findings suggest that the concreteness of words is processed first through a transmodal/linguistic code, housed in frontotemporal brain systems, and only later through an imagistic/sensorimotor code in perceptual regions.
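The abstract does not specify the exact model, but analyses of this kind are often implemented as regularized encoding models mapping semantic features to neural responses; the sketch below is one such assumption, with fully simulated data.

```python
import numpy as np
from sklearn.linear_model import RidgeCV

rng = np.random.default_rng(1)
n_words, n_features = 300, 65                 # 65 rated features, as in the study
features = rng.normal(size=(n_words, n_features))             # semantic ratings
weights = rng.normal(size=n_features)
meg = features @ weights + rng.normal(scale=5, size=n_words)  # one sensor/time

model = RidgeCV(alphas=np.logspace(-2, 4, 13)).fit(features, meg)
print(model.score(features, meg))  # in practice, evaluate on held-out words
```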

6.
PLoS Biol ; 21(7): e3001930, 2023 07.
Article in English | MEDLINE | ID: mdl-37490508

ABSTRACT

We can sense an object's shape by vision or touch. Previous studies suggested that the inferolateral occipitotemporal cortex (ILOTC) implements supramodal shape representations, as it responds more to seeing or touching objects than to shapeless textures. However, such activation in the anterior portion of the ventral visual pathway could reflect the conceptual representation of an object or visual imagery triggered by touching an object. We addressed these possibilities by directly comparing shape and conceptual representations of objects in early blind participants (who lack visual experience and imagery) and sighted participants. We found that the bilateral ILOTC in both groups showed stronger activation during a shape verification task than during a conceptual verification task performed on the names of the same man-made objects. Moreover, the distributed activity in the ILOTC encoded shape similarity but not conceptual association among objects. Beyond the ILOTC, we also found shape representation in both groups' bilateral ventral premotor cortices and intraparietal sulcus (IPS), a frontoparietal circuit related to object grasping and haptic processing. In contrast, the conceptual verification task activated both groups' left perisylvian brain network related to language processing and, interestingly, the cuneus in early blind participants only. The ILOTC had stronger functional connectivity to the frontoparietal circuit than to the left perisylvian network, forming a modular structure specialized in shape representation. Our results provide strong evidence that the ILOTC selectively implements shape representation independently of visual experience, and that this unique functionality likely derives from its privileged connection to the frontoparietal haptic circuit.
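The contrast between shape and conceptual coding can be illustrated with a representational similarity analysis (RSA) sketch like the one below; all patterns and model matrices are simulated stand-ins, not the study's data.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_objects, n_voxels = 20, 150
patterns = rng.normal(size=(n_objects, n_voxels))  # ILOTC multivoxel patterns

neural_rdm = pdist(patterns, metric="correlation")
shape_rdm = pdist(rng.normal(size=(n_objects, 5)))    # shape model (placeholder)
concept_rdm = pdist(rng.normal(size=(n_objects, 5)))  # conceptual model (placeholder)

print(spearmanr(neural_rdm, shape_rdm).correlation)    # shape coding
print(spearmanr(neural_rdm, concept_rdm).correlation)  # conceptual coding
```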


Subject(s)
Cerebral Cortex , Touch Perception , Humans , Occipital Lobe , Touch Perception/physiology , Touch/physiology , Parietal Lobe/physiology , Blindness , Magnetic Resonance Imaging/methods , Brain Mapping
7.
Brain Sci ; 13(2), 2023 Jan 18.
Article in English | MEDLINE | ID: mdl-36831705

ABSTRACT

Successfully engaging in social communication requires efficient processing of subtle socio-communicative cues. Voices convey a wealth of social information, such as the gender, identity, and emotional state of the speaker. We tested whether the brain can systematically and automatically differentiate and track a periodic stream of emotional utterances embedded in a series of neutral vocal utterances. We recorded frequency-tagged EEG responses of 20 neurotypical male adults while presenting streams of neutral utterances at a 4 Hz base rate, interleaved with emotional utterances every third stimulus, i.e., at a 1.333 Hz oddball frequency. Four emotions (happiness, sadness, anger, and fear) were presented in separate streams as different conditions. To control for the impact of low-level acoustic cues, we maximized variability among the stimuli and included a control condition with scrambled utterances; this scrambling preserves low-level acoustic characteristics while rendering the emotional character unrecognizable. Results revealed significant oddball EEG responses for all conditions, indicating that every emotion category can be discriminated from the neutral stimuli, and every emotional oddball response was significantly higher than the response for the scrambled utterances. These findings demonstrate that emotion discrimination is fast, automatic, and not merely driven by low-level perceptual features. Finally, we present a new database of short emotional utterances for vocal emotion research (EVID), together with an innovative frequency-tagging EEG paradigm for implicit vocal emotion discrimination.
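Oddball responses in such designs are often tested by z-scoring the oddball frequency bins (and their harmonics, excluding those coinciding with the base rate) against neighboring noise bins; the sketch below assumes that convention, on simulated data.

```python
import numpy as np

def zscore_bin(spectrum, idx, n_neighbors=10, skip=1):
    """Z-score of one frequency bin against surrounding noise bins."""
    neigh = np.r_[spectrum[idx - skip - n_neighbors: idx - skip],
                  spectrum[idx + skip + 1: idx + skip + 1 + n_neighbors]]
    return (spectrum[idx] - neigh.mean()) / neigh.std()

fs, n = 256, 256 * 120                      # 120 s of simulated EEG
freqs = np.fft.rfftfreq(n, d=1 / fs)
spectrum = np.abs(np.fft.rfft(np.random.randn(n)))
# Oddball harmonics at multiples of 4/3 Hz; every 3rd one falls on the
# 4 Hz base rate and is excluded.
harmonics = [h * 4 / 3 for h in range(1, 9) if h % 3 != 0]
z = [zscore_bin(spectrum, int(np.argmin(np.abs(freqs - f)))) for f in harmonics]
print(np.mean(z))
```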

8.
Neuroimage ; 265: 119790, 2023 01.
Article in English | MEDLINE | ID: mdl-36476566

ABSTRACT

Alpha oscillatory activity is thought to contribute to visual expectancy through the engagement of task-relevant occipital regions. In early blindness, occipital alpha oscillations are systematically reduced, suggesting that occipital alpha depends on visual experience. However, it remains possible that alpha activity could serve expectancy in non-visual modalities in blind people, especially considering that previous research has shown the recruitment of the occipital cortex for non-visual processing. To test this idea, we used electroencephalography to examine whether alpha oscillations reflected a differential recruitment of task-relevant regions between expected and unexpected conditions in two haptic tasks (texture and shape discrimination). As expected, sensor-level analyses showed that alpha suppression in parieto-occipital sites was significantly reduced in early blind individuals compared with sighted participants. The source reconstruction analysis revealed that group differences originated in the middle occipital cortex. In that region, expected trials evoked higher alpha desynchronization than unexpected trials in the early blind group only. Our results support the role of alpha rhythms in the recruitment of occipital areas in early blind participants, and for the first time we show that although posterior alpha activity is reduced in blindness, it remains sensitive to expectancy factors. Our findings therefore suggest that occipital alpha activity is involved in tactile expectancy in blind individuals, serving a similar function to visual anticipation in sighted populations but switched to the tactile modality. Altogether, our results indicate that expectancy-dependent modulation of alpha oscillatory activity does not depend on visual experience. SIGNIFICANCE STATEMENT: Are posterior alpha oscillations and their role in expectancy and anticipation dependent on visual experience? Our results show that tactile expectancy can modulate posterior alpha activity in blind (but not sighted) individuals through the engagement of occipital regions, suggesting that in early blindness, alpha oscillations maintain their proposed role in visual anticipation but subserve tactile processing. Our findings bring a new understanding of the role that alpha oscillatory activity plays in blindness, contrasting with the view that alpha activity is task unspecific in blind populations.
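Alpha desynchronization of the kind analyzed here is conventionally expressed as the percentage change in 8-12 Hz power relative to a pre-stimulus baseline; the sketch below follows that convention on simulated signals.

```python
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs):
    """Mean 8-12 Hz power of a 1-D EEG segment."""
    freqs, psd = welch(signal, fs=fs, nperseg=len(signal))
    band = (freqs >= 8) & (freqs <= 12)
    return psd[band].mean()

fs = 512
baseline = np.random.randn(fs)       # 1 s pre-stimulus EEG (simulated)
task = 0.7 * np.random.randn(fs)     # 1 s post-stimulus EEG (simulated)
erd = 100 * (alpha_power(task, fs) - alpha_power(baseline, fs)) / alpha_power(baseline, fs)
print(f"ERD: {erd:.1f}%")  # negative values indicate alpha suppression
```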


Subject(s)
Touch Perception , Touch , Humans , Touch/physiology , Blindness , Occipital Lobe , Touch Perception/physiology , Electroencephalography
9.
Elife ; 11, 2022 09 07.
Article in English | MEDLINE | ID: mdl-36070354

ABSTRACT

The ventral occipito-temporal cortex (VOTC) reliably encodes auditory categories in people born blind, using a representational structure partially similar to the one found in vision (Mattioni et al., 2020). Here, using a combination of univariate and multivoxel analyses applied to fMRI data, we extend our previous findings by comprehensively investigating how early- and late-acquired blindness affect the cortical regions coding for the deprived and the remaining senses. First, we show an enhanced univariate response to sounds in part of the occipital cortex of both blind groups, concomitant with reduced auditory responses in temporal regions. We then reveal that the representation of sound categories in occipital and temporal regions is more similar in blind than in sighted subjects. What could drive this enhanced similarity? The multivoxel encoding of the 'human voice' category that we observed in the temporal cortex of all sighted and blind groups is enhanced in the occipital regions of the blind groups, suggesting that the representation of vocal information is more similar between occipital and temporal regions in blind compared with sighted individuals. We additionally show that blindness does not affect the encoding of the acoustic properties of our sounds (e.g., pitch, harmonicity) in occipital or temporal regions, but instead selectively alters the categorical coding of the voice category itself. These results suggest a functionally congruent interplay between the reorganization of occipital and temporal regions following visual deprivation, across the lifespan.
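The multivoxel analyses referred to above typically rest on cross-validated pattern classification; here is a minimal sketch of that generic logic (simulated patterns, and a linear SVM chosen as an assumption, not the paper's stated classifier).

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_trials, n_voxels = 160, 200
X = rng.normal(size=(n_trials, n_voxels))   # one occipital pattern per trial
y = rng.integers(0, 8, size=n_trials)       # 8 sound categories (placeholder)

scores = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=5)
print(scores.mean())  # compare against the 1/8 chance level
```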


Subject(s)
Blindness , Temporal Lobe , Acoustic Stimulation , Humans , Occipital Lobe/diagnostic imaging , Occipital Lobe/physiology , Sound , Temporal Lobe/diagnostic imaging , Temporal Lobe/physiology
10.
Handb Clin Neurol ; 187: 127-143, 2022.
Article in English | MEDLINE | ID: mdl-35964967

ABSTRACT

In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to non-auditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and the expression of selective cross-modal plasticity in the superior temporal cortex.


Subject(s)
Deafness , Brain , Brain Mapping , Hearing Tests , Humans , Temporal Lobe
11.
Eur J Neurosci ; 56(4): 4486-4500, 2022 08.
Article in English | MEDLINE | ID: mdl-35792656

ABSTRACT

It is well documented that early sensory loss typically alters brain morphology in the areas associated with the lost sense. However, much less is known about its impact on the remaining sensory regions. We therefore investigated whether congenitally blind (CB) individuals show alterations in the olfactory system by comparing cortical morphology and olfactory bulb (OB) volume between 16 congenitally blind individuals and 16 matched sighted controls. Our results showed that CB individuals not only exhibited smaller OBs but also altered cortical density in some higher-order olfactory processing centres, while cortical thickness was unchanged. These findings suggest that a lifelong absence of visual input leads to morphological alterations in olfactory processing areas.


Subject(s)
Magnetic Resonance Imaging , Smell , Blindness , Humans , Magnetic Resonance Imaging/methods , Olfactory Bulb
12.
JAMA Netw Open ; 5(7): e2221149, 2022 07 01.
Article in English | MEDLINE | ID: mdl-35819789
13.
J Neurosci ; 42(23): 4652-4668, 2022 06 08.
Article in English | MEDLINE | ID: mdl-35501150

ABSTRACT

hMT+/V5 is a region in the middle occipitotemporal cortex that responds preferentially to visual motion in sighted people. In cases of early visual deprivation, hMT+/V5 enhances its response to moving sounds. Whether hMT+/V5 contains information about motion direction, and whether the functional enhancement observed in the blind is motion specific or also involves sound-source location, remains unresolved. Moreover, the impact of this cross-modal reorganization of hMT+/V5 on the regions typically supporting auditory motion processing, such as the human planum temporale (hPT), remains equivocal. We used a combined functional and diffusion-weighted MRI approach and individual in-ear recordings to study the impact of early blindness on the brain networks supporting spatial hearing in male and female humans. Whole-brain univariate analysis revealed that the anterior portion of hMT+/V5 responded to moving sounds in both sighted and blind people, while the posterior portion was selective to moving sounds only in blind participants. Multivariate decoding analysis revealed that motion-direction and sound-position information was higher in hMT+/V5 and lower in hPT in the blind group. While both groups showed an axis-of-motion organization in hMT+/V5 and hPT, this organization was reduced in the hPT of blind people. Diffusion-weighted MRI revealed that the strength of hMT+/V5-hPT connectivity did not differ between groups, whereas the microstructure of the connections was altered by blindness. Our results suggest that the axis-of-motion organization of hMT+/V5 does not depend on visual experience, but that congenital blindness alters the response properties of the occipitotemporal networks supporting spatial hearing in the sighted. SIGNIFICANCE STATEMENT: Spatial hearing helps living organisms navigate their environment, all the more so in people born blind. How does blindness affect the brain network supporting auditory motion and sound-source location? Our results show that motion-direction and sound-position information was higher in hMT+/V5 and lower in the human planum temporale in blind relative to sighted people, and that this functional reorganization is accompanied by microstructural (but not macrostructural) alterations in their connections. These findings suggest that blindness alters cross-modal responses between connected areas that share the same computational goals.
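The axis-of-motion organization mentioned above can be probed by comparing pattern similarity for opposite directions on the same axis versus directions on different axes; the sketch below illustrates that logic with simulated patterns.

```python
import numpy as np

rng = np.random.default_rng(4)
patterns = {d: rng.normal(size=300) for d in ["left", "right", "up", "down"]}

def pattern_corr(a, b):
    return np.corrcoef(patterns[a], patterns[b])[0, 1]

same_axis = np.mean([pattern_corr("left", "right"), pattern_corr("up", "down")])
diff_axis = np.mean([pattern_corr("left", "up"), pattern_corr("left", "down"),
                     pattern_corr("right", "up"), pattern_corr("right", "down")])
print(same_axis - diff_axis)  # > 0 suggests axis-of-motion organization
```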


Subject(s)
Brain Mapping , Motion Perception , Auditory Perception/physiology , Blindness , Female , Humans , Magnetic Resonance Imaging/methods , Male , Motion Perception/physiology
14.
Neuropsychologia ; 170: 108226, 2022 06 06.
Article in English | MEDLINE | ID: mdl-35358538

ABSTRACT

Synesthesia is an atypical merging of percepts in which a given sensory experience (e.g., words, letters, music) triggers sensations in a different perceptual domain (e.g., color). According to recent estimates, the vast majority of reported cases of synesthesia involve a visual experience. Purely non-visual synesthesia is extremely rare, and to date there is no reported case of a congenitally blind synesthete. Moreover, it has been suggested that congenital blindness impairs the emergence of synesthesia-related phenomena such as multisensory integration and cross-modal correspondences between non-visual senses (e.g., sound-touch). Is visual experience necessary to develop synesthesia? Here we describe the case of a congenitally blind man (CB) reporting a complex synesthetic experience involving numbers, letters, months, and days of the week: each item is associated with a precise position in mental space and with a precise tactile texture. In one experiment, we empirically verified the presence of number-texture and letter-texture synesthesia in CB compared with non-synesthete controls, probing the consistency of item-texture associations across time and demonstrating that synesthesia can develop without vision. Our data fill an important void in current knowledge on synesthesia and shed light on the mechanisms behind sensory crosstalk in the human mind.
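Consistency-over-time tests like the one described are often scored as the proportion of items that receive the same association in two sessions; the sketch below shows that scoring on invented example mappings.

```python
# Illustrative item-texture mappings from two hypothetical sessions.
session_1 = {"1": "smooth", "2": "rough", "A": "grainy", "B": "soft"}
session_2 = {"1": "smooth", "2": "rough", "A": "grainy", "B": "sticky"}

consistency = sum(session_1[k] == session_2[k] for k in session_1) / len(session_1)
print(f"Consistency: {consistency:.0%}")  # compare against non-synesthete controls
```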


Subject(s)
Music , Perceptual Disorders , Touch Perception , Blindness/complications , Color Perception , Humans , Male , Perceptual Disorders/etiology , Synesthesia , Touch
15.
Clin Pharmacol Ther ; 112(6): 1183-1190, 2022 12.
Article in English | MEDLINE | ID: mdl-35253205

ABSTRACT

Since the release of the ICH E9(R1) document (International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use Addendum on Estimands and Sensitivity Analysis in Clinical Trials to the Guideline on Statistical Principles for Clinical Trials) in 2019, the estimand framework has become a fundamental part of clinical trial protocols. In parallel, complex innovative designs have gained popularity in drug development, in particular in early development phases or in difficult experimental situations. While the estimand framework is relevant to any study in which a treatment effect is estimated, experience with its application to these designs is lacking. In a basket trial, for example, should a different estimand be specified for each subpopulation of interest, defined, for example, by cancer site? Or can a single estimand focusing on the general population (defined, for example, by positivity for a certain biomarker) be used? In the case of platform trials, should a different estimand be proposed for each drug investigated? In this work, we discuss possible ways of implementing the estimand framework for different types of complex innovative designs. We consider trials that allow adding or selecting experimental treatment arms, modifying the control arm or the standard of care, and selecting or pooling populations. We also address the potentially data-driven, adaptive selection of estimands in an ongoing trial, and disentangle certain statistical issues that pertain to estimation rather than to estimands, such as the borrowing of nonconcurrent information. We hope this discussion will facilitate the implementation of the estimand framework and its description in the study protocol when the objectives of the trial require complex innovative designs.


Subject(s)
Drug Development , Research Design , Humans , Data Interpretation, Statistical
17.
J Exp Psychol Gen ; 151(3): 731-738, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34498912

ABSTRACT

Abstract words are typically more difficult to identify than concrete words in lexical-decision, word-naming, and recall tasks. This behavioral advantage, known as the concreteness effect, is often taken as evidence for embodied semantics, which emphasizes the role of sensorimotor experience in the comprehension of word meaning. In this view, online sensorimotor simulations triggered by concrete words, but not by abstract words, facilitate access to word meaning and speed up word identification. To test whether perceptual simulation is the driving force behind the concreteness effect, we compared data from early-blind and sighted individuals performing an auditory lexical-decision task. Participants were presented with property words referring to abstract (e.g., "logic"), concrete multimodal (e.g., "spherical"), and concrete unimodal visual (e.g., "blue") concepts. According to the embodied account, the processing advantage for concrete unimodal visual words should disappear in the early blind, because they cannot rely on visual experience and simulation during semantic processing (i.e., purely visual words should be abstract for early-blind people). On the contrary, we found that both sighted and blind individuals were faster when processing multimodal and unimodal visual words than abstract words. This result suggests that the concreteness effect does not depend on perceptual simulations but might be driven by modality-independent properties of word meaning. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
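The behavioral contrast reported here boils down to comparing per-participant reaction times across word categories; below is a minimal sketch with simulated values (the paired t-test is an assumed analysis choice, not necessarily the paper's).

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(5)
rt_abstract = rng.normal(900, 60, size=20)              # ms, one mean per subject
rt_visual = rt_abstract - rng.normal(40, 20, size=20)   # simulated advantage

t, p = ttest_rel(rt_abstract, rt_visual)
print(f"t = {t:.2f}, p = {p:.3f}")  # faster responses to concrete visual words
```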


Subject(s)
Comprehension , Semantics , Blindness , Humans , Reaction Time
18.
Front Med (Lausanne) ; 8: 752021, 2021.
Article in English | MEDLINE | ID: mdl-34869446

ABSTRACT

Patients treated for bilateral congenital cataracts provide a unique model for testing the role of early visual input in shaping the development of the human cortex. Previous studies showed that brief early visual deprivation triggers long-lasting changes in the human visual cortex. However, it remains unknown whether such changes interact with the development of other parts of the cortex. Using high-resolution structural and resting-state fMRI images, we found changes in cortical thickness within, but not limited to, the visual cortex in adult patients who experienced transient visual deprivation early in life as a result of congenital cataracts. Importantly, the covariation of cortical thickness across regions was also altered in the patients. The areas with altered cortical thickness in patients also showed differences in functional connectivity between patients and normally sighted controls. Together, these findings point to an impact of early visual deprivation on the interactive development of the human cortex.

19.
Commun Biol ; 4(1): 746, 2021 06 16.
Article in English | MEDLINE | ID: mdl-34135466

ABSTRACT

Our brain constructs reality through narrative and argumentative thought. Some hypotheses hold that these two modes of cognitive functioning are irreducible, reflecting distinct mental operations underlain by separate neural bases; others ascribe both to a unitary neural system dedicated to long-timescale information. We addressed this question by employing inter-subject measures to investigate stimulus-induced neural responses while participants listened to narrative and argumentative texts during fMRI. We found that following both kinds of texts enhanced functional couplings within the frontoparietal control system. However, while narratives specifically implicated the default mode system, arguments specifically induced synchronization between the intraparietal sulcus of the frontoparietal control system and multiple perisylvian areas of the language system. Our findings reconcile the two hypotheses by revealing commonalities and differences between the narrative and argumentative brain networks, showing how diverse mental activities arise from the segregation and integration of existing brain systems.
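Inter-subject measures of the kind used here commonly include inter-subject correlation (ISC), where each participant's regional time course is correlated with the average of all others; the sketch below shows that computation on simulated time courses.

```python
import numpy as np

def isc(timecourses):
    """timecourses: (n_subjects, n_timepoints) array for one region."""
    scores = []
    for s in range(timecourses.shape[0]):
        others = np.delete(timecourses, s, axis=0).mean(axis=0)
        scores.append(np.corrcoef(timecourses[s], others)[0, 1])
    return np.array(scores)

rng = np.random.default_rng(6)
shared = rng.normal(size=500)                   # stimulus-driven signal
data = shared + rng.normal(size=(15, 500))      # 15 simulated listeners
print(isc(data).mean())
```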


Subject(s)
Brain/physiology , Cognition/physiology , Thinking/physiology , Adult , Aged , Brain Mapping/methods , Female , Humans , Language , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Net/physiology , Young Adult
20.
eNeuro ; 8(3), 2021.
Article in English | MEDLINE | ID: mdl-34016602

ABSTRACT

Voices are arguably among the most relevant sounds in humans' everyday life, and several studies have suggested the existence of voice-selective regions in the human brain. Despite two decades of research, defining the human brain regions supporting voice recognition remains challenging. Moreover, whether neural selectivity to voices is merely driven by acoustic properties specific to human voices (e.g., spectrogram, harmonicity) or also reflects a higher-level categorization response is still under debate. Here, we objectively measured rapid automatic categorization responses to human voices with fast periodic auditory stimulation (FPAS) combined with electroencephalography (EEG). Participants were tested with stimulation sequences containing heterogeneous non-vocal sounds from different categories presented at 4 Hz (i.e., four stimuli per second), with vocal sounds appearing every third stimulus (1.333 Hz). A few minutes of stimulation were sufficient to elicit a robust 1.333 Hz voice-selective focal brain response over superior temporal regions in individual participants. This response was virtually absent for sequences using frequency-scrambled sounds, but was clearly observed when voices were presented among sounds from musical instruments matched for pitch and harmonicity-to-noise ratio (HNR). Overall, our FPAS paradigm demonstrates that the human brain seamlessly categorizes human voices relative to other sounds, including musical-instrument sounds matched for low-level acoustic features, and that voice-selective responses are at least partially independent of low-level acoustic features, making FPAS a powerful and versatile tool for understanding human auditory categorization in general.


Subject(s)
Auditory Perception , Brain , Acoustic Stimulation , Humans , Sound , Temporal Lobe