Results 1 - 10 of 10
1.
Proc Natl Acad Sci U S A ; 117(37): 23011-23020, 2020 09 15.
Article in English | MEDLINE | ID: mdl-32839334

ABSTRACT

The fusiform face area responds selectively to faces and is causally involved in face perception. How does face-selectivity in the fusiform arise in development, and why does it develop so systematically in the same location across individuals? Preferential cortical responses to faces develop early in infancy, yet evidence is conflicting on the central question of whether visual experience with faces is necessary. Here, we revisit this question by scanning congenitally blind individuals with fMRI while they haptically explored 3D-printed faces and other stimuli. We found robust face-selective responses in the lateral fusiform gyrus of individual blind participants during haptic exploration of stimuli, indicating that neither visual experience with faces nor fovea-biased inputs is necessary for face-selectivity to arise in the lateral fusiform gyrus. Our results instead suggest a role for long-range connectivity in specifying the location of face-selectivity in the human brain.


Subject(s)
Face/physiology , Facial Recognition/physiology , Temporal Lobe/physiology , Visual Perception/physiology , Adult , Brain Mapping/methods , Female , Humans , Magnetic Resonance Imaging/methods , Male , Pattern Recognition, Visual/physiology , Photic Stimulation/methods , Recognition, Psychology/physiology
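As a concrete illustration of the measurement behind these findings, a voxel's face-selectivity reduces to a face-versus-other contrast on its response estimates. Below is a minimal Python sketch of that step only, not the authors' actual pipeline; the beta arrays, dimensions, and threshold are synthetic placeholders.

```python
# Minimal sketch: locate haptic face-selective voxels via a face-vs-other
# contrast on per-run GLM beta estimates (synthetic stand-in data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_runs, n_voxels = 8, 5000                       # toy dimensions
betas_face = rng.normal(0.5, 1.0, (n_runs, n_voxels))   # hypothetical face betas
betas_other = rng.normal(0.0, 1.0, (n_runs, n_voxels))  # hypothetical non-face betas

# Paired t-test across runs: face > other at each voxel
t, p = stats.ttest_rel(betas_face, betas_other, axis=0)
selective = (t > 0) & (p < 0.001)                # uncorrected threshold, illustration only
print(f"{selective.sum()} candidate face-selective voxels")
```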
2.
Cogn Neuropsychol ; 38(7-8): 468-489, 2021.
Article in English | MEDLINE | ID: mdl-35729704

ABSTRACT

How does the auditory system categorize natural sounds? Here we apply multimodal neuroimaging to illustrate the progression from acoustic to semantically dominated representations. Combining magnetoencephalographic (MEG) and functional magnetic resonance imaging (fMRI) scans of observers listening to naturalistic sounds, we found superior temporal responses beginning ∼55 ms post-stimulus onset, spreading to extratemporal cortices by ∼100 ms. Early regions were distinguished less by onset/peak latency than by functional properties and overall temporal response profiles. Early acoustically-dominated representations trended systematically toward category dominance over time (after ∼200 ms) and space (beyond primary cortex). Semantic category representation was spatially specific: Vocalizations were preferentially distinguished in frontotemporal voice-selective regions and the fusiform; scenes and objects were distinguished in parahippocampal and medial place areas. Our results are consistent with real-world events coded via an extended auditory processing hierarchy, in which acoustic representations rapidly enter multiple streams specialized by category, including areas typically considered visual cortex.


Subject(s)
Brain Mapping , Semantics , Acoustic Stimulation/methods , Auditory Perception/physiology , Brain Mapping/methods , Cochlea , Humans , Magnetic Resonance Imaging/methods , Magnetoencephalography/methods
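As an illustration of the time-resolved decoding logic behind latency claims like the ∼55 ms onset above, here is a minimal Python sketch with entirely synthetic MEG epochs; the accuracy criterion and dimensions are arbitrary stand-ins, not the study's statistics.

```python
# Sketch: decode sound category at each time point and read off an onset.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_sensors, n_times = 200, 64, 120      # toy MEG epochs
X = rng.normal(size=(n_trials, n_sensors, n_times))
y = rng.integers(0, 2, n_trials)                 # two sound categories
X[y == 1, :, 40:] += 0.3                         # inject a late category signal

# Train/test a classifier independently at each time point
acc = np.array([
    cross_val_score(LinearSVC(dual=False), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
onset = np.argmax(acc > 0.6)                     # first bin above an arbitrary criterion
print(f"decoding exceeds criterion at time bin {onset}")
```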
3.
eNeuro ; 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39122554

ABSTRACT

Reverberation, a ubiquitous feature of real-world acoustic environments, exhibits statistical regularities that human listeners leverage to self-orient, facilitate auditory perception, and understand their environment. Despite extensive research on sound source representation in the auditory system, it remains unclear how the brain represents real-world reverberant environments. Here, we characterized the neural response to reverberation of varying realism by applying multivariate pattern analysis to electroencephalographic (EEG) brain signals. Human listeners (12 male and 8 female) heard speech samples convolved with real-world and synthetic reverberant impulse responses and judged whether the speech samples were in a "real" or "fake" environment, focusing on the reverberant background rather than the properties of speech itself. Participants distinguished real from synthetic reverberation with ∼75% accuracy; EEG decoding revealed a multistage time course, with dissociable components early in the stimulus presentation and later in the peri-offset stage. The early component predominantly occurred in temporal electrode clusters, while the later component was prominent in centro-parietal clusters. These findings suggest distinct neural stages in perceiving natural acoustic environments, likely reflecting sensory encoding and higher-level perceptual decision-making processes. Overall, our findings provide evidence that reverberation, rather than being largely suppressed as a noise-like signal, carries relevant environmental information and gains representation along the auditory system. This understanding also offers various applications: it provides insights for using reverberation as a cue to aid navigation for blind and visually impaired people, and it can help enhance the perception of realism in immersive virtual reality, gaming, music, and film production.

Significance Statement: In real-world environments, multiple acoustic signals coexist, typically reflecting off innumerable surrounding surfaces as reverberation. While reverberation is a rich environmental cue and a ubiquitous feature of acoustic spaces, we do not fully understand how our brains process a signal usually treated as a distortion to be ignored. When asking human participants to make perceptual judgments about reverberant sounds during EEG recordings, we identified distinct, sequential stages of neural processing: the perception of acoustic realism first involves encoding low-level reverberation acoustics and then integrating them into a coherent representation of the environment. This knowledge provides insights for enhancing realism in immersive virtual reality, music, and film production, and for using reverberation to guide navigation for blind and visually impaired people.
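For concreteness, the stimulus-construction step described above, speech convolved with a room impulse response (RIR), can be sketched as follows. The file names are placeholders, not the study's materials, and mono recordings are assumed.

```python
# Sketch: render reverberant speech by convolving a dry recording with an RIR.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

fs, speech = wavfile.read("dry_speech.wav")      # hypothetical dry mono recording
_, rir = wavfile.read("room_rir.wav")            # real-world or synthetic mono RIR

speech = speech.astype(np.float64)
rir = rir.astype(np.float64)

reverberant = fftconvolve(speech, rir)           # apply the room acoustics
reverberant /= np.max(np.abs(reverberant))       # normalize to avoid clipping
wavfile.write("reverberant_speech.wav", fs,
              (reverberant * 32767).astype(np.int16))
```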

4.
Front Neurosci ; 18: 1288635, 2024.
Article in English | MEDLINE | ID: mdl-38440393

ABSTRACT

Active echolocation allows blind individuals to explore their surroundings via self-generated sounds, similarly to dolphins and other echolocating animals. Echolocators emit sounds, such as finger snaps or mouth clicks, and parse the returning echoes for information about their surroundings, including the location, size, and material composition of objects. Because a crucial function of perceiving objects is to enable effective interaction with them, it is important to understand the degree to which three-dimensional shape information extracted from object echoes is useful in the context of other modalities such as haptics or vision. Here, we investigated the resolution of crossmodal transfer of object-level information between acoustic echoes and other senses. First, in a delayed match-to-sample task, blind expert echolocators and sighted control participants inspected common (everyday) and novel target objects using echolocation, then distinguished the target object from a distractor using only haptic information. For blind participants, discrimination accuracy was overall above chance and similar for both common and novel objects, whereas as a group, sighted participants performed above chance for the common, but not novel objects, suggesting that some coarse object information (a) is available to both expert blind and novice sighted echolocators, (b) transfers from auditory to haptic modalities, and (c) may be facilitated by prior object familiarity and/or material differences, particularly for novice echolocators. Next, to estimate an equivalent resolution in visual terms, we briefly presented blurred images of the novel stimuli to sighted participants (N = 22), who then performed the same haptic discrimination task. We found that visuo-haptic discrimination performance approximately matched echo-haptic discrimination for a Gaussian blur kernel σ of ~2.5°. In this way, by matching visual and echo-based contributions to object discrimination, we can estimate the quality of echoacoustic information that transfers to other sensory modalities, predict theoretical bounds on perception, and inform the design of assistive techniques and technology available for blind individuals.
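The visual-equivalence manipulation comes down to blurring images with a Gaussian kernel specified in degrees of visual angle. Here is a minimal Python sketch of that conversion and blur, assuming a hypothetical display geometry; only the ~2.5° sigma comes from the abstract above.

```python
# Sketch: blur an image with a Gaussian kernel given in degrees of visual angle.
import numpy as np
from scipy.ndimage import gaussian_filter

def deg_to_px(deg, screen_px=1920, screen_cm=52.0, view_cm=60.0):
    """Convert degrees of visual angle to pixels (hypothetical display)."""
    px_per_cm = screen_px / screen_cm
    cm = 2 * view_cm * np.tan(np.radians(deg) / 2)
    return cm * px_per_cm

image = np.random.rand(512, 512)                 # stand-in for a stimulus photograph
sigma_px = deg_to_px(2.5)                        # the ~2.5 deg equivalence point
blurred = gaussian_filter(image, sigma=sigma_px)
```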

5.
Exp Brain Res ; 216(4): 483-8, 2012 Feb.
Article in English | MEDLINE | ID: mdl-22101568

ABSTRACT

Echolocating organisms represent their external environment using reflected auditory information from emitted vocalizations. This ability, long known in various non-human species, has also been documented in some blind humans as an aid to navigation, as well as object detection and coarse localization. Surprisingly, our understanding of the basic acuity attainable by practitioners, the most fundamental underpinning of echoic spatial perception, remains crude. We found that experts were able to discriminate horizontal offsets of stimuli as small as ~1.2° auditory angle in the frontomedial plane, a resolution approaching the maximum measured precision of human spatial hearing and comparable to that found in bats performing similar tasks. Furthermore, we found a strong correlation between echolocation acuity and age of blindness onset. This first measure of functional spatial resolution in a population of expert echolocators demonstrates precision comparable to that found in the visual periphery of sighted individuals.


Subject(s)
Auditory Perception/physiology , Blindness/psychology , Echolocation/physiology , Orientation/physiology , Space Perception/physiology , Adaptation, Physiological/physiology , Adaptation, Psychological/physiology , Adult , Animals , Blindness/rehabilitation , Humans , Male , Pitch Discrimination/physiology , Young Adult
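Acuity figures like the ~1.2° threshold are typically read off a fitted psychometric function. The following Python sketch shows such a fit on invented proportion-correct data, assuming a two-alternative forced-choice design; none of the numbers are the study's.

```python
# Sketch: estimate a discrimination threshold from a psychometric fit.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

offsets = np.array([0.5, 1.0, 1.5, 2.0, 3.0, 4.0])         # deg of auditory angle
p_correct = np.array([0.55, 0.65, 0.80, 0.90, 0.97, 0.99])  # invented data

def psychometric(x, mu, sigma):
    # 2AFC: chance at 0.5, rising to 1.0; mu is the 75%-correct point
    return 0.5 + 0.5 * norm.cdf(x, mu, sigma)

(mu, sigma), _ = curve_fit(psychometric, offsets, p_correct, p0=[1.0, 1.0])
print(f"75%-correct threshold ~ {mu:.2f} deg")
```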
6.
J Vis Impair Blind ; 105(1): 20-32, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21611133

ABSTRACT

Relative to the echolocation performance of a blind expert, sighted novices rapidly learned size and position discrimination with surprising precision. We use a novel task to characterize the population distribution of echolocation skill in the sighted and report the highest known human echolocation acuity in our expert subject.

7.
Cortex ; 44(5): 537-47, 2008 May.
Article in English | MEDLINE | ID: mdl-18387586

ABSTRACT

A crucial element of testing hypotheses about rules for behavior is the use of performance feedback. In this study, we used fMRI and EEG to test the role of medial prefrontal cortex (PFC) and dorsolateral (DL) PFC in hypothesis testing, using a modified intradimensional/extradimensional rule shift task. Eighteen adults were asked to infer rules about color or shape on the basis of positive and negative feedback in sets of two trials. Half of the trials involved color-to-color or shape-to-shape transitions (intradimensional switches; ID) and the other half color-to-shape or shape-to-color transitions (extradimensional switches; ED). Participants performed the task in separate fMRI and EEG sessions. ED trials were associated with reduced accuracy relative to ID trials. In addition, accuracy was reduced and response latencies increased following negative relative to positive feedback. Negative feedback resulted in increased activation in medial PFC and DLPFC, more so for ED than ID shifts. Reduced accuracy following negative feedback correlated with increased activation in DLPFC, and increased response latencies following negative feedback correlated with increased activation in medial PFC. Additionally, around 250 msec after negative performance feedback, participants showed a feedback-related negative scalp potential, but this potential did not differ between ID and ED shifts. These results indicate that both medial PFC and DLPFC signal the need for performance adjustment, and that both regions are sensitive to the increased demands of set shifting in hypothesis testing.


Subject(s)
Association Learning/physiology , Attention/physiology , Evoked Potentials/physiology , Feedback, Psychological/physiology , Prefrontal Cortex/physiology , Adolescent , Adult , Analysis of Variance , Color Perception/physiology , Female , Form Perception/physiology , Humans , Magnetic Resonance Imaging , Male , Reaction Time/physiology , Set (Psychology)
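The feedback-locked EEG result reduces to a negative-minus-positive difference wave examined near 250 msec. A schematic Python sketch with synthetic single-channel epochs (the sampling rate, window, and data are placeholders):

```python
# Sketch: feedback-related negativity as a difference wave around 250 msec.
import numpy as np

fs = 500                                          # Hz, hypothetical sampling rate
times = np.arange(-0.2, 0.8, 1 / fs)              # s, feedback-locked epoch
rng = np.random.default_rng(2)
neg_epochs = rng.normal(size=(120, times.size))   # trials x samples, one channel
pos_epochs = rng.normal(size=(120, times.size))

diff_wave = neg_epochs.mean(axis=0) - pos_epochs.mean(axis=0)
win = (times > 0.2) & (times < 0.3)               # window around 250 msec
print(f"mean amplitude in FRN window: {diff_wave[win].mean():.3f} (a.u.)")
```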
8.
Article in English | MEDLINE | ID: mdl-28044019

ABSTRACT

In natural environments, visual and auditory stimulation elicit responses across a large set of brain regions in a fraction of a second, yielding representations of the multimodal scene and its properties. The rapid and complex neural dynamics underlying visual and auditory information processing pose major challenges to human cognitive neuroscience. Brain signals measured non-invasively are inherently noisy, the format of neural representations is unknown, and transformations between representations are complex and often nonlinear. Further, no single non-invasive brain measurement technique provides a spatio-temporally integrated view. In this opinion piece, we argue that progress can be made by a concerted effort based on three pillars of recent methodological development: (i) sensitive analysis techniques such as decoding and cross-classification, (ii) complex computational modelling using models such as deep neural networks, and (iii) integration across imaging methods (magnetoencephalography/electroencephalography, functional magnetic resonance imaging) and models, e.g. using representational similarity analysis. We showcase two recent efforts that have been undertaken in this spirit and provide novel results about visual and auditory scene analysis. Finally, we discuss the limits of this perspective and sketch a concrete roadmap for future research.

This article is part of the themed issue 'Auditory and visual scene analysis'.


Subject(s)
Brain Mapping/methods , Brain/physiology , Electroencephalography/methods , Magnetic Resonance Imaging/methods , Magnetoencephalography/methods , Acoustic Stimulation , Humans , Photic Stimulation
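A minimal Python sketch of pillar (iii), integrating MEG and fMRI via representational similarity analysis: correlate a time-resolved MEG representational dissimilarity matrix (RDM) with an fMRI RDM from one region. All data below are synthetic stand-ins.

```python
# Sketch: MEG-fMRI fusion by correlating RDMs over time.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(3)
n_cond, n_sensors, n_times = 20, 64, 100
meg = rng.normal(size=(n_cond, n_sensors, n_times))   # condition-mean MEG patterns
fmri_rdm = pdist(rng.normal(size=(n_cond, 200)))      # RDM from one ROI's voxels

fusion = np.array([
    spearmanr(pdist(meg[:, :, t]), fmri_rdm).correlation
    for t in range(n_times)
])
print(f"peak MEG-fMRI correspondence at time bin {fusion.argmax()}")
```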
9.
eNeuro ; 4(1)2017.
Article in English | MEDLINE | ID: mdl-28451630

ABSTRACT

Perceiving the geometry of surrounding space is a multisensory process, crucial to contextualizing object perception and guiding navigation behavior. Humans can make judgments about surrounding spaces from reverberation cues, caused by sounds reflecting off multiple interior surfaces. However, it remains unclear how the brain represents reverberant spaces separately from sound sources. Here, we report separable neural signatures of auditory space and source perception during magnetoencephalography (MEG) recording as subjects listened to brief sounds convolved with monaural room impulse responses (RIRs). The decoding signature of sound sources began at 57 ms after stimulus onset and peaked at 130 ms, while space decoding started at 138 ms and peaked at 386 ms. Importantly, these neuromagnetic responses were readily dissociable in form and time: while sound source decoding exhibited an early and transient response, the neural signature of space was sustained and independent of the original source that produced it. The reverberant space response was robust to variations in sound source, and vice versa, indicating a generalized response not tied to specific source-space combinations. These results provide the first neuromagnetic evidence for robust, dissociable auditory source and reverberant space representations in the human brain and reveal the temporal dynamics of how auditory scene analysis extracts percepts from complex naturalistic auditory signals.


Subject(s)
Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping , Sound Localization/physiology , Space Perception/physiology , Acoustic Stimulation , Adult , Discrimination, Psychological , Female , Humans , Magnetoencephalography , Male , Nonlinear Dynamics , Psychoacoustics , Sound , Time Factors , Young Adult
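The source-invariance claim rests on cross-generalization: a classifier trained to identify the reverberant space on trials from some sound sources should still succeed on held-out sources. A toy Python sketch of that logic, with synthetic data in place of MEG patterns:

```python
# Sketch: cross-source generalization of space decoding.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(4)
n_trials, n_sensors = 400, 64
space = rng.integers(0, 2, n_trials)             # which reverberant space
source = rng.integers(0, 4, n_trials)            # which sound source
X = rng.normal(size=(n_trials, n_sensors))
X[space == 1] += 0.4                             # space signal shared across sources

train = source < 2                               # train on sources 0-1
test = ~train                                    # test on unseen sources 2-3
clf = LinearSVC(dual=False).fit(X[train], space[train])
print(f"cross-source space decoding accuracy: {clf.score(X[test], space[test]):.2f}")
```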
10.
IEEE Trans Biomed Eng ; 62(6): 1526-1534, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25608301

ABSTRACT

OBJECTIVE: We present a device that combines principles of ultrasonic echolocation and spatial hearing to provide human users with environmental cues that are 1) not otherwise available to the human auditory system, and 2) richer in object and spatial information than the more heavily processed sonar cues of other assistive devices. The device consists of a wearable headset with an ultrasonic emitter and stereo microphones with affixed artificial pinnae. The goal of this study is to describe the device and evaluate the utility of the echoic information it provides.

METHODS: The echoes of ultrasonic pulses were recorded and time-stretched to lower their frequencies into the human auditory range, then played back to the user. We tested performance among naive and experienced sighted volunteers in a set of localization experiments, in which the locations of echo-reflective surfaces were judged using these time-stretched echoes.

RESULTS: Naive subjects were able to make laterality and distance judgments, suggesting that the echoes provide innately useful information without prior training. Naive subjects were generally unable to make elevation judgments from recorded echoes. However, trained subjects demonstrated an ability to judge elevation as well.

CONCLUSION: These results suggest that the device can be used effectively to examine the environment and that the human auditory system can rapidly adapt to these artificial echolocation cues.

SIGNIFICANCE: Interpreting and interacting with the external world constitutes a major challenge for persons who are blind or visually impaired. This device has the potential to aid blind people in interacting with their environment.


Subject(s)
Echolocation/physiology , Self-Help Devices , Signal Processing, Computer-Assisted/instrumentation , Ultrasonics/instrumentation , Ultrasonics/methods , Adult , Animals , Ear Auricle , Equipment Design , Female , Humans , Male , Models, Biological , Visually Impaired Persons , Young Adult
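The frequency-lowering step is simple resampling arithmetic: replaying samples at 1/k of the capture rate divides every frequency by k, so an ultrasonic echo lands in the audible range. A Python sketch with illustrative parameters (the rates and stretch factor are assumptions, not the device's specification):

```python
# Sketch: time-stretch an ultrasonic echo into the audible range by replaying
# the same samples at a lower nominal rate.
import numpy as np
from scipy.io import wavfile

capture_fs = 192_000                             # hypothetical ultrasonic capture rate
stretch = 8                                      # 8x slower: 50 kHz maps to 6.25 kHz

t = np.arange(0, 0.01, 1 / capture_fs)
echo = np.sin(2 * np.pi * 50_000 * t)            # toy 50 kHz echo component

wavfile.write("audible_echo.wav", capture_fs // stretch,
              (echo * 32767).astype(np.int16))   # playback performs the stretch
```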