Results 1 - 9 of 9
1.
J Vis ; 22(2): 14, 2022 02 01.
Article in English | MEDLINE | ID: mdl-35195673

ABSTRACT

Retinal prostheses partially restore vision to late blind patients with retinitis pigmentosa through electrical stimulation of still-viable retinal ganglion cells. We investigated whether the late blind can perform visual-tactile shape matching following the partial restoration of vision via retinal prostheses after decades of blindness. We tested for visual-visual, tactile-tactile, and visual-tactile two-dimensional shape matching with six Argus II retinal prosthesis patients, ten sighted controls, and eight sighted controls with simulated ultra-low vision. In the Argus II patients, the visual-visual shape matching performance was significantly greater than chance. Although the visual-tactile shape matching performance of the Argus II patients was not significantly greater than chance, it was significantly higher with longer duration of prosthesis use. The sighted controls using natural vision and the sighted controls with simulated ultra-low vision both performed the visual-visual and visual-tactile shape matching tasks significantly more accurately than the Argus II patients. The tactile-tactile matching was not significantly different between the Argus II patients and sighted controls with or without simulated ultra-low vision. These results show that experienced retinal prosthesis patients can match shapes across the senses and integrate artificial vision with somatosensation. The correlation of retinal prosthesis patients' crossmodal shape matching performance with the duration of device use supports the value of experience to crossmodal shape learning. These crossmodal shape matching results in Argus II patients are the first step toward understanding crossmodal perception after artificial visual restoration.


Subject(s)
Retinitis Pigmentosa , Visual Prostheses , Blindness , Humans , Vision, Ocular , Visual Perception
2.
medRxiv ; 2024 Jun 27.
Article in English | MEDLINE | ID: mdl-38978654

ABSTRACT

The Argus II retinal prosthesis restores visual perception to late blind patients. It has been shown that structural changes occur in the brain due to late-onset blindness, including cortical thinning in visual regions of the brain. Following vision restoration, it is not yet known whether these visual regions are reinvigorated and regain a normal cortical thickness or retain the diminished thickness from blindness. We evaluated the cortical thicknesses of ten Argus II retinal prosthesis patients, ten blind patients, and thirteen sighted participants. The Argus II patients on average had a thicker left Cuneus Cortex and Lateral Occipital Cortex relative to the blind patients. The duration of Argus II use (time since implant in active users) significantly partially correlated with thicker visual cortical regions in the left hemisphere. Furthermore, in the two case studies (scanned before and after implantation), the patient with longer device use (44.5 months) had an increase in the cortical thickness of visual regions, whereas the patient with shorter use (6.5 months) did not. Finally, a third case, scanned at three time points post-implantation, showed an increase in cortical thickness in the Lateral Occipital Cortex between 43.5 and 57 months, which was maintained even after 3 years of disuse (106 months). Overall, the Argus II patients' cortical thickness was on average significantly rejuvenated in two higher visual regions, and patients using the implant for a longer duration had thicker visual regions. This research raises the possibility of structural plasticity reversing visual cortical atrophy in late-blind patients with prolonged vision restoration.

3.
J Percept Imaging ; 5, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35464341

ABSTRACT

Postdiction occurs when later stimuli influence the perception of earlier stimuli. As the multisensory science field has grown in recent decades, the investigation of crossmodal postdictive phenomena has also expanded. Crossmodal postdiction can be considered (in its simplest form) the phenomenon in which later stimuli in one modality influence earlier stimuli in another modality (e.g., Intermodal Apparent Motion). Crossmodal postdiction can also appear in more nuanced forms, such as unimodal postdictive illusions (e.g., Apparent Motion) that are influenced by concurrent crossmodal stimuli (e.g., Crossmodal Influence on Apparent Motion), or crossmodal illusions (e.g., the Double Flash Illusion) that are influenced postdictively by a stimulus in one or the other modality (e.g., a visual stimulus in the Illusory Audiovisual Rabbit Illusion). In this review, these and other varied forms of crossmodal postdiction will be discussed. Three neuropsychological models proposed for unimodal postdiction will be adapted to the unique aspects of processing and integrating multisensory stimuli. Crossmodal postdiction opens a new window into sensory integration, and could potentially be used to identify new mechanisms of crossmodal crosstalk in the brain.

4.
Vision Res ; 182: 58-68, 2021 05.
Article in English | MEDLINE | ID: mdl-33607599

ABSTRACT

Crossmodal mappings associate features (such as spatial location) between audition and vision, thereby aiding sensory binding and perceptual accuracy. Previously, it has been unclear whether patients with artificial vision will develop crossmodal mappings despite the low spatial and temporal resolution of their visual perception (particularly in light of the remodeling of the retina and visual cortex that takes place during decades of vision loss). To address this question, we studied crossmodal mappings psychophysically in Retinitis Pigmentosa patients with partial visual restoration by means of Argus II retinal prostheses, which incorporate an electrode array implanted on the retinal surface that stimulates still-viable ganglion cells with a video stream from a head-mounted camera. We found that Argus II patients (N = 10) exhibit significant crossmodal mappings between auditory location and visual location, and between auditory pitch and visual elevation, equivalent to those of age-matched sighted controls (N = 10). Furthermore, Argus II patients (N = 6) were able to use crossmodal mappings to locate a visual target more quickly with auditory cueing than without. Overall, restored artificial vision was shown to interact with audition via crossmodal mappings, which implies that the reorganization during blindness and the limitations of artificial vision did not prevent the relearning of crossmodal mappings. In particular, cueing based on crossmodal mappings was shown to improve visual search with a retinal prosthesis. This result represents a key first step toward leveraging crossmodal interactions for improved patient visual functionality.


Subject(s)
Retinitis Pigmentosa , Visual Prostheses , Electrodes, Implanted , Humans , Prosthesis Implantation , Visual Perception
5.
Multisens Res ; 33(1): 87-108, 2020 07 01.
Article in English | MEDLINE | ID: mdl-31648193

ABSTRACT

In the original double flash illusion, a visual flash (e.g., a sharp-edged disk, or uniformly filled circle) presented with two short auditory tones (beeps) is often followed by an illusory flash. The illusory flash has been previously shown to be triggered by the second auditory beep. The current study extends the double flash illusion by showing that this paradigm can not only create an illusory repeat of an on-off flash, but can also cause an illusory expansion (and in some cases a subsequent contraction), induced by the flash of a circular brightness gradient (gradient disk), to replay as well. The perception of this dynamic double flash illusion further supports the interpretation of the illusory flash (in the double flash illusion) as similar in its spatial and temporal properties to the perception of the real visual flash, likely by replicating the neural processes underlying the illusory expansion of the real flash. We show further that if a gradient disk (generating an illusory expansion) and a sharp-edged disk are presented simultaneously side by side with two sequential beeps, often only one visual stimulus or the other will be perceived to double flash. This indicates selectivity in auditory-visual binding, suggesting the usefulness of this paradigm as a psychophysical tool for investigating crossmodal binding phenomena.


Subject(s)
Auditory Perception/physiology , Illusions/physiology , Reaction Time/physiology , Visual Cortex/physiology , Visual Perception/physiology , Acoustic Stimulation , Electroencephalography , Female , Humans , Male , Photic Stimulation
6.
PLoS One ; 13(10): e0204217, 2018.
Article in English | MEDLINE | ID: mdl-30281629

ABSTRACT

Neuroscience investigations are most often focused on the prediction of future perception or decisions based on prior brain states or stimulus presentations. However, the brain can also process information retroactively, such that later stimuli impact conscious percepts of the stimuli that have already occurred (called "postdiction"). Postdictive effects have thus far been mostly unimodal (such as apparent motion), and the models for postdiction have accordingly been limited to early sensory regions of one modality. We have discovered two related multimodal illusions in which audition instigates postdictive changes in visual perception. In the first illusion (called the "Illusory Audiovisual Rabbit"), the location of an illusory flash is influenced by an auditory beep-flash pair that follows the perceived illusory flash. In the second illusion (called the "Invisible Audiovisual Rabbit"), a beep-flash pair following a real flash suppresses the perception of the earlier flash. Thus, we showed experimentally that these two effects are influenced significantly by postdiction. The audiovisual rabbit illusions indicate that postdiction can bridge the senses, uncovering a relatively-neglected yet critical type of neural processing underlying perceptual awareness. Furthermore, these two new illusions broaden the Double Flash Illusion, in which a single real flash is doubled by two sounds. Whereas the double flash indicated that audition can create an illusory flash, these rabbit illusions expand audition's influence on vision to the suppression of a real flash and the relocation of an illusory flash. These new additions to auditory-visual interactions indicate a spatio-temporally fine-tuned coupling of the senses to generate perception.


Subject(s)
Auditory Perception/physiology , Brain/physiology , Illusions , Visual Perception/physiology , Acoustic Stimulation , Female , Humans , Male , Photic Stimulation , Spatial Processing
7.
Sci Rep ; 5: 15628, 2015 Oct 22.
Article in English | MEDLINE | ID: mdl-26490260

ABSTRACT

Millions of people are blind worldwide. Sensory substitution (SS) devices (e.g., vOICe) can assist the blind by encoding a video stream into a sound pattern, recruiting visual brain areas for auditory analysis via crossmodal interactions and plasticity. SS devices often require extensive training to attain limited functionality. In contrast to conventional attention-intensive SS training that starts with visual primitives (e.g., geometrical shapes), we argue that sensory substitution can be engaged efficiently by using stimuli (such as textures) associated with intrinsic crossmodal mappings. Crossmodal mappings link images with sounds and tactile patterns. We show that intuitive SS sounds can be matched to the correct images by naive sighted participants just as well as by intensively-trained participants. This result indicates that existing crossmodal interactions and amodal sensory cortical processing may be as important in the interpretation of patterns by SS as crossmodal plasticity (e.g., the strengthening of existing connections or the formation of new ones), especially at the earlier stages of SS usage. An SS training procedure based on crossmodal mappings could both considerably improve participant performance and shorten training times, thereby enabling SS devices to significantly expand blind capabilities.
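The vOICe-style encoding described above (a video stream rendered as sound) can be sketched as follows. This is a minimal illustration assuming the standard vOICe convention: columns are scanned left to right over time, vertical position maps to pitch (top rows are higher), and pixel brightness sets loudness. The function name and parameter values are hypothetical, not taken from the device's actual software.

```python
import numpy as np

def image_to_sound(image, duration=1.0, sample_rate=8000,
                   f_min=500.0, f_max=5000.0):
    """Convert a 2-D grayscale image (rows x cols, values in 0..1) into a
    mono waveform via a left-to-right sweep: each row is assigned a fixed
    sine frequency (top row = highest), and brightness in the current
    column weights that row's sine amplitude."""
    rows, cols = image.shape
    # One frequency per row, log-spaced so pitch steps sound roughly even.
    freqs = np.geomspace(f_max, f_min, rows)
    samples_per_col = int(duration * sample_rate / cols)
    t = np.arange(samples_per_col) / sample_rate
    tones = np.sin(2 * np.pi * freqs[:, None] * t)  # shape (rows, samples)
    chunks = []
    for c in range(cols):
        # Brightness-weighted sum of the row sines for this column.
        chunks.append((image[:, c][:, None] * tones).sum(axis=0))
    wave = np.concatenate(chunks)
    peak = np.abs(wave).max()
    return wave / peak if peak > 0 else wave

# A bright diagonal line yields a tone that sweeps in pitch over time.
img = np.eye(8)
wave = image_to_sound(img)
```

With such a mapping, spatial structure in the image becomes temporal and spectral structure in the sound, which is the kind of intrinsic crossmodal correspondence the abstract argues naive listeners can already exploit.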

8.
Front Psychol ; 6: 842, 2015.
Article in English | MEDLINE | ID: mdl-26136719

ABSTRACT

A subset of sensory substitution (SS) devices translate images into sounds in real time using a portable computer, camera, and headphones. Perceptual constancy is the key to understanding both functional and phenomenological aspects of perception with SS. In particular, constancies enable object externalization, which is critical to the performance of daily tasks such as obstacle avoidance and locating dropped objects. In order to improve daily task performance by the blind, and determine if constancies can be learned with SS, we trained blind (N = 4) and sighted (N = 10) individuals on length and orientation constancy tasks for 8 days at about 1 h per day with an auditory SS device. We found that blind and sighted performance at the constancy tasks significantly improved, and attained constancy performance that was above chance. Furthermore, dynamic interactions with stimuli were critical to constancy learning with the SS device. In particular, improved task learning significantly correlated with the number of spontaneous left-right head-tilting movements while learning length constancy. The improvement from previous head-tilting trials even transferred to a no-head-tilt condition. Therefore, not only can SS learning be improved by encouraging head movement while learning, but head movement may also play an important role in learning constancies in the sighted. In addition, the learning of constancies by the blind and sighted with SS provides evidence that SS may be able to restore vision-like functionality to the blind in daily tasks.

9.
Sci Rep ; 5: 8857, 2015 Mar 09.
Article in English | MEDLINE | ID: mdl-25748443

ABSTRACT

The brain constructs a representation of temporal properties of events, such as duration and frequency, but the underlying neural mechanisms are under debate. One open question is whether these mechanisms are unisensory or multisensory. Duration perception studies provide some evidence for a dissociation between auditory and visual timing mechanisms; however, we found active crossmodal interaction between audition and vision for rate perception, even when vision and audition were never stimulated together. After exposure to 5 Hz adaptors, people perceived subsequent test stimuli centered around 4 Hz to be slower, and the reverse after exposure to 3 Hz adaptors. This aftereffect occurred even when the adaptor and test were different modalities that were never presented together. When the discrepancy in rate between adaptor and test increased, the aftereffect was attenuated, indicating that the brain uses narrowly-tuned channels to process rate information. Our results indicate that human timing mechanisms for rate perception are not entirely segregated between modalities and have substantial implications for models of how the brain encodes temporal features. We propose a model of multisensory channels for rate perception, and consider the broader implications of such a model for how the brain encodes timing.
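The channel account above can be illustrated with a toy labeled-line model; this is a hypothetical sketch, not the authors' actual model. Gaussian-tuned rate channels lose gain near the adaptor, so the response-weighted estimate of a test rate is repelled away from the adaptor, and the repulsion weakens as the adaptor-test discrepancy grows, mirroring the narrow tuning reported in the abstract.

```python
import numpy as np

def decoded_rate(test, adapt, sigma=1.0, a=0.5):
    """Toy labeled-line model of rate adaptation: channels with Gaussian
    tuning over rate respond to `test`, but channels whose preferred rate
    lies near `adapt` have reduced gain; the perceived rate is the
    response-weighted mean of channel preferred rates."""
    prefs = np.linspace(0.5, 8.0, 64)          # preferred rates (Hz)
    # Adaptation suppresses gain most for channels tuned near the adaptor.
    gain = 1.0 - a * np.exp(-((adapt - prefs) ** 2) / (2 * sigma ** 2))
    resp = gain * np.exp(-((test - prefs) ** 2) / (2 * sigma ** 2))
    return float(np.sum(resp * prefs) / np.sum(resp))

after_5hz = decoded_rate(4.0, 5.0)  # 4 Hz test repelled below 4 Hz (slower)
after_3hz = decoded_rate(4.0, 3.0)  # 4 Hz test repelled above 4 Hz (faster)
after_7hz = decoded_rate(4.0, 7.0)  # distant adaptor: much weaker shift
```

Because the channels are modality-agnostic in this sketch, the same suppression applies whether the adaptor and test arrive through audition or vision, which is the multisensory feature the proposed model is meant to capture.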


Subject(s)
Adaptation, Physiological/physiology , Auditory Perception/physiology , Brain/physiology , Models, Neurological , Time Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Association , Computer Simulation , Female , Humans , Male , Neuronal Plasticity/physiology , Photic Stimulation/methods , Sensation/physiology , Sensory Receptor Cells/physiology , Young Adult