Results 1 - 13 of 13
1.
Trends Hear ; 27: 23312165221076681, 2023.
Article in English | MEDLINE | ID: mdl-37377212

ABSTRACT

The reduction in spectral resolution imposed by cochlear implants often requires complementary visual speech cues to facilitate understanding. Despite substantial clinical characterization of auditory-only speech measures, relatively little is known about the audiovisual (AV) integrative abilities that most cochlear implant (CI) users rely on for daily speech comprehension. In this study, we tested AV integration in 63 CI users and 69 normal-hearing (NH) controls using the McGurk and sound-induced flash illusions. To our knowledge, this study is the largest to date to measure the McGurk effect in this population and the first to test the sound-induced flash illusion (SIFI). When presented with conflicting AV speech stimuli (i.e., the phoneme "ba" dubbed onto the viseme "ga"), 55 CI users (87%) reported a fused percept of "da" or "tha" on at least one trial. After applying an error correction based on unisensory responses, we found that among those susceptible to the illusion, CI users experienced weaker fusion than controls, a result concordant with the SIFI, in which pairing a single circle flashing on the screen with multiple beeps produced fewer illusory flashes for CI users. While illusion perception in these two tasks appears to be uncorrelated among CI users, we identified a negative correlation in the NH group. Because neither illusion appears to further explain variability in CI outcome measures, further research is needed to determine how these findings relate to CI users' speech understanding, particularly in ecological listening conditions that are naturally multisensory.
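The fusion measure described above (AV fusion rate corrected by unisensory responses) can be sketched as follows. This is a hypothetical illustration only: the counts and the specific subtraction rule are assumptions, not the paper's exact procedure.

```python
# Hypothetical sketch: per-subject McGurk fusion rate with a unisensory
# error correction. The correction rule (subtract the unisensory rate of
# the same fused responses) is an assumption for illustration.

def corrected_fusion_rate(av_fused, av_total, uni_fused, uni_total):
    """Fusion rate on conflicting AV trials ("ba" audio + "ga" video),
    minus the rate at which the same fused responses ("da"/"tha")
    occur on unisensory control trials; floored at zero."""
    av_rate = av_fused / av_total
    uni_rate = uni_fused / uni_total
    return max(0.0, av_rate - uni_rate)

# Example: 7 fused responses on 10 AV trials, 1 on 10 unisensory trials.
rate = corrected_fusion_rate(7, 10, 1, 10)
print(round(rate, 2))  # 0.6
```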


Subject(s)
Cochlear Implants, Illusions, Speech Perception, Humans, Speech Perception/physiology, Illusions/physiology, Visual Perception/physiology, Auditory Perception/physiology, Photic Stimulation, Acoustic Stimulation
2.
J Vis Exp ; (98): e52677, 2015 Apr 22.
Article in English | MEDLINE | ID: mdl-25938209

ABSTRACT

In addition to impairments in social communication and the presence of restricted interests and repetitive behaviors, deficits in sensory processing are now recognized as a core symptom of autism spectrum disorder (ASD). Our ability to perceive and interact with the external world is rooted in sensory processing. For example, listening to a conversation entails processing the auditory cues coming from the speaker (speech content, prosody, syntax) as well as the associated visual information (facial expressions, gestures). Collectively, the "integration" of these multisensory (i.e., combined audiovisual) pieces of information results in better comprehension. Such multisensory integration has been shown to depend strongly on the temporal relationship of the paired stimuli. Thus, stimuli that occur in close temporal proximity are highly likely to yield behavioral and perceptual benefits, gains believed to reflect the perceptual system's judgment of the likelihood that the two stimuli came from the same source. Changes in this temporal integration are expected to strongly alter perceptual processes and are likely to diminish the ability to accurately perceive and interact with our world. Here, a battery of tasks designed to characterize various aspects of sensory and multisensory temporal processing in children with ASD is described. In addition to its utility in autism, this battery has great potential for characterizing changes in sensory function in other clinical populations, as well as for examining changes in these processes across the lifespan.
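One common outcome measure in such batteries is a temporal binding window estimated from simultaneity judgments. A minimal sketch, with made-up SOAs and response proportions (this is not the battery's actual analysis, which typically fits a psychometric function):

```python
# Illustrative sketch: estimating a temporal binding window from a
# simultaneity-judgment task. The window is crudely taken as the SOA
# range where the proportion of "simultaneous" responses is at least
# half its maximum (a full-width-at-half-maximum heuristic).

def binding_window(soas_ms, p_simultaneous):
    peak = max(p_simultaneous)
    inside = [soa for soa, p in zip(soas_ms, p_simultaneous) if p >= peak / 2]
    return min(inside), max(inside)

soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]   # audio leads when < 0
props = [0.05, 0.10, 0.50, 0.80, 0.95, 0.85, 0.60, 0.20, 0.05]
print(binding_window(soas, props))  # (-200, 200)
```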


Subject(s)
Acoustic Stimulation/methods, Autism Spectrum Disorder/physiopathology, Behavior Rating Scale, Photic Stimulation/methods, Adolescent, Auditory Perception, Autism Spectrum Disorder/diagnosis, Autism Spectrum Disorder/psychology, Child, Cognition, Comprehension, Humans, Visual Perception
3.
Brain Topogr ; 28(3): 479-93, 2015 May.
Article in English | MEDLINE | ID: mdl-24276220

ABSTRACT

The ability to effectively combine sensory inputs across modalities is vital for acquiring a unified percept of events. For example, watching a hammer hit a nail while simultaneously identifying the sound as originating from that event requires the ability to identify spatio-temporal congruencies and statistical regularities. In this study, we applied a reaction time and hazard function measure known as capacity (e.g., Townsend and Ashby, Cognitive Theory, pp. 200-239, 1978) to quantify the extent to which observers learn paired associations between simple auditory and visual patterns in a model-theoretic manner. As expected, results showed that learning was associated with an increase in accuracy and, more significantly, an increase in capacity. The aim of this study was to relate capacity measures of multisensory learning to a neural measure, namely mean global field power (GFP). We observed a co-variation between an increase in capacity and a decrease in GFP amplitude as learning occurred. This suggests that capacity constitutes a reliable behavioral index of efficient energy expenditure in the neural domain.
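The capacity coefficient referred to above can be computed from reaction-time distributions. A minimal sketch, with invented RTs; the cutoff time and data are assumptions for illustration:

```python
import math

# Sketch of the Townsend & Ashby capacity coefficient C(t): the ratio of
# the cumulative hazard in the redundant (audiovisual) condition to the
# sum of the unisensory hazards, computed via log survivor functions.
# C(t) > 1 indicates supercapacity (faster-than-predicted integration).

def survivor(rts, t):
    """Empirical survivor function: proportion of RTs greater than t."""
    return sum(rt > t for rt in rts) / len(rts)

def capacity(rt_av, rt_a, rt_v, t):
    s_av, s_a, s_v = survivor(rt_av, t), survivor(rt_a, t), survivor(rt_v, t)
    # log S(t) is the negative cumulative hazard at time t
    return math.log(s_av) / (math.log(s_a) + math.log(s_v))

rt_av = [280, 300, 310, 330, 350]   # hypothetical redundant-target RTs (ms)
rt_a  = [340, 360, 380, 400, 420]   # auditory-only
rt_v  = [350, 370, 390, 410, 430]   # visual-only
print(capacity(rt_av, rt_a, rt_v, 340))  # > 1, i.e. supercapacity here
```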


Subject(s)
Association Learning/physiology, Auditory Perception/physiology, Brain/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Electroencephalography, Humans, Photic Stimulation/methods, Young Adult
4.
J Autism Dev Disord ; 44(12): 3161-7, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25022248

ABSTRACT

Individuals with autism spectrum disorders (ASD) exhibit alterations in sensory processing, including changes in the integration of information across the different sensory modalities. In the current study, we used the sound-induced flash illusion to assess multisensory integration in children with ASD and typically developing (TD) controls. Thirty-one children with ASD and 31 age- and IQ-matched TD children (average age = 12 years) were presented with simple visual (i.e., flash) and auditory (i.e., beep) stimuli of varying number. In illusory conditions, a single flash was presented with 2-4 beeps. In TD children, these conditions generally result in the perception of multiple flashes, implying a perceptual fusion across vision and audition. In the present study, children with ASD were significantly less likely to perceive the illusion relative to TD controls, suggesting that multisensory integration and cross-modal binding may be weaker in some children with ASD. These results are discussed in the context of previous findings on multisensory integration in ASD and future directions for research.


Subject(s)
Acoustic Stimulation/methods, Auditory Perception/physiology, Child Development Disorders, Pervasive/diagnosis, Child Development Disorders, Pervasive/psychology, Photic Stimulation/methods, Visual Perception/physiology, Adolescent, Child, Cohort Studies, Female, Humans, Male
5.
J Neurosci ; 34(3): 691-7, 2014 Jan 15.
Article in English | MEDLINE | ID: mdl-24431427

ABSTRACT

The new DSM-5 diagnostic criteria for autism spectrum disorders (ASDs) include sensory disturbances in addition to the well-established language, communication, and social deficits. One sensory disturbance seen in ASD is an impaired ability to integrate multisensory information into a unified percept. This may arise from an underlying impairment in which individuals with ASD have difficulty perceiving the temporal relationship between cross-modal inputs, an important cue for multisensory integration. Such impairments in multisensory processing may cascade into higher-level deficits, impairing day-to-day functioning on tasks such as speech perception. To investigate multisensory temporal processing deficits in ASD and their links to speech processing, the current study mapped performance on a number of multisensory temporal tasks (with both simple and complex stimuli) onto the ability of individuals with ASD to perceptually bind audiovisual speech signals. High-functioning children with ASD were compared with a group of typically developing children. Performance on the multisensory temporal tasks varied with stimulus complexity for both groups; less precise temporal processing was observed with increasing stimulus complexity. Notably, individuals with ASD showed a speech-specific deficit in multisensory temporal processing. Most importantly, the strength of perceptual binding of audiovisual speech observed in individuals with ASD was strongly related to their low-level multisensory temporal processing abilities. Collectively, these results are the first to illustrate links between multisensory temporal function and speech processing in ASD, strongly suggesting that deficits in low-level sensory processing may cascade into higher-order domains such as language and communication.


Subject(s)
Acoustic Stimulation/methods, Auditory Perception/physiology, Child Development Disorders, Pervasive/physiopathology, Photic Stimulation/methods, Reaction Time/physiology, Visual Perception/physiology, Adolescent, Child, Child Development Disorders, Pervasive/diagnosis, Child Development Disorders, Pervasive/psychology, Female, Humans, Male, Psychomotor Performance/physiology, Time Factors
6.
Anesthesiology ; 118(2): 376-81, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23263015

ABSTRACT

BACKGROUND: Anesthesiology requires performing visually oriented procedures while monitoring auditory information about a patient's vital signs. A concern in operating room environments is the amount of competing information and the effects that divided attention has on patient monitoring, such as detecting auditory changes in arterial oxygen saturation via pulse oximetry. METHODS: The authors measured the impact of visual attentional load and auditory background noise on the ability of anesthesia residents to monitor the pulse oximeter auditory display in a laboratory setting. Accuracies and response times were recorded, reflecting anesthesiologists' abilities to detect changes in oxygen saturation across three levels of visual attention, in quiet and with noise. RESULTS: Results show that visual attentional load substantially affects the ability to detect changes in oxygen saturation levels conveyed by auditory cues signaling 99% and 98% saturation. These effects are compounded by auditory noise, producing up to a 17% decline in performance. These deficits are seen both in the ability to accurately detect a change in oxygen saturation and in speed of response. CONCLUSIONS: Most anesthesia accidents are initiated by small errors that cascade into serious events. Lack of monitor vigilance and inattention are two of the more commonly cited factors. Reducing such errors is thus a priority for improving patient safety. Specifically, efforts to reduce distractors and decrease background noise should be considered during induction and emergence, periods of especially high risk, when anesthesiologists must attend to many tasks and are thus susceptible to error.


Subject(s)
Attention, Monitoring, Intraoperative/psychology, Noise, Operating Rooms/organization & administration, Oximetry/psychology, Acoustic Stimulation, Adult, Auditory Perception/physiology, Cohort Studies, Female, Fixation, Ocular, Humans, Internship and Residency, Male, Perception, Psychomotor Performance/physiology, Reaction Time/physiology, Visual Perception/physiology
7.
Exp Brain Res ; 219(1): 121-37, 2012 May.
Article in English | MEDLINE | ID: mdl-22447249

ABSTRACT

In natural environments, human sensory systems work in a coordinated and integrated manner to perceive and respond to external events. Previous research has shown that the spatial and temporal relationships of sensory signals are paramount in determining how information is integrated across sensory modalities, but in ecologically plausible settings, these factors are not independent. In the current study, we provide a novel exploration of the impact on behavioral performance for systematic manipulations of the spatial location and temporal synchrony of a visual-auditory stimulus pair. Simple auditory and visual stimuli were presented across a range of spatial locations and stimulus onset asynchronies (SOAs), and participants performed both a spatial localization and simultaneity judgment task. Response times in localizing paired visual-auditory stimuli were slower in the periphery and at larger SOAs, but most importantly, an interaction was found between the two factors, in which the effect of SOA was greater in peripheral as opposed to central locations. Simultaneity judgments also revealed a novel interaction between space and time: individuals were more likely to judge stimuli as synchronous when occurring in the periphery at large SOAs. The results of this study provide novel insights into (a) how the speed of spatial localization of an audiovisual stimulus is affected by location and temporal coincidence and the interaction between these two factors and (b) how the location of a multisensory stimulus impacts judgments concerning the temporal relationship of the paired stimuli. These findings provide strong evidence for a complex interdependency between spatial location and temporal structure in determining the ultimate behavioral and perceptual outcome associated with a paired multisensory (i.e., visual-auditory) stimulus.


Subject(s)
Auditory Perception/physiology, Signal Detection, Psychological/physiology, Space Perception/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Adolescent, Female, Humans, Judgment, Male, Photic Stimulation/methods, Psychophysics, Reaction Time/physiology, Time Factors, Young Adult
8.
Brain Topogr ; 25(3): 308-26, 2012 Jul.
Article in English | MEDLINE | ID: mdl-22367585

ABSTRACT

In recent years, it has become evident that neural responses previously considered to be unisensory can be modulated by sensory input from other modalities. In this regard, visual neural activity elicited by viewing a face is strongly influenced by concurrent incoming auditory information, particularly speech. Here, we applied an additive-factors paradigm aimed at quantifying the impact that auditory speech has on visual event-related potentials (ERPs) elicited by visual speech. These multisensory interactions were measured across parametrically varied stimulus salience, quantified in terms of signal-to-noise ratio, to provide novel insights into the neural mechanisms of audiovisual speech perception. First, we measured a monotonic increase of the amplitude of the visual P1-N1-P2 ERP complex during a spoken-word recognition task with increases in stimulus salience. ERP component amplitudes varied directly with stimulus salience for visual, audiovisual, and summed unisensory recordings. Second, we measured changes in multisensory gain across salience levels. During audiovisual speech, the P1 and P1-N1 components exhibited less multisensory gain relative to the summed unisensory components with reduced salience, while N1-P2 amplitude exhibited greater multisensory gain as salience was reduced, consistent with the principle of inverse effectiveness. The amplitude interactions were correlated with behavioral measures of multisensory gain across salience levels as measured by response times, suggesting that the change in multisensory gain associated with unisensory salience modulations reflects an increased efficiency of visual speech processing.


Subject(s)
Acoustic Stimulation, Auditory Perception/physiology, Evoked Potentials, Visual/physiology, Photic Stimulation, Speech Perception/physiology, Visual Perception/physiology, Adolescent, Adult, Electroencephalography, Female, Humans, Male, Reaction Time
9.
Neuropsychologia ; 49(7): 1807-15, 2011 Jun.
Article in English | MEDLINE | ID: mdl-21397616

ABSTRACT

A recent view of cortical functional specialization suggests that the primary organizing principle of the cortex is based on task requirements, rather than sensory modality. Consistent with this view, recent evidence suggests that a region of the lateral occipitotemporal cortex (LO) may process object shape information regardless of the modality of sensory input. There is considerable evidence that area LO is involved in processing visual and haptic shape information. However, sound can also carry acoustic cues to an object's shape, for example, when a sound is produced by an object's impact with a surface. Thus, the current study used auditory stimuli that were created from recordings of objects impacting a hard surface to test the hypothesis that area LO is also involved in auditory shape processing. The objects were of two shapes, rods and balls, and of two materials, metal and wood. Subjects were required to categorize the impact sounds in one of three tasks: (1) by the shape of the object while ignoring material, (2) by the material of the object while ignoring shape, or (3) by using all the information available. Area LO was more strongly recruited when subjects discriminated impact sounds based on the shape of the object that made them, compared to when subjects discriminated those same sounds based on material. The current findings suggest that activation in area LO is shape selective regardless of sensory input modality, and are consistent with an emerging theory of perceptual functional specialization of the brain that is task-based rather than sensory modality-based.


Subject(s)
Auditory Perception/physiology, Form Perception/physiology, Occipital Lobe/physiology, Acoustic Stimulation, Brain Mapping, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Metals, Psychomotor Performance/physiology, Recognition, Psychology/physiology, Visual Perception/physiology, Wood, Young Adult
10.
Neuroimage ; 55(3): 1339-45, 2011 Apr 01.
Article in English | MEDLINE | ID: mdl-21195198

ABSTRACT

The ability to combine information from multiple sensory modalities into a single, unified percept is a key element in an organism's ability to interact with the external world. This process of perceptual fusion, the binding of multiple sensory inputs into a perceptual gestalt, is highly dependent on the temporal synchrony of the sensory inputs. Using fMRI, we identified two anatomically distinct brain regions in the superior temporal cortex, one involved in processing temporal synchrony and one in processing perceptual fusion of audiovisual speech. This dissociation suggests that the superior temporal cortex should be considered a "neuronal hub" composed of multiple discrete subregions that underlie an array of complementary low- and high-level multisensory integration processes. In this role, abnormalities in the structure and function of superior temporal cortex provide a possible common etiology for the temporal-processing and perceptual-fusion deficits seen in a number of clinical populations, including individuals with autism spectrum disorder, dyslexia, and schizophrenia.


Subject(s)
Auditory Perception/physiology, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Brain Mapping, Female, Functional Laterality/physiology, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Photic Stimulation, Temporal Lobe/physiology, Young Adult
11.
Neuropsychologia ; 49(1): 108-14, 2011 Jan.
Article in English | MEDLINE | ID: mdl-21036183

ABSTRACT

Environmental events produce many sensory cues for identifying the action that evoked the event, the agent that performed the action, and the object targeted by the action. The cues for identifying environmental events are usually distributed across multiple sensory systems. Thus, to understand how environmental events are recognized requires an understanding of the fundamental cognitive and neural processes involved in multisensory object and action recognition. Here, we investigated the neural substrates involved in auditory and visual recognition of object-directed actions. Consistent with previous work on visual recognition of isolated objects, visual recognition of actions, and recognition of environmental sounds, we found evidence for multisensory audiovisual event-selective activation bilaterally at the junction of the posterior middle temporal gyrus and the lateral occipital cortex, the left superior temporal sulcus, and bilaterally in the intraparietal sulcus. The results suggest that recognition of events through convergence of visual and auditory cues is accomplished through a network of brain regions that was previously implicated only in visual recognition of action.


Subject(s)
Auditory Perception/physiology, Brain Mapping, Parietal Lobe/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Electroencephalography/methods, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted/methods, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Parietal Lobe/blood supply, Photic Stimulation/methods, Temporal Lobe/blood supply, Young Adult
12.
Exp Brain Res ; 198(2-3): 183-94, 2009 Sep.
Article in English | MEDLINE | ID: mdl-19352638

ABSTRACT

It can be shown empirically and theoretically that inferences based on established metrics used to assess multisensory integration with BOLD fMRI data, such as superadditivity, are dependent on the particular experimental situation. For example, the law of inverse effectiveness shows that the likelihood of finding superadditivity in a known multisensory region increases with decreasing stimulus discriminability. In this paper, we suggest that Sternberg's additive-factors design allows for an unbiased assessment of multisensory integration. Through the manipulation of signal-to-noise ratio as an additive factor, we have identified networks of cortical regions that show properties of audio-visual or visuo-haptic neuronal convergence. These networks contained previously identified multisensory regions and also many new regions, for example, the caudate nucleus for audio-visual integration, and the fusiform gyrus for visuo-haptic integration. A comparison of integrative networks across audio-visual and visuo-haptic conditions showed very little overlap, suggesting that neural mechanisms of integration are unique to particular sensory pairings. Our results provide evidence for the utility of the additive-factors approach by demonstrating its effectiveness across modality (vision, audition, and haptics), stimulus type (speech and non-speech), experimental design (blocked and event-related), method of analysis (SPM and ROI), and experimenter-chosen baseline. The additive-factors approach provides a method for investigating multisensory interactions that goes beyond what can be achieved with more established metric-based, subtraction-type methods.


Subject(s)
Auditory Perception/physiology, Brain/physiology, Speech Perception/physiology, Touch Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adult, Brain/blood supply, Brain Mapping/methods, Cerebrovascular Circulation, Female, Humans, Magnetic Resonance Imaging/methods, Male, Oxygen/blood, Photic Stimulation, Physical Stimulation, Signal Processing, Computer-Assisted, Speech
13.
Exp Brain Res ; 179(1): 85-95, 2007 May.
Article in English | MEDLINE | ID: mdl-17109108

ABSTRACT

Evidence from neurophysiological studies has shown the superior temporal sulcus (STS) to be a site of audio-visual integration, with neuronal responses to audio-visual stimuli exceeding the sum of independent responses to unisensory audio and visual stimuli. However, experimenters have yet to elicit superadditive (AV > A+V) blood oxygen level-dependent (BOLD) activation from STS in humans using non-speech objects. Other studies have found integration in the BOLD signal with objects, but only using less stringent criteria to define integration. Using video clips and sounds of handheld tools presented at psychophysical threshold, we were able to elicit BOLD activation to audio-visual objects that surpassed the sum of the BOLD activations to audio and visual stimuli presented independently. Our findings suggest that the properties of the BOLD signal do not limit our ability to detect and define sites of integration using stringent criteria.
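The superadditivity criterion (AV > A+V) named above is straightforward to express over condition-wise response estimates. A minimal sketch; the region names and beta values are invented for illustration, not taken from the study:

```python
# Sketch of the superadditivity criterion for defining multisensory
# integration in BOLD fMRI: a region is flagged when the audiovisual
# response exceeds the sum of the unisensory responses (AV > A + V).

def superadditive(beta_av, beta_a, beta_v):
    return beta_av > beta_a + beta_v

# Hypothetical beta estimates per region: (AV, A-only, V-only).
regions = {"STS": (1.2, 0.5, 0.4), "V1": (0.8, 0.5, 0.4)}
flagged = [name for name, (av, a, v) in regions.items() if superadditive(av, a, v)]
print(flagged)  # ['STS']: 1.2 > 0.9, whereas 0.8 < 0.9
```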


Subject(s)
Auditory Perception/physiology, Cerebrovascular Circulation/physiology, Magnetic Resonance Imaging/methods, Sensory Thresholds/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Acoustic Stimulation/methods, Adult, Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Auditory Threshold/physiology, Brain Mapping, Female, Humans, Male, Neuropsychological Tests, Oxygen Consumption/physiology, Photic Stimulation/methods, Temporal Lobe/anatomy & histology, Visual Cortex/anatomy & histology, Visual Cortex/physiology