Results 1 - 19 of 19
1.
Sci Rep ; 12(1): 19016, 2022 11 08.
Article in English | MEDLINE | ID: mdl-36347938

ABSTRACT

There is broad interest in discovering quantifiable physiological biomarkers for psychiatric disorders to aid diagnostic assessment. However, finding biomarkers for autism spectrum disorder (ASD) has proven particularly difficult, partly due to high heterogeneity. Here, we recorded five minutes of eyes-closed resting-state electroencephalography (EEG) from 186 adults (51% with ASD and 49% without ASD) and investigated the potential of EEG biomarkers to classify ASD using three conventional machine learning models with two-layer cross-validation. Comprehensive characterization of the spectral, temporal and spatial dimensions of source-modelled EEG resulted in 3443 biomarkers per recording. We found no significant group-mean or group-variance differences for any of the EEG features. Interestingly, we obtained validation accuracies above 80%; however, the best machine learning model merely distinguished ASD from the non-autistic comparison group with a mean balanced test accuracy of 56% on the entirely unseen test set. The large drop in model performance between validation and testing stresses the importance of rigorous model evaluation and further highlights the high heterogeneity in ASD. Overall, the lack of significant differences and weak classification indicates that, at the group level, intellectually able adults with ASD show remarkably typical resting-state EEG.


Subject(s)
Autism Spectrum Disorder , Adult , Humans , Autism Spectrum Disorder/diagnosis , Electroencephalography/methods , Machine Learning , Rest , Biomarkers
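The two-layer (nested) cross-validation scheme described in this abstract can be illustrated with a minimal pure-Python sketch. Everything here is hypothetical: the one-feature threshold "model" and the toy data stand in for the actual EEG features and the three machine learning models used in the study.

```python
import random
import statistics

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls (balanced accuracy)."""
    recalls = []
    for cls in sorted(set(y_true)):
        idx = [i for i, y in enumerate(y_true) if y == cls]
        recalls.append(sum(y_pred[i] == cls for i in idx) / len(idx))
    return statistics.mean(recalls)

def threshold_model(t):
    """Hypothetical one-feature 'model': predict class 1 when the feature exceeds t."""
    return lambda x: 1 if x > t else 0

def nested_cv(X, y, thresholds, outer_k=5, inner_k=3, seed=0):
    """Two-layer CV: the inner layer selects the hyperparameter (a threshold),
    the outer layer estimates performance on entirely unseen folds."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    outer = [idx[i::outer_k] for i in range(outer_k)]
    scores = []
    for test_fold in outer:
        train_idx = [i for fold in outer if fold is not test_fold for i in fold]
        best_t, best_val = thresholds[0], -1.0
        for t in thresholds:  # inner layer: validate only within the training folds
            inner = [train_idx[i::inner_k] for i in range(inner_k)]
            val = statistics.mean(
                balanced_accuracy([y[i] for i in fold],
                                  [threshold_model(t)(X[i]) for i in fold])
                for fold in inner)
            if val > best_val:
                best_t, best_val = t, val
        model = threshold_model(best_t)  # outer layer: score on the unseen test fold
        scores.append(balanced_accuracy([y[i] for i in test_fold],
                                        [model(X[i]) for i in test_fold]))
    return statistics.mean(scores)
```

The point the abstract makes is visible in this structure: inner-layer validation scores select the model, but only the outer-layer test scores estimate generalization, and the two can diverge sharply.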
2.
J Neural Eng ; 19(6)2022 11 09.
Article in English | MEDLINE | ID: mdl-36250685

ABSTRACT

Objective. Post-traumatic stress disorder (PTSD) is highly heterogeneous, and identification of quantifiable biomarkers that could pave the way for targeted treatment remains a challenge. Most previous electroencephalography (EEG) studies on PTSD have been limited to specific handpicked features, and their findings have been highly variable and inconsistent. Therefore, to disentangle the role of promising EEG biomarkers, we developed a machine learning framework to investigate a wide range of commonly used EEG biomarkers in order to identify which features or combinations of features are capable of characterizing PTSD and potential subtypes. Approach. We recorded 5 min of eyes-closed and 5 min of eyes-open resting-state EEG from 202 combat-exposed veterans (53% with probable PTSD and 47% combat-exposed controls). Multiple spectral, temporal, and connectivity features were computed, and logistic regression, random forest, and support vector machines with feature selection methods were employed to classify PTSD. To obtain robust results, we performed repeated two-layer cross-validation to test on an entirely unseen test set. Main results. Our classifiers obtained a balanced test accuracy of up to 62.9% for predicting PTSD patients. In addition, we identified two subtypes within PTSD: one whose EEG patterns were similar to those of the combat-exposed controls, and another characterized by increased global functional connectivity. Our classifier obtained a balanced test accuracy of 79.4% when classifying this PTSD subtype against controls, a clear improvement over predicting the whole PTSD group. Interestingly, alpha connectivity in the dorsal and ventral attention networks was particularly important for the prediction, and these connections were positively correlated with arousal symptom scores, a central symptom cluster of PTSD. Significance. Taken together, the novel framework presented here demonstrates how unsupervised subtyping can delineate heterogeneity and improve machine learning prediction of PTSD, and may pave the way for better identification of quantifiable biomarkers.


Subject(s)
Post-Traumatic Stress Disorders , Veterans , Humans , Post-Traumatic Stress Disorders/diagnosis , Post-Traumatic Stress Disorders/therapy , Electroencephalography , Machine Learning , Support Vector Machine , Magnetic Resonance Imaging
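The unsupervised subtyping step described above can be sketched, under strong simplifying assumptions, as 1-D k-means clustering of a single summary feature (a hypothetical stand-in for the global functional connectivity measure); the high-connectivity cluster would then be classified against controls separately.

```python
import statistics

def two_means_1d(values, iters=20):
    """Minimal 1-D k-means (k=2): split one summary feature (here a
    hypothetical per-subject connectivity score) into two candidate subtypes.
    Returns a 0/1 label per value and the two cluster centres."""
    centres = [min(values), max(values)]
    for _ in range(iters):
        groups = ([], [])
        for v in values:
            # assign each value to its nearest centre (bool indexes the tuple)
            groups[abs(v - centres[0]) > abs(v - centres[1])].append(v)
        centres = [statistics.mean(g) if g else centres[i]
                   for i, g in enumerate(groups)]
    labels = [int(abs(v - centres[0]) > abs(v - centres[1])) for v in values]
    return labels, centres
```

In the study's setting the clustering ran on multivariate EEG features, not a single value; this sketch only conveys the delineate-then-classify idea.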
3.
Clin Neurophysiol ; 136: 40-48, 2022 04.
Article in English | MEDLINE | ID: mdl-35131637

ABSTRACT

OBJECTIVE: To explore the possibilities of wearable multi-modal monitoring in epilepsy and to identify effective strategies for seizure detection. METHODS: Thirty patients with suspected epilepsy admitted for video electroencephalography (EEG) monitoring were equipped with a wearable multi-modal setup capable of continuous recording of electrocardiography (ECG), accelerometry (ACM) and behind-the-ear EEG. A support vector machine (SVM) algorithm was trained for cross-modal automated seizure detection. Visualizations of multi-modal time series data were used to generate ideas for seizure detection strategies. RESULTS: Three patients had more than five seizures and were eligible for SVM classification. Classification of 47 focal tonic seizures in one patient yielded a sensitivity of 84% with a false alarm rate (FAR) of 8/24 h. In two patients, each with nine focal non-motor seizures, it yielded a sensitivity of 100% with FARs of 13/24 h and 5/24 h, respectively. Visual comparisons of features were used to identify strategies for seizure detection in future research. CONCLUSIONS: Multi-modal monitoring in epilepsy using wearables is feasible, and automatic seizure detection may benefit from multiple modalities compared to uni-modal EEG. SIGNIFICANCE: This study is unique in exploring a combination of wearable EEG, ECG and ACM and can help inform future research on monitoring of epilepsy.


Subject(s)
Epilepsy , Wearable Electronic Devices , Algorithms , Electroencephalography , Humans , Pilot Projects , Seizures/diagnosis
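The sensitivity and false alarm rate (FAR) figures quoted above can be computed with an event-based evaluation along these lines; the 30 s matching tolerance and all timings below are illustrative assumptions, not the study's actual scoring rules.

```python
def detection_metrics(detections, seizures, record_hours, tolerance=30.0):
    """Event-based evaluation of a seizure detector.

    A seizure counts as detected if any alarm falls within `tolerance`
    seconds of its onset; each alarm can match at most one seizure.
    Remaining alarms are false alarms, reported per 24 h of recording.
    All times are in seconds from the start of the recording."""
    hits = 0
    used = set()
    for onset in seizures:
        for i, t in enumerate(detections):
            if i not in used and abs(t - onset) <= tolerance:
                used.add(i)
                hits += 1
                break
    sensitivity = hits / len(seizures) if seizures else 0.0
    far_per_24h = (len(detections) - len(used)) * 24.0 / record_hours
    return sensitivity, far_per_24h
```

A detection within tolerance of an onset counts as a hit; every unmatched alarm counts toward the FAR, normalised to a 24 h period so that results from recordings of different lengths are comparable.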
4.
Cortex ; 147: 9-23, 2022 02.
Article in English | MEDLINE | ID: mdl-34998084

ABSTRACT

Gaze patterns during face perception have been shown to relate to psychiatric symptoms. Standard analysis of gaze behavior includes calculating fixations within arbitrarily predetermined areas of interest. In contrast to this approach, we present an objective, data-driven method for the analysis of gaze patterns and their relation to diagnostic test scores. This method was applied to data acquired in an adult sample (N = 111) of psychiatry outpatients while they freely looked at images of human faces. Dimensional symptom scores of autism, attention deficit, and depression were collected. A linear regression model based on Principal Component Analysis coefficients computed for each participant was used to model symptom scores. We found that specific components of gaze patterns predicted autistic traits as well as depression symptoms. Gaze patterns shifted away from the eyes with increasing autism traits, a well-known effect. Additionally, the model revealed a lateralization component, with a reduction of the left visual field bias increasing with both autistic traits and depression symptoms independently. Taken together, our model provides a data-driven alternative for gaze data analysis, which can be applied to dimensionally-, rather than categorically-defined clinical subgroups within a variety of contexts. Methodological and clinical contributions of this approach are discussed.


Subject(s)
Autistic Disorder , Facial Recognition , Adult , Eye , Face , Ocular Fixation , Humans
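The pipeline described above, principal components of gaze patterns feeding a linear regression on symptom scores, can be sketched in pure Python. Power iteration extracts only the leading component here, whereas the study used full PCA; the data and variable names are hypothetical.

```python
import random

def first_pc(data, iters=50, seed=0):
    """Leading principal component of mean-centred data via power iteration
    on X^T X, plus each observation's score on that component."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    X = [[row[j] - means[j] for j in range(d)] for row in data]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(d)]
    for _ in range(iters):
        Xv = [sum(x[j] * v[j] for j in range(d)) for x in X]               # X v
        w = [sum(X[i][j] * Xv[i] for i in range(n)) for j in range(d)]     # X^T (X v)
        norm = sum(c * c for c in w) ** 0.5
        v = [c / norm for c in w]
    scores = [sum(X[i][j] * v[j] for j in range(d)) for i in range(n)]
    return v, scores

def fit_line(x, y):
    """Ordinary least squares for y ~ a + b*x (component score vs. symptom score)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b
```

Each participant's gaze map would be flattened into one row of `data`; the regression then tests whether the participant's score on a component predicts a symptom scale.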
5.
PLoS One ; 16(2): e0246986, 2021.
Article in English | MEDLINE | ID: mdl-33606815

ABSTRACT

Speech is perceived with both the ears and the eyes. Adding congruent visual speech improves the perception of a faint auditory speech stimulus, whereas adding incongruent visual speech can alter the perception of the utterance. The latter phenomenon is exemplified by the McGurk illusion, where an auditory stimulus such as "ba" dubbed onto a visual stimulus such as "ga" produces the illusion of hearing "da". Bayesian models of multisensory perception suggest that both the enhancement and the illusion case can be described as a two-step process of binding (informed by prior knowledge) and fusion (informed by the information reliability of each sensory cue). However, no study to date has accounted for how binding and fusion each contribute to audiovisual speech perception. In this study, we expose subjects to both congruent and incongruent audiovisual speech, manipulating the binding and the fusion stages simultaneously by varying the temporal offset (binding) and the auditory and visual signal-to-noise ratios (fusion). We fit two Bayesian models to the behavioural data and show that both can account for the enhancement effect in congruent audiovisual speech as well as the McGurk illusion. This modelling approach allows us to disentangle the effects of binding and fusion on behavioural responses. Moreover, we find that these models have greater predictive power than a forced-fusion model. This study provides a systematic and quantitative approach to measuring audiovisual integration in the perception of the McGurk illusion as well as congruent audiovisual speech, which we hope will inform future work on audiovisual speech perception.


Subject(s)
Illusions , Biological Models , Speech Perception/physiology , Adult , Bayes Theorem , Female , Humans , Male , Signal-to-Noise Ratio , Visual Perception , Young Adult
6.
PLoS One ; 14(7): e0219744, 2019.
Article in English | MEDLINE | ID: mdl-31310616

ABSTRACT

Speech perception is influenced by vision through a process of audiovisual integration. This is demonstrated by the McGurk illusion where visual speech (for example /ga/) dubbed with incongruent auditory speech (such as /ba/) leads to a modified auditory percept (/da/). Recent studies have indicated that perception of the incongruent speech stimuli used in McGurk paradigms involves mechanisms of both general and audiovisual speech specific mismatch processing and that general mismatch processing modulates induced theta-band (4-8 Hz) oscillations. Here, we investigated whether the theta modulation merely reflects mismatch processing or, alternatively, audiovisual integration of speech. We used electroencephalographic recordings from two previously published studies using audiovisual sine-wave speech (SWS), a spectrally degraded speech signal sounding nonsensical to naïve perceivers but perceived as speech by informed subjects. Earlier studies have shown that informed, but not naïve subjects integrate SWS phonetically with visual speech. In an N1/P2 event-related potential paradigm, we found a significant difference in theta-band activity between informed and naïve perceivers of audiovisual speech, suggesting that audiovisual integration modulates induced theta-band oscillations. In a McGurk mismatch negativity paradigm (MMN) where infrequent McGurk stimuli were embedded in a sequence of frequent audio-visually congruent stimuli we found no difference between congruent and McGurk stimuli. The infrequent stimuli in this paradigm are violating both the general prediction of stimulus content, and that of audiovisual congruence. Hence, we found no support for the hypothesis that audiovisual mismatch modulates induced theta-band oscillations. We also did not find any effects of audiovisual integration in the MMN paradigm, possibly due to the experimental design.


Subject(s)
Auditory Perception , Oscillometry , Speech Perception , Speech/physiology , Visual Perception , Acoustic Stimulation , Cluster Analysis , Electrodes , Electroencephalography , Evoked Potentials , Auditory Evoked Potentials , Humans , Illusions , Language , Male , Phonetics , Photic Stimulation , Computer-Assisted Signal Processing
7.
Eur J Neurosci ; 46(10): 2578-2583, 2017 Nov.
Article in English | MEDLINE | ID: mdl-28976045

ABSTRACT

Incongruent audiovisual speech stimuli can lead to perceptual illusions such as fusions or combinations. Here, we investigated the underlying audiovisual integration process by measuring ERPs. We observed that visual speech-induced suppression of P2 amplitude (which is generally taken as a measure of audiovisual integration) for fusions was similar to suppression obtained with fully congruent stimuli, whereas P2 suppression for combinations was larger. We argue that these effects arise because the phonetic incongruency is solved differently for both types of stimuli.


Subject(s)
Illusions/physiology , Speech Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Electroencephalography , Evoked Potentials , Female , Humans , Male , Phonetics , Photic Stimulation
8.
Front Psychol ; 6: 435, 2015.
Article in English | MEDLINE | ID: mdl-25972819

ABSTRACT

Lesions to Broca's area cause aphasia characterized by a severe impairment of the ability to speak, with comparatively intact speech perception. However, some studies have found effects on speech perception under adverse listening conditions, indicating that Broca's area is also involved in speech perception. While these studies have focused on auditory speech perception other studies have shown that Broca's area is activated by visual speech perception. Furthermore, one preliminary report found that a patient with Broca's aphasia did not experience the McGurk illusion suggesting that an intact Broca's area is necessary for audiovisual integration of speech. Here we describe a patient with Broca's aphasia who experienced the McGurk illusion. This indicates that an intact Broca's area is not necessary for audiovisual integration of speech. The McGurk illusions this patient experienced were atypical, which could be due to Broca's area having a more subtle role in audiovisual integration of speech. The McGurk illusions of a control subject with Wernicke's aphasia were, however, also atypical. This indicates that the atypical McGurk illusions were due to deficits in speech processing that are not specific to Broca's aphasia.

9.
J Acoust Soc Am ; 137(5): 2884-91, 2015 May.
Article in English | MEDLINE | ID: mdl-25994715

ABSTRACT

Speech perception is facilitated by seeing the articulatory mouth movements of the talker. This is due to perceptual audiovisual integration, which also causes the McGurk-MacDonald illusion, and for which a comprehensive computational account is still lacking. Decades of research have largely focused on the fuzzy logical model of perception (FLMP), which provides excellent fits to experimental observations but also has been criticized for being too flexible, post hoc and difficult to interpret. The current study introduces the early maximum likelihood estimation (MLE) model of audiovisual integration to speech perception along with three model variations. In early MLE, integration is based on a continuous internal representation before categorization, which can make the model more parsimonious by imposing constraints that reflect experimental designs. The study also shows that cross-validation can evaluate models of audiovisual integration based on typical data sets taking both goodness-of-fit and model flexibility into account. All models were tested on a published data set previously used for testing the FLMP. Cross-validation favored the early MLE while more conventional error measures favored more complex models. This difference between conventional error measures and cross-validation was found to be indicative of over-fitting in more complex models such as the FLMP.


Subject(s)
Psychological Models , Mouth/physiology , Movement , Speech Perception , Visual Perception , Acoustic Stimulation , Cues (Psychology) , Humans , Likelihood Functions , Photic Stimulation , Psychoacoustics , Reproducibility of Results
10.
Psychophysiology ; 52(1): 32-45, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25048104

ABSTRACT

In this study, we aim to automatically identify multiple artifact types in EEG. We used multinomial regression to classify independent components of EEG data, selecting from 65 spatial, spectral, and temporal features of independent components using forward selection. The classifier identified neural and five non-neural types of components. Between subjects within studies, high classification performance was obtained. Between studies, however, classification was more difficult. For neural versus non-neural classification, performance was on par with previous results obtained by others. We found that automatic separation of multiple artifact classes is possible with a small feature set. Our method can reduce manual workload and allow for the selective removal of artifact classes. Identifying artifacts during EEG recording could be used to instruct subjects to refrain from the activity causing them.


Subject(s)
Artifacts , Statistical Data Interpretation , Electroencephalography/classification , Electroencephalography/standards , Adult , Humans
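The forward selection used to pick from the 65 component features can be sketched generically; `score_fn` would be cross-validated classification accuracy in the study's setting, while the toy score below is purely illustrative.

```python
def forward_selection(features, score_fn, max_features=None):
    """Greedy forward selection: repeatedly add the feature whose inclusion
    most improves score_fn (e.g. cross-validated accuracy of a multinomial
    regression); stop when no candidate improves on the current subset."""
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining and (max_features is None or len(selected) < max_features):
        cand = max(remaining, key=lambda f: score_fn(selected + [f]))
        cand_score = score_fn(selected + [cand])
        if cand_score <= best:
            break  # no remaining feature helps
        selected.append(cand)
        remaining.remove(cand)
        best = cand_score
    return selected, best
```

Because each round re-scores the classifier with one more feature, the procedure naturally ends with the small feature set the abstract reports, at the cost of possibly missing features that only help in combination.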
11.
Neuropsychologia ; 66: 48-54, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25447378

ABSTRACT

We perceive identity, expression and speech from faces. While perception of identity and expression depends crucially on the configuration of facial features, it is less clear whether this holds for visual speech perception. Facial configuration is poorly perceived in upside-down faces, as demonstrated by the Thatcher illusion, in which the orientation of the eyes and mouth with respect to the face is inverted (Thatcherization). This gives the face a grotesque appearance, but only when the face is upright. Thatcherization can likewise disrupt visual speech perception, but only when the face is upright, indicating that facial configuration can be important for visual speech perception. This effect can propagate to auditory speech perception through audiovisual integration, so that Thatcherization disrupts the McGurk illusion, in which visual speech perception alters perception of an incongruent acoustic phoneme. This is known as the McThatcher effect. Here we show that the McThatcher effect is reflected in the McGurk mismatch negativity (MMN). The MMN is an event-related potential elicited by a change in auditory perception. The McGurk-MMN can be elicited by a change in auditory perception due to the McGurk illusion without any change in the acoustic stimulus. We found that Thatcherization disrupted a strong McGurk illusion, and a correspondingly strong McGurk-MMN, only for upright faces. This confirms that facial configuration can be important for audiovisual speech perception. For inverted faces we found a weaker McGurk illusion but, surprisingly, no MMN. We also found no correlation between the strength of the McGurk illusion and the amplitude of the McGurk-MMN. We suggest that this may be due to a threshold effect, such that a strong McGurk illusion is required to elicit the McGurk-MMN.


Subject(s)
Brain/physiology , Facial Expression , Visual Pattern Recognition/physiology , Recognition (Psychology)/physiology , Speech Perception/physiology , Adolescent , Adult , Electroencephalography , Auditory Evoked Potentials , Female , Humans , Illusions/physiology , Male , Young Adult
12.
Int J Psychophysiol ; 91(1): 54-66, 2014 Jan.
Article in English | MEDLINE | ID: mdl-23994206

ABSTRACT

Mobile brain imaging solutions, such as the Smartphone Brain Scanner, which combines low-cost wireless EEG sensors with open source software for real-time neuroimaging, may transform neuroscience experimental paradigms. Freed from the physical constraints of the lab, experimental paradigms can move into dynamic environments, allowing brain signals to be captured in everyday contexts. Using smartphones or tablets to access text or images may enable experimental designs capable of tracing emotional responses when shopping or consuming media, incorporating sensorimotor responses reflecting our actions into brain-machine interfaces, and facilitating neurofeedback training over extended periods. Even though the quality of consumer neuroheadsets is still lower than that of laboratory equipment and susceptible to environmental noise, we show that mobile neuroimaging solutions like the Smartphone Brain Scanner, complemented by 3D reconstruction or source separation techniques, may support a range of neuroimaging applications and thus become a valuable addition to high-end neuroimaging solutions.


Subject(s)
Brain Mapping , Brain/physiology , Cell Phone , Neurofeedback/instrumentation , Neurofeedback/methods , Neuroimaging , Adult , Brain-Computer Interfaces , Electroencephalography , Emotions , Female , Fingers , Functional Laterality , Humans , Computer-Assisted Image Processing , Male , Photic Stimulation , Psychomotor Performance , Young Adult
13.
Brain Lang ; 126(2): 188-92, 2013 Aug.
Article in English | MEDLINE | ID: mdl-23774289

ABSTRACT

Pure alexia is a selective deficit in reading, following lesions to the posterior left hemisphere. Writing and other language functions remain intact in these patients. Whether pure alexia is caused by a primary problem in visual perception is highly debated. A recent hypothesis suggests that a low level deficit - reduced sensitivity to particular spatial frequencies - is the underlying cause. We tested this hypothesis in a pure alexic patient (LK), using a sensitive psychophysical paradigm to examine her performance with simple patterns of different spatial frequency. We find that both in a detection and a classification task, LK's contrast sensitivity is comparable to normal controls for all spatial frequencies. Thus, reduced spatial frequency sensitivity does not constitute a general explanation for pure alexia, suggesting that the core deficit in this disorder is at a higher level in the visual processing stream.


Subject(s)
Pure Alexia/physiopathology , Contrast Sensitivity/physiology , Adult , Pure Alexia/etiology , Brain/physiopathology , Female , Humans , Magnetic Resonance Imaging , Reading
14.
J Exp Psychol Hum Percept Perform ; 38(2): 498-514, 2012 Apr.
Article in English | MEDLINE | ID: mdl-22288689

ABSTRACT

The psychometric function of single-letter identification is typically described as a function of stimulus intensity. However, the effect of stimulus exposure duration on letter identification remains poorly described. This is surprising because exposure duration has played a central role in modeling performance in whole and partial report (Shibuya & Bundesen, 1988). We therefore experimentally investigated visual letter identification as a function of exposure duration. We compared the exponential, the gamma, and the Weibull psychometric functions, all with a temporal offset included, as well as the ex-Gaussian, the log-logistic, and finally the squared-logistic, a psychometric function that to our knowledge has not been described before. The log-logistic and the squared-logistic psychometric functions fit the experimental data well. We also conducted an experiment testing the ability of the psychometric functions to fit single-letter identification data at different stimulus contrast levels; here, too, the same psychometric functions prevailed. Finally, after insertion into Bundesen's Theory of Visual Attention (Bundesen, 1990), the same psychometric functions enable closer fits to data from a previous whole and partial report experiment.


Subject(s)
Attention , Discrimination Learning , Visual Pattern Recognition , Reaction Time , Adult , Contrast Sensitivity , Humans , Male , Short-Term Memory , Perceptual Masking , Visual Fields , Young Adult
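A psychometric function of exposure duration with a temporal offset, such as the Weibull variant compared above, can be written and fitted as follows. The grid-search least-squares fit is a crude illustrative stand-in for proper maximum-likelihood fitting, and all parameter values are hypothetical.

```python
import math

def weibull_offset(t, t0, tau, k):
    """Weibull psychometric function of exposure duration t (ms) with a
    temporal offset t0: identification probability is 0 up to t0 and then
    rises toward 1 with scale tau and shape k."""
    if t <= t0:
        return 0.0
    return 1.0 - math.exp(-((t - t0) / tau) ** k)

def grid_fit(ts, ps, t0s, taus, ks):
    """Least-squares fit over a parameter grid; returns (sse, t0, tau, k)."""
    best = None
    for t0 in t0s:
        for tau in taus:
            for k in ks:
                sse = sum((weibull_offset(t, t0, tau, k) - p) ** 2
                          for t, p in zip(ts, ps))
                if best is None or sse < best[0]:
                    best = (sse, t0, tau, k)
    return best
```

With noiseless data generated from the function itself, the grid fit recovers the generating parameters exactly, which is a useful sanity check before fitting real identification proportions.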
15.
Exp Brain Res ; 208(3): 447-57, 2011 Feb.
Article in English | MEDLINE | ID: mdl-21188364

ABSTRACT

Speech perception integrates auditory and visual information. This is evidenced by the McGurk illusion where seeing the talking face influences the auditory phonetic percept and by the audiovisual detection advantage where seeing the talking face influences the detectability of the acoustic speech signal. Here, we show that identification of phonetic content and detection can be dissociated as speech-specific and non-specific audiovisual integration effects. To this end, we employed synthetically modified stimuli, sine wave speech (SWS), which is an impoverished speech signal that only observers informed of its speech-like nature recognize as speech. While the McGurk illusion only occurred for informed observers, the audiovisual detection advantage occurred for naïve observers as well. This finding supports a multistage account of audiovisual integration of speech in which the many attributes of the audiovisual speech signal are integrated by separate integration processes.


Subject(s)
Acoustic Stimulation/methods , Photic Stimulation/methods , Psychomotor Performance/physiology , Speech Perception/physiology , Visual Perception/physiology , Adult , Audiovisual Aids , Auditory Perception/physiology , Female , Humans , Male , Phonetics , Young Adult
16.
Vision Res ; 48(25): 2537-44, 2008 Nov.
Article in English | MEDLINE | ID: mdl-18801382

ABSTRACT

A change in sound intensity can facilitate luminance change detection. We found that this effect did not depend on whether sound intensity and luminance increased or decreased. In contrast, luminance change identification was strongly influenced by the congruence of the luminance and sound intensity changes, leaving only unsigned stimulus transients as the basis for audiovisual integration. Facilitation of luminance detection occurred even with varying audiovisual stimulus onset asynchrony, and even when the sound lagged the luminance change by 75 ms, supporting the interpretation that perceptual integration, rather than a reduction of temporal uncertainty or effects of attention, caused the effect.


Subject(s)
Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Operant Conditioning , Female , Humans , Male , Photic Stimulation , Psychophysics , Reaction Time , Sensory Thresholds , Young Adult
17.
Cognition ; 96(1): B13-22, 2005 May.
Article in English | MEDLINE | ID: mdl-15833302

ABSTRACT

In face-to-face conversation speech is perceived by ear and eye. We studied the prerequisites of audio-visual speech perception by using perceptually ambiguous sine wave replicas of natural speech as auditory stimuli. When the subjects were not aware that the auditory stimuli were speech, they showed only negligible integration of auditory and visual stimuli. When the same subjects learned to perceive the same auditory stimuli as speech, they integrated the auditory and visual stimuli in a similar manner as natural speech. These results demonstrate the existence of a multisensory speech-specific mode of perception.


Subject(s)
Association Learning , Attention , Lipreading , Phonetics , Speech Perception , Discrimination Learning , Humans , Sound Spectrography , Speech Acoustics
18.
Neurosci Lett ; 380(1-2): 155-60, 2005.
Article in English | MEDLINE | ID: mdl-15854769

ABSTRACT

Maximum likelihood models of multisensory integration are theoretically attractive because the goals and assumptions of sensory information processing are explicitly stated in such optimal models. When subjects perceive stimuli categorically, as opposed to on a continuous scale, Maximum Likelihood Integration (MLI) can occur before or after categorization: early or late. We introduce early MLI and apply it to the audiovisual perception of rapid beeps and flashes. We compare it to late MLI and show that early MLI is a better fitting and more parsimonious model. We also show that early MLI is better able to account for the effects of information reliability, modality appropriateness and intermodal attention, which affect multisensory perception.


Subject(s)
Attention/physiology , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Animals , Radiation Dose-Response Relationship , Neurological Models , Photic Stimulation/methods , Reaction Time/physiology , Sensory Thresholds/physiology , Time Factors
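The core of Maximum Likelihood Integration is reliability weighting: each cue's estimate is weighted by its inverse variance, which is optimal when the cues carry independent Gaussian noise. A minimal sketch follows; the distinction between early and late MLI, i.e. whether this fusion happens before or after categorization, is not captured here.

```python
def mle_fusion(x_a, var_a, x_v, var_v):
    """Reliability-weighted (maximum likelihood) fusion of an auditory
    estimate (x_a, var_a) and a visual estimate (x_v, var_v):
    the less noisy cue gets the larger weight."""
    w_a, w_v = 1.0 / var_a, 1.0 / var_v
    fused = (w_a * x_a + w_v * x_v) / (w_a + w_v)
    fused_var = 1.0 / (w_a + w_v)
    return fused, fused_var
```

The fused variance is always smaller than either single-cue variance, the signature prediction of MLE integration, and the fused estimate is pulled toward whichever modality is more reliable, consistent with the information reliability effects discussed above.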
19.
Brain Res Cogn Brain Res ; 21(3): 301-8, 2004 Nov.
Article in English | MEDLINE | ID: mdl-15511646

ABSTRACT

Information processing in auditory and visual modalities interacts in many circumstances. Spatially and temporally coincident acoustic and visual information are often bound together to form multisensory percepts [B.E. Stein, M.A. Meredith, The Merging of the Senses, A Bradford Book, Cambridge, MA, (1993), 211 pp.; Psychol. Bull. 88 (1980) 638]. Shams et al. recently reported a multisensory fission illusion where a single flash is perceived as two flashes when two rapid tone beeps are presented concurrently [Nature 408 (2000) 788; Cogn. Brain Res. 14 (2002) 147]. The absence of a fusion illusion, where two flashes would fuse into one when accompanied by one beep, indicated a perceptual rather than cognitive nature of the illusion. Here we report both fusion and fission illusions using stimuli very similar to those used by Shams et al. By instructing subjects to count beeps rather than flashes and decreasing the sound intensity to near threshold, we also created a corresponding visually induced auditory illusion. We discuss our results in light of four hypotheses of multisensory integration, each advocating a condition for modality dominance. According to the discontinuity hypothesis [Cogn. Brain Res. 14 (2002) 147], the modality in which stimulation is discontinuous dominates. The modality appropriateness hypothesis [Psychol. Bull. 88 (1980) 638] states that the modality more appropriate for the task at hand dominates. The information reliability hypothesis [J.-L. Schwartz, J. Robert-Ribes, P. Escudier, Ten years after Summerfield: a taxonomy of models for audio-visual fusion in speech perception. In: R. Campbell (Ed.), Hearing by Eye: The Psychology of Lipreading, Lawrence Erlbaum Associates, Hove, UK, (1998), pp. 3-51] claims that the modality providing more reliable information dominates. In their strong forms, none of these three hypotheses applies to our data. We restate the hypotheses in weak forms, so that discontinuity, modality appropriateness and information reliability are factors that increase a modality's tendency to dominate. All these factors are important in explaining our data. Finally, we interpret the effect of instructions in light of the directed attention hypothesis, which states that the attended modality is dominant [Psychol. Bull. 88 (1980) 638].


Subject(s)
Auditory Perception/physiology , Illusions/physiology , Time Perception/physiology , Visual Perception/physiology , Acoustic Stimulation/methods , Adult , Female , Humans , Male , Odds Ratio , Photic Stimulation/methods