Results 1 - 20 of 39
1.
bioRxiv ; 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38370798

ABSTRACT

The prevalence of synthetic talking faces in both commercial and academic environments is increasing as the technology to generate them grows more powerful and available. While it has long been known that seeing the face of the talker improves human perception of speech-in-noise, recent studies have shown that synthetic talking faces generated by deep neural networks (DNNs) can also improve human perception of speech-in-noise. However, in previous studies the benefit provided by DNN synthetic faces was only about half that of real human talkers. We sought to determine whether synthetic talking faces generated by an alternative method would provide a greater perceptual benefit. The facial action coding system (FACS) is a comprehensive system for measuring visually discernible facial movements. Because the action units that comprise FACS are linked to specific muscle groups, synthetic talking faces generated by FACS might have greater verisimilitude than DNN synthetic faces, which do not reference an explicit model of the facial musculature. We tested the ability of human observers to identify speech-in-noise accompanied by a blank screen; the real face of the talker; or a synthetic talking face generated by either DNN or FACS. We replicated previous findings of a large benefit for seeing the face of a real talker for speech-in-noise perception and a smaller benefit for DNN synthetic faces. FACS faces also improved perception, but only to the same degree as DNN faces. Analysis at the phoneme level showed that the performance of DNN and FACS faces was particularly poor for phonemes that involve interactions between the teeth and lips, such as /f/, /v/, and /th/. Inspection of single video frames revealed that the characteristic visual features for these phonemes were weak or absent in synthetic faces. Modeling the real vs. synthetic difference showed that increasing the realism of a few phonemes could substantially increase the overall perceptual benefit of synthetic faces, providing a roadmap for improving communication in this rapidly developing domain.
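
As a rough illustration of the phoneme-level modeling described above, the sketch below projects how the overall benefit would change if the teeth-and-lip phonemes were rendered as well as the real face. All accuracy values, the phoneme set, and the equal-frequency weighting are illustrative assumptions, not the paper's data.

    import numpy as np

    # Hypothetical per-phoneme accuracies (proportion correct in noise);
    # values are illustrative, not the study's measurements.
    phonemes = ["f", "v", "th", "b", "m", "p", "s"]
    real     = np.array([0.80, 0.78, 0.75, 0.70, 0.72, 0.69, 0.55])
    synth    = np.array([0.45, 0.44, 0.40, 0.65, 0.68, 0.64, 0.52])
    weights  = np.ones(len(phonemes)) / len(phonemes)  # assumed equal frequency

    def overall(acc, w):
        return float(np.dot(acc, w))

    # Project the overall benefit if the teeth/lip phonemes matched the real face
    improved = synth.copy()
    for i, p in enumerate(phonemes):
        if p in ("f", "v", "th"):
            improved[i] = real[i]

    print(f"synthetic overall: {overall(synth, weights):.3f}")
    print(f"after fixing /f v th/: {overall(improved, weights):.3f}")
    print(f"real-face overall: {overall(real, weights):.3f}")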

2.
eNeuro ; 10(10)2023 10.
Article in English | MEDLINE | ID: mdl-37857509

ABSTRACT

Intracranial electroencephalography (iEEG) provides a unique opportunity to record and stimulate neuronal populations in the human brain. A key step in neuroscience inference from iEEG is localizing the electrodes relative to individual subject anatomy and identified regions in brain atlases. We describe a new software tool, Your Advanced Electrode Localizer (YAEL), that provides an integrated solution for every step of the electrode localization process. YAEL is compatible with all common data formats to provide an easy-to-use, drop-in replacement for problematic existing workflows that require users to grapple with multiple programs and interfaces. YAEL's automatic extrapolation and interpolation functions speed localization, especially important in patients with many implanted stereotactic (sEEG) electrode shafts. The graphical user interface is presented in a web browser for broad compatibility and includes an interactive 3D viewer for easier localization of nearby sEEG contacts. After localization is complete, users may enter or import data into YAEL's 3D viewer to create publication-ready visualizations of electrodes and brain anatomy, including identified brain areas from atlases; the response to experimental tasks measured with iEEG; and clinical measures such as epileptiform activity or the results of electrical stimulation mapping. YAEL is free and open source and does not depend on any commercial software. Installation instructions for Mac, Windows, and Linux are available at https://yael.wiki.


Subjects
Electrocorticography, Electroencephalography, Humans, Electroencephalography/methods, Electrocorticography/methods, Brain/physiology, Brain Mapping/methods, Implanted Electrodes
3.
Neuroimage ; 278: 120271, 2023 09.
Article in English | MEDLINE | ID: mdl-37442310

ABSTRACT

Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory, and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex, and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was less for audiovisual speech than for auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was less in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single-word and entire-sentence stimuli, suggesting that they were driven by intelligibility rather than by the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
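
A minimal sketch of the core dissimilarity measure, assuming hypothetical voxel-by-trial response matrices; the study itself used individual-differences multidimensional scaling, which this simple correlation-distance version only approximates.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical voxel response patterns in superior temporal cortex:
    # rows = trials, columns = voxels. Illustrative random data.
    clear = rng.normal(1.0, 1.0, size=(40, 200))
    noisy = clear + rng.normal(0.0, 0.8, size=(40, 200))  # degraded patterns

    def pattern_dissimilarity(a, b):
        """1 - Pearson correlation between mean response patterns."""
        ma, mb = a.mean(axis=0), b.mean(axis=0)
        r = np.corrcoef(ma, mb)[0, 1]
        return 1.0 - r

    d = pattern_dissimilarity(clear, noisy)
    print(f"clear-vs-noisy neural dissimilarity: {d:.3f}")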


Subjects
Speech Perception, Speech, Humans, Magnetic Resonance Imaging, Individuality, Visual Perception/physiology, Speech Perception/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Speech Intelligibility, Acoustic Stimulation/methods
4.
Hum Brain Mapp ; 44(13): 4738-4753, 2023 09.
Article in English | MEDLINE | ID: mdl-37417774

ABSTRACT

Lesion-behavior mapping (LBM) provides a statistical map of the association between voxel-wise brain damage and individual differences in behavior. To understand whether two behaviors are mediated by damage to distinct regions, researchers often compare LBM weight outputs using either the Overlap method or the Correlation method. However, these methods lack statistical criteria for determining whether two LBMs are distinct or the same, and they are disconnected from a major goal of LBM: predicting behavior from brain damage. Without such criteria, researchers may draw conclusions from numeric differences between LBMs that are irrelevant to predicting behavior. We developed and validated a predictive validity comparison method (PVC) that establishes a statistical criterion for comparing two LBMs using predictive accuracy: two LBMs are distinct if and only if they provide unique predictive power for the behaviors being assessed. We applied PVC to two lesion-behavior stroke data sets, demonstrating its utility for determining when behaviors arise from the same versus different lesion patterns. Using region-of-interest-based simulations derived from proportion damage in a large data set (n = 131), PVC accurately detected when behaviors were mediated by different regions (high sensitivity) versus the same region (high specificity). Both the Overlap and Correlation methods performed poorly on the simulated data. By objectively determining whether two behavioral deficits can be explained by a single versus distinct patterns of brain damage, PVC provides a critical advance in establishing the brain bases of behavior. We have developed and released a GUI-driven web app to encourage widespread adoption.
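
A minimal sketch of the predictive-validity idea, under assumptions: synthetic lesion data, ridge regression as the predictive model, and cross-validated R^2 as the accuracy criterion. The actual PVC statistic may differ; this only shows the logic of testing whether one behavior's lesion map adds unique predictive power.

    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import cross_val_predict, cross_val_score

    rng = np.random.default_rng(1)

    # Hypothetical data: proportion damage in 50 regions for 131 patients,
    # plus two behavioral scores driven by different regions (illustrative).
    X = rng.uniform(0, 1, size=(131, 50))
    beh1 = X[:, :5].sum(axis=1) + rng.normal(0, 0.3, 131)    # regions 0-4
    beh2 = X[:, 5:10].sum(axis=1) + rng.normal(0, 0.3, 131)  # regions 5-9

    def predictive_r2(features, y):
        """Cross-validated R^2 of a ridge model predicting behavior."""
        return cross_val_score(RidgeCV(), features, y, cv=5, scoring="r2").mean()

    # Predictions of behavior 1 from its own lesion-behavior model
    pred_beh1 = cross_val_predict(RidgeCV(), X, beh1, cv=5).reshape(-1, 1)

    r2_own    = predictive_r2(X, beh2)          # beh2 from its own lesion map
    r2_shared = predictive_r2(pred_beh1, beh2)  # beh2 via beh1's model only
    print(f"own map R^2 = {r2_own:.2f}, via beh1 R^2 = {r2_shared:.2f}")
    # If the own-map R^2 clearly exceeds the shared R^2, the two LBMs
    # carry distinct, behavior-specific information.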


Subjects
Brain Injuries, Stroke, Humans, Brain Mapping, Brain/diagnostic imaging, Brain/pathology, Stroke/diagnostic imaging, Stroke/pathology, Brain Injuries/pathology, Head, Magnetic Resonance Imaging
5.
Brain ; 146(10): 4366-4377, 2023 10 03.
Article in English | MEDLINE | ID: mdl-37293814

ABSTRACT

Emotion is represented in limbic and prefrontal brain areas, herein termed the affective salience network (ASN). Within the ASN, there are substantial unknowns about how valence and emotional intensity are processed; specifically, it is unknown which nodes are associated with affective bias (a phenomenon in which participants interpret emotions in a manner consistent with their own mood). A recently developed feature detection approach ('specparam') was used to select dominant spectral features from human intracranial electrophysiological data, revealing affective specialization within specific nodes of the ASN. Spectral analysis of dominant features at the channel level suggests that the dorsal anterior cingulate cortex (dACC), anterior insula, and ventromedial prefrontal cortex (vmPFC) are sensitive to both valence and intensity, while the amygdala is primarily sensitive to intensity. Akaike information criterion model comparisons corroborated the spectral analysis findings, suggesting that all four nodes are more sensitive to intensity than to valence. The data also revealed that activity in the dACC and vmPFC was predictive of the extent of affective bias in ratings of facial expressions, a proxy measure of instantaneous mood. To examine the causal role of the dACC in affective experience, 130 Hz continuous stimulation was applied to the dACC while patients viewed and rated emotional faces. Faces were rated significantly happier during stimulation, even after accounting for differences in baseline ratings. Together the data suggest a causal role for the dACC in the processing of external affective stimuli.
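
The dominant-feature selection step can be sketched with the specparam tool (published as the Python package fooof), which separates the aperiodic 1/f background from oscillatory peaks. The spectrum below is synthetic and the parameter choices are assumptions.

    import numpy as np
    from fooof import FOOOF  # the 'specparam' tool, distributed as fooof

    # Synthetic power spectrum: 1/f aperiodic background plus a beta-band peak
    freqs = np.linspace(1, 50, 200)
    spectrum = 10 / freqs + 0.5 * np.exp(-((freqs - 20) ** 2) / (2 * 2**2))

    fm = FOOOF(max_n_peaks=3, verbose=False)
    fm.fit(freqs, spectrum, [1, 50])

    # Each row: center frequency, power above the aperiodic fit, bandwidth
    print(fm.peak_params_)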


Subjects
Brain Mapping, Brain, Humans, Brain/physiology, Emotions/physiology, Affect, Electroencephalography, Magnetic Resonance Imaging
6.
J Neurosurg ; : 1-11, 2022 Mar 18.
Article in English | MEDLINE | ID: mdl-35303696

ABSTRACT

OBJECTIVE: Magnetoencephalography (MEG) is a useful component of the presurgical evaluation of patients with epilepsy. Due to its high spatiotemporal resolution, MEG often provides additional information to the clinician when forming hypotheses about the epileptogenic zone (EZ). With the growing utilization of stereo-electroencephalography (sEEG), MEG clusters are increasingly used to guide sEEG electrode targeting. However, there are no predefined features of an MEG cluster that predict ictal activity. This study aims to determine which MEG cluster characteristics are predictive of the EZ. METHODS: The authors retrospectively analyzed all patients who had an MEG study (2017-2021) and underwent subsequent sEEG evaluation. MEG dipoles and sEEG electrodes were reconstructed in the same coordinate space to calculate overlap among individual contacts on electrodes and MEG clusters. MEG cluster features, including number of dipoles, proximity, angle, density, magnitude, confidence parameters, and brain region, were used to predict ictal activity in sEEG. Logistic regression was used to identify important cluster features and to train a binary classifier to predict ictal activity. RESULTS: Across 40 included patients, 196 electrodes (42.2%) sampled MEG clusters. Electrodes that sampled MEG clusters had higher rates of ictal and interictal activity than those that did not (ictal 68.4% vs 39.8%, p < 0.001; interictal 71.9% vs 44.6%, p < 0.001). Logistic regression revealed that the number of dipoles (odds ratio [OR] 1.09, 95% confidence interval [CI] 1.04-1.14, t = 3.43) and confidence volume (OR 0.02, 95% CI 0.00-0.86, t = -2.032) were predictive of ictal activity. This model predicted ictal activity with 77.3% accuracy (sensitivity = 80%, specificity = 74%, C-statistic = 0.81). Using only the number of dipoles had a predictive accuracy of 75%, whereas a threshold between 14 and 17 dipoles in a cluster detected ictal activity with 75.9%-85.2% sensitivity. CONCLUSIONS: MEG clusters with approximately 14 or more dipoles are strong predictors of ictal activity and may be useful in the preoperative planning of sEEG implantation.
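
A sketch of the kind of logistic model described in the Results, using simulated electrode-level data; the coefficients, simulated relationships, and the simple 14-dipole screen at the end are illustrative, not the study's values.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)

    # Hypothetical electrodes: dipole count of the sampled MEG cluster and
    # cluster confidence volume (arbitrary units); labels = ictal activity.
    n = 196
    dipoles = rng.integers(1, 40, n)
    conf_vol = rng.uniform(0.1, 5.0, n)
    p = 1 / (1 + np.exp(-(0.15 * dipoles - 0.5 * conf_vol - 1.0)))
    ictal = rng.random(n) < p

    X = np.column_stack([dipoles, conf_vol])
    clf = LogisticRegression().fit(X, ictal)
    print("odds ratios:", np.exp(clf.coef_))  # >1 favors ictal, <1 protective

    # A simple dipole-count screen, echoing the reported ~14-dipole threshold
    flagged = dipoles >= 14
    print("sensitivity of a 14-dipole rule:", (flagged & ictal).sum() / ictal.sum())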

7.
J Exp Psychol Anim Learn Cogn ; 47(3): 384-392, 2021 Jul.
Article in English | MEDLINE | ID: mdl-34081496

ABSTRACT

Abstract concepts require individuals to identify relationships between novel stimuli. Previous studies have reported that the ability to learn abstract concepts is found in a wide range of species. With regard to a same/different concept, Clark's nutcrackers (Nucifraga columbiana) and black-billed magpies (Pica hudsonia), two corvid species, were shown to outperform other avian and primate species (Wright et al., 2017). Two additional corvid species, pinyon jays (Gymnorhinus cyanocephalus) and California scrub jays (Aphelocoma californica), chosen because they belong to a different clade than nutcrackers and magpies, were examined using the same set-size expansion procedure of the same/different task (the task used with nutcrackers and magpies) to evaluate whether this trait is common across the Corvidae lineage. In this task, concept learning is assessed with novel images after training. Results from the current study showed that when presented with novel stimuli after training with an 8-image set, discrimination accuracy did not differ significantly from chance for pinyon jays and California scrub jays, unlike the magpies and nutcrackers from previous studies, which showed partial transfer at that stage. However, concept learning improved with each set-size expansion, and the jays reached full concept learning with a 128-image set. This performance is similar to that of the other corvids and monkeys tested, all of which outperform pigeons. Results from the current study show a qualitative similarity in full abstract-concept learning across all species tested, with a quantitative difference in the set-size functions, highlighting the shared survival importance of mechanisms supporting abstract-concept learning for corvids and primates.


Subjects
Concept Formation, Learning, Animals, Birds
8.
Cortex ; 133: 371-383, 2020 12.
Article in English | MEDLINE | ID: mdl-33221701

ABSTRACT

The McGurk effect is a widely used measure of multisensory integration during speech perception. Two observations have raised questions about the validity of the effect as a tool for understanding speech perception. First, there is high variability in perception of the McGurk effect across different stimuli and observers. Second, across observers there is low correlation between McGurk susceptibility and recognition of visual speech paired with auditory speech-in-noise, another common measure of multisensory integration. Using the framework of the causal inference of multisensory speech (CIMS) model, we explored the relationship between the McGurk effect, syllable perception, and sentence perception in seven experiments with a total of 296 different participants. Perceptual reports revealed a relationship between the efficacy of different McGurk stimuli created from the same talker and perception of the auditory component of the McGurk stimuli presented in isolation, both with and without added noise. The CIMS model explained this strong stimulus-level correlation using the principles of noisy sensory encoding followed by optimal cue combination within a common representational space across speech types. Because the McGurk effect (but not speech-in-noise) requires the resolution of conflicting cues between modalities, there is an additional source of individual variability that can explain the weak observer-level correlation between McGurk and noisy speech. Power calculations show that detecting this weak correlation requires studies with many more participants than those conducted to date. Perception of the McGurk effect and other types of speech can be explained by a common theoretical framework that includes causal inference, suggesting that the McGurk effect is a valid and useful experimental tool.
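
The power argument can be made concrete with a standard Fisher-z sample-size calculation; this is a textbook approximation, not the paper's exact procedure.

    from math import atanh, ceil
    from scipy.stats import norm

    def n_for_correlation(r, alpha=0.05, power=0.80):
        """Approximate N needed to detect correlation r (two-sided test),
        using the Fisher z transformation."""
        z_a = norm.ppf(1 - alpha / 2)
        z_b = norm.ppf(power)
        return ceil(((z_a + z_b) / atanh(r)) ** 2 + 3)

    # A weak observer-level correlation needs far more participants than
    # the dozens typical of McGurk studies.
    for r in (0.1, 0.2, 0.3, 0.5):
        print(f"r = {r}: N ~ {n_for_correlation(r)}")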


Subjects
Illusions, Speech Perception, Acoustic Stimulation, Auditory Perception, Humans, Photic Stimulation, Recognition (Psychology), Speech, Visual Perception
9.
Neuroimage ; 223: 117341, 2020 12.
Article in English | MEDLINE | ID: mdl-32920161

ABSTRACT

Direct recording of neural activity from the human brain using implanted electrodes (iEEG, intracranial electroencephalography) is a fast-growing technique in human neuroscience. While the ability to record from the human brain with high spatial and temporal resolution has advanced our understanding, it generates staggering amounts of data: a single patient can be implanted with hundreds of electrodes, each sampled thousands of times a second for hours or days. The difficulty of exploring these vast datasets is the rate-limiting step in discovery. To overcome this obstacle, we created RAVE ("R Analysis and Visualization of iEEG"). All components of RAVE, including the underlying "R" language, are free and open source. User interactions occur through a web browser, making it transparent to the user whether the back-end data storage and computation are occurring locally, on a lab server, or in the cloud. Without writing a single line of computer code, users can create custom analyses, apply them to data from hundreds of iEEG electrodes, and instantly visualize the results on cortical surface models. Multiple types of plots are used to display analysis results, each of which can be downloaded as publication-ready graphics with a single click. RAVE consists of nearly 50,000 lines of code designed to prioritize an interactive user experience, reliability and reproducibility.


Subjects
Brain/physiology, Data Visualization, Electroencephalography, Computer-Assisted Image Processing/methods, Implanted Electrodes, Humans, Reproducibility of Results, Software
10.
J Neurosci ; 40(36): 6938-6948, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32727820

ABSTRACT

Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for multisensory speech perception since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial electroencephalography (iEEG) deconvolution. Individual stimuli always contained both auditory and visual speech, but jittering the onset asynchrony between modalities allowed the time course of the unisensory responses and the interaction between them to be independently estimated. We applied this procedure to electrodes implanted in human epilepsy patients (both male and female) over the posterior superior temporal gyrus (pSTG), a brain area known to be important for speech perception. iEEG deconvolution revealed sustained positive responses to visual-only speech and larger, phasic responses to auditory-only speech. Confirming results from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural computation. Leveraging the spatial resolution of iEEG, we extended these results to show that subadditivity is most pronounced in more posterior aspects of the pSTG. Across electrodes, subadditivity correlated with visual responsiveness, supporting a model in which visual speech enhances the efficiency of auditory speech processing in pSTG. The ability to separate neural processes may make iEEG deconvolution useful for studying a variety of complex cognitive and perceptual tasks.

SIGNIFICANCE STATEMENT Understanding speech is one of the most important human abilities. Speech perception uses information from both the auditory and visual modalities. It has been difficult to study neural responses to visual speech because visual-only speech is difficult or impossible to comprehend, unlike auditory-only and audiovisual speech. We used intracranial electroencephalography deconvolution to overcome this obstacle. We found that visual speech evokes a positive response in the human posterior superior temporal gyrus, enhancing the efficiency of auditory speech processing.
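
A minimal sketch of FIR-style deconvolution with jittered audiovisual onsets, using simulated data. The actual analysis also estimated an auditory-visual interaction term, which is omitted here, and all timing parameters are assumptions.

    import numpy as np

    rng = np.random.default_rng(3)
    fs, n_samp, n_lags = 100, 6000, 50          # 100 Hz, 60 s, 0.5 s kernels

    # Jittered auditory and visual onset times (samples) for each word
    onsets_a = np.sort(rng.choice(np.arange(0, n_samp - n_lags, 120),
                                  30, replace=False))
    onsets_v = onsets_a - rng.integers(5, 30, size=onsets_a.size)  # head start

    def fir_design(onsets, n_samp, n_lags):
        """One column per post-onset lag; rows are time points."""
        X = np.zeros((n_samp, n_lags))
        for t in onsets:
            for lag in range(n_lags):
                if 0 <= t + lag < n_samp:
                    X[t + lag, lag] += 1
        return X

    X = np.hstack([fir_design(onsets_a, n_samp, n_lags),
                   fir_design(onsets_v, n_samp, n_lags)])

    # Simulated electrode trace: phasic auditory + sustained visual response
    true_a = np.exp(-np.arange(n_lags) / 5.0)
    true_v = np.full(n_lags, 0.3)
    y = X @ np.concatenate([true_a, true_v]) + rng.normal(0, 0.2, n_samp)

    # Deconvolution = least-squares estimate of both unisensory kernels
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    kernel_a, kernel_v = beta[:n_lags], beta[n_lags:]
    print(kernel_a[:5], kernel_v[:5])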


Subjects
Evoked Potentials, Speech Perception, Temporal Lobe/physiology, Visual Perception, Adult, Implanted Electrodes, Electroencephalography/instrumentation, Electroencephalography/methods, Female, Humans, Male
11.
Cell ; 181(4): 774-783.e5, 2020 05 14.
Article in English | MEDLINE | ID: mdl-32413298

ABSTRACT

A visual cortical prosthesis (VCP) has long been proposed as a strategy for restoring useful vision to the blind, under the assumption that visual percepts of small spots of light produced with electrical stimulation of visual cortex (phosphenes) will combine into coherent percepts of visual forms, like pixels on a video screen. We tested an alternative strategy in which shapes were traced on the surface of visual cortex by stimulating electrodes in dynamic sequence. In both sighted and blind participants, dynamic stimulation enabled accurate recognition of letter shapes predicted by the brain's spatial map of the visual world. Forms were presented and recognized rapidly by blind participants, up to 86 forms per minute. These findings demonstrate that a brain prosthetic can produce coherent percepts of visual forms.


Subjects
Blindness/physiopathology, Ocular Vision/physiology, Visual Perception/physiology, Adult, Electric Stimulation/methods, Electrodes, Female, Humans, Male, Middle Aged, Phosphenes, Visual Cortex/metabolism, Visual Cortex/physiology, Visual Prostheses
12.
J Neurophysiol ; 123(5): 1955-1968, 2020 05 01.
Article in English | MEDLINE | ID: mdl-32233886

ABSTRACT

Although we routinely experience complex tactile patterns over our entire body, how we selectively experience multisite touch over our bodies remains poorly understood. Here, we characterized tactile search behavior over the full body using a tactile analog of the classic visual search task. On each trial, participants judged whether a target stimulus (e.g., 10-Hz vibration) was present or absent anywhere on the body. When present, the target stimulus could occur alone or simultaneously with distractor stimuli (e.g., 30-Hz vibrations) on other body locations. We systematically varied the number and spatial configurations of the distractors as well as the target and distractor frequencies and measured the impact of these factors on tactile search response times. First, we found that response times were faster on target-present trials compared with target-absent trials. Second, response times increased with the number of stimulated sites, suggesting a serial search process. Third, search performance differed depending on stimulus frequencies. This frequency-dependent behavior may be related to perceptual grouping effects based on timing cues. We constructed linear models to explore how the locations of the target and distractor cues influenced tactile search behavior. Our modeling results reveal that, in isolation, cues on the index fingers make relatively greater contributions to search performance compared with stimulation experienced on other body sites. Additionally, costimulation of sites within the same limb or simply on the same body side preferentially influences search behavior. Our collective findings identify some principles of attentional search that are common to vision and touch, but others that highlight key differences that may be unique to body-based spatial perception.

NEW & NOTEWORTHY Little is known about how we selectively experience multisite touch patterns over the body. Using a tactile analog of the classic visual target search paradigm, we show that tactile search behavior for flutter cues is generally consistent with a serial search process. Modeling results reveal the preferential contributions of index finger stimulation and two-site stimulus interactions involving ipsilateral patterns and within-limb patterns. Our results offer initial evidence for spatial and temporal principles underlying tactile search behavior over the body.
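
The serial-search signature, a roughly linear rise in response time with the number of stimulated sites, can be sketched as a simple set-size regression; the trial counts, slopes, and noise level below are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(4)

    # Hypothetical trials: number of stimulated body sites and response times.
    # Serial search predicts RT rising roughly linearly with set size, with a
    # shallower slope on target-present trials (search can stop early).
    set_size = rng.integers(1, 7, 400)
    present = rng.random(400) < 0.5
    rt = 450 + np.where(present, 60, 110) * set_size + rng.normal(0, 80, 400)

    for label, mask in (("present", present), ("absent", ~present)):
        slope, intercept = np.polyfit(set_size[mask], rt[mask], 1)
        print(f"target {label}: {slope:.0f} ms/item, intercept {intercept:.0f} ms")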


Subjects
Attention/physiology, Extremities/physiology, Touch Perception/physiology, Adult, Female, Fingers/physiology, Humans, Male, Reaction Time/physiology, Young Adult
13.
J Vis ; 19(13): 2, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31689715

ABSTRACT

Human faces contain dozens of visual features, but viewers preferentially fixate just two of them: the eyes and the mouth. Face-viewing behavior is usually studied by manually drawing regions of interest (ROIs) on the eyes, mouth, and other facial features. ROI analyses are problematic as they require arbitrary experimenter decisions about the location and number of ROIs, and they discard data because all fixations within each ROI are treated identically and fixations outside of any ROI are ignored. We introduce a data-driven method that uses principal component analysis (PCA) to characterize human face-viewing behavior. All fixations are entered into a PCA, and the resulting eigenimages provide a quantitative measure of variability in face-viewing behavior. In fixation data from 41 participants viewing four face exemplars under three stimulus and task conditions, the first principal component (PC1) separated the eye and mouth regions of the face. PC1 scores varied widely across participants, revealing large individual differences in preference for eye or mouth fixation, and PC1 scores varied by condition, revealing the importance of behavioral task in determining fixation location. Linear mixed effects modeling of the PC1 scores demonstrated that task condition accounted for 41% of the variance, individual differences accounted for 28% of the variance, and stimulus exemplar for less than 1% of the variance. Fixation eigenimages provide a useful tool for investigating the relative importance of the different factors that drive human face-viewing behavior.
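
A minimal sketch of the fixation-eigenimage approach, assuming hypothetical per-participant fixation density maps; map sizes and values are placeholders.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(5)

    # Hypothetical fixation heatmaps: 41 participants x (48 x 32) face image,
    # flattened to vectors. Real data would be smoothed fixation density maps.
    n_subj, h, w = 41, 48, 32
    maps = rng.random((n_subj, h * w))

    pca = PCA(n_components=5)
    scores = pca.fit_transform(maps)          # PC scores per participant
    eigenimage1 = pca.components_[0].reshape(h, w)

    # PC1 scores index each viewer's position on the eye-mouth continuum
    print("PC1 scores:", np.round(scores[:5, 0], 2))
    print("PC1 variance explained:", f"{pca.explained_variance_ratio_[0]:.1%}")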


Subjects
Eye Movements/physiology, Facial Recognition/physiology, Ocular Fixation/physiology, Principal Component Analysis, Adolescent, Adult, Female, Humans, Male, Young Adult
14.
Proc Natl Acad Sci U S A ; 116(43): 21715-21726, 2019 10 22.
Article in English | MEDLINE | ID: mdl-31591222

ABSTRACT

Meningiomas account for one-third of all primary brain tumors. Although typically benign, about 20% of meningiomas are aggressive, and despite the rigor of the current histopathological classification system there remains considerable uncertainty in predicting tumor behavior. Here, we analyzed 160 tumors from all 3 World Health Organization (WHO) grades (I through III) using clinical, gene expression, and sequencing data. Unsupervised clustering analysis identified 3 molecular types (A, B, and C) that reliably predicted recurrence. These groups did not directly correlate with the WHO grading system, which classifies more than half of the tumors in the most aggressive molecular type as benign. Transcriptional and biochemical analyses revealed that aggressive meningiomas involve loss of the repressor function of the DREAM complex, which results in cell-cycle activation; only tumors in this category tend to recur after full resection. These findings should improve our ability to predict recurrence and develop targeted treatments for these clinically challenging tumors.
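
A sketch of the unsupervised-clustering step using k-means, which is an assumption: the abstract commits only to unsupervised clustering into three molecular types, not to a specific algorithm, and the expression matrix below is simulated.

    import numpy as np
    from sklearn.cluster import KMeans
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(6)

    # Hypothetical expression matrix: 160 tumors x 500 genes (illustrative).
    expr = rng.normal(size=(160, 500))
    expr[:60, :50] += 2.0       # build in three loose groups
    expr[60:110, 50:100] += 2.0

    X = StandardScaler().fit_transform(expr)
    labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
    print("tumors per molecular type:", np.bincount(labels))
    # Downstream: test whether cluster labels predict recurrence better
    # than WHO grade does.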


Subjects
Kv Channel-Interacting Proteins/genetics, Meningeal Neoplasms/genetics, Meningioma/genetics, Local Neoplasm Recurrence/genetics, Repressor Proteins/genetics, Adult, Aged, Aged 80 and Over, Cell Cycle/genetics, Cell Cycle/physiology, Cell Line, DNA Copy Number Variations/genetics, Disease Progression, Female, Gene Expression Profiling, Humans, Male, Meningeal Neoplasms/pathology, Meningioma/pathology, Middle Aged, Prognosis, Young Adult
15.
Front Neurosci ; 13: 1029, 2019.
Article in English | MEDLINE | ID: mdl-31636529

ABSTRACT

Multisensory integration of information from the talker's voice and the talker's mouth facilitates human speech perception. A popular assay of audiovisual integration is the McGurk effect, an illusion in which incongruent visual speech information categorically changes the percept of auditory speech. There is substantial interindividual variability in susceptibility to the McGurk effect. To better understand possible sources of this variability, we examined the McGurk effect in 324 native Mandarin speakers, consisting of 73 monozygotic (MZ) and 89 dizygotic (DZ) twin pairs. When tested with 9 different McGurk stimuli, some participants never perceived the illusion and others always perceived it. Within participants, perception was similar across time (r = 0.55 at a 2-year retest in 150 participants) suggesting that McGurk susceptibility reflects a stable trait rather than short-term perceptual fluctuations. To examine the effects of shared genetics and prenatal environment, we compared McGurk susceptibility between MZ and DZ twins. Both twin types had significantly greater correlation than unrelated pairs (r = 0.28 for MZ twins and r = 0.21 for DZ twins) suggesting that the genes and environmental factors shared by twins contribute to individual differences in multisensory speech perception. Conversely, the existence of substantial differences within twin pairs (even MZ co-twins) and the overall low percentage of explained variance (5.5%) argues against a deterministic view of individual differences in multisensory integration.
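
A minimal sketch of the twin-correlation comparison, with simulated susceptibility scores constructed to match the reported correlations; the Falconer estimate at the end is a crude textbook heritability approximation, not the paper's analysis.

    import numpy as np

    rng = np.random.default_rng(7)

    def simulate_pairs(n_pairs, shared_weight):
        """Each twin = shared factor + unique noise; twins in a pair share
        the factor, so their correlation ~ shared_weight**2."""
        shared = rng.normal(0, 1, n_pairs)
        unique_sd = np.sqrt(1 - shared_weight**2)
        twin1 = shared_weight * shared + unique_sd * rng.normal(0, 1, n_pairs)
        twin2 = shared_weight * shared + unique_sd * rng.normal(0, 1, n_pairs)
        return twin1, twin2

    mz1, mz2 = simulate_pairs(73, np.sqrt(0.28))   # target r_MZ ~ 0.28
    dz1, dz2 = simulate_pairs(89, np.sqrt(0.21))   # target r_DZ ~ 0.21

    r_mz = np.corrcoef(mz1, mz2)[0, 1]
    r_dz = np.corrcoef(dz1, dz2)[0, 1]
    print(f"r_MZ = {r_mz:.2f}, r_DZ = {r_dz:.2f}")
    # Falconer's crude heritability estimate: h2 = 2 * (r_MZ - r_DZ)
    print(f"h2 estimate = {2 * (r_mz - r_dz):.2f}")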

16.
Behav Processes ; 169: 103957, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31493491

ABSTRACT

Judgements of items viewed less than 100 ms prior are predominantly supported by a sensory, or iconic, memory system. Iconic memory is high-capacity but volatile and limited in duration. Judgements after longer delays increasingly rely on a working memory system, which is lower in capacity and volatility than sensory memory but longer in duration. In four experiments, several factors (e.g., length of delay, number of items, time to view items, presence of a visual mask) were manipulated during a spatial change-detection task conducted with humans and pigeons. Both species were exposed to trials with an array of colored circles (2, 3, and 4 circles in Experiments 1 and 2a; 4, 6, and 8 circles in Experiment 2b) followed by a brief delay (0, 50, and 100 ms in Experiment 1a; 0, 100, and 1000 ms in Experiments 1b and 2), and then were presented with a test display in which the position of one of the items had changed. Pigeons, like humans, were less accurate in selecting the changed item with more items in the display and after longer delays. Pigeons were equally accurate on trials with 0- and 100-ms delays but worse on trials with a 1000-ms delay, whereas humans were equally accurate on 100-ms and 1000-ms delays but better on 0-ms delay trials. Accurate change detection was disrupted in both species when a visual mask was inserted between the sample and test display after a short (100-ms) but not a long (1000-ms) delay. The results support similarity between species in the functional relationships between delay and memory systems, despite time-course differences related to sensory memory.


Subjects
Judgment/physiology, Short-Term Memory/physiology, Visual Perception/physiology, Adolescent, Animals, Columbidae, Female, Humans, Male, Neuropsychological Tests, Photic Stimulation, Reaction Time/physiology, Time Factors, Young Adult
17.
Elife ; 82019 08 08.
Article in English | MEDLINE | ID: mdl-31393261

ABSTRACT

Visual information about speech content from the talker's mouth is often available before auditory information from the talker's voice. Here we examined perceptual and neural responses to words with and without this visual head start. For both types of words, perception was enhanced by viewing the talker's face, but the enhancement was significantly greater for words with a head start. Neural responses were measured from electrodes implanted over auditory association cortex in the posterior superior temporal gyrus (pSTG) of epileptic patients. The presence of visual speech suppressed responses to auditory speech, more so for words with a visual head start. We suggest that the head start inhibits representations of incompatible auditory phonemes, increasing perceptual accuracy and decreasing total neural responses. Together with previous work showing visual cortex modulation (Ozker et al., 2018b), these results from pSTG demonstrate that multisensory interactions are a powerful modulator of activity throughout the speech perception network.


Subjects
Auditory Perception, Mouth, Movement, Speech Perception, Temporal Lobe/physiology, Visual Perception, Humans
18.
Behav Processes ; 158: 192-199, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30508564

ABSTRACT

Many animals are challenged with the task of reorientation. Considerable research over the years has shown that a diversity of species extract geometric information (e.g., distance and direction) from continuous surfaces or boundaries to reorient. How this information is extracted from the environment is less understood. The three encoding strategies that have received the most study are the use of principal axes, the medial axis, or local geometric cues. We used a modeling approach to investigate which of these three general strategies best fit the spatial search data of a highly spatial corvid, the Clark's nutcracker (Nucifraga columbiana). Individual nutcrackers were trained in a rectangular arena and, once accurately locating a hidden goal, received non-reinforced tests in an L-shaped arena. The specific shape of this arena allowed us to dissociate among the three general encoding strategies. Furthermore, we reanalyzed existing data from chicks, pigeons, and humans using our modeling approach. Overall, we found the most support for the use of the medial axis, although we additionally found that pigeons and humans may have engaged in random guessing. As with our previous studies, we found no support for the use of principal axes.
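
A sketch of how the three encoding strategies can be compared as models of observed search locations; the predicted coordinates, Gaussian search-error model, and arena geometry are all illustrative assumptions, not the paper's fitted values.

    import numpy as np
    from scipy.stats import multivariate_normal

    rng = np.random.default_rng(8)

    # Hypothetical predicted goal locations in the L-shaped test arena (meters),
    # one per encoding strategy; coordinates are placeholders.
    predictions = {
        "principal_axes": np.array([1.0, 0.4]),
        "medial_axis":    np.array([0.6, 0.9]),
        "local_cues":     np.array([0.2, 0.3]),
    }

    # Hypothetical observed search locations from one bird's test trials
    searches = rng.normal(loc=[0.62, 0.88], scale=0.15, size=(30, 2))

    # Score each strategy: Gaussian likelihood of searches around its prediction
    for name, mu in predictions.items():
        model = multivariate_normal(mean=mu, cov=0.15**2 * np.eye(2))
        ll = model.logpdf(searches).sum()
        print(f"{name:15s} log-likelihood = {ll:8.1f}")
    # The strategy with the highest log-likelihood best explains the searches;
    # with equal parameter counts this ordering matches an AIC comparison.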


Subjects
Spatial Orientation/physiology, Passeriformes/physiology, Space Perception/physiology, Animals, Cues (Psychology)
19.
Sci Rep ; 8(1): 18032, 2018 12 21.
Article in English | MEDLINE | ID: mdl-30575791

ABSTRACT

The McGurk effect is a popular assay of multisensory integration in which participants report the illusory percept of "da" when presented with incongruent auditory "ba" and visual "ga" (AbaVga). While the original publication describing the effect found that 98% of participants perceived it, later studies reported much lower prevalence, ranging from 17% to 81%. Understanding the source of this variability is important for interpreting the panoply of studies that examine McGurk prevalence between groups, including clinical populations such as individuals with autism or schizophrenia. The original publication used stimuli consisting of multiple repetitions of a co-articulated syllable (three repetitions, AgagaVbaba). Later studies used stimuli without repetition or co-articulation (AbaVga) and used congruent syllables from the same talker as a control. In three experiments, we tested how stimulus repetition, co-articulation, and talker repetition affect McGurk prevalence. Repetition with co-articulation increased prevalence by 20%, while repetition without co-articulation and talker repetition had no effect. A fourth experiment compared the effect of the on-line testing used in the first three experiments with the in-person testing used in the original publication; no differences were observed. We interpret our results in the framework of causal inference: co-articulation increases the evidence that auditory and visual speech tokens arise from the same talker, increasing tolerance for content disparity and likelihood of integration. The results provide a principled explanation for how co-articulation aids multisensory integration and can explain the high prevalence of the McGurk effect in the initial publication.


Subjects
Auditory Perception/physiology, Phonetics, Speech Perception/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Aged, Female, Humans, Male, Middle Aged, Photic Stimulation, Systems Integration, Young Adult
20.
PLoS One ; 13(9): e0202908, 2018.
Article in English | MEDLINE | ID: mdl-30231054

ABSTRACT

A common measure of multisensory integration is the McGurk effect, an illusion in which incongruent auditory and visual speech are integrated to produce an entirely different percept. Published studies report that participants who differ in age, gender, culture, native language, or traits related to neurological or psychiatric disorders also differ in their susceptibility to the McGurk effect. These group-level differences are used as evidence for fundamental alterations in sensory processing between populations. Using empirical data and statistical simulations tested under a range of conditions, we show that published estimates of group differences in the McGurk effect are inflated when only statistically significant (p < 0.05) results are published. With a sample size typical of published studies, a group difference of 10% would be reported as 31%. As a consequence of this inflation, follow-up studies often fail to replicate published reports of large between-group differences. Inaccurate estimates of effect sizes and replication failures are especially problematic in studies of clinical populations involving expensive and time-consuming interventions, such as training paradigms to improve sensory processing. Reducing effect size inflation and increasing replicability requires increasing the number of participants by an order of magnitude compared with current practice.
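
The inflation mechanism is easy to reproduce by simulation: generate many studies with a true 10% group difference, keep only those reaching p < 0.05, and average the published estimates. The sample size and variability below are assumptions chosen to be plausible for this literature, not the paper's exact simulation parameters.

    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(9)

    true_diff, sd, n_per_group = 0.10, 0.35, 30   # assumed typical study size
    published = []
    for _ in range(20000):
        a = rng.normal(0.50, sd, n_per_group)              # group A susceptibility
        b = rng.normal(0.50 + true_diff, sd, n_per_group)  # group B, +10% effect
        t, p = ttest_ind(a, b)
        if p < 0.05:                                       # significance filter
            published.append(b.mean() - a.mean())

    print(f"true difference: {true_diff:.2f}")
    print(f"mean published difference: {np.mean(published):.2f}")  # inflated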


Subjects
Illusions, Motion Perception, Research Design, Speech Perception, Aging/psychology, Computer Simulation, Culture, Humans, Language, Mental Disorders/psychology, Statistical Models, Nervous System Diseases/psychology, Scientific Experimental Error, Sex Characteristics, Social Perception