Results 1 - 20 of 112
1.
Cell; 181(4): 774-783.e5, 2020 05 14.
Article in English | MEDLINE | ID: mdl-32413298

ABSTRACT

A visual cortical prosthesis (VCP) has long been proposed as a strategy for restoring useful vision to the blind, under the assumption that visual percepts of small spots of light produced with electrical stimulation of visual cortex (phosphenes) will combine into coherent percepts of visual forms, like pixels on a video screen. We tested an alternative strategy in which shapes were traced on the surface of visual cortex by stimulating electrodes in dynamic sequence. In both sighted and blind participants, dynamic stimulation enabled accurate recognition of letter shapes predicted by the brain's spatial map of the visual world. Forms were presented and recognized rapidly by blind participants, up to 86 forms per minute. These findings demonstrate that a brain prosthetic can produce coherent percepts of visual forms.
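The dynamic-stimulation strategy lends itself to a simple illustration: if the retinotopic map gives each electrode's phosphene location in the visual field, a letter can be traced by stimulating electrodes in the order their phosphenes fall along the letter's strokes. The sketch below is a toy illustration under that assumption; the electrode layout, phosphene coordinates, and letter strokes are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical phosphene locations (degrees of visual angle) for six electrodes,
# obtained in practice from the cortical retinotopic map.
phosphenes = {
    "e1": (0.0, 2.0), "e2": (0.0, 1.0), "e3": (0.0, 0.0),
    "e4": (0.5, 2.0), "e5": (1.0, 2.0), "e6": (0.5, 1.0),
}

# Strokes of a target letter ("F"), each a sequence of visual-field waypoints.
strokes = [
    [(0.0, 2.0), (0.0, 1.0), (0.0, 0.0)],   # down-stroke
    [(0.0, 2.0), (0.5, 2.0), (1.0, 2.0)],   # top bar
    [(0.0, 1.0), (0.5, 1.0)],               # middle bar
]

def nearest_electrode(point):
    """Electrode whose phosphene is closest to a visual-field waypoint."""
    names = list(phosphenes)
    dists = [np.hypot(point[0] - phosphenes[n][0], point[1] - phosphenes[n][1]) for n in names]
    return names[int(np.argmin(dists))]

# Dynamic sequence: stimulate electrodes in waypoint order, stroke by stroke,
# so the evoked phosphenes trace the letter over time rather than appearing at once.
sequence = [[nearest_electrode(p) for p in stroke] for stroke in strokes]
print(sequence)   # e.g. [['e1', 'e2', 'e3'], ['e1', 'e4', 'e5'], ['e2', 'e6']]
```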


Subjects
Blindness/physiopathology, Ocular Vision/physiology, Visual Perception/physiology, Adult, Electric Stimulation/methods, Electrodes, Female, Humans, Male, Middle Aged, Phosphenes, Visual Cortex/metabolism, Visual Cortex/physiology, Visual Prostheses
2.
J Neurosci; 42(6): 1054-1067, 2022 02 09.
Article in English | MEDLINE | ID: mdl-34965979

ABSTRACT

Narrowband γ oscillations (NBG: ∼20-60 Hz) in visual cortex reflect rhythmic fluctuations in population activity generated by underlying circuits tuned for stimulus location, orientation, and color. A variety of theories posit a specific role for NBG in encoding and communicating this information within visual cortex. However, recent findings suggest a more nuanced role for NBG, given its dependence on certain stimulus feature configurations, such as coherent-oriented edges and specific hues. Motivated by these factors, we sought to quantify the independent and joint tuning properties of NBG to oriented and color stimuli using intracranial recordings from the human visual cortex (male and female). NBG was shown to display a cardinal orientation bias (horizontal) and also an end- and mid-spectral color bias (red/blue and green). When jointly probed, the cardinal bias for orientation was attenuated and an end-spectral preference for red and blue predominated. This loss of mid-spectral tuning occurred even for recording sites showing large responses to uniform green stimuli. Our results demonstrate the close, yet complex, link between the population dynamics driving NBG oscillations and known feature selectivity biases for orientation and color within visual cortex. Such a bias in stimulus tuning imposes new constraints on the functional significance of the visual γ rhythm. More generally, these biases in population electrophysiology will need to be considered in experiments using orientation or color features to examine the role of visual cortex in other domains, such as working memory and decision-making.

SIGNIFICANCE STATEMENT: Oscillations in electrophysiological activity occur in visual cortex in response to stimuli that strongly drive the orientation or color selectivity of visual neurons. The significance of this induced "γ rhythm" to brain function remains unclear. Answering this question requires understanding how and why some stimuli can reliably generate oscillatory γ activity while others do not. We examined how different orientations and colors independently and jointly modulate γ oscillations in the human brain. Our data show that γ oscillations are greatest for certain orientations and colors that reflect known response biases in visual cortex. Such findings complicate the functional significance of γ oscillations but open new avenues for linking circuits to population dynamics in visual cortex.
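A minimal sketch of how narrowband γ tuning can be quantified from intracranial recordings, assuming trials stored as a NumPy array with one orientation label per trial: bandpass to 20-60 Hz, take Hilbert envelope power, and average per condition. The sampling rate, filter settings, and placeholder data are assumptions, not the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                                          # assumed sampling rate (Hz)
b, a = butter(4, [20, 60], btype="bandpass", fs=fs)  # narrowband gamma range

def nbg_power(trial):
    """Mean 20-60 Hz envelope power of one iEEG trial (1-D array)."""
    env = np.abs(hilbert(filtfilt(b, a, trial)))
    return np.mean(env ** 2)

# trials: (n_trials, n_samples); orientations: one label per trial (degrees).
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 1000))             # placeholder data
orientations = np.repeat([0, 45, 90, 135], 10)

# Orientation tuning curve of narrowband gamma: mean power per condition.
tuning = {ori: np.mean([nbg_power(t) for t, o in zip(trials, orientations) if o == ori])
          for ori in np.unique(orientations)}
print(tuning)   # a cardinal bias would appear as larger power for horizontal stimuli
```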


Subjects
Color Perception/physiology, Gamma Rhythm/physiology, Spatial Orientation/physiology, Visual Cortex/physiology, Adult, Electrocorticography, Female, Humans, Male, Middle Aged
3.
Neuroimage; 278: 120271, 2023 09.
Article in English | MEDLINE | ID: mdl-37442310

ABSTRACT

Humans have the unique ability to decode the rapid stream of language elements that constitute speech, even when it is contaminated by noise. Two reliable observations about noisy speech perception are that seeing the face of the talker improves intelligibility and that individuals differ in their ability to perceive noisy speech. We introduce a multivariate BOLD fMRI measure that explains both observations. In two independent fMRI studies, clear and noisy speech was presented in visual, auditory and audiovisual formats to thirty-seven participants who rated intelligibility. An event-related design was used to sort noisy speech trials by their intelligibility. Individual-differences multidimensional scaling was applied to fMRI response patterns in superior temporal cortex and the dissimilarity between responses to clear speech and noisy (but intelligible) speech was measured. Neural dissimilarity was less for audiovisual speech than auditory-only speech, corresponding to the greater intelligibility of noisy audiovisual speech. Dissimilarity was less in participants with better noisy speech perception, corresponding to individual differences. These relationships held for both single word and entire sentence stimuli, suggesting that they were driven by intelligibility rather than the specific stimuli tested. A neural measure of perceptual intelligibility may aid in the development of strategies for helping those with impaired speech perception.
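The dissimilarity measure at the heart of this result can be sketched with a simple correlation distance between mean voxel patterns (the study itself used individual-differences multidimensional scaling; this is only a simplified stand-in with synthetic patterns):

```python
import numpy as np

def pattern_dissimilarity(pattern_a, pattern_b):
    """Correlation distance (1 - Pearson r) between two mean voxel patterns."""
    return 1.0 - np.corrcoef(pattern_a, pattern_b)[0, 1]

rng = np.random.default_rng(1)
n_voxels = 200
clear_pattern = rng.standard_normal(n_voxels)                      # response to clear speech
noisy_av = clear_pattern + 0.3 * rng.standard_normal(n_voxels)     # noisy audiovisual
noisy_a = clear_pattern + 0.9 * rng.standard_normal(n_voxels)      # noisy auditory-only

# Smaller dissimilarity for audiovisual speech mirrors its higher intelligibility.
print(pattern_dissimilarity(clear_pattern, noisy_av))
print(pattern_dissimilarity(clear_pattern, noisy_a))
```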


Subjects
Speech Perception, Speech, Humans, Magnetic Resonance Imaging, Individuality, Visual Perception/physiology, Speech Perception/physiology, Temporal Lobe/diagnostic imaging, Temporal Lobe/physiology, Speech Intelligibility, Acoustic Stimulation/methods
4.
Biometrics; 79(2): 1226-1238, 2023 06.
Article in English | MEDLINE | ID: mdl-35514244

ABSTRACT

This paper is motivated by studying differential brain activities to multiple experimental condition presentations in intracranial electroencephalography (iEEG) experiments. Contrasting effects of experimental conditions are often zero in most regions and nonzero in some local regions, yielding locally sparse functions. Such studies are essentially a function-on-scalar regression problem, with interest being focused not only on estimating nonparametric functions but also on recovering the function supports. We propose a weighted group bridge approach for simultaneous function estimation and support recovery in function-on-scalar mixed effect models, while accounting for heterogeneity present in functional data. We use B-splines to transform the sparsity of functions into sparsity of their coefficient vectors of increasing dimension, and propose a fast nonconvex optimization algorithm using a nested alternating direction method of multipliers (ADMM) for estimation. Large sample properties are established. In particular, we show that the estimated coefficient functions are rate optimal in the minimax sense under the L2 norm and resemble a phase transition phenomenon. For support estimation, we derive a convergence rate under the L∞ norm that leads to a selection consistency property under δ-sparsity, and obtain a result under strict sparsity using a simple sufficient regularity condition. An adjusted extended Bayesian information criterion is proposed for parameter tuning. The developed method is illustrated through simulations and an application to a novel iEEG data set to study multisensory integration.
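The key reduction, expanding each coefficient function in a B-spline basis so the function-on-scalar model becomes a grouped linear model, can be sketched as follows. Plain least squares stands in for the weighted group bridge/ADMM estimator, and all data are simulated:

```python
import numpy as np
from scipy.interpolate import BSpline

# Toy data: functional responses Y[i, t] on a common grid, scalar covariates X[i, j].
rng = np.random.default_rng(2)
n, T, p = 60, 100, 2
t = np.linspace(0, 1, T)
X = rng.standard_normal((n, p))

# Cubic B-spline basis B (T x K): each coefficient function is beta_j(t) = B @ gamma_j,
# so local sparsity of beta_j over t maps to sparsity of the spline coefficients gamma_j.
knots = np.concatenate((np.zeros(3), np.linspace(0, 1, 9), np.ones(3)))
K = len(knots) - 4
B = np.column_stack([BSpline(knots, np.eye(K)[k], 3)(t) for k in range(K)])

true_gamma = np.zeros((p, K)); true_gamma[0, 3:6] = 1.0          # locally nonzero effect
Y = X @ true_gamma @ B.T + 0.1 * rng.standard_normal((n, T))

# Function-on-scalar regression reduces to vec(Y) = kron(X, B) vec(gamma); ordinary
# least squares replaces the paper's penalized (group bridge) estimator in this sketch.
Z = np.kron(X, B)
gamma_hat, *_ = np.linalg.lstsq(Z, Y.reshape(-1), rcond=None)
beta_hat = gamma_hat.reshape(p, K) @ B.T                          # estimated coefficient functions
```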


Subjects
Algorithms, Brain, Bayes Theorem
5.
Neuroimage; 247: 118796, 2022 02 15.
Article in English | MEDLINE | ID: mdl-34906712

ABSTRACT

Regions of the human posterior superior temporal gyrus and sulcus (pSTG/S) respond to the visual mouth movements that constitute visual speech and the auditory vocalizations that constitute auditory speech, and neural responses in pSTG/S may underlie the perceptual benefit of visual speech for the comprehension of noisy auditory speech. We examined this possibility through the lens of multivoxel pattern responses in pSTG/S. BOLD fMRI data was collected from 22 participants presented with speech consisting of English sentences presented in five different formats: visual-only; auditory with and without added auditory noise; and audiovisual with and without auditory noise. Participants reported the intelligibility of each sentence with a button press and trials were sorted post-hoc into those that were more or less intelligible. Response patterns were measured in regions of the pSTG/S identified with an independent localizer. Noisy audiovisual sentences with very similar physical properties evoked very different response patterns depending on their intelligibility. When a noisy audiovisual sentence was reported as intelligible, the pattern was nearly identical to that elicited by clear audiovisual sentences. In contrast, an unintelligible noisy audiovisual sentence evoked a pattern like that of visual-only sentences. This effect was less pronounced for noisy auditory-only sentences, which evoked similar response patterns regardless of intelligibility. The successful integration of visual and auditory speech produces a characteristic neural signature in pSTG/S, highlighting the importance of this region in generating the perceptual benefit of visual speech.
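A hedged sketch of the pattern comparison this result implies: correlate a single-trial pSTG/S pattern with the mean clear-audiovisual and visual-only templates and see which it resembles more. The patterns below are synthetic placeholders, not study data:

```python
import numpy as np

rng = np.random.default_rng(3)
n_vox = 150
clear_av = rng.standard_normal(n_vox)          # mean pattern, clear audiovisual sentences
visual_only = rng.standard_normal(n_vox)       # mean pattern, visual-only sentences

def closer_template(trial_pattern):
    """Which template a single-trial pattern resembles more (Pearson correlation)."""
    r_av = np.corrcoef(trial_pattern, clear_av)[0, 1]
    r_vis = np.corrcoef(trial_pattern, visual_only)[0, 1]
    return "clear-AV-like" if r_av > r_vis else "visual-only-like"

# A reported-intelligible noisy-AV trial (near the clear-AV template) vs. an unintelligible one.
intelligible = clear_av + 0.5 * rng.standard_normal(n_vox)
unintelligible = visual_only + 0.5 * rng.standard_normal(n_vox)
print(closer_template(intelligible), closer_template(unintelligible))
```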


Subjects
Auditory Perception/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Acoustic Stimulation, Adolescent, Adult, Auditory Cortex/physiology, Brain Mapping, Cognition, Comprehension/physiology, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Speech/physiology, Speech Perception/physiology, Young Adult
6.
Anal Bioanal Chem; 414(1): 545-550, 2022 Jan.
Article in English | MEDLINE | ID: mdl-34263346

ABSTRACT

In this work, we demonstrate for the first time the design and fabrication of microchip electrophoresis devices containing cross-shaped channels and spiral electrodes around the separation channel for microchip electrophoresis and capacitively coupled contactless conductivity detection. The whole device was prepared in a digital light processing-based 3D printer in poly(ethylene glycol) diacrylate resin. Outstanding X-Y resolution of the customized 3D printer ensured the fabrication of 40-µm cross section channels. The spiral channels were filled with melted gallium to form conductive electrodes around the separation channel. We demonstrate the applicability of the device on the separation of sodium, potassium, and lithium cations by microchip electrophoresis.

7.
J Neurosci; 40(44): 8530-8542, 2020 10 28.
Article in English | MEDLINE | ID: mdl-33023923

ABSTRACT

Natural conversation is multisensory: when we can see the speaker's face, visual speech cues improve our comprehension. The neuronal mechanisms underlying this phenomenon remain unclear. The two main alternatives are visually mediated phase modulation of neuronal oscillations (excitability fluctuations) in auditory neurons and visual input-evoked responses in auditory neurons. Investigating this question using naturalistic audiovisual speech with intracranial recordings in humans of both sexes, we find evidence for both mechanisms. Remarkably, auditory cortical neurons track the temporal dynamics of purely visual speech using the phase of their slow oscillations and phase-related modulations in broadband high-frequency activity. Consistent with known perceptual enhancement effects, the visual phase reset amplifies the cortical representation of concomitant auditory speech. In contrast to this, and in line with earlier reports, visual input reduces the amplitude of evoked responses to concomitant auditory input. We interpret the combination of improved phase tracking and reduced response amplitude as evidence for more efficient and reliable stimulus processing in the presence of congruent auditory and visual speech inputs.

SIGNIFICANCE STATEMENT: Watching the speaker can facilitate our understanding of what is being said. The mechanisms responsible for this influence of visual cues on the processing of speech remain incompletely understood. We studied these mechanisms by recording the electrical activity of the human brain through electrodes implanted surgically inside the brain. We found that visual inputs can operate by directly activating auditory cortical areas, and also indirectly by modulating the strength of cortical responses to auditory input. Our results help to understand the mechanisms by which the brain merges auditory and visual speech into a unitary perception.
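One way to quantify the phase-tracking mechanism described here is a phase-locking value between the slow (delta/theta) phase of the neural signal and the visual speech envelope. This is a generic sketch with simulated signals and assumed filter settings, not the analysis used in the paper:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
b, a = butter(3, [1, 7], btype="bandpass", fs=fs)   # slow (delta/theta) band

def plv(neural, stimulus_envelope):
    """Phase-locking value between slow neural phase and stimulus-envelope phase."""
    ph_n = np.angle(hilbert(filtfilt(b, a, neural)))
    ph_s = np.angle(hilbert(filtfilt(b, a, stimulus_envelope)))
    return np.abs(np.mean(np.exp(1j * (ph_n - ph_s))))

# Placeholder signals: a mouth-movement envelope and a neural trace that partly follows it.
rng = np.random.default_rng(4)
time = np.arange(0, 10, 1 / fs)
mouth_env = np.sin(2 * np.pi * 3 * time)                 # ~3 Hz visual speech rhythm
neural = 0.6 * mouth_env + rng.standard_normal(time.size)
print(plv(neural, mouth_env))                            # higher values = stronger phase tracking
```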


Subjects
Auditory Cortex/physiology, Evoked Potentials/physiology, Nonverbal Communication/physiology, Adult, Drug-Resistant Epilepsy/surgery, Electrocorticography, Auditory Evoked Potentials/physiology, Visual Evoked Potentials/physiology, Female, Humans, Middle Aged, Neurons/physiology, Nonverbal Communication/psychology, Photic Stimulation, Young Adult
8.
J Neurosci; 40(36): 6938-6948, 2020 09 02.
Article in English | MEDLINE | ID: mdl-32727820

ABSTRACT

Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for multisensory speech perception since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial encephalography (iEEG) deconvolution. Individual stimuli always contained both auditory and visual speech, but jittering the onset asynchrony between modalities allowed for the time course of the unisensory responses and the interaction between them to be independently estimated. We applied this procedure to electrodes implanted in human epilepsy patients (both male and female) over the posterior superior temporal gyrus (pSTG), a brain area known to be important for speech perception. iEEG deconvolution revealed sustained positive responses to visual-only speech and larger, phasic responses to auditory-only speech. Confirming results from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural computation. Leveraging the spatial resolution of iEEG, we extended these results to show that subadditivity is most pronounced in more posterior aspects of the pSTG. Across electrodes, subadditivity correlated with visual responsiveness, supporting a model in which visual speech enhances the efficiency of auditory speech processing in pSTG. The ability to separate neural processes may make iEEG deconvolution useful for studying a variety of complex cognitive and perceptual tasks.

SIGNIFICANCE STATEMENT: Understanding speech is one of the most important human abilities. Speech perception uses information from both the auditory and visual modalities. It has been difficult to study neural responses to visual speech because visual-only speech is difficult or impossible to comprehend, unlike auditory-only and audiovisual speech. We used intracranial encephalography deconvolution to overcome this obstacle. We found that visual speech evokes a positive response in the human posterior superior temporal gyrus, enhancing the efficiency of auditory speech processing.
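The deconvolution logic, jittered auditory and visual onsets entered as lagged (FIR) regressors so that both unisensory time courses can be estimated jointly, can be sketched with ordinary least squares on simulated data (a simplified stand-in for the published analysis):

```python
import numpy as np

fs, n_lags = 100, 50                      # 100 Hz signal, 0.5 s response window (assumed)

def fir_design(onsets, n_samples, n_lags):
    """FIR design matrix: one column per post-onset lag."""
    X = np.zeros((n_samples, n_lags))
    for on in onsets:
        for lag in range(n_lags):
            if on + lag < n_samples:
                X[on + lag, lag] += 1.0
    return X

rng = np.random.default_rng(5)
n_samples = 6000
vis_onsets = np.sort(rng.choice(n_samples - 200, 40, replace=False))
aud_onsets = vis_onsets + rng.integers(5, 40, size=40)    # jittered audiovisual asynchrony

# Simulated electrode signal = sustained visual response + phasic auditory response + noise.
true_vis = np.exp(-np.arange(n_lags) / 20.0)
true_aud = np.sin(np.pi * np.arange(n_lags) / 15.0) * (np.arange(n_lags) < 15)
Xv = fir_design(vis_onsets, n_samples, n_lags)
Xa = fir_design(aud_onsets, n_samples, n_lags)
y = Xv @ true_vis + Xa @ true_aud + 0.5 * rng.standard_normal(n_samples)

# Joint least-squares deconvolution recovers both unisensory time courses.
beta, *_ = np.linalg.lstsq(np.hstack([Xv, Xa]), y, rcond=None)
vis_hat, aud_hat = beta[:n_lags], beta[n_lags:]
```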


Subjects
Evoked Potentials, Speech Perception, Temporal Lobe/physiology, Visual Perception, Adult, Implanted Electrodes, Electroencephalography/instrumentation, Electroencephalography/methods, Female, Humans, Male
9.
Neuroimage; 223: 117341, 2020 12.
Article in English | MEDLINE | ID: mdl-32920161

ABSTRACT

Direct recording of neural activity from the human brain using implanted electrodes (iEEG, intracranial electroencephalography) is a fast-growing technique in human neuroscience. While the ability to record from the human brain with high spatial and temporal resolution has advanced our understanding, it generates staggering amounts of data: a single patient can be implanted with hundreds of electrodes, each sampled thousands of times a second for hours or days. The difficulty of exploring these vast datasets is the rate-limiting step in discovery. To overcome this obstacle, we created RAVE ("R Analysis and Visualization of iEEG"). All components of RAVE, including the underlying "R" language, are free and open source. User interactions occur through a web browser, making it transparent to the user whether the back-end data storage and computation are occurring locally, on a lab server, or in the cloud. Without writing a single line of computer code, users can create custom analyses, apply them to data from hundreds of iEEG electrodes, and instantly visualize the results on cortical surface models. Multiple types of plots are used to display analysis results, each of which can be downloaded as publication-ready graphics with a single click. RAVE consists of nearly 50,000 lines of code designed to prioritize an interactive user experience, reliability and reproducibility.


Subjects
Brain/physiology, Data Visualization, Electroencephalography, Computer-Assisted Image Processing/methods, Implanted Electrodes, Humans, Reproducibility of Results, Software
10.
Eur J Neurosci; 51(5): 1364-1376, 2020 03.
Article in English | MEDLINE | ID: mdl-29888819

ABSTRACT

During natural speech perception, humans must parse temporally continuous auditory and visual speech signals into sequences of words. However, most studies of speech perception present only single words or syllables. We used electrocorticography (subdural electrodes implanted on the brains of epileptic patients) to investigate the neural mechanisms for processing continuous audiovisual speech signals consisting of individual sentences. Using partial correlation analysis, we found that posterior superior temporal gyrus (pSTG) and medial occipital cortex tracked both the auditory and the visual speech envelopes. These same regions, as well as inferior temporal cortex, responded more strongly to a dynamic video of a talking face compared to auditory speech paired with a static face. Occipital cortex and pSTG carry temporal information about both auditory and visual speech dynamics. Visual speech tracking in pSTG may be a mechanism for enhancing perception of degraded auditory speech.
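The partial correlation step can be sketched directly: regress the competing envelope out of both the neural signal and the envelope of interest, then correlate the residuals. Signals below are simulated placeholders, not study data:

```python
import numpy as np

def partial_corr(neural, target, nuisance):
    """Correlation between a neural signal and a target envelope, controlling for a nuisance envelope."""
    def residual(y, x):
        design = np.column_stack([np.ones_like(x), x])
        return y - design @ np.linalg.lstsq(design, y, rcond=None)[0]
    return np.corrcoef(residual(neural, nuisance), residual(target, nuisance))[0, 1]

# Placeholder envelopes: auditory and visual speech envelopes are themselves correlated.
rng = np.random.default_rng(6)
n = 2000
aud_env = rng.standard_normal(n)
vis_env = 0.6 * aud_env + 0.8 * rng.standard_normal(n)
neural = 0.5 * aud_env + 0.2 * vis_env + rng.standard_normal(n)

print(partial_corr(neural, aud_env, nuisance=vis_env))   # auditory tracking beyond visual
print(partial_corr(neural, vis_env, nuisance=aud_env))   # visual tracking beyond auditory
```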


Subjects
Auditory Cortex, Speech Perception, Acoustic Stimulation, Auditory Perception, Brain Mapping, Electrocorticography, Humans, Occipital Lobe, Speech, Visual Perception
11.
Anal Chem; 91(11): 7418-7425, 2019 06 04.
Article in English | MEDLINE | ID: mdl-31056901

ABSTRACT

This work demonstrates for the first time the creation of microchip electrophoresis devices with ∼50 µm cross-sectional dimensions by stereolithographic 3D printing and their application in the analysis of medically significant biomarkers related to risk for preterm birth (PTB). We determined that device current was linear with applied potential up to 800 V (620 V/cm). We optimized device and separation conditions using fluorescently labeled amino acids as a model system and compared the performance in our 3D printed microfluidic devices to that in other device materials commonly used for microchip electrophoresis analysis. We demonstrated for the first time microchip electrophoresis in a 3D printed device of three PTB biomarkers, including peptides and a protein, with suitable separation characteristics. Limits of detection for microchip electrophoresis in 3D printed microfluidic devices were also determined for PTB biomarkers to be in the high picomolar to low nanomolar range.


Subjects
Microchip Electrophoresis, Lab-On-A-Chip Devices, Premature Birth/diagnosis, Three-Dimensional Printing, Amino Acids/chemistry, Biomarkers/analysis, Female, Fluorescent Dyes/chemistry, Humans, Pregnancy
12.
Anal Bioanal Chem; 411(21): 5405-5413, 2019 Aug.
Article in English | MEDLINE | ID: mdl-30382326

ABSTRACT

Preterm birth (PTB) is defined as birth before the 37th week of pregnancy and results in 15 million early deliveries worldwide every year. Presently, there is no clinical test to determine PTB risk; however, a panel of nine biomarkers found in maternal blood serum has predictive power for a subsequent PTB. A significant step in creating a clinical diagnostic for PTB is designing an automated method to extract and purify these biomarkers from blood serum. Here, microfluidic devices with 45 µm × 50 µm cross-section channels were 3D printed with a built-in polymerization window to allow a glycidyl methacrylate monolith to be site-specifically polymerized within the channel. This monolith was then used as a solid support to attach antibodies for PTB biomarker extraction. Using these functionalized monoliths, it was possible to selectively extract a PTB biomarker, ferritin, from buffer and a human blood serum matrix. This is the first demonstration of monolith formation in a 3D printed microfluidic device for immunoaffinity extraction. Notably, this work is a crucial first step toward developing a 3D printed microfluidic clinical diagnostic for PTB risk.


Subjects
Lab-On-A-Chip Devices, Pregnancy/blood, Premature Birth, Three-Dimensional Printing/instrumentation, Biomarkers/blood, Female, Humans, Newborn Infant, Polymerization
13.
J Vis; 19(13): 2, 2019 11 01.
Article in English | MEDLINE | ID: mdl-31689715

ABSTRACT

Human faces contain dozens of visual features, but viewers preferentially fixate just two of them: the eyes and the mouth. Face-viewing behavior is usually studied by manually drawing regions of interest (ROIs) on the eyes, mouth, and other facial features. ROI analyses are problematic as they require arbitrary experimenter decisions about the location and number of ROIs, and they discard data because all fixations within each ROI are treated identically and fixations outside of any ROI are ignored. We introduce a data-driven method that uses principal component analysis (PCA) to characterize human face-viewing behavior. All fixations are entered into a PCA, and the resulting eigenimages provide a quantitative measure of variability in face-viewing behavior. In fixation data from 41 participants viewing four face exemplars under three stimulus and task conditions, the first principal component (PC1) separated the eye and mouth regions of the face. PC1 scores varied widely across participants, revealing large individual differences in preference for eye or mouth fixation, and PC1 scores varied by condition, revealing the importance of behavioral task in determining fixation location. Linear mixed effects modeling of the PC1 scores demonstrated that task condition accounted for 41% of the variance, individual differences accounted for 28% of the variance, and stimulus exemplar for less than 1% of the variance. Fixation eigenimages provide a useful tool for investigating the relative importance of the different factors that drive human face-viewing behavior.
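The fixation-eigenimage idea in miniature: flatten each fixation heatmap to a vector, run PCA across maps, and read off PC1 as an eye-versus-mouth axis. The heatmaps below are synthetic, with an assumed eye/mouth mixing weight per viewer:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(7)
height, width, n_maps = 32, 32, 41

# Synthetic fixation heatmaps: each viewer mixes an "eyes" hotspot and a "mouth" hotspot.
yy, xx = np.mgrid[0:height, 0:width]
eyes_hotspot = np.exp(-((yy - 10) ** 2 + (xx - 16) ** 2) / 20.0)
mouth_hotspot = np.exp(-((yy - 24) ** 2 + (xx - 16) ** 2) / 20.0)
preferences = rng.uniform(0, 1, n_maps)                  # assumed eye/mouth preference per viewer
maps = np.array([p * eyes_hotspot + (1 - p) * mouth_hotspot for p in preferences])

# PCA on flattened maps: the first eigenimage (PC1) contrasts eye and mouth regions,
# and each viewer's PC1 score indexes where they fall on that eye-mouth continuum.
pca = PCA(n_components=3)
scores = pca.fit_transform(maps.reshape(n_maps, -1))
eigenimage1 = pca.components_[0].reshape(height, width)
print(pca.explained_variance_ratio_[0])
print(np.corrcoef(scores[:, 0], preferences)[0, 1])      # PC1 tracks the eye/mouth preference
```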


Subjects
Eye Movements/physiology, Facial Recognition/physiology, Ocular Fixation/physiology, Principal Component Analysis, Adolescent, Adult, Female, Humans, Male, Young Adult
14.
J Neurosci; 37(10): 2697-2708, 2017 03 08.
Article in English | MEDLINE | ID: mdl-28179553

ABSTRACT

Cortex in and around the human posterior superior temporal sulcus (pSTS) is known to be critical for speech perception. The pSTS responds to both the visual modality (especially biological motion) and the auditory modality (especially human voices). Using fMRI in single subjects with no spatial smoothing, we show that visual and auditory selectivity are linked. Regions of the pSTS were identified that preferred visually presented moving mouths (presented in isolation or as part of a whole face) or moving eyes. Mouth-preferring regions responded strongly to voices and showed a significant preference for vocal compared with nonvocal sounds. In contrast, eye-preferring regions did not respond to either vocal or nonvocal sounds. The converse was also true: regions of the pSTS that showed a significant response to speech or preferred vocal to nonvocal sounds responded more strongly to visually presented mouths than eyes. These findings can be explained by environmental statistics. In natural environments, humans see visual mouth movements at the same time as they hear voices, while there is no auditory accompaniment to visual eye movements. The strength of a voxel's preference for visual mouth movements was strongly correlated with the magnitude of its auditory speech response and its preference for vocal sounds, suggesting that visual and auditory speech features are coded together in small populations of neurons within the pSTS.

SIGNIFICANCE STATEMENT: Humans interacting face to face make use of auditory cues from the talker's voice and visual cues from the talker's mouth to understand speech. The human posterior superior temporal sulcus (pSTS), a brain region known to be important for speech perception, is complex, with some regions responding to specific visual stimuli and others to specific auditory stimuli. Using BOLD fMRI, we show that the natural statistics of human speech, in which voices co-occur with mouth movements, are reflected in the neural architecture of the pSTS. Different pSTS regions prefer visually presented faces containing either a moving mouth or moving eyes, but only mouth-preferring regions respond strongly to voices.


Subjects
Mouth/physiology, Nerve Net/physiology, Perceptual Masking/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Adult, Choice Behavior, Cues (Psychology), Female, Humans, Male
15.
J Neurosci; 37(30): 7188-7197, 2017 07 26.
Article in English | MEDLINE | ID: mdl-28652411

ABSTRACT

Electrically stimulating early visual cortex results in a visual percept known as a phosphene. Although phosphenes can be evoked by a wide range of electrode sizes and current amplitudes, they are invariably described as small. To better understand this observation, we electrically stimulated 93 electrodes implanted in the visual cortex of 13 human subjects who reported phosphene size while stimulation current was varied. Phosphene size increased as the stimulation current was initially raised above threshold, but then rapidly reached saturation. Phosphene size also depended on the location of the stimulated site, with size increasing with distance from the foveal representation. We developed a model relating phosphene size to the amount of activated cortex and its location within the retinotopic map. First, a sigmoidal curve was used to predict the amount of activated cortex at a given current. Second, the amount of active cortex was converted to degrees of visual angle by multiplying by the inverse cortical magnification factor for that retinotopic location. This simple model accurately predicted phosphene size for a broad range of stimulation currents and cortical locations. The unexpected saturation in phosphene sizes suggests that the functional architecture of cerebral cortex may impose fundamental restrictions on the spread of artificially evoked activity and this may be an important consideration in the design of cortical prosthetic devices.

SIGNIFICANCE STATEMENT: Understanding the neural basis for phosphenes, the visual percepts created by electrical stimulation of visual cortex, is fundamental to the development of a visual cortical prosthetic. Our experiments in human subjects implanted with electrodes over visual cortex show that it is the activity of a large population of cells spread out across several millimeters of tissue that supports the perception of a phosphene. In addition, we describe an important feature of the production of phosphenes by electrical stimulation: phosphene size saturates at a relatively low current level. This finding implies that, with current methods, visual prosthetics will have a limited dynamic range available to control the production of spatial forms and that more advanced stimulation methods may be required.
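The two-stage model described above can be written in a few lines: a sigmoid converts current to activated cortical extent, and the inverse cortical magnification at the site's eccentricity converts millimeters of cortex to degrees of visual angle. The sigmoid constants and the Horton-Hoyt-style magnification parameters below are illustrative assumptions, not fitted values from the paper:

```python
import numpy as np

def activated_cortex_mm(current_ma, half_max_ma=1.0, slope=3.0, max_spread_mm=3.0):
    """Sigmoidal growth of activated cortical extent with current, saturating at max_spread_mm."""
    return max_spread_mm / (1.0 + np.exp(-slope * (current_ma - half_max_ma)))

def inverse_magnification(eccentricity_deg, a=17.3, e0=0.75):
    """Degrees of visual field per mm of cortex (reciprocal of a Horton-Hoyt-style M factor)."""
    return (eccentricity_deg + e0) / a

def phosphene_size_deg(current_ma, eccentricity_deg):
    return activated_cortex_mm(current_ma) * inverse_magnification(eccentricity_deg)

# Size saturates with current and grows with distance from the foveal representation.
for ecc in (2.0, 10.0):
    sizes = [round(phosphene_size_deg(i, ecc), 2) for i in (0.5, 1.0, 2.0, 4.0)]
    print(f"eccentricity {ecc} deg:", sizes)
```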


Subjects
Electric Stimulation, Visual Evoked Potentials/physiology, Nerve Net/physiology, Phosphenes/physiology, Visual Cortex/physiology, Visual Fields/physiology, Adult, Female, Humans, Male, Middle Aged
16.
Neuroimage; 183: 25-36, 2018 12.
Article in English | MEDLINE | ID: mdl-30092347

ABSTRACT

During face-to-face communication, the mouth of the talker is informative about speech content, while the eyes of the talker convey other information, such as gaze location. Viewers most often fixate either the mouth or the eyes of the talker's face, presumably allowing them to sample these different sources of information. To study the neural correlates of this process, healthy humans freely viewed talking faces while brain activity was measured with BOLD fMRI and eye movements were recorded with a video-based eye tracker. Post hoc trial sorting was used to divide the data into trials in which participants fixated the mouth of the talker and trials in which they fixated the eyes. Although the audiovisual stimulus was identical, the two trial types evoked differing responses in subregions of the posterior superior temporal sulcus (pSTS). The anterior pSTS preferred trials in which participants fixated the mouth of the talker while the posterior pSTS preferred fixations on the eye of the talker. A second fMRI experiment demonstrated that anterior pSTS responded more strongly to auditory and audiovisual speech than posterior pSTS eye-preferring regions. These results provide evidence for functional specialization within the pSTS under more realistic viewing and stimulus conditions than in previous neuroimaging studies.


Subjects
Brain Mapping/methods, Eye Movements/physiology, Eye, Facial Recognition/physiology, Mouth, Social Perception, Speech Perception/physiology, Temporal Lobe/physiology, Adolescent, Adult, Eye Movement Measurements, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Temporal Lobe/diagnostic imaging, Young Adult
18.
PLoS Comput Biol; 13(2): e1005229, 2017 02.
Article in English | MEDLINE | ID: mdl-28207734

ABSTRACT

Audiovisual speech integration combines information from auditory speech (talker's voice) and visual speech (talker's mouth movements) to improve perceptual accuracy. However, if the auditory and visual speech emanate from different talkers, integration decreases accuracy. Therefore, a key step in audiovisual speech perception is deciding whether auditory and visual speech have the same source, a process known as causal inference. A well-known illusion, the McGurk Effect, consists of incongruent audiovisual syllables, such as auditory "ba" + visual "ga" (AbaVga), that are integrated to produce a fused percept ("da"). This illusion raises two fundamental questions: first, given the incongruence between the auditory and visual syllables in the McGurk stimulus, why are they integrated; and second, why does the McGurk effect not occur for other, very similar syllables (e.g., AgaVba). We describe a simplified model of causal inference in multisensory speech perception (CIMS) that predicts the perception of arbitrary combinations of auditory and visual speech. We applied this model to behavioral data collected from 60 subjects perceiving both McGurk and non-McGurk incongruent speech stimuli. The CIMS model successfully predicted both the audiovisual integration observed for McGurk stimuli and the lack of integration observed for non-McGurk stimuli. An identical model without causal inference failed to accurately predict perception for either form of incongruent speech. The CIMS model uses causal inference to provide a computational framework for studying how the brain performs one of its most important tasks, integrating auditory and visual speech cues to allow us to communicate with others.
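A stripped-down causal-inference sketch (not the authors' CIMS implementation): the posterior probability of a common cause is computed from the audiovisual discrepancy, and the reliability-weighted fused estimate is used only in proportion to that probability. Treating syllables as points on a one-dimensional feature axis is a toy assumption:

```python
import numpy as np
from scipy.stats import norm

def causal_inference(aud, vis, sigma_a=1.0, sigma_v=1.0, sigma_prior=2.0, p_common=0.5):
    """Posterior probability of a common cause and the resulting integrated estimate (1-D toy cues)."""
    # Likelihood of the cue discrepancy under one shared cause vs. two independent causes.
    like_c1 = norm.pdf(aud - vis, scale=np.sqrt(sigma_a**2 + sigma_v**2))
    like_c2 = norm.pdf(aud - vis, scale=np.sqrt(sigma_a**2 + sigma_v**2 + 2 * sigma_prior**2))
    post_c1 = p_common * like_c1 / (p_common * like_c1 + (1 - p_common) * like_c2)
    # Reliability-weighted fusion, then model averaging for an auditory report.
    fused = (aud / sigma_a**2 + vis / sigma_v**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
    estimate = post_c1 * fused + (1 - post_c1) * aud
    return post_c1, estimate

# Toy syllable axis: "ba" = 0, "da" = 1, "ga" = 2. McGurk-like pair: auditory ba + visual ga.
print(causal_inference(aud=0.0, vis=2.0))   # moderate discrepancy: partial fusion toward "da"
print(causal_inference(aud=0.0, vis=5.0))   # large discrepancy: low common-cause probability, little fusion
```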


Subjects
Auditory Perception/physiology, Cerebral Cortex/physiology, Illusions/physiology, Neurological Models, Speech Perception/physiology, Visual Perception/physiology, Adult, Computer Simulation, Cues (Psychology), Female, Humans, Male, Perceptual Masking/physiology, Semantics, Psychological Framing
19.
J Cogn Neurosci; 29(6): 1044-1060, 2017 Jun.
Article in English | MEDLINE | ID: mdl-28253074

ABSTRACT

Human speech can be comprehended using only auditory information from the talker's voice. However, comprehension is improved if the talker's face is visible, especially if the auditory information is degraded as occurs in noisy environments or with hearing loss. We explored the neural substrates of audiovisual speech perception using electrocorticography, direct recording of neural activity using electrodes implanted on the cortical surface. We observed a double dissociation in the responses to audiovisual speech with clear and noisy auditory component within the superior temporal gyrus (STG), a region long known to be important for speech perception. Anterior STG showed greater neural activity to audiovisual speech with clear auditory component, whereas posterior STG showed similar or greater neural activity to audiovisual speech in which the speech was replaced with speech-like noise. A distinct border between the two response patterns was observed, demarcated by a landmark corresponding to the posterior margin of Heschl's gyrus. To further investigate the computational roles of both regions, we considered Bayesian models of multisensory integration, which predict that combining the independent sources of information available from different modalities should reduce variability in the neural responses. We tested this prediction by measuring the variability of the neural responses to single audiovisual words. Posterior STG showed smaller variability than anterior STG during presentation of audiovisual speech with noisy auditory component. Taken together, these results suggest that posterior STG but not anterior STG is important for multisensory integration of noisy auditory and visual speech.
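The Bayesian prediction tested here follows from one line of arithmetic: combining independent cues with reliabilities 1/sigma^2 gives a combined variance 1/(1/sigma_A^2 + 1/sigma_V^2), which is always below either unisensory variance. A quick numeric check under assumed noise levels:

```python
# Optimal (reliability-weighted) integration of independent auditory and visual cues:
# combined variance is the inverse of summed reliabilities, so it never exceeds either alone.
sigma_a2, sigma_v2 = 4.0, 9.0                           # assumed unisensory variances (arbitrary units)
sigma_av2 = 1.0 / (1.0 / sigma_a2 + 1.0 / sigma_v2)
print(sigma_av2, sigma_av2 < min(sigma_a2, sigma_v2))   # 2.769..., True
```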


Subjects
Brain Mapping/methods, Electrocorticography/methods, Speech Perception/physiology, Temporal Lobe/physiology, Visual Perception/physiology, Adult, Drug-Resistant Epilepsy/physiopathology, Female, Humans, Male
20.
Anal Bioanal Chem; 409(18): 4311-4319, 2017 Jul.
Article in English | MEDLINE | ID: mdl-28612085

ABSTRACT

Three-dimensional (3D) printing has generated considerable excitement in recent years regarding the extensive possibilities of this enabling technology. One area in which 3D printing has potential, not only for positive impact but also for substantial improvement, is microfluidics. To date many researchers have used 3D printers to make fluidic channels directed at point-of-care or lab-on-a-chip applications. Here, we look critically at the cross-sectional sizes of these 3D printed fluidic structures, classifying them as millifluidic (larger than 1 mm), sub-millifluidic (0.5-1.0 mm), large microfluidic (100-500 µm), or truly microfluidic (smaller than 100 µm). Additionally, we provide our prognosis for making 10-100-µm cross-section microfluidic features with custom-formulated resins and stereolithographic printers. Such 3D printed microfluidic devices for bioanalysis will accelerate research through designs that can be easily created and modified, allowing improved assays to be developed.


Subjects
Lab-On-A-Chip Devices, Three-Dimensional Printing, Point-of-Care Systems