Results 1 - 5 of 5
1.
J Neurosci; 43(23): 4291-4303, 2023 Jun 7.
Article in English | MEDLINE | ID: mdl-37142430

ABSTRACT

According to a classical view of face perception (Bruce and Young, 1986; Haxby et al., 2000), face identity and facial expression recognition are performed by separate neural substrates (ventral and lateral temporal face-selective regions, respectively). However, recent studies challenge this view, showing that expression valence can also be decoded from ventral regions (Skerry and Saxe, 2014; Li et al., 2019), and identity from lateral regions (Anzellotti and Caramazza, 2017). These findings could be reconciled with the classical view if regions specialized for one task (either identity or expression) contain a small amount of information for the other task (enough to enable above-chance decoding). In this case, we would expect representations in lateral regions to be more similar to representations in deep convolutional neural networks (DCNNs) trained to recognize facial expression than to representations in DCNNs trained to recognize face identity (the converse should hold for ventral regions). We tested this hypothesis by analyzing neural responses to faces varying in identity and expression. Representational dissimilarity matrices (RDMs) computed from human intracranial recordings (n = 11 adults; 7 females) were compared with RDMs from DCNNs trained to label either identity or expression. We found that RDMs from DCNNs trained to recognize identity correlated with intracranial recordings more strongly in all regions tested, even in regions classically hypothesized to be specialized for expression. These results deviate from the classical view, suggesting that face-selective ventral and lateral regions contribute to the representation of both identity and expression.

SIGNIFICANCE STATEMENT: Previous work proposed that separate brain regions are specialized for the recognition of face identity and facial expression. However, identity and expression recognition mechanisms might instead share common brain regions. We tested these alternatives using deep neural networks and intracranial recordings from face-selective brain regions. Deep neural networks trained to recognize identity and networks trained to recognize expression both learned representations that correlate with neural recordings. Identity-trained representations correlated with intracranial recordings more strongly in all regions tested, including regions hypothesized to be expression-specialized under the classical view. These findings support the view that identity and expression recognition rely on common brain regions. This discovery may require reevaluation of the roles that the ventral and lateral neural pathways play in processing socially relevant stimuli.
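
For readers unfamiliar with the method summarized in this abstract, the sketch below shows the core representational similarity analysis step: building RDMs from response patterns and correlating the neural RDM with a model (DCNN) RDM. The arrays and dimensions are hypothetical placeholders, not the study's data or full pipeline (no cross-validation or noise-ceiling estimation is shown).

```python
# Minimal RSA sketch: compute RDMs and correlate them.
# `neural_responses` and `dcnn_features` are made-up placeholders
# with shape (stimuli x features).
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_stimuli = 40
neural_responses = rng.normal(size=(n_stimuli, 64))   # e.g., electrode responses per face stimulus
dcnn_features = rng.normal(size=(n_stimuli, 512))     # e.g., penultimate-layer DCNN activations

def rdm(patterns):
    """Condensed RDM: 1 - Pearson correlation for every pair of stimuli."""
    return pdist(patterns, metric="correlation")

neural_rdm = rdm(neural_responses)
model_rdm = rdm(dcnn_features)

# Spearman rank correlation between the two RDMs is a common RSA statistic.
rho, p = spearmanr(neural_rdm, model_rdm)
print(f"RDM correlation: rho={rho:.3f}, p={p:.3g}")
```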


Subjects
Electrocorticography, Facial Recognition, Adult, Female, Humans, Brain, Neural Networks (Computer), Facial Recognition/physiology, Temporal Lobe/physiology, Brain Mapping, Magnetic Resonance Imaging/methods
2.
Behav Res Methods; 55(5): 2333-2352, 2023 Aug.
Article in English | MEDLINE | ID: mdl-35877024

ABSTRACT

Eye tracking and other behavioral measurements collected from patient-participants in their hospital rooms afford a unique opportunity to study natural behavior for basic and clinical translational research. We describe an immersive social and behavioral paradigm implemented in patients undergoing evaluation for surgical treatment of epilepsy, who have electrodes implanted in the brain to determine the source of their seizures. Our studies entail collecting eye tracking alongside other behavioral and psychophysiological measurements from patient-participants during unscripted behavior, including social interactions with clinical staff, friends, and family in the hospital room. This approach offers a rare window into the neurobiology of natural social behavior, though it requires carefully addressing distinct logistical, technical, and ethical challenges. Collecting neurophysiological data synchronized to behavioral and psychophysiological measures lets us study the relationship between behavior and physiology. Combining these rich data sources while participants eat, read, converse with friends and family, and so on enables clinical-translational research aimed at understanding the participants' disorders and clinician-patient interactions, as well as basic research into natural, real-world behavior. We discuss the data acquisition, quality control, annotation, and analysis pipelines required for our studies, along with the clinical, logistical, ethical, and privacy considerations critical to working in the hospital setting.
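
One technical step implied by "neurophysiological data synchronized to behavioral and psychophysiological measures" is putting asynchronously sampled streams on a common clock. The sketch below is only an illustration under assumed sampling rates and signals, not the authors' actual pipeline, which would also need drift correction, event markers, and dropped-sample handling.

```python
# Illustrative synchronization step: resample a hypothetical eye-tracking
# signal onto the neural acquisition timeline by linear interpolation.
import numpy as np

eye_t = np.arange(0.0, 10.0, 1 / 120)       # 120 Hz eye-tracker clock (assumed)
eye_x = np.sin(eye_t)                       # placeholder gaze-x signal
neural_t = np.arange(0.0, 10.0, 1 / 1000)   # 1 kHz neural acquisition clock (assumed)

# Put both streams on one time base: one gaze sample per neural sample.
eye_x_on_neural_clock = np.interp(neural_t, eye_t, eye_x)
print(eye_x_on_neural_clock.shape)
```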


Subjects
Brain, Social Behavior, Humans, Privacy
3.
PLoS Comput Biol; 18(1): e1009642, 2022 Jan.
Article in English | MEDLINE | ID: mdl-35061666

ABSTRACT

The number of neurons in mammalian cortex varies by multiple orders of magnitude across species. In contrast, the ratio of excitatory to inhibitory neurons (E:I ratio) varies over a much smaller range, from 3:1 to 9:1, and remains roughly constant across different sensory areas within a species. Although this structure is important for understanding the function of neural circuits, the reason for this consistency is not yet understood. While recent models of vision based on the efficient coding hypothesis show that increasing the number of both excitatory and inhibitory cells improves stimulus representation, the two cannot increase simultaneously because of constraints on brain volume. In this work, we implement an efficient coding model of vision under a volume constraint (using the number of neurons as a surrogate) while varying the E:I ratio. We show that the performance of the model is optimal at biologically observed E:I ratios under several metrics. We argue that this happens because of trade-offs between computational accuracy and representation capacity for natural stimuli. Further, we make experimentally testable predictions that (1) the optimal E:I ratio should be higher for species with higher sparsity in neural activity and (2) the character of inhibitory synaptic distributions and firing rates should change depending on the E:I ratio. Our findings, supported by our new preliminary analyses of publicly available data, provide the first quantitative and testable hypothesis, based on optimal coding models, for the distribution of excitatory and inhibitory neural types in mammalian sensory cortices.
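
As a rough illustration of the constrained optimization this abstract describes, the formulation below uses a generic efficient-coding form assumed here for illustration, not the paper's exact objective: minimize reconstruction error plus a sparsity cost over excitatory and inhibitory responses, subject to a fixed total neuron count serving as the volume surrogate.

```latex
% Schematic efficient-coding objective under a fixed neuron budget
% (illustrative only; the paper's exact cost terms are not given above).
\begin{aligned}
\min_{\theta}\quad & \mathbb{E}_{x \sim \text{natural stimuli}}
  \Big[ \lVert x - \hat{x}(r_E, r_I;\, \theta) \rVert_2^2 \Big]
  \;+\; \lambda \sum_i \lvert r_{E,i} \rvert \\
\text{subject to}\quad & N_E + N_I = N \quad (\text{volume surrogate}),
  \qquad \text{E:I ratio} = N_E / N_I .
\end{aligned}
```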


Subjects
Neurological Models, Neurons/physiology, Visual Cortex, Action Potentials/physiology, Animals, Cats, Computational Biology, Organ Size/physiology, Primates, Rats, Visual Cortex/cytology, Visual Cortex/physiology
4.
Article in English | MEDLINE | ID: mdl-30106683

ABSTRACT

OBJECTIVE: Assistive technologies often rely on a single remaining ability of their users, particularly those with physical disabilities such as tetraplegia, to facilitate computer access. We hypothesized that combining multiple remaining abilities of end users in an intuitive fashion can improve the quality of computer access. In this study, 15 able-bodied subjects completed four computer access tasks without using their hands: center-out tapping, on-screen maze navigation, playing a game, and sending an email. They used the multimodal Tongue Drive System (mTDS), which simultaneously offers proportional cursor control via head motion, discrete clicks via tongue gestures, and typing via speech recognition. Their performance was compared against unimodal tongue gestures (TDS) and the keyboard-and-mouse combination (KnM) as the gold standard.

RESULTS: Average throughputs in the center-out tapping task using mTDS and TDS were 0.84 bps and 0.94 bps, i.e., 21% and 22.4% of the throughput using a mouse, respectively, while the average error rate and missed targets using mTDS were 4.1% and 25.5% lower than with TDS. Maze navigation throughputs using mTDS and TDS were 0.35 bps and 0.46 bps, i.e., 16.6% and 21.8% of the throughput using a mouse, respectively. Participants achieved a 72.32% higher score with mTDS than with TDS when playing a simple game. Average email-generation time with mTDS was roughly twice as long as with KnM, at a mean typing accuracy of 78.1%.

CONCLUSION: Engaging multimodal abilities helped participants perform considerably better in complex tasks, such as sending an email, compared with the unimodal system (TDS). Performance was similar for simpler tasks, while multimodal inputs improved interaction accuracy. Cursor navigation with head motion led to higher scores in less constrained tasks, such as the game, than in the highly constrained maze task.
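
The bps throughput figures for the center-out tapping task are presumably Fitts'-law-style throughputs; the sketch below shows that computation on made-up movement data. The study's exact protocol (e.g., ISO 9241-9 effective-width corrections) is not reproduced here.

```python
# Minimal sketch of Fitts'-law throughput for a center-out tapping task.
# Distances, widths, and movement times below are made-up examples.
import math

trials = [
    # (target distance D in px, target width W in px, movement time MT in s)
    (400, 60, 1.9),
    (400, 30, 2.6),
    (250, 60, 1.4),
]

def throughput_bps(distance, width, movement_time):
    """Index of difficulty (Shannon formulation) divided by movement time."""
    index_of_difficulty = math.log2(distance / width + 1)   # bits
    return index_of_difficulty / movement_time              # bits per second

per_trial = [throughput_bps(d, w, mt) for d, w, mt in trials]
print(f"mean throughput: {sum(per_trial) / len(per_trial):.2f} bps")
```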

5.
IEEE Trans Biomed Circuits Syst; 12(1): 192-201, 2018 Feb.
Article in English | MEDLINE | ID: mdl-29377807

ABSTRACT

The multimodal Tongue Drive System (mTDS) is a highly integrated wireless assistive technology (AT) in the form of a lightweight wearable headset. It utilizes three key remaining control and communication abilities in people with severe physical disabilities, such as tetraplegia, to provide effective access to computers: 1) tongue motion for discrete/switch-based control (e.g., clicking), 2) head tracking for proportional control (e.g., mouse pointer movements), and 3) speech recognition for typing, all available simultaneously. The mTDS architecture is presented here together with a new sensor signal processing algorithm for head tracking. To evaluate device performance, 15 able-bodied participants compared mTDS against the keyboard-and-mouse (KnM) combination, the gold standard in computer input methods, by generating and sending an email with randomly selected content under a 5-minute time constraint. Over four repetitions, by the last trial it took participants, on average, only 1.8 times longer to complete the email task using mTDS versus KnM, at 82.4% typing accuracy. Mean task completion time and typing accuracy improved by 24.6% and 18.8%, respectively, from the first to the fourth trial using mTDS. The multimodal simultaneous discrete and proportional control inputs of mTDS, plus rapid typing, are expected to provide more effective computer access to people with severe physical disabilities.
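
As an illustration of how a typing-accuracy figure like 82.4% might be computed (the paper's exact metric is not stated in this abstract), a common text-entry measure is one minus the character-level edit distance divided by the length of the longer string:

```python
# Hedged sketch of one common typing-accuracy measure in text-entry studies:
# 1 - (Levenshtein edit distance / max string length). Strings are made up.
def levenshtein(a: str, b: str) -> int:
    """Minimum number of insertions, deletions, and substitutions."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

reference = "please reschedule our meeting to friday"
typed = "please rescedule our meting to friday"
accuracy = 1 - levenshtein(typed, reference) / max(len(reference), len(typed))
print(f"typing accuracy: {accuracy:.1%}")
```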


Subjects
Head, Movement, Assistive Technology, Computer-Assisted Signal Processing, Speech Recognition Interface, Tongue, Wearable Electronic Devices, Adult, Persons with Disabilities, Female, Humans, Male