1.
Sci Rep ; 13(1): 14021, 2023 08 28.
Article in English | MEDLINE | ID: mdl-37640768

ABSTRACT

Automatic wheelchairs directly controlled by brain activity could provide autonomy to severely paralyzed individuals. Current approaches mostly rely on non-invasive measures of brain activity and translate individual commands into wheelchair movements. For example, an imagined movement of the right hand would steer the wheelchair to the right. No research has investigated decoding higher-order cognitive processes to accomplish wheelchair control. We envision an invasive neural prosthetic that could provide input for wheelchair control by decoding navigational intent from hippocampal signals. Navigation has been extensively investigated in hippocampal recordings, but not for the development of neural prostheses. Here we show that it is possible to train a decoder to classify virtual-movement speeds from hippocampal signals recorded during a virtual-navigation task. These results represent the first step toward exploring the feasibility of an invasive hippocampal BCI for wheelchair control.
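The abstract does not specify the decoder, but the recipe it implies is standard: extract band-power features from windowed hippocampal local field potentials and train a cross-validated classifier on movement-speed labels. The sketch below is a minimal, hypothetical illustration of that recipe; the frequency bands, data shapes, and placeholder labels are assumptions, not the authors' pipeline.

```python
# Hypothetical sketch: classify virtual-movement speed from hippocampal
# LFP epochs using log band-power features and a linear classifier.
import numpy as np
from scipy.signal import welch
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def bandpower_features(epochs, fs, bands=((4, 8), (8, 12), (30, 55), (70, 150))):
    """epochs: (n_trials, n_channels, n_samples) -> (n_trials, n_channels*n_bands)."""
    freqs, psd = welch(epochs, fs=fs, nperseg=min(256, epochs.shape[-1]), axis=-1)
    feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(axis=-1))
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1)

fs = 1000
epochs = np.random.randn(120, 8, fs)        # placeholder LFP epochs (1 s each)
speeds = np.random.randint(0, 3, 120)       # placeholder labels: still/slow/fast
X = bandpower_features(epochs, fs)
print(cross_val_score(LogisticRegression(max_iter=1000), X, speeds, cv=5).mean())
```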


Subject(s)
Brain-Computer Interfaces, Humans, Hand, Hippocampus, Intention, Movement
2.
Neuroimage ; 269: 119913, 2023 04 01.
Article in English | MEDLINE | ID: mdl-36731812

ABSTRACT

Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. In order to continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing various degrees of decreasing behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate the existence of a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels from the higher behavioral output modes. This provides important insights toward the elusive goal of developing imagined speech decoding models that approach the effectiveness of their better-established overt speech counterparts.
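To make the channel-relevance comparison concrete, here is a hedged sketch of one way such an analysis could be set up (not the authors' models): a frame-wise logistic-regression speech detector fit per speaking mode, with per-channel relevance taken from absolute model weights and the nested-subset hierarchy checked on the top-ranked channels. The feature choice and relevance measure are assumptions.

```python
# Hypothetical sketch: per-mode speech activity detectors and a check of
# the nested-subset channel hierarchy described in the abstract.
import numpy as np
from sklearn.linear_model import LogisticRegression

def channel_relevance(X, y):
    """X: (n_frames, n_channels) features (e.g., high-gamma power); y: speech=1."""
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    return np.abs(clf.coef_).ravel()        # crude relevance: |weight| per channel

rng = np.random.default_rng(0)
modes = {m: (rng.standard_normal((5000, 64)), rng.integers(0, 2, 5000))
         for m in ("overt", "mouthed", "imagined")}       # placeholder data
top = {m: set(np.argsort(channel_relevance(X, y))[-10:])  # 10 best channels
       for m, (X, y) in modes.items()}

# The reported hierarchy corresponds to nested subsets of relevant channels:
print("imagined within mouthed:", top["imagined"] <= top["mouthed"])
print("mouthed within overt:   ", top["mouthed"] <= top["overt"])
```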


Subject(s)
Brain-Computer Interfaces, Speech, Humans, Speech/physiology, Brain/physiology, Mouth, Face, Electroencephalography/methods
4.
Article in English | MEDLINE | ID: mdl-36121939

ABSTRACT

Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources for training and execution. A deep learning architecture is presented that learns input bandpass filters capturing task-relevant spectral features directly from the data. Incorporating such explainable feature extraction into the model furthers the goal of creating end-to-end architectures that enable automated subject-specific parameter tuning while yielding an interpretable result. The model is trained and evaluated on intracranial brain data collected during a speech task. Using raw, unprocessed time samples, the model detects the presence of speech at every time sample in a causal manner, suitable for online application. Model performance is comparable or superior to that of existing approaches that require substantial signal preprocessing, and the learned frequency bands were found to converge to ranges supported by previous studies.
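A learnable bandpass front end can be illustrated with a sinc-parameterized filter bank in the spirit of SincNet-style layers: each filter is defined by just a learnable low cutoff and bandwidth, so the learned bands stay directly interpretable. This is an assumption-laden sketch, not the paper's architecture; kernel size, sampling rate, and initial bands are placeholders.

```python
# Hypothetical sketch: a band-pass filter bank whose only learnable
# parameters are each filter's low cutoff and bandwidth.
import torch
import torch.nn as nn

class LearnableBandpass(nn.Module):
    def __init__(self, n_filters=8, kernel_size=129, fs=1000.0):
        super().__init__()
        self.fs, self.k = fs, kernel_size
        self.low_hz = nn.Parameter(torch.linspace(1.0, 200.0, n_filters))
        self.band_hz = nn.Parameter(torch.full((n_filters,), 40.0))
        t = (torch.arange(kernel_size) - kernel_size // 2) / fs
        self.register_buffer("t", t)
        self.register_buffer("t_safe", torch.where(t == 0, torch.ones_like(t), t))
        self.register_buffer("window", torch.hamming_window(kernel_size))

    def _sinc_lowpass(self, fc):                 # windowed ideal low-pass
        fc = fc.view(-1, 1)                      # (n_filters, 1)
        h = torch.sin(2 * torch.pi * fc * self.t) / (torch.pi * self.t_safe)
        h = torch.where(self.t == 0, 2 * fc, h)  # limit value at t = 0
        return h * self.window

    def forward(self, x):                        # x: (batch, 1, n_samples)
        low = self.low_hz.abs().clamp(1.0, self.fs / 2 - 2.0)
        high = (low + self.band_hz.abs()).clamp(max=self.fs / 2 - 1.0)
        # band-pass = difference of two windowed sinc low-pass filters
        kernels = (self._sinc_lowpass(high) - self._sinc_lowpass(low)).unsqueeze(1)
        return nn.functional.conv1d(x, kernels, padding=self.k // 2)

layer = LearnableBandpass()
y = layer(torch.randn(4, 1, 1000))               # -> (4, 8, 1000)
```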


Subject(s)
Brain-Computer Interfaces, Deep Learning, Brain, Electrocorticography, Humans, Speech
5.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 6045-6048, 2021 11.
Article in English | MEDLINE | ID: mdl-34892495

ABSTRACT

Neurological disorders can lead to significant impairments in speech communication and, in severe cases, cause the complete loss of the ability to speak. Brain-Computer Interfaces have shown promise as an alternative communication modality by directly transforming neural activity of speech processes into textual or audible representations. Previous studies investigating such speech neuroprostheses relied on electrocorticography (ECoG) or microelectrode arrays that acquire neural signals from superficial areas of the cortex. While both measurement methods have demonstrated successful speech decoding, they do not capture activity from deeper brain structures, and this activity has therefore not been harnessed for speech-related BCIs. In this study, we bridge this gap by adapting a previously presented decoding pipeline for speech synthesis based on ECoG signals to implanted stereotactic depth electrodes (sEEG). For this purpose, we propose a multi-input convolutional neural network that extracts speech-related activity separately for each electrode shaft and estimates spectral coefficients to reconstruct an audible waveform. We evaluate our approach on open-loop data from 5 patients who performed a recitation task of Dutch utterances. We achieve correlations of up to 0.80 between original and reconstructed speech spectrograms, which are significantly above chance level for all patients (p < 0.001). Our results indicate that sEEG can yield speech decoding performance similar to prior ECoG studies and is a promising modality for speech BCIs.
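A minimal sketch of the multi-input idea follows: one small convolutional encoder per electrode shaft, with the encodings concatenated and mapped to a frame of spectral coefficients. Layer sizes, shaft counts, and the placeholder inputs are illustrative assumptions, not the paper's exact network.

```python
# Hypothetical sketch: per-shaft encoders feeding a shared regression head
# that predicts one frame of spectral coefficients.
import torch
import torch.nn as nn

class ShaftEncoder(nn.Module):
    def __init__(self, n_contacts, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_contacts, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1))

    def forward(self, x):                # x: (batch, n_contacts, n_samples)
        return self.net(x).squeeze(-1)   # -> (batch, hidden)

class MultiShaftSynthesisNet(nn.Module):
    """Maps sEEG from several shafts to one frame of spectral coefficients."""
    def __init__(self, contacts_per_shaft, n_spectral=40):
        super().__init__()
        self.encoders = nn.ModuleList(ShaftEncoder(c) for c in contacts_per_shaft)
        self.head = nn.Linear(32 * len(contacts_per_shaft), n_spectral)

    def forward(self, shafts):           # list of per-shaft tensors
        z = torch.cat([enc(x) for enc, x in zip(self.encoders, shafts)], dim=-1)
        return self.head(z)

model = MultiShaftSynthesisNet(contacts_per_shaft=[8, 10, 12])
frame = model([torch.randn(4, c, 200) for c in (8, 10, 12)])  # -> (4, 40)
```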


Subject(s)
Brain-Computer Interfaces, Speech, Electrocorticography, Implanted Electrodes, Humans, Neural Networks (Computer)
6.
Commun Biol ; 4(1): 1055, 2021 09 23.
Article in English | MEDLINE | ID: mdl-34556793

ABSTRACT

Speech neuroprosthetics aim to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals who have severely limited means of communication. Recent advances in decoding approaches have led to high-quality reconstructions of acoustic speech from invasively measured neural activity. However, most prior research utilizes data collected during open-loop experiments of articulated speech, which might not directly translate to imagined speech processes. Here, we present an approach that synthesizes audible speech in real time for both imagined and whispered speech conditions. With a participant implanted with stereotactic depth electrodes, we were able to reliably generate audible speech in real time. The decoding models rely predominantly on frontal activity, suggesting that speech processes have similar representations when vocalized, whispered, or imagined. While the reconstructed audio is not yet intelligible, our real-time synthesis approach represents an essential step toward investigating how patients will learn to operate a closed-loop speech neuroprosthesis based on imagined speech.
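For intuition about the real-time constraint, here is a hypothetical timing skeleton of a causal, windowed synthesis loop; `decode_frame` and `vocode` are stand-ins for trained models, and all rates and sizes are assumptions.

```python
# Hypothetical timing skeleton of causal, windowed real-time synthesis.
import numpy as np
from collections import deque

FS, WIN, HOP = 1024, 512, 128            # illustrative rates and sizes

buffer = deque(maxlen=WIN)               # rolling window of neural samples

def decode_frame(window):                # placeholder decoding model
    return np.zeros(40)                  # e.g., one spectral frame

def vocode(frame):                       # placeholder waveform generator
    return np.zeros(HOP)

def on_new_samples(chunk, audio_out):
    """Called by the acquisition driver every HOP samples."""
    buffer.extend(chunk)
    if len(buffer) == WIN:               # only past (causal) context is used
        frame = decode_frame(np.asarray(buffer))
        audio_out(vocode(frame))         # latency bounded by roughly WIN / FS

for _ in range(WIN // HOP):              # simulate incoming chunks
    on_new_samples(np.zeros(HOP), audio_out=lambda a: print("audio", a.shape))
```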


Subject(s)
Brain-Computer Interfaces, Implanted Electrodes/statistics & numerical data, Neural Prostheses/statistics & numerical data, Quality of Life, Speech, Female, Humans, Young Adult
7.
Front Neurosci ; 14: 123, 2020.
Article in English | MEDLINE | ID: mdl-32174810

ABSTRACT

Stereotactic electroencephalography (sEEG) utilizes localized, penetrating depth electrodes to measure electrophysiological brain activity. It is most commonly used to identify epileptogenic zones in cases of refractory epilepsy. The implanted electrodes generally provide a sparse sampling of a unique set of brain regions, including deeper brain structures such as the hippocampus, amygdala, and insula, that cannot be captured by superficial measurement modalities such as electrocorticography (ECoG). Despite the overlapping clinical application and recent progress in decoding of ECoG for Brain-Computer Interfaces (BCIs), sEEG has thus far received comparatively little attention for BCI decoding. Additionally, the success of related deep-brain stimulation (DBS) implants bodes well for the potential of chronic sEEG applications. This article provides an overview of sEEG technology, BCI-related research, and prospective future directions of sEEG for long-term BCI applications.

9.
Front Hum Neurosci ; 13: 401, 2019.
Article in English | MEDLINE | ID: mdl-31803035

ABSTRACT

With the recent surge of affordable, high-performance virtual reality (VR) headsets, there is vast potential for applications ranging from education and training to entertainment and fitness. As these interfaces continue to evolve, passive user-state monitoring can play a key role in expanding the immersive VR experience and in tracking activity for user well-being. By recording physiological signals such as the electroencephalogram (EEG) during use of a VR device, the user's interactions in the virtual environment could be adapted in real time based on the user's cognitive state. Current VR headsets provide a logical, convenient, and unobtrusive framework for mounting EEG sensors. The present study evaluates the feasibility of passively monitoring cognitive workload via EEG while performing a classical n-back task in an interactive VR environment. Data were collected from 15 participants and the spatio-spectral EEG features were analyzed with respect to task performance. The results indicate that scalp measurements of electrical activity can effectively discriminate three workload levels, even after suppression of co-varying high-frequency activity.
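One conventional way to realize such spatio-spectral workload classification, shown purely as an illustrative sketch with placeholder data, is log band power per channel fed to a cross-validated linear discriminant:

```python
# Hypothetical sketch: three-level workload classification from
# spatio-spectral EEG features (log band power per channel).
import numpy as np
from scipy.signal import welch
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def spatio_spectral_features(epochs, fs):
    """epochs: (n_epochs, n_channels, n_samples) -> log band-power features."""
    freqs, psd = welch(epochs, fs=fs, nperseg=fs, axis=-1)
    bands = ((4, 8), (8, 13), (13, 30))              # theta, alpha, beta
    feats = [np.log(psd[..., (freqs >= lo) & (freqs < hi)].mean(-1))
             for lo, hi in bands]
    return np.concatenate(feats, axis=-1)

fs = 256
epochs = np.random.randn(90, 32, fs * 4)             # placeholder 4 s epochs
workload = np.repeat([0, 1, 2], 30)                  # three n-back levels
X = spatio_spectral_features(epochs, fs)
clf = make_pipeline(StandardScaler(), LinearDiscriminantAnalysis())
print(cross_val_score(clf, X, workload, cv=5).mean())
```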

10.
Front Neurosci ; 13: 1267, 2019.
Article in English | MEDLINE | ID: mdl-31824257

ABSTRACT

Neural interfaces that directly produce intelligible speech from brain activity would allow people with severe impairment from neurological disorders to communicate more naturally. Here, we record neural population activity in motor, premotor, and inferior frontal cortices during speech production using electrocorticography (ECoG) and show that ECoG signals alone can be used to generate intelligible speech output that preserves conversational cues. To produce speech directly from neural data, we adapted a method from the field of speech synthesis called unit selection, in which units of speech are concatenated to form audible output. In our approach, which we call Brain-To-Speech, we chose successive units of speech based on the measured ECoG activity to generate audio waveforms directly from the neural recordings. Brain-To-Speech employed the user's own voice to generate speech that sounded very natural and included features such as prosody and accentuation. By investigating the brain areas involved in speech production separately, we found that speech motor cortex provided more information for the reconstruction process than the other cortical areas.
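Unit selection reduces, at its core, to nearest-neighbour matching of neural feature frames against a library of paired training units, followed by concatenation of the selected audio snippets (the user's own voice). The sketch below shows only that core under assumed shapes; real unit-selection systems additionally use join costs and cross-fading between units, which are omitted here.

```python
# Hypothetical sketch of unit selection for Brain-To-Speech-style synthesis:
# pick, per neural frame, the training unit with the closest neural features.
import numpy as np

def unit_selection_synthesis(neural_stream, train_neural, train_audio_units):
    """neural_stream: (n_frames, n_feats); train_neural: (n_units, n_feats);
    train_audio_units: (n_units, unit_len) waveform snippets."""
    out = []
    for frame in neural_stream:
        dists = np.linalg.norm(train_neural - frame, axis=1)
        out.append(train_audio_units[np.argmin(dists)])   # best-matching unit
    return np.concatenate(out)   # naive concatenation; real systems cross-fade

rng = np.random.default_rng(1)
audio = unit_selection_synthesis(rng.standard_normal((50, 16)),
                                 rng.standard_normal((500, 16)),
                                 rng.standard_normal((500, 160)))
print(audio.shape)               # -> (8000,)
```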

11.
J Neural Eng ; 16(3): 036019, 2019 06.
Article in English | MEDLINE | ID: mdl-30831567

ABSTRACT

OBJECTIVE: Direct synthesis of speech from neural signals could provide a fast and natural way of communication for people with neurological diseases. Invasively-measured brain activity (electrocorticography; ECoG) supplies the necessary temporal and spatial resolution to decode fast and complex processes such as speech production. A number of impressive advances in speech decoding using neural signals have been achieved in recent years, but the complex dynamics are still not fully understood; moreover, it is unlikely that simple linear models can capture the relation between neural activity and continuous spoken speech. APPROACH: Here we show that deep neural networks can be used to map ECoG from speech production areas onto an intermediate representation of speech (logMel spectrogram). The proposed method uses a densely connected convolutional neural network topology which is well-suited to work with the small amount of data available from each participant. MAIN RESULTS: In a study with six participants, we achieved correlations up to r = 0.69 between the reconstructed and original logMel spectrograms. We transferred our predictions back into an audible waveform by applying a WaveNet vocoder. The vocoder was conditioned on logMel features that harnessed a much larger, pre-existing data corpus to provide the most natural acoustic output. SIGNIFICANCE: To the best of our knowledge, this is the first time that high-quality speech has been reconstructed from neural recordings during speech production using deep neural networks.
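The densely connected topology can be sketched compactly: every convolutional layer receives the concatenation of all preceding feature maps. The toy model below maps ECoG feature frames to logMel frames under illustrative sizes; the WaveNet vocoder stage and all training details are omitted, and nothing here reproduces the paper's exact network.

```python
# Hypothetical sketch: a small densely connected 1-D CNN regressing
# ECoG feature frames onto logMel spectrogram frames.
import torch
import torch.nn as nn

class DenseBlock1d(nn.Module):
    """Each layer sees the concatenation of all previous feature maps."""
    def __init__(self, in_ch, growth=16, n_layers=3):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Conv1d(in_ch + i * growth, growth, 3, padding=1),
                          nn.ReLU())
            for i in range(n_layers))
        self.out_ch = in_ch + n_layers * growth

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)   # dense connectivity
        return x

class ECoGToLogMel(nn.Module):
    def __init__(self, n_ecog_feats=64, n_mels=40):
        super().__init__()
        self.dense = DenseBlock1d(n_ecog_feats)
        self.head = nn.Conv1d(self.dense.out_ch, n_mels, kernel_size=1)

    def forward(self, x):                    # x: (batch, n_ecog_feats, n_frames)
        return self.head(self.dense(x))      # -> (batch, n_mels, n_frames)

model = ECoGToLogMel()
logmel = model(torch.randn(2, 64, 100))      # -> (2, 40, 100)
```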


Subject(s)
Cerebral Cortex/physiology, Communication Aids for Disabled, Electrocorticography/methods, Neural Networks (Computer), Speech/physiology, Humans, Photic Stimulation/methods
12.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3103-3106, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946544

ABSTRACT

Virtual Reality (VR) has emerged as a novel paradigm for immersive applications in training, entertainment, rehabilitation, and other domains. In this paper, we investigate the automatic classification of mental workload from brain activity measured through functional near-infrared spectroscopy (fNIRS) in VR. We present results from a study which implements the established n-back task in an immersive visual scene, including physical interaction. Our results show that user workload can be detected from fNIRS signals in immersive VR tasks, in both person-dependent and person-adaptive settings.
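As a purely illustrative sketch of fNIRS workload features (an assumption, since the paper's features are not listed in the abstract): per-channel mean and slope of the oxygenated-hemoglobin response over each task block, classified with a linear model.

```python
# Hypothetical fNIRS workload features: per-channel HbO mean and slope
# over each task block, with a cross-validated linear classifier.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

def hbo_features(blocks, fs):
    """blocks: (n_blocks, n_channels, n_samples) of HbO concentration change."""
    t = np.arange(blocks.shape[-1]) / fs
    means = blocks.mean(axis=-1)                          # (n_blocks, n_channels)
    flat = blocks.reshape(-1, blocks.shape[-1]).T         # samples x series
    slopes = np.polyfit(t, flat, 1)[0].reshape(blocks.shape[:2])
    return np.hstack([means, slopes])

fs = 10                                                   # typical fNIRS rate
blocks = np.random.randn(60, 16, fs * 30)                 # placeholder 30 s blocks
workload = np.repeat([0, 1], 30)                          # low vs. high n-back
X = hbo_features(blocks, fs)
print(cross_val_score(LinearSVC(max_iter=5000), X, workload, cv=5).mean())
```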


Subject(s)
Brain/physiology, Near-Infrared Spectroscopy, Virtual Reality, Workload, Humans, Mental Processes
13.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 3111-3114, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946546

ABSTRACT

Millions of individuals suffer from impairments that significantly disrupt or completely eliminate their ability to speak. An ideal intervention would restore one's natural ability to physically produce speech. Recent progress has been made in decoding speech-related brain activity to generate synthesized speech. Our vision is to extend these recent advances toward the goal of restoring physical speech production, using decoded speech-related brain activity to modulate the electrical stimulation of the orofacial musculature involved in speech. In this pilot study we take a step toward this vision by investigating the feasibility of stimulating orofacial muscles during vocalization in order to alter acoustic production. The results of our study provide the necessary foundation for eventual orofacial stimulation controlled directly from decoded speech-related brain activity.


Subject(s)
Electric Stimulation, Facial Muscles/physiology, Movement, Speech, Brain/physiology, Humans, Pilot Projects
14.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 4576-4579, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946883

ABSTRACT

The integration of electroencephalogram (EEG) sensors into virtual reality (VR) headsets makes it possible to track the user's cognitive state and could eventually be used to increase the sense of immersion. Recent developments in wireless, room-scale VR tracking allow users to move freely in the physical and virtual spaces. Such motion can create significant movement artifacts in EEG sensors mounted to the VR headset. This study explores the removal of EEG movement artifacts caused by repetitive, stereotyped movements during an interactive VR task.
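The abstract does not name the removal method. One common baseline for repetitive, stereotyped movements is movement-locked artifact template subtraction, sketched below under that assumption; events are assumed to be non-overlapping and away from the recording edges.

```python
# Hypothetical sketch: subtract the average movement-locked artifact
# template, estimated across repetitions, from each channel.
import numpy as np

def remove_movement_template(eeg, event_samples, pre=50, post=200):
    """eeg: (n_channels, n_samples); event_samples: movement onsets in samples."""
    cleaned = eeg.copy()
    epochs = np.stack([eeg[:, s - pre:s + post] for s in event_samples])
    template = epochs.mean(axis=0)            # stereotyped artifact estimate
    for s in event_samples:
        cleaned[:, s - pre:s + post] -= template
    return cleaned

eeg = np.random.randn(8, 5000)                # placeholder recording
print(remove_movement_template(eeg, [500, 1500, 2500]).shape)
```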


Subject(s)
Cognition, Electroencephalography, Movement, Virtual Reality, Artifacts, Brain/physiology, Humans, Motion (Physics)
15.
Iperception ; 9(1): 2041669518754595, 2018.
Article in English | MEDLINE | ID: mdl-29375755

ABSTRACT

Inattentional blindness is a failure to notice an unexpected event when attention is directed elsewhere. The current study examined participants' awareness of an unexpected object that maintained its luminance contrast, switched its luminance once, or flashed repetitively. One hundred twenty participants performed a dynamic tracking task on a computer monitor, in which they were instructed to count the number of movement deflections of an attended set of objects while ignoring other objects. On the critical trial, an unexpected cross that did not change its luminance (control condition), switched its luminance once (switch condition), or repetitively flashed (flash condition) traveled across the stimulus display. Participants noticed the unexpected cross more frequently when its luminance matched their attention set than when it did not. Unexpectedly, however, the proportions of participants who noticed the cross in the switch and flash conditions were statistically comparable. The results suggest that an unexpected object with even a single luminance change can break inattentional blindness in a multi-object tracking task.

16.
IEEE Trans Neural Syst Rehabil Eng ; 25(6): 557-565, 2017 06.
Article in English | MEDLINE | ID: mdl-27542113

ABSTRACT

Steady-state visual evoked potentials (SSVEPs) are oscillations in the electroencephalogram (EEG), observed mainly over the occipital area, whose frequency corresponds to that of a repetitively flashing visual stimulus. SSVEPs have proven to be very consistent and reliable signals for rapid EEG-based brain-computer interface (BCI) control. There is conflicting evidence regarding whether solid or checkerboard-patterned flashing stimuli produce superior BCI performance. Furthermore, the spatial frequency of checkerboard stimuli can be varied for optimal performance. The present study empirically evaluates the performance of a 4-class SSVEP-based BCI when the spatial frequency of the individual checkerboard stimuli is varied over a continuum ranging from a solid background to single-pixel checkerboard patterns. The results indicate that a spatial frequency of 2.4 cycles per degree can maximize the information transfer rate while reducing subjective visual irritation compared to lower spatial frequencies. This finding on stimulus design can lead to improved performance and usability of SSVEP-based BCIs.
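For reference, the information transfer rate mentioned here is conventionally computed with the standard Wolpaw formula for an N-target task with classification accuracy P and selection time T seconds:

```latex
B = \log_2 N + P \log_2 P + (1 - P)\,\log_2\!\left(\frac{1 - P}{N - 1}\right)
\quad \text{bits per selection}, \qquad
\mathrm{ITR} = \frac{60}{T}\, B \ \text{bits per minute}.
```

For example, N = 4 targets at P = 0.95 give B of about 1.63 bits per selection.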


Subject(s)
Brain Mapping/methods, Brain-Computer Interfaces, Electroencephalography/methods, Visual Evoked Potentials/physiology, Photic Stimulation/methods, Visual Cortex/physiology, Adult, Algorithms, Female, Humans, Male, Automated Pattern Recognition/methods, Reproducibility of Results, Sensitivity and Specificity, Spatio-Temporal Analysis
17.
PLoS One ; 11(11): e0166872, 2016.
Article in English | MEDLINE | ID: mdl-27875590

ABSTRACT

How the human brain plans, executes, and monitors continuous and fluent speech has remained largely elusive. For example, previous research has defined the cortical locations most important for different aspects of speech function, but has not yet defined the temporal progression of those locations' involvement as speech progresses either overtly or covertly. In this paper, we uncovered the spatio-temporal evolution of neuronal population-level activity related to continuous overt speech, and identified those locations that shared activity characteristics across overt and covert speech. Specifically, we asked subjects to repeat continuous sentences aloud or silently while we recorded electrical signals directly from the surface of the brain (electrocorticography, ECoG). We then determined the relationship between cortical activity and speech output across different areas of cortex and at sub-second timescales. The results highlight a spatio-temporal progression of cortical involvement in the continuous speech process that initiates utterances in frontal-motor areas and ends with the monitoring of auditory feedback in superior temporal gyrus. Direct comparison of cortical activity related to overt versus covert conditions revealed a common network of brain regions involved in speech that may implement orthographic and phonological processing. Our results provide one of the first characterizations of the spatio-temporal electrophysiological representations of the continuous speech process, and also highlight the common neural substrate of overt and covert speech. These results thereby contribute to a refined understanding of speech functions in the human brain.


Subject(s)
Cerebral Cortex/physiology, Electrocorticography, Reading, Speech/physiology, Adult, Female, Humans, Male, Middle Aged
18.
IEEE Trans Neural Syst Rehabil Eng ; 24(5): 532-41, 2016 05.
Article in English | MEDLINE | ID: mdl-26812728

ABSTRACT

Many of the most widely accepted methods for reliable detection of steady-state visual evoked potentials (SSVEPs) in the electroencephalogram (EEG) utilize canonical correlation analysis (CCA). CCA uses pure sine and cosine reference templates with frequencies corresponding to the visual stimulation frequencies. These generic reference templates may not optimally reflect the natural SSVEP features obscured by the background EEG. This paper introduces a new approach that utilizes spatio-temporal feature extraction with multivariate linear regression (MLR) to learn discriminative SSVEP features for improving the detection accuracy. MLR is implemented on dimensionality-reduced EEG training data and a constructed label matrix to find optimally discriminative subspaces. Experimental results show that the proposed MLR method significantly outperforms CCA as well as several other competing methods for SSVEP detection, especially for time windows shorter than 1 second. This demonstrates that the MLR method is a promising new approach for achieving improved real-time performance of SSVEP-BCIs.
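The CCA baseline referenced above is standard and easy to state: correlate each EEG window with sine/cosine reference templates at every stimulus frequency (and its harmonics), then classify by maximal canonical correlation. The sketch below shows that baseline with an illustrative sampling rate and frequency set; the proposed MLR method itself is not reproduced here.

```python
# Sketch of the standard CCA baseline for SSVEP detection.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep_classify(window, fs, stim_freqs, n_harmonics=2):
    """window: (n_samples, n_channels) EEG. Returns index of detected frequency."""
    t = np.arange(window.shape[0]) / fs
    scores = []
    for f in stim_freqs:
        refs = np.column_stack([fn(2 * np.pi * f * (h + 1) * t)
                                for h in range(n_harmonics)
                                for fn in (np.sin, np.cos)])
        cca = CCA(n_components=1).fit(window, refs)
        u, v = cca.transform(window, refs)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))

fs, freqs = 250, (8.57, 10.0, 12.0, 15.0)                 # illustrative setup
print(cca_ssvep_classify(np.random.randn(fs, 8), fs, freqs))
```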


Subject(s)
Brain-Computer Interfaces, Electroencephalography/methods, Visual Evoked Potentials/physiology, Automated Pattern Recognition/methods, Visual Cortex/physiology, Visual Perception/physiology, Adult, Brain Mapping/methods, Computer Simulation, Discriminant Analysis, Humans, Linear Models, Machine Learning, Male, Multivariate Analysis, Regression Analysis, Reproducibility of Results, Sensitivity and Specificity, Young Adult
19.
J Neural Eng ; 12(3): 036006, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25875047

ABSTRACT

OBJECTIVE: Recently, paradigms using code-modulated visual evoked potentials (c-VEPs) have proven to achieve among the highest information transfer rates for noninvasive brain-computer interfaces (BCIs). One issue with current c-VEP paradigms, and visual-evoked paradigms in general, is that they require direct foveal fixation of the flashing stimuli. These interfaces are often visually unpleasant and can be irritating and fatiguing to the user, thus adversely impacting practical performance. In this study, a novel c-VEP BCI paradigm is presented that attempts to perform spatial decoupling of the targets and flashing stimuli using two distinct concepts: spatial separation and boundary positioning. APPROACH: For the paradigm, the flashing stimuli form a ring that encompasses the intended non-flashing targets, which are spatially separated from the stimuli. The user fixates on the desired target, which is classified using the changes to the EEG induced by the flashing stimuli located in the non-foveal visual field. Additionally, a subset of targets is also positioned at or near the stimulus boundaries, which decouples targets from direct association with a single stimulus. This allows a greater number of target locations for a fixed number of flashing stimuli. MAIN RESULTS: Results from 11 subjects showed practical classification accuracies for the non-foveal condition, with comparable performance to the direct-foveal condition for longer observation lengths. Online results from 5 subjects confirmed the offline results with an average accuracy across subjects of 95.6% for a 4-target condition. The offline analysis also indicated that targets positioned at or near the boundaries of two stimuli could be classified with the same accuracy as traditional superimposed (non-boundary) targets. SIGNIFICANCE: The implications of this research are that c-VEPs can be detected and accurately classified to achieve comparable BCI performance without requiring potentially irritating direct foveation of flashing stimuli. Furthermore, this study shows that it is possible to increase the number of targets beyond the number of stimuli without degrading performance. Given the superior information transfer rate of c-VEP paradigms, these results can lead to the development of more practical and ergonomic BCIs.
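As background on how c-VEP targets are typically classified (a general sketch, not necessarily this paper's exact classifier): every target corresponds to a circular shift of the same pseudorandom stimulation sequence, so a single learned response template can be rolled and correlated against the incoming EEG.

```python
# Hypothetical sketch: c-VEP classification via circularly shifted templates.
import numpy as np

def cvep_classify(trial, template, shifts_per_target):
    """trial, template: (n_samples,) filtered/averaged EEG; shifts in samples."""
    scores = [np.corrcoef(trial, np.roll(template, s))[0, 1]
              for s in shifts_per_target]
    return int(np.argmax(scores))

fs, n_targets = 256, 4
template = np.random.randn(fs)                       # placeholder template
shifts = [i * fs // n_targets for i in range(n_targets)]
print(cvep_classify(np.roll(template, shifts[2]), template, shifts))  # -> 2
```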


Subject(s)
Brain-Computer Interfaces, Visual Evoked Potentials/physiology, Photic Stimulation/methods, Task Performance and Analysis, Visual Cortex/physiology, Visual Perception/physiology, Adult, Cues (Psychology), Female, Humans, Male, Reproducibility of Results, Sensitivity and Specificity, Young Adult
20.
J Neural Eng ; 11(3): 035012, 2014 Jun.
Article in English | MEDLINE | ID: mdl-24836095

ABSTRACT

OBJECTIVE: This study presents inter-subject models of scalp-recorded electroencephalographic (sEEG) event-related potentials (ERPs) using intracranially recorded ERPs from electrocorticography and stereotactic depth electrodes in the hippocampus, generally termed intracranial EEG (iEEG). APPROACH: The participants were six patients with medically-intractable epilepsy who underwent temporary placement of intracranial electrode arrays to localize seizure foci. Participants performed one experimental session using a brain-computer interface matrix spelling paradigm controlled by sEEG prior to the iEEG electrode implantation, and one or more identical sessions controlled by iEEG after implantation. All participants were able to achieve excellent spelling accuracy using sEEG, four of the participants achieved roughly equivalent performance in the iEEG sessions, and all participants were significantly above chance accuracy for the iEEG sessions. The sERPs were modeled as linear combinations of iERPs under two different optimization criteria. MAIN RESULTS: The results indicate that sERPs can be accurately estimated from the iERPs for the patients who exhibited stable ERPs over the respective sessions, and that the transformed iERPs can be accurately classified with an sERP-derived classifier. SIGNIFICANCE: The resulting models provide a new empirical representation of the formation and distribution of sERPs from underlying composite iERPs. These new insights provide a better understanding of ERP relationships and can potentially lead to the development of more robust signal processing methods for noninvasive EEG applications.
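The simplest of the linear-combination criteria described above is ordinary least squares; the sketch below demonstrates it with synthetic placeholder data (the second optimization criterion and the classification step are not reproduced).

```python
# Hypothetical sketch: express a scalp ERP as a linear combination of
# intracranial ERPs, with weights fit by ordinary least squares.
import numpy as np

def fit_serp_from_ierps(ierps, serp):
    """ierps: (n_intracranial_channels, n_samples); serp: (n_samples,).
    Returns per-channel weights w minimizing ||ierps.T @ w - serp||."""
    w, *_ = np.linalg.lstsq(ierps.T, serp, rcond=None)
    return w

rng = np.random.default_rng(2)
ierps = rng.standard_normal((40, 300))                # placeholder iERPs
true_w = rng.standard_normal(40)
serp = ierps.T @ true_w + 0.01 * rng.standard_normal(300)
w_hat = fit_serp_from_ierps(ierps, serp)
print(np.allclose(w_hat, true_w, atol=0.05))          # recovers the weights
```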


Subject(s)
Brain-Computer Interfaces, Brain/physiopathology, Electroencephalography/methods, Epilepsy/physiopathology, Evoked Potentials, Neurological Models, Visual Perception, Communication Aids for Disabled, Computer Simulation, Humans, Scalp/physiopathology