Results 1 - 20 of 43
1.
Neuroimage; 269: 119913, 2023 Apr 1.
Article in English | MEDLINE | ID: mdl-36731812

ABSTRACT

Recent studies have demonstrated that it is possible to decode and synthesize various aspects of acoustic speech directly from intracranial measurements of electrophysiological brain activity. To continue progressing toward the development of a practical speech neuroprosthesis for individuals with speech impairments, better understanding and modeling of imagined speech processes are required. The present study uses intracranial brain recordings from participants who performed a speaking task with trials consisting of overt, mouthed, and imagined speech modes, representing decreasing degrees of behavioral output. Speech activity detection models are constructed using spatial, spectral, and temporal brain activity features, and the features and model performances are characterized and compared across the three degrees of behavioral output. The results indicate a hierarchy in which the relevant channels for the lower behavioral output modes form nested subsets of the relevant channels from the higher behavioral output modes. This provides important insights for the elusive goal of developing more effective imagined speech decoding models relative to their better-established overt speech counterparts.


Subjects
Brain-Computer Interfaces, Speech, Humans, Speech/physiology, Brain/physiology, Mouth, Face, Electroencephalography/methods
2.
Sci Rep; 13(1): 14021, 2023 Aug 28.
Article in English | MEDLINE | ID: mdl-37640768

ABSTRACT

Automatic wheelchairs directly controlled by brain activity could provide autonomy to severely paralyzed individuals. Current approaches mostly rely on non-invasive measures of brain activity and translate individual commands into wheelchair movements. For example, an imagined movement of the right hand would steer the wheelchair to the right. No research has investigated decoding higher-order cognitive processes to accomplish wheelchair control. We envision an invasive neural prosthetic that could provide input for wheelchair control by decoding navigational intent from hippocampal signals. Navigation has been extensively investigated in hippocampal recordings, but not for the development of neural prostheses. Here we show that it is possible to train a decoder to classify virtual-movement speeds from hippocampal signals recorded during a virtual-navigation task. These results represent the first step toward exploring the feasibility of an invasive hippocampal BCI for wheelchair control.


Subjects
Brain-Computer Interfaces, Humans, Hand, Hippocampus, Intention, Movement
3.
Article in English | MEDLINE | ID: mdl-36121939

ABSTRACT

Numerous state-of-the-art solutions for neural speech decoding and synthesis incorporate deep learning into the processing pipeline. These models are typically opaque and can require significant computational resources for training and execution. A deep learning architecture is presented that learns input bandpass filters capturing task-relevant spectral features directly from data. Incorporating such explainable feature extraction into the model furthers the goal of creating end-to-end architectures that enable automated subject-specific parameter tuning while yielding an interpretable result. The model is implemented using intracranial brain data collected during a speech task. Using raw, unprocessed time samples, the model detects the presence of speech at every time sample in a causal manner, suitable for online application. Model performance is comparable to or better than that of existing approaches that require substantial signal preprocessing, and the learned frequency bands were found to converge to ranges supported by previous studies.


Subjects
Brain-Computer Interfaces, Deep Learning, Brain, Electrocorticography, Humans, Speech
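The learned bandpass filters described in this abstract converge to task-relevant frequency ranges. As a point of reference, a fixed windowed-sinc FIR bandpass, the classical counterpart of such learned filters, can be sketched in plain Python; the band edges (70-170 Hz, a common high-gamma range in intracranial work) and the 1 kHz sampling rate are illustrative assumptions, not values from the paper:

```python
import math

def bandpass_fir(f_lo, f_hi, fs, num_taps=101):
    """Windowed-sinc bandpass FIR filter (Hamming window).
    f_lo/f_hi are band edges in Hz, fs is the sampling rate."""
    def sinc(x):
        return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)
    mid = num_taps // 2
    taps = []
    for n in range(num_taps):
        k = n - mid
        # Difference of two low-pass sinc kernels gives a band-pass kernel
        h = (2 * (f_hi / fs) * sinc(2 * f_hi * k / fs)
             - 2 * (f_lo / fs) * sinc(2 * f_lo * k / fs))
        w = 0.54 - 0.46 * math.cos(2 * math.pi * n / (num_taps - 1))  # Hamming
        taps.append(h * w)
    return taps

def apply_fir(taps, signal):
    """Causal convolution of the filter with a signal (same length as input)."""
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, t in enumerate(taps):
            if 0 <= i - j < len(signal):
                acc += t * signal[i - j]
        out.append(acc)
    return out

fs = 1000  # Hz, an assumed intracranial sampling rate
taps = bandpass_fir(70, 170, fs)
t = [i / fs for i in range(fs)]
in_band = [math.sin(2 * math.pi * 120 * x) for x in t]   # 120 Hz: inside the band
out_band = [math.sin(2 * math.pi * 10 * x) for x in t]   # 10 Hz: outside the band
rms = lambda s: math.sqrt(sum(v * v for v in s) / len(s))
# Compare steady-state output power (skip the filter's start-up transient)
print(rms(apply_fir(taps, in_band)[len(taps):])
      > 10 * rms(apply_fir(taps, out_band)[len(taps):]))  # True
```

A learned front end, as in the paper, would instead parameterize the band edges and fit them by gradient descent; this fixed filter only illustrates the band-pass behavior such a model discovers.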
4.
Article in English | MEDLINE | ID: mdl-36908334

ABSTRACT

The Eighth International Brain-Computer Interface (BCI) Meeting was held June 7-9, 2021 in a virtual format. The conference continued the BCI Meeting series' interactive nature with 21 workshops covering topics in BCI (also called brain-machine interface) research. As in the past, the workshops covered the breadth of topics in BCI. Some provided detailed examinations of specific methods, hardware, or processes. Others focused on specific BCI applications or user groups. Several continued consensus-building efforts designed to create BCI standards and increase the ease of comparison between studies and the potential for meta-analyses and large multi-site clinical trials. Ethical and translational considerations were the primary topic of some workshops and an important secondary consideration in others. The range of BCI applications continues to expand, with more workshops focusing on approaches that can extend beyond the needs of those with physical impairments. This paper summarizes each workshop, provides background information and references for further study, presents an overview of the discussion topics, and describes the conclusions, challenges, or initiatives that resulted from the interactions and discussions at the workshops.

5.
Annu Int Conf IEEE Eng Med Biol Soc; 2021: 6045-6048, 2021 Nov.
Article in English | MEDLINE | ID: mdl-34892495

ABSTRACT

Neurological disorders can lead to significant impairments in speech communication and, in severe cases, cause the complete loss of the ability to speak. Brain-Computer Interfaces have shown promise as an alternative communication modality by directly transforming the neural activity of speech processes into textual or audible representations. Previous studies investigating such speech neuroprostheses relied on electrocorticography (ECoG) or microelectrode arrays that acquire neural signals from superficial areas of the cortex. While both measurement methods have demonstrated successful speech decoding, they do not capture activity from deeper brain structures, and this activity has therefore not been harnessed for speech-related BCIs. In this study, we bridge this gap by adapting a previously presented decoding pipeline for speech synthesis based on ECoG signals to implanted depth electrodes (sEEG). For this purpose, we propose a multi-input convolutional neural network that extracts speech-related activity separately for each electrode shaft and estimates spectral coefficients to reconstruct an audible waveform. We evaluate our approach on open-loop data from 5 patients who performed a recitation task of Dutch utterances. We achieve correlations of up to 0.80 between original and reconstructed speech spectrograms, which are significantly above chance level for all patients (p < 0.001). Our results indicate that sEEG can yield speech decoding performance similar to that of prior ECoG studies and is a promising modality for speech BCIs.


Subjects
Brain-Computer Interfaces, Speech, Electrocorticography, Implanted Electrodes, Humans, Neural Networks (Computer)
6.
Commun Biol; 4(1): 1055, 2021 Sep 23.
Article in English | MEDLINE | ID: mdl-34556793

ABSTRACT

Speech neuroprosthetics aim to provide a natural communication channel to individuals who are unable to speak due to physical or neurological impairments. Real-time synthesis of acoustic speech directly from measured neural activity could enable natural conversations and notably improve quality of life, particularly for individuals who have severely limited means of communication. Recent advances in decoding approaches have led to high-quality reconstructions of acoustic speech from invasively measured neural activity. However, most prior research utilizes data collected during open-loop experiments of articulated speech, which might not directly translate to imagined speech processes. Here, we present an approach that synthesizes audible speech in real-time for both imagined and whispered speech conditions. Using a participant implanted with stereotactic depth electrodes, we were able to reliably generate audible speech in real-time. The decoding models rely predominantly on frontal activity, suggesting that speech processes have similar representations when vocalized, whispered, or imagined. While reconstructed audio is not yet intelligible, our real-time synthesis approach represents an essential step towards investigating how patients will learn to operate a closed-loop speech neuroprosthesis based on imagined speech.


Subjects
Brain-Computer Interfaces, Implanted Electrodes/statistics & numerical data, Neural Prostheses/statistics & numerical data, Quality of Life, Speech, Female, Humans, Young Adult
7.
Front Neurosci; 14: 123, 2020.
Article in English | MEDLINE | ID: mdl-32174810

ABSTRACT

Stereotactic electroencephalography (sEEG) utilizes localized, penetrating depth electrodes to measure electrophysiological brain activity. It is most commonly used in the identification of epileptogenic zones in cases of refractory epilepsy. The implanted electrodes generally provide a sparse sampling of a unique set of brain regions, including deeper brain structures such as the hippocampus, amygdala, and insula, that cannot be captured by superficial measurement modalities such as electrocorticography (ECoG). Despite the overlapping clinical application and recent progress in decoding of ECoG for Brain-Computer Interfaces (BCIs), sEEG has thus far received comparatively little attention for BCI decoding. Additionally, the success of the related deep-brain stimulation (DBS) implants bodes well for the potential of chronic sEEG applications. This article provides an overview of sEEG technology, BCI-related research, and prospective future directions of sEEG for long-term BCI applications.

8.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 4576-4579, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946883

ABSTRACT

The integration of electroencephalogram (EEG) sensors into virtual reality (VR) headsets can provide the capability of tracking the user's cognitive state and eventually be used to increase the sense of immersion. Recent developments in wireless, room-scale VR tracking allow users to move freely in the physical and virtual spaces. Such motion can create significant movement artifacts in EEG sensors mounted to the VR headset. This study explores the removal of EEG movement artifacts caused by repetitive, stereotyped movements during an interactive VR task.


Subjects
Cognition, Electroencephalography, Movement, Virtual Reality, Artifacts, Brain/physiology, Humans, Motion
9.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 3103-3106, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946544

ABSTRACT

Virtual Reality (VR) has emerged as a novel paradigm for immersive applications in training, entertainment, rehabilitation, and other domains. In this paper, we investigate the automatic classification of mental workload from brain activity measured through functional near-infrared spectroscopy (fNIRS) in VR. We present results from a study which implements the established n-back task in an immersive visual scene, including physical interaction. Our results show that user workload can be detected from fNIRS signals in immersive VR tasks, both in a person-dependent and a person-adaptive manner.


Subjects
Brain/physiology, Near-Infrared Spectroscopy, Virtual Reality, Workload, Humans, Mental Processes
10.
Front Hum Neurosci; 13: 401, 2019.
Article in English | MEDLINE | ID: mdl-31803035

ABSTRACT

With the recent surge of affordable, high-performance virtual reality (VR) headsets, there is vast potential for applications ranging from education, to training, to entertainment, to fitness and beyond. As these interfaces continue to evolve, passive user-state monitoring can play a key role in expanding the immersive VR experience and tracking activity for user well-being. By recording physiological signals such as the electroencephalogram (EEG) during use of a VR device, the user's interactions in the virtual environment could be adapted in real-time based on the user's cognitive state. Current VR headsets provide a logical, convenient, and unobtrusive framework for mounting EEG sensors. The present study evaluates the feasibility of passively monitoring cognitive workload via EEG while performing a classical n-back task in an interactive VR environment. Data were collected from 15 participants and the spatio-spectral EEG features were analyzed with respect to task performance. The results indicate that scalp measurements of electrical activity can effectively discriminate three workload levels, even after suppression of co-varying high-frequency activity.

11.
Annu Int Conf IEEE Eng Med Biol Soc; 2019: 3111-3114, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31946546

ABSTRACT

Millions of individuals suffer from impairments that significantly disrupt or completely eliminate their ability to speak. An ideal intervention would restore one's natural ability to physically produce speech. Recent progress has been made in decoding speech-related brain activity to generate synthesized speech. Our vision is to extend these recent advances toward the goal of restoring physical speech production using decoded speech-related brain activity to modulate the electrical stimulation of the orofacial musculature involved in speech. In this pilot study we take a step toward this vision by investigating the feasibility of stimulating orofacial muscles during vocalization in order to alter acoustic production. The results of our study provide the necessary foundation for eventual orofacial stimulation controlled directly by decoded speech-related brain activity.


Subjects
Electric Stimulation, Facial Muscles/physiology, Movement, Speech, Brain/physiology, Humans, Pilot Projects
12.
J Neural Eng; 16(3): 036019, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30831567

ABSTRACT

OBJECTIVE: Direct synthesis of speech from neural signals could provide a fast and natural way of communication for people with neurological diseases. Invasively-measured brain activity (electrocorticography; ECoG) supplies the necessary temporal and spatial resolution to decode fast and complex processes such as speech production. A number of impressive advances in speech decoding using neural signals have been achieved in recent years, but the complex dynamics are still not fully understood, and it is unlikely that simple linear models can capture the relation between neural activity and continuous spoken speech. APPROACH: Here we show that deep neural networks can be used to map ECoG from speech production areas onto an intermediate representation of speech (logMel spectrogram). The proposed method uses a densely connected convolutional neural network topology which is well-suited to work with the small amount of data available from each participant. MAIN RESULTS: In a study with six participants, we achieved correlations up to r = 0.69 between the reconstructed and original logMel spectrograms. We transferred our prediction back into an audible waveform by applying a Wavenet vocoder. The vocoder was conditioned on logMel features that harnessed a much larger, pre-existing data corpus to provide the most natural acoustic output. SIGNIFICANCE: To the best of our knowledge, this is the first time that high-quality speech has been reconstructed from neural recordings during speech production using deep neural networks.


Subjects
Cerebral Cortex/physiology, Communication Aids for Disabled, Electrocorticography/methods, Neural Networks (Computer), Speech/physiology, Humans, Photic Stimulation/methods
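The logMel spectrogram used above as the intermediate representation rests on the mel frequency scale. A minimal sketch of how mel filterbank band edges are placed, using the common HTK-style formula; the band count and frequency range here are illustrative, not the paper's settings:

```python
import math

def hz_to_mel(f_hz):
    # HTK-style mel scale, commonly used when building logMel filterbanks
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    # Inverse mapping, back from mel to Hz
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_band_edges(f_min, f_max, n_bands):
    """Edge frequencies of a mel filterbank: spaced evenly on the
    mel axis, then mapped back to Hz."""
    lo, hi = hz_to_mel(f_min), hz_to_mel(f_max)
    step = (hi - lo) / (n_bands + 1)
    return [mel_to_hz(lo + i * step) for i in range(n_bands + 2)]

# 23 triangular bands up to 8 kHz (illustrative choice)
edges = mel_band_edges(0.0, 8000.0, 23)
# Edges are denser at low frequencies, mirroring auditory frequency resolution
print(edges[1] - edges[0] < edges[-1] - edges[-2])  # True
```

A logMel frame is then the elementwise log of the filterbank energies computed from a short-time spectrum; the network in the paper predicts these coefficients, and the vocoder turns them back into a waveform.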
13.
Front Neurosci; 13: 1267, 2019.
Article in English | MEDLINE | ID: mdl-31824257

ABSTRACT

Neural interfaces that directly produce intelligible speech from brain activity would allow people with severe impairment from neurological disorders to communicate more naturally. Here, we record neural population activity in motor, premotor and inferior frontal cortices during speech production using electrocorticography (ECoG) and show that ECoG signals alone can be used to generate intelligible speech output that can preserve conversational cues. To produce speech directly from neural data, we adapted a method from the field of speech synthesis called unit selection, in which units of speech are concatenated to form audible output. In our approach, which we call Brain-To-Speech, we chose subsequent units of speech based on the measured ECoG activity to generate audio waveforms directly from the neural recordings. Brain-To-Speech employed the user's own voice to generate speech that sounded very natural and included features such as prosody and accentuation. By investigating the brain areas involved in speech production separately, we found that speech motor cortex provided more information for the reconstruction process than the other cortical areas.

14.
J Neural Eng; 5(2): 101-10, 2008 Jun.
Article in English | MEDLINE | ID: mdl-18367779

ABSTRACT

Brain-computer interface (BCI) technology can provide nonmuscular communication and control to people who are severely paralyzed. BCIs can use noninvasive or invasive techniques for recording the brain signals that convey the user's commands. Although noninvasive BCIs are used for simple applications, it has frequently been assumed that only invasive BCIs, which use electrodes implanted in the brain, will be able to provide multidimensional sequential control of a robotic arm or a neuroprosthesis. The present study shows that a noninvasive BCI using scalp-recorded electroencephalographic (EEG) activity and an adaptive algorithm can provide people, including people with spinal cord injuries, with two-dimensional cursor movement and target selection. Multiple targets were presented around the periphery of a computer screen, with one designated as the correct target. The user's task was to use EEG to move a cursor from the center of the screen to the correct target and then to use an additional EEG feature to select the target. If the cursor reached an incorrect target, the user was instructed not to select it. Thus, this task emulated the key features of mouse operation. The results indicate that people with severe motor disabilities could use brain signals for sequential multidimensional movement and selection.


Subjects
Brain/physiology, Communication Aids for Disabled, Computer Peripherals, Electrocardiography/methods, Motor Evoked Potentials/physiology, Spinal Cord Injuries/rehabilitation, User-Computer Interface, Adult, Algorithms, Female, Humans, Male, Middle Aged
15.
Iperception; 9(1): 2041669518754595, 2018.
Article in English | MEDLINE | ID: mdl-29375755

ABSTRACT

Inattentional blindness is a failure to notice an unexpected event when attention is directed elsewhere. The current study examined participants' awareness of an unexpected object that maintained its luminance contrast, switched its luminance once, or flashed repetitively. One hundred twenty participants performed a dynamic tracking task on a computer monitor in which they were instructed to count the number of movement deflections of an attended set of objects while ignoring other objects. On the critical trial, an unexpected cross that did not change its luminance (control condition), switched its luminance once (switch condition), or repetitively flashed (flash condition) traveled across the stimulus display. Participants noticed the unexpected cross more frequently when the luminance feature matched their attention set than when it did not match. Unexpectedly, however, the proportions of participants who noticed the cross in the switch and flash conditions were statistically comparable. The results suggest that an unexpected object with even a single luminance change can break inattentional blindness in a multi-object tracking task.

16.
IEEE Trans Biomed Eng; 54(2): 273-80, 2007 Feb.
Article in English | MEDLINE | ID: mdl-17278584

ABSTRACT

A brain-computer interface (BCI) is a system that provides an alternate nonmuscular communication/control channel for individuals with severe neuromuscular disabilities. With proper training, individuals can learn to modulate the amplitude of specific electroencephalographic (EEG) components (e.g., the 8-12 Hz mu rhythm and 18-26 Hz beta rhythm) over the sensorimotor cortex and use them to control a cursor on a computer screen. Conventional spectral techniques for monitoring the continuous amplitude fluctuations fail to capture essential amplitude/phase relationships of the mu and beta rhythms in a compact fashion and, therefore, are suboptimal. By extracting the characteristic mu rhythm for a user, the exact morphology can be characterized and exploited as a matched filter. A simple, parameterized model for the characteristic mu rhythm is proposed and its effectiveness as a matched filter is examined online for a one-dimensional cursor control task. The results suggest that amplitude/phase coupling exists between the mu and beta bands during event-related desynchronization, and that an appropriate matched filter can provide improved performance.


Subjects
Algorithms, Cerebral Cortex/physiology, Electroencephalography/methods, Evoked Potentials/physiology, Imagination/physiology, Automated Pattern Recognition/methods, User-Computer Interface, Cortical Synchronization/methods, Humans
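The matched-filter idea in this abstract amounts to cross-correlating a characteristic rhythm template with the incoming signal and looking for a strong response. A minimal sketch with a synthetic 10 Hz "mu-like" template; the sampling rate, template length, and the noise-free signal are illustrative assumptions, whereas a real system would extract the template from the user's own mu morphology and operate on noisy EEG:

```python
import math

def cross_correlate(signal, template):
    """Sliding dot product of a template against a signal (valid lags only)."""
    n, m = len(signal), len(template)
    return [sum(signal[lag + j] * template[j] for j in range(m))
            for lag in range(n - m + 1)]

fs = 256  # Hz (assumed sampling rate)
# Template: three cycles of a 10 Hz sinusoid standing in for the mu rhythm
template = [math.sin(2 * math.pi * 10 * i / fs) for i in range(3 * fs // 10)]
# Signal: silence with the template embedded starting at sample 100
signal = [0.0] * 400
for i, v in enumerate(template):
    signal[100 + i] = v
scores = cross_correlate(signal, template)
print(scores.index(max(scores)))  # 100: the matched filter peaks at the embedding
```

A template that also captures the rhythm's exact amplitude/phase morphology, as the paper proposes, sharpens this peak relative to a plain band-power measure.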
18.
IEEE Trans Neural Syst Rehabil Eng; 25(6): 557-565, 2017 Jun.
Article in English | MEDLINE | ID: mdl-27542113

ABSTRACT

Steady-state visual evoked potentials (SSVEPs) are oscillations of the electroencephalogram (EEG), observed mainly over the occipital area, whose frequency corresponds to that of a repetitively flashing visual stimulus. SSVEPs have proven to be very consistent and reliable signals for rapid EEG-based brain-computer interface (BCI) control. There is conflicting evidence regarding whether solid or checkerboard-patterned flashing stimuli produce superior BCI performance. Furthermore, the spatial frequency of checkerboard stimuli can be varied for optimal performance. The present study performs an empirical evaluation of performance for a 4-class SSVEP-based BCI when the spatial frequency of the individual checkerboard stimuli is varied over a continuum ranging from a solid background to single-pixel checkerboard patterns. The results indicate that a spatial frequency of 2.4 cycles per degree can maximize the information transfer rate with a reduction in subjective visual irritation compared to lower spatial frequencies. This important finding on stimulus design can lead to improved performance and usability of SSVEP-based BCIs.


Subjects
Brain Mapping/methods, Brain-Computer Interfaces, Electroencephalography/methods, Visual Evoked Potentials/physiology, Photic Stimulation/methods, Visual Cortex/physiology, Adult, Algorithms, Female, Humans, Male, Automated Pattern Recognition/methods, Reproducibility of Results, Sensitivity and Specificity, Spatio-Temporal Analysis
19.
Prog Brain Res; 159: 411-9, 2006.
Article in English | MEDLINE | ID: mdl-17071245

ABSTRACT

The Wadsworth brain-computer interface (BCI), based on mu and beta sensorimotor rhythms, uses one- and two-dimensional cursor movement tasks and relies on user training. This is a real-time closed-loop system. Signal processing consists of channel selection, spatial filtering, and spectral analysis. Feature translation uses a regression approach and normalization. Adaptation occurs at several points in this process on the basis of different criteria and methods. It can use either feedforward (e.g., estimating the signal mean for normalization) or feedback control (e.g., estimating feature weights for the prediction equation). We view this process as the interaction between a dynamic user and a dynamic system that coadapt over time. Understanding the dynamics of this interaction and optimizing its performance represent a major challenge for BCI research.


Subjects
Beta Rhythm, Brain/physiology, Electroencephalography, User-Computer Interface, Physiological Adaptation/physiology, Humans, Computer-Assisted Signal Processing
20.
J Neural Eng; 3(4): 299-305, 2006 Dec.
Article in English | MEDLINE | ID: mdl-17124334

ABSTRACT

This study assesses the relative performance characteristics of five established classification techniques on data collected using the P300 Speller paradigm, originally described by Farwell and Donchin (1988 Electroenceph. Clin. Neurophysiol. 70 510). Four linear methods: Pearson's correlation method (PCM), Fisher's linear discriminant (FLD), stepwise linear discriminant analysis (SWLDA) and a linear support vector machine (LSVM); and one nonlinear method: Gaussian kernel support vector machine (GSVM), are compared for classifying offline data from eight users. The relative performance of the classifiers is evaluated, along with the practical concerns regarding the implementation of the respective methods. The results indicate that while all methods attained acceptable performance levels, SWLDA and FLD provide the best overall performance and implementation characteristics for practical classification of P300 Speller data.


Subjects
Electroencephalography/classification, P300 Evoked Potentials/physiology, Adult, Algorithms, Statistical Data Interpretation, Discriminant Analysis, Female, Humans, Linear Models, Male, Middle Aged, Nonlinear Dynamics, Normal Distribution
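Fisher's linear discriminant (FLD), one of the best performers reported above, projects feature vectors onto the direction that best separates the two classes (target vs. non-target responses). A minimal two-dimensional sketch with hypothetical toy clusters, not P300 data:

```python
def fisher_ld(class0, class1):
    """Fisher's linear discriminant for two classes of 2-D points:
    w = Sw^-1 (m1 - m0), with a bias placing the threshold midway
    between the projected class means."""
    def mean(pts):
        n = len(pts)
        return [sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n]

    def scatter(pts, m):
        sxx = syy = sxy = 0.0
        for x, y in pts:
            dx, dy = x - m[0], y - m[1]
            sxx += dx * dx; syy += dy * dy; sxy += dx * dy
        return sxx, syy, sxy

    m0, m1 = mean(class0), mean(class1)
    s0, s1 = scatter(class0, m0), scatter(class1, m1)
    # Within-class scatter matrix Sw (small ridge term keeps it invertible)
    a = s0[0] + s1[0] + 1e-6
    d = s0[1] + s1[1] + 1e-6
    b = s0[2] + s1[2]
    det = a * d - b * b
    dm = [m1[0] - m0[0], m1[1] - m0[1]]
    # 2x2 inverse applied to the mean difference
    w = [(d * dm[0] - b * dm[1]) / det, (-b * dm[0] + a * dm[1]) / det]
    mid = [(m0[0] + m1[0]) / 2, (m0[1] + m1[1]) / 2]
    bias = -(w[0] * mid[0] + w[1] * mid[1])
    return lambda p: 1 if w[0] * p[0] + w[1] * p[1] + bias > 0 else 0

# Toy "non-target vs target" feature clusters (hypothetical numbers)
non_target = [(0.0, 0.0), (1.0, 0.2), (0.2, 1.0), (0.5, 0.5)]
target = [(3.0, 3.0), (4.0, 3.2), (3.2, 4.0), (3.5, 3.5)]
classify = fisher_ld(non_target, target)
print(all(classify(p) == 0 for p in non_target)
      and all(classify(p) == 1 for p in target))  # True
```

SWLDA, the other top method in the study, extends this idea by using stepwise regression to select a sparse subset of features before fitting the discriminant, which is why it scales well to the high-dimensional P300 feature vectors.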