Results 1 - 9 of 9
1.
J Speech Lang Hear Res ; : 1-10, 2024 Aug 06.
Article in English | MEDLINE | ID: mdl-39106199

ABSTRACT

PURPOSE: The aim of this study was to decode intended and overt speech from neuromagnetic signals while participants performed spontaneous overt speech tasks without cues or prompts (stimuli). METHOD: Magnetoencephalography (MEG), a noninvasive neuroimaging technique, was used to collect neural signals from seven healthy adult English speakers performing spontaneous, overt speech tasks. The participants randomly spoke the words yes or no at a self-paced rate without cues. Two machine learning models, linear discriminant analysis (LDA) and a one-dimensional convolutional neural network (1D CNN), were employed to classify the two words from the recorded MEG signals. RESULTS: LDA and the 1D CNN achieved average decoding accuracies of 79.02% and 90.40%, respectively, in decoding overt speech, significantly surpassing the chance level (50%). The accuracy for decoding intended speech was 67.19% using the 1D CNN. CONCLUSIONS: This study demonstrates the possibility of decoding spontaneous overt and intended speech directly from neural signals in the absence of perceptual interference. We believe these findings are a solid step toward future spontaneous-speech-based brain-computer interfaces.
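The LDA baseline in this abstract can be sketched in a few lines. This is not the authors' code: the features, dimensions, and data below are synthetic stand-ins for real MEG trials, and only the two-class Fisher LDA idea is taken from the abstract.

```python
# Minimal sketch of a two-class LDA decoder ("yes" vs. "no") on MEG-like
# feature vectors. Data are synthetic; dimensions are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def fit_lda(X, y):
    """Fisher LDA for two classes: project onto w = Sw^-1 (mu1 - mu0)."""
    X0, X1 = X[y == 0], X[y == 1]
    mu0, mu1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class scatter, lightly regularized for stability.
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += 1e-6 * np.eye(Sw.shape[0])
    w = np.linalg.solve(Sw, mu1 - mu0)
    b = -w @ (mu0 + mu1) / 2.0        # threshold at the projected midpoint
    return w, b

def predict_lda(X, w, b):
    return (X @ w + b > 0).astype(int)

# Synthetic "trials": 200 per class, 20 features, shifted class means.
X = np.vstack([rng.normal(0.0, 1.0, (200, 20)),
               rng.normal(0.8, 1.0, (200, 20))])
y = np.repeat([0, 1], 200)

w, b = fit_lda(X, y)
acc = (predict_lda(X, w, b) == y).mean()
print(f"training accuracy: {acc:.2f}")  # well above the 50% chance level
```

The 1D CNN the study favors would replace this linear projection with learned temporal filters, at the cost of far more parameters.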

2.
Front Psychol ; 15: 1114811, 2024.
Article in English | MEDLINE | ID: mdl-38903475

ABSTRACT

Amyotrophic lateral sclerosis (ALS) is an idiopathic, fatal, and fast-progressing neurodegenerative disease characterized by the degeneration of motor neurons. ALS patients often experience an initial misdiagnosis or a diagnostic delay due to the current unavailability of an efficient biomarker. Since impaired speech is typical in ALS, we hypothesized that functional differences between healthy and ALS participants during speech tasks can be explained by cortical pattern changes, thereby leading to the identification of a neural biomarker for ALS. In this pilot study, we collected magnetoencephalography (MEG) recordings from three early-diagnosed patients with ALS and three healthy controls during imagined (covert) and overt speech tasks. First, we computed sensor correlations, which were greater for participants with ALS than for healthy controls. Second, we compared the power of the MEG signals in canonical frequency bands between the two groups, which showed the greatest dissimilarity in the beta band for ALS participants. Third, we assessed differences in functional connectivity, which showed greater beta-band connectivity for ALS participants than for healthy controls. Finally, we performed single-trial classification, which yielded the highest performance with beta-band features (~98%). These findings were consistent across trials, phrases, and participants for both imagined and overt speech tasks. Our preliminary results indicate that speech-evoked beta oscillations could be a potential neural biomarker for diagnosing ALS. To our knowledge, this is the first demonstration of the detection of ALS from single-trial neural signals.
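The core feature here, beta-band (roughly 13-30 Hz) power, can be estimated from a sensor time series with a plain FFT periodogram. A hedged sketch, with illustrative signal parameters (sampling rate, frequencies, and noise levels are not taken from the study):

```python
# Estimate beta-band power of a single-sensor time series via the FFT.
import numpy as np

def band_power(x, fs, lo, hi):
    """Average power spectral density of x in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

fs = 250.0                            # assumed sampling rate, Hz
t = np.arange(0, 4, 1 / fs)           # 4 s of data
rng = np.random.default_rng(1)
# A 20 Hz (beta) oscillation buried in noise, vs. noise alone.
beta_sig = np.sin(2 * np.pi * 20 * t) + rng.normal(0, 1, t.size)
noise = rng.normal(0, 1, t.size)

p_beta = band_power(beta_sig, fs, 13, 30)
p_ctrl = band_power(noise, fs, 13, 30)
print(p_beta > p_ctrl)                # prints True: the oscillation raises beta power
```

Comparing such band-power values between groups (and feeding them to a classifier) is the kind of analysis the abstract describes; a Welch estimate with windowed averaging would be the more robust choice in practice.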

3.
Curr Biol ; 33(15): 3145-3154.e5, 2023 08 07.
Article in English | MEDLINE | ID: mdl-37442139

ABSTRACT

Human skills are composed of sequences of individual actions performed with utmost precision. When occasional errors occur, they may have serious consequences, for example, when pilots are manually landing a plane. In such cases, the ability to predict an error before it occurs would clearly be advantageous. Here, we asked whether it is possible to predict future errors in a procedural keyboard-based human motor skill. We report that prolonged keypress transition times (KTTs), reflecting slower speed, and anomalous delta-band oscillatory activity in cingulate-entorhinal-precuneus brain regions precede upcoming errors. Combined, anomalous low-frequency activity and prolonged KTTs predicted up to 70% of future errors. Decoding strength (the posterior probability of error) increased progressively as errors approached. We conclude that it is possible to predict future individual errors in sequential skill performance.
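The combination of the two predictors can be illustrated with a logistic model. To be clear, the weights, bias, and feature scaling below are hypothetical, not the study's fitted model; only the idea of mapping a slowed KTT plus a delta-band anomaly score to a posterior probability of error is taken from the abstract.

```python
# Hypothetical logistic combination of behavioral (KTT) and neural
# (delta-band anomaly) features into a posterior probability of error.
import numpy as np

def error_posterior(ktt_z, delta_z, w_ktt=1.5, w_delta=1.2, bias=-2.0):
    """P(error) from z-scored KTT and delta-band anomaly features.
    All weights are illustrative placeholders."""
    logit = bias + w_ktt * ktt_z + w_delta * delta_z
    return 1.0 / (1.0 + np.exp(-logit))

# The posterior rises as keypresses slow and delta anomalies grow,
# mirroring the reported increase in decoding strength before an error.
p_normal = error_posterior(ktt_z=0.0, delta_z=0.0)
p_pre_error = error_posterior(ktt_z=2.0, delta_z=1.5)
print(p_normal < p_pre_error)    # prints True
```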


Subject(s)
Brain, Motor Skills, Humans, Gyrus Cinguli
4.
Annu Int Conf IEEE Eng Med Biol Soc ; 2021: 6543-6546, 2021 11.
Article in English | MEDLINE | ID: mdl-34892608

ABSTRACT

Neural speech decoding aims to provide natural-rate communication assistance to patients in a locked-in state (e.g., due to amyotrophic lateral sclerosis, ALS), in contrast to traditional brain-computer interface (BCI) spellers, which are slow. Recent studies have shown that magnetoencephalography (MEG) is a suitable neuroimaging modality for studying neural speech decoding, given its excellent temporal resolution, which can characterize the fast dynamics of speech. Gradiometers have been the preferred choice for sensor-space analysis with MEG due to their efficacy in noise suppression over magnetometers. However, the recent development of wearable MEG devices based on optically pumped magnetometers (OPMs) has shown great potential for future BCI applications; yet no prior study has evaluated the performance of magnetometers in neural speech decoding. In this study, we decoded imagined and spoken speech from the MEG signals of seven healthy participants and compared the performance of magnetometers and gradiometers. Experimental results indicated that magnetometers also have potential for neural speech decoding, although their performance was significantly lower than that obtained with gradiometers. Further, we implemented a wavelet-based denoising strategy that significantly improved the performance of both magnetometers and gradiometers. These findings reconfirm that gradiometers are preferable for MEG-based decoding analysis but also point toward the use of magnetometers (or OPMs) in the development of next-generation speech-BCIs.
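Wavelet denoising of the kind mentioned here can be sketched with a single-level Haar transform and soft thresholding. The abstract does not specify the wavelet family, decomposition depth, or threshold rule, so those choices below are illustrative:

```python
# Single-level Haar wavelet denoising with soft thresholding.
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar DWT, soft-threshold the detail coefficients, invert.
    Assumes len(x) is even."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)      # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)      # detail coefficients (noise-heavy)
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)
    out = np.empty_like(x)
    out[0::2] = (a + d) / np.sqrt(2)
    out[1::2] = (a - d) / np.sqrt(2)
    return out

rng = np.random.default_rng(2)
t = np.linspace(0, 1, 1024)
clean = np.sin(2 * np.pi * 5 * t)             # slow "evoked" component
noisy = clean + rng.normal(0, 0.5, t.size)

denoised = haar_denoise(noisy, thresh=0.5)
mse_before = np.mean((noisy - clean) ** 2)
mse_after = np.mean((denoised - clean) ** 2)
print(mse_after < mse_before)                 # prints True: error is reduced
```

In practice a multi-level transform with a smoother wavelet (e.g. Daubechies) and a data-driven threshold would be used, but the shrink-the-details principle is the same.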


Subject(s)
Speech, Wearable Electronic Devices, Humans, Magnetoencephalography, Neuroimaging
5.
IEEE Access ; 8: 182320-182337, 2020.
Article in English | MEDLINE | ID: mdl-33204579

ABSTRACT

Direct decoding of speech from the brain is a faster alternative to current electroencephalography (EEG) speller-based brain-computer interfaces (BCIs) for providing communication assistance to locked-in patients. Magnetoencephalography (MEG) has recently shown great potential as a non-invasive neuroimaging modality for neural speech decoding, owing in part to its spatial selectivity over other high-temporal-resolution devices. Standard MEG systems have a large number of cryogenically cooled channels/sensors (200-300) encapsulated within a fixed liquid-helium dewar, precluding their use as wearable BCI devices. Fortunately, recently developed optically pumped magnetometers (OPMs) do not require cryogens and have the potential to be wearable and movable, making them more suitable for BCI applications. This design is also modular, allowing customized montages that include only the sensors necessary for a particular task. As the number of sensors strongly influences the cost, size, and weight of MEG systems, minimizing the number of sensors is critical for designing practical MEG-based BCIs in the future. In this study, we sought to identify an optimal set of MEG channels for decoding imagined and spoken phrases from MEG signals. Using a forward selection algorithm with a support vector machine classifier, we found that nine optimally located MEG gradiometers provided higher decoding accuracy than using all channels. Additionally, the forward selection algorithm achieved performance similar to dimensionality reduction using a stacked sparse autoencoder. Analysis of the spatial dynamics of speech decoding suggested that both left- and right-hemisphere sensors contribute to speech decoding. Sensors located approximately near Broca's area were commonly among the higher-ranked sensors across all subjects.
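Greedy forward selection is simple to sketch: repeatedly add the channel that most improves classification accuracy. A nearest-centroid classifier stands in below for the study's SVM to keep the example dependency-free, and the "channels" are synthetic:

```python
# Greedy forward channel selection with a nearest-centroid classifier.
import numpy as np

def accuracy(X, y):
    """Nearest-centroid training accuracy on feature matrix X."""
    c0, c1 = X[y == 0].mean(0), X[y == 1].mean(0)
    pred = (np.linalg.norm(X - c1, axis=1) <
            np.linalg.norm(X - c0, axis=1)).astype(int)
    return (pred == y).mean()

def forward_select(X, y, n_keep):
    chosen, remaining = [], list(range(X.shape[1]))
    while len(chosen) < n_keep:
        # Score each candidate channel added to the current set.
        best_score, best_c = max((accuracy(X[:, chosen + [c]], y), c)
                                 for c in remaining)
        chosen.append(best_c)
        remaining.remove(best_c)
    return chosen

rng = np.random.default_rng(3)
n = 300
y = rng.integers(0, 2, n)
X = rng.normal(0, 1, (n, 12))        # 12 "channels", mostly uninformative
X[:, 4] += 2.0 * y                   # channels 4 and 7 carry the class signal
X[:, 7] += 2.0 * y

picked = forward_select(X, y, n_keep=2)
print(sorted(picked))                # the two informative channels are found
```

With real data, each candidate set would be scored by cross-validated SVM accuracy rather than training accuracy, since greedy selection otherwise overfits.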

6.
Front Neurosci ; 14: 290, 2020.
Article in English | MEDLINE | ID: mdl-32317917

ABSTRACT

Speech production is a hierarchical mechanism involving the synchronization of the brain and the oral articulators, in which intended linguistic concepts are transformed into meaningful sounds. Individuals with locked-in syndrome (fully paralyzed but aware) lose their motor abilities completely, including articulation and even eye movement. The neural pathway may be the only option for restoring some level of communication for these patients. Current brain-computer interfaces (BCIs) use patients' visual and attentional correlates to build communication, resulting in a slow communication rate (a few words per minute). Direct decoding of imagined speech from neural signals (which could then drive a speech synthesizer) has the potential for a higher communication rate. In this study, we investigated the decoding of five imagined and spoken phrases from single-trial, non-invasive magnetoencephalography (MEG) signals collected from eight adult subjects. Two machine learning algorithms were used: an artificial neural network (ANN) with statistical features as the baseline approach, and convolutional neural networks (CNNs) applied to spatial, spectral, and temporal features extracted from the MEG signals. Experimental results indicated that imagined and spoken phrases can be decoded directly from neuromagnetic signals. CNNs were found to be highly effective, with average decoding accuracies of up to 93% for the imagined and 96% for the spoken phrases.
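The "statistical features" baseline can be sketched as summarizing each channel of a trial with a few scalar statistics. The study's exact feature set is not given here, so the four statistics below are common illustrative choices:

```python
# Per-channel statistical features for a single MEG trial.
import numpy as np

def statistical_features(trial):
    """trial: (n_channels, n_samples) array.
    Returns a flat vector of mean, std, peak-to-peak, and RMS per channel."""
    feats = np.stack([
        trial.mean(axis=1),
        trial.std(axis=1),
        trial.max(axis=1) - trial.min(axis=1),
        np.sqrt((trial ** 2).mean(axis=1)),
    ], axis=1)
    return feats.ravel()

rng = np.random.default_rng(4)
trial = rng.normal(0, 1, (8, 500))   # synthetic: 8 channels x 500 samples
fv = statistical_features(trial)
print(fv.shape)                      # prints (32,): 8 channels x 4 features
```

These vectors would feed the ANN baseline; the CNN instead consumes richer spatial-spectral-temporal representations directly.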

7.
Sensors (Basel) ; 20(8)2020 Apr 16.
Article in English | MEDLINE | ID: mdl-32316162

ABSTRACT

A brain-computer interface (BCI) driven by neural speech decoding, or speech-BCI, is a novel paradigm for exploring communication restoration for locked-in (fully paralyzed but aware) patients. Speech-BCIs aim to map neural signals directly to text or speech, which has the potential for a higher communication rate than current BCIs. Although recent progress has demonstrated the potential of speech-BCIs using either invasive or non-invasive neural signals, most systems developed so far still assume that the onset and offset of each speech utterance within the continuous neural recordings are known. This lack of real-time voice/speech activity detection (VAD) is an obstacle for future applications of neural speech decoding in which BCI users can hold a continuous conversation with other speakers. To address this issue, in this study we attempted to detect voice/speech activity automatically and directly from neural signals recorded using magnetoencephalography (MEG). First, we classified whole segments of pre-speech, speech, and post-speech in the neural signals using a support vector machine (SVM). Second, for continuous prediction, we used a long short-term memory recurrent neural network (LSTM-RNN) to decode the voice activity at each time point via its sequential pattern-learning mechanism. Experimental results demonstrated the possibility of real-time VAD directly from non-invasive neural signals with about 88% accuracy.
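The continuous-prediction idea can be sketched as a per-frame speech/non-speech decision followed by temporal smoothing. A simple majority-vote filter stands in here for the LSTM's sequential modeling; thresholds and the frame-power values are illustrative, not from the study:

```python
# Framewise VAD followed by sliding-window majority smoothing.
import numpy as np

def framewise_vad(power, thresh):
    """Label each frame as speech (1) or non-speech (0) by power threshold."""
    return (power > thresh).astype(int)

def majority_smooth(labels, k=3):
    """Majority vote in a centered sliding window of k frames (k odd)."""
    pad = k // 2
    padded = np.pad(labels, pad, mode="edge")
    windows = np.lib.stride_tricks.sliding_window_view(padded, k)
    return (windows.sum(axis=1) > k // 2).astype(int)

# Synthetic frame powers: silence, a speech burst, silence, plus one glitch.
power = np.array([0.1, 0.1, 0.2, 0.9, 1.0, 1.1, 0.9, 0.1, 0.8, 0.1, 0.1])
raw = framewise_vad(power, thresh=0.5)
smooth = majority_smooth(raw, k=3)
print(raw.tolist())
print(smooth.tolist())   # the isolated glitch at index 8 is removed
```

An LSTM learns this kind of temporal regularity from data instead of hard-coding it, which is why it suits continuous, streaming prediction.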


Subject(s)
Magnetoencephalography/methods, Signal Processing, Computer-Assisted, Speech/physiology, Adult, Algorithms, Electrocardiography, Electrooculography, Female, Humans, Male, Middle Aged, Neural Networks, Computer, Nontherapeutic Human Experimentation, Support Vector Machine, Voice
8.
Brain Inform (2018) ; 11309: 163-172, 2019 Dec.
Article in English | MEDLINE | ID: mdl-31768504

ABSTRACT

Advancing knowledge of neural speech mechanisms is critical for developing next-generation, faster brain-computer interfaces to assist in speech communication for patients with severe neurological conditions (e.g., locked-in syndrome). Among current neuroimaging techniques, magnetoencephalography (MEG) provides a direct representation of the large-scale neural dynamics of the underlying cognitive processes owing to its excellent spatiotemporal resolution. However, MEG-measured neural signals are small in magnitude compared with the background noise, and hence MEG usually suffers from a low signal-to-noise ratio (SNR) at the single-trial level. To overcome this limitation, it is common to record many trials of the same event or task and use the time-locked average signal for analysis, which can be very time consuming. In this study, we investigated the number of MEG recording trials required for speech decoding using a machine learning algorithm. We used a wavelet filter to generate denoised neural features to train an artificial neural network (ANN) for speech decoding. We found that wavelet-based denoising increased the SNR of the neural signal prior to analysis and enabled accurate speech decoding using as few as 40 single trials. This study may open up the possibility of limiting the number of MEG trials in other task-evoked studies as well.
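The motivation for time-locked averaging is easy to demonstrate: averaging N trials of the same evoked response suppresses independent noise roughly by a factor of sqrt(N). The signal shape, noise level, and trial counts below are illustrative, not from the study:

```python
# SNR improvement from averaging repeated trials of an evoked response.
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0, 1, 500)
evoked = 0.5 * np.sin(2 * np.pi * 8 * t)      # weak "evoked" response

def avg_snr(n_trials, noise_sd=2.0):
    """SNR of the average of n_trials noisy copies of the evoked signal."""
    trials = evoked + rng.normal(0, noise_sd, (n_trials, t.size))
    avg = trials.mean(axis=0)
    residual = avg - evoked                   # what noise remains after averaging
    return evoked.std() / residual.std()

snr_1 = avg_snr(1)
snr_40 = avg_snr(40)
print(snr_40 > snr_1)    # prints True: 40-trial averaging raises SNR ~sqrt(40)-fold
```

The study's point is that good denoising can reach usable SNR with far fewer trials than naive averaging would require.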

9.
Annu Int Conf IEEE Eng Med Biol Soc ; 2019: 5531-5535, 2019 Jul.
Article in English | MEDLINE | ID: mdl-31947107

ABSTRACT

Decoding speech directly from the brain has the potential to enable the development of next-generation, more efficient brain-computer interfaces (BCIs) to assist in the communication of patients with locked-in syndrome (fully paralyzed but aware). In this study, we explored the spectral and temporal features of magnetoencephalography (MEG) signals and trained convolutional neural networks (CNNs) on those features to classify neural signals corresponding to phrases. Experimental results demonstrated the effectiveness of CNNs in decoding speech during perception, imagination, and production tasks. Furthermore, to overcome the long training times of CNNs, we leveraged principal component analysis (PCA) for spatial dimension reduction of the MEG data and transfer learning for model initialization. Both PCA and transfer learning proved highly beneficial for faster model training. The best configuration (50 principal components + transfer learning) trained more than 10 times faster than the original setting while maintaining a similarly high speech decoding accuracy.
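The PCA step can be sketched via the SVD: center the sensor data and project onto the top principal components before classifier training. The 204-sensor and 120-trial dimensions below are illustrative placeholders (the study kept 50 components):

```python
# PCA-based spatial dimension reduction via the SVD.
import numpy as np

def pca_reduce(X, n_components):
    """X: (n_trials, n_sensors). Returns scores in the top-k PC space."""
    Xc = X - X.mean(axis=0)                   # center each sensor
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T           # project onto the top PCs

rng = np.random.default_rng(6)
X = rng.normal(0, 1, (120, 204))              # synthetic: 120 trials x 204 sensors
Z = pca_reduce(X, n_components=50)
print(Z.shape)                                # prints (120, 50)
```

Shrinking 204 spatial dimensions to 50 both speeds up CNN training (the reported >10x gain) and regularizes the input; transfer learning then warm-starts the network weights instead of training from scratch.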


Subject(s)
Brain-Computer Interfaces, Machine Learning, Magnetoencephalography, Neural Networks, Computer, Humans, Magnetoencephalography/methods, Speech