Results 1 - 5 of 5
1.
Article in English | MEDLINE | ID: mdl-37639420

ABSTRACT

We show that the task of synthesizing human motion conditioned on a set of key frames can be solved more accurately and effectively if a deep-learning-based interpolator operates in delta mode, using a spherical linear interpolator as the baseline. We empirically demonstrate the strength of our approach on publicly available datasets, achieving state-of-the-art performance. We further generalize these results by showing that the ∆-regime remains viable when the reference is the last known frame (also known as the zero-velocity model). This supports the more general conclusion that operating in a reference frame local to the input frames is more accurate and robust than operating in the global (world) reference frame advocated in previous work. Our code is publicly available at: https://github.com/boreshkinai/delta-interpolator.
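The paper's own implementation lives in the linked repository; the following is only a minimal NumPy sketch of the idea: instead of predicting poses in a global frame, a learned interpolator predicts a small corrective rotation (the delta) that is composed with a SLERP baseline between the key frames. The `predict_delta` stub is a placeholder assumption standing in for the deep network.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    dot = np.dot(q0, q1)
    if dot < 0.0:               # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:            # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def qmul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def predict_delta(baseline_q):
    """Hypothetical stand-in for the learned interpolator: here, no correction."""
    return np.array([1.0, 0.0, 0.0, 0.0])

# Key-frame rotations of one joint at the two ends of the gap to be filled.
q_start = np.array([1.0, 0.0, 0.0, 0.0])
q_end   = np.array([np.cos(np.pi / 8), 0.0, np.sin(np.pi / 8), 0.0])  # 45 deg about y

for t in np.linspace(0.0, 1.0, 5):
    baseline = slerp(q_start, q_end, t)          # SLERP baseline pose
    q = qmul(predict_delta(baseline), baseline)  # network output composed as a delta
    print(round(float(t), 2), np.round(q, 3))
```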

2.
Front Hum Neurosci ; 17: 1124065, 2023.
Article in English | MEDLINE | ID: mdl-37425292

ABSTRACT

Introduction: Speech BCIs aim at reconstructing speech in real time from ongoing cortical activity. An ideal BCI would need to reconstruct the speech audio signal frame by frame on a millisecond timescale, which requires fast computation. In this respect, linear decoders are good candidates and have been widely used in motor BCIs. Yet they have seldom been studied for speech reconstruction, and never for the reconstruction of articulatory movements from intracranial activity. Here, we compared vanilla linear regression, ridge-regularized linear regression, and partial least squares regression for offline decoding of overt speech from cortical activity.

Methods: Two decoding paradigms were investigated: (1) direct decoding of acoustic vocoder features of speech, and (2) indirect decoding of vocoder features through an intermediate articulatory representation chained with a real-time-compatible DNN-based articulatory-to-acoustic synthesizer. The participant's articulatory trajectories were estimated from an electromagnetic articulography dataset using dynamic time warping. The accuracy of the decoders was evaluated by computing correlations between original and reconstructed features.

Results: All linear methods achieved similar performance, well above chance level, albeit without reaching intelligibility. Direct and indirect methods achieved comparable performance, with an advantage for direct decoding.

Discussion: Future work will address the development of an improved neural speech decoder compatible with fast frame-by-frame speech reconstruction from ongoing activity at a millisecond timescale.
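As a rough illustration of such a decoder comparison (not the study's code or data), the sketch below fits vanilla, ridge-regularized, and partial-least-squares regressions with scikit-learn on synthetic stand-ins for neural and vocoder features, and scores them by the mean correlation between original and reconstructed features; all dimensions and hyperparameters are arbitrary assumptions.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins: 2000 frames of 128 neural features -> 25 vocoder features.
X = rng.standard_normal((2000, 128))
W = rng.standard_normal((128, 25)) * 0.1
Y = X @ W + 0.5 * rng.standard_normal((2000, 25))

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.25, random_state=0)

decoders = {
    "vanilla": LinearRegression(),
    "ridge":   Ridge(alpha=10.0),
    "pls":     PLSRegression(n_components=20),
}

def mean_corr(y_true, y_pred):
    """Mean Pearson correlation across output features."""
    corrs = [np.corrcoef(y_true[:, j], y_pred[:, j])[0, 1] for j in range(y_true.shape[1])]
    return float(np.mean(corrs))

for name, model in decoders.items():
    model.fit(X_tr, Y_tr)
    print(name, round(mean_corr(Y_te, np.asarray(model.predict(X_te))), 3))
```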

3.
J Neural Eng; 17(5): 056028, 2020 Oct 15.
Article in English | MEDLINE | ID: mdl-33055383

ABSTRACT

OBJECTIVE: A current challenge for neurotechnologies is to develop speech brain-computer interfaces aiming at restoring communication in people unable to speak. To achieve a proof of concept of such a system, neural activity can be recorded while patients implanted for clinical reasons speak. Using such simultaneously recorded audio and neural data, decoders can be built to predict speech features from features extracted from brain signals. A typical neural feature is the spectral power of field potentials in the high-gamma frequency band, which overlaps the frequency range of speech acoustic signals, in particular the fundamental frequency of the voice. Here, we analyzed human electrocorticographic and intracortical recordings during speech production and perception, as well as a rat microelectrocorticographic recording during sound perception. We observed that several datasets, recorded with different setups, contained spectrotemporal features highly correlated with those of the sound produced by or delivered to the participants, especially within the high-gamma band and above, strongly suggesting a contamination of the electrophysiological recordings by the sound signal. This study investigated the presence of such acoustic contamination and its possible source.

APPROACH: We developed analysis methods and a statistical criterion to objectively assess the presence or absence of contamination-specific correlations, which we used to screen several datasets from five centers worldwide.

MAIN RESULTS: Several, though not all, datasets recorded under a variety of conditions showed significant evidence of acoustic contamination; three out of five centers were affected by the phenomenon. In a recording showing high contamination, the use of high-gamma band features dramatically boosted the performance of linear decoding of acoustic speech features, whereas the improvement was very limited for another recording showing no significant contamination. Further analysis and in vitro replication suggest that the contamination is caused by the mechanical action of the sound waves on the cables and connectors along the recording chain, transforming sound vibrations into undesired electrical noise that affects the biopotential measurements.

SIGNIFICANCE: Although this study does not per se question the presence of speech-relevant physiological information in the high-gamma range and above (multiunit activity), it highlights the need to check for and rule out acoustic contamination of neural signals before investigating the cortical dynamics of these processes. To this end, we make available a toolbox implementing the proposed statistical approach to quickly assess the extent of contamination in an electrophysiological recording (https://doi.org/10.5281/zenodo.3929296).


Subjects
Speech Perception; Speech; Acoustic Stimulation; Acoustics; Animals; Brain; Humans; Noise; Rats
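The authors' statistical criterion is implemented in the toolbox linked above; the sketch below only illustrates the general idea on synthetic signals: correlate the spectrogram time courses of the audio and of a neural channel bin by bin, then compare the largest observed correlation with a chance distribution obtained by circularly time-shifting one spectrogram. The sampling rate, window parameters, and permutation scheme are assumptions, not the published method.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 2000.0                              # common sampling rate (assumption)
t = np.arange(0, 30, 1 / fs)
audio = np.random.default_rng(1).standard_normal(t.size)
neural = 0.05 * audio + np.random.default_rng(2).standard_normal(t.size)  # "leaky" channel

f, _, S_audio = spectrogram(audio, fs=fs, nperseg=256, noverlap=128)
_, _, S_neural = spectrogram(neural, fs=fs, nperseg=256, noverlap=128)

def bin_correlations(A, B):
    """Correlation between audio and neural power time courses, per frequency bin."""
    return np.array([np.corrcoef(A[i], B[i])[0, 1] for i in range(A.shape[0])])

observed = bin_correlations(S_audio, S_neural)

# Chance level: circularly shift the neural spectrogram in time and recompute.
rng = np.random.default_rng(3)
null = np.array([
    bin_correlations(S_audio,
                     np.roll(S_neural, rng.integers(10, S_neural.shape[1] - 10), axis=1))
    for _ in range(200)
])
threshold = np.percentile(null.max(axis=1), 95)   # corrected across frequency bins

print("max observed correlation:", round(float(observed.max()), 3))
print("95th percentile of null: ", round(float(threshold), 3))
print("contamination suspected: ", bool(observed.max() > threshold))
```
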
4.
PLoS Comput Biol ; 12(11): e1005119, 2016 Nov.
Article in English | MEDLINE | ID: mdl-27880768

ABSTRACT

Restoring natural speech in paralyzed and aphasic people could be achieved using a brain-computer interface (BCI) controlling a speech synthesizer in real time. A prerequisite for reaching this goal is a speech synthesizer that produces intelligible speech in real time with a reasonable number of control parameters. We present here an articulatory-based speech synthesizer that can be controlled in real time for future BCI applications. This synthesizer converts movements of the main speech articulators (tongue, jaw, velum, and lips) into intelligible speech. The articulatory-to-acoustic mapping is performed using a deep neural network (DNN) trained on electromagnetic articulography (EMA) data recorded from a reference speaker synchronously with the produced speech signal. The DNN is then used in both offline and online modes to map the positions of sensors glued to the different speech articulators into acoustic parameters, which are further converted into an audio signal using a vocoder. In offline mode, highly intelligible speech could be obtained, as assessed by a perceptual evaluation performed by 12 listeners. Then, to anticipate future BCI applications, we further assessed real-time control of the synthesizer by both the reference speaker and new speakers in a closed-loop paradigm using EMA data recorded in real time. A short calibration period was used to compensate for differences in sensor positions and articulatory differences between the new speakers and the reference speaker. We found that real-time synthesis of vowels and consonants was possible with good intelligibility. In conclusion, these results pave the way for future speech BCI applications using such an articulatory-based speech synthesizer.


Subjects
Biofeedback, Psychology/methods; Brain-Computer Interfaces; Communication Aids for Disabled; Neural Networks, Computer; Sound Spectrography/methods; Speech Production Measurement/methods; Biofeedback, Psychology/instrumentation; Computer Systems; Humans; Phonetics; Sound Spectrography/instrumentation; Speech Acoustics; Speech Intelligibility; Speech Production Measurement/instrumentation
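As a toy illustration of such an articulatory-to-acoustic mapping (not the authors' trained model), the PyTorch sketch below fits a small feed-forward network from EMA-like frames to acoustic parameters; the feature dimensions, layer sizes, and synthetic data are assumptions, and a real system would pass the predicted parameters to a vocoder to produce audio.

```python
import torch
from torch import nn

# Assumed dimensions: 18 EMA coordinates (e.g., 9 sensors x 2D) -> 25 vocoder parameters.
N_EMA, N_ACOUSTIC = 18, 25

model = nn.Sequential(
    nn.Linear(N_EMA, 256), nn.ReLU(),
    nn.Linear(256, 256), nn.ReLU(),
    nn.Linear(256, N_ACOUSTIC),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic stand-ins for time-aligned EMA frames and vocoder features.
ema = torch.randn(4096, N_EMA)
acoustic = torch.randn(4096, N_ACOUSTIC)

for epoch in range(5):
    for i in range(0, ema.size(0), 256):           # mini-batches of frames
        x, y = ema[i:i + 256], acoustic[i:i + 256]
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.3f}")

# At run time, each incoming EMA frame would be mapped to acoustic parameters
# and handed to a vocoder to synthesize the audio waveform.
with torch.no_grad():
    acoustic_params = model(torch.randn(1, N_EMA))
```
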
5.
J Physiol Paris; 110(4 Pt A): 392-401, 2016 Nov.
Article in English | MEDLINE | ID: mdl-28756027

ABSTRACT

Restoring communication in cases of aphasia is a key challenge for neurotechnologies. To this end, brain-computer strategies can be envisioned that allow artificial speech synthesis from the continuous decoding of neural signals underlying speech imagination. Such speech brain-computer interfaces do not yet exist, and their design involves three key choices: the choice of appropriate brain regions from which to record neural activity, the choice of an appropriate recording technique, and the choice of a neural decoding scheme combined with an appropriate speech synthesis method. These key considerations are discussed here in light of (1) the current understanding of the functional neuroanatomy of the cortical areas underlying overt and covert speech production, (2) the available literature using a variety of brain recording techniques to characterize and address the challenge of decoding cortical speech signals, and (3) the different speech synthesis approaches that can be considered depending on the level of speech representation (phonetic, acoustic, or articulatory) envisioned to be decoded at the core of a speech BCI paradigm.


Subjects
Brain-Computer Interfaces; Speech/physiology; Humans