Results 1 - 10 of 10
1.
J Neuroeng Rehabil; 20(1): 157, 2023 Nov 18.
Article in English | MEDLINE | ID: mdl-37980536

ABSTRACT

Individuals in a locked-in state live with severe whole-body paralysis that limits their ability to communicate with family and loved ones. Recent advances in brain-computer interface (BCI) technology have presented a potential alternative for these people to communicate by detecting neural activity associated with attempted hand or speech movements and translating the decoded intended movements into a control signal for a computer. A technique that could potentially enrich the communication capacity of BCIs is functional electrical stimulation (FES) of the paralyzed limbs and face to restore body and facial movements, allowing body language and facial expression to be added to communication BCI utterances. Here, we review the current state of the art of existing BCI and FES work in people with paralysis of body and face and propose that a combined BCI-FES approach, which has already proved successful in several applications in stroke and spinal cord injury, can provide a promising novel mode of communication for locked-in individuals.


Subjects
Brain-Computer Interfaces, Locked-In Syndrome, Humans, User-Computer Interface, Paralysis, Electric Stimulation, Brain/physiology
2.
J Neural Eng; 20(5), 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37467739

ABSTRACT

Objective. Development of brain-computer interface (BCI) technology is key for enabling communication in individuals who have lost the faculty of speech due to severe motor paralysis. A BCI control strategy that is gaining attention employs speech decoding from neural data. Recent studies have shown that a combination of direct neural recordings and advanced computational models can provide promising results. Understanding which decoding strategies deliver the best and most directly applicable results is crucial for advancing the field. Approach. In this paper, we optimized and validated a decoding approach based on speech reconstruction directly from high-density electrocorticography recordings from sensorimotor cortex during a speech production task. Main results. We show that (1) dedicated machine learning optimization of reconstruction models is key to achieving the best reconstruction performance; (2) individual word decoding in reconstructed speech achieves 92%-100% accuracy (chance level is 8%); (3) direct reconstruction from sensorimotor brain activity produces intelligible speech. Significance. These results underline the need for model optimization in achieving the best speech decoding results and highlight the potential that reconstruction-based speech decoding from sensorimotor cortex offers for the development of next-generation BCI technology for communication.
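The reported chance level can be sanity-checked against vocabulary size. A minimal sketch, assuming a 12-word test vocabulary (an assumption on our part; the abstract only states the 8% chance level, which is consistent with 1/12):

```python
# Minimal sketch: relating a classification chance level to vocabulary size.
# The 12-word vocabulary below is an assumption; the abstract only reports an
# 8% chance level, which matches 1/12 ~ 8.3% to the stated precision.
def chance_accuracy(n_classes: int) -> float:
    """Expected accuracy of uniform random guessing over n_classes."""
    return 1.0 / n_classes

print(f"{chance_accuracy(12):.1%}")  # 8.3%
```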


Subjects
Brain-Computer Interfaces, Deep Learning, Sensorimotor Cortex, Humans, Speech, Communication, Electrocorticography/methods
3.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 802-806, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085697

ABSTRACT

Completely locked-in patients suffer from paralysis affecting every muscle in their body, leaving brain-computer interfaces (BCIs) as their only means of communication. State-of-the-art BCIs have a slow spelling rate, which inevitably places a burden on patients' quality of life. Novel techniques address this problem by following a bio-mimetic approach, which consists of decoding the sensorimotor cortex (SMC) activity that underlies the movements of the vocal tract's articulators. As recording articulatory data in combination with neural recordings is often unfeasible, the goal of this study was to develop an acoustic-to-articulatory inversion (AAI) model, i.e., an algorithm that generates articulatory data (speech gestures) from acoustics. A fully convolutional neural network was trained to solve the AAI mapping and was tested on an unseen acoustic set recorded simultaneously with neural data. Representational similarity analysis was then used to assess the relationship between predicted gestures and neural responses. The network's predictions and targets were significantly correlated. Moreover, SMC neural activity was correlated with the vocal tract's gestural dynamics. The present AAI model has the potential to further our understanding of the relationship between neural, gestural and acoustic signals and to lay the foundations for the development of a bio-mimetic speech BCI. Clinical Relevance: This study investigates the relationship between articulatory gestures during speech and the underlying neural activity. The topic is central to the development of brain-computer interfaces for severely paralysed individuals.
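The representational similarity analysis mentioned above can be sketched in a few lines: build a representational dissimilarity matrix (RDM) for each set of item representations and correlate their upper triangles. This is an illustrative toy, not the study's code; the variable names and synthetic data are assumptions:

```python
import numpy as np

def rdm(X: np.ndarray) -> np.ndarray:
    """Pairwise dissimilarity (1 - Pearson r) between rows of X."""
    return 1.0 - np.corrcoef(X)

def rsa_score(A: np.ndarray, B: np.ndarray) -> float:
    """Pearson correlation between the upper triangles of the two RDMs."""
    iu = np.triu_indices(A.shape[0], k=1)
    return float(np.corrcoef(rdm(A)[iu], rdm(B)[iu])[0, 1])

rng = np.random.default_rng(0)
gestures = rng.standard_normal((8, 20))  # 8 items x 20 gesture features (toy data)
# Toy "neural" data: the gesture representation plus low-amplitude noise features,
# so the two RDMs share structure and the RSA score comes out high and positive.
neural = np.hstack([gestures, 0.01 * rng.standard_normal((8, 30))])
score = rsa_score(gestures, neural)
print(score)
```

In practice the two inputs would be the network's predicted gestures and the neural responses for matched items, and the triangle correlation is often computed with Spearman rather than Pearson.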


Subjects
Gestures, Speech, Acoustics, Chromosome Inversion, Humans, Language, Paralysis, Quality of Life
4.
Annu Int Conf IEEE Eng Med Biol Soc; 2022: 3100-3104, 2022 Jul.
Article in English | MEDLINE | ID: mdl-36085779

ABSTRACT

Speech decoding from brain activity can enable the development of brain-computer interfaces (BCIs) to restore naturalistic communication in paralyzed patients. Previous work has focused on developing decoding models from isolated speech data with a clean background and multiple repetitions of the material. In this study, we describe a novel approach to speech decoding that relies on a generative adversarial network (GAN) to reconstruct speech from brain data recorded during a naturalistic speech listening task (watching a movie). We compared the GAN-based approach, in which reconstruction was done from a compressed latent representation of sound decoded from the brain, with several baseline models that reconstructed the sound spectrogram directly. We show that the novel approach provides more accurate reconstructions than the baselines. These results underscore the potential of GAN models for speech decoding in naturalistic noisy environments and for further advancing BCIs for naturalistic communication. Clinical Relevance: This study presents a novel speech decoding paradigm that combines advances in deep learning, speech synthesis and neural engineering, and has the potential to advance the field of BCIs for severely paralyzed individuals.


Subjects
Brain-Computer Interfaces, Speech, Brain, Communication, Humans, Neural Networks (Computer)
5.
Sci Data; 9(1): 91, 2022 Mar 21.
Article in English | MEDLINE | ID: mdl-35314718

ABSTRACT

Intracranial human recordings are a valuable and rare source of information about the brain. Making such data publicly available not only helps tackle reproducibility issues in science but also allows more use to be made of these valuable data. This is especially true for data collected using naturalistic tasks. Here, we describe a dataset collected from a large group of human subjects while they watched a short audiovisual film. The dataset has several unique features. First, it includes a large amount of intracranial electroencephalography (iEEG) data (51 participants, age range of 5-55 years, who all performed the same task). Second, it includes functional magnetic resonance imaging (fMRI) recordings (30 participants, age range of 7-47 years) during the same task. Eighteen participants performed both the iEEG and fMRI versions of the task, non-simultaneously. Third, the data were acquired using a rich audiovisual stimulus, for which we provide detailed speech and video annotations. This dataset can be used to study the neural mechanisms of multimodal perception and language comprehension, as well as the similarity of neural signals across brain recording modalities.


Subjects
Electrocorticography, Magnetic Resonance Imaging, Adolescent, Adult, Brain/diagnostic imaging, Brain/physiology, Brain Mapping/methods, Child, Preschool Child, Humans, Middle Aged, Reproducibility of Results, Speech, Young Adult
6.
Neuropsychologia; 158: 107907, 2021 Jul 30.
Article in English | MEDLINE | ID: mdl-34058175

ABSTRACT

The language difficulties of children with Developmental Language Disorder (DLD) have been associated with multiple underlying factors and are still poorly understood. One way of investigating the mechanisms behind DLD language problems is to compare the language-related brain activation patterns of children with DLD to those of a population with similar language difficulties and a uniform etiology. Children with 22q11.2 deletion syndrome (22q11DS) constitute such a population. Here, we conducted an fMRI study in which children (6-10 years old) with DLD and 22q11DS listened to speech alternated with reversed speech. We compared language laterality and language-related brain activation levels with those of typically developing (TD) children who performed the same task. The data revealed no significant differences between groups in language lateralization, but task-related activation levels were lower in children with language impairment than in TD children in several nodes of the language network. We conclude that language impairment in children with DLD and in children with 22q11DS may involve (partially) overlapping cortical areas.


Subjects
DiGeorge Syndrome, Language Development Disorders, Brain/diagnostic imaging, Child, Child Language, DiGeorge Syndrome/complications, DiGeorge Syndrome/diagnostic imaging, Humans, Language Development Disorders/etiology, Speech
7.
Hum Brain Mapp; 41(16): 4587-4609, 2020 Nov.
Article in English | MEDLINE | ID: mdl-32744403

ABSTRACT

Various brain regions are implicated in speech processing, and the specific function of some of them is better understood than that of others. In particular, the involvement of the dorsal precentral cortex (dPCC) in speech perception remains debated, and the function attributed to this region is more or less restricted to motor processing. In this study, we investigated high-density intracranial responses to speech fragments of a feature film, aiming to determine whether the dPCC is engaged in the perception of continuous speech. Our findings show that the dPCC exhibited a preference for speech over the other tested sounds. Moreover, the identified area was involved in tracking auditory properties of speech, including the speech spectral envelope, its rhythmic phrasal pattern and its pitch contour. The dPCC also showed the ability to filter out noise from the perceived speech. Comparing these results to data from motor experiments showed that the identified region had a distinct location in the dPCC, anterior to the hand motor area and superior to the mouth articulator region. The present findings, uncovered with high-density intracranial recordings, help elucidate the functional specialization of the precentral cortex and demonstrate the unique role of its anterior dorsal region in continuous speech perception.


Subjects
Brain Mapping, Electrocorticography, Motor Cortex/physiology, Speech Perception/physiology, Adolescent, Adult, Drug Resistant Epilepsy/physiopathology, Female, Humans, Male, Young Adult
8.
Sci Rep; 10(1): 12077, 2020 Jul 21.
Article in English | MEDLINE | ID: mdl-32694561

ABSTRACT

Research on how the human brain extracts meaning from sensory input relies in principle on methodological reductionism. In the present study, we adopt a more holistic approach by modeling the cortical responses to semantic information extracted from the visual stream of a feature film, employing artificial neural network models. Advances in both computer vision and natural language processing were utilized to extract the semantic representations from the film by combining perceptual and linguistic information. We tested whether these representations were useful in studying human brain data. To this end, we collected electrocorticography responses to a short movie from 37 subjects and fitted their cortical patterns across multiple regions using the semantic components extracted from the film frames. We found that individual semantic components reflected fundamental semantic distinctions in the visual input, such as the presence or absence of people, human movement, landscape scenes, and human faces. Moreover, each semantic component mapped onto a distinct functional cortical network involving high-level cognitive regions in occipitotemporal, frontal and parietal cortices. The present work demonstrates the potential of data-driven methods from information processing fields to explain patterns of cortical responses, and contributes to the overall discussion about the encoding of high-level perceptual information in the human brain.
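The core operation of fitting cortical patterns with stimulus components can be sketched as a linear encoding model. A minimal toy with closed-form ridge regression follows; the shapes, penalty value, and synthetic data are illustrative assumptions, not the study's pipeline:

```python
import numpy as np

# Hedged sketch of an encoding-model fit: regress simulated electrode
# responses onto per-frame stimulus components using ridge regression.
rng = np.random.default_rng(42)
n_samples, n_components, n_electrodes = 200, 10, 5
X = rng.standard_normal((n_samples, n_components))  # semantic components per frame
W_true = rng.standard_normal((n_components, n_electrodes))
Y = X @ W_true + 0.1 * rng.standard_normal((n_samples, n_electrodes))  # responses

lam = 1.0  # ridge penalty (would be cross-validated in a real analysis)
W_hat = np.linalg.solve(X.T @ X + lam * np.eye(n_components), X.T @ Y)

# Evaluate: Pearson r between predicted and observed response per electrode.
pred = X @ W_hat
r = [float(np.corrcoef(pred[:, e], Y[:, e])[0, 1]) for e in range(n_electrodes)]
print([round(v, 3) for v in r])  # high correlations on this low-noise toy data
```

On real neural data the fit would be evaluated on held-out time points, and the per-electrode correlations would then be mapped back onto cortical locations.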


Subjects
Brain Mapping, Cerebral Cortex/physiology, Neural Pathways, Algorithms, Brain Mapping/methods, Cerebral Cortex/diagnostic imaging, Female, Humans, Computer-Assisted Image Processing, Magnetic Resonance Imaging, Male, Neurological Models, Nerve Net, Visual Pattern Recognition, Photic Stimulation, Reproducibility of Results, Semantics
9.
PLoS Comput Biol; 16(7): e1007992, 2020 Jul.
Article in English | MEDLINE | ID: mdl-32614826

ABSTRACT

Understanding how the human brain processes auditory input remains a challenge. Traditionally, a distinction between lower- and higher-level sound features is made, but their definition depends on a specific theoretical framework and might not match the neural representation of sound. Here, we postulate that constructing a data-driven neural model of auditory perception, with a minimum of theoretical assumptions about the relevant sound features, could provide an alternative approach and possibly a better match to the neural responses. We collected electrocorticography recordings from six patients who watched a long-duration feature film. The raw movie soundtrack was used to train an artificial neural network model to predict the associated neural responses. The model achieved high prediction accuracy and generalized well to a second dataset, in which new participants watched a different film. The extracted bottom-up features captured acoustic properties that were specific to the type of sound and were associated with various response latency profiles and distinct cortical distributions. Specifically, several features encoded speech-related acoustic properties, with some features exhibiting shorter latency profiles (associated with responses in posterior perisylvian cortex) and others exhibiting longer latency profiles (associated with responses in anterior perisylvian cortex). Our results support and extend the current view on speech perception by demonstrating the presence of temporal hierarchies in the perisylvian cortex and the involvement of cortical sites outside of this region during audiovisual speech perception.
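The response latency profiles mentioned above are commonly captured by stacking time-lagged copies of a stimulus feature into the model's design matrix. A minimal sketch of that idea, with an assumed helper name and toy stimulus:

```python
import numpy as np

def lagged_design(stimulus: np.ndarray, n_lags: int) -> np.ndarray:
    """Column k holds the stimulus delayed by k samples (zero-padded).

    Illustrative helper for latency-aware encoding models, not the study's code:
    fitting a weight per lag lets the model recover each site's latency profile.
    """
    T = len(stimulus)
    X = np.zeros((T, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:T - lag]
    return X

s = np.arange(1.0, 6.0)       # toy 5-sample stimulus: [1, 2, 3, 4, 5]
X = lagged_design(s, 3)
print(X.shape)  # (5, 3)
```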


Subjects
Auditory Cortex/physiology, Auditory Perception, Neurological Models, Neural Networks (Computer), Sound, Adolescent, Adult, Brain Mapping/methods, Electrocorticography, Female, Humans, Male, Motion Pictures, Phonetics, Computer-Assisted Signal Processing, Speech/physiology, Speech Perception, Time Factors, Young Adult
10.
J Neurosci; 37(33): 7906-7920, 2017 Aug 16.
Article in English | MEDLINE | ID: mdl-28716965

ABSTRACT

Despite a large body of research, we continue to lack a detailed account of how auditory processing of continuous speech unfolds in the human brain. Previous research showed the propagation of low-level acoustic features of speech from the posterior superior temporal gyrus toward the anterior superior temporal gyrus in the human brain (Hullett et al., 2016). In this study, we investigate what happens to these neural representations past the superior temporal gyrus and how they engage higher-level language processing areas such as the inferior frontal gyrus. We used low-level sound features to model neural responses to speech outside of the primary auditory cortex. Two complementary imaging techniques were used with human participants (both males and females): electrocorticography (ECoG) and fMRI. Both imaging techniques showed tuning of the perisylvian cortex to low-level speech features. With ECoG, we found evidence of propagation of the temporal features of speech sounds along the ventral pathway of language processing in the brain toward the inferior frontal gyrus. Increasingly coarse temporal features of speech spreading from the posterior superior temporal cortex toward the inferior frontal gyrus were associated with linguistic features such as voice onset time, duration of the formant transitions, and phoneme, syllable, and word boundaries. The present findings provide the groundwork for a comprehensive bottom-up account of speech comprehension in the human brain.

SIGNIFICANCE STATEMENT: We know that, during natural speech comprehension, a broad network of perisylvian cortical regions is involved in sound and language processing. Here, we investigated the tuning to low-level sound features within these regions using neural responses to a short feature film. We also looked at whether the tuning organization along these brain regions showed any parallel to the hierarchy of language structures in continuous speech. Our results show that low-level speech features propagate throughout the perisylvian cortex and potentially contribute to the emergence of "coarse" speech representations in the inferior frontal gyrus typically associated with high-level language processing. These findings add to previous work on auditory processing and underline the distinctive role of the inferior frontal gyrus in natural speech comprehension.


Subjects
Acoustic Stimulation/methods, Auditory Cortex/physiology, Brain Mapping/methods, Nerve Net/physiology, Phonetics, Speech Perception/physiology, Adolescent, Adult, Electrocorticography/methods, Implanted Electrodes, Female, Humans, Magnetic Resonance Imaging/methods, Male, Photic Stimulation/methods, Speech/physiology, Young Adult