Bimodal electroencephalography-functional magnetic resonance imaging dataset for inner-speech recognition.
Simistira Liwicki, Foteini; Gupta, Vibha; Saini, Rajkumar; De, Kanjar; Abid, Nosheen; Rakesh, Sumit; Wellington, Scott; Wilson, Holly; Liwicki, Marcus; Eriksson, Johan.
Affiliation
  • Simistira Liwicki F; Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden. foteini.liwicki@ltu.se.
  • Gupta V; Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
  • Saini R; Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
  • De K; Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
  • Abid N; Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
  • Rakesh S; Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
  • Wellington S; University of Bath, Department of Computer Science, Bath, UK.
  • Wilson H; University of Bath, Department of Computer Science, Bath, UK.
  • Liwicki M; Luleå University of Technology, Department of Computer Science, Electrical and Space Engineering, Embedded Intelligent Systems LAB, Luleå, Sweden.
  • Eriksson J; Umeå University, Department of Integrative Medical Biology (IMB) and Umeå Center for Functional Brain Imaging (UFBI), Umeå, Sweden.
Sci Data; 10(1): 378, 2023 Jun 13.
Article in English | MEDLINE | ID: mdl-37311807
ABSTRACT
The recognition of inner speech, which could give a 'voice' to patients who are unable to speak or move, is a challenge for brain-computer interfaces (BCIs). A shortcoming of the available datasets is that they do not combine modalities to increase the performance of inner-speech recognition. Multimodal brain datasets enable the fusion of neuroimaging modalities with complementary properties, such as the high spatial resolution of functional magnetic resonance imaging (fMRI) and the high temporal resolution of electroencephalography (EEG), and are therefore promising for decoding inner speech. This paper presents the first publicly available bimodal dataset containing EEG and fMRI data acquired non-simultaneously during inner-speech production. Data were obtained from four healthy, right-handed participants during an inner-speech task with words from either a social or a numerical category. Each of the eight word stimuli was presented in 40 trials, resulting in 320 trials per modality for each participant. The aim of this work is to provide a publicly available bimodal dataset on inner speech, contributing towards speech prostheses.
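For orientation only, the short Python sketch below restates the trial arithmetic reported in the abstract: eight word stimuli, 40 trials each, giving 320 trials per modality per participant. The word labels, category split, and any variable names are placeholders for illustration and are not taken from the dataset itself.

    # Illustrative sketch of the trial counts stated in the abstract.
    # Category names and counts follow the abstract; nothing here reads the actual data.

    participants = 4
    categories = ["social", "numerical"]   # two word categories
    words_per_category = 4                 # placeholder split: 8 words in total
    trials_per_word = 40
    modalities = ["EEG", "fMRI"]           # acquired non-simultaneously

    n_words = words_per_category * len(categories)        # 8 word stimuli
    trials_per_modality = n_words * trials_per_word       # 320 trials per participant, per modality
    total_trials = trials_per_modality * len(modalities) * participants

    print(f"{n_words} words x {trials_per_word} trials = {trials_per_modality} trials "
          f"per modality per participant; {total_trials} trials across the dataset.")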
Subjects

Full text: 1 | Collections: 01-international | Database: MEDLINE | Main subject: Speech / Speech Perception | Limits: Humans | Language: English | Publication year: 2023 | Document type: Article
