ABSTRACT
BACKGROUND: Thanks to its unrivalled spatial and temporal resolution and signal-to-noise ratio, intracranial EEG (iEEG) is becoming a valuable tool in neuroscience research. To attribute functional properties to cortical tissue, it is paramount to determine precisely the location of each electrode with respect to a patient's brain anatomy. Several software packages or pipelines make it possible to localize iEEG electrodes manually or semi-automatically. However, their reliability and ease of use may leave something to be desired. NEW METHOD: Voxeloc (voxel electrode locator) is a Matlab-based graphical user interface for localizing and visualizing stereo-EEG electrodes. Voxeloc adopts a semi-automated approach to determine the coordinates of each electrode contact: the user only needs to indicate the deepest contact of each electrode shaft and a second, more proximal point. RESULTS: With deliberately streamlined functionality and an intuitive graphical user interface, the main advantages of Voxeloc are ease of use and inter-user reliability. Additionally, oblique slices along the shaft of each electrode can be generated to facilitate the precise localization of each contact. Voxeloc is open-source software and is compatible with the open iEEG-BIDS (Brain Imaging Data Structure) format. COMPARISON WITH EXISTING METHODS: Localizing patients' full iEEG implants was faster with Voxeloc than with two comparable software packages, and inter-user agreement was better. CONCLUSIONS: Voxeloc offers an easy-to-use and reliable tool for localizing and visualizing stereo-EEG electrodes. This will contribute to democratizing neuroscience research using iEEG.
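The two-point approach described in the abstract implies a simple geometric step: given the deepest contact and a second point further up the shaft, the remaining contact centers can be placed at regular intervals along the line between them. The sketch below is a minimal illustration of that idea, not Voxeloc's actual implementation; the function name, arguments, and the assumption of a perfectly straight shaft with uniform contact spacing are all hypothetical.

```python
import math

def interpolate_contacts(tip, proximal, n_contacts, spacing_mm):
    """Place contact centers along a straight electrode shaft.

    tip: (x, y, z) of the deepest contact, in mm.
    proximal: any second point further up the same shaft, in mm.
    n_contacts: number of contacts on the electrode.
    spacing_mm: center-to-center distance between contacts.
    """
    # Direction from the deepest contact toward the entry point.
    d = [p - t for p, t in zip(proximal, tip)]
    norm = math.sqrt(sum(x * x for x in d))
    unit = [x / norm for x in d]
    # Contact i sits i * spacing_mm from the tip along the shaft.
    return [tuple(t + u * i * spacing_mm for t, u in zip(tip, unit))
            for i in range(n_contacts)]

contacts = interpolate_contacts((10.0, 0.0, 0.0), (30.0, 0.0, 0.0), 4, 3.5)
print(contacts)  # contacts at 10.0, 13.5, 17.0, 20.5 mm along x
```

In practice, the oblique slices mentioned in the abstract would then let the user verify and refine each interpolated position against the imaging data.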
Subject(s)
Software , User-Computer Interface , Humans , Electrodes, Implanted , Electroencephalography/methods , Electroencephalography/instrumentation , Brain/physiology , Brain/diagnostic imaging , Electrocorticography/methods , Electrocorticography/instrumentation , Reproducibility of Results
ABSTRACT
Departing from traditional linguistic models, advances in deep learning have resulted in a new type of predictive (autoregressive) deep language models (DLMs). Using a self-supervised next-word prediction task, these models generate appropriate linguistic responses in a given context. In the current study, nine participants listened to a 30-min podcast while their brain responses were recorded using electrocorticography (ECoG). We provide empirical evidence that the human brain and autoregressive DLMs share three fundamental computational principles as they process the same natural narrative: (1) both are engaged in continuous next-word prediction before word onset; (2) both match their pre-onset predictions to the incoming word to calculate post-onset surprise; (3) both rely on contextual embeddings to represent words in natural contexts. Together, our findings suggest that autoregressive DLMs provide a new and biologically feasible computational framework for studying the neural basis of language.
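The second computational principle above, matching a pre-onset prediction to the incoming word to compute post-onset surprise, is commonly quantified as surprisal: the negative log probability the model assigned to the word that actually arrived. The toy probability distribution and function below are illustrative assumptions, not the study's actual model or code.

```python
import math

def surprisal(pred_probs, actual_word):
    """Post-onset surprise: negative log2 probability the model
    assigned, before word onset, to the word that actually arrived."""
    p = pred_probs.get(actual_word, 1e-12)  # floor for words given ~zero probability
    return -math.log2(p)

# Toy pre-onset prediction for the next word after "the cat sat on the ..."
pred = {"mat": 0.5, "floor": 0.25, "roof": 0.125, "moon": 0.125}
print(surprisal(pred, "mat"))   # 1.0 bit: expected word, low surprise
print(surprisal(pred, "moon"))  # 3.0 bits: unexpected word, high surprise
```

In an autoregressive DLM, `pred_probs` would come from the model's softmax over the vocabulary at each word position; the study's finding is that neural activity tracks this same pre-onset prediction and post-onset surprise structure.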
Subject(s)
Language , Linguistics , Brain/physiology , Humans
ABSTRACT
OBJECTIVE: The combined spatiotemporal dynamics underlying sign language production remain largely unknown. To investigate these dynamics in comparison with speech production, we used intracranial electrocorticography during a battery of language tasks. METHODS: We report a unique case of direct cortical surface recordings obtained from a neurosurgical patient with intact hearing who is bilingual in English and American Sign Language. We designed a battery of cognitive tasks to capture multiple modalities of language processing and production. RESULTS: We identified two spatially distinct cortical networks: a ventral network for speech production and a dorsal network for sign production. Sign production recruited perirolandic, parietal, and posterior temporal regions, while speech production recruited frontal, perisylvian, and perirolandic regions. Electrical cortical stimulation confirmed this spatial segregation, identifying mouth areas for speech production and limb areas for sign production. The temporal dynamics revealed superior parietal cortex activity immediately before sign production, suggesting its role in planning and producing sign language. CONCLUSIONS: Our findings reveal a distinct network for sign language and detail the temporal propagation supporting sign production.