ABSTRACT
Human language units are hierarchical, and reading acquisition involves integrating multisensory information (typically from the auditory and visual modalities) to access meaning. However, it remains unclear how the brain processes and integrates language information at different linguistic units (words, phrases, and sentences) when it is presented simultaneously in the auditory and visual modalities. To address this issue, we presented participants with sequences of short Chinese sentences through auditory, visual, or combined audio-visual modalities while recording electroencephalographic responses. Using a frequency-tagging approach, we analyzed the neural representations of basic linguistic units (i.e., characters/monosyllabic words) and higher-level linguistic structures (i.e., phrases and sentences) separately for each of the 3 modalities. We found that audio-visual integration occurred at all linguistic levels and that the brain areas involved in the integration varied across levels. In particular, the integration of sentences specifically activated the left prefrontal area. We therefore used continuous theta-burst stimulation to verify that the left prefrontal cortex plays a vital role in the audio-visual integration of sentence-level information. Our findings point to an advantage for bimodal language comprehension at hierarchical stages of language processing and provide evidence for a causal role of the left prefrontal regions in processing audio-visual sentence information.
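Although the abstract does not detail the analysis pipeline, the core idea of a frequency-tagging readout can be illustrated with a short sketch. The Python example below is a minimal illustration only: it assumes hypothetical presentation rates of 1 Hz (sentences), 2 Hz (phrases), and 4 Hz (characters/words), a placeholder EEG array, and an assumed sampling rate; none of these values, variable names, or the SNR baseline are taken from the study itself.

import numpy as np

# Minimal frequency-tagging sketch on placeholder data (assumptions:
# 60 trials of 10 s EEG at fs = 250 Hz; tagged rates 1/2/4 Hz).
fs = 250
rng = np.random.default_rng(0)
eeg = rng.standard_normal((60, fs * 10))  # (n_trials, n_samples), placeholder

# Average over trials first so that only phase-locked (stimulus-driven)
# activity survives, then take the amplitude spectrum of the evoked response.
evoked = eeg.mean(axis=0)
spectrum = np.abs(np.fft.rfft(evoked)) / evoked.size
freqs = np.fft.rfftfreq(evoked.size, d=1 / fs)

# Read out amplitude at each tagged linguistic rate and compare it against
# neighboring frequency bins as a simple noise baseline.
for label, f in [("sentence", 1.0), ("phrase", 2.0), ("word", 4.0)]:
    idx = np.argmin(np.abs(freqs - f))
    neighbors = np.r_[spectrum[idx - 5:idx - 1], spectrum[idx + 2:idx + 6]]
    snr = spectrum[idx] / neighbors.mean()
    print(f"{label} rate {f} Hz: amplitude={spectrum[idx]:.4f}, SNR={snr:.2f}")

With real data, a spectral peak (SNR well above 1) at a given rate indicates that neural activity tracks that linguistic level; comparing peaks across auditory, visual, and audio-visual conditions is one way such integration effects can be quantified.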
Subjects
Brain Mapping, Comprehension, Humans, Comprehension/physiology, Brain/physiology, Linguistics, Electroencephalography

ABSTRACT
The hierarchical nature of language requires the human brain to internally parse connected speech and incrementally construct abstract linguistic structures. Recent research has revealed multiple neural processing timescales underlying grammar-based configuration of linguistic hierarchies. However, little is known about where in the cerebral cortex such temporally scaled neural processes occur. This study used novel magnetoencephalography source-imaging techniques combined with a unique language stimulation paradigm to segregate cortical maps synchronized to 3 levels of linguistic units (i.e., words, phrases, and sentences). Notably, distinct ensembles of cortical loci were identified as tracking structures at different levels. The superior temporal gyrus was involved in processing all 3 linguistic levels, whereas distinct ensembles of other brain regions were recruited to encode each individual level. Neural activity in the right motor cortex followed only the rhythm of monosyllabic words, which have clear acoustic boundaries, whereas the left anterior temporal lobe and the left inferior frontal gyrus were selectively recruited in processing phrases and sentences. Our results ground multi-timescale hierarchical neural processing of speech in neuroanatomical reality, with specific sets of cortices responsible for different levels of linguistic units.
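Cortical "tracking" of a linguistic rate at the source level is often quantified with inter-trial phase coherence (ITPC), although the abstract does not state which measure the authors used. The sketch below is a hedged illustration on simulated data, not a reproduction of the paper's MEG source-imaging pipeline: the source time course, sampling rate, and tagged rates are all assumptions, and a weak 2 Hz component is injected so the phrase-rate ITPC visibly exceeds the others.

import numpy as np

# ITPC sketch on a simulated source time course (assumptions: 60 trials
# of 10 s at fs = 250 Hz, with a weak phase-locked 2 Hz "phrase" rhythm).
fs = 250
n_trials, n_samples = 60, fs * 10
rng = np.random.default_rng(1)
t = np.arange(n_samples) / fs
source = rng.standard_normal((n_trials, n_samples)) + 0.3 * np.sin(2 * np.pi * 2.0 * t)

# Per-trial phase at each frequency bin via the FFT.
phases = np.angle(np.fft.rfft(source, axis=1))
freqs = np.fft.rfftfreq(n_samples, d=1 / fs)

# ITPC is the length of the mean unit phase vector across trials (0 to 1):
# near 0 for random phases, near 1 for perfectly phase-locked activity.
itpc = np.abs(np.exp(1j * phases).mean(axis=0))
for label, f in [("sentence", 1.0), ("phrase", 2.0), ("word", 4.0)]:
    idx = np.argmin(np.abs(freqs - f))
    print(f"{label} rate {f} Hz: ITPC={itpc[idx]:.3f}")

Computing such a measure per cortical source and per tagged rate is one plausible way cortical maps synchronized to word-, phrase-, and sentence-level units could be segregated.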