Results 1 - 20 of 87
1.
Ann Neurol; 93(1): 131-141, 2023 Jan.
Article in English | MEDLINE | ID: mdl-36222470

ABSTRACT

OBJECTIVE: Little is known about residual cognitive function in the earliest stages of serious brain injury. Functional neuroimaging has yielded valuable diagnostic and prognostic information in chronic disorders of consciousness, such as the vegetative state (also termed unresponsive wakefulness syndrome). The objective of the current study was to determine if functional neuroimaging could be efficacious in the assessment of cognitive function in acute disorders of consciousness, such as coma, where decisions about the withdrawal of life-sustaining therapies are often made.
METHODS: A hierarchical functional magnetic resonance imaging (fMRI) approach assessed sound perception, speech perception, language comprehension, and covert command following in 17 critically ill patients admitted to the intensive care unit (ICU).
RESULTS: Preserved auditory function was observed in 15 patients (88%), whereas 5 (29%) also had preserved higher-order language comprehension. Notably, one patient could willfully modulate his brain activity when instructed to do so, suggesting a level of covert conscious awareness that was entirely inconsistent with his clinical diagnosis at the time of the scan. Across patients, a positive relationship was also observed between fMRI responsivity and the level of functional recovery, such that patients with the greatest functional recovery had neural responses most similar to those observed in healthy control participants.
INTERPRETATION: These results suggest that fMRI may provide important diagnostic and prognostic information beyond standard clinical assessment in acutely unresponsive patients, which may aid discussions surrounding the continuation or removal of life-sustaining therapies during the early post-injury period. ANN NEUROL 2023;93:131-141.


Subjects
Brain Injuries, Consciousness Disorders, Humans, Consciousness Disorders/diagnosis, Critical Illness, Brain/diagnostic imaging, Brain Injuries/diagnostic imaging, Persistent Vegetative State/diagnostic imaging, Magnetic Resonance Imaging/methods, Functional Neuroimaging, Neuroimaging
2.
Cereb Cortex; 33(7): 3350-3371, 2023 Mar 21.
Article in English | MEDLINE | ID: mdl-35989307

ABSTRACT

Sensory deprivation can lead to cross-modal cortical changes, whereby sensory brain regions deprived of input may be recruited to perform atypical function. Enhanced cross-modal responses to visual stimuli observed in auditory cortex of postlingually deaf cochlear implant (CI) users are hypothesized to reflect increased activation of cortical language regions, but it is unclear whether this cross-modal activity is "adaptive" or "maladaptive" for speech understanding. To determine if increased activation of language regions is correlated with better speech understanding in CI users, we assessed task-related activation and functional connectivity of auditory and visual cortices to auditory and visual speech and non-speech stimuli in CI users (n = 14) and normal-hearing listeners (n = 17) and used functional near-infrared spectroscopy to measure hemodynamic responses. We used visually presented speech and non-speech to investigate neural processes related to linguistic content and observed that CI users show beneficial cross-modal effects. Specifically, an increase in connectivity between the left auditory and visual cortices (presumed primary sites of cortical language processing) was positively correlated with CI users' abilities to understand speech in background noise. Cross-modal activity in auditory cortex of postlingually deaf CI users may reflect adaptive activity of a distributed, multimodal speech network, recruited to enhance speech understanding.


Subjects
Auditory Cortex, Cochlear Implantation, Cochlear Implants, Deafness, Speech Perception, Humans, Auditory Cortex/physiology, Speech Perception/physiology
3.
J Neurosci; 42(3): 435-442, 2022 Jan 19.
Article in English | MEDLINE | ID: mdl-34815317

ABSTRACT

In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and in several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS.

SIGNIFICANCE STATEMENT: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., background noise). Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism relying on synchronized brain activity among sensory and motor regions may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.


Subjects
Auditory Cortex/physiology, Language, Lipreading, Nerve Net/physiology, Speech Perception/physiology, Visual Cortex/physiology, Visual Perception/physiology, Adult, Aged, Aged, 80 and over, Auditory Cortex/diagnostic imaging, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Nerve Net/diagnostic imaging, Visual Cortex/diagnostic imaging, Young Adult
4.
J Neurolinguistics; 68, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37637379

ABSTRACT

Although researchers often rely on group-level fMRI results to draw conclusions about the neurobiology of language, doing so without accounting for the complexities of individual brains may reduce the validity of our findings. Furthermore, understanding brain organization in individuals is critically important for both basic science and clinical translation. To assess the state of single-subject language localization in the functional neuroimaging literature, we carried out a systematic review of studies published through April 2020. Out of 977 papers identified through our search, 121 met our inclusion criteria for reporting single-subject fMRI results (fMRI studies of language in adults that report task-based single-subject statistics). Of these, 20 papers reported using a single-subject test-retest analysis to assess reliability. Thus, we found that a relatively modest number of papers reporting single-subject results quantified single-subject reliability. These varied substantially in acquisition parameters, task design, and reliability measures, creating significant challenges for making comparisons across studies. Future endeavors to optimize the localization of language networks in individuals will benefit from the standardization and broader reporting of reliability metrics for different tasks and acquisition parameters.

5.
J Acoust Soc Am; 154(6): 3973-3985, 2023 Dec 1.
Article in English | MEDLINE | ID: mdl-38149818

ABSTRACT

Face masks offer essential protection but also interfere with speech communication. Here, audio-only sentences spoken through four types of masks were presented in noise to young adult listeners. Pupil dilation (an index of cognitive demand), intelligibility, and subjective effort and performance ratings were collected. Dilation increased in response to each mask relative to the no-mask condition and differed significantly where acoustic attenuation was most prominent. These results suggest that the acoustic impact of the mask drives not only the intelligibility of speech, but also the cognitive demands of listening. Subjective effort ratings reflected the same trends as the pupil data.


Subjects
Masks, Speech Perception, Young Adult, Humans, Speech Intelligibility/physiology, Noise/adverse effects, Pupil/physiology, Cognition, Speech Perception/physiology
6.
J Acoust Soc Am; 152(6): 3216, 2022 Dec.
Article in English | MEDLINE | ID: mdl-36586857

ABSTRACT

Although it is clear that sighted listeners use both auditory and visual cues during speech perception, the manner in which multisensory information is combined is a matter of debate. One approach to measuring multisensory integration is to use variants of the McGurk illusion, in which discrepant auditory and visual cues produce auditory percepts that differ from those based on unimodal input. Not all listeners show the same degree of susceptibility to the McGurk illusion, and these individual differences are frequently used as a measure of audiovisual integration ability. However, despite their popularity, we join the voices of others in the field to argue that McGurk tasks are ill-suited for studying real-life multisensory speech perception: McGurk stimuli are often based on isolated syllables (which are rare in conversations) and necessarily rely on audiovisual incongruence that does not occur naturally. Furthermore, recent data show that susceptibility to McGurk tasks does not correlate with performance during natural audiovisual speech perception. Although the McGurk effect is a fascinating illusion, truly understanding the combined use of auditory and visual information during speech perception requires tasks that more closely resemble everyday communication: namely, words, sentences, and narratives with congruent auditory and visual speech cues.


Subjects
Illusions, Speech Perception, Humans, Visual Perception, Language, Speech, Auditory Perception, Photic Stimulation, Acoustic Stimulation
7.
J Neurolinguistics; 55, 2020 Aug.
Article in English | MEDLINE | ID: mdl-32226224

ABSTRACT

The meanings of most open class words are suffused with sensory and affective features. A word such as beach, for example, evokes polymodal associations ranging from gritty sand (tactile) and crashing waves (auditory) to the distinctive smell of sunscreen (olfactory). Aristotle argued for a hierarchy of the senses where vision and audition eclipse the lesser modalities of odor, taste, and touch. A direct test of Aristotle's premise was recently made possible with the establishment of the Lancaster Sensorimotor Norms (2019), a crowdsourced database cataloging sensorimotor salience for nearly 40,000 English words. Neurosynth, a meta-analytic database of functional magnetic resonance imaging studies, can potentially confirm whether Aristotle's sensory hierarchy is reflected in functional activation within the human brain. We correlated sensory salience of English words as assessed by subjective ratings of vision, audition, olfaction, touch, and gustation (Lancaster Ratings) with volumes of cortical activation for each of these respective sensory modalities (Neurosynth). English word ratings reflected the following sensory hierarchy: vision > audition > haptic > olfaction ≈ gustation. This linguistic hierarchy correlated almost perfectly with voxel counts of the functional activation maps for each sensory modality (Pearson r = .99). These findings are grossly consistent with Aristotle's hierarchy of the senses. We discuss implications and counterevidence from other natural languages.
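The correlation this abstract reports is a plain Pearson product-moment correlation between five per-modality values. As a sketch of that computation, the snippet below uses invented modality numbers (the study's actual rating and voxel-count data are not reproduced here); only the formula itself is standard.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation, computed from first principles."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values for illustration only: mean sensory-salience rating and
# activation-map voxel count per modality, ordered
# vision > audition > haptic > olfaction ≈ gustation.
rating = [3.9, 3.1, 2.4, 1.8, 1.7]
voxels = [52000, 30000, 17000, 6000, 5000]

print(round(pearson_r(rating, voxels), 2))  # 0.99
```

With values ordered the same way on both variables, r approaches 1, which is the pattern the abstract describes.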

8.
Behav Res Methods; 52(4): 1795-1799, 2020 Aug.
Article in English | MEDLINE | ID: mdl-31993960

ABSTRACT

In everyday language processing, sentence context affects how readers and listeners process upcoming words. In experimental situations, it can be useful to identify words that are predicted to greater or lesser degrees by the preceding context. Here we report completion norms for 3085 English sentences, collected online using a written cloze procedure in which participants were asked to provide their best guess for the word completing a sentence. Sentences varied between eight and ten words in length. At least 100 unique participants contributed to each sentence. All responses were reviewed by human raters to mitigate the influence of misspellings and typographical errors. The responses provide a range of predictability values for 13,438 unique target words, 6790 of which appear in more than one sentence context. We also provide entropy values based on the relative predictability of multiple responses. A searchable set of norms is available at http://sentencenorms.net. Finally, we provide the code used to collate and organize the responses to facilitate additional analyses and future research projects.
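The two measures the norms provide, cloze predictability (the proportion of participants producing each completion) and Shannon entropy over the response distribution, can be sketched in a few lines. The example sentence frame and response counts below are hypothetical, not drawn from the published norms.

```python
from collections import Counter
from math import log2

def cloze_stats(responses):
    """Cloze probability of each completion and Shannon entropy (in bits)
    over the response distribution for one sentence frame."""
    counts = Counter(responses)
    total = sum(counts.values())
    probs = {word: n / total for word, n in counts.items()}
    entropy = -sum(p * log2(p) for p in probs.values())
    return probs, entropy

# Hypothetical responses to a frame like "She spread butter on her ..."
responses = ["toast"] * 80 + ["bread"] * 15 + ["bagel"] * 5
probs, h = cloze_stats(responses)
print(probs["toast"])   # 0.8 (cloze probability of the modal completion)
print(round(h, 2))      # 0.88
```

Highly constraining contexts yield one dominant completion and low entropy; weakly constraining contexts spread probability over many completions and raise the entropy value.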


Subjects
Comprehension, Language, Humans
9.
Ear Hear; 39(2): 204-214, 2018.
Article in English | MEDLINE | ID: mdl-28938250

ABSTRACT

Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. 
The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.


Subjects
Cognition, Speech Perception, Humans, Memory, Short-Term, Neuroimaging
10.
J Neurosci; 36(13): 3829-38, 2016 Mar 30.
Article in English | MEDLINE | ID: mdl-27030767

ABSTRACT

A defining aspect of human cognition is the ability to integrate conceptual information into complex semantic combinations. For example, we can comprehend "plaid" and "jacket" as individual concepts, but we can also effortlessly combine these concepts to form the semantic representation of "plaid jacket." Many neuroanatomic models of semantic memory propose that heteromodal cortical hubs integrate distributed semantic features into coherent representations. However, little work has specifically examined these proposed integrative mechanisms and the causal role of these regions in semantic integration. Here, we test the hypothesis that the angular gyrus (AG) is critical for integrating semantic information by applying high-definition transcranial direct current stimulation (tDCS) to an fMRI-guided region-of-interest in the left AG. We found that anodal stimulation to the left AG modulated semantic integration but had no effect on a letter-string control task. Specifically, anodal stimulation to the left AG resulted in faster comprehension of semantically meaningful combinations like "tiny radish" relative to non-meaningful combinations, such as "fast blueberry," when compared to the effects observed during sham stimulation and stimulation to a right-hemisphere control brain region. Moreover, the size of the effect from brain stimulation correlated with the degree of semantic coherence between the word pairs. These findings demonstrate that the left AG plays a causal role in the integration of lexical-semantic information, and that high-definition tDCS to an associative cortical hub can selectively modulate integrative processes in semantic memory. SIGNIFICANCE STATEMENT: A major goal of neuroscience is to understand the neural basis of behaviors that are fundamental to human intelligence. 
One essential behavior is the ability to integrate conceptual knowledge from semantic memory, allowing us to construct an almost unlimited number of complex concepts from a limited set of basic constituents (e.g., "leaf" and "wet" can be combined into the more complex representation "wet leaf"). Here, we present a novel approach to studying integrative processes in semantic memory by applying focal brain stimulation to a heteromodal cortical hub implicated in semantic processing. Our findings demonstrate a causal role of the left angular gyrus in lexical-semantic integration and provide motivation for novel therapeutic applications in patients with lexical-semantic deficits.


Subjects
Brain Mapping, Parietal Lobe/physiology, Semantics, Transcranial Direct Current Stimulation, Adult, Analysis of Variance, Association Learning/physiology, Concept Formation/physiology, Female, Functional Laterality, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Oxygen/blood, Parietal Lobe/blood supply, Reaction Time/physiology, Vocabulary, Young Adult
11.
J Neurosci; 35(7): 3276-84, 2015 Feb 18.
Article in English | MEDLINE | ID: mdl-25698762

ABSTRACT

Human thought and language rely on the brain's ability to combine conceptual information. This fundamental process supports the construction of complex concepts from basic constituents. For example, both "jacket" and "plaid" can be represented as individual concepts, but they can also be integrated to form the more complex representation "plaid jacket." Although this process is central to the expression and comprehension of language, little is known about its neural basis. Here we present evidence for a neuroanatomic model of conceptual combination from three experiments. We predicted that the highly integrative region of heteromodal association cortex in the angular gyrus would be critical for conceptual combination, given its anatomic connectivity and its strong association with semantic memory in functional neuroimaging studies. Consistent with this hypothesis, we found that the process of combining concepts to form meaningful representations specifically modulates neural activity in the angular gyrus of healthy adults, independent of the modality of the semantic content integrated. We also found that individual differences in the structure of the angular gyrus in healthy adults are related to variability in behavioral performance on the conceptual combination task. Finally, in a group of patients with neurodegenerative disease, we found that the degree of atrophy in the angular gyrus is specifically related to impaired performance on combinatorial processing. These converging anatomic findings are consistent with a critical role for the angular gyrus in conceptual combination.


Subjects
Brain Mapping, Comprehension, Dementia/pathology, Parietal Lobe/blood supply, Parietal Lobe/physiology, Semantics, Adult, Aged, Association Learning, Dementia/physiopathology, Female, Humans, Image Processing, Computer-Assisted, Individuality, Magnetic Resonance Imaging, Male, Middle Aged, Oxygen/blood, Young Adult
12.
J Cogn Neurosci; 28(3): 361-78, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26679216

ABSTRACT

Semantic representations capture the statistics of experience and store this information in memory. A fundamental component of this memory system is knowledge of the visual environment, including knowledge of objects and their associations. Visual semantic information underlies a range of behaviors, from perceptual categorization to cognitive processes such as language and reasoning. Here we examine the neuroanatomic system that encodes visual semantics. Across three experiments, we found converging evidence indicating that knowledge of verbally mediated visual concepts relies on information encoded in a region of the ventral-medial temporal lobe centered on parahippocampal cortex. In an fMRI study, this region was strongly engaged by the processing of concepts relying on visual knowledge but not by concepts relying on other sensory modalities. In a study of patients with the semantic variant of primary progressive aphasia (semantic dementia), atrophy that encompassed this region was associated with a specific impairment in verbally mediated visual semantic knowledge. Finally, in a structural study of healthy adults from the fMRI experiment, gray matter density in this region related to individual variability in the processing of visual concepts. The anatomic location of these findings aligns with recent work linking the ventral-medial temporal lobe with high-level visual representation, contextual associations, and reasoning through imagination. Together, this work suggests a critical role for parahippocampal cortex in linking the visual environment with knowledge systems in the human brain.


Subjects
Association, Brain Mapping/methods, Concept Formation/physiology, Frontotemporal Dementia/physiopathology, Individuality, Temporal Lobe, Visual Perception/physiology, Adult, Aged, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Semantics, Temporal Lobe/anatomy & histology, Temporal Lobe/physiology, Temporal Lobe/physiopathology, Word Association Tests, Young Adult
13.
Exp Aging Res; 42(1): 97-111, 2016.
Article in English | MEDLINE | ID: mdl-26683044

ABSTRACT

BACKGROUND/STUDY CONTEXT: A common goal during speech comprehension is to remember what we have heard. Encoding speech into long-term memory frequently requires processes such as verbal working memory that may also be involved in processing degraded speech. Here the authors tested whether young and older adult listeners' memory for short stories was worse when the stories were acoustically degraded, or whether the additional contextual support provided by a narrative would protect against these effects.
METHODS: The authors tested 30 young adults (aged 18-28 years) and 30 older adults (aged 65-79 years) with good self-reported hearing. Participants heard short stories that were presented as normal (unprocessed) speech or acoustically degraded using a noise vocoding algorithm with 24 or 16 channels. The degraded stories were still fully intelligible. Following each story, participants were asked to repeat the story in as much detail as possible. Recall was scored using a modified idea unit scoring approach, which included separately scoring hierarchical levels of narrative detail.
RESULTS: Memory for acoustically degraded stories was significantly worse than for normal stories at some levels of narrative detail. Older adults' memory for the stories was significantly worse overall, but there was no interaction between age and acoustic clarity or level of narrative detail. Verbal working memory (assessed by reading span) significantly correlated with recall accuracy for both young and older adults, whereas hearing ability (better ear pure tone average) did not.
CONCLUSION: The present findings are consistent with a framework in which the additional cognitive demands caused by a degraded acoustic signal use resources that would otherwise be available for memory encoding for both young and older adults. Verbal working memory is a likely candidate for supporting both of these processes.


Subjects
Memory, Short-Term, Mental Recall, Speech Acoustics, Adult, Age Factors, Aged, Female, Humans, Male, Middle Aged, Young Adult
14.
Neuroimage; 117: 319-26, 2015 Aug 15.
Article in English | MEDLINE | ID: mdl-26026816

ABSTRACT

The functional neuroanatomy of speech processing has been investigated using positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) for more than 20 years. However, these approaches have relatively poor temporal resolution and/or challenges of acoustic contamination due to the constraints of echoplanar fMRI. Furthermore, these methods are contraindicated because of safety concerns in longitudinal studies and research with children (PET) or in studies of patients with metal implants (fMRI). High-density diffuse optical tomography (HD-DOT) permits presenting speech in a quiet acoustic environment, has excellent temporal resolution relative to the hemodynamic response, and provides noninvasive and metal-compatible imaging. However, the performance of HD-DOT in imaging the brain regions involved in speech processing is not fully established. In the current study, we use an auditory sentence comprehension task to evaluate the ability of HD-DOT to map the cortical networks supporting speech processing. Using sentences with two levels of linguistic complexity, along with a control condition consisting of unintelligible noise-vocoded speech, we recovered a hierarchically organized speech network that matches the results of previous fMRI studies. Specifically, hearing intelligible speech resulted in increased activity in bilateral temporal cortex and left frontal cortex, with syntactically complex speech leading to additional activity in left posterior temporal cortex and left inferior frontal gyrus. These results demonstrate the feasibility of using HD-DOT to map spatially distributed brain networks supporting higher-order cognitive faculties such as spoken language.


Subjects
Brain Mapping/methods, Frontal Lobe/physiology, Nerve Net/physiology, Speech Perception/physiology, Temporal Lobe/physiology, Tomography, Optical/methods, Adult, Female, Humans, Male, Young Adult
15.
J Neurophysiol; 114(3): 1819-26, 2015 Sep.
Article in English | MEDLINE | ID: mdl-26245316

ABSTRACT

Past neuroimaging studies have documented discrete regions of human temporal cortex that are more strongly activated by conspecific voice sounds than by nonvoice sounds. However, the mechanisms underlying this voice sensitivity remain unclear. In the present functional MRI study, we took a novel approach to examining voice sensitivity, in which we applied a signal detection paradigm to the assessment of multivariate pattern classification among several living and nonliving categories of auditory stimuli. Within this framework, voice sensitivity can be interpreted as a distinct neural representation of brain activity that correctly distinguishes human vocalizations from other auditory object categories. Across a series of auditory categorization tests, we found that bilateral superior and middle temporal cortex consistently exhibited robust sensitivity to human vocal sounds. Although the strongest categorization was in distinguishing human voice from other categories, subsets of these regions were also able to distinguish reliably between nonhuman categories, suggesting a general role in auditory object categorization. Our findings complement the current evidence of cortical sensitivity to human vocal sounds by revealing that the greatest sensitivity during categorization tasks is devoted to distinguishing voice from nonvoice categories within human temporal cortex.


Subjects
Auditory Perception, Temporal Lobe/physiology, Voice, Adult, Brain Mapping, Female, Humans, Male
16.
Neuroimage; 99: 477-86, 2014 Oct 1.
Article in English | MEDLINE | ID: mdl-24830834

ABSTRACT

Linking structural neuroimaging data from multiple modalities to cognitive performance is an important challenge for cognitive neuroscience. In this study we examined the relationship between verbal fluency performance and neuroanatomy in 54 patients with frontotemporal degeneration (FTD) and 15 age-matched controls, all of whom had T1- and diffusion-weighted imaging. Our goal was to incorporate measures of both gray matter (voxel-based cortical thickness) and white matter (fractional anisotropy) into a single statistical model that relates to behavioral performance. We first used eigenanatomy to define data-driven regions of interest (DD-ROIs) for both gray matter and white matter. Eigenanatomy is a multivariate dimensionality reduction approach that identifies spatially smooth, unsigned principal components that explain the maximal amount of variance across subjects. We then used a statistical model selection procedure to see which of these DD-ROIs best modeled performance on verbal fluency tasks hypothesized to rely on distinct components of a large-scale neural network that support language: category fluency requires a semantic-guided search and is hypothesized to rely primarily on temporal cortices that support lexical-semantic representations; letter-guided fluency requires a strategic mental search and is hypothesized to require executive resources to support a more demanding search process, which depends on prefrontal cortex in addition to temporal network components that support lexical representations. We observed that both types of verbal fluency performance are best described by a network that includes a combination of gray matter and white matter. For category fluency, the identified regions included bilateral temporal cortex and a white matter region including left inferior longitudinal fasciculus and frontal-occipital fasciculus. For letter fluency, a left temporal lobe region was likewise selected, along with regions of frontal cortex.
These results are consistent with our hypothesized neuroanatomical models of language processing and its breakdown in FTD. We conclude that clustering the data with eigenanatomy before performing linear regression is a promising tool for multimodal data analysis.


Subjects
Brain/pathology , Cognition , Aged , Female , Frontotemporal Lobar Degeneration/pathology , Gray Matter/physiology , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Models, Neurological , Multivariate Analysis , Nerve Net/physiology , Neuropsychological Tests , Reproducibility of Results , Verbal Behavior/physiology , White Matter/pathology
17.
Cogn Affect Behav Neurosci ; 14(1): 37-48, 2014 Mar.
Article in English | MEDLINE | ID: mdl-24425352

ABSTRACT

We hypothesized that semantic memory for object concepts involves both representations of visual feature knowledge in modality-specific association cortex and heteromodal regions that are important for integrating and organizing this semantic knowledge so that it can be used in a flexible, contextually appropriate manner. We examined this hypothesis in an fMRI study of mild Alzheimer's disease (AD). Participants were presented with pairs of printed words and asked whether the words matched on a given visual-perceptual feature (e.g., guitar, violin: SHAPE). The stimuli probed natural kinds and manufactured objects, and the judgments involved shape or color. We found activation of bilateral ventral temporal cortex and left dorsolateral prefrontal cortex during semantic judgments, with AD patients showing less activation of these regions than healthy seniors. Moreover, AD patients showed less ventral temporal activation than did healthy seniors for manufactured objects, but not for natural kinds. We also used diffusion-weighted MRI of white matter to examine fractional anisotropy (FA). Patients with AD showed significantly reduced FA in the superior longitudinal fasciculus and inferior frontal-occipital fasciculus, which carry projections linking temporal and frontal regions of this semantic network. Our results are consistent with the hypothesis that semantic memory is supported in part by a large-scale neural network involving modality-specific association cortex, heteromodal association cortex, and projections between these regions. The semantic deficit in AD thus arises from gray matter disease that affects the representation of feature knowledge and the processing of its content, as well as white matter disease that interrupts the integrated functioning of this large-scale network.
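The fractional anisotropy measure used above has a closed form: given the three eigenvalues of the diffusion tensor at a voxel, FA is the normalized standard deviation of the eigenvalues, ranging from 0 (isotropic diffusion) to values near 1 (strongly directional diffusion, as in coherent white matter tracts). A minimal sketch, with illustrative eigenvalues:

```python
import numpy as np

def fractional_anisotropy(evals):
    """FA from the three eigenvalues of a diffusion tensor."""
    lam = np.asarray(evals, dtype=float)
    md = lam.mean()                          # mean diffusivity
    num = np.sqrt(((lam - md) ** 2).sum())   # spread around the mean
    den = np.sqrt((lam ** 2).sum())
    return np.sqrt(1.5) * num / den

# Isotropic diffusion -> FA = 0; strongly directional -> FA near 1.
print(fractional_anisotropy([1.0, 1.0, 1.0]))   # 0.0
print(round(fractional_anisotropy([1.7, 0.2, 0.2]), 3))
```

Reduced FA along a tract, as reported here for the superior longitudinal and inferior frontal-occipital fasciculi, indicates that diffusion has become less directionally constrained, consistent with degraded white matter microstructure.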


Subjects
Alzheimer Disease/physiopathology , Brain/physiopathology , Judgment/physiology , Pattern Recognition, Visual/physiology , Recognition (Psychology)/physiology , Semantics , Aged , Aged, 80 and over , Alzheimer Disease/pathology , Anisotropy , Brain/pathology , Brain Mapping , Diffusion Magnetic Resonance Imaging , Female , Humans , Magnetic Resonance Imaging , Male , Middle Aged , Nerve Fibers, Myelinated/pathology , Nerve Fibers, Myelinated/physiology , Neuropsychological Tests , Photic Stimulation , Reading , Task Performance and Analysis
18.
Cereb Cortex ; 23(6): 1378-87, 2013 Jun.
Article in English | MEDLINE | ID: mdl-22610394

ABSTRACT

A growing body of evidence shows that ongoing oscillations in auditory cortex modulate their phase to match the rhythm of temporally regular acoustic stimuli, increasing sensitivity to relevant environmental cues and improving detection accuracy. In the current study, we test the hypothesis that nonsensory information provided by linguistic content enhances phase-locked responses to intelligible speech in the human brain. Sixteen adults listened to meaningful sentences while we recorded neural activity using magnetoencephalography. Stimuli were processed using a noise-vocoding technique to vary intelligibility while keeping the temporal acoustic envelope consistent. We show that the acoustic envelopes of sentences contain most power between 4 and 7 Hz and that it is in this frequency band that phase locking between neural activity and envelopes is strongest. Bilateral oscillatory neural activity phase-locked to unintelligible speech, but this cerebro-acoustic phase locking was enhanced when speech was intelligible. This enhanced phase locking was left lateralized and localized to left temporal cortex. Together, our results demonstrate that entrainment to connected speech depends not only on acoustic characteristics but also on listeners' ability to extract linguistic information. This suggests a biological framework for speech comprehension in which acoustic and linguistic cues reciprocally aid in stimulus prediction.
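Cerebro-acoustic phase locking of the kind measured above is commonly quantified with a phase-locking value (PLV): band-pass both signals to the 4-7 Hz envelope band, extract instantaneous phase with the Hilbert transform, and take the magnitude of the mean phase-difference vector. A minimal sketch with a simulated envelope and a simulated neural signal (the sampling rate, lag, and noise level are illustrative, not taken from the study):

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 200.0                             # Hz, simulated sampling rate
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# Simulated 5 Hz speech envelope, and a neural signal that tracks it
# with a fixed phase lag plus additive noise.
envelope = np.sin(2 * np.pi * 5 * t)
neural = np.sin(2 * np.pi * 5 * t - 0.6) + 0.5 * rng.normal(size=t.size)

# Band-pass both signals to 4-7 Hz, where envelope power peaks.
b, a = butter(4, [4 / (fs / 2), 7 / (fs / 2)], btype="band")
env_f = filtfilt(b, a, envelope)
neu_f = filtfilt(b, a, neural)

# Phase-locking value: magnitude of the mean phase-difference vector.
dphi = np.angle(hilbert(env_f)) - np.angle(hilbert(neu_f))
plv = np.abs(np.mean(np.exp(1j * dphi)))
print(round(plv, 2))
```

A PLV near 1 indicates a consistent phase relationship between envelope and neural signal over time; a PLV near 0 indicates no systematic relationship.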


Subjects
Auditory Cortex/physiology , Comprehension/physiology , Contingent Negative Variation/physiology , Speech/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Brain Mapping , Electroencephalography , Evoked Potentials, Auditory , Female , Fourier Analysis , Humans , Linguistics , Magnetic Resonance Imaging , Magnetoencephalography , Male , Sound Spectrography , Speech Perception , Time Factors , Vocabulary , Young Adult
19.
bioRxiv ; 2024 Jan 09.
Article in English | MEDLINE | ID: mdl-37986896

ABSTRACT

Traditional laboratory tasks offer tight experimental control but lack the richness of our everyday human experience. As a result, many cognitive neuroscientists have been motivated to adopt experimental paradigms that are more natural, such as stories and movies. Here we describe data collected from 58 healthy adult participants (aged 18-76 years) who viewed 10 minutes of a movie (The Good, the Bad, and the Ugly, 1966). Most (36) participants viewed the clip more than once, resulting in 106 sessions of data. Cortical responses were mapped using high-density diffuse optical tomography (first- through fourth-nearest-neighbor separations of 1.3, 3.0, 3.9, and 4.7 cm), covering large portions of superficial occipital, temporal, parietal, and frontal lobes. Consistency of measured activity across subjects was quantified using intersubject correlation analysis. Data are provided in both channel format (SNIRF) and projected to standard space (NIfTI), using an atlas-based light model. These data are suitable for methods exploration as well as investigating a wide variety of cognitive phenomena.
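The intersubject correlation (ISC) analysis mentioned above is typically computed leave-one-out: each subject's time course is correlated with the average time course of all remaining subjects, so that only stimulus-driven activity shared across the group contributes. A minimal sketch with simulated data (subject count, time points, and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated responses for one channel: 20 subjects x 300 time points,
# a shared stimulus-driven signal plus subject-specific noise.
shared = rng.normal(size=300)
data = shared + 0.8 * rng.normal(size=(20, 300))

def isc(data):
    """Leave-one-out ISC: correlate each subject with the mean of the rest."""
    n = data.shape[0]
    vals = []
    for i in range(n):
        rest = np.delete(data, i, axis=0).mean(axis=0)
        vals.append(np.corrcoef(data[i], rest)[0, 1])
    return np.array(vals)

print(round(isc(data).mean(), 2))
```

Channels or voxels with high mean ISC are those whose responses are reliably locked to the movie across viewers, which is what makes ISC useful as a data-quality and engagement metric for naturalistic paradigms.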

20.
J Neurosci ; 32(25): 8443-53, 2012 Jun 20.
Article in English | MEDLINE | ID: mdl-22723684

ABSTRACT

A striking feature of human perception is that our subjective experience depends not only on sensory information from the environment but also on our prior knowledge or expectations. The precise mechanisms by which sensory information and prior knowledge are integrated remain unclear, with longstanding disagreement concerning whether integration is strictly feedforward or whether higher-level knowledge influences sensory processing through feedback connections. Here we used concurrent EEG and MEG recordings to determine how sensory information and prior knowledge are integrated in the brain during speech perception. We manipulated listeners' prior knowledge of speech content by presenting matching, mismatching, or neutral written text before a degraded (noise-vocoded) spoken word. When speech conformed to prior knowledge, subjective perceptual clarity was enhanced. This enhancement in clarity was associated with a spatiotemporal profile of brain activity uniquely consistent with a feedback process: activity in the inferior frontal gyrus was modulated by prior knowledge before activity in lower-level sensory regions of the superior temporal gyrus. In parallel, we parametrically varied the level of speech degradation, and therefore the amount of sensory detail, so that changes in neural responses attributable to sensory information and prior knowledge could be directly compared. Although sensory detail and prior knowledge both enhanced speech clarity, they had an opposite influence on the evoked response in the superior temporal gyrus. We argue that these data are best explained within the framework of predictive coding, in which sensory activity is compared with top-down predictions and only unexplained activity is propagated through the cortical hierarchy.
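The predictive-coding account invoked above has a simple computational core: the signal passed up the hierarchy is the residual between the sensory input and a top-down prediction, so an accurate prior leaves less to explain. A toy sketch with simulated feature vectors (the dimensionality, noise level, and "word" representation are all hypothetical, purely to illustrate the residual logic):

```python
import numpy as np

rng = np.random.default_rng(3)

# A degraded spoken word as a noisy feature vector; noise stands in
# for the loss of detail introduced by noise-vocoding.
true_word = rng.normal(size=50)
sensory = true_word + 0.5 * rng.normal(size=50)

# Top-down predictions from prior written text: matching text predicts
# the word's features; neutral text predicts nothing.
matching_prior = true_word
neutral_prior = np.zeros(50)

def prediction_error(sensory, prediction):
    """Magnitude of the residual that would propagate up the hierarchy."""
    return np.linalg.norm(sensory - prediction)

e_match = prediction_error(sensory, matching_prior)
e_neutral = prediction_error(sensory, neutral_prior)
print(e_match < e_neutral)
```

On this account, a matching prior shrinks the residual and hence the evoked response in sensory cortex, which is one way to read the opposite effects of sensory detail and prior knowledge reported for the superior temporal gyrus.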


Subjects
Knowledge , Speech Perception/physiology , Adolescent , Adult , Analysis of Variance , Cerebral Cortex/physiology , Cues (Psychology) , Data Interpretation, Statistical , Electroencephalography , Frontal Lobe/physiology , Functional Laterality , Humans , Image Processing, Computer-Assisted , Magnetoencephalography , Psychomotor Performance/physiology , Speech , Young Adult