Results 1 - 17 of 17
1.
Phonetica ; 80(3-4): 225-258, 2023 06 27.
Article in English | MEDLINE | ID: mdl-37312566

ABSTRACT

Previous research on the phonetic realization of Hawaiian glottal stops has shown that they can be produced in several ways, including with creaky voice, full closure, or modal voice. This study investigates whether the realization is conditioned by word-level prosodic or metrical factors, which would be consistent with research demonstrating that segmental distribution and phonetic realization can be sensitive to word-internal structure. At the same time, it has also been shown that prosodic prominence, such as syllable stress, can affect phonetic realization. Data come from the 1970s-80s radio program Ka Leo Hawai'i. Using Parker Jones' computational prosodic grammar (Parker Jones, Oiwi. 2010. A computational phonology and morphology of Hawaiian. University of Oxford DPhil thesis), words were parsed and glottal stops were automatically coded for word position, syllable stress, and prosodic word position. The frequency of the word containing the glottal stop was also calculated. Results show that full glottal closures are more likely at the beginning of a prosodic word, especially in word-medial position. Glottal stops with full closure in lexical word-initial position are more likely in lower-frequency words. The findings for the Hawaiian glottal stop suggest that prosodic prominence does not condition a stronger realization; rather, the role of the prosodic word is similar to that in other languages exhibiting phonetic cues to word-level prosodic structure.


Subjects
Language, Voice, Humans, Hawaii, Phonetics, Cues (Psychology)
2.
Hum Brain Mapp ; 41(3): 815-826, 2020 02 15.
Article in English | MEDLINE | ID: mdl-31638304

ABSTRACT

Resting-state fMRI has shown the ability to predict task activation on an individual basis by using a general linear model (GLM) to map resting-state network features to activation z-scores. The question remains whether the relatively simplistic GLM is the best approach to accomplish this prediction. In this study, several regression-based machine-learning approaches were compared, including GLMs, feed-forward neural networks, and random forest bootstrap aggregation (bagging). Resting-state and task data from 350 Human Connectome Project subjects were analyzed. First, the effect of the number of training subjects on the prediction accuracy was evaluated. In addition, the prediction accuracy and Dice coefficient were compared across models. Prediction accuracy increased with the training number up to 200 subjects; however, an elbow in the prediction curve occurred around 30-40 training subjects. All models performed well with correlation matrices, which displayed correlation between actual and predicted task activation for all subjects, exhibiting a strong diagonal trend for all tasks. Overall, the neural network and random forest bagging techniques outperformed the GLM. These approaches, however, require additional computing power and processing time. These results show that, while the GLM performs well, resting-state fMRI prediction of task activation could benefit from more complex machine learning approaches.
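The model comparison described above can be sketched with off-the-shelf regressors. This is an illustrative toy, not the study's pipeline: the synthetic feature and target arrays stand in for the HCP resting-state network features and task z-scores, and the threshold used for the Dice overlap is arbitrary.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.neural_network import MLPRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
n_vertices, n_features = 2000, 10
X = rng.normal(size=(n_vertices, n_features))  # stand-in resting-state network features
y = X @ rng.normal(size=n_features) + 0.5 * rng.normal(size=n_vertices)  # stand-in task z-scores

models = {
    "GLM": LinearRegression(),
    "neural network": MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000, random_state=0),
    "random forest bagging": BaggingRegressor(DecisionTreeRegressor(), n_estimators=50, random_state=0),
}

def dice(a, b):
    """Dice coefficient between two binary (thresholded) activation maps."""
    return 2 * np.sum(a & b) / (np.sum(a) + np.sum(b))

train, test = slice(0, 1500), slice(1500, None)
results = {}
for name, model in models.items():
    model.fit(X[train], y[train])
    pred = model.predict(X[test])
    r = np.corrcoef(y[test], pred)[0, 1]  # prediction accuracy: actual vs. predicted activation
    d = dice(y[test] > 1.0, pred > 1.0)   # spatial overlap of thresholded maps
    results[name] = (r, d)
    print(f"{name}: r = {r:.2f}, Dice = {d:.2f}")
```

On this linear synthetic data the GLM is already near-optimal; the study's point is that on real fMRI data the nonlinear models captured structure the GLM missed, at extra computational cost.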


Subjects
Brain Mapping/methods, Brain/physiology, Machine Learning, Magnetic Resonance Imaging, Nerve Net/physiology, Task Performance and Analysis, Adult, Brain/diagnostic imaging, Connectome/methods, Humans, Nerve Net/diagnostic imaging, Regression Analysis
3.
Neuroimage ; 203: 116184, 2019 12.
Article in English | MEDLINE | ID: mdl-31520744

ABSTRACT

This fMRI study of 24 healthy human participants investigated whether any part of the auditory cortex was more responsive to self-generated speech sounds compared to hearing another person speak. The results demonstrate a double dissociation in two different parts of the auditory cortex. In the right posterior superior temporal sulcus (RpSTS), activation was higher during speech production than listening to auditory stimuli, whereas in bilateral superior temporal gyri (STG), activation was higher for listening to auditory stimuli than during speech production. In the second part of the study, we investigated the function of the identified regions, by examining how activation changed across a range of listening and speech production tasks that systematically varied the demands on acoustic, semantic, phonological and orthographic processing. In RpSTS, activation during auditory conditions was higher in the absence of semantic cues, plausibly indicating increased attention to the spectral-temporal features of auditory inputs. In addition, RpSTS responded in the absence of any auditory inputs when participants were making one-back matching decisions on visually presented pseudowords. After analysing the influence of visual, phonological, semantic and orthographic processing, we propose that RpSTS (i) contributes to short term memory of speech sounds as well as (ii) spectral-temporal processing of auditory input and (iii) may play a role in integrating auditory expectations with auditory input. In contrast, activation in bilateral STG was sensitive to acoustic input and did not respond in the absence of auditory input. The special role of RpSTS during speech production therefore merits further investigation if we are to fully understand the neural mechanisms supporting speech production during speech acquisition, adult life, hearing loss and after brain injury.


Subjects
Auditory Cortex/physiology, Speech Perception/physiology, Speech/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Adult, Brain Mapping, Female, Humans, Magnetic Resonance Imaging, Male, Middle Aged, Visual Perception/physiology, Young Adult
4.
Brain ; 141(12): 3389-3404, 2018 12 01.
Article in English | MEDLINE | ID: mdl-30418586

ABSTRACT

Acquired language disorders after stroke are strongly associated with left hemisphere damage. When language difficulties are observed in the context of right hemisphere strokes, patients are usually considered to have atypical functional anatomy. By systematically integrating behavioural and lesion data from brain damaged patients with functional MRI data from neurologically normal participants, we investigated when and why right hemisphere strokes cause language disorders. Experiment 1 studied right-handed patients with unilateral strokes that damaged the right (n = 109) or left (n = 369) hemispheres. The most frequently impaired language task was: auditory sentence-to-picture matching after right hemisphere strokes; and spoken picture description after left hemisphere strokes. For those with auditory sentence-to-picture matching impairments after right hemisphere strokes, the majority (n = 9) had normal performance on tests of perceptual (visual or auditory) and linguistic (semantic, phonological or syntactic) processing. Experiment 2 found that these nine patients had significantly more damage to dorsal parts of the superior longitudinal fasciculus and the right inferior frontal sulcus compared to 75 other patients who also had right hemisphere strokes but were not impaired on the auditory sentence-to-picture matching task. Damage to these right hemisphere regions caused long-term speech comprehension difficulties in 67% of patients. Experiments 3 and 4 used functional MRI in two groups of 25 neurologically normal individuals to show that within the regions identified by Experiment 2, the right inferior frontal sulcus was normally activated by (i) auditory sentence-to-picture matching; and (ii) one-back matching when the demands on linguistic and non-linguistic working memory were high. 
Together, these experiments demonstrate that the right inferior frontal cortex contributes to linguistic and non-linguistic working memory capacity (executive function) that is needed for normal speech comprehension. Our results link previously unrelated literatures on the role of the right inferior frontal cortex in executive processing and the role of executive processing in sentence comprehension; which in turn helps to explain why right inferior frontal activity has previously been reported to increase during recovery of language function after left hemisphere stroke. The clinical relevance of our findings is that the detrimental effect of right hemisphere strokes on language is (i) much greater than expected; (ii) frequently observed after damage to the right inferior frontal sulcus; (iii) task dependent; (iv) different to the type of impairments observed after left hemisphere strokes; and (v) can result in long-lasting deficits that are (vi) not the consequence of atypical language lateralization.


Subjects
Comprehension, Frontal Lobe/pathology, Language Disorders/pathology, Language Disorders/psychology, Speech Perception, Stroke/complications, Female, Functional Laterality, Humans, Language Disorders/etiology, Linguistics, Male, Memory, Short-Term, Middle Aged
5.
J Neurosci ; 35(11): 4751-9, 2015 Mar 18.
Article in English | MEDLINE | ID: mdl-25788691

ABSTRACT

The parietal operculum, particularly the cytoarchitectonic area OP1 of the secondary somatosensory area (SII), is involved in somatosensory feedback. Using fMRI with 58 human subjects, we investigated task-dependent differences in SII/OP1 activity during three familiar speech production tasks: object naming, reading and repeatedly saying "1-2-3." Bilateral SII/OP1 was significantly suppressed (relative to rest) during object naming, to a lesser extent when repeatedly saying "1-2-3" and not at all during reading. These results cannot be explained by task difficulty but the contrasting difference between naming and reading illustrates how the demands on somatosensory activity change with task, even when motor output (i.e., production of object names) is matched. To investigate what determined SII/OP1 deactivation during object naming, we searched the whole brain for areas where activity increased as that in SII/OP1 decreased. This across subject covariance analysis revealed a region in the right superior temporal sulcus (STS) that lies within the auditory cortex, and is activated by auditory feedback during speech production. The tradeoff between activity in SII/OP1 and STS was not observed during reading, which showed significantly more activation than naming in both SII/OP1 and STS bilaterally. These findings suggest that, although object naming is more error prone than reading, subjects can afford to rely more or less on somatosensory or auditory feedback during naming. In contrast, fast and efficient error-free reading places more consistent demands on both types of feedback, perhaps because of the potential for increased competition between lexical and sublexical codes at the articulatory level.


Subjects
Auditory Cortex/physiology, Brain Mapping/methods, Photic Stimulation/methods, Reading, Somatosensory Cortex/physiology, Speech/physiology, Adult, Female, Humans, Male, Middle Aged, Young Adult
6.
Brain ; 138(Pt 4): 1070-83, 2015 Apr.
Article in English | MEDLINE | ID: mdl-25688076

ABSTRACT

Post-stroke prognoses are usually inductive, generalizing trends learned from one group of patients, whose outcomes are known, to make predictions for new patients. Research into the recovery of language function is almost exclusively focused on monolingual stroke patients, but bilingualism is the norm in many parts of the world. If bilingual language recruits qualitatively different networks in the brain, prognostic models developed for monolinguals might not generalize well to bilingual stroke patients. Here, we sought to establish how applicable post-stroke prognostic models, trained with monolingual patient data, are to bilingual stroke patients who had been ordinarily resident in the UK for many years. We used an algorithm to extract binary lesion images for each stroke patient, and assessed their language with a standard tool. We used feature selection and cross-validation to find 'good' prognostic models for each of 22 different language skills, using monolingual data only (174 patients; 112 males and 62 females; age at stroke: mean = 53.0 years, standard deviation = 12.2 years, range = 17.2-80.1 years; time post-stroke: mean = 55.6 months, standard deviation = 62.6 months, range = 3.1-431.9 months), then made predictions for both monolinguals and bilinguals (33 patients; 18 males and 15 females; age at stroke: mean = 49.0 years, standard deviation = 13.2 years, range = 23.1-77.0 years; time post-stroke: mean = 49.2 months, standard deviation = 55.8 months, range = 3.9-219.9 months) separately, after training with monolingual data only. We measured group differences by comparing prediction error distributions, and used a Bayesian test to search for group differences in terms of lesion-deficit associations in the brain. 
Our models distinguished better outcomes from worse outcomes equally well within each group, but tended to be over-optimistic when predicting bilingual language outcomes: our bilingual patients tended to have poorer language skills than expected, based on trends learned from monolingual data alone, and this was significant (P < 0.05, corrected for multiple comparisons) in 13/22 language tasks. Both patient groups appeared to be sensitive to damage in the same sets of regions, though the bilinguals were more sensitive than the monolinguals.
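The group-transfer evaluation above can be sketched as follows. This is a toy on simulated data: `LinearRegression` stands in for the study's feature-selected prognostic models, and the "bilingual" group is simulated with systematically poorer scores than its lesion features predict, mirroring the direction of the reported effect.

```python
import numpy as np
from scipy.stats import mannwhitneyu
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n_mono, n_bi, n_features = 174, 33, 8
w = rng.normal(size=n_features)

X_mono = rng.normal(size=(n_mono, n_features))  # stand-in binary-lesion-derived features
y_mono = X_mono @ w + rng.normal(size=n_mono)   # stand-in language scores
X_bi = rng.normal(size=(n_bi, n_features))
y_bi = X_bi @ w + rng.normal(size=n_bi) - 1.5   # poorer outcomes than features predict

model = LinearRegression()
# Held-out predictions for the training group via cross-validation ...
err_mono = y_mono - cross_val_predict(model, X_mono, y_mono, cv=5)
# ... versus out-of-group predictions after training on monolingual data only
err_bi = y_bi - model.fit(X_mono, y_mono).predict(X_bi)

# Negative signed error means the model was over-optimistic for that patient
stat, p = mannwhitneyu(err_mono, err_bi)
print(f"median error: mono = {np.median(err_mono):+.2f}, bi = {np.median(err_bi):+.2f}, p = {p:.2g}")
```

A significant difference between the two error distributions, as here, is what "over-optimistic for bilinguals" looks like in this framing.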


Subjects
Databases, Factual, Language Tests, Language, Multilingualism, Stroke/diagnosis, Adolescent, Adult, Aged, Aged, 80 and over, Female, Humans, Male, Middle Aged, Stroke/epidemiology, Treatment Outcome, Young Adult
7.
Cereb Cortex ; 24(6): 1601-8, 2014 Jun.
Article in English | MEDLINE | ID: mdl-23382515

ABSTRACT

Unlike most languages that are written using a single script, Japanese uses multiple scripts including morphographic Kanji and syllabographic Hiragana and Katakana. Here, we used functional magnetic resonance imaging with dynamic causal modeling to investigate competing theories regarding the neural processing of Kanji and Hiragana during a visual lexical decision task. First, a bilateral model investigated interhemispheric connectivity between ventral occipito-temporal (vOT) cortex and Broca's area ("pars opercularis"). We found that Kanji significantly increased the connection strength from right-to-left vOT. This is interpreted in terms of increased right vOT activity for visually complex Kanji being integrated into the left (i.e. language dominant) hemisphere. Secondly, we used a unilateral left hemisphere model to test whether Kanji and Hiragana rely preferentially on ventral and dorsal paths, respectively, that is, they have different intrahemispheric functional connectivity profiles. Consistent with this hypothesis, we found that Kanji increased connectivity within the ventral path (V1 ↔ vOT ↔ Broca's area), and that Hiragana increased connectivity within the dorsal path (V1 ↔ supramarginal gyrus ↔ Broca's area). Overall, the results illustrate how the differential processing demands of Kanji and Hiragana influence both inter- and intrahemispheric interactions.


Subjects
Brain/physiology, Language, Pattern Recognition, Visual/physiology, Reading, Adult, Brain Mapping, Comprehension/physiology, Female, Functional Laterality, Humans, Japan, Language Tests, Magnetic Resonance Imaging, Male, Middle Aged, Models, Neurological, Neural Pathways/physiology, Signal Processing, Computer-Assisted, Task Performance and Analysis, Young Adult
8.
J Neurosci ; 33(6): 2376-87, 2013 Feb 06.
Article in English | MEDLINE | ID: mdl-23392667

ABSTRACT

During speech production, auditory processing of self-generated speech is used to adjust subsequent articulations. The current study investigated how the proposed auditory-motor interactions are manifest at the neural level in native and non-native speakers of English who were overtly naming pictures of objects and reading their written names. Data were acquired with functional magnetic resonance imaging and analyzed with dynamic causal modeling. We found that (1) higher activity in articulatory regions caused activity in auditory regions to decrease (i.e., auditory suppression), and (2) higher activity in auditory regions caused activity in articulatory regions to increase (i.e., auditory feedback). In addition, we were able to demonstrate that (3) speaking in a non-native language involves more auditory feedback and less auditory suppression than speaking in a native language. The difference between native and non-native speakers was further supported by finding that, within non-native speakers, there was less auditory feedback for those with better verbal fluency. Consequently, the networks of more fluent non-native speakers looked more like those of native speakers. Together, these findings provide a foundation on which to explore auditory-motor interactions during speech production in other human populations, particularly those with speech difficulties.


Subjects
Auditory Pathways/physiology, Feedback, Physiological/physiology, Language, Motor Activity/physiology, Reading, Speech/physiology, Adult, Aged, Cohort Studies, Female, Humans, Magnetic Resonance Imaging/methods, Male, Middle Aged, Photic Stimulation/methods, Pilot Projects, Psychomotor Performance/physiology, Reaction Time/physiology, Young Adult
9.
Sci Rep ; 14(1): 11434, 2024 05 19.
Article in English | MEDLINE | ID: mdl-38763969

ABSTRACT

Sensorimotor control of complex, dynamic systems such as humanoids or quadrupedal robots is notoriously difficult. While artificial systems traditionally employ hierarchical optimisation approaches or black-box policies, recent results in systems neuroscience suggest that complex behaviours such as locomotion and reaching are correlated with limit cycles in the primate motor cortex. A recent result suggests that, when applied to a learned latent space, oscillating patterns of activation can be used to control locomotion in a physical robot. While reminiscent of limit cycles observed in primate motor cortex, these dynamics are unsurprising given the cyclic nature of the robot's behaviour (walking). In this preliminary investigation, we consider how a similar approach extends to a less obviously cyclic behaviour (reaching). This has been explored in prior work using computational simulations, but simulations necessarily make simplifying assumptions that may not correspond to reality, so their results do not trivially transfer to real robot platforms. Our primary contribution is to demonstrate that we can infer and control real robot states in a learnt representation using oscillatory dynamics during reaching tasks. We further show that the learned latent representation encodes interpretable movements in the robot's workspace. Compared to robot locomotion, the dynamics that we observe for reaching are not fully cyclic, as they do not begin and end at the same position in latent space. However, they do begin to trace out the shape of a cycle, and, by construction, they are driven by the same underlying oscillatory mechanics.
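The control scheme the abstract describes — drive a learnt latent space with an oscillator and decode latent states to robot configurations — can be illustrated with a toy example. Everything here is a stand-in: the decoder is a random linear map rather than a learnt representation, and no real robot is involved.

```python
import numpy as np

rng = np.random.default_rng(0)
n_latent, n_joints = 2, 7

decoder = rng.normal(size=(n_joints, n_latent))  # stand-in for a learnt latent-to-joint decoder

def latent_state(t, freq=1.0, amplitude=1.0):
    """Oscillatory latent trajectory: a limit cycle in latent space."""
    phase = 2 * np.pi * freq * t
    return amplitude * np.array([np.cos(phase), np.sin(phase)])

t = np.linspace(0.0, 1.0, 100)                          # one oscillation period
latent_traj = np.stack([latent_state(ti) for ti in t])  # shape (100, n_latent)
joint_traj = latent_traj @ decoder.T                    # shape (100, n_joints)

# A full cycle returns to its starting joint configuration (the walking case);
# reaching, as described above, traces out only part of such a cycle.
print(np.allclose(joint_traj[0], joint_traj[-1]))  # True
```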


Subjects
Robotics, Walking, Robotics/methods, Walking/physiology, Humans, Animals, Computer Simulation, Locomotion/physiology, Motor Cortex/physiology
10.
Cereb Cortex ; 22(4): 892-902, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21705392

ABSTRACT

Using functional magnetic resonance imaging, we found that when bilinguals named pictures or read words aloud, in their native or nonnative language, activation was higher relative to monolinguals in 5 left hemisphere regions: dorsal precentral gyrus, pars triangularis, pars opercularis, superior temporal gyrus, and planum temporale. We further demonstrate that these areas are sensitive to increasing demands on speech production in monolinguals. This suggests that the advantage of being bilingual comes at the expense of increased work in brain areas that support monolingual word processing. By comparing the effect of bilingualism across a range of tasks, we argue that activation is higher in bilinguals compared with monolinguals because word retrieval is more demanding; articulation of each word is less rehearsed; and speech output needs careful monitoring to avoid errors when competition for word selection occurs between, as well as within, language.


Subjects
Brain Mapping, Brain/physiology, Multilingualism, Names, Reading, Adolescent, Adult, Aged, Brain/blood supply, Female, Humans, Image Processing, Computer-Assisted, Magnetic Resonance Imaging, Male, Middle Aged, Oxygen/blood, Pattern Recognition, Visual, Photic Stimulation, Predictive Value of Tests, Psycholinguistics, Reaction Time, Speech Perception/physiology, Young Adult
11.
Front Hum Neurosci ; 16: 803163, 2022.
Article in English | MEDLINE | ID: mdl-35652007

ABSTRACT

Using fMRI, we investigated how right temporal lobe gliomas affecting the posterior superior temporal sulcus alter neural processing observed during speech perception and production tasks. Behavioural language testing showed that three pre-operative neurosurgical patients with grade 2, grade 3 or grade 4 tumours had the same pattern of mild language impairment in the domains of object naming and written word comprehension. When matching heard words for semantic relatedness (a speech perception task), these patients showed under-activation in the tumour infiltrated right superior temporal lobe compared to 61 neurotypical participants and 16 patients with tumours that preserved the right postero-superior temporal lobe, with enhanced activation within the (tumour-free) contralateral left superior temporal lobe. In contrast, when correctly naming objects (a speech production task), the patients with right postero-superior temporal lobe tumours showed higher activation than both control groups in the same right postero-superior temporal lobe region that was under-activated during auditory semantic matching. The task dependent pattern of under-activation during the auditory speech task and over-activation during object naming was also observed in eight stroke patients with right hemisphere infarcts that affected the right postero-superior temporal lobe compared to eight stroke patients with right hemisphere infarcts that spared it. These task-specific and site-specific cross-pathology effects highlight the importance of the right temporal lobe for language processing and motivate further study of how right temporal lobe tumours affect language performance and neural reorganisation. These findings may have important implications for surgical management of these patients, as knowledge of the regions showing functional reorganisation may help to avoid their inadvertent damage during neurosurgery.

12.
Clin Neuroradiol ; 31(1): 245-256, 2021 Mar.
Article in English | MEDLINE | ID: mdl-32274518

ABSTRACT

PURPOSE: Functional magnetic resonance imaging (fMRI) has an established role in neurosurgical planning; however, ambiguity surrounds the comparative value of resting and task-based fMRI relative to anatomical localization of the sensorimotor cortex. This study was carried out to determine: 1) how often fMRI adds to prediction of motor risks beyond expert neuroradiological review, 2) success rates of presurgical resting and task-based sensorimotor mapping, and 3) the impact of accelerated resting fMRI acquisitions on network detectability. METHODS: Data were collected at 2 centers from 71 patients with a primary brain tumor (31 women; mean age 41.9 ± 13.9 years) and 14 healthy individuals (6 women; mean age 37.9 ± 12.7 years). Preoperative 3T MRI included anatomical scans and resting fMRI using unaccelerated (TR = 3.5 s), intermediate (TR = 1.56 s) or high temporal resolution (TR = 0.72 s) sequences. Task fMRI finger tapping data were acquired in 45 patients. Group differences in fMRI reproducibility, spatial overlap and success frequencies were assessed with t-tests and χ²-tests. RESULTS: Radiological review identified the central sulcus in 98.6% (70/71) of patients. Task-fMRI succeeded in 100% (45/45). Resting fMRI failed to identify a sensorimotor network in up to 10 patients; it succeeded in 97.9% (47/48) of accelerated fMRIs, compared to only 60.9% (14/23) of unaccelerated fMRIs (χ²(2) = 17.84, p < 0.001). Of the patients, 12 experienced postoperative deterioration, largely predicted by anatomical proximity to the central sulcus. CONCLUSION: The use of fMRI in patients with residual or intact presurgical motor function added value to uncertain anatomical localization in just a single peri-Rolandic glioma case. Resting fMRI showed high correspondence to task localization when acquired with accelerated sequences but offered limited success at standard acquisitions.
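The headline frequency comparison (47/48 accelerated vs. 14/23 unaccelerated resting scans detecting a sensorimotor network) can be checked with a contingency-table test. Note this is a simplified 2×2 version: the χ²(2) statistic reported in the abstract implies the original test compared all three acquisition groups.

```python
from scipy.stats import chi2_contingency

#                   success  failure
table = [[47, 1],   # accelerated resting fMRI (TR = 1.56 s or 0.72 s)
         [14, 9]]   # unaccelerated resting fMRI (TR = 3.5 s)

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.2f}, p = {p:.2g}")
```

Even this coarser two-group comparison is significant well below p = 0.001, consistent with the reported result.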


Subjects
Brain Neoplasms, Glioma, Sensorimotor Cortex, Adult, Brain Mapping, Brain Neoplasms/diagnostic imaging, Brain Neoplasms/surgery, Female, Glioma/diagnostic imaging, Glioma/surgery, Humans, Magnetic Resonance Imaging, Male, Reproducibility of Results, Sensorimotor Cortex/diagnostic imaging
13.
Neuroimage Clin ; 30: 102689, 2021.
Article in English | MEDLINE | ID: mdl-34215157

ABSTRACT

Large individual differences in how brain networks respond to treatment hinder efforts to personalise treatment in neurological conditions. We used a brain network fingerprinting approach to longitudinally track re-organisation of complementary phonological and semantic language networks in 19 patients before and after brain-tumour surgery. Patient task fingerprints were individually compared to normal networks established in 17 healthy controls. Additionally, pre- and post-operative patient fingerprints were directly compared to assess longitudinal network adaptations. We found that task networks remained stable over time in healthy controls, whereas treatment induced reorganisation in 47.4% of patient fluency networks and 15.8% of semantic networks. How networks adapted after surgery was highly individual: a subset of patients (10%) showed 'normalisation' while others (21%) developed newly atypical networks after treatment. The strongest predictor of adaptation of the fluency network was the presence of clinically reported language symptoms. Our findings indicate a tight coupling between processes disrupting performance and neural network adaptation, the patterns of which appear to be both task- and individually-unique. We propose that connectivity fingerprinting offers potential as a clinical marker to track adaptation of specific functional networks across treatment interventions over time.


Subjects
Language, Magnetic Resonance Imaging, Brain/diagnostic imaging, Brain Mapping, Humans, Individuality, Neural Pathways/diagnostic imaging
14.
Neuroimage Clin ; 24: 101952, 2019.
Article in English | MEDLINE | ID: mdl-31357148

ABSTRACT

The occurrence of wide-scale neuroplasticity in the injured human brain raises hopes for biomarkers to guide personalised treatment. At the individual level, functional reorganisation has proven challenging to quantify using current techniques that are optimised for population-based analyses. In this cross-sectional study, we acquired functional MRI scans in 44 patients (22 men, 22 women, mean age: 39.4 ± 14 years) with a language-dominant hemisphere brain tumour prior to surgery and 23 healthy volunteers (11 men, 12 women, mean age: 36.3 ± 10.9 years) during performance of a verbal fluency task. We applied a recently developed approach to characterise the normal range of functional connectivity patterns during task performance in healthy controls. Next, we statistically quantified differences from the normal in individual patients and evaluated factors driving these differences. We show that the functional connectivity of brain regions involved in language fluency identifies "fingerprints" of brain plasticity in individual patients, not detected using standard task-evoked analyses. In contrast to healthy controls, patients with a tumour in their language dominant hemisphere showed highly variable fingerprints that uniquely distinguished individuals. Atypical fingerprints were influenced by tumour grade and tumour location relative to the typical fluency-activated network. Our findings show how alterations in brain networks can be visualised and statistically quantified from connectivity fingerprints in individual brains. We propose that connectivity fingerprints offer a statistical metric of individually-specific network organisation through which behaviourally-relevant adaptations could be formally quantified and monitored across individuals, treatments and time.
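A minimal sketch of the fingerprinting idea described above: summarise each subject's task network as the vectorised upper triangle of a region-by-region correlation matrix, establish a normal range from controls, and score a patient's deviation from it. The latent-factor generative model and all dimensions here are illustrative assumptions on simulated time series, not the study's method.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_latents, n_timepoints = 20, 5, 200
iu = np.triu_indices(n_regions, k=1)  # unique region pairs

def fingerprint(ts):
    """Vectorised upper triangle of the region-by-region correlation matrix."""
    return np.corrcoef(ts)[iu]

def simulate(mixing):
    """Region time series driven by shared latent signals plus noise."""
    latents = rng.normal(size=(n_latents, n_timepoints))
    return mixing @ latents + 0.5 * rng.normal(size=(n_regions, n_timepoints))

typical = rng.normal(size=(n_regions, n_latents))   # shared "normal" network structure
atypical = rng.normal(size=(n_regions, n_latents))  # reorganised network structure

controls = np.array([fingerprint(simulate(typical)) for _ in range(17)])
patient = fingerprint(simulate(atypical))

# Similarity of each fingerprint to the control mean, then the patient's
# deviation expressed as a z-score against the control distribution
mean_fp = controls.mean(axis=0)
control_sims = np.array([np.corrcoef(fp, mean_fp)[0, 1] for fp in controls])
patient_sim = np.corrcoef(patient, mean_fp)[0, 1]
z = (patient_sim - control_sims.mean()) / control_sims.std()
print(f"patient fingerprint z = {z:.1f}")
```

A strongly negative z flags the patient's connectivity pattern as outside the control range, which is the sense in which a fingerprint "statistically quantifies" individual reorganisation.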


Subjects
Brain Mapping/trends, Brain/diagnostic imaging, Language, Magnetic Resonance Imaging/trends, Nerve Net/diagnostic imaging, Neuronal Plasticity, Adult, Aged, Brain/physiology, Brain Mapping/methods, Cross-Sectional Studies, Female, Humans, Individuality, Magnetic Resonance Imaging/methods, Male, Middle Aged, Nerve Net/physiology, Neuronal Plasticity/physiology, Prospective Studies
15.
Front Hum Neurosci ; 8: 24, 2014.
Article in English | MEDLINE | ID: mdl-24550807

ABSTRACT

The aim of this paper was to investigate the neurological underpinnings of auditory-to-motor translation during auditory repetition of unfamiliar pseudowords. We tested two different hypotheses. First we used functional magnetic resonance imaging in 25 healthy subjects to determine whether a functionally defined area in the left temporo-parietal junction (TPJ), referred to as Sylvian-parietal-temporal region (Spt), reflected the demands on auditory-to-motor integration during the repetition of pseudowords relative to a semantically mediated nonverbal sound-naming task. The experiment also allowed us to test alternative accounts of Spt function, namely that Spt is involved in subvocal articulation or auditory processing that can be driven either bottom-up or top-down. The results did not provide convincing evidence that activation increased in either Spt or any other cortical area when non-semantic auditory inputs were being translated into motor outputs. Instead, the results were most consistent with Spt responding to bottom up or top down auditory processing, independent of the demands on auditory-to-motor integration. Second, we investigated the lesion sites in eight patients who had selective difficulties repeating heard words but with preserved word comprehension, picture naming and verbal fluency (i.e., conduction aphasia). All eight patients had white-matter tract damage in the vicinity of the arcuate fasciculus and only one of the eight patients had additional damage to the Spt region, defined functionally in our fMRI data. Our results are therefore most consistent with the neurological tradition that emphasizes the importance of the arcuate fasciculus in the non-semantic integration of auditory and motor speech processing.

16.
Front Hum Neurosci ; 8: 246, 2014.
Article in English | MEDLINE | ID: mdl-24834043

ABSTRACT

This fMRI study used a single, multi-factorial, within-subjects design to dissociate multiple linguistic and non-linguistic processing areas that are all involved in repeating back heard words. The study compared: (1) auditory to visual inputs; (2) phonological to non-phonological inputs; (3) semantic to non-semantic inputs; and (4) speech production to finger-press responses. The stimuli included words (semantic and phonological inputs), pseudowords (phonological input), pictures and sounds of animals or objects (semantic input), and colored patterns and hums (non-semantic and non-phonological). The speech production tasks involved auditory repetition, reading, and naming, while the finger-press tasks involved one-back matching. The results from the main effects and interactions were compared to predictions from a previously reported functional anatomical model of language based on a meta-analysis of many different neuroimaging experiments. Although the current experiment replicated many of the predicted findings, our within-subject design also revealed novel results by providing sufficient anatomical precision to dissect several different regions within the anterior insula, pars orbitalis, anterior cingulate, SMA, and cerebellum. For example, we found that one part of the pars orbitalis was involved in phonological processing and another in semantic processing. We also dissociated four different types of phonological effects in the left superior temporal sulcus (STS), left putamen, left ventral premotor cortex, and left pars orbitalis. Our findings challenge some commonly held opinions on the functional anatomy of language and resolve some previously conflicting findings about specific brain regions. Our experimental design also reveals details of the word repetition process that are not well captured by current models.

17.
Front Hum Neurosci ; 7: 787, 2013.
Article in English | MEDLINE | ID: mdl-24312042

ABSTRACT

Previous studies have investigated orthographic-to-phonological mapping during reading by comparing brain activation for (1) reading words to object naming, or (2) reading pseudowords (e.g., "phume") to words (e.g., "plume"). Here we combined both approaches to provide new insights into the underlying neural mechanisms. In fMRI data from 25 healthy adult readers, we first identified activation that was greater for reading words and pseudowords relative to picture and color naming. The most significant effect was observed in the left putamen, extending to both anterior and posterior borders. Second, consistent with previous studies, we show that both the anterior and posterior putamen are involved in articulating speech with greater activation during our overt speech production tasks (reading, repetition, object naming, and color naming) than silent one-back-matching on the same stimuli. Third, we compared putamen activation for words versus pseudowords during overt reading and auditory repetition. This revealed that the anterior putamen was most activated by reading pseudowords, whereas the posterior putamen was most activated by words irrespective of whether the task was reading words or auditory word repetition. The pseudoword effect in the anterior putamen is consistent with prior studies that associated this region with the initiation of novel sequences of movements. In contrast, the heightened word response in the posterior putamen is consistent with other studies that associated this region with "memory guided movement." Our results illustrate how the functional dissociation between the anterior and posterior putamen supports sublexical and lexical processing during reading.
