Results 1 - 20 of 36
2.
Schizophr Res; 259: 111-120, 2023 Sep.
Article in English | MEDLINE | ID: mdl-36564239

ABSTRACT

BACKGROUND: Disorganization, presenting as impairment in thought, language, and goal-directed behavior, is a core multidimensional syndrome of psychotic disorders. This study examined whether scalable computational measures of spoken language and smartphone usage patterns could serve as digital biomarkers of clinical disorganization symptoms. METHODS: In a longitudinal cohort of adults with a psychotic disorder, we examined the associations between clinical measures of disorganization and computational measures of 1) spoken language, derived from monthly, semi-structured, recorded clinical interviews; and 2) smartphone usage patterns, derived via passive sensing technologies over the month prior to each interview. The language features included speech quantity, rate, fluency, and semantic regularity. The smartphone features included data missingness and phone usage during sleep time. The clinical measures consisted of the conceptual disorganization, difficulty in abstract thinking, and poor attention items of the Positive and Negative Syndrome Scale (PANSS). Mixed linear regression analyses were used to estimate both fixed and random effects. RESULTS: Greater severity of conceptual disorganization was associated with greater verbosity and more disfluent speech. Greater severity of conceptual disorganization was also associated with greater missingness of smartphone data and greater smartphone usage during sleep time. While the observed associations were significant across the group, there was also significant variation between individuals. CONCLUSIONS: The findings suggest that digital measures of speech disfluency may serve as scalable markers of conceptual disorganization. The findings warrant further investigation into the use of recorded interviews and passive sensing technologies to assist in the characterization and tracking of psychotic illness.
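To make the analysis concrete, here is a minimal sketch of a mixed linear regression with fixed and random effects of the kind described in METHODS, written in Python with statsmodels. The simulated data, variable names (participant_id, disfluency_rate, panss_p2), and model terms are illustrative assumptions, not the study's actual code or variables.

```python
# Minimal sketch (not the study's code): a linear mixed model relating a speech
# disfluency feature to a PANSS conceptual disorganization rating, with a
# random intercept per participant. All data and variable names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_participants, n_months = 20, 6
df = pd.DataFrame({
    "participant_id": np.repeat(np.arange(n_participants), n_months),
    "month": np.tile(np.arange(n_months), n_participants),
})
intercepts = rng.normal(3, 1, n_participants)[df["participant_id"]]
df["disfluency_rate"] = rng.gamma(2.0, 1.0, len(df))   # e.g., disfluencies per minute
df["panss_p2"] = intercepts + 0.4 * df["disfluency_rate"] + rng.normal(0, 0.5, len(df))

# Fixed effects for disfluency and time; random intercepts absorb the stable
# between-person differences (the "significant variation between individuals").
model = smf.mixedlm("panss_p2 ~ disfluency_rate + month",
                    data=df, groups=df["participant_id"])
print(model.fit().summary())
```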


Subject(s)
Psychotic Disorders, Adult, Humans, Psychotic Disorders/diagnosis, Language, Thinking, Cognition, Speech
3.
Article in English | MEDLINE | ID: mdl-38282890

ABSTRACT

In this paper, we describe the design, collection, and validation of a new video database that includes holistic and dynamic emotion ratings from 83 participants watching 22 affective movie clips. In contrast to previous work in Affective Computing, which pursued a single "ground truth" label for the affective content of each moment of each video (e.g., by averaging the ratings of 2 to 7 trained participants), we embrace the subjectivity inherent to emotional experiences and provide the full distribution of all participants' ratings (with an average of 76.7 raters per video). We argue that this choice represents a paradigm shift with the potential to unlock new research directions, generate new hypotheses, and inspire novel methods in the Affective Computing community. We also describe several interdisciplinary use cases for the database: to provide dynamic norms for emotion elicitation studies (e.g., in psychology, medicine, and neuroscience), to train and test affective content analysis algorithms (e.g., for dynamic emotion recognition, video summarization, and movie recommendation), and to study subjectivity in emotional reactions (e.g., to identify moments of emotional ambiguity or ambivalence within movies, identify predictors of subjectivity, and develop personalized affective content analysis algorithms). The database is made freely available to researchers for noncommercial use at https://dynamos.mgb.org.
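As a concrete illustration of retaining the full distribution of ratings rather than a single averaged label, the sketch below summarizes simulated continuous valence ratings at each time point across raters. The data, column names, and summary statistics are assumptions for illustration only; they do not reflect the database's actual schema.

```python
# Minimal sketch: describe the per-moment distribution of emotion ratings
# across raters instead of collapsing to a single "ground truth" mean.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_raters, n_seconds = 77, 120
ratings = pd.DataFrame({
    "rater_id": np.repeat(np.arange(n_raters), n_seconds),
    "time_s": np.tile(np.arange(n_seconds), n_raters),
    "valence": rng.uniform(-1, 1, n_raters * n_seconds),
})

# Per-moment distribution summaries; the spread is informative in its own
# right, e.g., for locating moments of emotional ambiguity or ambivalence.
per_moment = ratings.groupby("time_s")["valence"].agg(
    mean="mean",
    sd="std",
    q25=lambda s: s.quantile(0.25),
    q75=lambda s: s.quantile(0.75),
)
print(per_moment.head())
```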

4.
J Neurosci Methods; 369: 109477, 2022 Mar 01.
Article in English | MEDLINE | ID: mdl-34998799

ABSTRACT

BACKGROUND: Meaningful integration of functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) requires knowing whether these measurements reflect the activity of the same neural sources, i.e., estimating the degree of coupling and decoupling between the neuroimaging modalities. NEW METHOD: This paper proposes a method to quantify the coupling and decoupling of fMRI and EEG signals based on the mixing matrix produced by joint independent component analysis (jICA). The method is termed fMRI/EEG-jICA. RESULTS: fMRI and EEG acquired during a syllable detection task with variable syllable presentation rates (0.25-3 Hz) were separated with jICA into two spatiotemporally distinct components: a primary component that increased nonlinearly in amplitude with syllable presentation rate, putatively reflecting an obligatory auditory response, and a secondary component that declined nonlinearly with syllable presentation rate, putatively reflecting an auditory attention orienting response. The two EEG subcomponents were of similar amplitude, but the secondary fMRI subcomponent was tenfold smaller than the primary one. COMPARISON TO EXISTING METHOD: fMRI multiple regression analysis yielded a map more consistent with the primary than the secondary fMRI subcomponent of jICA, as determined by a greater area under the curve (0.5 versus 0.38) in a sensitivity and specificity analysis of spatial overlap. CONCLUSION: fMRI/EEG-jICA revealed spatiotemporally distinct brain networks with greater sensitivity than fMRI multiple regression analysis, demonstrating how this method can be used to leverage EEG signals to inform the detection and functional characterization of fMRI signals. fMRI/EEG-jICA may be useful for studying neurovascular coupling at a macro level, e.g., in neurovascular disorders.
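The following is a minimal sketch of the joint ICA idea (not the authors' fMRI/EEG-jICA implementation): z-score each modality, concatenate along the feature axis, decompose with ICA, and read per-observation coupling off the shared mixing matrix. Array shapes, the number of components, and the use of scikit-learn's FastICA are assumptions.

```python
# Minimal joint-ICA sketch: fuse two modalities by z-scoring each, concatenating
# along the feature axis, and estimating a single mixing matrix shared by both.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_obs = 40                                   # e.g., conditions / presentation rates
fmri = rng.standard_normal((n_obs, 5000))    # hypothetical voxel features
eeg = rng.standard_normal((n_obs, 2000))     # hypothetical channel-time features

def zscore(x):
    return (x - x.mean(axis=0)) / x.std(axis=0)

joint = np.hstack([zscore(fmri), zscore(eeg)])       # (n_obs, n_fmri + n_eeg)

ica = FastICA(n_components=2, random_state=0, max_iter=1000)
# Treat the concatenated feature dimension as the signal axis so the recovered
# components are joint spatial maps spanning both modalities.
joint_maps = ica.fit_transform(joint.T)              # (n_features_total, n_components)
mixing = ica.mixing_                                 # (n_obs, n_components)

# Split each joint map back into its fMRI and EEG parts; the shared column of
# the mixing matrix quantifies how strongly the two parts co-vary (coupling).
fmri_maps = joint_maps[: fmri.shape[1], :]
eeg_maps = joint_maps[fmri.shape[1]:, :]
print(mixing.shape, fmri_maps.shape, eeg_maps.shape)
```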


Subject(s)
Magnetic Resonance Imaging, Neurovascular Coupling, Brain/diagnostic imaging, Brain Mapping/methods, Electroencephalography/methods, Magnetic Resonance Imaging/methods
6.
Schizophr Res; 245: 97-115, 2022 Jul.
Article in English | MEDLINE | ID: mdl-34456131

ABSTRACT

OBJECTIVES: This study aimed to (1) determine the feasibility of collecting behavioral data from participants hospitalized with acute psychosis and (2) begin to evaluate the clinical information that can be computationally derived from such data. METHODS: Behavioral data were collected across 99 sessions from 38 participants recruited from an inpatient psychiatric unit. Each session started with a semi-structured interview modeled on a typical "clinical rounds" encounter and included administration of the Positive and Negative Syndrome Scale (PANSS). ANALYSIS: We quantified aspects of participants' verbal behavior during the interview using lexical, coherence, and disfluency features. We then used two complementary approaches to explore our second objective. The first approach used predictive models to estimate participants' PANSS scores from their language features. The second approach used inferential models to quantify the relationships between individual language features and symptom measures. RESULTS: Our predictive models showed promise but lacked sufficient data to achieve clinically useful accuracy. Our inferential models identified statistically significant relationships between numerous language features and symptom domains. CONCLUSION: Our interview recording procedures were well tolerated and produced adequate data for transcription and analysis. The results of our inferential modeling suggest that automatic measurements of expressive language contain signals highly relevant to the assessment of psychosis. These findings establish the potential of measuring language during a clinical interview in a naturalistic setting and generate specific hypotheses that can be tested in future studies. This, in turn, will lead to more accurate modeling and a better understanding of the relationships between expressive language and psychosis.
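As an illustration of the predictive-modeling step, the sketch below estimates a PANSS score from interview language features with cross-validated ridge regression, keeping each participant's sessions within a single fold. The simulated features, targets, and model choices are assumptions, not the study's pipeline.

```python
# Minimal sketch: predict a clinical score from language features with
# group-aware cross-validation (sessions from one participant stay together).
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_sessions, n_features = 99, 20
X = rng.standard_normal((n_sessions, n_features))     # lexical/coherence/disfluency features
y = rng.normal(loc=60, scale=15, size=n_sessions)     # hypothetical PANSS total scores
participants = rng.integers(0, 38, size=n_sessions)   # session-to-participant mapping

model = make_pipeline(StandardScaler(), Ridge(alpha=1.0))
cv = GroupKFold(n_splits=5)
scores = cross_val_score(model, X, y, cv=cv, groups=participants,
                         scoring="neg_mean_absolute_error")
print("MAE per fold:", -scores)
```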


Subject(s)
Mania, Psychotic Disorders, Humans, Language, Psychotic Disorders/psychology
7.
Schizophr Res; 228: 394-402, 2021 Feb.
Article in English | MEDLINE | ID: mdl-33549981

ABSTRACT

BACKGROUND: Schizophrenia (SZ) is associated with devastating emotional, cognitive, and language impairments. Understanding the deficits in each domain and their interactions is important for developing novel, targeted psychotherapies. This study tested whether negative-threat word processing is altered in individuals with SZ compared to healthy controls (HC), in relation to SZ symptom severity across domains. METHODS: Thirty-one SZ and seventeen HC subjects were scanned with functional magnetic resonance imaging while silently reading negative-threat and neutral words. Post-scan, subjects rated the valence of each word. The effects of group (SZ, HC), word type (negative, neutral), task period (early, late), and severity of clinical symptoms (positive, negative, excitement/hostility, cognitive, depression/anxiety) on word valence ratings and brain activation were analyzed. RESULTS: Both SZ and HC subjects rated negative words as more negative than neutral words. The SZ subgroup with severe versus mild excitement/hostility symptoms rated the negative words as more negative. SZ versus HC subjects hyperactivated left language areas (angular gyrus, middle/inferior temporal gyrus (early period)) and the amygdala (early period) to negative words, and the amygdala (late period) to neutral words. In SZ, activation to negative versus neutral words in the left dorsal temporal pole and dorsal anterior cingulate was positively correlated with excitement/hostility scores. CONCLUSIONS: A negatively biased behavioral response to negative-threat words was seen in SZ with severe versus mild excitement/hostility symptoms. The biased behavioral response was mediated by hyperactivation of brain networks associated with semantic processing of emotion concepts. Thus, word-level semantic processing may be a relevant psychotherapeutic target in SZ.


Subject(s)
Schizophrenia, Brain/diagnostic imaging, Emotions, Hostility, Humans, Magnetic Resonance Imaging, Schizophrenia/complications, Schizophrenia/diagnostic imaging, Semantics
8.
Neuropsychologia; 146: 107543, 2020 Sep.
Article in English | MEDLINE | ID: mdl-32598966

ABSTRACT

Developmental dyslexia is a learning disorder characterized by difficulties reading words accurately and/or fluently. Several behavioral studies have suggested the presence of anomalies at an early stage of phoneme processing, when the complex spectrotemporal patterns in the speech signal are analyzed and assigned to phonemic categories. In this study, fMRI was used to compare brain responses associated with categorical discrimination of speech syllables (P) and acoustically matched nonphonemic stimuli (N) in children and adolescents with dyslexia and in typically developing (TD) controls, aged 8-17 years. The TD group showed significantly greater activation during the P condition relative to N in an area of the left ventral occipitotemporal cortex that corresponds well with the region referred to as the "visual word form area" (VWFA). Regression analyses using reading performance as a continuous variable across the full group of participants yielded similar results. Overall, the findings are consistent with those of previous neuroimaging studies using print stimuli in individuals with dyslexia, which found reduced activation in left occipitotemporal regions. However, the current study shows that these activation differences, previously seen during reading, are already apparent during auditory phoneme discrimination in youth with dyslexia, suggesting that the primary deficit in at least a subset of children may lie early in the speech processing stream and that categorical perception may be an important target of early intervention in children at risk for dyslexia.


Subject(s)
Dyslexia/physiopathology, Occipital Lobe/physiopathology, Phonetics, Reading, Speech Perception, Temporal Lobe/physiopathology, Adolescent, Child, Female, Humans, Magnetic Resonance Imaging, Male, Occipital Lobe/diagnostic imaging, Temporal Lobe/diagnostic imaging
9.
Front Neurosci; 14: 4, 2020.
Article in English | MEDLINE | ID: mdl-32038154

ABSTRACT

Differences between males and females in brain development and in the organization and hemispheric lateralization of brain functions have been described, including in language. Sex differences in language organization may have important implications for language mapping performed to assess, and minimize neurosurgical risk to, language function. This study examined the effect of sex on the activation and functional connectivity of the brain, measured with presurgical functional magnetic resonance imaging (fMRI) language mapping in patients with a brain tumor. We carried out a retrospective analysis of data from neurosurgical patients treated at our institution who met the criteria of pathological diagnosis (malignant brain tumor), tumor location (left hemisphere), and fMRI paradigms [sentence completion (SC), antonym generation (AG), and resting-state fMRI (rs-fMRI)]. Forty-seven patients (22 females, mean age = 56.0 years) were included in the study. Across the SC and AG tasks, females relative to males showed greater activation in limited areas, including the left inferior frontal gyrus classically associated with language. In contrast, males relative to females showed greater activation in extended areas beyond the classic language network, including the supplementary motor area (SMA) and precentral gyrus. In females, the rs-fMRI functional connectivity of the left SMA was stronger with inferior temporal pole (TP) areas, whereas in males it was stronger with several midline areas. The findings are overall consistent with theories that females rely more on specialized language areas, and males more on generalized brain areas, for language function. Importantly, the findings suggest that sex could affect fMRI language mapping. Thus, considering sex as a variable in presurgical language mapping merits further investigation.

10.
Front Neurosci; 12: 13, 2018.
Article in English | MEDLINE | ID: mdl-29410611

ABSTRACT

Joint independent component analysis (jICA) can be applied within subject for fusion of multi-channel event-related potentials (ERP) and functional magnetic resonance imaging (fMRI), to measure brain function at high spatiotemporal resolution (Mangalathu-Arumana et al., 2012). However, the impact of experimental design choices on jICA performance has not been systematically studied. Here, the sensitivity of jICA for recovering neural sources in individual data was evaluated using computer simulations, as a function of imaging SNR, the number of independent representations of the ERP/fMRI data, the relationship between instantiations of the joint ERP/fMRI activity (linear, non-linear, uncoupled), and the type of sources (varying parametrically and non-parametrically across representations of the data). Neural sources were simulated with spatiotemporal and noise attributes derived from experimental data. The best performance, maximizing both cross-modal data fusion and the separation of brain sources, occurred with a moderate number of representations of the ERP/fMRI data (10-30), as in a mixed block/event-related experimental design. Importantly, the type of relationship between instantiations of the ERP/fMRI activity, whether linear, non-linear, or uncoupled, did not in itself impact jICA performance and was accurately recovered in the common profiles (i.e., mixing coefficients). Thus, jICA provides an unbiased way to characterize the relationship between ERP and fMRI activity across brain regions in individual data, rendering it potentially useful for characterizing pathological conditions in which neurovascular coupling is adversely affected.
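In the spirit of the simulations described above, the sketch below generates a joint source shared by two synthetic modalities, varies SNR, and checks how well ICA recovers the known mixing profile. All shapes, SNR levels, and the correlation criterion are illustrative assumptions rather than the paper's simulation framework.

```python
# Minimal simulation sketch: how well does ICA recover a known joint mixing
# profile from two noisy synthetic modalities as SNR varies?
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(1)
n_reps, n_feat_a, n_feat_b = 20, 500, 300     # e.g., 20 representations of the data
true_profile = rng.standard_normal(n_reps)    # shared loading across representations
map_a = rng.standard_normal(n_feat_a)         # modality-A source map
map_b = rng.standard_normal(n_feat_b)         # modality-B source map

for snr in (0.5, 1.0, 4.0):
    signal = np.hstack([np.outer(true_profile, map_a),
                        np.outer(true_profile, map_b)])
    X = signal + rng.standard_normal(signal.shape) / snr
    ica = FastICA(n_components=2, random_state=0, max_iter=2000)
    ica.fit_transform(X.T)                    # components live on the feature axis
    loadings = ica.mixing_                    # (n_reps, n_components)
    # Best absolute correlation between recovered loadings and the true profile
    r = max(abs(np.corrcoef(true_profile, loadings[:, k])[0, 1])
            for k in range(loadings.shape[1]))
    print(f"SNR={snr}: |r| between true and recovered profile = {r:.2f}")
```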

11.
eNeuro; 5(1), 2018.
Article in English | MEDLINE | ID: mdl-29354680

ABSTRACT

Primary and nonprimary cerebral cortex mature along different timescales; however, the differences between the rates of maturation of primary and nonprimary cortex are unclear. Cortical maturation can be measured through changes in tissue microstructure detectable by diffusion magnetic resonance imaging (MRI). In this study, diffusion tensor imaging (DTI) was used to characterize the maturation of Heschl's gyrus (HG), which contains both primary auditory cortex (pAC) and nonprimary auditory cortex (nAC), in 90 preterm infants between 26 and 42 weeks postmenstrual age (PMA). The preterm infants were in different acoustical environments during their hospitalization: 46 in open ward beds and 44 in single rooms. A control group consisted of 15 term-born infants. Diffusion parameters revealed that (1) changes in cortical microstructure that accompany cortical maturation had largely already occurred in pAC by 28 weeks PMA, and (2) rapid changes were taking place in nAC between 26 and 42 weeks PMA. At term equivalent PMA, diffusion parameters for auditory cortex were different between preterm infants and term control infants, reflecting either delayed maturation or injury. No effect of room type was observed. For the preterm group, disturbed maturation of nonprimary (but not primary) auditory cortex was associated with poorer language performance at age two years.


Subject(s)
Auditory Cortex/diagnostic imaging, Auditory Cortex/growth & development, Child Language, Preschool Child, Cohort Studies, Diffusion Magnetic Resonance Imaging, Diffusion Tensor Imaging, Female, Gray Matter/diagnostic imaging, Gray Matter/growth & development, Humans, Newborn Infant, Premature Infant, Male, White Matter/diagnostic imaging, White Matter/growth & development
12.
Brain Lang; 187: 33-40, 2018 Dec.
Article in English | MEDLINE | ID: mdl-29268943

ABSTRACT

Mounting evidence indicates a role for the dorsal auditory stream, which connects temporal auditory and frontal-parietal articulatory areas, in the perceptual decoding of speech. The activation time course in auditory, somatosensory, and motor regions during speech processing is, however, seldom taken into account in models of speech perception. We critically review the literature with a focus on temporal information and contrast three alternative models of auditory-motor speech processing: parallel, hierarchical, and interactive. We argue that electrophysiological and transcranial magnetic stimulation studies support the interactive model. The findings reveal that auditory and somatomotor areas are engaged almost simultaneously, before 100 ms. There is also evidence of early interactions between auditory and motor areas. We propose a new interactive model of auditory-motor speech perception in which auditory and articulatory somatomotor areas are connected from early stages of speech processing. We also discuss how attention and other factors can affect the timing and strength of auditory-motor interactions and propose directions for future research.


Subject(s)
Auditory Cortex/physiology, Motor Cortex/physiology, Speech Perception, Connectome, Humans, Psychomotor Performance
13.
Brain Lang; 171: 14-22, 2017 Aug.
Article in English | MEDLINE | ID: mdl-28437659

ABSTRACT

Relationships between maternal education (ME) and both behavioral performance and brain activation during the discrimination of phonemic and nonphonemic sounds were examined using fMRI in children with different levels of phoneme categorization proficiency (CP). Significant relationships were found between ME and both intellectual functioning and vocabulary, with a trend for phonological awareness. A significant interaction between CP and ME was seen for nonverbal reasoning abilities. In addition, fMRI analyses revealed a significant interaction between CP and ME for phonemic discrimination in left prefrontal cortex. Thus, ME was associated with differential patterns of both neuropsychological performance and brain activation, contingent on the level of CP. These results highlight the importance of examining socioeconomic status (SES) effects at different proficiency levels. The pattern of results may suggest the presence of neurobiological differences in children with low CP that affect the nature of their relationships with ME.
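A minimal sketch of testing a CP-by-ME interaction on an outcome such as left prefrontal activation is shown below, using an ordinary least squares model with an interaction term in statsmodels. The simulated data and variable names are hypothetical and do not represent the study's analysis.

```python
# Minimal sketch: test a proficiency (CP) x maternal education (ME) interaction
# on a behavioral or activation measure with OLS. Data and names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 60
df = pd.DataFrame({
    "cp_group": rng.choice(["low", "high"], size=n),   # categorization proficiency group
    "maternal_edu": rng.integers(10, 21, size=n),      # years of maternal education
})
slope = np.where(df["cp_group"] == "high", 0.05, 0.25) # build in a toy interaction
df["lh_pfc_activation"] = slope * df["maternal_edu"] + rng.normal(0, 0.5, n)

# Main effects of CP and ME plus their interaction; the interaction term is the
# effect of interest.
model = smf.ols("lh_pfc_activation ~ C(cp_group) * maternal_edu", data=df).fit()
print(model.summary())
```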


Subject(s)
Educational Status, Language, Social Class, Speech Perception/physiology, Awareness, Child, Child Development, Female, Humans, Linguistics, Magnetic Resonance Imaging, Male, Phonetics, Sound, Vocabulary
15.
Front Neurosci; 10: 506, 2016.
Article in English | MEDLINE | ID: mdl-27877106

ABSTRACT

Rapid assessment of emotions is important for detecting and prioritizing salient input. Emotions are conveyed in spoken words via verbal and non-verbal channels that are mutually informative and unfold in parallel over time, but the neural dynamics and interactions of these processes are not well understood. In this paper, we review the literature on emotion perception in faces, written words, and voices, as a basis for understanding the functional organization of emotion perception in spoken words. The characteristics of visual and auditory routes to the amygdala, a subcortical center for emotion perception, are compared across these stimulus classes in terms of neural dynamics, hemispheric lateralization, and functionality. Converging results from neuroimaging, electrophysiological, and lesion studies suggest the existence of an afferent route to the amygdala and primary visual cortex for fast and subliminal processing of coarse emotional face cues. We suggest that a fast route to the amygdala may also function for brief non-verbal vocalizations (e.g., laughs, cries), in which emotional category is conveyed effectively by voice tone and intensity. However, emotional prosody, which evolves on longer time scales and is conveyed by fine-grained spectral cues, appears to be processed via a slower, indirect cortical route. For verbal emotional content, the bulk of current evidence, indicating predominant left lateralization of the amygdala response and timing of emotional effects attributable to speeded lexical access, is more consistent with an indirect cortical route to the amygdala. Top-down linguistic modulation may play an important role in the prioritized perception of emotions in words. Understanding the neural dynamics and interactions of emotion and language perception is important for selecting potent stimuli and devising effective training and/or treatment approaches for the alleviation of emotional dysfunction across a range of neuropsychiatric states.

16.
Front Neurosci; 8: 386, 2014.
Article in English | MEDLINE | ID: mdl-25520611

ABSTRACT

This paper examines two questions: what levels of speech can be perceived visually, and how is visual speech represented by the brain? Review of the literature leads to the conclusions that every level of psycholinguistic speech structure (i.e., phonetic features, phonemes, syllables, words, and prosody) can be perceived visually, although individuals differ in their abilities to do so, and that there are visual modality-specific representations of speech qua speech in higher-level vision brain areas. That is, the visual system represents the modal patterns of visual speech. The suggestion that the auditory speech pathway receives and represents visual speech is examined in light of neuroimaging evidence on the auditory speech pathways. We outline the generally agreed-upon organization of the visual ventral and dorsal pathways and examine several types of visual processing that might be related to speech through those pathways, specifically face and body, orthography, and sign language processing. In this context, we examine the visual speech processing literature, which reveals widespread, diverse patterns of activity in posterior temporal cortices in response to visual speech stimuli. We outline a model of the visual and auditory speech pathways and make several suggestions: (1) The visual perception of speech relies on visual pathway representations of speech qua speech. (2) A proposed site of these representations, the temporal visual speech area (TVSA), has been demonstrated in posterior temporal cortex, ventral and posterior to the multisensory posterior superior temporal sulcus (pSTS). (3) Given that visual speech has dynamic and configural features, its representations in feedforward visual pathways are expected to integrate these features, possibly in TVSA.

17.
Front Neurosci; 8: 289, 2014.
Article in English | MEDLINE | ID: mdl-25309312

ABSTRACT

The superior temporal sulcus (STS) in the left hemisphere is functionally diverse, with sub-areas implicated in both linguistic and non-linguistic functions. However, the number and boundaries of distinct functional regions remain to be determined. Here, we present new evidence, from meta-analysis of a large number of positron emission tomography (PET) and functional magnetic resonance imaging (fMRI) studies, of different functional specificity in the left STS supporting a division of its middle to terminal extent into at least three functional areas. The middle portion of the left STS stem (fmSTS) is highly specialized for speech perception and the processing of language material. The posterior portion of the left STS stem (fpSTS) is highly versatile and involved in multiple functions supporting semantic memory and associative thinking. The fpSTS responds to both language and non-language stimuli but the sensitivity to non-language material is greater. The horizontal portion of the left STS stem and terminal ascending branches (ftSTS) display intermediate functional specificity, with the anterior-dorsal ascending branch (fatSTS) supporting executive functions and motor planning and showing greater sensitivity to language material, and the horizontal stem and posterior-ventral ascending branch (fptSTS) supporting primarily semantic processing and displaying greater sensitivity to non-language material. We suggest that the high functional specificity of the left fmSTS for speech is an important means by which the human brain achieves exquisite affinity and efficiency for native speech perception. In contrast, the extreme multi-functionality of the left fpSTS reflects the role of this area as a cortical hub for semantic processing and the extraction of meaning from multiple sources of information. Finally, in the left ftSTS, further functional differentiation between the dorsal and ventral aspect is warranted.

18.
Neuropsychologia; 61: 269-279, 2014 Aug.
Article in English | MEDLINE | ID: mdl-24946314

ABSTRACT

Whether and how working memory disrupts or alters auditory selective attention is unclear. We compared simultaneous event-related potential (ERP) and functional magnetic resonance imaging (fMRI) responses associated with task-irrelevant sounds across high and low working memory load in a dichotic-listening paradigm. Participants performed n-back tasks (1-back, 2-back) in one ear (Attend ear) while ignoring task-irrelevant speech sounds in the other ear (Ignore ear). The effects of working memory load on selective attention were observed at 130-210 ms, with higher load resulting in greater irrelevant-syllable-related activation in localizer-defined regions of auditory cortex. The interaction between memory load and the presence of irrelevant information revealed stronger activations, primarily in frontal and parietal areas, due to the presence of irrelevant information under higher memory load. Joint independent component analysis of the ERP and fMRI data revealed that the ERP component in the N1 time range is associated with activity in the superior temporal gyrus and medial prefrontal cortex. These results demonstrate a dynamic relationship between working memory load and auditory selective attention, in agreement with the load model of attention and the idea of common neural resources for memory and attention.


Subject(s)
Attention/physiology, Auditory Perception/physiology, Brain/physiology, Short-Term Memory/physiology, Adult, Electroencephalography/methods, Evoked Potentials, Female, Humans, Magnetic Resonance Imaging/methods, Male, Multimodal Imaging, Neuropsychological Tests, Reaction Time, Computer-Assisted Signal Processing
19.
Brain Struct Funct; 219(4): 1369-1383, 2014 Jul.
Article in English | MEDLINE | ID: mdl-23708059

ABSTRACT

The auditory system is organized such that progressively more complex features are represented across successive cortical hierarchical stages. Just when and where the processing of phonemes, fundamental elements of the speech signal, is achieved in this hierarchy remains a matter of vigorous debate. Non-invasive measures of phonemic representation have been somewhat equivocal. While some studies point to a primary role for middle/anterior regions of the superior temporal gyrus (STG), others implicate the posterior STG. Differences in stimulation, task and inter-individual anatomical/functional variability may account for these discrepant findings. Here, we sought to clarify this issue by mapping phonemic representation across left perisylvian cortex, taking advantage of the excellent sampling density afforded by intracranial recordings in humans. We asked whether one or both major divisions of the STG were sensitive to phonemic transitions. The high signal-to-noise characteristics of direct intracranial recordings allowed for analysis at the individual participant level, circumventing issues of inter-individual anatomic and functional variability that may have obscured previous findings at the group level of analysis. The mismatch negativity (MMN), an electrophysiological response elicited by changes in repetitive streams of stimulation, served as our primary dependent measure. Oddball configurations of pairs of phonemes, spectro-temporally matched non-phonemes, and simple tones were presented. The loci of the MMN clearly differed as a function of stimulus type. Phoneme representation was most robust over middle/anterior STG/STS, but was also observed over posterior STG/SMG. These data point to multiple phonemic processing zones along perisylvian cortex, both anterior and posterior to primary auditory cortex. This finding is considered within the context of a dual stream model of auditory processing in which functionally distinct ventral and dorsal auditory processing pathways may be engaged by speech stimuli.
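For readers unfamiliar with the dependent measure, the sketch below computes a mismatch negativity as a deviant-minus-standard difference wave from simulated single-electrode epochs. The epoch counts, injected deflection, and time window are illustrative assumptions only.

```python
# Minimal sketch of the mismatch negativity (MMN) measure: average deviant and
# standard epochs and take their difference. All data below are simulated.
import numpy as np

rng = np.random.default_rng(0)
n_std, n_dev, n_times = 400, 80, 300                  # single-electrode epochs x samples
standard_epochs = rng.standard_normal((n_std, n_times))
deviant_epochs = rng.standard_normal((n_dev, n_times))
deviant_epochs[:, 100:250] -= 0.5                     # inject a toy negative deflection

# Difference wave: deviant ERP minus standard ERP. The MMN is the negative
# deflection in this difference, typically ~100-250 ms after stimulus onset.
mmn_wave = deviant_epochs.mean(axis=0) - standard_epochs.mean(axis=0)

# Quantify the MMN as the mean amplitude in an a priori window (sample indices
# here stand in for the post-stimulus latency range of interest).
print("MMN mean amplitude:", mmn_wave[100:250].mean())
```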


Subject(s)
Auditory Cortex/physiology, Auditory Pathways/physiology, Auditory Perception/physiology, Speech/physiology, Temporal Lobe/physiology, Acoustic Stimulation, Adolescent, Brain Mapping/methods, Electroencephalography, Female, Functional Laterality, Humans, Language, Male, Young Adult
20.
Neuroimage; 89: 192-202, 2014 Apr 01.
Article in English | MEDLINE | ID: mdl-24315840

ABSTRACT

Neuroimaging studies suggest that categorical perception of speech phonemes in adults is primarily subserved by a pathway from bilateral primary auditory areas to association areas in the left middle superior temporal cortex, but the neural substrates underlying categorical speech perception in children are not yet known. Here, fMRI was used to examine the neural substrates associated with phoneme perception in 7- to 12-year-old children, as well as the relationships among the level of expertise in phoneme perception, the associated activation, and the development of reading and phonological processing abilities. While multiple regions in left frontal, temporal, and parietal cortex were found to be more responsive to phonemic than nonphonemic sounds, the extent of left lateralization in posterior temporal and parietal regions during phonemic relative to nonphonemic discrimination differed depending on the degree of categorical phoneme perception. In addition, an unexpected finding was that proficiency in categorical perception was strongly related to activation in the left ventral occipitotemporal cortex, an area frequently associated with orthographic processing. Furthermore, in children who showed lower proficiency in categorical perception, the level of categorical perception was positively correlated with reading ability, and reading and reading-related abilities were inversely correlated with right mid-temporal activation in the phonemic relative to nonphonemic perception contrast. These results suggest that greater specialization of left hemisphere temporal and parietal regions for the categorical perception of phonemes, as well as activation of the region termed the visual word form area, may be important for the optimal developmental refinement of both phoneme perception and reading ability.


Subject(s)
Brain/physiology, Functional Laterality/physiology, Reading, Speech Perception/physiology, Child, Female, Humans, Magnetic Resonance Imaging, Male, Speech Acoustics