Results 1 - 20 of 57
1.
Cogn Res Princ Implic ; 9(1): 35, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38834918

ABSTRACT

Multilingual speakers can find speech recognition in everyday environments like restaurants and open-plan offices particularly challenging. In a world where speaking multiple languages is increasingly common, effective clinical and educational interventions will require a better understanding of how factors like multilingual contexts and listeners' language proficiency interact with adverse listening environments. For example, word and phrase recognition is facilitated when competing voices speak different languages. Is this due to a "release from masking" from lower-level acoustic differences between languages and talkers, or higher-level cognitive and linguistic factors? To address this question, we created a "one-man bilingual cocktail party" selective attention task using English and Mandarin speech from one bilingual talker to reduce low-level acoustic cues. In Experiment 1, 58 listeners more accurately recognized English targets when distracting speech was Mandarin compared to English. Bilingual Mandarin-English listeners experienced significantly more interference and intrusions from the Mandarin distractor than did English listeners, exacerbated by challenging target-to-masker ratios. In Experiment 2, 29 Mandarin-English bilingual listeners exhibited linguistic release from masking in both languages. Bilinguals experienced greater release from masking when attending to English, confirming an influence of linguistic knowledge on the "cocktail party" paradigm that is separate from primarily energetic masking effects. Effects of higher-order language processing and expertise emerge only in the most demanding target-to-masker contexts. The "one-man bilingual cocktail party" establishes a useful tool for future investigations and characterization of communication challenges in the large and growing worldwide community of Mandarin-English bilinguals.
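The target-to-masker ratio (TMR) manipulation central to this paradigm can be illustrated with a minimal sketch. This is not the authors' stimulus code; the function names and the RMS-based level definition are illustrative assumptions.

```python
import math

def rms(x):
    """Root-mean-square amplitude of a signal (list of samples)."""
    return math.sqrt(sum(s * s for s in x) / len(x))

def mix_at_tmr(target, masker, tmr_db):
    """Scale the masker so that the target-to-masker ratio equals tmr_db,
    where TMR(dB) = 20*log10(rms(target) / rms(scaled masker)),
    then return the sample-wise sum of the two signals."""
    gain = rms(target) / (rms(masker) * 10 ** (tmr_db / 20))
    return [t + gain * m for t, m in zip(target, masker)]
```

Lowering `tmr_db` makes the masker louder relative to the target, which is the "challenging target-to-masker ratio" manipulation described above.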


Subject(s)
Attention, Multilingualism, Speech Perception, Humans, Speech Perception/physiology, Adult, Female, Male, Young Adult, Attention/physiology, Perceptual Masking/physiology, Psycholinguistics
2.
Dev Sci ; 27(1): e13420, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37350014

ABSTRACT

Auditory selective attention forms an important foundation of children's learning by enabling the prioritisation and encoding of relevant stimuli. It may also influence reading development, which relies on metalinguistic skills including the awareness of the sound structure of spoken language. Reports of attentional impairments and speech perception difficulties in noisy environments in dyslexic readers are also suggestive of the putative contribution of auditory attention to reading development. To date, it is unclear whether non-speech selective attention and its underlying neural mechanisms are impaired in children with dyslexia and to what extent these deficits relate to individual reading and speech perception abilities in suboptimal listening conditions. In this EEG study, we assessed non-speech sustained auditory selective attention in 106 7-to-12-year-old children with and without dyslexia. Children attended to one of two tone streams, detecting occasional sequence repeats in the attended stream, and performed a speech-in-speech perception task. Results show that when children directed their attention to one stream, inter-trial phase coherence at the attended rate increased over fronto-central sites; this, in turn, was associated with better target detection. Behavioural and neural indices of attention did not systematically differ as a function of dyslexia diagnosis. However, behavioural indices of attention did explain individual differences in reading fluency and speech-in-speech perception abilities: both these skills were impaired in dyslexic readers. Taken together, our results show that children with dyslexia do not show group-level auditory attention deficits, but that individual attentional difficulties may confer risk for reading impairments and for speech perception problems in complex acoustic environments.
RESEARCH HIGHLIGHTS: Non-speech sustained auditory selective attention modulates EEG phase coherence in children with and without dyslexia. Children with dyslexia show difficulties in speech-in-speech perception. Attention relates to dyslexic readers' speech-in-speech perception and reading skills. Dyslexia diagnosis is not linked to behavioural/EEG indices of auditory attention.


Subject(s)
Dyslexia, Speech Perception, Child, Humans, Reading, Sound, Speech, Speech Disorders, Phonetics
3.
Cognition ; 237: 105467, 2023 08.
Article in English | MEDLINE | ID: mdl-37148640

ABSTRACT

Multiple lines of research have developed training approaches that foster category learning, with important translational implications for education. Increasing exemplar variability, blocking or interleaving by category-relevant dimension, and providing explicit instructions about diagnostic dimensions each have been shown to facilitate category learning and/or generalization. However, laboratory research often must distill the character of natural input regularities that define real-world categories. As a result, much of what we know about category learning has come from studies with simplifying assumptions. We challenge the implicit expectation that these studies reflect the process of category learning of real-world input by creating an auditory category learning paradigm that intentionally violates some common simplifying assumptions of category learning tasks. Across five experiments and nearly 300 adult participants, we used training regimes previously shown to facilitate category learning, but here drew from a more complex and multidimensional category space with tens of thousands of unique exemplars. Learning was equivalently robust across training regimes that changed exemplar variability, altered the blocking of category exemplars, or provided explicit instructions of the category-diagnostic dimension. Each drove essentially equivalent accuracy measures of learning generalization following 40 min of training. These findings suggest that auditory category learning across complex input is not as susceptible to training regime manipulation as previously thought.


Subject(s)
Generalization, Psychological, Learning, Adult, Humans, Concept Formation
4.
Neuroimage Clin ; 36: 103240, 2022.
Article in English | MEDLINE | ID: mdl-36510411

ABSTRACT

Leber Hereditary Optic Neuropathy (LHON) is an inherited mitochondrial retinal disease that causes the degeneration of retinal ganglion cells and leads to drastic loss of visual function. In the last decades, there has been a growing interest in using Magnetic Resonance Imaging (MRI) to better understand mechanisms of LHON beyond the retina. This is partially due to the emergence of gene-therapies for retinal diseases, and the accompanying expanded need for reliably quantifying and monitoring visual processing and treatment efficiency in patient populations. This paper aims to draw a current picture of key findings in this field so far, the challenges of using neuroimaging methods in patients with LHON, and important open questions that MRI can help address about LHON disease mechanisms and prognoses, including how downstream visual brain regions are affected by the disease and treatment and why, and how scope for neural plasticity in these pathways may limit or facilitate recovery.


Subject(s)
Mitochondrial Diseases, Optic Atrophy, Hereditary, Leber, Humans, Optic Atrophy, Hereditary, Leber/diagnostic imaging, Optic Atrophy, Hereditary, Leber/genetics, Optic Atrophy, Hereditary, Leber/metabolism, Retinal Ganglion Cells/metabolism, Retina/diagnostic imaging, Retina/pathology, Magnetic Resonance Imaging
5.
Trends Hear ; 26: 23312165221118792, 2022.
Article in English | MEDLINE | ID: mdl-36131515

ABSTRACT

Most human auditory psychophysics research has historically been conducted in carefully controlled environments with calibrated audio equipment, and over potentially hours of repetitive testing with expert listeners. Here, we operationally define such conditions as having high 'auditory hygiene'. From this perspective, conducting auditory psychophysical paradigms online presents a serious challenge, in that results may hinge on absolute sound presentation level, reliably estimated perceptual thresholds, low and controlled background noise levels, and sustained motivation and attention. We introduce a set of procedures that address these challenges and facilitate auditory hygiene for online auditory psychophysics. First, we establish a simple means of setting sound presentation levels. Across a set of four level-setting conditions conducted in person, we demonstrate the stability and robustness of this level setting procedure in open air and controlled settings. Second, we test participants' tone-in-noise thresholds using widely adopted online experiment platforms and demonstrate that reliable threshold estimates can be derived online in approximately one minute of testing. Third, using these level and threshold setting procedures to establish participant-specific stimulus conditions, we show that an online implementation of the classic probe-signal paradigm can be used to demonstrate frequency-selective attention on an individual-participant basis, using a third of the trials used in recent in-lab experiments. Finally, we show how threshold and attentional measures relate to well-validated assays of online participants' in-task motivation, fatigue, and confidence. This demonstrates the promise of online auditory psychophysics for addressing new auditory perception and neuroscience questions quickly, efficiently, and with more diverse samples. Code for the tests is publicly available through Pavlovia and Gorilla.
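Tone-in-noise thresholds of the kind estimated here are commonly obtained with adaptive staircases. The abstract does not specify the exact tracking rule; the sketch below assumes a generic 2-down/1-up track (converging near 70.7% correct), with the threshold taken as the mean of the reversal levels. Names and parameters are illustrative.

```python
def staircase(respond, start_db, step_db, n_reversals):
    """2-down/1-up adaptive track: the signal level drops after two
    consecutive correct trials and rises after any error.
    `respond(level)` runs one trial and returns True if correct.
    Returns the mean of the first `n_reversals` reversal levels."""
    level, correct_run, direction = start_db, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:          # two in a row -> make it harder
                correct_run = 0
                if direction == 1:        # track was going up: reversal
                    reversals.append(level)
                direction = -1
                level -= step_db
        else:                             # any miss -> make it easier
            correct_run = 0
            if direction == -1:           # track was going down: reversal
                reversals.append(level)
            direction = 1
            level += step_db
    return sum(reversals) / len(reversals)
```

With a short step size and a handful of reversals, such a track can converge in roughly a minute of trials, consistent with the rapid online threshold estimates described above.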


Subject(s)
Auditory Perception, Noise, Auditory Threshold, Humans, Psychophysics
6.
Neurosci Biobehav Rev ; 139: 104730, 2022 08.
Article in English | MEDLINE | ID: mdl-35691470

ABSTRACT

The English idiom "on the tip of my tongue" commonly acknowledges that something is known, but it cannot be immediately brought to mind. This phrase accurately describes sensorimotor functions of the tongue, which are fundamental for many tongue-related behaviors (e.g., speech), but often neglected by scientific research. Here, we review a wide range of studies conducted on non-primates, non-human and human primates with the aim of providing a comprehensive description of the cortical representation of the tongue's somatosensory inputs and motor outputs across different phylogenetic domains. First, we summarize how the properties of passive non-noxious mechanical stimuli are encoded in the putative somatosensory tongue area, which has a conserved location in the ventral portion of the somatosensory cortex across mammals. Second, we review how complex self-generated actions involving the tongue are represented in more anterior regions of the putative somato-motor tongue area. Finally, we describe multisensory response properties of the primate and non-primate tongue area by also defining how the cytoarchitecture of this area is affected by experience and deafferentation.


Subject(s)
Language, Somatosensory Cortex, Animals, Brain Mapping, Humans, Mammals, Phylogeny, Primates, Somatosensory Cortex/physiology, Tongue
7.
J Exp Psychol Learn Mem Cogn ; 48(6): 769-784, 2022 Jun.
Article in English | MEDLINE | ID: mdl-34570548

ABSTRACT

Category learning is fundamental to cognition, but little is known about how it proceeds in real-world environments when learners do not have instructions to search for category-relevant information, do not make overt category decisions, and do not experience direct feedback. Prior research demonstrates that listeners can acquire task-irrelevant auditory categories incidentally as they engage in primarily visuomotor tasks. The current study examines the factors that support this incidental category learning. Three experiments systematically manipulated the relationship of four novel auditory categories with a consistent visual feature (color or location) that informed a simple behavioral keypress response regarding the visual feature. In both an in-person experiment and two online replications with extensions, incidental auditory category learning occurred reliably when category exemplars consistently aligned with visuomotor demands of the primary task, but not when they were misaligned. The presence of an additional irrelevant visual feature that was uncorrelated with the primary task demands neither enhanced nor harmed incidental learning. By contrast, incidental learning did not occur when auditory categories were aligned consistently with one visual feature, but the motor response in the primary task was aligned with another, category-unaligned visual feature. Moreover, category learning did not reliably occur across passive observation or when participants made a category-nonspecific, generic motor response. These findings show that incidental learning of categories is strongly mediated by the character of coincident behavior. (PsycInfo Database Record (c) 2022 APA, all rights reserved).


Subject(s)
Cognition, Learning, Feedback, Humans, Learning/physiology
8.
J Exp Psychol Gen ; 151(3): 555-577, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34582231

ABSTRACT

Statistical learning plays an important role in acquiring the structure of cultural communication signals such as speech and music, which are both perceived and reproduced. However, statistical learning is typically investigated through passive exposure to structured signals, followed by offline explicit recognition tasks assessing the degree of learning. Such experimental approaches fail to capture statistical learning as it takes place and require post hoc conscious reflection on what is thought to be an implicit process of knowledge acquisition. To better understand the process of statistical learning in active contexts while addressing these shortcomings, we introduce a novel, processing-based measure of statistical learning based on the position of errors in sequence reproduction. Across five experiments, we employed this new technique to assess statistical learning using artificial pure-tone or environmental-sound languages with controlled statistical properties in passive exposure, active reproduction, and explicit recognition tasks. The new error position measure provided a robust, online indicator of statistical learning during reproduction, with little carryover from prior statistical learning via passive exposure and no correlation with recognition-based estimates of statistical learning. Error position effects extended consistently across auditory domains, including sequences of pure tones and environmental sounds. Whereas recall performance showed significant variability across experiments, and little evidence of being improved by statistical learning, the error position effect was highly consistent for all participant groups, including musicians and nonmusicians. We discuss the implications of these results for understanding psychological mechanisms underlying statistical learning and compare the evidence provided by different experimental measures. (PsycInfo Database Record (c) 2022 APA, all rights reserved).
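The error-position measure can be illustrated schematically: compare a reproduced sequence against the target and ask whether errors cluster at statistical word boundaries (low transition-probability positions). The functions and the fixed word length below are illustrative assumptions, not the authors' implementation.

```python
def error_positions(target, response):
    """Indices at which the reproduced sequence diverges from the target."""
    return [i for i, (t, r) in enumerate(zip(target, response)) if t != r]

def boundary_error_rate(target, response, word_len):
    """Fraction of reproduction errors that fall on statistical word onsets,
    assuming fixed-length words so that onsets sit at multiples of word_len."""
    errs = error_positions(target, response)
    if not errs:
        return 0.0
    on_boundary = sum(1 for i in errs if i % word_len == 0)
    return on_boundary / len(errs)
```

Comparing this rate against the chance proportion of boundary positions gives a per-trial, processing-based index of learning that needs no post hoc recognition judgment.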


Subject(s)
Music, Speech Perception, Humans, Learning, Recognition, Psychology, Reproduction
9.
Brain ; 144(7): 2120-2134, 2021 08 17.
Article in English | MEDLINE | ID: mdl-33725125

ABSTRACT

Post-stroke cognitive and linguistic impairments are debilitating conditions, with limited therapeutic options. Domain-general brain networks play an important role in stroke recovery, and characterizing their residual function with functional MRI has the potential to yield biomarkers capable of guiding patient-specific rehabilitation. However, this is challenging, as such detailed characterization requires testing patients on multitudes of cognitive tasks in the scanner, rendering experimental sessions unfeasibly lengthy. Thus, the current status quo in clinical neuroimaging research involves testing patients on a very limited number of tasks, in the hope that it will reveal a useful neuroimaging biomarker for the whole cohort. Given the great heterogeneity among stroke patients and the volume of possible tasks, this approach is unsustainable. Advancing task-based functional MRI biomarker discovery requires a paradigm shift: the ability to swiftly characterize residual network activity in individual patients using a diverse range of cognitive tasks. Here, we overcome this problem by leveraging neuroadaptive Bayesian optimization, an approach that combines real-time functional MRI with machine learning. By intelligently searching across many tasks, this approach rapidly maps out patient-specific profiles of residual domain-general network function. We used this technique in a cross-sectional study with 11 left-hemispheric stroke patients with chronic aphasia (four female, age ± standard deviation: 59 ± 10.9 years) and 14 healthy, age-matched control subjects (eight female, age ± standard deviation: 55.6 ± 6.8 years). To assess the intra-subject reliability of the functional profiles obtained, we conducted two independent runs per subject, for which the algorithm was entirely reinitialized. Our results demonstrate that this technique is both feasible and robust, yielding reliable patient-specific functional profiles.
Moreover, we show that group-level results are not representative of patient-specific results. Whereas controls have highly similar profiles, patients show idiosyncratic profiles of network abnormalities that are associated with behavioural performance. In summary, our study highlights the importance of moving beyond traditional 'one-size-fits-all' approaches where patients are treated as one group and single tasks are used. Our approach can be extended to diverse brain networks and combined with brain stimulation or other therapeutics, thereby opening new avenues for precision medicine targeting a diverse range of neurological and psychiatric conditions.


Subject(s)
Brain Mapping/methods, Brain/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Machine Learning, Stroke/diagnostic imaging, Adult, Aged, Bayes Theorem, Brain/physiopathology, Female, Humans, Magnetic Resonance Imaging/methods, Male, Middle Aged, Stroke/physiopathology
10.
Elife ; 9: 2020 08 07.
Article in English | MEDLINE | ID: mdl-32762842

ABSTRACT

Individuals with congenital amusia have a lifelong history of unreliable pitch processing. Accordingly, they downweight pitch cues during speech perception and instead rely on other dimensions such as duration. We investigated the neural basis for this strategy. During fMRI, individuals with amusia (N = 15) and controls (N = 15) read sentences where a comma indicated a grammatical phrase boundary. They then heard two sentences spoken that differed only in pitch and/or duration cues and selected the best match for the written sentence. Prominent reductions in functional connectivity were detected in the amusia group between left prefrontal language-related regions and right hemisphere pitch-related regions, which reflected the between-group differences in cue weights in the same groups of listeners. Connectivity differences between these regions were not present during a control task. Our results indicate that the reliability of perceptual dimensions is linked with functional connectivity between frontal and perceptual regions and suggest a compensatory mechanism.


Spoken language is colored by fluctuations in pitch and rhythm. Rather than speaking in a flat monotone, we allow our sentences to rise and fall. We vary the length of syllables, drawing out some, and shortening others. These fluctuations, known as prosody, add emotion to speech and denote punctuation. In written language, we use a comma or a period to signal a boundary between phrases. In speech, we use changes in pitch (how deep or sharp a voice sounds) or in the length of syllables. Having more than one type of cue that can signal emotion or transitions between sentences has a number of advantages. It means that people can understand each other even when factors such as background noise obscure one set of cues. It also means that people with impaired sound perception can still understand speech. Those with a condition called congenital amusia, for example, struggle to perceive pitch, but they can compensate for this difficulty by placing greater emphasis on other aspects of speech. Jasmin et al. showed how the brain achieves this by comparing the brain activities of people with and without amusia. Participants were asked to read sentences on a screen where a comma indicated a boundary between two phrases. They then heard two spoken sentences, and had to choose the one that matched the written sentence. The spoken sentences used changes in pitch and/or syllable duration to signal the position of the comma. This provided listeners with the information needed to distinguish between "after John runs the race, ..." and "after John runs, the race...", for example. When two brain regions communicate, they tend to increase their activity at around the same time. The brain regions are then said to show functional connectivity. Jasmin et al. found that compared to healthy volunteers, people with amusia showed less functional connectivity between left hemisphere brain regions that process language and right hemisphere regions that process pitch.
In other words, because pitch is a less reliable source of information for people with amusia, they recruit pitch-related brain regions less when processing speech. These results add to our understanding of how brains compensate for impaired perception. This may be useful for understanding the neural basis of compensation in other clinical conditions. It could also help us design bespoke hearing aids or other communication devices, such as computer programs that convert text into speech. Such programs could tailor the pitch and rhythm characteristics of the speech they produce to suit the perception of individual users.


Subject(s)
Auditory Perceptual Disorders/physiopathology, Speech Perception/physiology, Adult, Aged, Female, Humans, Magnetic Resonance Imaging, Middle Aged, United Kingdom
11.
Wellcome Open Res ; 5: 4, 2020.
Article in English | MEDLINE | ID: mdl-35282675

ABSTRACT

Prosody can be defined as the rhythm and intonation patterns spanning words, phrases and sentences. Accurate perception of prosody is an important component of many aspects of language processing, such as parsing grammatical structures, recognizing words, and determining where emphasis may be placed. Prosody perception is important for language acquisition and can be impaired in language-related developmental disorders. However, existing assessments of prosodic perception suffer from some shortcomings.  These include being unsuitable for use with typically developing adults due to ceiling effects, or failing to allow the investigator to distinguish the unique contributions of individual acoustic features such as pitch and temporal cues. Here we present the Multi-Dimensional Battery of Prosody Perception (MBOPP), a novel tool for the assessment of prosody perception. It consists of two subtests: Linguistic Focus, which measures the ability to hear emphasis or sentential stress, and Phrase Boundaries, which measures the ability to hear where in a compound sentence one phrase ends, and another begins. Perception of individual acoustic dimensions (Pitch and Time) can be examined separately, and test difficulty can be precisely calibrated by the experimenter because stimuli were created using a continuous voice morph space. We present validation analyses from a sample of 57 individuals and discuss how the battery might be deployed to examine perception of prosody in various populations.

12.
Elife ; 7: 2018 10 22.
Article in English | MEDLINE | ID: mdl-30346274

ABSTRACT

Distinct anatomical and spectral channels are thought to play specialized roles in communication within cortical networks. While activity in the alpha and beta frequency range (7-40 Hz) is thought to predominantly originate from infragranular cortical layers conveying feedback-related information, activity in the gamma range (>40 Hz) dominates in supragranular layers communicating feedforward signals. We leveraged high-precision MEG to test this proposal, directly and non-invasively, in human participants performing visually cued actions. We found that visual alpha mapped onto deep cortical laminae, whereas visual gamma predominantly occurred more superficially. This lamina-specificity was echoed in movement-related sensorimotor beta and gamma activity. These lamina-specific pre- and post-movement changes in sensorimotor beta and gamma activity suggest a more complex functional role than the proposed feedback and feedforward communication in sensory cortex. Distinct frequency channels thus operate in a lamina-specific manner across cortex, but may fulfill distinct functional roles in sensory and motor processes.


Subject(s)
Feedback, Sensory, Psychomotor Performance/physiology, Sensorimotor Cortex/physiology, Visual Cortex/physiology, Adult, Alpha Rhythm, Beta Rhythm, Brain Mapping, Female, Gamma Rhythm, Humans, Male, Movement/physiology, Parietal Lobe/physiology, Spiral Lamina/physiology
13.
Hear Res ; 366: 50-64, 2018 09.
Article in English | MEDLINE | ID: mdl-30131109

ABSTRACT

The contribution of acoustic dimensions to an auditory percept is dynamically adjusted and reweighted based on prior experience about how informative these dimensions are across the long-term and short-term environment. This is especially evident in speech perception, where listeners differentially weight information across multiple acoustic dimensions, and use this information selectively to update expectations about future sounds. The dynamic and selective adjustment of how acoustic input dimensions contribute to perception has made it tempting to conceive of this as a form of non-spatial auditory selective attention. Here, we review several human speech perception phenomena that might be consistent with auditory selective attention although, as of yet, the literature does not definitively support a mechanistic tie. We relate these human perceptual phenomena to illustrative nonhuman animal neurobiological findings that offer informative guideposts in how to test mechanistic connections. We next present a novel empirical approach that can serve as a methodological bridge from human research to animal neurobiological studies. Finally, we describe four preliminary results that demonstrate its utility in advancing understanding of human non-spatial dimension-based auditory selective attention.


Subject(s)
Attention/physiology, Speech Perception/physiology, Acoustic Stimulation, Animals, Auditory Cortex/anatomy & histology, Auditory Cortex/physiology, Auditory Perception/physiology, Brain Mapping, Humans, Learning/physiology, Models, Neurological, Models, Psychological, Psychoacoustics, Species Specificity, Speech Acoustics
14.
Neuroimage ; 178: 574-582, 2018 09.
Article in English | MEDLINE | ID: mdl-29860083

ABSTRACT

Speech sounds are encoded by distributed patterns of activity in bilateral superior temporal cortex. However, it is unclear whether speech sounds are topographically represented in cortex, or which acoustic or phonetic dimensions might be spatially mapped. Here, using functional MRI, we investigated the potential spatial representation of vowels, which are largely distinguished from one another by the frequencies of their first and second formants, i.e. peaks in their frequency spectra. This allowed us to generate clear hypotheses about the representation of specific vowels in tonotopic regions of auditory cortex. We scanned participants as they listened to multiple natural tokens of the vowels [ɑ] and [i], which we selected because their first and second formants overlap minimally. Formant-based regions of interest were defined for each vowel based on spectral analysis of the vowel stimuli and independently acquired tonotopic maps for each participant. We found that perception of [ɑ] and [i] yielded differential activation of tonotopic regions corresponding to formants of [ɑ] and [i], such that each vowel was associated with increased signal in tonotopic regions corresponding to its own formants. This pattern was observed in Heschl's gyrus and the superior temporal gyrus, in both hemispheres, and for both the first and second formants. Using linear discriminant analysis of mean signal change in formant-based regions of interest, the identity of untrained vowels was predicted with ∼73% accuracy. Our findings show that cortical encoding of vowels is scaffolded on tonotopy, a fundamental organizing principle of auditory cortex that is not language-specific.
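The classification analysis above reached ~73% accuracy using linear discriminant analysis on mean signal change in formant-based regions of interest. The sketch below substitutes a simpler nearest-centroid rule (equivalent to LDA under equal, spherical class covariances); function names and data are illustrative, not the authors' code.

```python
def centroids(samples):
    """Mean feature vector per vowel from labelled (label, features) pairs,
    e.g. features = mean signal change in [F1-ROI, F2-ROI]."""
    sums, counts = {}, {}
    for label, feats in samples:
        acc = sums.setdefault(label, [0.0] * len(feats))
        for j, v in enumerate(feats):
            acc[j] += v
        counts[label] = counts.get(label, 0) + 1
    return {lab: [v / counts[lab] for v in acc] for lab, acc in sums.items()}

def classify(feats, cents):
    """Assign the vowel whose centroid is nearest in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(cents, key=lambda lab: dist(feats, cents[lab]))
```

As in the study, training on labelled trials and predicting held-out trials gives a cross-validated estimate of how well ROI activation separates the vowels.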


Subject(s)
Auditory Cortex/physiology, Brain Mapping/methods, Phonetics, Speech Perception/physiology, Acoustic Stimulation, Adult, Female, Humans, Magnetic Resonance Imaging/methods, Male
15.
Neuroimage ; 182: 429-440, 2018 11 15.
Article in English | MEDLINE | ID: mdl-29203455

ABSTRACT

Measuring the structural composition of the cortex is critical to understanding typical development, yet few investigations in humans have charted markers in vivo that are sensitive to tissue microstructural attributes. Here, we used a well-validated quantitative MR protocol to measure four parameters (R1, MT, R2*, PD*) that differ in their sensitivity to facets of the tissue microstructural environment (R1, MT: myelin, macromolecular content; R2*: myelin, paramagnetic ions, i.e., iron; PD*: free water content). Mapping these parameters across cortical regions in a young adult cohort (18-39 years, N = 93) revealed expected patterns of increased macromolecular content as well as reduced tissue water content in primary and primary adjacent cortical regions. Mapping across cortical depth within regions showed decreased expression of myelin and related processes - but increased tissue water content - when progressing from the grey/white to the grey/pial boundary, in all regions. Charting developmental change in cortical microstructure cross-sectionally, we found that parameters with sensitivity to tissue myelin (R1 & MT) showed linear increases with age across frontal and parietal cortex (change 0.5-1.0% per year). Overlap of robust age effects for both parameters emerged in left inferior frontal, right parietal and bilateral pre-central regions. Our findings afford an improved understanding of ontogeny in early adulthood and offer normative quantitative MR data for inter- and intra-cortical composition, which may be used as benchmarks in further studies.


Subject(s)
Body Water/diagnostic imaging, Cerebral Cortex/anatomy & histology, Cerebral Cortex/diagnostic imaging, Magnetic Resonance Imaging/methods, Myelin Sheath, Neuroimaging/methods, Adolescent, Adult, Age Factors, Female, Humans, Male, Young Adult
16.
J Neurosci ; 37(50): 12187-12201, 2017 12 13.
Article in English | MEDLINE | ID: mdl-29109238

ABSTRACT

Auditory selective attention is vital in natural soundscapes. But it is unclear how attentional focus on the primary dimension of auditory representation, acoustic frequency, might modulate basic auditory functional topography during active listening. In contrast to visual selective attention, which is supported by motor-mediated optimization of input across saccades and pupil dilation, the primate auditory system has fewer means of differentially sampling the world. This makes spectrally-directed endogenous attention a particularly crucial aspect of auditory attention. Using a novel functional paradigm combined with quantitative MRI, we establish in male and female listeners that human frequency-band-selective attention drives activation in both myeloarchitectonically estimated auditory core, and across the majority of tonotopically mapped nonprimary auditory cortex. The attentionally driven best-frequency maps show strong concordance with sensory-driven maps in the same subjects across much of the temporal plane, with poor concordance in areas outside traditional auditory cortex. There is significantly greater activation across most of auditory cortex when best frequency is attended, versus ignored; the same regions do not show this enhancement when attending to the least-preferred frequency band. Finally, the results demonstrate that there is spatial correspondence between the degree of myelination and the strength of the tonotopic signal across a number of regions in auditory cortex. Strong frequency preferences across tonotopically mapped auditory cortex spatially correlate with R1-estimated myeloarchitecture, indicating shared functional and anatomical organization that may underlie intrinsic auditory regionalization.

SIGNIFICANCE STATEMENT: Perception is an active process, especially sensitive to attentional state. Listeners direct auditory attention to track a violin's melody within an ensemble performance, or to follow a voice in a crowded cafe.
Although diverse pathologies reduce quality of life by impacting such spectrally directed auditory attention, its neurobiological bases are unclear. We demonstrate that human primary and nonprimary auditory cortical activation is modulated by spectrally directed attention in a manner that recapitulates its tonotopic sensory organization. Further, the graded activation profiles evoked by single-frequency bands are correlated with attentionally driven activation when these bands are presented in complex soundscapes. Finally, we observe a strong concordance in the degree of cortical myelination and the strength of tonotopic activation across several auditory cortical regions.
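The abstract's final analysis is a spatial correlation between an R1 (= 1/T1) myelin proxy and the strength of tonotopic frequency preference. A minimal illustrative sketch of that kind of vertex-wise correlation, using synthetic data (all values and names here are assumptions for illustration, not data from the article):

```python
import math
import random

def pearson_r(x, y):
    """Pearson correlation between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic per-vertex data: an R1 (= 1/T1) myelin proxy and a tonotopic
# selectivity measure that shares structure with it, plus noise.
random.seed(0)
r1 = [0.6 + 0.2 * random.random() for _ in range(500)]
tonotopic_strength = [2.0 * v + 0.05 * random.gauss(0, 1) for v in r1]

# A strong positive spatial correlation mirrors the reported concordance
# between myeloarchitecture and tonotopic signal strength.
r = pearson_r(r1, tonotopic_strength)
```

In practice such analyses run on surface-mapped voxel or vertex values and control for spatial autocorrelation; this sketch shows only the core computation.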


Subject(s)
Attention/physiology , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping/methods , Magnetic Resonance Imaging/methods , Acoustic Stimulation , Adult , Auditory Cortex/ultrastructure , Cues , Female , Fourier Analysis , Humans , Male , Middle Aged , Myelin Sheath/ultrastructure , Pattern Recognition, Physiological/physiology , Pitch Perception/physiology , Sound Spectrography
17.
PLoS One ; 12(7): e0178356, 2017.
Article in English | MEDLINE | ID: mdl-28704379

ABSTRACT

The ability to reproduce novel words is a sensitive marker of language impairment across a variety of developmental disorders. Nonword repetition tasks are thought to reflect phonological short-term memory skills. Yet, when children hear and then utter a word for the first time, they must transform a novel speech signal into a series of coordinated, precisely timed oral movements. Little is known about how children's oromotor speed, planning, and coordination abilities might influence their ability to repeat novel nonwords, beyond the influence of higher-level cognitive and linguistic skills. In the present study, we tested 35 typically developing children between 5 and 8 years of age on measures of nonword repetition, digit span, memory for non-verbal sequences, reading fluency, oromotor praxis, and oral diadochokinesis. We found that oromotor praxis uniquely predicted nonword repetition ability in school-age children, and that the variance it accounted for was additional to that of digit span, memory for non-verbal sequences, articulatory rate (measured by oral diadochokinesis), and reading fluency. We conclude that the ability to compute and execute novel sensorimotor transformations affects the production of novel words. These results have important implications for understanding motor/language relations in neurodevelopmental disorders.
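The "unique variance beyond other predictors" claim corresponds to an incremental R² (squared semipartial correlation): residualize the predictor of interest on the covariate, then correlate the residuals with the outcome. A minimal sketch with synthetic scores (variable names, effect sizes, and the single-covariate simplification are assumptions for illustration, not the article's analysis):

```python
import math
import random

def simple_ols_residuals(y, x):
    """Residuals of y after removing its linear dependence on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)
    return [b - (my + beta * (a - mx)) for a, b in zip(x, y)]

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Synthetic standardized scores: nonword repetition depends on both
# digit span and oromotor praxis, plus noise.
random.seed(1)
n = 200
digit_span = [random.gauss(0, 1) for _ in range(n)]
praxis = [random.gauss(0, 1) for _ in range(n)]
nonword = [0.5 * d + 0.4 * p + random.gauss(0, 0.5)
           for d, p in zip(digit_span, praxis)]

# Incremental R^2 of praxis beyond digit span = squared semipartial
# correlation: correlate the outcome with praxis residualized on digit span.
praxis_resid = simple_ols_residuals(praxis, digit_span)
delta_r2 = pearson_r(nonword, praxis_resid) ** 2
```

The full analysis would residualize on all covariates at once (digit span, sequence memory, articulatory rate, reading fluency); the one-covariate case shows the logic.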


Subject(s)
Language Development Disorders/diagnosis , Memory, Short-Term/physiology , Child , Female , Humans , Language Development Disorders/physiopathology , Language Tests , Male , Phonetics , Reading , Speech , Speech Production Measurement
18.
Cereb Cortex ; 27(1): 265-278, 2017 01 01.
Article in English | MEDLINE | ID: mdl-28069761

ABSTRACT

Speech articulation requires precise control of and coordination between the effectors of the vocal tract (e.g., lips, tongue, soft palate, and larynx). However, it is unclear how the cortex represents movements of and contact between these effectors during speech, or how these cortical responses relate to inter-regional anatomical borders. Here, we used phase-encoded fMRI to map somatomotor representations of speech articulations. Phonetically trained participants produced speech phones, progressing from front (bilabial) to back (glottal) place of articulation. Maps of cortical myelin proxies (R1 = 1/T1) further allowed us to situate functional maps with respect to anatomical borders of motor and somatosensory regions. Across participants, we found a consistent topological map of place of articulation, spanning the central sulcus and primary motor and somatosensory areas, that moved from lateral to inferior as place of articulation progressed from front to back. Phones produced at velar and glottal places of articulation activated the inferior aspect of the central sulcus, but with considerable across-subject variability. R1 maps for a subset of participants revealed that articulator maps extended posteriorly into secondary somatosensory regions. These results show consistent topological organization of cortical representations of the vocal apparatus in the context of speech behavior.
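In phase-encoded fMRI mapping like this, the stimulus (here, place of articulation) cycles periodically through the run, and each voxel's preferred stimulus is read off as the phase of its time series at the cycling frequency. A minimal sketch of that phase estimation on a simulated voxel (the run length, cycle count, and noise-free response are assumptions for illustration):

```python
import cmath
import math

def phase_at_frequency(ts, n_cycles):
    """Recover the response phase of a time series at the stimulation
    frequency (n_cycles full stimulus cycles per run). In phase-encoded
    mapping, this phase indexes where in the front-to-back articulation
    cycle the voxel responds most strongly."""
    n = len(ts)
    # Single-frequency discrete Fourier coefficient at the cycling frequency.
    coef = sum(v * cmath.exp(-2j * math.pi * n_cycles * t / n)
               for t, v in enumerate(ts))
    # Negate the DFT phase so a response cos(2*pi*f*t/N - phi) maps back to phi.
    return (-cmath.phase(coef)) % (2 * math.pi)

# Simulate a voxel that responds at phase 2.0 of an 8-cycle, 256-timepoint run.
n_tr, n_cycles, true_phase = 256, 8, 2.0
ts = [math.cos(2 * math.pi * n_cycles * t / n_tr - true_phase)
      for t in range(n_tr)]
est = phase_at_frequency(ts, n_cycles)  # recovers true_phase
```

Real analyses add hemodynamic-delay correction and compute this per voxel over the cortical surface; the single-frequency Fourier step above is the core of the method.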


Subject(s)
Brain Mapping/methods , Cerebral Cortex/anatomy & histology , Neural Pathways/anatomy & histology , Adult , Female , Humans , Laryngeal Nerves/anatomy & histology , Larynx , Lip/innervation , Magnetic Resonance Imaging , Male , Middle Aged , Palate, Soft/innervation , Tongue/innervation , Young Adult
19.
Nat Commun ; 6: 8901, 2015 Nov 17.
Article in English | MEDLINE | ID: mdl-26573340

ABSTRACT

An evolutionary account of human language as a neurobiological system must distinguish between human-unique neurocognitive processes supporting language and evolutionarily conserved, domain-general processes that can be traced back to our primate ancestors. Neuroimaging studies across species may determine whether candidate neural processes are supported by homologous, functionally conserved brain areas or by different neurobiological substrates. Here we use functional magnetic resonance imaging in Rhesus macaques and humans to examine the brain regions involved in processing the ordering relationships between auditory nonsense words in rule-based sequences. We find that key regions in the human ventral frontal and opercular cortex have functional counterparts in the monkey brain. These regions are also known to be associated with initial stages of human syntactic processing. This study raises the possibility that certain ventral frontal neural systems, which play a significant role in language function in modern humans, originally evolved to support domain-general abilities involved in sequence processing.


Subject(s)
Auditory Perception/physiology , Frontal Lobe/physiology , Language , Animals , Biological Evolution , Eye Movement Measurements , Female , Functional Neuroimaging , Humans , Macaca mulatta , Magnetic Resonance Imaging , Male , Video Recording , Young Adult
20.
J Exp Psychol Hum Percept Perform ; 41(4): 1124-38, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26010588

ABSTRACT

Very little is known about how auditory categories are learned incidentally, without instructions to search for category-diagnostic dimensions, overt category decisions, or experimenter-provided feedback. This is an important gap because learning in the natural environment does not arise from explicit feedback and there is evidence that the learning systems engaged by traditional tasks are distinct from those recruited by incidental category learning. We examined incidental auditory category learning with a novel paradigm, the Systematic Multimodal Associations Reaction Time (SMART) task, in which participants rapidly detect and report the appearance of a visual target in 1 of 4 possible screen locations. Although the overt task is rapid visual detection, a brief sequence of sounds precedes each visual target. These sounds are drawn from 1 of 4 distinct sound categories that predict the location of the upcoming visual target. These many-to-one auditory-to-visuomotor correspondences support incidental auditory category learning. Participants incidentally learn categories of complex acoustic exemplars and generalize this learning to novel exemplars and tasks. Further, learning is facilitated when category exemplar variability is more tightly coupled to the visuomotor associations than when the same stimulus variability is experienced across trials. We relate these findings to phonetic category learning.
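The many-to-one structure the abstract describes (several exemplars per sound category, every category predicting one screen location) can be sketched as a trial generator. This is an illustrative reconstruction of the task structure only; the category names, exemplar counts, and sequence length are assumptions, not the published stimulus set:

```python
import random

# Four hypothetical sound categories; every exemplar of a category predicts
# the same one of four screen locations (many-to-one correspondence).
CATEGORY_TO_LOCATION = {"cat_A": 0, "cat_B": 1, "cat_C": 2, "cat_D": 3}
EXEMPLARS = {c: [f"{c}_ex{i}" for i in range(5)] for c in CATEGORY_TO_LOCATION}

def make_trial(rng):
    """One SMART-style trial: a brief sound sequence drawn from a single
    category, followed by a visual target at that category's location."""
    category = rng.choice(list(CATEGORY_TO_LOCATION))
    sounds = [rng.choice(EXEMPLARS[category]) for _ in range(5)]
    return {"sounds": sounds, "target_location": CATEGORY_TO_LOCATION[category]}

rng = random.Random(0)
trials = [make_trial(rng) for _ in range(100)]
```

Because the overt task is only rapid visual detection, any speeding of responses to predicted locations indexes incidental learning of the sound categories.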


Subject(s)
Auditory Perception/physiology , Concept Formation/physiology , Learning/physiology , Psychomotor Performance/physiology , Visual Perception/physiology , Adult , Humans , Young Adult