1.
Percept Mot Skills; 131(1): 74-105, 2024 Feb.
Article En | MEDLINE | ID: mdl-37977135

Auditory-motor and visual-motor networks are often coupled in daily activities, such as when listening to music and dancing, but these networks are known to be highly malleable as a function of sensory input. Thus, congenital deafness may modify neural activities within the connections between the motor, auditory, and visual cortices. Here, we investigated whether the cortical responses of children with cochlear implants (CI) to a simple and repetitive motor task would differ from those of children with typical hearing (TH), and we sought to understand whether this response was related to their language development. Participants were 75 school-aged children, including 50 with CIs (with varying language abilities) and 25 controls with TH. We used functional near-infrared spectroscopy (fNIRS) to record cortical responses over the whole brain as children squeezed the back triggers of a joystick that either vibrated or not with the squeeze. Motor cortex activity was reflected by an increase in oxygenated hemoglobin concentration (HbO) and a decrease in deoxygenated hemoglobin concentration (HbR) in all children, irrespective of their hearing status. Unexpectedly, the visual cortex (supposedly an irrelevant region) was deactivated in this task, particularly for children with CIs who had good language skills compared to those with CIs who had language delays. The presence or absence of vibrotactile feedback made no difference in cortical activation. These findings support the potential of fNIRS to examine cognitive functions related to language in children with CI.


Cochlear Implantation; Cochlear Implants; Deafness; Child; Humans; Spectroscopy, Near-Infrared/methods; Cochlear Implantation/methods; Deafness/surgery; Hemoglobins
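The HbO/HbR concentration changes reported in this study are typically derived from optical-density changes measured at two near-infrared wavelengths via the modified Beer-Lambert law, solved as a 2x2 linear system. The sketch below illustrates that conversion; the extinction coefficients, source-detector separation, and differential pathlength factor are illustrative assumptions, not values taken from the study.

```python
# Modified Beer-Lambert law: convert optical-density changes at two
# wavelengths into HbO/HbR concentration changes by solving a 2x2 system.
# All numeric constants below are illustrative placeholders.

def hbo_hbr_from_od(d_od1, d_od2, d=3.0, dpf=6.0):
    """Return (dHbO, dHbR) from OD changes at ~760 nm and ~850 nm.

    d   : source-detector separation in cm (assumed)
    dpf : differential pathlength factor (assumed equal at both wavelengths)
    """
    # Illustrative extinction coefficients, rows = wavelengths, cols = (HbO, HbR)
    e = [[1.4, 3.8],   # ~760 nm: HbR absorbs more
         [2.5, 1.8]]   # ~850 nm: HbO absorbs more
    L = d * dpf                      # effective optical pathlength
    b1, b2 = d_od1 / L, d_od2 / L    # pathlength-normalized OD changes
    det = e[0][0] * e[1][1] - e[0][1] * e[1][0]
    d_hbo = (b1 * e[1][1] - b2 * e[0][1]) / det
    d_hbr = (e[0][0] * b2 - e[1][0] * b1) / det
    return d_hbo, d_hbr
```

With real data, wavelength-specific DPF values and tabulated extinction coefficients would replace the placeholders.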
2.
Brain Res Bull; 205: 110817, 2023 Dec.
Article En | MEDLINE | ID: mdl-37989460

Sensory deprivation can upset the balance of auditory versus visual information in multimodal processing. Such a phenomenon could persist for children born deaf, even after they receive cochlear implants (CIs), and could potentially explain why one modality is given priority over the other. Here, we recorded cortical responses to a single speaker uttering two syllables, presented in audio-only (A), visual-only (V), and audio-visual (AV) modes. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) were successively recorded in seventy-five school-aged children. Twenty-five were children with normal hearing (NH) and fifty wore CIs, among whom 26 had relatively high language abilities (HL), comparable to those of NH children, while the other 24 had low language abilities (LL). In the EEG data, visual evoked potentials were captured in occipital regions in response to V and AV stimuli, and they were accentuated in the HL group compared to the LL group (the NH group being intermediate). Close to the vertex, auditory evoked potentials were captured in response to A and AV stimuli and reflected differential processing of the two syllables, but only in the NH group. None of the EEG metrics revealed any interaction between group and modality. In the fNIRS data, each modality induced corresponding activity in visual or auditory regions, but no group difference was observed for A, V, or AV stimulation. The present study did not reveal any sign of abnormal AV integration in children with CIs. An efficient multimodal integrative network (at least for rudimentary speech materials) is clearly not a sufficient condition for good language and literacy.


Cochlear Implants; Deafness; Speech Perception; Child; Humans; Speech Perception/physiology; Visual Perception/physiology; Acoustic Stimulation; Electroencephalography
3.
Front Neurosci; 17: 1141886, 2023.
Article En | MEDLINE | ID: mdl-37409105

Background: Cochlear implantation (CI) in prelingually deafened children has been shown to be an effective intervention for developing language and reading skills. However, a substantial proportion of children receiving CIs struggle with language and reading. The current study, one of the first to implement electrical source imaging in a CI population, was designed to identify the neural underpinnings in two groups of CI children with good and poor language and reading skills. Methods: Data from high-density electroencephalography (EEG) under a resting-state condition were obtained from 75 children: 50 with CIs having good (HL) or poor (LL) language skills and 25 normal-hearing (NH) children. We identified coherent sources using dynamic imaging of coherent sources (DICS) and estimated their effective connectivity through time-frequency causality estimation based on temporal partial directed coherence (TPDC) in the two CI groups compared to a cohort of age- and gender-matched NH children. Findings: Sources with higher coherence amplitude were observed in three frequency bands (alpha, beta, and gamma) for the CI groups when compared to normal-hearing children. The two groups of CI children with good (HL) and poor (LL) language ability exhibited not only different cortical and subcortical source profiles but also distinct effective connectivity between them. Additionally, a support vector machine (SVM) algorithm using these sources and their connectivity patterns for each CI group across the three frequency bands was able to predict the language and reading scores with high accuracy. Interpretation: The increased coherence in the CI groups suggests that oscillatory activity in some brain areas becomes more strongly coupled than in the NH group. Moreover, the different sources, their connectivity patterns, and their association with language and reading skill in both groups suggest a compensatory adaptation that either facilitated or impeded language and reading development. The neural differences in the two groups of CI children may reflect potential biomarkers for predicting outcome success in CI children.

4.
Clin Neurophysiol; 149: 133-145, 2023 May.
Article En | MEDLINE | ID: mdl-36965466

OBJECTIVE: Although children with cochlear implants (CI) achieve remarkable success with their device, considerable variability remains in individual outcomes. Here, we explored whether auditory evoked potentials recorded during an oddball paradigm could provide useful markers of auditory processing in this pediatric population. METHODS: High-density electroencephalography (EEG) was recorded in 75 children listening to standard and odd noise stimuli: 25 had normal hearing (NH) and 50 wore a CI, divided between high language (HL) and low language (LL) abilities. Three metrics were extracted: the first negative and second positive components of the standard waveform (N1-P2 complex) close to the vertex, the mismatch negativity (MMN) around Fz, and the late positive component (P3) around Pz of the difference waveform. RESULTS: While children with CIs generally exhibited a well-formed N1-P2 complex, those with language delays typically lacked reliable MMN and P3 components. However, many children with CIs who had age-appropriate skills showed MMN and P3 responses similar to those of NH children. Moreover, a larger and earlier P3 (but not MMN) was linked to better literacy skills. CONCLUSIONS: Auditory evoked responses differentiated children with CIs based on their good or poor skills with language and literacy. SIGNIFICANCE: This short paradigm could eventually serve as a clinical tool for tracking the developmental outcomes of implanted children.


Cochlear Implantation; Cochlear Implants; Child; Humans; Acoustic Stimulation; Evoked Potentials, Auditory/physiology; Auditory Perception/physiology; Electroencephalography
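The oddball metrics described above (MMN and P3) amount to subtracting the standard ERP from the deviant ERP and picking a peak in a characteristic time window of the difference waveform. A minimal sketch of that computation follows; the sampling rate and window bounds are assumed values for illustration, not the study's exact parameters.

```python
# Difference-waveform metrics from an oddball paradigm: subtract the
# standard ERP from the deviant ERP, then take the most negative point
# in an MMN window and the most positive point in a P3 window.

FS = 500  # Hz, assumed sampling rate; index 0 corresponds to stimulus onset

def difference_wave(deviant, standard):
    """Deviant-minus-standard waveform, sample by sample."""
    return [d - s for d, s in zip(deviant, standard)]

def peak_in_window(wave, t_min, t_max, polarity):
    """Peak amplitude and latency (s) within [t_min, t_max)."""
    i0, i1 = int(t_min * FS), int(t_max * FS)
    segment = wave[i0:i1]
    pick = min(segment) if polarity == 'neg' else max(segment)
    return pick, (i0 + segment.index(pick)) / FS

# usage (illustrative windows): MMN ~100-250 ms, P3 ~300-600 ms
# diff = difference_wave(erp_deviant_fz, erp_standard_fz)
# mmn_amp, mmn_lat = peak_in_window(diff, 0.10, 0.25, 'neg')
# p3_amp, p3_lat = peak_in_window(diff, 0.30, 0.60, 'pos')
```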
5.
J Acoust Soc Am; 152(1): 511, 2022 Jul.
Article En | MEDLINE | ID: mdl-35931533

Parkinson's disease (PD) is a neurodegenerative condition primarily associated with its motor consequences. Although much of the research within the speech domain has focused on PD's consequences for production, people with PD have been shown to differ from age-matched controls in the perception of emotional prosody, loudness, and speech rate. The current study targeted the effect of PD on perceptual phonetic plasticity, defined as the ability to learn and adjust to novel phonetic input, in both second-language and native-language contexts. People with PD were compared to age-matched controls (and, for three of the studies, a younger control population) in tasks of explicit non-native speech learning and adaptation to variation in native speech (compressed rate, accent, and the use of timing information within a sentence to parse ambiguities). The participants with PD performed significantly worse on the compressed-rate task and used the duration of an ambiguous fricative to segment speech to a lesser degree than age-matched controls, indicating impaired speech perceptual abilities. Exploratory comparisons also showed that people with PD who were on medication performed significantly worse than their peers off medication on those two tasks and on the task of explicit non-native learning.


Parkinson Disease; Speech Perception; Humans; Language; Phonetics; Speech
6.
Brain Commun; 4(2): fcac058, 2022.
Article En | MEDLINE | ID: mdl-35368614

Persistent developmental stuttering is a speech disorder that primarily affects normal speech fluency but encompasses a complex set of symptoms ranging from reduced sensorimotor integration to socioemotional challenges. Here, we investigated the whole-brain structural connectome and its topological alterations in adults who stutter. Diffusion-weighted imaging data of 33 subjects (13 adults who stutter and 20 fluent speakers) were obtained along with a stuttering severity evaluation. The structural brain network properties were analysed using network-based statistics and graph theoretical measures particularly focussing on community structure, network hubs and controllability. Bayesian power estimation was used to assess the reliability of the structural connectivity differences by examining the effect size. The analysis revealed reliable and wide-spread decreases in connectivity for adults who stutter in regions associated with sensorimotor, cognitive, emotional and memory-related functions. The community detection algorithms revealed different subnetworks for fluent speakers and adults who stutter, indicating considerable network adaptation in adults who stutter. Average and modal controllability differed between groups in a subnetwork encompassing frontal brain regions and parts of the basal ganglia. The results revealed extensive structural network alterations and substantial adaptation in neural architecture in adults who stutter well beyond the sensorimotor network. These findings highlight the impact of the neurodevelopmental effects of persistent stuttering on neural organization and the importance of examining the full structural connectome and the network alterations that underscore the behavioural phenotype.
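As a rough illustration of the graph-theoretical measures mentioned above, the sketch below computes two elementary connectome summaries (weighted node strength and network density) from an adjacency matrix. It is only a simplified stand-in for this kind of analysis, not the authors' network-based-statistics, community-detection, or controllability pipeline.

```python
# Minimal graph-theoretical measures on a weighted, undirected structural
# connectome represented as an adjacency matrix (list of lists).

def node_strength(adj, i):
    """Sum of connection weights attached to node i (weighted degree)."""
    return sum(w for j, w in enumerate(adj[i]) if j != i)

def density(adj):
    """Fraction of possible undirected edges that are present (weight > 0)."""
    n = len(adj)
    edges = sum(1 for i in range(n) for j in range(i + 1, n) if adj[i][j] > 0)
    return edges / (n * (n - 1) / 2)
```

Reduced structural connectivity of the kind reported here would show up as lower strengths at the affected nodes and, if edges drop out entirely, lower density.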

7.
Sci Rep; 12(1): 1019, 2022 Jan 19.
Article En | MEDLINE | ID: mdl-35046514

Parkinson's disease (PD), as a manifestation of basal ganglia dysfunction, is associated with a number of speech deficits, including reduced voice modulation and vocal output. Interestingly, previous work has shown that participants with PD show an increased feedback-driven motor response to unexpected fundamental frequency perturbations during speech production, and a heightened ability to detect differences in vocal pitch relative to control participants. Here, we explored one possible contributor to these enhanced responses. We recorded the frequency-following auditory brainstem response (FFR) to repetitions of the speech syllable [da] in PD and control participants. Participants with PD displayed a larger-amplitude FFR related to the fundamental frequency of the speech stimuli relative to the control group. These preliminary results suggest that basal ganglia dysfunction in PD affects early stages of auditory processing and may reflect one component of a broader sensorimotor processing impairment associated with the disease.


Parkinson Disease/physiopathology; Pitch Perception/physiology; Speech Perception; Acoustic Stimulation; Aged; Brain Stem/physiopathology; Case-Control Studies; Female; Humans; Male; Middle Aged; Speech/physiology
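The FFR amplitude "related to the fundamental frequency" is commonly estimated by projecting the averaged response onto a complex exponential at F0, i.e. a single-bin discrete Fourier transform. A sketch under assumed sampling parameters (the actual F0 of the [da] stimulus and the recording rate are not taken from the study):

```python
import cmath

# Estimate response amplitude at the stimulus fundamental frequency by
# computing a single-bin DFT of the averaged FFR at F0.

def amplitude_at_f0(signal, fs, f0):
    """Amplitude at f0 (Hz) for a signal sampled at fs (Hz)."""
    n = len(signal)
    acc = sum(x * cmath.exp(-2j * cmath.pi * f0 * k / fs)
              for k, x in enumerate(signal))
    return 2 * abs(acc) / n  # scaled so a unit-amplitude sine at f0 gives ~1.0
```

In practice this is applied to the epoch-averaged brainstem response; a larger value at F0 in the PD group corresponds to the group difference reported above.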
8.
Front Psychol; 12: 705668, 2021.
Article En | MEDLINE | ID: mdl-34603133

Previous studies of word segmentation in a second language have yielded equivocal results. This is not surprising given the differences in the bilingual experience and proficiency of the participants and the varied experimental designs that have been used. The present study tried to account for a number of relevant variables to determine if bilingual listeners are able to use native-like word segmentation strategies. Here, 61 French-English bilingual adults who varied in L1 (French or English) and language dominance took part in an audiovisual integration task while event-related brain potentials (ERPs) were recorded. Participants listened to sentences built around ambiguous syllable strings (which could be disambiguated based on different word segmentation patterns), during which an illustration was presented on screen. Participants were asked to determine if the illustration was related to the heard utterance or not. Each participant listened to both English and French utterances, providing segmentation patterns that included both their native language (used as reference) and their L2. Interestingly, different patterns of results were observed in the event-related potentials (online) and behavioral (offline) results, suggesting that L2 participants showed signs of being able to adapt their segmentation strategies to the specifics of the L2 (online ERP results), but that the extent of the adaptation varied as a function of listeners' language experience (offline behavioral results).

9.
Cortex; 143: 195-204, 2021 Oct.
Article En | MEDLINE | ID: mdl-34450567

Recent studies have demonstrated that a listener's auditory speech perception can be modulated by somatosensory input applied to the facial skin, suggesting that perception is an embodied process. However, speech perception is a multisensory process involving both the auditory and visual modalities. It is unknown whether, and to what extent, somatosensory stimulation of the facial skin modulates audio-visual speech perception. If speech perception is an embodied process, then somatosensory stimulation applied to the perceiver should influence audio-visual speech processing. Using the McGurk effect (the perceptual illusion that occurs when a sound is paired with the visual representation of a different sound, resulting in the perception of a third sound), we tested this prediction with a simple behavioral paradigm and at the neural level using event-related potentials (ERPs) and their cortical sources. We recorded ERPs from 64 scalp sites in response to congruent and incongruent audio-visual speech, randomly presented with and without somatosensory stimulation associated with facial skin deformation. Subjects judged whether the production was /ba/ or not under all stimulus conditions. In the congruent audio-visual condition, subjects identified the sound as /ba/, but not in the incongruent condition, consistent with the McGurk effect. Concurrent somatosensory stimulation improved participants' ability to correctly identify the production as /ba/ relative to the non-somatosensory condition in both congruent and incongruent conditions. ERPs in response to the somatosensory stimulation in the incongruent condition reliably diverged 220 msec after stimulation onset. Cortical sources were estimated around the left anterior temporal gyrus, the right middle temporal gyrus, the right posterior superior temporal lobe, and the right occipital region.
The results demonstrate a clear multisensory convergence of somatosensory and audio-visual processing in both behavioral and neural processing consistent with the perspective that speech perception is a self-referenced, sensorimotor process.


Speech Perception; Speech; Acoustic Stimulation; Auditory Perception; Humans; Photic Stimulation; Visual Perception
10.
Neurosci Lett; 730: 135045, 2020 Jun 21.
Article En | MEDLINE | ID: mdl-32413541

Modulation of auditory activity occurs before and during voluntary speech movement. However, it is unknown whether orofacial somatosensory input is modulated in the same manner. The current study examined whether somatosensory event-related potentials (ERPs) in response to facial skin stretch change during speech and nonspeech production tasks. Specifically, we compared ERP changes to somatosensory stimulation across different orofacial postures and speech utterances. Participants produced three different vowel sounds (voicing) or performed non-speech oral tasks in which they maintained a similar posture without voicing. ERPs were recorded from 64 scalp sites in response to the somatosensory stimulation under six task conditions (three vowels × voicing/posture) and compared to a resting baseline condition. The first negative peak for the vowel /u/ was reliably reduced from baseline in both the voicing and posturing tasks, but the other conditions did not differ. The second positive peak was reduced for all voicing tasks compared to the posturing tasks. The results suggest that the sensitivity of somatosensory ERPs to facial skin deformation is modulated by the task, and that somatosensory processing during speaking may be modulated as a function of phonetic identity.


Evoked Potentials/physiology; Speech Perception/physiology; Speech/physiology; Voice/physiology; Acoustic Stimulation/methods; Electroencephalography/methods; Humans; Phonetics; Somatosensory Cortex/physiology
11.
Front Hum Neurosci; 14: 18, 2020.
Article En | MEDLINE | ID: mdl-32161525

Stuttering is a disorder that impacts the smooth flow of speech production and is associated with a deficit in sensorimotor integration. In a previous experiment, individuals who stutter were able to vocally compensate for pitch shifts in their auditory feedback, but they exhibited more variability in the timing of their corrective responses. In the current study, we focused on the neural correlates of the task using functional MRI. Participants produced a vowel sound in the scanner while hearing their own voice in real time through headphones. On some trials, the audio was shifted up or down in pitch, eliciting a corrective vocal response. Contrasting pitch-shifted vs. unshifted trials revealed bilateral superior temporal activation over all the participants. However, the groups differed in the activation of middle temporal gyrus and superior frontal gyrus [Brodmann area 10 (BA 10)], with individuals who stutter displaying deactivation while controls displayed activation. In addition to the standard univariate general linear modeling approach, we employed a data-driven technique (independent component analysis, or ICA) to separate task activity into functional networks. Among the networks most correlated with the experimental time course, there was a combined auditory-motor network in controls, but the two networks remained separable for individuals who stuttered. The decoupling of these networks may account for temporal variability in pitch compensation reported in our previous work, and supports the idea that neural network coherence is disturbed in the stuttering brain.

12.
Front Hum Neurosci; 13: 394, 2019.
Article En | MEDLINE | ID: mdl-31798431

Adults who stutter (AWS) display altered patterns of neural phase coherence within the speech motor system preceding disfluencies. These altered patterns may distinguish fluent speech episodes from disfluent ones. Phase coherence is relevant to the study of stuttering because it reflects neural communication within brain networks. In this follow-up study, the oscillatory cortical dynamics preceding fluent speech in AWS and adults who do not stutter (AWNS) were examined during a single-word delayed reading task using electroencephalographic (EEG) techniques. Compared to AWNS, fluent speech preparation in AWS was characterized by a decrease in theta-gamma phase coherence and a corresponding increase in theta-beta coherence level. Higher spectral powers in the beta and gamma bands were also observed preceding fluent utterances by AWS. Overall, there was altered neural communication during speech planning in AWS that provides novel evidence for atypical allocation of feedforward control by AWS even before fluent utterances.

13.
J Speech Lang Hear Res; 62(12): 4256-4268, 2019 Dec 18.
Article En | MEDLINE | ID: mdl-31738857

Purpose We recently demonstrated that individuals with Parkinson's disease (PD) respond differentially to specific altered auditory feedback parameters during speech production. Participants with PD respond more robustly to pitch and less robustly to formant manipulations compared to control participants. In this study, we investigated whether differences in perceptual processing may in part underlie these compensatory differences in speech production. Methods Pitch and formant feedback manipulations were presented under 2 conditions: production and listening. In the production condition, 15 participants with PD and 15 age- and gender-matched healthy control participants judged whether their own speech output was manipulated in real time. During the listening task, participants judged whether paired tokens of their previously recorded speech samples were the same or different. Results Under listening, 1st formant manipulation discrimination was significantly reduced for the PD group compared to the control group. There was a trend toward better discrimination of pitch in the PD group, but the group difference was not significant. Under the production condition, the ability of participants with PD to identify pitch manipulations was greater than that of the controls. Conclusion The findings suggest perceptual processing differences associated with acoustic parameters of fundamental frequency and 1st formant perturbations in PD. These findings extend our previous results, indicating that different patterns of compensation to pitch and 1st formant shifts may reflect a combination of sensory and motor mechanisms that are differentially influenced by basal ganglia dysfunction.


Parkinson Disease/physiopathology; Pitch Discrimination/physiology; Speech/physiology; Aged; Basal Ganglia/physiopathology; Case-Control Studies; Feedback, Sensory; Female; Humans; Male; Middle Aged; Speech Acoustics; Speech Discrimination Tests
14.
Ann N Y Acad Sci; 1449(1): 56-69, 2019 Aug.
Article En | MEDLINE | ID: mdl-31144336

Speech timing deficits have been proposed as a causal factor in the disorder of stuttering. The question of whether individuals who stutter have deficits in nonspeech timing is one that has been revisited often, with conflicting results. Here, we uncover subtle differences between adults who stutter and fluent speakers in a manual metronome synchronization task that included tempo changes. We used sensitive circular statistics to examine both asynchrony and consistency in motor production. While both groups displayed a classic negative mean asynchrony (tapping before the beat), individuals who stutter anticipated the beat even more than their fluent peers, and their consistency was particularly affected at slow tempi. Surprisingly, individuals who stutter did not have problems with interval correction at tempo changes. We also examined the influence of music experience on synchronization behavior in both groups. While music perception and training were related to synchronization behavior in fluent participants, these correlations were not present in the stuttering group; however, one measure of stuttering severity (self-rated severity) was negatively correlated with music training. Overall, we found subtle differences in paced auditory-motor synchronization in individuals who stutter, consistent with a timing problem extending to nonspeech behavior.


Periodicity; Speech/physiology; Stuttering/pathology; Adult; Auditory Perception/physiology; Female; Humans; Male; Motor Activity/physiology; Music/psychology
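The circular statistics referred to above treat each tap's asynchrony as a phase angle on the metronome cycle: the angle of the resultant vector gives the mean asynchrony (negative values = tapping before the beat), and its length R (0 to 1) gives the consistency. A minimal sketch of that computation, with illustrative variable names:

```python
import cmath
import math

# Circular summary of metronome tapping: map each asynchrony (tap time
# minus beat time) onto the unit circle relative to the inter-onset
# interval (IOI), then average the unit vectors.

def circular_tapping_stats(asynchronies_ms, ioi_ms):
    """Return (mean asynchrony in ms, consistency R in [0, 1])."""
    phases = [2 * math.pi * a / ioi_ms for a in asynchronies_ms]
    resultant = sum(cmath.exp(1j * p) for p in phases) / len(phases)
    mean_asynchrony = cmath.phase(resultant) * ioi_ms / (2 * math.pi)
    consistency = abs(resultant)
    return mean_asynchrony, consistency
```

A stronger negative mean asynchrony and a lower R at slow tempi would correspond to the group differences reported above.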
15.
J Acoust Soc Am; 145(2): 847, 2019 Feb.
Article En | MEDLINE | ID: mdl-30823786

In cocktail-party situations, listeners can use the fundamental frequency (F0) of a voice to segregate it from competitors, but other cues in speech could help, such as co-modulation of envelopes across frequency or more complex cues related to the semantic/syntactic content of the utterances. For simplicity, this (non-pitch) form of grouping is referred to as "articulatory." By creating a new type of speech with two steady F0s, it was examined how these two forms of segregation compete: articulatory grouping would bind the partials of a double-F0 source together, whereas harmonic segregation would tend to split them in two subsets. In experiment 1, maskers were two same-male sentences. Speech reception thresholds were high in this task (vicinity of 0 dB), and harmonic segregation behaved as though double-F0 stimuli were two independent sources. This was not the case in experiment 2, where maskers were speech-shaped complexes (buzzes). First, double-F0 targets were immune to the masking of a single-F0 buzz matching one of the two target F0s. Second, double-F0 buzzes were particularly effective at masking a single-F0 target matching one of the two buzz F0s. As a conclusion, the strength of F0-segregation appears to depend on whether the masker is speech or not.

16.
Neuroimage; 192: 26-37, 2019 May 15.
Article En | MEDLINE | ID: mdl-30831311

The relation between language processing and the cognitive control of thought and action is a widely debated issue in cognitive neuroscience. While recent research suggests a modular separation between a 'language system' for meaningful linguistic processing and a 'multiple-demand system' for cognitive control, other findings point to more integrated perspectives in which controlled language processing emerges from a division of labor between (parts of) the language system and (parts of) the multiple-demand system. We test here a dual approach to the cognitive control of language predicated on the notion of cognitive control as the combined contribution of a semantic control network (SCN) and a working memory network (WMN) supporting top-down manipulation of (lexico-)semantic information and the monitoring of information in verbal working memory, respectively. We reveal these networks in a large-scale coordinate-based meta-analysis contrasting functional imaging studies of verbal working memory vs. active judgments on (lexico-)semantic information and show the extent of their overlap with the multiple-demand system and the language system. Testing these networks' involvement in a functional imaging study of object naming and verb generation, we then show that SCN specializes in top-down retrieval and selection of (lexico-)semantic representations amongst competing alternatives, while WMN intervenes at a more general level of control modulated in part by the amount of competing responses available for selection. These results have implications in conceptualizing the neurocognitive architecture of language and cognitive control.


Brain/physiology; Cognition/physiology; Language; Speech/physiology; Functional Neuroimaging; Humans
17.
Sci Rep; 8(1): 16340, 2018 Nov 5.
Article En | MEDLINE | ID: mdl-30397215

Persistent developmental stuttering affects close to 1% of adults and is thought to be a problem of sensorimotor integration. Previous research has demonstrated that individuals who stutter respond differently to changes in their auditory feedback while speaking. Here we explore a number of changes that accompany alterations in the feedback of pitch during vocal production. Participants sustained the vowel /a/ while hearing on-line feedback of their own voice through headphones. In some trials, feedback was briefly shifted up or down by 100 cents to simulate a vocal production error. As previously shown, participants compensated for the auditory pitch change by altering their vocal production in the opposite direction of the shift. The average compensatory response was smaller for adults who stuttered than for adult controls. Detailed analyses revealed that adults who stuttered had fewer trials with a robust corrective response, and that within the trials showing compensation, the timing of their responses was more variable. These results support the idea that dysfunctional sensorimotor integration in stuttering is characterized by timing variability, reflecting reduced coupling of the auditory and speech motor systems.


Feedback, Sensory; Speech/physiology; Stuttering/physiopathology; Adolescent; Adult; Case-Control Studies; Female; Humans; Male; Middle Aged; Pitch Discrimination; Time Factors; Young Adult
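The 100-cent feedback shifts used above correspond to a fixed frequency ratio: a shift of c cents multiplies frequency by 2^(c/1200), so 100 cents is one semitone, about a 5.9% change in F0. A one-line sketch:

```python
# A pitch shift expressed in cents maps to a frequency ratio of
# 2 ** (cents / 1200); +100 cents (one semitone) raises F0 by ~5.95%.

def shift_f0(f0_hz, cents):
    """Apply a pitch shift of the given size in cents to a frequency in Hz."""
    return f0_hz * 2 ** (cents / 1200)
```

For example, a 220 Hz voice shifted up by 100 cents is heard at roughly 233 Hz, and opposite shifts cancel exactly.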
18.
Exp Brain Res; 236(6): 1713-1723, 2018 Jun.
Article En | MEDLINE | ID: mdl-29623381

The role of somatosensory feedback in speech and the perception of loudness was assessed in adults without speech or hearing disorders. Participants completed two tasks: loudness magnitude estimation of a short vowel and oral reading of a standard passage. Both tasks were carried out in each of three conditions: no-masking, auditory masking alone, and mixed auditory masking plus vibration of the perilaryngeal area. A Lombard effect was elicited in both masking conditions: speakers unconsciously increased vocal intensity. Perilaryngeal vibration further increased vocal intensity above what was observed for auditory masking alone. Both masking conditions affected fundamental frequency and the first formant frequency as well, but only vibration was associated with a significant change in the second formant frequency. An additional analysis of pure-tone thresholds found no difference in auditory thresholds between masking conditions. Taken together, these findings indicate that perilaryngeal vibration effectively masked somatosensory feedback, resulting in an enhanced Lombard effect (increased vocal intensity) that did not alter speakers' self-perception of loudness. This implies that the Lombard effect results from a general sensorimotor process, rather than from a specific audio-vocal mechanism, and that the conscious self-monitoring of speech intensity is not directly based on either auditory or somatosensory feedback.


Auditory Perception/physiology; Feedback, Sensory/physiology; Perceptual Masking/physiology; Pharynx/physiology; Self Concept; Speech/physiology; Touch Perception/physiology; Vibration; Adolescent; Adult; Female; Humans; Male; Middle Aged; Young Adult
19.
Neurosci Lett; 668: 37-42, 2018 Mar 6.
Article En | MEDLINE | ID: mdl-29309858

Stuttering is a neurodevelopmental speech disorder with a phenotype characterized by speech sound repetitions, prolongations and silent blocks during speech production. Developmental stuttering affects 1% of the population and 5% of children. Neuroanatomical abnormalities in the major white matter tracts, including the arcuate fasciculus, corpus callosum, corticospinal, and frontal aslant tracts (FAT), are associated with the disorder in adults who stutter but are less well studied in children who stutter (CWS). We used deterministic tractography to assess the structural connectivity of the neural network for speech production in CWS and controls. CWS had higher fractional anisotropy and axial diffusivity in the right FAT than controls. Our findings support the involvement of the corticostriatal network early in persistent developmental stuttering.


Diffusion Tensor Imaging/methods; Motor Activity/physiology; Nerve Net/diagnostic imaging; Neural Pathways/diagnostic imaging; Speech/physiology; Stuttering/diagnostic imaging; White Matter/diagnostic imaging; Child; Humans; Male
20.
Hum Brain Mapp; 39(3): 1391-1402, 2018 Mar.
Article En | MEDLINE | ID: mdl-29265695

Previous research suggests a pivotal role of the prefrontal cortex (PFC) in word selection during tasks of confrontation naming (CN) and verb generation (VG), both of which feature varying degrees of competition between candidate responses. However, discrepancies in prefrontal activity have also been reported between the two tasks, in particular more widespread and intense activation in VG extending into (left) ventrolateral PFC, the functional significance of which remains unclear. We propose that these variations reflect differences in competition resolution processes tied to distinct underlying lexico-semantic operations: Although CN involves selecting lexical entries out of limited sets of alternatives, VG requires exploration of possible semantic relations not readily evident from the object itself, requiring prefrontal areas previously shown to be recruited in top-down retrieval of information from lexico-semantic memory. We tested this hypothesis through combined independent component analysis of functional imaging data and information-theoretic measurements of variations in selection competition associated with participants' performance in overt CN and VG tasks. Selection competition during CN engaged the anterior insula and surrounding opercular tissue, while competition during VG recruited additional activity of left ventrolateral PFC. These patterns remained after controlling for participants' speech onset latencies indicative of possible task differences in mental effort. These findings have implications for understanding the neural-computational dynamics of cognitive control in language production and how it relates to the functional architecture of adaptive behavior.


Language; Prefrontal Cortex/physiology; Adult; Brain Mapping; Female; Humans; Magnetic Resonance Imaging; Male; Mental Processes/physiology; Prefrontal Cortex/diagnostic imaging; Young Adult
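One common information-theoretic operationalization of selection competition in naming tasks is the Shannon entropy of the distribution of responses an item elicits: items with many competing candidate responses have high entropy, items with one dominant answer have entropy near zero. The abstract does not specify the authors' exact measure, so the sketch below is illustrative only.

```python
import math

# Shannon entropy (bits) of a response distribution as an index of
# selection competition: higher entropy = more competing alternatives.
# Illustrative operationalization, not necessarily the study's measure.

def response_entropy(counts):
    """Entropy in bits of response counts, e.g. [12, 3, 1] for one item."""
    total = sum(counts)
    probs = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

For instance, an item named identically by every participant scores 0 bits, while an item split evenly across four different responses scores 2 bits.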
...