Results 1 - 20 of 55
1.
J Neurosci ; 44(17)2024 Apr 24.
Article in English | MEDLINE | ID: mdl-38508715

ABSTRACT

Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical laminar recordings in nonhuman primates have suggested a feedforward (FF)-type profile for auditory evoked activity but a feedback (FB)-type profile for visual evoked activity in the auditory cortex. To test whether cross-sensory visual evoked activity in the auditory cortex is associated with FB inputs also in humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex regions of interest, the auditory evoked response showed peaks at 37 and 90 ms and the visual evoked response at 125 ms. The inputs to the auditory cortex were modeled through FF- and FB-type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which links cellular- and circuit-level mechanisms to MEG signals. HNN modeling suggested that the experimentally observed auditory response could be explained by an FF input followed by an FB input, whereas the cross-sensory visual response could be adequately explained by just an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.
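
The HNN modeling step described above can be sketched with the open-source hnn-core Python package, which implements the laminar neocortical column model behind HNN. The sketch below only illustrates driving the model with a proximal (FF-type) input followed by a distal (FB-type) input; the drive names, timings, and synaptic weights are placeholders for illustration, not the parameters used in the study.

```python
# Minimal sketch of FF (proximal) + FB (distal) drives to the HNN column model.
# Assumes the hnn-core package; all timing/weight values are placeholders.
from hnn_core import jones_2009_model, simulate_dipole

net = jones_2009_model()  # laminar neocortical column model

# Feedforward-type input arrives via proximal dendrites (placeholder ~30 ms)
net.add_evoked_drive(
    'ff_input', mu=30.0, sigma=3.0, numspikes=1, location='proximal',
    weights_ampa={'L2_pyramidal': 0.01, 'L5_pyramidal': 0.01,
                  'L2_basket': 0.005, 'L5_basket': 0.005},
    synaptic_delays={'L2_pyramidal': 0.1, 'L5_pyramidal': 1.0,
                     'L2_basket': 0.1, 'L5_basket': 1.0})

# Feedback-type input arrives via distal (supragranular) dendrites (~85 ms)
net.add_evoked_drive(
    'fb_input', mu=85.0, sigma=5.0, numspikes=1, location='distal',
    weights_ampa={'L2_pyramidal': 0.01, 'L5_pyramidal': 0.01,
                  'L2_basket': 0.005},
    synaptic_delays={'L2_pyramidal': 0.1, 'L5_pyramidal': 0.1,
                     'L2_basket': 0.1})

dpls = simulate_dipole(net, tstop=170.0, n_trials=1)
dpls[0].plot()  # compare simulated current dipole with the MEG source waveform
```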


Subject(s)
Acoustic Stimulation , Auditory Cortex , Evoked Potentials, Visual , Magnetoencephalography , Photic Stimulation , Humans , Auditory Cortex/physiology , Magnetoencephalography/methods , Female , Male , Adult , Photic Stimulation/methods , Evoked Potentials, Visual/physiology , Acoustic Stimulation/methods , Models, Neurological , Young Adult , Evoked Potentials, Auditory/physiology , Neurons/physiology , Brain Mapping/methods
2.
J Neurosci ; 44(7)2024 Feb 14.
Article in English | MEDLINE | ID: mdl-38129133

ABSTRACT

Neuroimaging studies suggest cross-sensory visual influences in human auditory cortices (ACs). Whether these influences reflect active visual processing in human ACs, which drives neuronal firing and concurrent broadband high-frequency activity (BHFA; >70 Hz), or whether they merely modulate sound processing is still debatable. Here, we presented auditory, visual, and audiovisual stimuli to 16 participants (7 women, 9 men) with stereo-EEG depth electrodes implanted near ACs for presurgical monitoring. Anatomically normalized group analyses were facilitated by inverse modeling of intracranial source currents. Analyses of intracranial event-related potentials (iERPs) suggested cross-sensory responses to visual stimuli in ACs, which lagged the earliest auditory responses by several tens of milliseconds. Visual stimuli also modulated the phase of intrinsic low-frequency oscillations and triggered 15-30 Hz event-related desynchronization in ACs. However, BHFA, a putative correlate of neuronal firing, was not significantly increased in ACs after visual stimuli, not even when they coincided with auditory stimuli. Intracranial recordings demonstrate cross-sensory modulations, but no indication of active visual processing in human ACs.


Subject(s)
Auditory Cortex , Male , Humans , Female , Auditory Cortex/physiology , Acoustic Stimulation/methods , Evoked Potentials/physiology , Electroencephalography/methods , Visual Perception/physiology , Auditory Perception/physiology , Photic Stimulation
3.
Hum Brain Mapp ; 44(2): 362-372, 2023 02 01.
Article in English | MEDLINE | ID: mdl-35980015

ABSTRACT

Invasive neurophysiological studies in nonhuman primates have shown different laminar activation profiles to auditory vs. visual stimuli in auditory cortices and adjacent polymodal areas. Means to examine the underlying feedforward vs. feedback type influences noninvasively have been limited in humans. Here, using 1-mm isotropic resolution 3D echo-planar imaging at 7 T, we studied the intracortical depth profiles of functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) signals to brief auditory (noise bursts) and visual (checkerboard) stimuli. BOLD percent-signal-changes were estimated at 11 equally spaced intracortical depths, within regions-of-interest encompassing auditory (Heschl's gyrus, Heschl's sulcus, planum temporale, and posterior superior temporal gyrus) and polymodal (middle and posterior superior temporal sulcus) areas. Effects of differing BOLD signal strengths for auditory and visual stimuli were controlled via normalization and statistical modeling. The BOLD depth profile shapes, modeled with quadratic regression, were significantly different for auditory vs. visual stimuli in auditory cortices, but not in polymodal areas. The different depth profiles could reflect sensory-specific feedforward versus cross-sensory feedback influences, previously shown in laminar recordings in nonhuman primates. The results suggest that intracortical BOLD profiles can help distinguish between feedforward and feedback type influences in the human brain. Further experimental studies are still needed to clarify how underlying signal strength influences BOLD depth profiles under different stimulus conditions.
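
A minimal sketch of the kind of depth-profile comparison described above: normalize each condition's profile and fit a quadratic across the 11 cortical depths. The data are synthetic, and the normalization and fitting choices are assumptions made for illustration, not the study's analysis pipeline.

```python
# Sketch: compare intracortical BOLD depth-profile shapes with quadratic fits.
import numpy as np

rng = np.random.default_rng(0)
depths = np.linspace(0.0, 1.0, 11)            # 11 equally spaced depths (WM -> pial)

# Synthetic percent-signal-change profiles (illustration only)
bold_aud = 1.0 + 0.8 * depths + rng.normal(0, 0.05, 11)                     # ~linear
bold_vis = 0.6 + 1.5 * depths - 1.2 * depths**2 + rng.normal(0, 0.05, 11)   # bowed

# Normalize to control for overall signal-strength differences between conditions
bold_aud_n = bold_aud / bold_aud.mean()
bold_vis_n = bold_vis / bold_vis.mean()

# Quadratic regression of profile shape: signal ~ depth + depth^2
coef_aud = np.polyfit(depths, bold_aud_n, deg=2)
coef_vis = np.polyfit(depths, bold_vis_n, deg=2)

# Differing curvature (quadratic) terms across conditions is the kind of
# shape difference tested in the study.
print("quadratic term, auditory:", coef_aud[0])
print("quadratic term, visual:  ", coef_vis[0])
```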


Subject(s)
Auditory Cortex , Magnetic Resonance Imaging , Humans , Animals , Acoustic Stimulation , Magnetic Resonance Imaging/methods , Auditory Cortex/diagnostic imaging , Auditory Cortex/physiology , Brain/physiology , Brain Mapping , Primates
4.
Brain Topogr ; 33(4): 477-488, 2020 07.
Article in English | MEDLINE | ID: mdl-32441009

ABSTRACT

Auditory attention allows us to focus on relevant target sounds in the acoustic environment while maintaining the capability to orient to unpredictable (novel) sound changes. An open question is whether orienting to expected vs. unexpected auditory events is governed by anatomically distinct attention pathways or by differing communication patterns within a common system. To address this question, we applied the recently developed PeSCAR analysis method to magnetoencephalography data obtained during a cued auditory attention task, evaluating spectrotemporal functional connectivity patterns across subregions of broader cortical regions of interest (ROIs). Subjects were instructed to detect a predictable harmonic target sound embedded among standard tones in one ear and to ignore the standard tones and occasional unpredictable novel sounds presented in the opposite ear. Phase coherence of estimated source activity was calculated between subregions of superior temporal, frontal, inferior parietal, and superior parietal cortex ROIs. Functional connectivity was stronger in response to target than novel stimuli between left superior temporal and left parietal ROIs and between left frontal and right parietal ROIs, with the largest effects observed in the beta band (15-35 Hz). In contrast, functional connectivity was stronger in response to novel than target stimuli in inter-hemispheric connections between left and right frontal ROIs, observed in early time windows in the alpha band (8-12 Hz). Our findings suggest that auditory processing of expected target vs. unexpected novel sounds involves different spatially, temporally, and spectrally distributed oscillatory connectivity patterns across temporal, parietal, and frontal areas.
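
The phase-coherence measure underlying this connectivity analysis can be illustrated as a phase-locking value between two band-pass-filtered source time courses, computed across trials. The sketch below uses synthetic data and a simple beta-band filter; it stands in for, but does not reproduce, the PeSCAR subregion analysis.

```python
# Sketch: beta-band phase coherence (phase-locking value) between two ROI
# time courses across trials, using synthetic data.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 600.0                                   # sampling rate (Hz), assumed
n_trials, n_times = 100, 600
rng = np.random.default_rng(0)

# Synthetic single-trial time courses for two ROIs (e.g., temporal, parietal)
roi_a = rng.standard_normal((n_trials, n_times))
roi_b = 0.5 * roi_a + rng.standard_normal((n_trials, n_times))  # partly coupled

# Band-pass filter in the beta band (15-35 Hz) and extract instantaneous phase
b, a = butter(4, [15, 35], btype='bandpass', fs=fs)
pa = np.angle(hilbert(filtfilt(b, a, roi_a, axis=-1), axis=-1))
pb = np.angle(hilbert(filtfilt(b, a, roi_b, axis=-1), axis=-1))

# Phase-locking value across trials at each time point
plv = np.abs(np.mean(np.exp(1j * (pa - pb)), axis=0))
print("mean beta-band PLV:", plv.mean())
```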


Subject(s)
Attention , Auditory Cortex , Auditory Perception , Magnetoencephalography , Acoustic Stimulation , Brain Mapping , Female , Humans , Parietal Lobe
5.
Neuroimage ; 184: 954-963, 2019 01 01.
Article in English | MEDLINE | ID: mdl-30296557

ABSTRACT

Classical theories suggest placebo analgesia and nocebo hyperalgesia are based on expectation and conditioned experience. Whereas the neural mechanism of how expectation modulates placebo and nocebo effects during pain anticipation has been extensively studied, little is known about how experience may change brain networks to produce placebo and nocebo responses. We investigated the neural pathways of direct and observational conditioning for conscious and nonconscious conditioned placebo/nocebo effects using magnetoencephalography and a face visual cue conditioning model. We found that both direct and observational conditioning produced conscious conditioned placebo and nocebo effects and a nonconscious conditioned nocebo effect. Alpha band brain connectivity changes before and after conditioning could predict the magnitude of conditioned placebo and nocebo effects. In particular, the connectivity between the rostral anterior cingulate cortex and middle temporal gyrus was an important indicator for the manipulation of placebo and nocebo effects. Our study suggests that conditioning can mediate our pain experience by encoding experience and modulating brain networks.


Subject(s)
Brain/physiology , Conditioning, Psychological , Nocebo Effect , Placebo Effect , Adult , Brain Waves , Facial Recognition , Female , Humans , Magnetoencephalography , Male , Neural Pathways/physiology , Nociception/physiology , Pain Measurement , Young Adult
6.
Organ Res Methods ; 22(1): 95-115, 2019 Jan.
Article in English | MEDLINE | ID: mdl-30636863

ABSTRACT

Magnetoencephalography (MEG) is a method to study electrical activity in the human brain by recording the neuromagnetic field outside the head. MEG, like electroencephalography (EEG), provides an excellent, millisecond-scale time resolution, and allows the estimation of the spatial distribution of the underlying activity, in favorable cases with a localization accuracy of a few millimeters. To detect the weak neuromagnetic signals, superconducting sensors, magnetically shielded rooms, and advanced signal processing techniques are used. The analysis and interpretation of MEG data typically involves comparisons between subject groups and experimental conditions using various spatial, temporal, and spectral measures of cortical activity and connectivity. The application of MEG to cognitive neuroscience studies is illustrated with studies of spoken language processing in subjects with normal and impaired reading ability. The mapping of spatiotemporal patterns of activity within networks of cortical areas can provide useful information about the functional architecture of the brain related to sensory and cognitive processing, including language, memory, attention, and perception.

7.
Neuroimage ; 124(Pt A): 858-868, 2016 Jan 01.
Article in English | MEDLINE | ID: mdl-26419388

ABSTRACT

Spatial and non-spatial information of sound events is presumably processed in parallel auditory cortex (AC) "what" and "where" streams, which are modulated by inputs from the respective visual-cortex subsystems. How these parallel processes are integrated into perceptual objects that remain stable across time and the source agent's movements is unknown. We recorded magneto- and electroencephalography (MEG/EEG) data while subjects viewed animated video clips featuring two audiovisual objects, a black cat and a gray cat. Adaptor-probe events were either linked to the same object (the black cat meowed twice in a row in the same location) or included a visually conveyed identity change (the black and then the gray cat meowed with identical voices in the same location). In addition to effects in visual (including fusiform, middle temporal or MT areas) and frontoparietal association areas, the visually conveyed object-identity change was associated with a release from adaptation of early (50-150 ms) activity in posterior ACs, spreading to left anterior ACs at 250-450 ms in our combined MEG/EEG source estimates. Repetition of events belonging to the same object resulted in increased theta-band (4-8 Hz) synchronization within the "what" and "where" pathways (e.g., between anterior AC and fusiform areas). In contrast, the visually conveyed identity changes resulted in distributed synchronization at higher frequencies (alpha and beta bands, 8-32 Hz) across different auditory, visual, and association areas. The results suggest that sound events become initially linked to perceptual objects in posterior AC, followed by modulations of representations in anterior AC. Hierarchical "what" and "where" pathways seem to operate in parallel after repeating audiovisual associations, whereas the resetting of such associations engages a distributed network across auditory, visual, and multisensory areas.


Subject(s)
Auditory Cortex/physiology , Auditory Pathways/physiology , Auditory Perception/physiology , Visual Perception/physiology , Acoustic Stimulation , Adult , Animals , Cats , Cortical Synchronization , Electroencephalography , Evoked Potentials, Auditory/physiology , Female , Humans , Magnetoencephalography , Male , Middle Aged , Photic Stimulation , Visual Cortex/physiology , Vocalization, Animal , Young Adult
8.
Brain Topogr ; 29(6): 783-790, 2016 11.
Article in English | MEDLINE | ID: mdl-27503196

ABSTRACT

Magnetoencephalography (MEG) signals are commonly contaminated by cardiac artefacts (CAs). Principal component analysis and independent component analysis have been widely used for removing CAs, but they typically require a complex procedure for the identification of CA-related components. We propose a simple and efficient method, resampled moving average subtraction (RMAS), to remove CAs from MEG data. Based on an electrocardiogram (ECG) channel, a template for each cardiac cycle was estimated by a weighted average of epochs of MEG data over consecutive cardiac cycles, combined with a resampling technique for accurate alignment of the time waveforms. The template was subtracted from the corresponding epoch of the MEG data. The resampling reduced distortions due to asynchrony between the cardiac cycle and the MEG sampling times. The RMAS method successfully suppressed CAs while preserving both event-related responses and high-frequency (>45 Hz) components in the MEG data.
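
The RMAS idea described above can be sketched as follows: detect R-peaks in the ECG, build a per-cycle template from neighboring cycles resampled to the current cycle length, and subtract it from the MEG channel. The peak detection, window handling, and (unweighted) averaging below are simplifications made for illustration, not the published implementation.

```python
# Sketch of resampled moving-average subtraction (RMAS) for one MEG channel.
import numpy as np
from scipy.signal import find_peaks, resample

def rmas_channel(meg, ecg, fs, n_neighbors=10):
    """Remove the cardiac artefact from one MEG channel via template subtraction."""
    # 1) R-peaks from the ECG channel (simple threshold-based detection)
    peaks, _ = find_peaks(ecg, height=np.percentile(ecg, 99), distance=int(0.4 * fs))
    cleaned = meg.copy()
    cycle_len = int(np.median(np.diff(peaks)))          # nominal cycle length
    for i in range(1, len(peaks) - 1):
        start, stop = peaks[i], min(peaks[i + 1], len(meg))
        seg_len = stop - start
        # 2) Template = average of neighboring cycles, each resampled to the
        #    current cycle length to compensate for MEG/cardiac asynchrony
        lo, hi = max(1, i - n_neighbors // 2), min(len(peaks) - 1, i + n_neighbors // 2)
        neighbors = []
        for j in range(lo, hi):
            s, e = peaks[j], peaks[j] + cycle_len
            if e <= len(meg):
                neighbors.append(resample(meg[s:e], seg_len))
        if neighbors:
            # 3) Subtract the template from the current cycle
            cleaned[start:stop] -= np.mean(neighbors, axis=0)[:seg_len]
    return cleaned

# usage (placeholder arrays): cleaned = rmas_channel(meg_ch, ecg_ch, fs=1000.0)
```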


Subject(s)
Artifacts , Brain/physiology , Electrocardiography/methods , Magnetoencephalography/methods , Subtraction Technique , Adult , Female , Healthy Volunteers , Humans , Male , Principal Component Analysis , Young Adult
9.
Proc Natl Acad Sci U S A ; 108(10): 4182-7, 2011 Mar 08.
Article in English | MEDLINE | ID: mdl-21368107

ABSTRACT

How can we concentrate on relevant sounds in noisy environments? A "gain model" suggests that auditory attention simply amplifies relevant and suppresses irrelevant afferent inputs. However, it is unclear whether this suffices when attended and ignored features overlap to stimulate the same neuronal receptive fields. A "tuning model" suggests that, in addition to gain, attention modulates feature selectivity of auditory neurons. We recorded magnetoencephalography, EEG, and functional MRI (fMRI) while subjects attended to tones delivered to one ear and ignored opposite-ear inputs. The attended ear was switched every 30 s to quantify how quickly the effects evolve. To produce overlapping inputs, the tones were presented alone vs. during white-noise masking notch-filtered ±1/6 octaves around the tone center frequencies. Amplitude modulation (39 vs. 41 Hz in opposite ears) was applied for "frequency tagging" of attention effects on maskers. Noise masking reduced early (50-150 ms; N1) auditory responses to unattended tones. In support of the tuning model, selective attention canceled out this attenuating effect but did not modulate the gain of 50-150 ms activity to nonmasked tones or steady-state responses to the maskers themselves. These tuning effects originated at nonprimary auditory cortices, purportedly occupied by neurons that, without attention, have wider frequency tuning than ±1/6 octaves. The attentional tuning evolved rapidly, during the first few seconds after attention switching, and correlated with behavioral discrimination performance. In conclusion, a simple gain model alone cannot explain auditory selective attention. In nonprimary auditory cortices, attention-driven short-term plasticity retunes neurons to segregate relevant sounds from noise.
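
The masker construction described above can be sketched by notch-filtering white noise ±1/6 octave around the tone frequency and applying sinusoidal amplitude modulation for frequency tagging. The sampling rate, levels, and filter order below are placeholder assumptions, not the study's stimulus parameters.

```python
# Sketch: notch-filtered white-noise masker with amplitude-modulation tagging.
import numpy as np
from scipy.signal import butter, filtfilt

fs, dur, f0, am_freq = 44100, 1.0, 1000.0, 39.0     # e.g., one ear tagged at 39 Hz
t = np.arange(int(fs * dur)) / fs

# Notch the masker +/- 1/6 octave around the tone center frequency
lo, hi = f0 * 2 ** (-1 / 6), f0 * 2 ** (1 / 6)
b, a = butter(4, [lo, hi], btype='bandstop', fs=fs)
masker = filtfilt(b, a, np.random.default_rng(0).standard_normal(t.size))

# Sinusoidal amplitude modulation "tags" the masker so its steady-state
# response can be tracked in the MEG/EEG spectrum
masker *= 0.5 * (1 + np.sin(2 * np.pi * am_freq * t))

tone = 0.1 * np.sin(2 * np.pi * f0 * t)             # the to-be-attended tone
stimulus = tone + 0.05 * masker / np.max(np.abs(masker))
```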


Subject(s)
Attention , Auditory Cortex/physiology , Neuronal Plasticity , Noise , Electroencephalography , Humans , Magnetic Resonance Imaging
10.
bioRxiv ; 2024 Feb 13.
Article in English | MEDLINE | ID: mdl-37398025

ABSTRACT

Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical recordings in non-human primates (NHP) have suggested a bottom-up feedforward (FF) type laminar profile for auditory evoked but top-down feedback (FB) type for cross-sensory visual evoked activity in the auditory cortex. To test whether this principle applies also to humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for the auditory cortex region of interest, auditory evoked responses showed peaks at 37 and 90 ms and cross-sensory visual responses at 125 ms. The inputs to the auditory cortex were then modeled through FF and FB type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which consists of a neocortical circuit model linking the cellular- and circuit-level mechanisms to MEG. The HNN models suggested that the measured auditory response could be explained by an FF input followed by an FB input, and the cross-sensory visual response by an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG/EEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.

11.
Psychon Bull Rev ; 2024 Apr 30.
Article in English | MEDLINE | ID: mdl-38689188

ABSTRACT

While the neural bases of the earliest stages of speech categorization have been widely explored using neural decoding methods, there is still a lack of consensus on questions as basic as how wordforms are represented and in what way this word-level representation influences downstream processing in the brain. Isolating and localizing the neural representations of wordform is challenging because spoken words activate a variety of representations (e.g., segmental, semantic, articulatory) in addition to form-based representations. We addressed these challenges through a novel integrated neural decoding and effective connectivity design using region of interest (ROI)-based, source-reconstructed magnetoencephalography/electroencephalography (MEG/EEG) data collected during a lexical decision task. To identify wordform representations, we trained classifiers on words and nonwords from different phonological neighborhoods and then tested the classifiers' ability to discriminate between untrained target words that overlapped phonologically with the trained items. Training with word neighbors supported significantly better decoding than training with nonword neighbors in the period immediately following target presentation. Decoding regions included mostly right hemisphere regions in the posterior temporal lobe implicated in phonetic and lexical representation. Additionally, neighbors that aligned with target word beginnings (critical for word recognition) supported decoding, but equivalent phonological overlap with word codas did not, suggesting lexical mediation. Effective connectivity analyses showed a rich pattern of interaction between ROIs that support decoding based on training with lexical neighbors, especially driven by right posterior middle temporal gyrus. Collectively, these results evidence functional representation of wordforms in temporal lobes isolated from phonemic or semantic representations.
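
The cross-decoding logic described above can be sketched as training a classifier on response patterns evoked by (word or nonword) neighbors and testing its transfer to untrained target words. The feature matrices below are synthetic and the logistic-regression classifier is a stand-in; the study's feature extraction, classifier, and cross-validation scheme are not reproduced here.

```python
# Sketch: train on neighbor-evoked patterns, test transfer to untrained targets.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 200, 60        # trials x (ROI sources * time points)

# Training set: two classes of phonological neighbors (placeholder labels)
X_train = rng.standard_normal((n_trials, n_features))
y_train = rng.integers(0, 2, n_trials)

# Test set: untrained target words that overlap phonologically with training items
X_test = rng.standard_normal((n_trials, n_features))
y_test = rng.integers(0, 2, n_trials)

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_train, y_train)
print("cross-decoding accuracy on untrained targets:", clf.score(X_test, y_test))
# In the study, above-chance transfer after training with word (but not nonword)
# neighbors was taken as evidence of wordform representation; this synthetic
# example will score near chance and only illustrates the analysis structure.
```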

12.
Hum Brain Mapp ; 34(9): 2190-201, 2013 Sep.
Article in English | MEDLINE | ID: mdl-22438263

ABSTRACT

Spatially focal source estimates for magnetoencephalography (MEG) and electroencephalography (EEG) data can be obtained by imposing a minimum ℓ1-norm constraint on the distribution of the source currents. Anatomical information about the expected locations and orientations of the sources can be included in the source models. In particular, the sources can be assumed to be oriented perpendicular to the cortical surface. We introduce a minimum ℓ1-norm estimation source modeling approach with loose orientation constraints (ℓ1-LOC), which integrates the estimation of the orientation, location, and strength of the source currents into a cost function to jointly model the residual error and the ℓ1-norm of the source estimates. Evaluation with simulated MEG data indicated that the ℓ1-LOC method can provide low spatial dispersion, high localization accuracy, and high source detection rates. Application to somatosensory and auditory MEG data resulted in physiologically reasonable source distributions. The proposed ℓ1-LOC method appears useful for incorporating anatomical information about the source orientations into sparse source estimation of MEG data.
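
The core ℓ1 idea can be sketched with a generic iterative soft-thresholding (ISTA) solver for a linear forward model. Note that this illustrates only the sparsity-inducing ℓ1 penalty, not the loose-orientation-constraint cost function that defines ℓ1-LOC; all sizes and the regularization weight are placeholders.

```python
# Sketch: minimum-L1-norm (sparse) source estimate via ISTA for y = G x + noise.
import numpy as np

def ista_l1(G, y, lam=0.1, n_iter=500):
    """Minimize ||y - G x||^2 / 2 + lam * ||x||_1 by proximal gradient descent."""
    L = np.linalg.norm(G, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(G.shape[1])
    for _ in range(n_iter):
        grad = G.T @ (G @ x - y)
        z = x - grad / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
    return x

# Toy example: 3 active sources out of 500 candidate locations, 100 sensors
rng = np.random.default_rng(0)
G = rng.standard_normal((100, 500))          # stand-in for the lead-field matrix
x_true = np.zeros(500)
x_true[[10, 200, 420]] = [5.0, -4.0, 3.0]
y = G @ x_true + 0.1 * rng.standard_normal(100)

x_hat = ista_l1(G, y, lam=2.0)
print("recovered support:", np.flatnonzero(np.abs(x_hat) > 0.5))
```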


Subject(s)
Algorithms , Brain Mapping/methods , Brain/physiology , Magnetoencephalography/methods , Signal Processing, Computer-Assisted , Adult , Female , Humans , Male , Models, Neurological
13.
Cognition ; 230: 105322, 2023 01.
Article in English | MEDLINE | ID: mdl-36370613

ABSTRACT

Acceptability judgments are a primary source of evidence in formal linguistic research. Within the generative linguistic tradition, these judgments are attributed to evaluation of novel forms based on implicit knowledge of rules or constraints governing well-formedness. In the domain of phonological acceptability judgments, other factors including ease of articulation and similarity to known forms have been hypothesized to influence evaluation. We used data-driven neural techniques to identify the relative contributions of these factors. Granger causality analysis of magnetic resonance imaging (MRI)-constrained magnetoencephalography (MEG) and electroencephalography (EEG) data revealed patterns of interaction between brain regions that support explicit judgments of the phonological acceptability of spoken nonwords. Comparisons of data obtained with nonwords that varied in terms of onset consonant cluster attestation and acceptability revealed different cortical regions and effective connectivity patterns associated with phonological acceptability judgments. Attested forms produced stronger influences of brain regions implicated in lexical representation and sensorimotor simulation on acoustic-phonetic regions, whereas unattested forms produced stronger influence of phonological control mechanisms on acoustic-phonetic processing. Unacceptable forms produced widespread patterns of interaction consistent with attempted search or repair. Together, these results suggest that speakers' phonological acceptability judgments reflect lexical and sensorimotor factors.
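
The statistical idea behind the Granger causality analysis can be illustrated with a pairwise test on two synthetic region time courses. The study's MRI-constrained, condition-specific analysis is considerably more involved, so treat the region names, lags, and coupling strength below as placeholders.

```python
# Sketch: does a "lexical" source time course Granger-cause an
# "acoustic-phonetic" one? Synthetic data, pairwise test only.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 1000
lexical = rng.standard_normal(n)

# Construct an "acoustic-phonetic" series that depends on the lexical series
# at a 2-sample lag, so a lexical -> acoustic Granger effect should appear.
acoustic = np.zeros(n)
for t in range(2, n):
    acoustic[t] = 0.5 * acoustic[t - 1] + 0.4 * lexical[t - 2] + rng.standard_normal()

# grangercausalitytests tests whether the SECOND column Granger-causes the FIRST.
data = np.column_stack([acoustic, lexical])
res = grangercausalitytests(data, maxlag=4, verbose=False)
print("p-value at lag 2:", res[2][0]['ssr_ftest'][1])
```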


Subject(s)
Judgment , Phonetics , Humans , Magnetoencephalography , Brain Mapping , Electroencephalography
14.
Lang Cogn Neurosci ; 38(6): 765-778, 2023.
Article in English | MEDLINE | ID: mdl-37332658

ABSTRACT

Generativity, the ability to create and evaluate novel constructions, is a fundamental property of human language and cognition. The productivity of generative processes is determined by the scope of the representations they engage. Here we examine the neural representation of reduplication, a productive phonological process that can create novel forms through patterned syllable copying (e.g. ba-mih → ba-ba-mih, ba-mih-mih, or ba-mih-ba). Using MRI-constrained source estimates of combined MEG/EEG data collected during an auditory artificial grammar task, we identified localized cortical activity associated with syllable reduplication pattern contrasts in novel trisyllabic nonwords. Neural decoding analyses identified a set of predominantly right hemisphere temporal lobe regions whose activity reliably discriminated reduplication patterns evoked by untrained, novel stimuli. Effective connectivity analyses suggested that sensitivity to abstracted reduplication patterns was propagated between these temporal regions. These results suggest that localized temporal lobe activity patterns function as abstract representations that support linguistic generativity.

15.
bioRxiv ; 2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37503242

ABSTRACT

While the neural bases of the earliest stages of speech categorization have been widely explored using neural decoding methods, there is still a lack of consensus on questions as basic as how wordforms are represented and in what way this word-level representation influences downstream processing in the brain. Isolating and localizing the neural representations of wordform is challenging because spoken words evoke activation of a variety of representations (e.g., segmental, semantic, articulatory) in addition to form-based representations. We addressed these challenges through a novel integrated neural decoding and effective connectivity design using region of interest (ROI)-based, source reconstructed magnetoencephalography/electroencephalography (MEG/EEG) data collected during a lexical decision task. To localize wordform representations, we trained classifiers on words and nonwords from different phonological neighborhoods and then tested the classifiers' ability to discriminate between untrained target words that overlapped phonologically with the trained items. Training with either word or nonword neighbors supported decoding in many brain regions during an early analysis window (100-400 ms) reflecting primarily incremental phonological processing. Training with word neighbors, but not nonword neighbors, supported decoding in a bilateral set of temporal lobe ROIs, in a later time window (400-600 ms) reflecting activation related to word recognition. These ROIs included bilateral posterior temporal regions implicated in wordform representation. Effective connectivity analyses among regions within this subset indicated that word-evoked activity influenced the decoding accuracy more than nonword-evoked activity did. Taken together, these results evidence functional representation of wordforms in bilateral temporal lobes isolated from phonemic or semantic representations.

16.
J Autism Dev Disord ; 2023 Mar 17.
Article in English | MEDLINE | ID: mdl-36932270

ABSTRACT

Auditory steady-state response (ASSR) has been studied as a potential biomarker for abnormal auditory sensory processing in autism spectrum disorder (ASD), with mixed results. Motivated by prior somatosensory findings of group differences in inter-trial coherence (ITC) between ASD and typically developing (TD) individuals at twice the steady-state stimulation frequency, we examined ASSR at 25 and 50 as well as 43 and 86 Hz in response to 25-Hz and 43-Hz auditory stimuli, respectively, using magnetoencephalography. Data were recorded from 22 ASD and 31 TD children, ages 6-17 years. ITC measures showed prominent ASSRs at the stimulation and double frequencies, without significant group differences. These results do not support ASSR as a robust ASD biomarker of abnormal auditory processing in ASD. Furthermore, the previously observed atypical double-frequency somatosensory response in ASD did not generalize to the auditory modality. Thus, the hypothesis about modality-independent abnormal local connectivity in ASD was not supported.
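
The inter-trial coherence (ITC) measure examined here can be sketched as the length of the mean resultant phase vector across trials at a frequency of interest. The sketch below uses synthetic 25-Hz steady-state trials and a single-frequency FFT estimate; in practice, ITC is computed from time-frequency decompositions of the recorded MEG data.

```python
# Sketch: ITC at the stimulation frequency and its double, from synthetic trials.
import numpy as np

fs, n_trials, n_times = 1000, 120, 1000
t = np.arange(n_times) / fs
rng = np.random.default_rng(0)

# Synthetic trials: a 25-Hz steady-state response with jittered phase plus noise
trials = np.array([np.sin(2 * np.pi * 25 * t + rng.normal(0, 0.4))
                   + rng.standard_normal(n_times) for _ in range(n_trials)])

def itc(trials, fs, freq):
    """ITC at a single frequency via the FFT phase of each trial."""
    spectra = np.fft.rfft(trials, axis=-1)
    freqs = np.fft.rfftfreq(trials.shape[-1], d=1 / fs)
    k = np.argmin(np.abs(freqs - freq))
    phases = np.angle(spectra[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

# ITC at the stimulation frequency and at the double frequency
print("ITC at 25 Hz:", itc(trials, fs, 25.0))
print("ITC at 50 Hz:", itc(trials, fs, 50.0))
```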

17.
Hum Brain Mapp ; 33(9): 2125-34, 2012 Sep.
Article in English | MEDLINE | ID: mdl-21882299

ABSTRACT

Recurrent anticipation of ominous events is central to obsessions, the core symptom of obsessive-compulsive disorder (OCD), yet the neural basis of intrinsic anticipatory processing in OCD is unknown. We studied nonmedicated adults with OCD and case-matched healthy controls in a visual-spatial working memory task with a distractor. Magnetoencephalography was used to examine the medial cortex activity during anticipation of to-be-inhibited distractors and to-be-facilitated retrieval stimuli. In OCD, anticipatory activation to distractors was abnormally reduced within the posterior cingulate and fusiform gyrus compared with prominent activation in controls. Conversely, OCD subjects displayed significantly increased activation to retrieval stimuli within the anterior cingulate and supplementary motor cortex. This previously unreported discordant pattern of medial anticipatory activation in OCD was accompanied by normal performance accuracy. While increased anterior cortex activation in OCD is commonly viewed as a failure of inhibition, the current pattern of data implicates the operation of an anterior compensatory mechanism amending the posterior medial self-regulatory networks disrupted in OCD.


Subject(s)
Anticipation, Psychological/physiology , Nerve Net/physiology , Obsessive-Compulsive Disorder/physiopathology , Obsessive-Compulsive Disorder/psychology , Adult , Analysis of Variance , Behavior/physiology , Brain Mapping , Contingent Negative Variation , Data Interpretation, Statistical , Female , Gyrus Cinguli/physiology , Humans , Image Processing, Computer-Assisted , Magnetic Resonance Imaging , Magnetoencephalography , Male , Motor Cortex/physiology , Neuropsychological Tests , Photic Stimulation , Psychiatric Status Rating Scales
18.
Front Psychiatry ; 13: 902332, 2022.
Article in English | MEDLINE | ID: mdl-35990048

ABSTRACT

Autism Spectrum (AS) is defined primarily by differences in social interactions, with impairments in sensory processing also characterizing the condition. In the search for neurophysiological biomarkers associated with traits relevant to the condition, focusing on sensory processing offers a path that is likely to be translatable across populations with different degrees of ability, as well as into animal models and across imaging modalities. In a prior study, a somatosensory neurophysiological signature of AS was identified using magnetoencephalography (MEG). Specifically, source estimation results showed differences between AS and neurotypically developing (NTD) subjects in the brain response to 25-Hz vibrotactile stimulation of the right fingertips, with lower inter-trial coherence (ITC) observed in the AS group. Here, we examined whether these group differences can be detected without source estimation using scalp electroencephalography (EEG), which is more commonly available in clinical settings than MEG, and therefore offers a greater potential for clinical translation. To that end, we recorded simultaneous whole-head MEG and EEG in 14 AS and 10 NTD subjects (age 15-28 years) using the same vibrotactile paradigm. Based on the scalp topographies, small sets of left-hemisphere MEG and EEG sensors showing the maximum overall ITC were selected for group comparisons. Significant differences between the AS and NTD groups in ITC at 25 Hz as well as at 50 Hz were observed in both MEG and EEG sensor data. For each measure, the mean ITC was lower in the AS than in the NTD group. EEG ITC values correlated with behaviorally assessed somatosensory sensation-avoiding scores. The results show that the ITC information in MEG and EEG signals has substantial overlap; thus, the EEG sensor-based ITC measure of the AS somatosensory processing biomarker, previously identified using source-localized MEG data, has the potential to be developed for clinical use in AS, given the greater accessibility of EEG in clinical settings.
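
The sensor-level analysis outlined above reduces to selecting the sensors with maximal ITC and correlating subject-wise ITC with the behavioral score. The toy sketch below uses synthetic values and placeholder sensor and subject counts.

```python
# Sketch: sensor selection by maximum ITC and correlation with behavior.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
n_subjects, n_sensors = 24, 64

itc_maps = rng.uniform(0.05, 0.6, (n_subjects, n_sensors))   # ITC per subject/sensor
sensation_avoiding = rng.normal(50, 10, n_subjects)           # behavioral scores

# Select a small set of sensors with the maximum group-mean ITC
best = np.argsort(itc_maps.mean(axis=0))[-5:]
subject_itc = itc_maps[:, best].mean(axis=1)

r, p = pearsonr(subject_itc, sensation_avoiding)
print(f"ITC vs. sensation-avoiding score: r = {r:.2f}, p = {p:.3f}")
```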

19.
Front Psychol ; 12: 590155, 2021.
Article in English | MEDLINE | ID: mdl-33776832

ABSTRACT

Processes governing the creation, perception and production of spoken words are sensitive to the patterns of speech sounds in the language user's lexicon. Generative linguistic theory suggests that listeners infer constraints on possible sound patterning from the lexicon and apply these constraints to all aspects of word use. In contrast, emergentist accounts suggest that these phonotactic constraints are a product of interactive associative mapping with items in the lexicon. To determine the degree to which phonotactic constraints are lexically mediated, we observed the effects of learning new words that violate English phonotactic constraints (e.g., srigin) on phonotactic perceptual repair processes in nonword consonant-consonant-vowel (CCV) stimuli (e.g., /sre/). Subjects who learned such words were less likely to "repair" illegal onset clusters (/sr/) and report them as legal ones (/∫r/). Effective connectivity analyses of MRI-constrained reconstructions of simultaneously collected magnetoencephalography (MEG) and EEG data showed that these behavioral shifts were accompanied by changes in the strength of influences of lexical areas on acoustic-phonetic areas. These results strengthen the interpretation of previous results suggesting that phonotactic constraints on perception are produced by top-down lexical influences on speech processing.

20.
Clin Neurophysiol ; 132(3): 708-719, 2021 03.
Article in English | MEDLINE | ID: mdl-33571879

ABSTRACT

OBJECTIVE: To clarify the effects of unfused cranial bones on magnetoencephalography (MEG) signals during early development. METHODS: In a simulation study, we compared the MEG signals over a spherical head model with a circular hole mimicking the anterior fontanel to those over the same head model without the fontanel for different head and fontanel sizes with varying skull thickness and conductivity. RESULTS: The fontanel had small effects according to three indices. The sum of differences in signal over a sensor array due to a fontanel, for example, was < 6% of the sum without the fontanel. However, the fontanel effects were extensive for dipole sources deep in the brain or outside the fontanel for larger fontanels. The effects were comparable in magnitude for tangential and radial sources. Skull thickness significantly increased the effect, while skull conductivity had minor effects. CONCLUSION: The MEG signal is only weakly affected by a fontanel. However, the effects can be extensive and significant for radial sources, thicker skulls, and large fontanels. The fontanel effects can be intuitively explained by the concept of secondary sources at the fontanel wall. SIGNIFICANCE: The minor influence of unfused cranial bones simplifies MEG analysis, but it should be considered for quantitative analysis.
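
The relative-difference index quoted in the RESULTS can be illustrated as a normalized sum of absolute signal differences over the sensor array; the field vectors below are synthetic placeholders rather than outputs of the actual forward model.

```python
# Sketch: relative difference over the sensor array between forward fields
# computed with and without a fontanel (synthetic placeholder vectors).
import numpy as np

rng = np.random.default_rng(0)
b_intact = rng.standard_normal(306)                        # signal per sensor, intact skull
b_fontanel = b_intact + 0.03 * rng.standard_normal(306)    # with a fontanel "hole"

rel_diff = np.sum(np.abs(b_fontanel - b_intact)) / np.sum(np.abs(b_intact))
print(f"relative difference over the array: {100 * rel_diff:.1f} %")  # cf. < 6 %
```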


Subject(s)
Cranial Fontanelles/anatomy & histology , Cranial Fontanelles/physiology , Magnetoencephalography/methods , Models, Anatomic , Humans , Infant , Infant, Newborn , Skull/anatomy & histology , Skull/physiology