Results 1 - 18 of 18
1.
eNeuro ; 10(8), 2023 08.
Article in English | MEDLINE | ID: mdl-37500493

ABSTRACT

When listening to speech, low-frequency cortical activity below 10 Hz can track the speech envelope. Previous studies have demonstrated that the phase lag between the speech envelope and the cortical response can reflect the mechanism by which the envelope-tracking response is generated. Here, we analyze whether this mechanism is modulated by the level of consciousness, by studying how the stimulus-response phase lag varies across disorders of consciousness (DoC). DoC patients in general show less reliable neural tracking of speech. Nevertheless, for DoC patients who do show reliable cortical tracking of speech, the stimulus-response phase lag changes linearly with frequency between 3.5 and 8 Hz, regardless of the state of consciousness. The mean phase lag is also consistent across these patients. These results suggest that the envelope-tracking response to speech can be generated by an automatic process that is barely modulated by the state of consciousness.
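The phase-lag analysis in this abstract can be illustrated with a short sketch (not from the study itself): the cross-spectrum between the speech envelope and the EEG yields a phase lag at each frequency, and the slope of that phase against frequency in the 3.5-8 Hz band gives the response latency (group delay). The arrays, sampling rate, and 120-ms simulated lag below are hypothetical placeholders.

    import numpy as np
    from scipy.signal import csd

    fs = 100.0                                     # hypothetical sampling rate (Hz)
    rng = np.random.default_rng(0)
    envelope = rng.standard_normal(60 * int(fs))   # placeholder speech envelope
    eeg = np.roll(envelope, int(0.12 * fs)) + 0.5 * rng.standard_normal(envelope.size)

    # Cross-spectrum between stimulus and response (Welch's method)
    f, Pxy = csd(envelope, eeg, fs=fs, nperseg=int(4 * fs))

    # Phase lag per frequency, restricted to 3.5-8 Hz as in the abstract
    band = (f >= 3.5) & (f <= 8.0)
    phase = np.unwrap(np.angle(Pxy[band]))

    # Slope of phase vs. angular frequency estimates the group delay (latency)
    slope, _ = np.polyfit(2 * np.pi * f[band], phase, 1)
    print(f"estimated latency: {abs(slope) * 1000:.0f} ms")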


Subject(s)
Consciousness Disorders , Speech Perception , Humans , Consciousness , Acoustic Stimulation/methods , Speech Perception/physiology , Electroencephalography/methods
2.
Neuroimage ; 255: 119182, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35395403

ABSTRACT

Natural scenes contain multi-modal information, which is integrated to form a coherent percept. Previous studies have demonstrated that cross-modal information can modulate the neural encoding of low-level sensory features. These studies, however, mostly focus on the processing of single sensory events or rhythmic sensory sequences. Here, we investigate how the neural encoding of basic auditory and visual features is modulated by cross-modal information when participants watch movie clips primarily composed of non-rhythmic events. We presented audiovisually congruent and audiovisually incongruent movie clips, and since attention can modulate cross-modal interactions, we separately analyzed high- and low-arousal clips. We recorded neural responses using electroencephalography (EEG) and employed the temporal response function (TRF) to quantify the neural encoding of auditory and visual features. The neural encoding of the sound envelope is enhanced in the audiovisual congruent condition relative to the incongruent condition, but this effect is only significant for high-arousal movie clips. In contrast, audiovisual congruency does not significantly modulate the neural encoding of visual features, e.g., luminance or visual motion. In summary, our findings demonstrate asymmetrical cross-modal interactions during the processing of natural scenes that lack rhythmicity: congruent visual information enhances low-level auditory processing, whereas congruent auditory information does not significantly modulate low-level visual processing.
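The temporal response function (TRF) named in the abstract is, in essence, a regularized linear regression from time-lagged stimulus features onto the EEG. A minimal sketch of the idea follows; the stimulus, EEG, lag range, and ridge parameter are all assumptions for illustration, not the study's actual pipeline.

    import numpy as np

    fs = 64                                        # hypothetical sampling rate (Hz)
    rng = np.random.default_rng(1)
    stimulus = rng.standard_normal(120 * fs)       # e.g., the sound envelope
    eeg = np.convolve(stimulus, np.hanning(16), mode="full")[:stimulus.size]
    eeg += rng.standard_normal(stimulus.size)      # simulated response plus noise

    # Design matrix of time-lagged stimulus copies (0-250 ms lags)
    lags = np.arange(int(0.25 * fs))
    X = np.column_stack([np.roll(stimulus, lag) for lag in lags])
    X[:lags[-1], :] = 0                            # discard wrapped-around samples

    # Ridge-regularized least squares: w = (X'X + lambda*I)^-1 X'y
    lam = 1e2
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    print("TRF peak lag (ms):", 1000 * lags[np.argmax(np.abs(w))] / fs)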


Subject(s)
Auditory Perception , Visual Perception , Acoustic Stimulation , Auditory Perception/physiology , Electroencephalography , Humans , Photic Stimulation , Visual Perception/physiology
3.
Mol Phylogenet Evol ; 167: 107336, 2022 02.
Article in English | MEDLINE | ID: mdl-34757169

ABSTRACT

Potato virus X (PVX) is the type member of the genus Potexvirus and is of economic significance. The pathogen is distributed worldwide, threatening solanaceous plants in particular. Based on the coat protein (CP) gene, PVX isolates are classified into two major genotypes (I and II). To gain more insight into the molecular epidemiology and evolution of PVX, recombination analyses were conducted and significant signals were detected. A Bayesian coalescent method was then applied to the time-stamped complete CP sequences. According to the estimates, the global subtype I-1 expanded in the 20th century and was evolving at a moderate rate. Based on the CP phylogenies, a divergence scenario was proposed for PVX. Surveys of codon usage variation showed that PVX genes carry additional bias independent of compositional constraint. In codon preference, PVX was both similar to and different from its three major hosts, potato (Solanum tuberosum), tobacco (Nicotiana tabacum), and tomato (S. lycopersicum). Moreover, suppression of CpG and UpA dinucleotide frequencies was observed in PVX.
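The CpG and UpA suppression reported here is usually quantified as an observed/expected odds ratio, f(XpY) / (f(X)·f(Y)), with values well below 1 indicating suppression. A minimal sketch with a made-up sequence fragment (the real analysis would run over the PVX CP sequences):

    from collections import Counter

    def dinucleotide_odds_ratio(seq, dinuc):
        """Observed/expected frequency ratio of a dinucleotide in an RNA sequence."""
        seq = seq.upper().replace("T", "U")
        mono = Counter(seq)
        di = Counter(seq[i:i + 2] for i in range(len(seq) - 1))
        f_observed = di[dinuc] / (len(seq) - 1)
        f_expected = (mono[dinuc[0]] / len(seq)) * (mono[dinuc[1]] / len(seq))
        return f_observed / f_expected if f_expected else float("nan")

    # Hypothetical fragment standing in for a PVX coat-protein sequence
    cds = "AUGUCAGCACCAGCUAGCACAACACAGCCAAUCGGCUCAACUACCUCAACU"
    for dinuc in ("CG", "UA"):
        print(dinuc, round(dinucleotide_odds_ratio(cds, dinuc), 2))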


Subject(s)
Potexvirus , Solanum tuberosum , Bayes Theorem , Phylogeny , Potexvirus/genetics , Solanum tuberosum/genetics
4.
Reproduction ; 162(6): 461-472, 2021 11 10.
Article in English | MEDLINE | ID: mdl-34591784

ABSTRACT

As a multifunctional transcription factor, YY1 regulates the expression of many genes essential for early embryonic development. RTCB is an RNA ligase that plays a role in tRNA maturation and Xbp1 mRNA splicing. YY1 can bind in vitro to the response element in the proximal promoter of Rtcb and regulate Rtcb promoter activity. However, the in vivo regulation, and whether these two genes are involved in the maternal-fetal dialogue during early pregnancy, remain unclear. In this study, we validated that YY1 binds in vivo to the proximal promoter of Rtcb in the mouse uterus during early pregnancy. Moreover, by establishing a variety of animal models, our study suggests that both YY1 and RTCB may play a role in mouse uterine decidualization and embryo implantation during early pregnancy.


Subject(s)
Amino Acyl-tRNA Synthetases/metabolism , Embryo Implantation , Transcription Factors , YY1 Transcription Factor/metabolism , Animals , Decidua/physiology , Embryo Implantation/physiology , Female , Mice , Pregnancy , RNA Splicing , Transcription Factors/genetics , Uterus
5.
Elife ; 9, 2020 12 21.
Article in English | MEDLINE | ID: mdl-33345775

ABSTRACT

Speech contains rich acoustic and linguistic information. Using highly controlled speech materials, previous studies have demonstrated that cortical activity is synchronous to the rhythms of perceived linguistic units, for example, words and phrases, on top of basic acoustic features, for example, the speech envelope. When listening to natural speech, it remains unclear, however, how cortical activity jointly encodes acoustic and linguistic information. Here we investigate the neural encoding of words using electroencephalography and observe neural activity synchronous to multi-syllabic words when participants naturally listen to narratives. An amplitude modulation (AM) cue for word rhythm enhances the word-level response, but the effect is only observed during passive listening. Furthermore, words and the AM cue are encoded by spatially separable neural responses that are differentially modulated by attention. These results suggest that bottom-up acoustic cues and top-down linguistic knowledge separately contribute to cortical encoding of linguistic units in spoken narratives.
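Detecting neural activity synchronous to word rhythm, as described in this abstract, typically reduces to comparing spectral power at the word presentation rate with power in neighboring frequency bins. A minimal sketch on simulated single-channel EEG; the 2 Hz word rate, recording length, and signal strength are assumptions for illustration only.

    import numpy as np

    fs = 128                                       # hypothetical sampling rate (Hz)
    word_rate = 2.0                                # assumed word presentation rate (Hz)
    rng = np.random.default_rng(2)
    t = np.arange(0, 50, 1 / fs)
    eeg = 0.5 * np.sin(2 * np.pi * word_rate * t) + rng.standard_normal(t.size)

    # With a 50 s recording the word rate falls exactly on a frequency bin
    spec = np.abs(np.fft.rfft(eeg)) ** 2
    freqs = np.fft.rfftfreq(eeg.size, 1 / fs)

    # Compare power at the word rate with the average of neighboring bins
    target = int(np.argmin(np.abs(freqs - word_rate)))
    neighbors = np.r_[target - 5:target - 1, target + 2:target + 6]
    print(f"power SNR at {word_rate} Hz: {spec[target] / spec[neighbors].mean():.1f}")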


Subject(s)
Cerebral Cortex/physiology , Comprehension/physiology , Speech Perception/physiology , Acoustic Stimulation , Adult , Electroencephalography , Female , Humans , Language , Male , Young Adult
6.
Nat Neurosci ; 23(6): 761-770, 2020 06.
Article in English | MEDLINE | ID: mdl-32451482

ABSTRACT

Assessing residual consciousness and cognitive abilities in unresponsive patients is a major clinical concern and a challenge for cognitive neuroscience. Although neuroimaging studies have demonstrated a potential for informing diagnosis and prognosis in unresponsive patients, these methods involve sophisticated brain imaging technologies, which limits their clinical application. In this study, we adopted a new language paradigm that elicited rhythmic brain responses tracking the single-word, phrase and sentence rhythms in speech, to examine whether bedside electroencephalography (EEG) recordings can help inform diagnosis and prognosis. EEG-derived neural signals, including both speech-tracking responses and the temporal dynamics of global brain states, were associated with the behavioral diagnosis of consciousness. Crucially, multiple EEG measures in the language paradigm robustly predicted future outcomes in individual patients. Thus, EEG-based language assessment provides a new and reliable approach to objectively characterize and predict states of consciousness and to longitudinally track individual patients' language processing abilities at the bedside.


Subject(s)
Language , Persistent Vegetative State/diagnosis , Symptom Assessment/methods , Unconsciousness/diagnosis , Acoustic Stimulation , Adolescent , Adult , Aged , Case-Control Studies , Child , Electroencephalography , Female , Humans , Male , Middle Aged , Photic Stimulation , Prognosis , Speech , Young Adult
7.
Sheng Li Xue Bao ; 71(6): 935-945, 2019 Dec 25.
Article in Chinese | MEDLINE | ID: mdl-31879748

ABSTRACT

Speech comprehension is a central cognitive function of the human brain. In cognitive neuroscience, a fundamental question is how neural activity encodes the acoustic properties of a continuous speech stream while resolving multiple levels of linguistic structure at the same time. This paper reviews recently developed research paradigms that employ electroencephalography (EEG) or magnetoencephalography (MEG) to capture neural tracking of the acoustic features or linguistic structures of continuous speech. The review focuses on two questions in speech processing: (1) the encoding of continuously changing acoustic properties of speech; (2) the representation of hierarchical linguistic units, including syllables, words, phrases and sentences. Studies have found that low-frequency cortical activity tracks the speech envelope. In addition, cortical activity on different time scales tracks multiple levels of linguistic units, constituting a representation of hierarchically organized linguistic structure. These studies provide new insights into the processing of continuous speech in the human brain.


Subject(s)
Electroencephalography , Magnetoencephalography , Speech , Acoustic Stimulation , Humans , Speech/physiology , Speech Perception
8.
Cereb Cortex ; 29(4): 1561-1571, 2019 04 01.
Article in English | MEDLINE | ID: mdl-29788144

ABSTRACT

Segregating concurrent sound streams is a computationally challenging task that requires integrating bottom-up acoustic cues (e.g. pitch) and top-down prior knowledge about sound streams. In a multi-talker environment, the brain can segregate different speakers in about 100 ms in auditory cortex. Here, we used magnetoencephalographic (MEG) recordings to investigate the temporal and spatial signature of how the brain utilizes prior knowledge to segregate 2 speech streams from the same speaker, which can hardly be separated based on bottom-up acoustic cues. In a primed condition, the participants know the target speech stream in advance, while in an unprimed condition no such prior knowledge is available. Neural encoding of each speech stream is characterized by the MEG responses tracking the speech envelope. We demonstrate an effect in bilateral superior temporal gyrus and superior temporal sulcus that is much stronger in the primed condition than in the unprimed condition. Priming effects are observed at about 100 ms latency and last more than 600 ms. Interestingly, prior knowledge about the target stream facilitates speech segregation mainly by suppressing the neural tracking of the non-target speech stream. In sum, prior knowledge leads to reliable speech segregation in auditory cortex, even in the absence of reliable bottom-up speech segregation cues.


Subject(s)
Auditory Cortex/physiology , Cues , Speech Perception/physiology , Acoustic Stimulation , Adolescent , Adult , Attention , Female , Humans , Magnetoencephalography , Male , Speech Acoustics , Young Adult
9.
Nat Commun ; 9(1): 5374, 2018 12 18.
Article in English | MEDLINE | ID: mdl-30560906

ABSTRACT

The sensory and motor systems jointly contribute to complex behaviors, but whether motor systems are involved in high-order perceptual tasks such as speech and auditory comprehension remains debated. Here, we show that ocular muscle activity is synchronized to mentally constructed sentences during speech listening, in the absence of any sentence-related visual or prosodic cue. Ocular tracking of sentences is observed in the vertical electrooculogram (EOG), whether the eyes are open or closed, and in eye blinks measured by eye tracking. Critically, the phase of sentence-tracking ocular activity is strongly modulated by temporal attention, i.e., which word in a sentence is attended. Ocular activity also tracks high-level structures in non-linguistic auditory and visual sequences, and captures rapid fluctuations in temporal attention. Ocular tracking of non-visual rhythms possibly reflects global neural entrainment to task-relevant temporal structures across sensory and motor areas, which could serve to implement temporal attention and coordinate cortical networks.


Subject(s)
Acoustic Stimulation/psychology , Auditory Cortex/physiology , Auditory Perception/physiology , Eye Movements/physiology , Speech , Adult , Attention/physiology , Comprehension/physiology , Electronystagmography , Electrooculography , Female , Humans , Male , Young Adult
10.
J Neurosci ; 38(5): 1178-1188, 2018 01 31.
Article in English | MEDLINE | ID: mdl-29255005

ABSTRACT

How the brain groups sequential sensory events into chunks is a fundamental question in cognitive neuroscience. This study investigates whether top-down attention or specific tasks are required for the brain to apply lexical knowledge to group syllables into words. Neural responses tracking the syllabic and word rhythms of a rhythmic speech sequence were concurrently monitored using electroencephalography (EEG). The participants performed different tasks, attending to either the rhythmic speech sequence or a distractor, which was another speech stream or a nonlinguistic auditory/visual stimulus. Attention to speech, but not a lexical-meaning-related task, was required for reliable neural tracking of words, even when the distractor was a nonlinguistic stimulus presented cross-modally. Neural tracking of syllables, however, was reliably observed in all tested conditions. These results strongly suggest that neural encoding of individual auditory events (i.e., syllables) is automatic, while knowledge-based construction of temporal chunks (i.e., words) crucially relies on top-down attention.

SIGNIFICANCE STATEMENT: Why we cannot understand speech when not paying attention is an old question in psychology and cognitive neuroscience. Speech processing is a complex process that involves multiple stages, e.g., hearing and analyzing the speech sound, recognizing words, and combining words into phrases and sentences. The current study investigates which speech-processing stage is blocked when we do not listen carefully. We show that the brain can reliably encode syllables, basic units of speech sounds, even when we do not pay attention. Nevertheless, when distracted, the brain cannot group syllables into multisyllabic words, which are basic units for speech meaning. Therefore, the process of converting speech sound into meaning crucially relies on attention.


Subject(s)
Attention/physiology , Knowledge , Language Development , Learning/physiology , Acoustic Stimulation , Adolescent , Adult , Dichotic Listening Tests , Electroencephalography , Evoked Potentials, Auditory , Female , Humans , Language , Male , Phonetics , Photic Stimulation , Psychomotor Performance , Speech , Young Adult
11.
J Neurosci ; 37(32): 7772-7781, 2017 08 09.
Article in English | MEDLINE | ID: mdl-28626013

ABSTRACT

The extent to which the sleeping brain processes sensory information remains unclear. This is particularly true for continuous and complex stimuli such as speech, in which information is organized into hierarchically embedded structures. Recently, novel metrics for assessing the neural representation of continuous speech have been developed using noninvasive brain recordings that have thus far only been tested during wakefulness. Here we investigated, for the first time, the sleeping brain's capacity to process continuous speech at different hierarchical levels using a newly developed Concurrent Hierarchical Tracking (CHT) approach that allows monitoring the neural representation and processing depth of continuous speech online. Speech sequences were compiled with syllables, words, phrases, and sentences occurring at fixed time intervals such that different linguistic levels correspond to distinct frequencies. This enabled us to distinguish their neural signatures in brain activity. We compared the neural tracking of intelligible versus unintelligible (scrambled and foreign) speech across states of wakefulness and sleep using high-density EEG in humans. We found that neural tracking of stimulus acoustics was comparable across wakefulness and sleep and similar across all conditions regardless of speech intelligibility. In contrast, neural tracking of higher-order linguistic constructs (words, phrases, and sentences) was only observed for intelligible speech during wakefulness and could not be detected at all during nonrapid eye movement or rapid eye movement sleep. These results suggest that, whereas low-level auditory processing is relatively preserved during sleep, higher-level hierarchical linguistic parsing is severely disrupted, thereby revealing the capacity and limits of language processing during sleep.

SIGNIFICANCE STATEMENT: Despite the persistence of some sensory processing during sleep, it is unclear whether high-level cognitive processes such as speech parsing are also preserved. We used a novel approach for studying the depth of speech processing across wakefulness and sleep while tracking neuronal activity with EEG. We found that responses to the auditory sound stream remained intact; however, the sleeping brain did not show signs of hierarchical parsing of the continuous stream of syllables into words, phrases, and sentences. The results suggest that sleep imposes a functional barrier between basic sensory processing and high-level cognitive processing. This paradigm also holds promise for studying residual cognitive abilities in a wide array of unresponsive states.


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Evoked Potentials, Auditory/physiology , Sleep Stages/physiology , Speech Perception/physiology , Wakefulness/physiology , Adult , Electroencephalography/methods , Female , Humans , Male , Sleep/physiology , Speech Intelligibility/physiology , Young Adult
12.
J Neurosci ; 33(13): 5728-35, 2013 Mar 27.
Article in English | MEDLINE | ID: mdl-23536086

ABSTRACT

Speech recognition is remarkably robust to the listening background, even when the energy of background sounds strongly overlaps with that of speech. How the brain transforms the corrupted acoustic signal into a reliable neural representation suitable for speech recognition, however, remains elusive. Here, we hypothesize that this transformation is performed at the level of auditory cortex through adaptive neural encoding, and we test the hypothesis by using MEG to record the neural responses of human subjects listening to a narrated story. Spectrally matched stationary noise, which has maximal acoustic overlap with the speech, is mixed in at various intensity levels. Despite the severe acoustic interference caused by this noise, we demonstrate that low-frequency auditory cortical activity is reliably synchronized to the slow temporal modulations of speech, even when the noise is twice as strong as the speech. Such a reliable neural representation is maintained by intensity contrast gain control and by adaptive processing of temporal modulations at different time scales, corresponding to the neural δ and θ bands. Critically, the precision of this neural synchronization predicts how well a listener can recognize speech in noise, indicating that the precision of the auditory cortical representation limits the performance of speech recognition in noise. Together, these results suggest that, in a complex listening environment, auditory cortex can selectively encode a speech stream in a background-insensitive manner, and this stable neural representation of speech provides a plausible basis for background-invariant recognition of speech.
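The "precision of neural synchronization" in this abstract is commonly quantified as coherence between the recorded neural signal and the speech envelope in the delta and theta bands. A minimal sketch with simulated signals (the arrays, sampling rate, and noise level are placeholders):

    import numpy as np
    from scipy.signal import coherence

    fs = 200.0                                     # hypothetical sampling rate (Hz)
    rng = np.random.default_rng(3)
    envelope = rng.standard_normal(int(120 * fs))  # placeholder speech envelope
    meg = 0.6 * envelope + rng.standard_normal(envelope.size)   # simulated noisy tracking

    f, coh = coherence(envelope, meg, fs=fs, nperseg=int(4 * fs))

    # Average coherence in the delta (1-4 Hz) and theta (4-8 Hz) bands
    for name, lo, hi in [("delta", 1, 4), ("theta", 4, 8)]:
        band = (f >= lo) & (f < hi)
        print(f"{name} coherence: {coh[band].mean():.2f}")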


Subject(s)
Adaptation, Psychological/physiology , Auditory Cortex/cytology , Auditory Perception/physiology , Discrimination, Psychological/physiology , Neurons/physiology , Speech/physiology , Acoustic Stimulation , Adult , Analysis of Variance , Auditory Cortex/physiology , Brain Waves/physiology , Computer Simulation , Cortical Synchronization , Female , Humans , Magnetoencephalography , Male , Models, Neurological , Noise , Psychoacoustics , Reaction Time , Spectrum Analysis , Statistics as Topic , Young Adult
13.
Proc Natl Acad Sci U S A ; 109(29): 11854-9, 2012 Jul 17.
Article in English | MEDLINE | ID: mdl-22753470

ABSTRACT

A visual scene is perceived in terms of visual objects. Similar ideas have been proposed for the analogous case of auditory scene analysis, although their hypothesized neural underpinnings have not yet been established. Here, we address this question by recording from subjects selectively listening to one of two competing speakers, of either different or the same sex, using magnetoencephalography. Individual neural representations are seen for the speech of the two speakers, each selectively phase-locked to the rhythm of the corresponding speech stream, and from each the temporal envelope of that speech stream can be exclusively reconstructed. The neural representation of the attended speech dominates responses (with latency near 100 ms) in posterior auditory cortex. Furthermore, when the intensity of the attended and background speakers is separately varied over an 8-dB range, the neural representation of the attended speech adapts only to the intensity of that speaker but not to the intensity of the background speaker, suggesting an object-level intensity gain control. In summary, these results indicate that concurrent auditory objects, even if spectrotemporally overlapping and not resolvable at the auditory periphery, are neurally encoded individually in auditory cortex and emerge as fundamental representational units for top-down attentional modulation and bottom-up neural adaptation.
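Reconstructing the temporal envelope of one speech stream from the neural response, as described above, amounts to a backward (decoding) linear model: a regression from multichannel neural data at several lags onto the envelope. A minimal sketch with simulated sensors; the number of channels, lags, and delays are assumptions, and a real analysis would add regularization and cross-validation.

    import numpy as np

    fs = 100                                       # hypothetical sampling rate (Hz)
    rng = np.random.default_rng(4)
    envelope = rng.standard_normal(60 * fs)        # envelope of the attended stream
    # Three simulated sensors, each a delayed, noisy copy of the envelope
    sensors = np.stack([np.roll(envelope, d) + rng.standard_normal(envelope.size)
                        for d in (5, 8, 12)], axis=1)

    # Decoder: map sensors at several lags back onto the envelope
    lags = range(20)
    X = np.column_stack([np.roll(sensors, -lag, axis=0) for lag in lags])
    w, *_ = np.linalg.lstsq(X, envelope, rcond=None)
    reconstruction = X @ w

    r = np.corrcoef(reconstruction, envelope)[0, 1]
    print(f"reconstruction accuracy r = {r:.2f}")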


Subject(s)
Attention , Auditory Cortex/physiology , Auditory Perception/physiology , Discrimination, Psychological/physiology , Hearing/physiology , Acoustic Stimulation , Adult , Female , Humans , Magnetoencephalography , Male , Models, Theoretical , Sex Factors
14.
J Neurophysiol ; 107(8): 2033-41, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21975451

ABSTRACT

Slow acoustic modulations below 20 Hz, of varying bandwidths, are dominant components of speech and many other natural sounds. The dynamic neural representations of these modulations are difficult to study through noninvasive neural-recording methods, however, because of the omnipresent background of slow neural oscillations throughout the brain. We recorded the auditory steady-state responses (aSSR) to slow amplitude modulations (AM) from 14 human subjects using magnetoencephalography. The responses to five AM rates (1.5, 3.5, 7.5, 15.5, and 31.5 Hz) and four types of carrier (pure tone and 1/3-, 2-, and 5-octave pink noise) were investigated. The phase-locked aSSR was detected reliably in all conditions. The response power generally decreases with increasing modulation rate, and the response latency is between 100 and 150 ms for all but the highest rates. Response properties depend only weakly on the bandwidth. Analysis of the complex-valued aSSR magnetic fields in the Fourier domain reveals several neural sources with different response phases. These neural sources of the aSSR, when approximated by a single equivalent current dipole (ECD), are distinct from and medial to the ECD location of the N1m response. These results demonstrate that the globally synchronized activity in the human auditory cortex is phase locked to slow temporal modulations below 30 Hz, and the neural sensitivity decreases with an increasing AM rate, with relative insensitivity to bandwidth.
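The phase-locked aSSR at each modulation rate can be quantified from the Fourier component of the trial-averaged response at exactly that rate, since averaging across trials cancels activity that is not phase locked. A minimal sketch with simulated trials (the AM rate, trial count, duration, and SNR are made up):

    import numpy as np

    fs, dur, n_trials = 500.0, 4.0, 40             # all values are placeholders
    f_am = 3.5                                     # one of the AM rates in the study (Hz)
    t = np.arange(0, dur, 1 / fs)
    rng = np.random.default_rng(5)

    # Simulated trials: a small phase-locked component at f_am buried in noise
    trials = (0.2 * np.sin(2 * np.pi * f_am * t + 0.8)
              + rng.standard_normal((n_trials, t.size)))

    # Averaging across trials keeps the phase-locked part; read out one Fourier bin
    evoked = trials.mean(axis=0)
    bin_idx = int(round(f_am * dur))               # f_am falls exactly on an FFT bin
    component = np.fft.rfft(evoked)[bin_idx]
    print(f"aSSR power: {abs(component) ** 2:.3f}, phase: {np.angle(component):.2f} rad")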


Subject(s)
Acoustic Stimulation/methods , Auditory Cortex/physiology , Auditory Perception/physiology , Evoked Potentials, Auditory/physiology , Magnetoencephalography/methods , Female , Humans , Male , Reaction Time/physiology , Time Factors
15.
DNA Cell Biol ; 31(4): 496-503, 2012 Apr.
Article in English | MEDLINE | ID: mdl-21977911

ABSTRACT

The focal adhesion-associated protein (FAAP), the product of the murine D10Wsu52e gene, is involved in modulating cell adhesion dynamics. The ubiquitously expressed protein belongs to the highly conserved UPF0027 family, the newly identified RNA>p ligase family. To understand the mechanisms underlying FAAP expression and regulation, we first mapped its major transcription start site to the nucleotide 79 bp upstream of the ATG codon. The murine FAAP 2.1 kb 5'-flanking region was cloned, analyzed, and aligned with the corresponding 1.7 kb region of its human homolog HSPC117. Despite differences in activity, in vitro cell transfection and in vivo testis electroporation identified an efficient 0.2 kb promoter region lacking a functional TATA-box. Gel shift assays confirmed the specific interaction between Yin Yang-1 (YY1) and a potential element in the proximal region of the FAAP promoter. Site mutation, truncation, RNAi, and overexpression analyses suggested that YY1 is an important regulator of the FAAP promoter.


Subject(s)
Cell Adhesion/genetics , Gene Expression Regulation/genetics , Promoter Regions, Genetic/genetics , Proteins/metabolism , YY1 Transcription Factor/metabolism , 5' Flanking Region/genetics , Amino Acyl-tRNA Synthetases , Animals , Base Sequence , Blotting, Northern , Cloning, Molecular , DNA Primers/genetics , DNA, Complementary/genetics , Electrophoretic Mobility Shift Assay , Electroporation , Male , Mice , Molecular Sequence Data , Proteins/genetics , RNA Interference , Sequence Alignment , Sequence Analysis, DNA , Sequence Homology , Testis/metabolism
16.
J Neurophysiol ; 102(5): 2731-43, 2009 Nov.
Article in English | MEDLINE | ID: mdl-19692508

ABSTRACT

Natural sounds such as speech contain multiple levels and multiple types of temporal modulations. Because of nonlinearities in the auditory system, however, the neural response to multiple, simultaneous temporal modulations cannot be predicted from the neural responses to single modulations. Here we show the cortical neural representation of an auditory stimulus simultaneously frequency modulated (FM) at a high rate, f(FM) approximately 40 Hz, and amplitude modulated (AM) at a slow rate, f(AM) < 15 Hz. Magnetoencephalography recordings show that the fast FM and slow AM stimulus features evoke two separate but not independent auditory steady-state responses (aSSR) at f(FM) and f(AM), respectively. The power, rather than the phase locking, of both aSSRs decreases with increasing stimulus f(AM). The aSSR at f(FM) is itself simultaneously amplitude modulated and phase modulated with fundamental frequency f(AM), showing that the slow stimulus AM is encoded not only in the neural response at f(AM) but also in the instantaneous amplitude and phase of the neural response at f(FM). Both the amplitude modulation and the phase modulation of the aSSR at f(FM) are most salient for low stimulus f(AM) but remain observable at the highest tested f(AM) (13.8 Hz). The instantaneous amplitude of the aSSR at f(FM) is successfully predicted by a model containing temporal integration on two time scales, approximately 25 and approximately 200 ms, followed by a static compression nonlinearity.
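The finding that the roughly 40 Hz response is itself amplitude and phase modulated at f(AM) can be illustrated by band-passing the response around f(FM), taking its analytic signal, and inspecting the spectrum of the instantaneous amplitude. A minimal sketch with simulated data; every parameter below is a placeholder rather than a value from the study.

    import numpy as np
    from scipy.signal import butter, filtfilt, hilbert

    fs = 1000.0                                    # hypothetical sampling rate (Hz)
    f_fm, f_am = 40.0, 5.0                         # assumed FM and AM rates (Hz)
    t = np.arange(0, 20, 1 / fs)
    rng = np.random.default_rng(6)

    # Simulated response: a 40 Hz carrier whose amplitude waxes and wanes at f_am
    response = (1 + 0.5 * np.sin(2 * np.pi * f_am * t)) * np.sin(2 * np.pi * f_fm * t)
    response += 0.5 * rng.standard_normal(t.size)

    # Band-pass around f_fm, then take the instantaneous amplitude of the analytic signal
    b, a = butter(4, [f_fm - 10, f_fm + 10], btype="bandpass", fs=fs)
    inst_amp = np.abs(hilbert(filtfilt(b, a, response)))

    # The slow AM reappears as a peak at f_am in the spectrum of the amplitude
    spec = np.abs(np.fft.rfft(inst_amp - inst_amp.mean()))
    freqs = np.fft.rfftfreq(inst_amp.size, 1 / fs)
    print(f"amplitude-spectrum peak below 15 Hz: {freqs[np.argmax(spec[freqs < 15])]:.2f} Hz")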


Subject(s)
Auditory Cortex/cytology , Auditory Cortex/physiology , Auditory Perception/physiology , Brain Mapping , Neurons/physiology , Acoustic Stimulation/methods , Adolescent , Adult , Computer Simulation , Female , Humans , Magnetoencephalography/methods , Male , Models, Neurological , Nonlinear Dynamics , Psychoacoustics , Reaction Time/physiology , Sound , Young Adult
17.
Mol Cell Biochem ; 308(1-2): 247-52, 2008 Jan.
Article in English | MEDLINE | ID: mdl-17973082

ABSTRACT

Peroxisome proliferator-activated receptor delta (PPARdelta) is a nuclear hormone receptor belonging to the steroid receptor superfamily and is a molecular target for drugs to treat hypertriglyceridemia and type 2 diabetes. Yin Yang 1 (YY1) is a transcription factor that can repress or activate transcription of the genes with which it interacts. In this report, we show that YY1 specifically interacts with the PPARdelta promoter. Overexpression of YY1 in HeLa and NIH 3T3 cells repressed the activity of the PPARdelta promoter, while the PPARdelta promoter activity was enhanced when YY1 was knocked down by YY1 siRNA. We also show that YY1 in nuclear extracts was able to bind the PPARdelta promoter directly. These results suggest that YY1 might be a negative regulator of PPARdelta gene expression through its direct interaction with the PPARdelta promoter.


Subject(s)
PPAR delta/genetics , Promoter Regions, Genetic/genetics , Repressor Proteins/metabolism , YY1 Transcription Factor/metabolism , Animals , Binding Sites , COS Cells , Cell Extracts , Cell Nucleus/metabolism , Chlorocebus aethiops , Female , HeLa Cells , Humans , Mice , Mutation/genetics , NIH 3T3 Cells , Pregnancy , Protein Binding , Transcription, Genetic , Uterus/cytology , Uterus/metabolism
18.
Mol Reprod Dev ; 63(1): 47-54, 2002 Sep.
Article in English | MEDLINE | ID: mdl-12211060

ABSTRACT

Basigin, a transmembrane glycoprotein belonging to the immunoglobulin superfamily, has been shown to be essential for fertilization and implantation. The aim of this study was to determine the expression and hormonal regulation of the basigin gene in the mouse uterus during the peri-implantation period. Basigin immunostaining and mRNA were strongly localized in the luminal and glandular epithelium on day 1 of pregnancy and gradually decreased to a basal level from days 2-4 of pregnancy. Basigin mRNA expression in the sub-luminal stroma was first detected on day 3 of pregnancy and increased on day 4 of pregnancy. On day 5 of pregnancy, the expression of basigin protein and mRNA was detected only in the implanting embryos and in the luminal epithelium and sub-luminal stroma surrounding the embryos. A similar expression pattern of basigin was also induced in the delayed-implantation uterus activated by estrogen injection. On days 6-8 of pregnancy, although only a basal level of basigin protein was detected in the secondary decidual zone, basigin mRNA was strongly expressed at this location. Basigin mRNA was also highly expressed in decidualized cells under artificial decidualization. Estrogen significantly stimulated basigin expression in the ovariectomized mouse uterus. A high level of basigin immunostaining and mRNA was also seen in proestrus and estrus uteri. These results suggest that basigin expression is closely related to mouse implantation and is up-regulated by estrogen.


Subject(s)
Antigens, CD , Antigens, Neoplasm , Antigens, Surface , Avian Proteins , Blood Proteins , Embryo Implantation , Estradiol/pharmacology , Gene Expression Regulation, Developmental , Membrane Glycoproteins/biosynthesis , Uterus/metabolism , Animals , Basigin , Decidua/metabolism , Female , Gene Expression Regulation, Developmental/drug effects , Male , Membrane Glycoproteins/genetics , Mice , Ovariectomy , Pregnancy , Progesterone/pharmacology , Pseudopregnancy/genetics , Pseudopregnancy/metabolism , RNA, Messenger/biosynthesis , Sesame Oil/pharmacology , Time Factors , Uterus/drug effects