ABSTRACT
Adrenergic receptors (ARs) are preferentially expressed by innate lymphocytes such as natural killer (NK) cells. Here, we study the effect of epinephrine-mediated stimulation of the β2-adrenergic receptor (β2AR) on the function of human NK cells. Epinephrine stimulation inhibited early NK cell signaling events and blocked the function of the integrin LFA-1. This reduced the adhesion of NK cells to ICAM-1, explaining how NK cells are mobilized into the peripheral blood upon epinephrine release during acute stress or exercise. Additionally, epinephrine stimulation transiently reduced NK cell degranulation, serial killing, and cytokine production and affected metabolic changes upon NK cell activation via the cAMP-protein kinase A (PKA) pathway. Repeated exposure to β2AR agonists resulted in the desensitization of the β2AR via a PKA feedback loop-initiated G-protein switch. Therefore, acute epinephrine stimulation of chronically β2AR-stimulated NK cells no longer resulted in inhibited signaling and reduced LFA-1 activity. Sustained stimulation by long-acting β2-agonists (LABA) not only inhibited NK cell functions but also resulted in desensitization of the β2AR. However, peripheral NK cells from LABA-treated asthma patients still showed an unchanged response to epinephrine stimulation, demonstrating that local LABA administration does not result in detectable systemic effects on NK cells.
ABSTRACT
BACKGROUND: Immune checkpoint inhibition (ICI) is currently the most effective treatment to induce durable responses in metastatic melanoma. The aims of this study are the characterization of patients with early, late, and non-response to ICI and the analysis of survival outcomes in a real-world patient cohort. METHODS: Patients who received PD-1-based immunotherapy for non-resectable stage-IV melanoma in any therapy line were selected from the prospective multicenter real-world DeCOG study ADOREG-TRIM (NCT05750511). Patients showing complete (CR) or partial (PR) response already during the first 3 months of treatment (Early Responders, EarlyR) were compared to patients showing CR/PR at a later time (Late Responders, LateR), patients with stable disease (SD), and patients showing progressive disease (Non-Responders, NonR). RESULTS: Of 522 patients, 8.2% were EarlyR (n = 43), 19.0% were LateR (n = 99), 37.0% had SD (n = 193), and 35.8% were NonR (n = 187). EarlyR, LateR, and SD patients had comparable baseline characteristics. Multivariate log-binomial regression analyses adjusted for age and sex revealed positive tumor PD-L1 (RR = 1.99, 95% CI 1.14-3.46, p = 0.015) and normal serum CRP (RR = 1.59, 95% CI 0.93-2.70, p = 0.036) as independently associated with the achievement of an early response compared to NonR. The median progression-free and overall survival were 46.0 months (95% CI 19.1; NR) and 47.8 months (95% CI 36.9; NR) for EarlyR, NR (95% CI NR; NR) for LateR, 8.1 months (95% CI 7.0; 10.4) and 35.4 months (95% CI 29.2; NR) for SD, and 2.0 months (95% CI 1.9; 2.1) and 6.1 months (95% CI 4.6; 8.8) for NonR patients. CONCLUSION: Less than 10% of metastatic melanoma patients achieved an early response during the first 3 months of PD-1-based immunotherapy. Early responders were not superior to late responders in terms of response durability and survival.
Subjects
Immune Checkpoint Inhibitors, Melanoma, Programmed Cell Death 1 Receptor, Humans, Melanoma/drug therapy, Melanoma/immunology, Melanoma/mortality, Melanoma/therapy, Melanoma/secondary, Melanoma/pathology, Male, Female, Middle Aged, Aged, Immune Checkpoint Inhibitors/therapeutic use, Immune Checkpoint Inhibitors/adverse effects, Programmed Cell Death 1 Receptor/antagonists & inhibitors, Prospective Studies, Skin Neoplasms/immunology, Skin Neoplasms/pathology, Skin Neoplasms/mortality, Skin Neoplasms/drug therapy, Skin Neoplasms/therapy, Immunotherapy/methods, Time Factors, Adult
ABSTRACT
Simulation is a crucial tool for the evaluation and comparison of statistical methods. How to design fair and neutral simulation studies is therefore of great interest for both researchers developing new methods and practitioners confronted with the choice of the most suitable method. The term simulation usually refers to parametric simulation, that is, computer experiments using artificial data made up of pseudo-random numbers. Plasmode simulation, that is, computer experiments using the combination of resampling feature data from a real-life dataset and generating the target variable with a known user-selected outcome-generating model, is an alternative that is often claimed to produce more realistic data. We compare parametric and Plasmode simulation for the example of estimating the mean squared error (MSE) of the least squares estimator (LSE) in linear regression. If the true underlying data-generating process (DGP) and the outcome-generating model (OGM) were known, parametric simulation would obviously be the best choice in terms of estimating the MSE well. However, in reality, both are usually unknown, so researchers have to make assumptions: in Plasmode simulation studies for the OGM, in parametric simulation for both DGP and OGM. Most likely, these assumptions do not exactly reflect the truth. Here, we aim to find out how assumptions deviating from the true DGP and the true OGM affect the performance of parametric and Plasmode simulations in the context of MSE estimation for the LSE and in which situations which simulation type is preferable. Our results suggest that the preferable simulation method depends on many factors, including the number of features, and on how and to what extent the assumptions of a parametric simulation differ from the true DGP. Also, the resampling strategy used for Plasmode influences the results. In particular, subsampling with a small sampling proportion can be recommended.
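The following minimal Python sketch (not taken from the paper; the dataset, sample sizes, and outcome-generating model are invented for illustration) contrasts the two simulation types described above for estimating the MSE of the least squares estimator: parametric simulation draws the feature data from an assumed DGP, whereas Plasmode simulation resamples the feature data from a "real" dataset and only generates the outcome from a known OGM.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a real-life dataset: 200 observations, 3 features.
n, p = 200, 3
X_real = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 2.0])  # user-selected OGM coefficients
sigma = 1.0                             # noise standard deviation of the OGM


def mse_of_lse(X, n_rep=200):
    """Monte Carlo estimate of the MSE of the least squares estimator on design X."""
    errors = []
    for _ in range(n_rep):
        y = X @ beta_true + rng.normal(scale=sigma, size=X.shape[0])
        beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
        errors.append(np.sum((beta_hat - beta_true) ** 2))
    return np.mean(errors)


# Parametric simulation: feature data drawn from an assumed parametric DGP.
mse_parametric = np.mean(
    [mse_of_lse(rng.normal(size=(n, p))) for _ in range(50)]
)

# Plasmode simulation: feature data resampled from the real dataset
# (bootstrap here; subsampling with a small proportion is an alternative).
mse_plasmode = np.mean(
    [mse_of_lse(X_real[rng.integers(0, n, size=n)]) for _ in range(50)]
)

print(f"parametric MSE estimate: {mse_parametric:.4f}")
print(f"Plasmode   MSE estimate: {mse_plasmode:.4f}")
```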
Subjects
Computer Simulation, Least-Squares Analysis, Linear Models, Humans
ABSTRACT
The alkaline comet assay is frequently used as an in vivo follow-up test within different regulatory environments to characterize the DNA-damaging potential of different test items. The corresponding OECD Test Guideline 489 highlights the importance of statistical analyses and historical control data (HCD) but does not provide detailed procedures. Therefore, the working group "Statistics" of the German-speaking Society for Environmental Mutation Research (GUM) collected HCD from five laboratories and >200 comet assay studies and performed several statistical analyses. Key results included that (I) the observed large inter-laboratory effects argue against the use of absolute quality thresholds, (II) >50% zero values on a slide are considered problematic due to their influence on slide or animal summary statistics, and (III) the type of summarizing measure for single-cell data (e.g., median, arithmetic and geometric mean) may lead to extreme differences in the resulting animal tail intensities and study outcomes in the HCD. These summarizing values increase the reliability of analysis results by better meeting statistical model assumptions, but at the cost of information loss. Furthermore, the relation between negative and positive control groups in the data set was always satisfactory (or sufficient), based on ratio, difference, and quantile analyses.
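To illustrate point (III) above, the following small Python sketch (hypothetical tail-intensity values, not HCD from the study) shows how strongly the median, arithmetic mean, and geometric mean of skewed single-cell data with many zero values can diverge as slide-level summary statistics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical % tail intensities for 150 cells on one slide: right-skewed,
# with a substantial fraction of zero values, as is typical for comet data.
cells = np.concatenate([
    np.zeros(60),                                 # 40% zero values
    rng.lognormal(mean=1.0, sigma=1.0, size=90),  # skewed positive values
])

arithmetic_mean = cells.mean()
median_value = np.median(cells)
# The geometric mean is undefined for zeros; a common workaround is to add a
# small constant before log-transforming (the choice of constant matters).
geometric_mean = np.exp(np.log(cells + 0.1).mean()) - 0.1

print(f"arithmetic mean: {arithmetic_mean:.2f}")
print(f"median:          {median_value:.2f}")
print(f"geometric mean:  {geometric_mean:.2f}")
```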
Subjects
DNA Damage, Research Design, Animals, Comet Assay/methods, Reproducibility of Results, Mutation
ABSTRACT
BACKGROUND & AIMS: Cholemic nephropathy (CN) is a severe complication of cholestatic liver diseases for which there is no specific treatment. We revisited its pathophysiology with the aim of identifying novel therapeutic strategies. METHODS: Cholestasis was induced by bile duct ligation (BDL) in mice. Bile flux in kidneys and livers was visualized by intravital imaging, supported by MALDI mass spectrometry imaging and liquid chromatography-tandem mass spectrometry. The effect of AS0369, a systemically bioavailable apical sodium-dependent bile acid transporter (ASBT) inhibitor, was evaluated by intravital imaging, RNA-sequencing, histological, blood, and urine analyses. Translational relevance was assessed in kidney biopsies from patients with CN, mice with a humanized bile acid (BA) spectrum, and via analysis of serum BAs and KIM-1 (kidney injury molecule 1) in patients with liver disease and hyperbilirubinemia. RESULTS: Proximal tubular epithelial cells (TECs) reabsorbed and enriched BAs, leading to oxidative stress and death of proximal TECs, casts in distal tubules and collecting ducts, peritubular capillary leakiness, and glomerular cysts. Renal ASBT inhibition by AS0369 blocked BA uptake into TECs and prevented kidney injury up to 6 weeks after BDL. Similar results were obtained in mice with humanized BA composition. In patients with advanced liver disease, serum BAs were the main determinant of KIM-1 levels. ASBT expression in TECs was preserved in biopsies from patients with CN, further highlighting the translational potential of targeting ASBT to treat CN. CONCLUSIONS: BA enrichment in proximal TECs followed by oxidative stress and cell death is a key early event in CN. Inhibiting renal ASBT and consequently BA enrichment in TECs prevents CN and systemically decreases BA concentrations. IMPACT AND IMPLICATIONS: Cholemic nephropathy (CN) is a severe complication of cholestasis and an unmet clinical need. We demonstrate that CN is triggered by the renal accumulation of bile acids (BAs) that are considerably increased in the systemic blood. Specifically, the proximal tubular epithelial cells of the kidney take up BAs via the apical sodium-dependent bile acid transporter (ASBT). We developed a therapeutic compound that blocks ASBT in the kidneys, prevents BA overload in tubular epithelial cells, and almost completely abolished all disease hallmarks in a CN mouse model. Renal ASBT inhibition represents a potential therapeutic strategy for patients with CN.
Subjects
Carrier Proteins, Cholestasis, Kidney Diseases, Liver Diseases, Membrane Glycoproteins, Sodium-Dependent Organic Anion Transporters, Symporters, Humans, Mice, Animals, Cholestasis/complications, Cholestasis/metabolism, Kidney/metabolism, Symporters/metabolism, Bile Acids and Salts/metabolism, Liver/metabolism, Bile Ducts/metabolism, Liver Diseases/metabolism, Sodium
ABSTRACT
High-throughput RNA sequencing experiments are widely conducted and analyzed to identify differentially expressed genes (DEGs). The statistical models calculated for this task are often not clear to practitioners, and analyses may not be optimally tailored to the research hypothesis. Often, interaction effects (IEs) are the mathematical equivalent of the biological research question but are not considered for different reasons. We fill this gap by explaining and presenting the potential benefit of IEs in the search for DEGs using RNA-Seq data of mice that received different diets for different time periods. Using an IE model leads to a smaller, but likely more biologically informative, set of DEGs compared to a common approach that avoids the calculation of IEs.
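As a minimal illustration of the modelling idea (not the pipeline used in the paper), the Python sketch below compares a main-effects-only model with an interaction model for a single gene, using simulated log-expression values and statsmodels; in practice, dedicated RNA-Seq tools with count-based models would be used, and all variable names and effect sizes here are invented.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Illustrative design: 2 diets x 2 feeding periods, 5 mice per group.
df = pd.DataFrame({
    "diet": np.repeat(["control", "high_fat"], 10),
    "time": np.tile(np.repeat(["4w", "12w"], 5), 2),
})
# Simulate log-expression of one gene with a diet-by-time interaction.
interaction = ((df["diet"] == "high_fat") & (df["time"] == "12w")) * 1.5
df["log_expr"] = 5.0 + interaction + rng.normal(scale=0.5, size=len(df))

# Main-effects-only model vs. model including the interaction effect (IE).
fit_main = smf.ols("log_expr ~ diet + time", data=df).fit()
fit_ie = smf.ols("log_expr ~ diet * time", data=df).fit()

# The IE term answers the biological question directly:
# does the diet effect on this gene differ between the feeding periods?
print(fit_ie.params)
print(fit_ie.compare_f_test(fit_main))  # (F statistic, p-value, df difference)
```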
Subjects
Gene Expression Profiling, Transcriptome, Animals, Mice, Gene Expression Profiling/methods, RNA-Seq, High-Throughput Nucleotide Sequencing/methods, RNA Sequence Analysis/methods
ABSTRACT
BACKGROUND: An important problem in toxicology in the context of gene expression data is the simultaneous inference of a large number of concentration-response relationships. The quality of the inference substantially depends on the choice of design of the experiments, in particular on the set of different concentrations at which observations are taken for the different genes under consideration. As this set has to be the same for all genes, the efficient planning of such experiments is very challenging. We address this problem by determining efficient designs for the simultaneous inference of a large number of concentration-response models. For that purpose, we construct both a D-optimality criterion for simultaneous inference and a K-means procedure that clusters the support points of the locally D-optimal designs of the individual models. RESULTS: We show that planning experiments to address the simultaneous inference of a large number of concentration-response relationships yields a substantially more accurate statistical analysis. In particular, we compare the performance of the constructed designs to that of other commonly used designs in terms of D-efficiencies and in terms of the quality of the resulting model fits using a real data example dealing with valproic acid. For the quality comparison, we perform an extensive simulation study. CONCLUSIONS: The design maximizing the D-optimality criterion for simultaneous inference improves the inference of the different concentration-response relationships substantially. The design based on the K-means procedure also performs well, whereas a log-equidistant design, which was also included in the analysis, performs poorly in terms of the quality of the simultaneous inference. Based on our findings, the D-optimal design for simultaneous inference should be used for upcoming analyses dealing with high-dimensional gene expression data.
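The K-means step described above can be sketched as follows in Python (with hypothetical support points rather than the locally D-optimal designs computed in the paper): the support points of all gene-wise designs are pooled and clustered, and the cluster centers define the common concentration set.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(3)

# Hypothetical support points (log10 concentrations) of locally D-optimal
# designs for 500 gene-specific concentration-response models, 4 points each.
support_points = rng.uniform(low=-3.0, high=2.0, size=(500, 4))

# Pool all support points and cluster them into the number of concentrations
# that can actually be realized in the joint experiment (here: 7).
pooled = support_points.reshape(-1, 1)
kmeans = KMeans(n_clusters=7, n_init=10, random_state=0).fit(pooled)

# The sorted cluster centers serve as the common design for all genes.
common_design = np.sort(kmeans.cluster_centers_.ravel())
print("shared log10 concentrations:", np.round(common_design, 2))
```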
Subjects
Research Design, Computer Simulation
ABSTRACT
In toxicological concentration-response studies, a frequent goal is the determination of an 'alert concentration', i.e. the lowest concentration at which a notable change in the response compared to the control is observed. In high-throughput gene expression experiments, e.g. based on microarray or RNA-seq technology, concentration-response profiles can be measured for thousands of genes simultaneously. One approach for determining the alert concentration is to fit a parametric model to the data, which allows interpolation between the tested concentrations. It is well known that the quality of a model fit improves with the number of measured data points. However, adding new replicates for existing concentrations or even several replicates for new concentrations is time-consuming and expensive. Here, we propose an empirical Bayes approach to information sharing across genes, where, in essence, the individual estimate of a specific parameter of the fitted model is replaced by a weighted mean of this estimate and the mean of all estimates across the entire set of genes. Results of a controlled plasmode simulation study show that for many genes a notable improvement in terms of the mean squared error (MSE) between the estimate and the true underlying value of the parameter can be observed. However, for some genes, the MSE increases, and this cannot be prevented by using a more sophisticated prior distribution in the Bayesian approach.
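The core shrinkage idea, a weighted mean of each gene's individual estimate and the mean across all genes, can be sketched as follows; the weighting scheme below is a standard precision-based heuristic and the numbers are invented, so it is not the exact prior used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical per-gene estimates of one model parameter (e.g., the slope of
# a fitted concentration-response curve) and their estimated variances.
theta_hat = rng.normal(loc=1.0, scale=0.8, size=1000)
var_hat = rng.uniform(0.05, 0.5, size=1000)

# Moment-based estimates of the overall mean and the between-gene variance.
prior_mean = theta_hat.mean()
between_var = max(theta_hat.var(ddof=1) - var_hat.mean(), 1e-8)

# Shrinkage weight: the noisier an individual estimate, the more strongly it
# is pulled toward the mean of all estimates across the entire set of genes.
weight = between_var / (between_var + var_hat)
theta_shrunk = weight * theta_hat + (1 - weight) * prior_mean

print("average weight on the individual estimates:", round(weight.mean(), 2))
```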
Subjects
Information Dissemination, Bayes Theorem, Computer Simulation, RNA-Seq, Gene Expression
ABSTRACT
Parabens have been used for decades as preservatives in food, drugs, and cosmetics. The majority, however, were banned in 2009 and 2014, leaving only the methyl, ethyl, propyl, and butyl derivatives available for subsequent use. Methyl- and propylparaben have been extensively tested in vivo, with no resulting evidence for developmental and reproductive toxicity (DART). In contrast, ethylparaben has not yet been tested for DART in animal experiments, and it is currently debated whether additional animal studies are warranted. In order to compare the four currently approved parabens, we used a previously established in vitro test based on human induced pluripotent stem cells (iPSC) that are exposed to test substances during their differentiation to neuroectodermal cells. EC50 values for cytotoxicity were 906 µM, 698 µM, 216 µM, and 63 µM for methyl-, ethyl-, propyl-, and butylparaben, respectively, demonstrating that cytotoxicity increases with increasing alkyl chain length. Genome-wide analysis demonstrated that FDR-adjusted significant gene expression changes occurred only at cytotoxic or close-to-cytotoxic concentrations, for example 1720 differentially expressed genes (DEGs) at 1000 µM ethylparaben, 1 DEG at 316 µM, and no DEGs at 100 µM or lower concentrations. The highest concentration of ethylparaben that induced neither cytotoxicity nor DEGs was 1670-fold above the highest concentration reported in biomonitoring studies (60 nM ethylparaben in cord blood). In conclusion, cytotoxicity and gene expression alterations of ethylparaben occurred at concentrations approximately three orders of magnitude above human blood concentrations; moreover, the substance fitted well into a scenario where toxicity increases with the alkyl chain length, and gene expression changes only occur at cytotoxic or close-to-cytotoxic concentrations. Therefore, no evidence was obtained suggesting that in vivo DART testing of ethylparaben would lead to different results than for the methyl or propyl derivatives.
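As an illustration of how such EC50 values are typically derived (the data points below are invented and the four-parameter log-logistic curve is a common choice, not necessarily the exact model used in the study), a concentration-response fit might look as follows.

```python
import numpy as np
from scipy.optimize import curve_fit

# Invented viability data (% of control) across a concentration range in µM.
conc = np.array([1, 3, 10, 31.6, 100, 316, 1000, 3160], dtype=float)
viability = np.array([99, 101, 97, 95, 85, 60, 25, 8], dtype=float)


def four_pl(c, bottom, top, ec50, hill):
    """Four-parameter log-logistic (Hill) concentration-response model."""
    return bottom + (top - bottom) / (1.0 + (c / ec50) ** hill)


start = [0.0, 100.0, 300.0, 1.0]  # bottom, top, EC50, Hill slope
params, _ = curve_fit(four_pl, conc, viability, p0=start, maxfev=10000)

print(f"estimated EC50: {params[2]:.0f} µM")
```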
ABSTRACT
The analysis of dose-response, concentration-response, and time-response relationships is a central component of toxicological research. A major decision with respect to the statistical analysis is whether to consider only the actually measured concentrations or to assume an underlying (parametric) model that allows extrapolation. Recent research suggests the application of modelling approaches for various types of toxicological assays. However, there is a discrepancy between the state of the art in statistical methodological research and published analyses in the toxicological literature. The extent of this gap is quantified in this work using an extensive literature review that considered all dose-response analyses published in three major toxicological journals in 2021. The aspects of the review include biological considerations (type of assay and of exposure), statistical design considerations (number of measured conditions, design, and sample sizes), and statistical analysis considerations (display, analysis goal, statistical testing or modelling method, and alert concentration). Based on the results of this review and the critical assessment of three selected issues in the context of statistical research, concrete guidance for planning, execution, and analysis of dose-response studies from a statistical viewpoint is proposed.
ABSTRACT
Animal studies for embryotoxicity evaluation of potential therapeutics and environmental factors are complex, costly, and time-consuming. Often, studies are not of human relevance because of species differences. In the present study, we recapitulated the process of cardiomyogenesis in human induced pluripotent stem cells (hiPSCs) by modulation of the Wnt signaling pathway to identify a key cardiomyogenesis gene signature that can be applied to detect compounds and/or stress factors compromising the cardiomyogenesis process. Among the 23 tested teratogens and 16 non-teratogens, we identified three retinoids, including 13-cis-retinoic acid, that completely block the process of cardiomyogenesis in hiPSCs. Moreover, we have identified an early gene signature consisting of 31 genes and associated biological processes that are severely affected by the retinoids. To predict the inhibitory potential of teratogens and non-teratogens on the process of cardiomyogenesis, we established the "Developmental Cardiotoxicity Index" (CDI31g), which accurately differentiates between teratogens and non-teratogens according to whether they do or do not affect the differentiation of hiPSCs into functional cardiomyocytes.
ABSTRACT
BACKGROUND: In high-dimensional data (HDD) settings, the number of variables associated with each observation is very large. Prominent examples of HDD in biomedical research include omics data with a large number of variables such as many measurements across the genome, proteome, or metabolome, as well as electronic health records data that have large numbers of variables recorded for each patient. The statistical analysis of such data requires knowledge and experience, sometimes of complex methods adapted to the respective research questions. METHODS: Advances in statistical methodology and machine learning methods offer new opportunities for innovative analyses of HDD, but at the same time require a deeper understanding of some fundamental statistical concepts. Topic group TG9 "High-dimensional data" of the STRATOS (STRengthening Analytical Thinking for Observational Studies) initiative provides guidance for the analysis of observational studies, addressing particular statistical challenges and opportunities for the analysis of studies involving HDD. In this overview, we discuss key aspects of HDD analysis to provide a gentle introduction for non-statisticians and for classically trained statisticians with little experience specific to HDD. RESULTS: The paper is organized with respect to subtopics that are most relevant for the analysis of HDD, in particular initial data analysis, exploratory data analysis, multiple testing, and prediction. For each subtopic, main analytical goals in HDD settings are outlined. For each of these goals, basic explanations for some commonly used analysis methods are provided. Situations are identified where traditional statistical methods cannot, or should not, be used in the HDD setting, or where adequate analytic tools are still lacking. Many key references are provided. CONCLUSIONS: This review aims to provide a solid statistical foundation for researchers, including statisticians and non-statisticians, who are new to research with HDD or simply want to better evaluate and understand the results of HDD analyses.
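As a small illustration of one of the subtopics named above (multiple testing in HDD settings), the following Python sketch applies the Benjamini-Hochberg false discovery rate adjustment to simulated data; the dimensions and effect sizes are arbitrary and not taken from the paper.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(5)

# Simulated HDD setting: 10,000 variables, two groups of 20 samples each,
# with a true mean shift in the first 500 variables only.
n_vars, n_per_group = 10_000, 20
group_a = rng.normal(size=(n_vars, n_per_group))
group_b = rng.normal(size=(n_vars, n_per_group))
group_b[:500] += 1.0

pvals = stats.ttest_ind(group_a, group_b, axis=1).pvalue

# Testing each variable at 5% without adjustment yields hundreds of false
# positives by chance alone; Benjamini-Hochberg controls the FDR instead.
rejected, p_adjusted, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("significant without adjustment:", int((pvals < 0.05).sum()))
print("significant after BH adjustment:", int(rejected.sum()))
```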
Subjects
Biomedical Research, Goals, Humans, Research Design
ABSTRACT
BACKGROUND & AIMS: Nonalcoholic fatty liver disease (NAFLD) is a major health burden associated with the metabolic syndrome leading to liver fibrosis, cirrhosis and ultimately liver cancer. In humans, the PNPLA3 I148M polymorphism of the phospholipase patatin-like phospholipid domain containing protein 3 (PNPLA3) has a well-documented impact on metabolic liver disease. In this study, we used a mouse model mimicking the human PNPLA3 I148M polymorphism in a long-term high fat diet (HFD) experiment to better define its role for NAFLD progression. METHODS: Male mice bearing wild-type Pnpla3 (Pnpla3WT ), or the human polymorphism PNPLA3 I148M (Pnpla3148M/M ) were subjected to HFD feeding for 24 and 52 weeks. Further analysis concerning basic phenotype, inflammation, proliferation and cell death, fibrosis and microbiota were performed in each time point. RESULTS: After 52 weeks HFD Pnpla3148M/M animals had more liver fibrosis, enhanced numbers of inflammatory cells as well as increased Kupffer cell activity. Increased hepatocyte cell turnover and ductular proliferation were evident in HFD Pnpla3148M/M livers. Microbiome diversity was decreased after HFD feeding, changes were influenced by HFD feeding (36%) and the PNPLA3 I148M genotype (12%). Pnpla3148M/M mice had more faecal bile acids. RNA-sequencing of liver tissue defined an HFD-associated signature, and a Pnpla3148M/M specific pattern, which suggests Kupffer cell and monocytes-derived macrophages as significant drivers of liver disease progression in Pnpla3148M/M animals. CONCLUSION: With long-term HFD feeding, mice with the PNPLA3 I148M genotype show exacerbated NAFLD. This finding is linked to PNPLA3 I148M-specific changes in microbiota composition and liver gene expression showing a stronger inflammatory response leading to enhanced liver fibrosis progression.
Subjects
Metabolic Diseases, Non-alcoholic Fatty Liver Disease, Animals, Male, Mice, Acyltransferases/genetics, Diet, Genetic Predisposition to Disease, Genotype, Liver/pathology, Liver Cirrhosis/genetics, Liver Cirrhosis/metabolism, Non-alcoholic Fatty Liver Disease/genetics, Non-alcoholic Fatty Liver Disease/metabolism, Calcium-Independent Phospholipases A2/genetics, Calcium-Independent Phospholipases A2/metabolism
ABSTRACT
We examined differences in HER2 expression between primary tumors and distant metastases, particularly within the HER2-negative primary breast cancer cohort (HER2-low and HER2-zero). The retrospective study included 191 consecutive paired samples of primary breast cancer and distant metastases diagnosed between 1995 and 2019. HER2-negative samples were divided into HER2-zero (immunohistochemistry [IHC] score 0) and HER2-low (IHC score 1+ or 2+/in situ hybridization [ISH]-negative). The main objective was to analyze the discordance rate between matched primary and metastatic samples, focusing on the site of distant metastasis, molecular subtype, and de novo metastatic breast cancer. The relationship was determined by cross-tabulation and calculation of Cohen's Kappa coefficient. The final study cohort included 148 paired samples. The largest proportion in the HER2-negative cohort was HER2-low [primary tumor 61.4% (n = 78), metastatic samples 73.5% (n = 86)]. The discordance rate between the HER2 status of primary tumors and corresponding distant metastases was 49.6% (n = 63) (Kappa -0.003, 95% CI -0.15 to 0.15). Development of a HER2-low phenotype occurred most frequently (n = 52, 40.9%), mostly with a switch from HER2-zero to HER2-low (n = 34, 26.8%). Relevant HER2 discordance rates were observed between different metastatic sites and molecular subtypes. Primary metastatic breast cancer had a significantly lower HER2 discordance rate than secondary metastatic breast cancer [30.2% (Kappa 0.48, 95% CI 0.27 to 0.69) versus 50.5% (Kappa 0.14, 95% CI -0.03 to 0.32)]. This highlights the importance of evaluating potentially therapy-relevant discordance rates between a primary tumor and corresponding distant metastases.
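A minimal sketch of how a discordance rate and Cohen's kappa can be computed from paired HER2 categories is given below; the eight sample pairs are invented toy data, not the study cohort.

```python
import pandas as pd
from sklearn.metrics import cohen_kappa_score

# Invented paired HER2 categories for primary tumors and matched metastases.
primary = ["zero", "low", "low", "zero", "low", "zero", "low", "low"]
metastasis = ["low", "low", "zero", "zero", "low", "low", "low", "zero"]

# Cross-tabulation of primary vs. metastatic HER2 status.
crosstab = pd.crosstab(pd.Series(primary, name="primary"),
                       pd.Series(metastasis, name="metastasis"))
print(crosstab)

discordance = sum(p != m for p, m in zip(primary, metastasis)) / len(primary)
kappa = cohen_kappa_score(primary, metastasis)
print(f"discordance rate: {discordance:.1%}, Cohen's kappa: {kappa:.2f}")
```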
ABSTRACT
BACKGROUND: Intrinsic or acquired resistance to HER2-targeted therapy is often a problem when small molecule tyrosine kinase inhibitors or antibodies are used to treat patients with HER2 positive breast cancer. Therefore, the identification of new targets and therapies for this patient group is warranted. Activated choline metabolism, characterized by elevated levels of choline-containing compounds, has been previously reported in breast cancer. The glycerophosphodiesterase EDI3 (GPCPD1), which hydrolyses glycerophosphocholine to choline and glycerol-3-phosphate, directly influences choline and phospholipid metabolism, and has been linked to cancer-relevant phenotypes in vitro. While the importance of choline metabolism has been addressed in breast cancer, the role of EDI3 in this cancer type has not been explored. METHODS: EDI3 mRNA and protein expression in human breast cancer tissue were investigated using publicly-available Affymetrix gene expression microarray datasets (n = 540) and with immunohistochemistry on a tissue microarray (n = 265), respectively. A panel of breast cancer cell lines of different molecular subtypes were used to investigate expression and activity of EDI3 in vitro. To determine whether EDI3 expression is regulated by HER2 signalling, the effect of pharmacological inhibition and siRNA silencing of HER2, as well as the influence of inhibiting key components of signalling cascades downstream of HER2 were studied. Finally, the influence of silencing and pharmacologically inhibiting EDI3 on viability was investigated in vitro and on tumour growth in vivo. RESULTS: In the present study, we show that EDI3 expression is highest in ER-HER2+ human breast tumours, and both expression and activity were also highest in ER-HER2+ breast cancer cell lines. Silencing HER2 using siRNA, as well as inhibiting HER2 signalling with lapatinib decreased EDI3 expression. Pathways downstream of PI3K/Akt/mTOR and GSK3β, and transcription factors, including HIF1α, CREB and STAT3 were identified as relevant in regulating EDI3 expression. Silencing EDI3 preferentially decreased cell viability in the ER-HER2+ cells. Furthermore, silencing or pharmacologically inhibiting EDI3 using dipyridamole in ER-HER2+ cells resistant to HER2-targeted therapy decreased cell viability in vitro and tumour growth in vivo. CONCLUSIONS: Our results indicate that EDI3 may be a potential novel therapeutic target in patients with HER2-targeted therapy-resistant ER-HER2+ breast cancer that should be further explored.
Subjects
Breast Neoplasms, Humans, Female, Breast Neoplasms/drug therapy, Breast Neoplasms/genetics, Breast Neoplasms/pathology, Phosphatidylinositol 3-Kinases, Tumor Cell Line, Choline/metabolism, Choline/therapeutic use, Small Interfering RNA, ErbB-2 Receptor/metabolism, Antineoplastic Drug Resistance/genetics, Phospholipases/genetics
ABSTRACT
Proteasome inhibition is associated with parkinsonian pathology in vivo and degeneration of dopaminergic neurons in vitro. We explored here the metabolome (386 metabolites) and transcriptome (3257 transcripts) responses of human LUHMES neurons following exposure to MG-132 [100 nM]. This proteasome inhibitor killed cells within 24 h but did not reduce viability during the first 12 h. Overall, 206 metabolites were changed in live neurons. The early (3 h) metabolome changes suggested a compromised energy metabolism. For instance, AMP, NADH and lactate were up-regulated, while glycolytic and citric acid cycle intermediates were down-regulated. At later time points, glutathione-related metabolites were up-regulated, most likely by an early oxidative stress response and activation of NRF2/ATF4 target genes. The transcriptome pattern confirmed proteostatic stress (fast up-regulation of proteasome subunits) and also suggested the progressive activation of additional stress response pathways. The early ones (e.g., HIF-1, NF-κB, HSF-1) can be considered a cytoprotective cellular counter-regulation, which maintained cell viability. For instance, a very strong up-regulation of AIFM2 (=FSP1) may have prevented fast ferroptotic death. For most of the initial period, a definite life-or-death decision was not taken, as neurons could be rescued for at least 10 h after the start of proteasome inhibition. Late responses involved p53 activation and catabolic processes such as a loss of pyrimidine synthesis intermediates. We interpret this as a phase of co-occurrence of protective and maladaptive cellular changes. Altogether, this combined metabolomics-transcriptomics analysis informs on the responses triggered in neurons by proteasome dysfunction that may be targeted by novel therapeutic interventions in Parkinson's disease.
ABSTRACT
The experience of adversity in childhood has been associated with poor health outcomes in adulthood. In search of the biological mechanisms underlying these effects, research has so far focused on alterations of DNA methylation or shifts in transcriptomic profiles. The protein level, however, has been largely neglected. We utilized mass spectrometry to investigate the proteome of CD14+ monocytes in healthy adults reporting childhood adversity and in a control group before and after psychosocial stress exposure. Proteins involved in (i) immune processes, such as neutrophil-related proteins, (ii) protein metabolism, and (iii) mitochondrial biology, such as those involved in energy production processes, were upregulated in participants reporting exposure to adversity in childhood. This functional triad was further corroborated by protein interaction and co-expression analyses, was independent of stress exposure, i.e., observed at both pre- and post-stress time points, and became evident especially in females. In line with the mitochondrial allostatic load model, our findings provide evidence for the long-term effects of childhood adversity on mitochondrial biology.
Subjects
Adverse Childhood Experiences, Mitochondria, Proteome, Adult, Female, Humans, DNA Methylation, Monocytes
ABSTRACT
A range of regularization approaches have been proposed in the data sciences to overcome overfitting, to exploit sparsity, or to improve prediction. Using a broad definition of regularization, namely controlling model complexity by adding information in order to solve ill-posed problems or to prevent overfitting, we review a range of approaches within this framework, including penalization, early stopping, ensembling, and model averaging. Aspects of their practical implementation are discussed, including available R packages, and examples are provided. To assess the extent to which these approaches are used in medicine, we conducted a review of three general medical journals. It revealed that regularization approaches are rarely applied in practical clinical applications, with the exception of random effects models. Hence, we suggest a more frequent use of regularization approaches in medical research. In situations where other approaches also work well, the only downside of regularization is the increased complexity of the analyses, which can pose challenges in terms of computational resources and expertise on the side of the data analyst. In our view, both can and should be overcome by investments in appropriate computing facilities and educational resources.
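A minimal sketch of the penalization approaches mentioned above, using scikit-learn in Python rather than the R packages discussed in the paper (data and dimensions are simulated), could look as follows.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, RidgeCV, LassoCV
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Simulated setting prone to overfitting: many candidate predictors,
# only a few of which are truly relevant (sparse truth).
n, p = 100, 80
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:5] = [3.0, -2.0, 1.5, -1.0, 2.0]
y = X @ beta + rng.normal(scale=1.0, size=n)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

models = {
    "unpenalized least squares": LinearRegression(),
    "ridge (L2 penalization)": RidgeCV(alphas=np.logspace(-3, 3, 25)),
    "lasso (L1 penalization)": LassoCV(cv=5, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name:27s} test R^2 = {model.score(X_test, y_test):.2f}")
```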
ABSTRACT
Human-relevant tests to predict developmental toxicity are urgently needed. A currently intensively studied approach makes use of differentiating human stem cells to measure chemically induced deviations of the normal developmental program, as in a recent study based on cardiac differentiation (UKK2). Here, we (i) tested the performance of an assay modeling neuroepithelial differentiation (UKN1), and (ii) explored the benefit of combining assays (UKN1 and UKK2) that model different germ layers. Substance-induced cytotoxicity and genome-wide expression profiles of 23 teratogens and 16 non-teratogens at human-relevant concentrations were generated and used for statistical classification, resulting in UKN1 assay accuracies of 87-90%. A comparison to the UKK2 assay (accuracies of 90-92%) showed, in general, a high congruence in compound classification, which may be explained by a high overlap of the signaling pathways involved. Finally, the combination of both assays improved the prediction compared to each test alone, reaching accuracies of 92-95%. Although some compounds were misclassified by the individual tests, we conclude that UKN1 and UKK2 can be used for a reliable detection of teratogens in vitro, and that a combined analysis of tests that differentiate hiPSCs into different germ layers and cell types can even further improve the prediction of developmental toxicants.
Subjects
Teratogens, Toxicity Tests, Humans, Teratogens/toxicity, Cell Differentiation, Stem Cells, In Vitro Techniques
ABSTRACT
In bottom-up proteomics, proteins are enzymatically digested into peptides before measurement with mass spectrometry. The relationship between proteins and their corresponding peptides can be represented by bipartite graphs. We conduct a comprehensive analysis of bipartite graphs using quantified peptides from measured data sets as well as theoretical peptides from an in silico digestion of the corresponding complete taxonomic protein sequence databases. The aim of this study is to characterize and structure the different types of graphs that occur and to compare them between data sets. We observed a large influence of the accepted minimum peptide length during in silico digestion. When changing from theoretical peptides to measured ones, the graph structures are subject to two opposite effects. On the one hand, the graphs based on measured peptides are on average smaller and less complex compared to graphs using theoretical peptides. On the other hand, the proportion of protein nodes without unique peptides, which are a complicated case for protein inference and quantification, is considerably larger for measured data. Additionally, the proportion of graphs containing at least one protein node without unique peptides rises when going from database to quantitative level. The fraction of shared peptides and proteins without unique peptides as well as the complexity and size of the graphs highly depends on the data set and organism. Large differences between the structures of bipartite peptide-protein graphs have been observed between database and quantitative level as well as between analyzed species. In the analyzed measured data sets, the proportion of protein nodes without unique peptides ranged from 6.4% to 55.0%. This highlights the need for novel methods that can quantify proteins without unique peptides. The knowledge about the structure of the bipartite peptide-protein graphs gained in this study will be useful for the development of such algorithms.
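A minimal sketch of how such a bipartite peptide-protein graph can be built and the proportion of protein nodes without unique peptides determined (using networkx and an invented toy peptide-to-protein mapping) is given below.

```python
import networkx as nx

# Invented toy peptide-to-protein mapping; shared peptides map to >1 protein.
peptide_to_proteins = {
    "PEP1": ["ProtA"],
    "PEP2": ["ProtA", "ProtB"],
    "PEP3": ["ProtB", "ProtC"],
    "PEP4": ["ProtC"],
    "PEP5": ["ProtD", "ProtE"],
}

# Build the bipartite graph: one node set for peptides, one for proteins.
graph = nx.Graph()
for peptide, proteins in peptide_to_proteins.items():
    graph.add_node(peptide, kind="peptide")
    for protein in proteins:
        graph.add_node(protein, kind="protein")
        graph.add_edge(peptide, protein)

protein_nodes = [n for n, d in graph.nodes(data=True) if d["kind"] == "protein"]

# A peptide is unique if it is connected to exactly one protein; protein nodes
# without any unique peptide are the difficult cases for protein inference
# and quantification.
def has_unique_peptide(protein):
    return any(graph.degree(pep) == 1 for pep in graph.neighbors(protein))

without_unique = [p for p in protein_nodes if not has_unique_peptide(p)]
share = len(without_unique) / len(protein_nodes)
print(f"protein nodes without unique peptides: {without_unique} ({share:.0%})")
```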