ABSTRACT
INTRODUCTION: A dynamic molecular biomarker that can identify early efficacy of immune checkpoint inhibitor (ICI) therapy remains an unmet clinical need. Here we evaluate whether xM, a novel circulating tumor DNA (ctDNA) assay for treatment response monitoring (TRM) that quantifies changes in ctDNA tumor fraction (TF), can predict outcome benefits in patients treated with ICI alone or in combination with chemotherapy in a real-world (RW) cohort. METHODS: This retrospective study included patients with advanced cancer from the Tempus de-identified clinical genomic database who received longitudinal liquid-based next-generation sequencing. Eligible patients had a blood sample ≤ 40 days before ICI initiation and an on-treatment blood sample 15-180 days after ICI initiation. TF was calculated via an ensemble algorithm that combines TF estimates derived from variants and copy number information. Molecular response (MR) was defined as a ≥ 50% decrease in TF between tests. In the subset of patients with rw-imaging data between 2 and 18 weeks after ICI initiation, the predictive value of MR in addition to rw-imaging was compared with a model of rw-imaging alone. RESULTS: The evaluable cohort (N = 86) comprised 14 solid cancer types. Patients received either ICI monotherapy (38.4%, N = 33) or ICI in combination with chemotherapy (61.6%, N = 53). Patients with MR had significantly longer rw-overall survival (rwOS) (hazard ratio (HR) 0.4, P = 0.004) and rw-progression-free survival (rwPFS) (HR 0.4, P = 0.005) than patients with molecular non-response (nMR). Similar results were seen in the ICI monotherapy subcohort: HR 0.2, P = 0.02 for rwOS and HR 0.2, P = 0.01 for rwPFS. In the subset of patients with matched rw-imaging data (N = 51), a model incorporating both MR and rw-imaging was superior to rw-imaging alone in predicting rwOS (P = 0.02). CONCLUSIONS: xM used for TRM is a novel serial quantitative TF algorithm that can be used clinically to evaluate ICI therapy efficacy.
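As a rough illustration of the molecular response rule described in this abstract (a ≥ 50% decrease in tumor fraction between the baseline and on-treatment tests), the sketch below classifies a patient as MR or nMR. This is not the Tempus xM algorithm; the function name and inputs are placeholders for illustration.

```python
# Illustrative sketch, not the xM ensemble TF algorithm: apply the >=50%
# decrease rule to a baseline and an on-treatment ctDNA tumor fraction.
def classify_molecular_response(tf_baseline: float, tf_on_treatment: float,
                                threshold: float = 0.50) -> str:
    """Return 'MR' if tumor fraction fell by >= threshold between tests, else 'nMR'."""
    if tf_baseline <= 0:
        raise ValueError("Baseline tumor fraction must be positive to compute a relative change.")
    relative_change = (tf_baseline - tf_on_treatment) / tf_baseline
    return "MR" if relative_change >= threshold else "nMR"

# Example: TF drops from 8% to 3%, a 62.5% decrease -> molecular response.
print(classify_molecular_response(0.08, 0.03))  # MR
```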
ABSTRACT
BACKGROUND: A multi-biomarker disease activity (MBDA)-based cardiovascular disease (CVD) risk score was developed and internally validated in a Medicare cohort to predict 3-year risk for myocardial infarction (MI), stroke, or CVD death in patients with rheumatoid arthritis (RA). It combines the MBDA score, leptin, MMP-3, TNF-R1, age, and four clinical variables. Here, we externally validate it in a younger RA cohort. METHODS: Claims data from a private aggregator were linked to MBDA test data to create a cohort of RA patients ≥18 years old. A univariable Cox proportional hazards regression model was fit using the MBDA-based CVD risk score as the sole predictor of time to a CVD event (hospitalized MI or stroke). The hazard ratio (HR) estimate was determined for all patients and for clinically relevant subgroups. A multivariable Cox model evaluated whether the MBDA-based CVD risk score adds predictive information to clinical data. RESULTS: 49,028 RA patients (340 CVD events) were studied. Mean age was 52.3 years; 18.3% were male. The HR for predicting 3-year risk of a CVD event by the MBDA-based CVD risk score in the full cohort was 3.99 (95% CI: 3.51-4.49, p = 5.0×10⁻⁹⁵). HRs were also significant for subgroups based on age, comorbidities, disease activity, and drug use. In a multivariable model, the MBDA-based CVD risk score added significant information to hypertension, diabetes, tobacco use, history of CVD, age, sex, and CRP (HR = 2.27, p = 1.7×10⁻⁷). CONCLUSION: The MBDA-based CVD risk score has been externally validated in an RA cohort that is younger than and independent of the Medicare cohort that was used for development and internal validation.
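A minimal sketch of the kind of univariable Cox model described above, using the open-source lifelines package on a mock data frame. The column names (risk_score, years_to_event, cvd_event) and the toy values are assumptions, not the study's variables or data.

```python
# Sketch: univariable Cox proportional hazards fit with the risk score as the
# sole predictor of time to a CVD event. Data are invented for illustration.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "risk_score":     [1.2, 3.4, 2.8, 0.9, 4.1, 2.2, 3.9, 1.5, 2.5, 3.0],
    "years_to_event": [3.0, 0.8, 2.1, 3.0, 0.5, 1.9, 1.2, 2.7, 3.0, 2.4],
    "cvd_event":      [0,   1,   0,   0,   1,   1,   1,   0,   0,   1],  # 1 = hospitalized MI/stroke
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years_to_event", event_col="cvd_event")
print(cph.hazard_ratios_["risk_score"])  # hazard ratio per unit of the risk score
```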
Subject(s)
Arthritis, Rheumatoid; Biomarkers; Cardiovascular Diseases; Humans; Arthritis, Rheumatoid/complications; Arthritis, Rheumatoid/blood; Male; Female; Middle Aged; Biomarkers/blood; Cardiovascular Diseases/epidemiology; Adult; Proportional Hazards Models; Aged; Risk Factors; Risk Assessment/methods; Myocardial Infarction/epidemiology; Cohort Studies
ABSTRACT
Importance: Tissue-based next-generation sequencing (NGS) of solid tumors is the criterion standard for identifying somatic mutations that can be treated with National Comprehensive Cancer Network guideline-recommended targeted therapies. Sequencing of circulating tumor DNA (ctDNA) can also identify tumor-derived mutations, and there is increasing clinical evidence supporting ctDNA testing as a diagnostic tool. The clinical value of concurrent tissue and ctDNA profiling has not been formally assessed in a large, multicancer cohort from heterogeneous clinical settings. Objective: To evaluate whether patients concurrently tested with both tissue and ctDNA NGS testing have a higher rate of detection of guideline-based targeted mutations compared with tissue testing alone. Design, Setting, and Participants: This cohort study comprised 3209 patients who underwent sequencing between May 2020 and December 2022 within the deidentified Tempus multimodal database, consisting of linked molecular and clinical data. Included patients had stage IV disease (non-small cell lung cancer, breast cancer, prostate cancer, or colorectal cancer) with sufficient tissue and blood sample quantities for analysis. Exposures: Received results from tissue and plasma ctDNA genomic profiling, with biopsies and blood draws occurring within 30 days of one another. Main Outcomes and Measures: Detection rates of guideline-based variants found uniquely by ctDNA and tissue profiling. Results: The cohort of 3209 patients (median age at diagnosis of stage IV disease, 65.3 years [2.5%-97.5% range, 43.3-83.3 years]) who underwent concurrent tissue and ctDNA testing included 1693 women (52.8%). Overall, 1448 patients (45.1%) had a guideline-based variant detected. Of these patients, 9.3% (135 of 1448) had variants uniquely detected by ctDNA profiling, and 24.2% (351 of 1448) had variants uniquely detected by solid-tissue testing. Although the assays were largely concordant with one another, differences in the identification of actionable variants varied according to cancer type, gene, variant, and ctDNA burden. Of 352 patients with breast cancer, 20.2% (71 of 352) with actionable variants had unique findings in ctDNA profiling results. Most of these unique, actionable variants (55.0% [55 of 100]) were found in ESR1, resulting in a 24.7% increase (23 of 93) in the identification of patients harboring an ESR1 mutation relative to tissue testing alone. Conclusions and Relevance: This study suggests that unique actionable biomarkers are detected by both concurrent tissue and ctDNA testing, with higher ctDNA identification among patients with breast cancer. Integration of concurrent NGS testing into the routine management of advanced solid cancers may expand the delivery of molecularly guided therapy and improve patient outcomes.
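A toy illustration of the "uniquely detected" bookkeeping summarized above: per patient, compare the sets of guideline-based variants reported by the tissue and ctDNA assays. Patient IDs and variants are invented, and this is not the study's analysis code.

```python
# Per-patient comparison of variant sets from two assays (illustrative data).
tissue = {
    "pt1": {"EGFR L858R"},
    "pt2": {"KRAS G12C", "TP53 R175H"},
    "pt3": set(),
}
ctdna = {
    "pt1": {"EGFR L858R", "ESR1 D538G"},
    "pt2": {"KRAS G12C"},
    "pt3": {"PIK3CA H1047R"},
}

with_variant  = [p for p in tissue if tissue[p] | ctdna[p]]          # any variant by either assay
unique_ctdna  = [p for p in with_variant if ctdna[p] - tissue[p]]    # ctDNA-only findings
unique_tissue = [p for p in with_variant if tissue[p] - ctdna[p]]    # tissue-only findings

print(f"patients with any guideline-based variant: {len(with_variant)}")
print(f"unique ctDNA findings:  {len(unique_ctdna)}/{len(with_variant)}")
print(f"unique tissue findings: {len(unique_tissue)}/{len(with_variant)}")
```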
Subject(s)
Breast Neoplasms; Carcinoma, Non-Small-Cell Lung; Circulating Tumor DNA; Lung Neoplasms; Male; Humans; Female; Circulating Tumor DNA/genetics; Cohort Studies; Mutation
ABSTRACT
Importance: There are few studies assessing the association of tumor mutational burden (TMB) and clinical outcomes in a large cohort of patients with diverse advanced cancers. Objective: To clinically validate a TMB biomarker from a next-generation sequencing targeted gene panel assay. Design, Setting, and Participants: A prespecified cohort study using the deidentified clinicogenomic Tempus database of patients sequenced between 2018 and 2022, which contained retrospective, observational data originating from 300 cancer sites, including 199 community sites and 101 academic sites. Patients with advanced solid tumors across 8 cancer types and more than 20 histologies, sequenced with Tempus xT and treated with immune checkpoint inhibitors (ICIs) in the first-line or second-line setting, were included. Data were analyzed from September 2018 to August 2022. Exposure: Treatment with US Food and Drug Administration (FDA)-approved antiprogrammed cell death-1/programmed cell death-ligand 1 (PD-1/PD-L1) ICIs, alone or in combination with a cytotoxic T-lymphocyte-associated protein-4 ICI. Main Outcomes and Measures: The primary outcome was the association of TMB binary category (high [≥10 mut/Mb] vs low) with overall survival (OS) in patients treated with ICIs. Secondary outcomes were progression-free survival (PFS) and time to progression (TTP). Results: In the evaluable cohort of 674 patients, the median (IQR) age was 69.4 (28.6-89.8) years, 271 patients (40.2%) were female, and 435 patients (64.5%) were White. The most common advanced cancers were non-small cell lung cancer (330 patients [49.0%]), followed by bladder cancer (148 patients [22.0%]) and head and neck squamous cell carcinoma (96 patients [14.8%]). Median (IQR) follow-up was 7.2 (3.2-14.1) months. High TMB (TMB-H) cancers (206 patients [30.6%]) were significantly associated with longer OS than low TMB (TMB-L) cancers (hazard ratio [HR], 0.72; upper confidence bound [UCB], 0.91; P = .01). In a prospective subset of 403 patients treated with ICIs after TMB testing, TMB-H cancers (135 patients [33.5%]) were significantly associated with longer OS (HR, 0.61; UCB, 0.84; P = .005), PFS (HR, 0.62; UCB, 0.82; P = .003), and TTP (HR, 0.67; UCB, 0.92; P = .02) than TMB-L cancers. An overall survival benefit was seen regardless of the type of ICI used, whether pembrolizumab (339 patients; HR, 0.67; UCB, 0.94; P = .03) or other ICIs (64 patients; HR, 0.37; UCB, 0.85; P = .03), and after adjusting for PD-L1 and microsatellite stability status (403 patients; HR, 0.67; UCB, 0.92; P = .02). Conclusions and Relevance: In this cohort study of patients with advanced solid tumors treated with ICIs in diverse clinics, TMB-H cancers were significantly associated with improved clinical outcomes compared with TMB-L cancers.
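A sketch of the TMB dichotomization described above (threshold of 10 mutations/Mb) followed by a survival comparison, here shown as a log-rank test from the lifelines package on mock data. The simulated TMB values, survival times, and event indicators are invented.

```python
# Sketch: split a cohort at TMB >= 10 mut/Mb and compare OS between groups.
import numpy as np
from lifelines.statistics import logrank_test

rng = np.random.default_rng(0)
tmb = rng.lognormal(mean=2.0, sigma=0.8, size=200)                 # mutations per Mb (mock)
tmb_high = tmb >= 10                                               # TMB-H vs TMB-L
# Mock OS times: TMB-H patients drawn with a longer mean survival.
os_months = np.where(tmb_high, rng.exponential(18, 200), rng.exponential(10, 200))
event = rng.random(200) < 0.7                                      # True = death observed

result = logrank_test(os_months[tmb_high], os_months[~tmb_high],
                      event_observed_A=event[tmb_high],
                      event_observed_B=event[~tmb_high])
print(result.p_value)
```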
Subject(s)
Antineoplastic Agents, Immunological; Carcinoma, Non-Small-Cell Lung; Lung Neoplasms; United States/epidemiology; Humans; Female; Aged; Aged, 80 and over; Male; Carcinoma, Non-Small-Cell Lung/drug therapy; Lung Neoplasms/pathology; B7-H1 Antigen; Retrospective Studies; Cohort Studies; Prospective Studies; Mutation; Antineoplastic Agents, Immunological/therapeutic use; Antineoplastic Agents, Immunological/pharmacology; Immunotherapy; Biomarkers, Tumor/genetics
ABSTRACT
PURPOSE: The American College of Obstetricians and Gynecologists (ACOG) and the American College of Medical Genetics and Genomics (ACMG) suggest carrier screening panel design criteria intended to ensure meaningful results. This study used a data-driven approach to interpret the criteria and identify guidelines-consistent panels. METHODS: Carrier frequencies were assessed in >460,000 individuals across 11 races/ethnicities. Other criteria were interpreted on the basis of published data. A total of 176 conditions were then evaluated. Stringency thresholds were set as suggested by ACOG and/or ACMG or by evaluating conditions already recommended by ACOG and ACMG. RESULTS: Forty and 75 conditions had carrier frequencies of ≥1 in 100 and ≥1 in 200, respectively; 175 had a well-defined phenotype; and 165 met at least 1 severity criterion and had an onset early in life. Thirty-seven conditions met conservative thresholds, including a carrier frequency of ≥1 in 100, and 74 conditions met permissive thresholds, including a carrier frequency of ≥1 in 200; both sets were identified as guidelines-consistent panels. CONCLUSION: Clear panel design criteria are needed to ensure quality and consistency among carrier screening panels. Evidence-based analyses of the criteria resulted in the identification of guidelines-consistent panels of 37 and 74 conditions.
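An illustrative filter for the carrier-frequency criterion discussed above: keep conditions whose carrier frequency meets a 1-in-100 (conservative) or 1-in-200 (permissive) threshold. Condition names and frequencies below are made up, except the widely cited ~1-in-29 figure for cystic fibrosis carriers, which is approximate.

```python
# Sketch of threshold-based panel filtering (illustrative carrier rates only).
carrier_freq = {              # carriers per individual screened
    "Cystic fibrosis": 1 / 29,
    "Condition A":     1 / 150,
    "Condition B":     1 / 450,
}

conservative = [c for c, f in carrier_freq.items() if f >= 1 / 100]
permissive   = [c for c, f in carrier_freq.items() if f >= 1 / 200]
print(conservative)   # ['Cystic fibrosis']
print(permissive)     # ['Cystic fibrosis', 'Condition A']
```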
Subject(s)
Ethnicity; Genetic Testing; Genetic Carrier Screening/methods; Genetic Testing/methods; Genomics; Humans; Research
ABSTRACT
Dengue is the most prevalent arboviral disease worldwide, and the four dengue virus (DENV) serotypes circulate endemically in many tropical and subtropical regions. Numerous studies have shown that the majority of DENV infections are inapparent, and that the ratio of inapparent to symptomatic infections (I/S) fluctuates substantially year-to-year. For example, in the ongoing Pediatric Dengue Cohort Study (PDCS) in Nicaragua, which was established in 2004, the I/S ratio has varied from 16.5:1 in 2006-2007 to 1.2:1 in 2009-2010. However, the mechanisms explaining these large fluctuations are not well understood. We hypothesized that in dengue-endemic areas, frequent boosting (i.e., exposures to DENV that do not lead to extensive viremia and result in a less than fourfold rise in antibody titers) of the immune response can be protective against symptomatic disease, and this can explain fluctuating I/S ratios. We formulate mechanistic epidemiologic models to examine the epidemiologic effects of protective homologous and heterologous boosting of the antibody response in preventing subsequent symptomatic DENV infection. We show that models that include frequent boosts that protect against symptomatic disease can recover the fluctuations in the I/S ratio that we observe, whereas a classic model without boosting cannot. Furthermore, we show that a boosting model can recover the inverse relationship between the number of symptomatic cases and the I/S ratio observed in the PDCS. These results highlight the importance of robust dengue control efforts, as intermediate dengue control may have the potential to decrease the protective effects of boosting.
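The sketch below is a deliberately minimal caricature, not the authors' mechanistic models: susceptible people (S) who are infected become cases (symptomatic with probability p_sym), while people with waned immunity (W) who are re-exposed are "boosted" back to a protected class (P) without becoming symptomatic, and the inapparent:symptomatic (I/S) ratio is computed from cumulative incidence. All rates and initial conditions are arbitrary.

```python
# Minimal boosting caricature to show how boosts inflate the I/S ratio.
from scipy.integrate import solve_ivp

beta, gamma, wane, p_sym = 0.4, 0.2, 1 / 365, 0.3   # per-day rates (arbitrary)

def rhs(t, y):
    S, I, P, W, Cs, Ci = y
    N = S + I + P + W
    foi = beta * I / N                                # force of infection
    dS = -foi * S
    dI = foi * S - gamma * I
    dP = gamma * I + foi * W - wane * P               # boosting: re-exposed W return to P
    dW = wane * P - foi * W
    dCs = p_sym * foi * S                             # cumulative symptomatic infections
    dCi = (1 - p_sym) * foi * S + foi * W             # cumulative inapparent infections + boosts
    return [dS, dI, dP, dW, dCs, dCi]

sol = solve_ivp(rhs, (0, 5 * 365), [0.6, 0.001, 0.3, 0.099, 0.0, 0.0], max_step=1.0)
Cs, Ci = sol.y[4, -1], sol.y[5, -1]
print(f"cumulative I/S ratio over 5 years ~ {Ci / Cs:.1f}")
```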
Subject(s)
Asymptomatic Infections/epidemiology; Dengue Virus/immunology; Dengue/immunology; Models, Theoretical; Adolescent; Child; Child, Preschool; Cohort Studies; Dengue/epidemiology; Humans; Nicaragua/epidemiology
ABSTRACT
OBJECTIVE: To evaluate the efficacy of three different carrier screening workflows designed to identify couples at risk for having offspring with autosomal recessive conditions. METHODS: Partner testing compliance, unnecessary testing, turnaround time, and ability to identify at-risk couples (ARCs) were measured across all three screening strategies (sequential, tandem, or tandem reflex). RESULTS: A total of 314,100 individuals who underwent carrier screening were analyzed. Sequential, tandem, and tandem reflex screening yielded compliance frequencies of 25.8%, 100%, and 95.9%, respectively. Among 14,595 couples tested in tandem, 42.2% of females were screen-negative, resulting in unnecessary testing of the male partner. In contrast, less than 1% of tandem reflex couples included unnecessary male testing. The median turnaround times were 29.2 days (sequential), 8 days (tandem), and 13.3 days (tandem reflex). The proportion of ARCs detected per total number of individual screens was 0.5% for sequential testing and 1.3% for both tandem and tandem reflex testing. CONCLUSION: The tandem reflex strategy simplifies a potentially complex clinical scenario by providing a mechanism by which providers can maximize partner compliance and the detection of at-risk couples while minimizing workflow burden and unnecessary testing; it is more efficacious than either the sequential or tandem screening strategy.
Subject(s)
Genetic Carrier Screening/methods; Heterozygote; Parents/psychology; Female; Genetic Carrier Screening/statistics & numerical data; Genetic Testing/methods; Humans; Preconception Care/methods; Preconception Care/standards; Preconception Care/statistics & numerical data; Pregnancy; Retrospective Studies; Workflow
ABSTRACT
BACKGROUND: The multi-biomarker disease activity (MBDA) test measures 12 serum protein biomarkers to quantify disease activity in RA patients. A newer version of the MBDA score, adjusted for age, sex, and adiposity, has been validated in two cohorts (OPERA and BRASS) for predicting risk for radiographic progression. We now extend these findings with additional cohorts to further validate the adjusted MBDA score as a predictor of radiographic progression risk and compare its performance with that of other risk factors. METHODS: Four cohorts were analyzed: the BRASS and Leiden registries and the OPERA and SWEFOT studies (total N = 953). Treatments included conventional DMARDs and anti-TNFs. Associations of radiographic progression (ΔTSS) per year with the adjusted MBDA score, seropositivity, and clinical measures were evaluated using linear and logistic regression. The adjusted MBDA score was (1) validated in Leiden and SWEFOT, (2) compared with other measures in all four cohorts, and (3) used to generate curves for predicting risk of radiographic progression. RESULTS: Univariable and bivariable analyses validated the adjusted MBDA score and found it to be the strongest independent predictor of radiographic progression (ΔTSS > 5) compared with seropositivity (rheumatoid factor and/or anti-CCP), baseline TSS, DAS28-CRP, CRP, SJC, or CDAI. Neither DAS28-CRP, CDAI, SJC, nor CRP added significant information to the adjusted MBDA score as a predictor, and the frequency of radiographic progression agreed with the adjusted MBDA score when it was discordant with these measures. The rate of progression (ΔTSS > 5) increased from < 2% in the low (1-29) adjusted MBDA category to 16% in the high (45-100) category. A modeled risk curve indicated that risk increased continuously, exceeding 40% for the highest adjusted MBDA scores. CONCLUSION: The adjusted MBDA score was validated as an RA disease activity measure that is prognostic for radiographic progression. The adjusted MBDA score was a stronger predictor of radiographic progression than conventional risk factors, including seropositivity, and its prognostic ability was not significantly improved by the addition of DAS28-CRP, CRP, SJC, or CDAI.
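A sketch of a logistic risk curve of the kind described above: probability of radiographic progression (ΔTSS > 5) as a function of the adjusted MBDA score. The intercept and slope below are invented for illustration and are not the published model's coefficients.

```python
# Sketch: modeled risk of radiographic progression vs adjusted MBDA score.
import numpy as np

def progression_risk(adjusted_mbda, intercept=-4.0, slope=0.05):
    """Logistic model: risk = 1 / (1 + exp(-(intercept + slope * score)))."""
    return 1.0 / (1.0 + np.exp(-(intercept + slope * np.asarray(adjusted_mbda))))

for score in (20, 44, 60, 80, 100):          # low, moderate/high boundary, and high scores
    print(score, f"{progression_risk(score):.0%}")
```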
Subject(s)
Antirheumatic Agents; Arthritis, Rheumatoid; Antirheumatic Agents/therapeutic use; Arthritis, Rheumatoid/diagnostic imaging; Arthritis, Rheumatoid/drug therapy; Biomarkers; Disease Progression; Humans; Prognosis; Severity of Illness Index
ABSTRACT
BACKGROUND: Rheumatoid arthritis (RA) patients have increased risk for cardiovascular disease (CVD). Accurate CVD risk prediction could improve care for RA patients. Our goal is to develop and validate a biomarker-based model for predicting CVD risk in RA patients. METHODS: Medicare claims data were linked to multi-biomarker disease activity (MBDA) test results to create an RA patient cohort with age ≥ 40 years that was split 2:1 for training and internal validation. Clinical and RA-related variables, MBDA score, and its 12 biomarkers were evaluated as predictors of a composite CVD outcome: myocardial infarction (MI), stroke, or fatal CVD within 3 years. Model building used Cox proportional hazard regression with backward elimination. The final MBDA-based CVD risk score was internally validated and compared to four clinical CVD risk prediction models. RESULTS: 30,751 RA patients (904 CVD events) were analyzed. Covariates in the final MBDA-based CVD risk score were age, diabetes, hypertension, tobacco use, history of CVD (excluding MI/stroke), MBDA score, leptin, MMP-3 and TNF-R1. In internal validation, the MBDA-based CVD risk score was a strong predictor of 3-year risk for a CVD event, with hazard ratio (95% CI) of 2.89 (2.46-3.41). The predicted 3-year CVD risk was low for 9.4% of patients, borderline for 10.2%, intermediate for 52.2%, and high for 28.2%. Model fit was good, with mean predicted versus observed 3-year CVD risks of 4.5% versus 4.4%. The MBDA-based CVD risk score significantly improved risk discrimination by the likelihood ratio test, compared to four clinical models. The risk score also improved prediction, reclassifying 42% of patients versus the simplest clinical model (age + sex), with a net reclassification index (NRI) (95% CI) of 0.19 (0.10-0.27); and 28% of patients versus the most comprehensive clinical model (age + sex + diabetes + hypertension + tobacco use + history of CVD + CRP), with an NRI of 0.07 (0.001-0.13). C-index was 0.715 versus 0.661 to 0.696 for the four clinical models. CONCLUSION: A prognostic score has been developed to predict 3-year CVD risk for RA patients by using clinical data, three serum biomarkers and the MBDA score. In internal validation, it had good accuracy and outperformed clinical models with and without CRP. The MBDA-based CVD risk prediction score may improve RA patient care by offering a risk stratification tool that incorporates the effect of RA inflammation.
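A simplified sketch of one step summarized above, the categorical net reclassification index (NRI), using the standard formula NRI = (P(up | event) - P(down | event)) + (P(down | no event) - P(up | no event)). The risk-category assignments and event indicators are invented; this is not the study's code.

```python
# Sketch: categorical NRI from old and new risk-category assignments.
import numpy as np

def categorical_nri(old_cat, new_cat, event):
    old_cat, new_cat, event = map(np.asarray, (old_cat, new_cat, event))
    up, down = new_cat > old_cat, new_cat < old_cat
    ev, ne = event == 1, event == 0
    return (up[ev].mean() - down[ev].mean()) + (down[ne].mean() - up[ne].mean())

# Categories coded 0=low, 1=borderline, 2=intermediate, 3=high (mock patients).
old = [1, 2, 2, 0, 3, 1, 2, 0]
new = [2, 2, 3, 0, 3, 0, 1, 0]
evt = [1, 0, 1, 0, 1, 0, 0, 0]
print(round(categorical_nri(old, new, evt), 3))
```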
Subject(s)
Arthritis, Rheumatoid; Cardiovascular Diseases; Adult; Aged; Arthritis, Rheumatoid/diagnosis; Arthritis, Rheumatoid/epidemiology; Biomarkers; Cardiovascular Diseases/diagnosis; Cardiovascular Diseases/epidemiology; Disease Progression; Humans; Medicare; Severity of Illness Index; United States
ABSTRACT
BACKGROUND: Disease severity is important when considering genes for inclusion on reproductive expanded carrier screening (ECS) panels. We applied a validated and previously published algorithm that classifies diseases into four severity categories (mild, moderate, severe, and profound) to 176 genes screened by ECS. Disease traits defining severity categories in the algorithm were then mapped to four severity-related ECS panel design criteria cited by the American College of Obstetricians and Gynecologists (ACOG). METHODS: Eight genetic counselors (GCs) and four medical geneticists (MDs) applied the severity algorithm to subsets of the 176 genes. MDs and GCs then determined by group consensus how each of these disease traits mapped to ACOG severity criteria, enabling determination of the number of ACOG severity criteria met by each gene. RESULTS: After consensus application of the severity algorithm by GCs and MDs, 68 (39%) genes were classified as profound, 71 (40%) as severe, 36 (20%) as moderate, and one (1%) as mild. After mapping of disease traits to ACOG severity criteria, 170 of 176 genes (96.6%) were found to meet at least one of the four criteria, 129 genes (73.3%) met at least two, 73 genes (41.5%) met at least three, and 17 genes (9.7%) met all four. CONCLUSION: This study classified the severity of a large set of Mendelian genes through collaborative clinical expert application of a trait-based algorithm. Further, it operationalized difficult-to-interpret ACOG severity criteria by mapping disease traits to them, thereby promoting consistency of ACOG criteria interpretation.
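A tiny sketch of the counting step described above: given a mapping from each gene's algorithm-assigned disease traits to ACOG severity criteria, count how many of the four criteria each gene meets. The gene names, traits, and trait-to-criterion mapping below are invented placeholders, not the study's consensus mapping.

```python
# Sketch: count ACOG severity criteria met per gene via a trait mapping.
trait_to_criteria = {
    "shortened life expectancy": {"criterion_1"},
    "intellectual disability":   {"criterion_2"},
    "need for surgery":          {"criterion_3"},
    "onset in infancy":          {"criterion_4"},
}
gene_traits = {
    "GENE_A": ["shortened life expectancy", "onset in infancy"],
    "GENE_B": ["need for surgery"],
}

for gene, traits in gene_traits.items():
    met = set().union(*(trait_to_criteria[t] for t in traits))
    print(gene, f"meets {len(met)} of 4 ACOG severity criteria")
```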
Subject(s)
Congenital Abnormalities/classification; Congenital Abnormalities/diagnosis; Genes, Developmental; Genetic Carrier Screening/methods; Genetic Counseling; Adolescent; Algorithms; Child; Child, Preschool; Congenital Abnormalities/genetics; Congenital Abnormalities/pathology; Female; Genes, Developmental/genetics; Genetic Carrier Screening/standards; Genetic Counseling/methods; Genetic Counseling/standards; Genetic Diseases, Inborn/classification; Genetic Diseases, Inborn/diagnosis; Genetic Diseases, Inborn/genetics; Genetic Diseases, Inborn/pathology; Genetic Predisposition to Disease; Humans; Infant; Infant, Newborn; Male; Practice Guidelines as Topic; Pregnancy; Prenatal Diagnosis/methods; Prenatal Diagnosis/standards; Severity of Illness Index; Young Adult
ABSTRACT
Background: Noninvasive prenatal screening (NIPS) utilization has grown dramatically, and NIPS is increasingly offered to the general population by nongenetic specialists. Web-based technologies and telegenetic services offer potential solutions for efficient results delivery and genetic counseling. Introduction: All major guidelines recommend that patients with both negative and positive results be counseled. The main objective of this study was to quantify patient utilization of and motivation for posttest counseling, and satisfaction with a technology platform designed for large-scale dissemination of NIPS results. Methods: The technology platform provided general education videos to patients, results delivery through a secure portal, and access to telegenetic counseling by phone. Automated results delivery was used only for patients with screen-negative results. For patients with screen-positive results, either the ordering provider or a board-certified genetic counselor contacted the patient directly by phone to communicate the test results and provide counseling. Results: Over a 39-month period, 67,122 NIPS results were issued through the platform, and 4,673 patients elected genetic counseling consultations; 95.2% (n = 4,450) of consultations were for patients receiving negative results. More than 70% (n = 3,370) of consultations were on-demand rather than scheduled. A positive screen, advanced maternal age, family history, previous history of a pregnancy with a chromosomal abnormality, and other high-risk pregnancy were associated with the greatest odds of electing genetic counseling. By combining web education, automated notifications, and telegenetic counseling, we implemented a service that facilitates results disclosure for ordering providers. Discussion: This automated results delivery platform illustrates the use of technology in managing large-scale disclosure of NIPS results. Further studies should address effectiveness and satisfaction among patients and providers in greater detail. Conclusions: These data demonstrate the capability to deliver NIPS results, education, and counseling, congruent with professional society management guidelines, to a large population.
Subject(s)
Disclosure; Genetic Counseling; Noninvasive Prenatal Testing; Telemedicine/trends; Canada; Female; Humans; Pregnancy; Technology
ABSTRACT
PURPOSE: The American College of Obstetricians and Gynecologists (ACOG) proposed seven criteria for expanded carrier screening (ECS) panel design. To ensure that screening for a condition is sufficiently sensitive to identify carriers and reduce residual risk of noncarriers, one criterion requires a per-condition carrier rate greater than 1 in 100. However, it has not been established whether this threshold corresponds to a loss in clinical detection. The impact of the proposed panel design criteria on at-risk couple detection warrants data-driven evaluation. METHODS: Carrier rates and at-risk couple rates were calculated in 56,281 patients who underwent a 176-condition ECS and were evaluated for panels satisfying various criteria. Condition-specific clinical detection rates were estimated via simulation. RESULTS: Different interpretations of the 1-in-100 criterion have variable impact: a compliant panel would include between 3 and 38 conditions, identify 11-81% fewer at-risk couples, and detect 36-79% fewer carriers than a 176-condition panel. If the carrier rate threshold must be exceeded in all ethnicities, ECS panels would lack prevalent conditions like cystic fibrosis. Simulations suggest that the clinical detection rate remains >84% for conditions with carrier rates as low as 1 in 1000. CONCLUSION: The 1-in-100 criterion limits at-risk couple detection and should be reconsidered.
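A toy calculation of the kind of loss quantified above: for an autosomal recessive condition with carrier rate q, the expected at-risk-couple (ARC) rate for a random couple is roughly q squared, so dropping conditions below a carrier-rate threshold removes their contribution. Condition names and carrier rates are illustrative, not the study's panel.

```python
# Sketch: ARC detection lost by excluding conditions below a 1-in-100 carrier rate.
conditions = {                 # carrier rate per individual (illustrative)
    "Cystic fibrosis":  1 / 29,
    "SMA":              1 / 50,
    "Condition X":      1 / 150,
    "Condition Y":      1 / 300,
}

def arc_rate(panel):
    """Approximate per-couple ARC rate, summing q^2 over panel conditions."""
    return sum(q * q for q in panel.values())

full   = arc_rate(conditions)
capped = arc_rate({c: q for c, q in conditions.items() if q >= 1 / 100})
print(f"ARC detection lost by the 1-in-100 rule: {1 - capped / full:.0%}")
```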
Subject(s)
Genetic Carrier Screening/methods; Genetic Counseling/methods; Genetic Diseases, Inborn/genetics; Genetic Testing; DNA Copy Number Variations/genetics; Female; Genetic Carrier Screening/standards; Genetic Counseling/standards; Genetic Diseases, Inborn/diagnosis; Guidelines as Topic; Heterozygote; Humans; Male; Mutation/genetics
ABSTRACT
Dengue virus (DENV) causes the most prevalent human vector-borne viral disease. The force of infection (FoI), the rate at which susceptible individuals are infected in a population, is an important metric for infectious disease modeling. Understanding how and why the FoI of DENV changes over time is critical for developing immunization and vector control policies. We used age-stratified seroprevalence data from 12 years of the Pediatric Dengue Cohort Study in Nicaragua to estimate the annual FoI of DENV from 1994 to 2015. Seroprevalence data revealed a change in the rate at which children acquire DENV-specific immunity: in 2004, 50% seropositivity was reached by age 4 years, but by 2015, 50% seropositivity was reached only by age 11 years. We estimated a spike in the FoI in 1997-1998 and 1998-1999 and a gradual decline thereafter, and children aged <4 years experienced a lower FoI. Two hypotheses to explain the change in the FoI were tested: (i) a transition from introduction of specific DENV serotypes to their endemic transmission and (ii) a population demographic transition due to declining birth rates and increasing life expectancy. We used mathematical models to simulate these hypotheses. We show that the initial high FoI can be explained by the introduction of DENV-3 in 1994-1998, and that the overall gradual decline in the FoI can be attributed to demographic shifts. Changes in immunity and demographics strongly impacted DENV transmission in Nicaragua. Population-level measures of transmission intensity are dynamic and thus challenging to use to guide vaccine implementation locally and globally.
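A sketch of the classic catalytic model that underlies FoI estimation from age-stratified seroprevalence, P(seropositive by age a) = 1 - exp(-lambda * a) for a constant FoI lambda. The survey data points are invented, and the study itself estimated time-varying annual FoIs, which this constant-FoI sketch does not attempt.

```python
# Sketch: fit a constant-FoI catalytic model to mock age-stratified seroprevalence.
import numpy as np
from scipy.optimize import curve_fit

ages = np.array([2, 4, 6, 8, 10, 12, 14], dtype=float)
seroprev = np.array([0.15, 0.30, 0.42, 0.52, 0.61, 0.68, 0.73])   # mock survey data

def catalytic(a, lam):
    return 1.0 - np.exp(-lam * a)

(lam_hat,), _ = curve_fit(catalytic, ages, seroprev, p0=[0.1])
print(f"estimated constant FoI ~ {lam_hat:.3f} infections per susceptible per year")
```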
Subject(s)
Antibodies, Viral/blood; Dengue Virus/isolation & purification; Dengue/epidemiology; Dengue/transmission; Seroepidemiologic Studies; Adolescent; Child; Child, Preschool; Dengue/virology; Female; Humans; Male; Nicaragua/epidemiology; Prospective Studies; Public Health Surveillance; Time Factors
ABSTRACT
An extensive body of theory addresses the topic of pathogen virulence evolution, yet few studies have empirically demonstrated the presence of fitness trade-offs that would select for intermediate virulence. Here we show the presence of transmission-clearance trade-offs in dengue virus using viremia measurements. By fitting a within-host model to these data, we further find that the interaction between dengue and the host immune response can account for the observed trade-offs. Finally, we consider dengue virulence evolution when selection acts on the virus's production rate. By combining within-host model simulations with empirical findings on how host viral load affects human-to-mosquito transmission success, we show that the virus's transmission potential is maximized at production rates associated with intermediate virulence and that the optimal production rate critically depends on dengue's epidemiological context. These results indicate that long-term changes in dengue's global distribution impact the invasion and spread of virulent dengue virus genotypes.
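A conceptual sketch (not the paper's fitted model) of the transmission-potential calculation described above: simulate a simple target-cell-limited within-host model for a range of viral production rates, convert viremia to a per-day human-to-mosquito transmission probability with a Hill-type curve, and integrate that probability over the infection. All parameter values are arbitrary.

```python
# Sketch: transmission potential as a function of the viral production rate p.
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

def within_host(t, y, p):
    T, I, V = y                       # target cells, infected cells, free virus
    beta, delta, c = 1e-9, 1.0, 4.0   # infection, infected-cell death, clearance rates
    return [-beta * T * V, beta * T * V - delta * I, p * I - c * V]

def transmission_potential(p, days=14):
    sol = solve_ivp(within_host, (0, days), [1e7, 0.0, 1.0], args=(p,),
                    dense_output=True, rtol=1e-6)
    t = np.linspace(0, days, 500)
    V = np.clip(sol.sol(t)[2], 1e-6, None)
    prob = V**3 / (V**3 + 1e6**3)     # Hill curve: transmission probability vs viral load
    return trapezoid(prob, t)         # expected infectious-days to mosquitoes

for p in (1e2, 1e3, 1e4, 1e5):
    print(f"production rate {p:.0e}: transmission potential {transmission_potential(p):.2f}")
```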
Subject(s)
Biological Evolution; Dengue Virus/genetics; Dengue/epidemiology; Dengue/virology; Host-Pathogen Interactions/genetics; Virulence/genetics; Adolescent; Adult; Animals; Culicidae; Dengue/transmission; Female; Genetic Fitness; Genetic Variation; Genotype; Humans; Immune System; Male; Vietnam/epidemiology; Viral Load; Young Adult
ABSTRACT
Dengue is a vector-borne viral disease of humans that endemically circulates in many tropical and subtropical regions worldwide. Infection with dengue can result in a range of disease outcomes. A considerable amount of research has sought to improve our understanding of this variation in disease outcomes and to identify predictors of severe disease. Contributing to this research, patterns of viral load in dengue infected patients have been quantified, with analyses indicating that peak viral load levels, rates of viral load decline, and time to peak viremia are useful predictors of severe disease. Here, we take a complementary approach to understanding patterns of clinical manifestation and inter-individual variation in viral load dynamics. Specifically, we statistically fit mathematical within-host models of dengue to individual-level viral load data to test virological and immunological hypotheses explaining inter-individual variation in dengue viral load. We choose between alternative models using model selection criteria to determine which hypotheses are best supported by the data. We first show that the cellular immune response plays an important role in regulating viral load in secondary dengue infections. We then provide statistical support for the process of antibody-dependent enhancement (but not original antigenic sin) in the development of severe disease in secondary dengue infections. Finally, we show statistical support for serotype-specific differences in viral infectivity rates, with infectivity rates of dengue serotypes 2 and 3 exceeding those of serotype 1. These results contribute to our understanding of dengue viral load patterns and their relationship to the development of severe dengue disease. They further have implications for understanding how dengue transmissibility may depend on the immune status of infected individuals and the identity of the infecting serotype.
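A generic sketch of the model-selection step described above: fit two candidate models to viral-load data by least squares and compare them with AIC. The "models" here are placeholder curves and the data are invented; they stand in for the paper's mechanistic within-host models only to show the selection machinery.

```python
# Sketch: choose between candidate models for log10 viraemia via AIC.
import numpy as np
from scipy.optimize import curve_fit

t = np.array([1, 2, 3, 4, 5, 6, 7], dtype=float)
log10_v = np.array([3.1, 5.2, 6.8, 7.1, 6.0, 4.2, 2.5])   # mock log10 viraemia

def model_a(t, a, b, c):          # quadratic rise-and-fall (3 parameters)
    return a + b * t + c * t**2

def model_b(t, a, b):             # straight line (simpler competitor, 2 parameters)
    return a + b * t

def aic(y, yhat, k):
    n = len(y)
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

for name, model, k in (("A", model_a, 3), ("B", model_b, 2)):
    popt, _ = curve_fit(model, t, log10_v)
    print(name, round(aic(log10_v, model(t, *popt), k), 2))   # lower AIC preferred
```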
Subject(s)
Dengue Virus/isolation & purification; Dengue Virus/physiology; Dengue/epidemiology; Dengue/virology; Models, Statistical; Viral Load/statistics & numerical data; Adolescent; Adult; Computer Simulation; Dengue/diagnosis; Dengue Virus/classification; Female; Humans; Male; Middle Aged; Prevalence; Reproducibility of Results; Risk Factors; Sensitivity and Specificity; Species Specificity; Vietnam/epidemiology; Young Adult
ABSTRACT
In recent years, the within-host viral dynamics of dengue infections have been increasingly characterized, and the relationship between aspects of these dynamics and the manifestation of severe disease has been increasingly probed. Despite this progress, there are few mathematical models of within-host dengue dynamics, and the ones that exist focus primarily on the general role of immune cells in the clearance of infected cells, while neglecting other components of the immune response in limiting viraemia. Here, by considering a suite of mathematical within-host dengue models of increasing complexity, we aim to isolate the critical components of the innate and the adaptive immune response that suffice in the reproduction of several well-characterized features of primary and secondary dengue infections. By building up from a simple target cell limited model, we show that only the innate immune response is needed to recover the characteristic features of a primary symptomatic dengue infection, while a higher rate of viral infectivity (indicative of antibody-dependent enhancement) and infected cell clearance by T cells are further needed to recover the characteristic features of a secondary dengue infection. We show that these minimal models can reproduce the increased risk of disease associated with secondary heterologous infections that arises as a result of a cytokine storm, and, further, that they are consistent with virological indicators that predict the onset of severe disease, such as the magnitude of peak viraemia, time to peak viral load, and viral clearance rate. Finally, we show that the effectiveness of these virological indicators to predict the onset of severe disease depends on the contribution of T cells in fuelling the cytokine storm.
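A minimal sketch of a target-cell-limited model extended with an innate (interferon-like) response that dampens infection of new cells, in the spirit of the model hierarchy described above. The equations and parameter values are illustrative choices, not the paper's.

```python
# Sketch: target-cell-limited model plus an innate mediator F that lowers infectivity.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    T, I, V, F = y                               # target cells, infected cells, virus, innate mediator
    beta, delta, p, c = 3e-9, 1.0, 1e4, 5.0
    q, d_f, eps = 1e-6, 2.0, 5.0
    infection = beta * T * V / (1.0 + eps * F)   # innate response reduces new infections
    return [-infection,
            infection - delta * I,
            p * I - c * V,
            q * I - d_f * F]

sol = solve_ivp(rhs, (0, 12), [1e7, 0.0, 1.0, 0.0], max_step=0.05)
print(f"peak log10 viraemia ~ {np.log10(sol.y[2].max()):.1f}")
```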
Subject(s)
Dengue Virus/physiology; Dengue/immunology; Host-Pathogen Interactions/immunology; Models, Immunological; Animals; Dengue/virology; Humans; Viremia/immunology
ABSTRACT
BACKGROUND: Acetaminophen (N-acetyl-para-aminophenol) is the most widely used over-the-counter or prescription painkiller in the world. Acetaminophen is metabolized in the liver, where a toxic byproduct is produced that can be removed by conjugation with glutathione. Acetaminophen overdoses, either accidental or intentional, are the leading cause of acute liver failure in the United States, accounting for 56,000 emergency room visits per year. The standard treatment for overdose is N-acetyl-cysteine (NAC), which is given to stimulate the production of glutathione. METHODS: We have created a mathematical model for acetaminophen transport and metabolism that includes the following compartments: gut, plasma, liver, tissue, and urine. In the liver compartment, the metabolism of acetaminophen includes sulfation, glucuronidation, conjugation with glutathione, production of the toxic metabolite, and liver damage, with biochemical parameters taken from the literature whenever possible. This model is then connected to a previously constructed model of glutathione metabolism. RESULTS: We show that our model accurately reproduces published clinical and experimental data on the dose-dependent time course of acetaminophen in the plasma, the accumulation of acetaminophen and its metabolites in the urine, and the depletion of glutathione caused by conjugation with the toxic product. We use the model to study the extent of liver damage caused by overdoses or by chronic use of therapeutic doses, and the effects of polymorphisms in glucuronidation enzymes. We use the model to study the depletion of glutathione and the effect of the size and timing of N-acetyl-cysteine doses given as an antidote. Our model accurately predicts patient death or recovery depending on the size of the APAP overdose and the time of treatment. CONCLUSIONS: The mathematical model provides a new tool for studying the effects of various doses of acetaminophen on the liver metabolism of acetaminophen and glutathione. It can be used to study how the metabolism of acetaminophen depends on the expression level of liver enzymes. Finally, it can be used to predict patient metabolic and physiological responses to APAP doses and different NAC dosing strategies.
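The sketch below is a highly simplified caricature (not the authors' model) of the compartment bookkeeping described above: APAP moves from gut to plasma, is cleared via safe conjugation pathways, and a small toxic fraction is conjugated with glutathione, depleting it against a resynthesis term. All rate constants, the dose units, and the saturating conjugation term are arbitrary assumptions.

```python
# Sketch: three-state toy of APAP absorption, clearance, and glutathione (GSH) depletion.
from scipy.integrate import solve_ivp

def rhs(t, y):
    gut, plasma, gsh = y
    ka, k_safe, k_tox, k_gsh_syn, gsh0 = 1.5, 0.4, 0.04, 0.1, 10.0
    # Toxic-pathway flux is made saturating in GSH so the toy stays well-behaved.
    napqi_conjugation = k_tox * plasma * gsh / (gsh + 1.0)
    return [-ka * gut,
            ka * gut - (k_safe + k_tox) * plasma,
            k_gsh_syn * (gsh0 - gsh) - napqi_conjugation]

dose = 80.0   # arbitrary units; larger values deplete GSH further
sol = solve_ivp(rhs, (0, 24), [dose, 0.0, 10.0], max_step=0.1)
print(f"minimum glutathione over 24 h: {sol.y[2].min():.2f} (baseline 10.00)")
```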
Subject(s)
Acetaminophen/toxicity; Anti-Inflammatory Agents, Non-Steroidal/toxicity; Models, Statistical; Acetaminophen/metabolism; Allosteric Regulation; Anti-Inflammatory Agents, Non-Steroidal/metabolism; Glucuronides/metabolism; Glutathione/metabolism; Humans
ABSTRACT
BACKGROUND: As in adults, thyroidectomy in pediatric patients with differentiated thyroid cancer is often followed by 131I remnant ablation. A standard protocol is to give normalizing oral thyroxine (T4) or triiodothyronine (T3) after surgery and then withdraw it for 2 to 6 weeks. Thyroid remnants or metastases are treated most effectively when serum thyrotropin (TSH) is high, but prolonged withdrawals should be avoided to minimize hypothyroid morbidity. METHODS: A published feedback control system model of adult human thyroid hormone regulation was modified for children using pediatric T4 kinetic data. The child model was developed from data for patients ranging from 3 to 9 years old. We simulated a range of T4 and T3 replacement protocols for children, exploring alternative regimens for minimizing the withdrawal period while maintaining normal or suppressed TSH during replacement. The results are presented with the intent of providing a quantitative basis to guide further studies of pediatric treatment options. Replacement was simulated for up to 3 weeks post-thyroidectomy, followed by various withdrawal periods. T4 vs. T3 replacement, remnant size, dose size, and dose frequency were tested for effects on the time for TSH to reach 25 mU/L (the withdrawal period). RESULTS: For both T3 and T4 replacement, higher doses were associated with longer withdrawal periods. T3 replacement yielded shorter withdrawal periods than T4 replacement (up to 3.5 days versus 7-10 days). Higher than normal serum T3 concentrations were required to normalize or suppress TSH during T3 monotherapy, but not T4 monotherapy. Larger remnant sizes resulted in longer withdrawal periods if T4 replacement was used, but had little effect for T3 replacement. CONCLUSIONS: T3 replacement yielded withdrawal periods about half those for T4 replacement. Higher than normal hormone levels under T3 monotherapy can be partially alleviated by more frequent, smaller doses (e.g., twice a day). LT4 may be the preferred option for most children, given the convenience of single daily dosing and the familiarity of pediatric endocrinologists with its administration. Remnant effects on withdrawal period highlight the importance of minimizing remnant size.
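The sketch below is a toy version of the withdrawal-period calculation described above, not the published feedback-control model: circulating T4 is assumed to decay exponentially after replacement is stopped, TSH is assumed to rise as an inverse power of T4 up to a cap, and the reported quantity is the time for TSH to reach 25 mU/L. The half-life, exponent, and cap are rough illustrative choices.

```python
# Sketch: time for TSH to reach 25 mU/L after stopping T4 replacement (toy model).
import numpy as np

t = np.linspace(0, 42, 1000)                          # days after withdrawal
t4_halflife = 6.0                                     # assumed plasma T4 half-life, days
t4 = np.exp(-np.log(2) * t / t4_halflife)             # T4 relative to the euthyroid level

# TSH assumed inversely related to T4 (power law), capped at a maximum response.
tsh = np.minimum(150.0, 1.5 * (1.0 / np.maximum(t4, 1e-3)) ** 1.6)

withdrawal_days = t[np.argmax(tsh >= 25.0)]
print(f"time for TSH to reach 25 mU/L ~ {withdrawal_days:.1f} days")
```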