Results 1 - 20 of 25
1.
Genet Med ; : 101285, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39360752

ABSTRACT

INTRODUCTION: Genomic screening to identify individuals with Lynch syndrome (LS) and those with a high polygenic risk score (PRS) promises to personalize colorectal cancer (CRC) screening. Understanding its clinical and economic impact is needed to inform screening guidelines and reimbursement policies. METHODS: We developed a Markov model to simulate individuals over a lifetime. We compared LS+PRS genomic screening with standard of care (SOC) for a cohort of US adults starting at age 30. The Markov model included health states of no CRC, CRC stages A-D, and death. We estimated incidence, mortality, and discounted economic outcomes of the population under the different interventions. RESULTS: Screening 1000 individuals for LS+PRS resulted in 1.36 fewer CRC cases and 0.65 fewer deaths compared with SOC. The incremental cost-effectiveness ratio (ICER) was $124,415 per quality-adjusted life-year (QALY); screening had a 69% probability of being cost-effective at a willingness-to-pay threshold of $150,000/QALY. Defining high risk by a PRS threshold at the 90th percentile was more likely to be cost-effective than thresholds at the 95th, 85th, or 80th percentile. CONCLUSION: Population-level LS+PRS screening is marginally cost-effective, and a 90th-percentile PRS threshold is more likely to be cost-effective than the other thresholds examined.
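A minimal sketch of the kind of cohort-level Markov trace behind such estimates, assuming a toy three-state structure (no CRC, CRC, dead), a 1-year cycle, and placeholder transition probabilities, utilities, costs, and test cost that are not taken from the study:

```python
import numpy as np

def run_cohort(p_crc, p_die_crc=0.05, p_die_other=0.01,
               u=(0.85, 0.70, 0.0), cost=(50.0, 20000.0, 0.0),
               cycles=50, disc=0.03):
    """Discounted (QALYs, costs) per person over `cycles` 1-year cycles."""
    # States: no CRC, CRC, dead. All values are illustrative placeholders.
    P = np.array([
        [1 - p_crc - p_die_other, p_crc, p_die_other],
        [0.0, 1 - p_die_crc - p_die_other, p_die_crc + p_die_other],
        [0.0, 0.0, 1.0],
    ])
    dist = np.array([1.0, 0.0, 0.0])        # everyone starts without CRC
    qalys = costs = 0.0
    for t in range(cycles):
        w = 1.0 / (1.0 + disc) ** t         # discount factor
        qalys += w * dist @ np.array(u)
        costs += w * dist @ np.array(cost)
        dist = dist @ P                     # advance one cycle
    return qalys, costs

# Screening is represented only as a lower annual CRC incidence plus a
# hypothetical per-person test cost -- enough to show how an ICER is formed.
q_soc, c_soc = run_cohort(p_crc=0.004)
q_scr, c_scr = run_cohort(p_crc=0.003)
c_scr += 200.0
icer = (c_scr - c_soc) / (q_scr - q_soc)
print(f"ICER: ${icer:,.0f} per QALY")
```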

2.
J Biomed Inform ; 155: 104660, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38788889

ABSTRACT

INTRODUCTION: Electronic Health Records (EHR) are a useful data source for research, but their usability is hindered by measurement errors. This study investigated an automatic error detection algorithm for adult height and weight measurements in EHR data for the All of Us Research Program (All of Us). METHODS: We developed reference charts for adult heights and weights that were stratified by participant sex. Our analysis included 4,076,534 height and 5,207,328 weight measurements from approximately 150,000 participants. Errors were identified using modified standard deviation scores, differences from expected values, and significant changes between consecutive measurements. We evaluated our method against chart-reviewed heights (8,092) and weights (9,039) from 250 randomly selected participants and compared it with the current cleaning algorithm in All of Us. RESULTS: The proposed algorithm classified 1.4% of height and 1.5% of weight measurements as errors in the full cohort. Sensitivity was 90.4% (95% CI: 79.0-96.8%) for heights and 65.9% (95% CI: 56.9-74.1%) for weights. Precision was 73.4% (95% CI: 60.9-83.7%) for heights and 62.9% (95% CI: 54.0-71.1%) for weights. In comparison, the current cleaning algorithm had inferior sensitivity (55.8%) and precision (16.5%) for height errors, while having higher precision (94.0%) but lower sensitivity (61.9%) for weight errors. DISCUSSION: Our proposed algorithm performed better at detecting height errors than weight errors. It can serve as a valuable addition to the current All of Us cleaning algorithm for identifying erroneous height values.
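A rough illustration of the flagging step, assuming sex-stratified robust z-scores (median/MAD) and a consecutive-measurement jump rule; the cutoffs, column names, and pandas layout are hypothetical, not the reference-chart method used in the paper:

```python
import numpy as np
import pandas as pd

def flag_height_errors(df, z_cut=5.0, jump_cm=10.0):
    """Flag implausible heights by robust z-score and large consecutive jumps."""
    df = df.sort_values(["person_id", "date"]).copy()
    grp = df.groupby("sex")["height_cm"]
    med = grp.transform("median")
    mad = grp.transform(lambda x: (x - x.median()).abs().median()) * 1.4826
    df["z"] = (df["height_cm"] - med) / mad
    df["jump"] = df.groupby("person_id")["height_cm"].diff().abs()
    df["flag"] = (df["z"].abs() > z_cut) | (df["jump"] > jump_cm)
    return df

# Toy usage: the 16.2 cm value mimics a unit/typo error.
data = pd.DataFrame({
    "person_id": [1, 1, 1, 2, 2],
    "sex": ["F", "F", "F", "M", "M"],
    "date": pd.to_datetime(["2020-01-01", "2020-06-01", "2021-01-01",
                            "2020-03-01", "2020-09-01"]),
    "height_cm": [162.0, 161.5, 16.2, 178.0, 179.0],
})
print(flag_height_errors(data)[["person_id", "height_cm", "flag"]])
```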


Subject(s)
Algorithms, Body Height, Body Weight, Electronic Health Records, Humans, Male, Adult, Female, Middle Aged, United States, Reference Values, Aged, Young Adult
3.
Ann Intern Med ; 176(5): 585-595, 2023 05.
Article in English | MEDLINE | ID: mdl-37155986

ABSTRACT

BACKGROUND: The cost-effectiveness of screening the U.S. population for Centers for Disease Control and Prevention (CDC) Tier 1 genomic conditions is unknown. OBJECTIVE: To estimate the cost-effectiveness of simultaneous genomic screening for Lynch syndrome (LS), hereditary breast and ovarian cancer syndrome (HBOC), and familial hypercholesterolemia (FH). DESIGN: Decision analytic Markov model. DATA SOURCES: Published literature. TARGET POPULATION: Separate age-based cohorts (ages 20 to 60 years at time of screening) of racially and ethnically representative U.S. adults. TIME HORIZON: Lifetime. PERSPECTIVE: U.S. health care payer. INTERVENTION: Population genomic screening using clinical sequencing with a restricted panel of high-evidence genes, cascade testing of first-degree relatives, and recommended preventive interventions for identified probands. OUTCOME MEASURES: Incident breast, ovarian, and colorectal cancer cases; incident cardiovascular events; quality-adjusted survival; and costs. RESULTS OF BASE-CASE ANALYSIS: Screening 100 000 unselected 30-year-olds resulted in 101 (95% uncertainty interval [UI], 77 to 127) fewer overall cancer cases and 15 (95% UI, 4 to 28) fewer cardiovascular events and an increase of 495 quality-adjusted life-years (QALYs) (95% UI, 401 to 757) at an incremental cost of $33.9 million (95% UI, $27.0 million to $41.1 million). The incremental cost-effectiveness ratio was $68 600 per QALY gained (95% UI, $41 800 to $88 900). RESULTS OF SENSITIVITY ANALYSIS: Screening 30-, 40-, and 50-year-old cohorts was cost-effective in 99%, 88%, and 19% of probabilistic simulations, respectively, at a $100 000-per-QALY threshold. The test costs at which screening 30-, 40-, and 50-year-olds reached the $100 000-per-QALY threshold were $413, $290, and $166, respectively. Variant prevalence and adherence to preventive interventions were also highly influential parameters. LIMITATIONS: Population averages for model inputs, which were derived predominantly from European populations, vary across ancestries and health care environments. CONCLUSION: Population genomic screening with a restricted panel of high-evidence genes associated with 3 CDC Tier 1 conditions is likely to be cost-effective in U.S. adults younger than 40 years if the testing cost is relatively low and probands have access to preventive interventions. PRIMARY FUNDING SOURCE: National Human Genome Research Institute.
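A small sketch of how a "probability of being cost-effective" is typically read off probabilistic sensitivity analysis draws via net monetary benefit; the distributions below are synthetic stand-ins seeded with the abstract's point estimates, not the model's actual output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_draws = 10_000
# Synthetic incremental cost and QALY draws (placeholder spreads).
inc_cost = rng.normal(33.9e6, 3.6e6, n_draws)   # incremental cost, $
inc_qaly = rng.normal(495, 90, n_draws)         # incremental QALYs

def prob_cost_effective(wtp):
    """Share of draws with positive net monetary benefit at threshold `wtp`."""
    nmb = wtp * inc_qaly - inc_cost
    return (nmb > 0).mean()

for wtp in (50_000, 100_000, 150_000):
    print(f"WTP ${wtp:,}/QALY: P(cost-effective) = {prob_cost_effective(wtp):.2f}")
```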


Subject(s)
Cardiovascular Diseases, Hyperlipoproteinemia Type II, Adult, Humans, Young Adult, Middle Aged, Cost-Effectiveness Analysis, Cost-Benefit Analysis, Metagenomics, Quality-Adjusted Life Years, Mass Screening
4.
J Vasc Interv Radiol ; 34(3): 378-385, 2023 03.
Article in English | MEDLINE | ID: mdl-36481322

ABSTRACT

PURPOSE: To evaluate whether same-day discharge increased the incidence of 30-day readmission (30dR) after conventional transarterial chemoembolization (TACE) for hepatocellular carcinoma (HCC) at a single institution. MATERIALS AND METHODS: In this retrospective study, 253 patients with HCC underwent 521 TACE procedures between 2013 and 2020. TACE was performed with 50-mg doxorubicin/10-mg mitomycin C/5-10-mL ethiodized oil/particles. Patients not requiring intravenous pain medications were discharged after a 3-hour observation, and 30dR was tracked. The primary objective was to determine the incidence of 30dR in same-day discharge patients versus patients admitted for observation using the chi-square test. Secondary objectives assessed factors associated with overnight admission and factors predictive of 30dR using generalized estimating equation calculations and logistic regression. RESULTS: In the cohort, 24 readmissions occurred within 30 days (4.6%). Same-day discharge was completed after 331 TACE procedures, with sixteen 30dRs (4.8%). Patients admitted overnight were readmitted 8 times after 190 TACE procedures (4.2%). This difference was not statistically significant (P = .4). Factors predicting overnight admission included female sex (58/190 [30.5%] vs 58/331 [17.5%], P < .001) and tumor size of ≥3.8 cm (104/190 [55%] vs 85/190 [45%]). Factors predicting 30dR included female sex (10/116 [8.6%] vs 14/405 [3.5%]) and younger age (median [interquartile range], 63 years [55-65 years] vs 65 years [59-71 years]). At regression, factors predictive of 30dR were Child-Pugh class B/C (odds ratio [OR], 2.1; P = .04) and female sex (OR, 2.9; P = .004). CONCLUSIONS: Same-day discharge after conventional TACE is a safe and effective strategy, with a 30dR rate of <5%, similar to overnight observation.


Subject(s)
Hepatocellular Carcinoma, Therapeutic Chemoembolization, Liver Neoplasms, Humans, Female, Middle Aged, Hepatocellular Carcinoma/therapy, Liver Neoplasms/therapy, Patient Discharge, Retrospective Studies, Therapeutic Chemoembolization/methods, Ethiodized Oil/therapeutic use, Doxorubicin, Mitomycin, Treatment Outcome
5.
Genet Med ; 24(5): 1017-1026, 2022 05.
Article in English | MEDLINE | ID: mdl-35227606

ABSTRACT

PURPOSE: Genomic screening for Lynch syndrome (LS) could prevent colorectal cancer (CRC) by identifying high-risk patients and instituting intensive CRC screening. We estimated the cost-effectiveness of population-wide LS genomic screening vs family history-based screening alone in an unselected US population. METHODS: We developed a decision-analytic Markov model including health states for precancer, stage-specific CRC, and death, and assumed an inexpensive test cost of $200. We conducted sensitivity and threshold analyses to evaluate model uncertainty. RESULTS: Screening unselected 30-year-olds for LS variants resulted in 48 (95% credible range [CR] = 35-63) fewer overall CRC cases per 100,000 screened individuals, leading to 187 quality-adjusted life-years (QALYs; 95% CR = 123-260) gained at an incremental cost of $24.6 million (95% CR = $20.3 million-$29.1 million). The incremental cost-effectiveness ratio was $132,200 per QALY, with an 8% and 71% probability of being cost-effective at willingness-to-pay thresholds of $100,000 and $150,000 per QALY, respectively. CONCLUSION: Population LS screening may be cost-effective in younger patient populations under a $150,000-per-QALY willingness-to-pay threshold and with a relatively inexpensive test cost. Further reductions in testing costs and/or the inclusion of LS testing within a broader multiplex screening panel are needed for screening to become highly cost-effective.


Subject(s)
Hereditary Nonpolyposis Colorectal Neoplasms, Colorectal Neoplasms, Colorectal Neoplasms/diagnosis, Colorectal Neoplasms/epidemiology, Colorectal Neoplasms/genetics, Hereditary Nonpolyposis Colorectal Neoplasms/diagnosis, Hereditary Nonpolyposis Colorectal Neoplasms/epidemiology, Hereditary Nonpolyposis Colorectal Neoplasms/genetics, Cost-Benefit Analysis, Genomics, Humans, Quality-Adjusted Life Years, United States/epidemiology
6.
Stat Med ; 41(14): 2497-2512, 2022 06 30.
Article in English | MEDLINE | ID: mdl-35253265

ABSTRACT

Studies of critically ill, hospitalized patients often follow participants and characterize daily health status using an ordinal outcome variable. Statistically, longitudinal proportional odds models are a natural choice in these settings since such models can parsimoniously summarize differences across patient groups and over time. However, when one or more of the outcome states is absorbing, the proportional odds assumption for the follow-up time parameter will likely be violated, and more flexible longitudinal models are needed. Motivated by the VIOLET Study (Ginde et al), a parallel-arm, randomized clinical trial of Vitamin D3 in critically ill patients, we discuss and contrast several treatment effect estimands based on time-dependent odds ratio parameters, and we detail contemporary modeling approaches. In VIOLET, the outcome is a four-level ordinal variable where the lowest "not alive" state is absorbing and the highest "at-home" state is nearly absorbing. We discuss flexible extensions of the proportional odds model for longitudinal data that can be used for either model-based inference, where the odds ratio estimator is taken directly from the model fit, or for model-assisted inferences, where heterogeneity across cumulative log odds dichotomizations is modeled and results are summarized to obtain an overall odds ratio estimator. We focus on direct estimation of cumulative probability model (CPM) parameters using likelihood-based analysis procedures that naturally handle absorbing states. We illustrate the modeling procedures, the relative precision of model-based and model-assisted estimators, and the possible differences in the values for which the estimators are consistent through simulations and analysis of the VIOLET Study data.
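A minimal illustration of the cumulative log-odds dichotomization idea mentioned above, assuming synthetic cross-sectional data and ignoring the repeated-measures and absorbing-state structure the paper actually handles: fit one logistic regression per cutpoint and compare the treatment odds ratios, which should be roughly equal if proportional odds holds.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 500
treat = rng.integers(0, 2, n)
latent = 0.4 * treat + rng.logistic(size=n)      # true log-OR = 0.4
y = np.digitize(latent, [-1.0, 0.5, 2.0])        # 4-level ordinal outcome 0..3

X = sm.add_constant(treat.astype(float))
for k in (1, 2, 3):                              # dichotomize at each cutpoint
    fit = sm.Logit((y >= k).astype(int), X).fit(disp=0)
    print(f"cutpoint y>={k}: OR = {np.exp(fit.params[1]):.2f}")
```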


Subject(s)
Biometry, Critical Illness, Humans, Likelihood Functions, Longitudinal Studies, Odds Ratio
7.
Am J Epidemiol ; 189(2): 81-90, 2020 02 28.
Article in English | MEDLINE | ID: mdl-31165875

ABSTRACT

We propose a general class of 2-phase epidemiologic study designs for quantitative, longitudinal data that are useful when phase 1 longitudinal outcome and covariate data are available but data on the exposure (e.g., a biomarker) can only be collected on a subset of subjects during phase 2. To conduct a study using a design in the class, one first summarizes the longitudinal outcomes by fitting a simple linear regression of the response on a time-varying covariate for each subject. Sampling strata are defined by splitting the estimated regression intercept or slope distributions into distinct (low, medium, and high) regions. Stratified sampling is then conducted from strata defined by the intercepts, by the slopes, or from a mixture. In general, samples selected with extreme intercept values will yield low variances for associations of time-fixed exposures with the outcome and samples enriched with extreme slope values will yield low variances for associations of time-varying exposures with the outcome (including interactions with time-varying exposures). We describe ascertainment-corrected maximum likelihood and multiple-imputation estimation procedures that permit valid and efficient inferences. We embed all methodological developments within the framework of conducting a substudy that seeks to examine genetic associations with lung function among continuous smokers in the Lung Health Study (United States and Canada, 1986-1994).
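A sketch of the phase-1 design step under stated assumptions (synthetic trajectories, 10th/90th-percentile slope cutoffs, illustrative sampling fractions): summarize each subject by an OLS intercept and slope, define strata, and oversample the extremes for phase-2 exposure ascertainment.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
subjects = []
for sid in range(2000):
    t = np.arange(5, dtype=float)                           # 5 annual visits
    y = 100 + rng.normal(0, 5) + rng.normal(-1, 0.5) * t + rng.normal(0, 2, 5)
    slope, intercept = np.polyfit(t, y, 1)                  # per-subject OLS fit
    subjects.append((sid, intercept, slope))
ph1 = pd.DataFrame(subjects, columns=["sid", "intercept", "slope"])

lo, hi = ph1["slope"].quantile([0.1, 0.9])
ph1["stratum"] = np.where(ph1["slope"] < lo, "low",
                 np.where(ph1["slope"] > hi, "high", "middle"))

# Oversample the tails: all of "low" and "high", a small fraction of "middle".
take = {"low": 1.0, "high": 1.0, "middle": 0.1}
parts = [g.sample(frac=take[name], random_state=0)
         for name, g in ph1.groupby("stratum")]
phase2 = pd.concat(parts)
print(phase2["stratum"].value_counts())
```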


Subject(s)
Epidemiologic Research Design, Statistical Models, Outcome Assessment (Health Care)/methods, Case-Control Studies, Humans, Linear Models, Longitudinal Studies, Sampling Studies
8.
Pharmacogenomics J ; 20(5): 724-735, 2020 10.
Article in English | MEDLINE | ID: mdl-32042096

ABSTRACT

Current guidelines recommend dual antiplatelet therapy (DAPT) consisting of aspirin and a P2Y12 inhibitor following percutaneous coronary intervention (PCI). CYP2C19 genotype can guide DAPT selection, with ticagrelor or prasugrel prescribed for loss-of-function (LOF) allele carriers (genotype-guided escalation). Cost-effectiveness analyses (CEA) are traditionally grounded in clinical trial data. We conducted a CEA using real-world data and a 1-year decision-analytic model comparing three primary strategies: universal empiric clopidogrel (base case), universal ticagrelor, and genotype-guided escalation. We also explored secondary strategies commonly implemented in practice, wherein all patients are prescribed ticagrelor for 30 days post PCI. After 30 days, all patients are switched to clopidogrel irrespective of genotype (nonguided de-escalation) or to clopidogrel only if they do not harbor an LOF allele (genotype-guided de-escalation). Compared with universal clopidogrel, both universal ticagrelor and genotype-guided escalation were superior, with improvements in quality-adjusted life-years (QALYs). Only genotype-guided escalation was cost-effective ($42,365/QALY) and demonstrated the highest probability of being cost-effective across conventional willingness-to-pay thresholds. In the secondary analysis, compared with the nonguided de-escalation strategy, genotype-guided de-escalation and universal ticagrelor were more effective but not cost-effective, with ICERs of $188,680/QALY and $678,215/QALY, respectively. CYP2C19 genotype-guided antiplatelet prescribing is cost-effective compared with either universal clopidogrel or universal ticagrelor using real-world implementation data. The secondary analysis suggests genotype-guided and nonguided de-escalation may be viable strategies that need further evaluation.


Subject(s)
Acute Coronary Syndrome/economics, Acute Coronary Syndrome/therapy, Cytochrome P-450 CYP2C19/genetics, Drug Costs, Molecular Diagnostic Techniques/economics, Percutaneous Coronary Intervention, Pharmacogenomic Variants, Platelet Aggregation Inhibitors/economics, Platelet Aggregation Inhibitors/therapeutic use, Acute Coronary Syndrome/genetics, Aspirin/economics, Aspirin/therapeutic use, Clopidogrel/economics, Clopidogrel/therapeutic use, Cost-Benefit Analysis, Decision Support Techniques, Dual Anti-Platelet Therapy/economics, Humans, Percutaneous Coronary Intervention/adverse effects, Platelet Aggregation Inhibitors/adverse effects, Predictive Value of Tests, Quality-Adjusted Life Years, Ticagrelor/economics, Ticagrelor/therapeutic use, Time Factors, Treatment Outcome
9.
Nat Methods ; 13(6): 497-500, 2016 06.
Article in English | MEDLINE | ID: mdl-27135974

ABSTRACT

In vitro cell proliferation assays are widely used in pharmacology, molecular biology, and drug discovery. Using theoretical modeling and experimentation, we show that current metrics of antiproliferative small molecule effect suffer from time-dependent bias, leading to inaccurate assessments of parameters such as drug potency and efficacy. We propose the drug-induced proliferation (DIP) rate, the slope of the line on a plot of cell population doublings versus time, as an alternative, time-independent metric.
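A minimal sketch of the DIP-rate calculation as described: regress population doublings, log2(N(t)/N(0)), on time after an equilibration window; the 24 h window and the toy counts are assumptions for illustration.

```python
import numpy as np

t = np.array([0, 12, 24, 36, 48, 60, 72], dtype=float)      # hours
counts = np.array([1000, 1150, 1300, 1400, 1510, 1620, 1740], dtype=float)

doublings = np.log2(counts / counts[0])      # population doublings since t=0
mask = t >= 24.0                             # skip the early transient window
dip_rate, _ = np.polyfit(t[mask], doublings[mask], 1)
print(f"DIP rate: {dip_rate:.4f} doublings/h")
```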


Subject(s)
Cell Proliferation/drug effects, Drug Discovery/methods, Theoretical Models, Molecular Biology/methods, Small Molecule Libraries/pharmacology, Tumor Cell Line, Computer Simulation, Dose-Response Relationship (Drug), Humans, Fluorescence Microscopy, Sensitivity and Specificity, Small Molecule Libraries/chemistry, Time Factors
10.
Nat Methods ; 9(9): 923-8, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22886092

ABSTRACT

We present an integrated method that uses extended time-lapse automated imaging to quantify the dynamics of cell proliferation. Cell counts are fit with a quiescence-growth model that estimates rates of cell division, entry into quiescence and death. The model is constrained with rates extracted experimentally from the behavior of tracked single cells over time. We visualize the output of the analysis in fractional proliferation graphs, which deconvolve dynamic proliferative responses to perturbations into the relative contributions of dividing, quiescent (nondividing) and dead cells. The method reveals that the response of 'oncogene-addicted' human cancer cells to tyrosine kinase inhibitors is a composite of altered rates of division, death and entry into quiescence, a finding that challenges the notion that such cells simply die in response to oncogene-targeted therapy.
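A simple sketch of a quiescence-growth model in the spirit of the description above, with dividing cells that divide, enter quiescence, or die at constant rates; the rate values are arbitrary illustrations, and real analyses constrain them with tracked single-cell behavior.

```python
import numpy as np
from scipy.integrate import solve_ivp

def quiescence_growth(t, y, d, q, a):
    """Dividing cells divide (d), enter quiescence (q), or die (a)."""
    dividing, quiescent, dead = y
    return [(d - q - a) * dividing,   # net change in dividing cells
            q * dividing,             # entry into quiescence
            a * dividing]             # accumulation of dead cells

sol = solve_ivp(quiescence_growth, (0, 96), [1000, 0, 0],
                args=(0.03, 0.01, 0.005), t_eval=np.linspace(0, 96, 9))
live = sol.y[0] + sol.y[1]                    # observable (live) cell count
fractions = sol.y / sol.y.sum(axis=0)         # "fractional proliferation" shares
print(np.round(live, 1))
```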


Subject(s)
Squamous Cell Carcinoma/pathology, Lung Neoplasms/pathology, Video Microscopy/methods, Single-Cell Analysis/methods, Squamous Cell Carcinoma/drug therapy, Cell Count, Cell Proliferation, Humans, Lung Neoplasms/drug therapy, Protein Kinase Inhibitors/pharmacology, Protein-Tyrosine Kinases/antagonists & inhibitors, Structure-Activity Relationship, Time Factors, Cultured Tumor Cells
11.
Biomed Microdevices ; 16(1): 91-6, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24065585

ABSTRACT

Polydimethylsiloxane (PDMS) is a commonly used polymer in the fabrication of microfluidic devices due to features such as transparency, gas permeability, and ease of patterning with soft lithography. The surface characteristics of PDMS can also be easily changed with oxygen or low-pressure air plasma, converting it from a hydrophobic to a hydrophilic state. As part of such a transformation, surface methyl groups are removed and replaced with hydroxyl groups, making the exposed surface resemble silica, a gas-impermeable substance. We utilized platinum(II)-tetrakis(pentafluorophenyl)porphyrin immobilized within a thin (~1.5 µm) polystyrene matrix as an oxygen sensor, together with the Stern-Volmer relationship and Fick's law of simple diffusion, to measure the effects of PDMS composition, treatment, and storage on oxygen diffusion through PDMS. Results indicate that freshly oxidized PDMS showed a significantly smaller diffusion coefficient, indicating that the SiO2 layer formed on the PDMS surface created an impeding barrier. This barrier disappeared after a 3-day storage in air but remained significant for up to 3 weeks if the PDMS was kept in contact with water. Additionally, the higher-density PDMS formulation (5:1 ratio) showed diffusion characteristics similar to those of the normal (10:1 ratio) formulation but a 60% smaller diffusion coefficient after plasma treatment, which never recovered to pre-treatment levels even after a 3-week storage in air. Understanding how plasma surface treatments affect oxygen diffusion will be useful in exploiting the gas permeability of PDMS to establish defined normoxic and hypoxic oxygen conditions within microfluidic bioreactor systems.


Subject(s)
Dimethylpolysiloxanes/chemistry, Gases/chemistry, Cell Culture Techniques, Diffusion, Microfluidic Analytical Techniques/instrumentation, Microfluidics/methods, Oxidation-Reduction, Oxygen/chemistry, Permeability, Polystyrenes/chemistry, Silicon Dioxide/chemistry, Surface Properties, Water/chemistry
12.
iScience ; 27(6): 109989, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38846004

ABSTRACT

Mathematical models of biomolecular networks are commonly used to study cellular processes; however, their usefulness for explaining and predicting dynamic behaviors is often questioned due to the unclear relationship between parameter uncertainty and network dynamics. In this work, we introduce PyDyNo (Python dynamic analysis of biochemical networks), a non-equilibrium, reaction-flux-based analysis to identify dominant reaction paths within a biochemical reaction network calibrated to experimental data. We first show, in a simplified apoptosis execution model, that despite the thousands of parameter vectors with equally good fits to experimental data, our framework identifies the dynamic differences between these parameter sets and outputs three dominant execution modes, which exhibit varying sensitivity to perturbations. We then apply our methodology to the JAK2/STAT5 network in colony-forming unit-erythroid (CFU-E) cells and provide a previously unrecognized mechanistic explanation for the survival responses of the CFU-E cell population that would have been impossible to deduce with traditional protein-concentration-based analyses.

13.
Article in English | MEDLINE | ID: mdl-39138951

ABSTRACT

IMPORTANCE: Scales often arise from multi-item questionnaires yet commonly face item non-response. Traditional solutions use the weighted mean (WMean) of available responses but potentially overlook missing-data intricacies. Advanced methods like multiple imputation (MI) address broader missing-data settings but demand increased computational resources. Researchers frequently use survey data in the All of Us Research Program (All of Us), and it is imperative to determine whether the increased computational burden of employing MI to handle non-response is justifiable. OBJECTIVES: Using the 5-item Physical Activity Neighborhood Environment Scale (PANES) in All of Us, this study assessed the tradeoff between efficacy and computational demands of WMean, MI, and inverse probability weighting (IPW) when dealing with item non-response. MATERIALS AND METHODS: Synthetic missingness, allowing non-response to 1 or more items, was introduced into PANES across 3 missing mechanisms and various missing percentages (10%-50%). Each scenario compared the WMean of complete questions, MI, and IPW on bias, variability, coverage probability, and computation time. RESULTS: All methods showed minimal bias (all <5.5%) when internal consistency was good, with WMean suffering the most when consistency was poor. IPW showed considerable variability as the missing percentage increased. MI required significantly more computational resources, taking >8000 and >100 times longer than WMean and IPW, respectively, in the full data analysis. DISCUSSION AND CONCLUSION: The marginal performance advantages of MI for item non-response in highly reliable scales do not warrant its escalated cloud computational burden in All of Us, particularly when coupled with computationally demanding post-imputation analyses. Researchers using survey scales with low missingness could utilize WMean to reduce computing burden.
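A minimal sketch of the WMean (prorated score) approach for item non-response, with hypothetical item names and a toy minimum-items rule:

```python
import numpy as np
import pandas as pd

items = ["panes_1", "panes_2", "panes_3", "panes_4", "panes_5"]
df = pd.DataFrame({
    "panes_1": [4, 3, np.nan],
    "panes_2": [4, np.nan, np.nan],
    "panes_3": [3, 3, 2],
    "panes_4": [4, np.nan, np.nan],
    "panes_5": [2, 1, np.nan],
})

answered = df[items].notna().sum(axis=1)
wmean = df[items].mean(axis=1, skipna=True)   # mean over available items
wmean[answered == 0] = np.nan                 # no items answered -> score missing
print(pd.DataFrame({"answered": answered, "score": wmean}))
```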

14.
Nat Methods ; 7(4): 269-72, 2010 Apr.
Article in English | MEDLINE | ID: mdl-20354516

ABSTRACT

Stochastic profiling, a method to rank heterogeneity of gene expression in a cell population, shows that quantifying cell-to-cell variability has come of age and leads to biological insight.


Subject(s)
Cytological Techniques/methods, Eukaryotic Cells/physiology, Gene Expression Regulation/physiology, Stochastic Processes
15.
Biometrics ; 69(2): 405-16, 2013 Jun.
Article in English | MEDLINE | ID: mdl-23409789

ABSTRACT

The analysis of longitudinal trajectories usually focuses on evaluation of explanatory factors that are either associated with rates of change or with overall mean levels of a continuous outcome variable. In this article, we introduce valid design and analysis methods that permit outcome-dependent sampling of longitudinal data for scenarios where all outcome data currently exist, but a targeted substudy is being planned in order to collect additional key exposure information on a limited number of subjects. We propose stratified sampling based on specific summaries of individual longitudinal trajectories, and we detail an ascertainment-corrected maximum likelihood approach for estimation using the resulting biased sample of subjects. In addition, we demonstrate that the efficiency of an outcome-based sampling design relative to use of a simple random sample depends highly on the choice of outcome summary statistic used to direct sampling, and we show a natural link between the goals of the longitudinal regression model and corresponding desirable designs. Using data from the Childhood Asthma Management Program, where genetic information required retrospective ascertainment, we study a range of designs that examine lung function profiles over 4 years of follow-up for children classified according to their genotype for the IL-13 cytokine.


Subject(s)
Biometry/methods, Asthma/drug therapy, Asthma/physiopathology, Child, Statistical Data Interpretation, Forced Expiratory Volume, Humans, Likelihood Functions, Linear Models, Longitudinal Studies, Statistical Models
16.
bioRxiv ; 2023 Dec 13.
Article in English | MEDLINE | ID: mdl-38168311

ABSTRACT

Many recent studies have demonstrated the inflated type 1 error rate of the original Gaussian random field (GRF) methods for inference on neuroimages and have identified resampling (permutation and bootstrapping) methods with better performance. There has been no evaluation of resampling procedures for robust (sandwich) statistical images across the different topological features (TFs) used for neuroimaging inference. Here, we consider estimation of the distributions of TFs of a statistical image and evaluate resampling procedures that can be used when exchangeability is violated. We compare the methods using realistic simulations and study sex differences in life-span age-related changes in gray matter volume in the Nathan Kline Institute Rockland sample. We find that our proposed wild bootstrap and the commonly used permutation procedure perform well at sample sizes above 50 under realistic simulations with heteroskedasticity. The Rademacher wild bootstrap has fewer assumptions than the permutation procedure and performs similarly in samples of 100 or more, so it is valid in a broader range of conditions. We also evaluate the GRF-based pTFCE method and show that it has inflated error rates in samples of fewer than 200. Our R package, pbj, is available on GitHub and allows users to reproducibly implement various resampling-based group-level neuroimage analyses.
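A generic sketch of a Rademacher wild bootstrap for a regression slope under heteroskedasticity; this is only the scalar analogue of the image-level procedure implemented in pbj, with synthetic data:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n) * (1 + np.abs(x))   # heteroskedastic errors
X = np.column_stack([np.ones(n), x])

beta = np.linalg.lstsq(X, y, rcond=None)[0]
resid = y - X @ beta

boot = []
for _ in range(2000):
    s = rng.choice([-1.0, 1.0], size=n)              # Rademacher signs
    y_star = X @ beta + resid * s                    # wild-bootstrap response
    boot.append(np.linalg.lstsq(X, y_star, rcond=None)[0][1])
ci = np.percentile(boot, [2.5, 97.5])
print(f"slope = {beta[1]:.3f}, wild-bootstrap 95% CI = [{ci[0]:.3f}, {ci[1]:.3f}]")
```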

17.
PLoS One ; 18(5): e0285848, 2023.
Article in English | MEDLINE | ID: mdl-37200348

ABSTRACT

OBJECTIVE: The All of Us Research Program collects data from multiple information sources, including health surveys, to build a national longitudinal research repository that researchers can use to advance precision medicine. Missing survey responses pose challenges to study conclusions. We describe missingness in the All of Us baseline surveys. STUDY DESIGN AND SETTING: We extracted survey responses between May 31, 2017, and September 30, 2020. Missing percentages for groups historically underrepresented in biomedical research were compared with those for represented groups. Associations of missing percentages with age, health literacy score, and survey completion date were evaluated. We used negative binomial regression to evaluate the effect of participant characteristics on the number of missed questions out of the total eligible questions for each participant. RESULTS: The dataset analyzed contained data for 334,183 participants who submitted at least one baseline survey. Almost all (97.0%) participants completed all baseline surveys, and only 541 (0.2%) participants skipped all questions in at least one of the baseline surveys. The median skip rate was 5.0% of questions, with an interquartile range (IQR) of 2.5% to 7.9%. Historically underrepresented groups were associated with higher missingness (incidence rate ratio (IRR) [95% CI]: 1.26 [1.25, 1.27] for Black/African American compared with White). Missing percentages were similar by survey completion date, participant age, and health literacy score. Skipping specific questions was associated with higher missingness (IRRs [95% CI]: 1.39 [1.38, 1.40] for skipping income, 1.92 [1.89, 1.95] for skipping education, 2.19 [2.09-2.30] for skipping sexual and gender questions). CONCLUSION: Surveys in the All of Us Research Program will form an essential component of the data researchers can use to perform their analyses. Missingness was low in the All of Us baseline surveys, but group differences exist. Additional statistical methods and careful analysis of surveys could help mitigate challenges to the validity of conclusions.
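A sketch of the reported negative binomial regression step, assuming synthetic data and treating each participant's eligible question count as an offset so that coefficients exponentiate to incidence rate ratios (IRRs); the single covariate and fixed dispersion are illustrative choices:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(4)
n = 1000
group = rng.integers(0, 2, n)                        # 1 = comparison group
eligible = rng.integers(80, 120, n)                  # questions each person saw
mu = eligible * 0.05 * np.exp(0.25 * group)          # true IRR ~ exp(0.25)
skipped = rng.poisson(mu * rng.gamma(2.0, 0.5, n))   # overdispersed skip counts

X = sm.add_constant(group.astype(float))
fit = sm.GLM(skipped, X,
             family=sm.families.NegativeBinomial(alpha=1.0),
             offset=np.log(eligible)).fit()
print(f"IRR for group = {np.exp(fit.params[1]):.2f}")
```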


Subject(s)
Population Health, Humans, Surveys and Questionnaires, Health Surveys, Sexual Behavior
18.
J Theor Biol ; 311: 19-27, 2012 Oct 21.
Article in English | MEDLINE | ID: mdl-22796330

ABSTRACT

Cells grown in culture act as a model system for analyzing the effects of anticancer compounds, which may affect cell behavior in a cell cycle position-dependent manner. Cell synchronization techniques have been generally employed to minimize the variation in cell cycle position. However, synchronization techniques are cumbersome and imprecise and the agents used to synchronize the cells potentially have other unknown effects on the cells. An alternative approach is to determine the age structure in the population and account for the cell cycle positional effects post hoc. Here we provide a formalism to use quantifiable lifespans from live cell microscopy experiments to parameterize an age-structured model of cell population response.


Subject(s)
Antineoplastic Agents/pharmacology, Cell Cycle/drug effects, Cellular Senescence/drug effects, Biological Models, Neoplasms/metabolism, Antineoplastic Agents/therapeutic use, Humans, Neoplasms/drug therapy, Neoplasms/pathology
19.
mSphere ; 7(5): e0025722, 2022 Oct 26.
Article in English | MEDLINE | ID: mdl-36173112

ABSTRACT

Accurate, highly specific immunoassays for severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) are needed to evaluate seroprevalence. This study investigated the concordance of results across four immunoassays targeting different antigens for sera collected at the beginning of the SARS-CoV-2 pandemic in the United States. Specimens from All of Us participants contributed between January and March 2020 were tested using the Abbott Architect SARS-CoV-2 IgG (immunoglobulin G) assay (Abbott) and the EuroImmun SARS-CoV-2 enzyme-linked immunosorbent assay (ELISA) (EI). Participants with discordant results, participants with concordant positive results, and a subset of concordant negative results by Abbott and EI were also tested using the Roche Elecsys anti-SARS-CoV-2 (IgG) test (Roche) and the Ortho-Clinical Diagnostics Vitros anti-SARS-CoV-2 IgG test (Ortho). The agreement and 95% confidence intervals were estimated for paired assay combinations. SARS-CoV-2 antibody concentrations were quantified for specimens with at least two positive results across four immunoassays. Among the 24,079 participants, the percent agreement for the Abbott and EI assays was 98.8% (95% confidence interval, 98.7%, 99%). Of the 490 participants who were also tested by Ortho and Roche, the probability-weighted percentage of agreement (95% confidence interval) between Ortho and Roche was 98.4% (97.9%, 98.9%), that between EI and Ortho was 98.5% (92.9%, 99.9%), that between Abbott and Roche was 98.9% (90.3%, 100.0%), that between EI and Roche was 98.9% (98.6%, 100.0%), and that between Abbott and Ortho was 98.4% (91.2%, 100.0%). Among the 32 participants who were positive by at least 2 immunoassays, 21 had quantifiable anti-SARS-CoV-2 antibody concentrations by research assays. The results across immunoassays revealed concordance during a period of low prevalence. However, the frequency of false positivity during a period of low prevalence supports the use of two sequentially performed tests for unvaccinated individuals who are seropositive by the first test. IMPORTANCE What is the agreement of commercial SARS-CoV-2 immunoglobulin G (IgG) assays during a time of low coronavirus disease 2019 (COVID-19) prevalence and no vaccine availability? Serological tests produced concordant results in a time of low SARS-CoV-2 prevalence and no vaccine availability, driven largely by the proportion of samples that were negative by two immunoassays. The CDC recommends two sequential tests for positivity for future pandemic preparedness. In a subset analysis, quantified antinucleocapsid and antispike SARS-CoV-2 IgG antibodies do not suggest the need to specify the antigen targets of the sequential assays in the CDC's recommendation because false positivity varied as much between assays targeting the same antigen as it did between assays targeting different antigens.
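A small sketch of pairwise percent agreement between two qualitative assay results with a normal-approximation confidence interval; the call counts are invented for illustration, and the study's probability weighting is not reproduced:

```python
import numpy as np

def percent_agreement(a, b):
    """Fraction of paired calls that agree, with a Wald 95% CI."""
    a, b = np.asarray(a), np.asarray(b)
    agree = (a == b).mean()
    se = np.sqrt(agree * (1 - agree) / len(a))
    return agree, (agree - 1.96 * se, agree + 1.96 * se)

a = np.array([0] * 970 + [1] * 20 + [0] * 5 + [1] * 5)   # assay 1 calls
b = np.array([0] * 970 + [1] * 20 + [1] * 5 + [0] * 5)   # assay 2 calls
pa, ci = percent_agreement(a, b)
print(f"agreement = {pa:.3f}, 95% CI = ({ci[0]:.3f}, {ci[1]:.3f})")
```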


Subject(s)
COVID-19, Population Health, Humans, SARS-CoV-2, COVID-19/diagnosis, COVID-19/epidemiology, Prevalence, Seroepidemiologic Studies, Sensitivity and Specificity, Antiviral Antibodies, Immunoglobulin G
20.
Med Decis Making ; 41(4): 453-464, 2021 05.
Article in English | MEDLINE | ID: mdl-33733932

ABSTRACT

We discuss tradeoffs and errors associated with approaches to modeling health economic decisions. Through an application in pharmacogenomic (PGx) testing to guide drug selection for individuals with a genetic variant, we assessed model accuracy, optimal decisions, and computation time for an identical decision scenario modeled 4 ways: using 1) coupled-time differential equations (DEQ), 2) a cohort-based discrete-time state transition model (MARKOV), 3) an individual discrete-time state transition microsimulation model (MICROSIM), and 4) discrete event simulation (DES). Relative to DEQ, the net monetary benefit for PGx testing (v. a reference strategy of no testing) based on MARKOV with rate-to-probability conversions using commonly used formulas resulted in different optimal decisions. MARKOV was nearly identical to DEQ when transition probabilities were embedded using a transition intensity matrix. Among stochastic models, DES model outputs converged to DEQ with substantially fewer simulated patients (1 million) v. MICROSIM (1 billion). Overall, properly embedded Markov models provided the most favorable mix of accuracy and runtime but introduced additional complexity for calculating cost and quality-adjusted life year outcomes due to the inclusion of "jumpover" states after proper embedding of transition probabilities. Among stochastic models, DES offered the most favorable mix of accuracy, reliability, and speed.
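A brief sketch contrasting proper embedding of transition probabilities via a transition intensity matrix (matrix exponential) with the common per-rate 1 − exp(−rt) conversion; the states and rates are illustrative, not the study's model:

```python
import numpy as np
from scipy.linalg import expm

# Illustrative rates for a 3-state model (well, sick, dead).
rates = {"well->sick": 0.20, "well->dead": 0.02, "sick->dead": 0.30}
Q = np.array([
    [-(rates["well->sick"] + rates["well->dead"]), rates["well->sick"], rates["well->dead"]],
    [0.0, -rates["sick->dead"], rates["sick->dead"]],
    [0.0, 0.0, 0.0],
])

cycle = 1.0                                   # 1-year cycle length
P_embedded = expm(Q * cycle)                  # joint (properly embedded) conversion
P_naive = np.array([                          # common per-rate conversion
    [1 - (1 - np.exp(-0.20)) - (1 - np.exp(-0.02)), 1 - np.exp(-0.20), 1 - np.exp(-0.02)],
    [0.0, np.exp(-0.30), 1 - np.exp(-0.30)],
    [0.0, 0.0, 1.0],
])
# The embedded matrix routes some well->sick->dead transitions within a cycle,
# which the per-rate conversion cannot, so the two dead-state columns differ.
print(np.round(P_embedded, 4))
print(np.round(P_naive, 4))
```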


Subject(s)
Biomedical Technology, Decision Support Techniques, Cost-Benefit Analysis, Humans, Markov Chains, Policies, Reproducibility of Results