Results 1 - 20 of 6,476

1.
Am J Hum Genet ; 109(12): 2163-2177, 2022 12 01.
Article in English | MEDLINE | ID: mdl-36413997

ABSTRACT

Recommendations from the American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) for interpreting sequence variants specify the use of computational predictors as "supporting" level of evidence for pathogenicity or benignity using criteria PP3 and BP4, respectively. However, score intervals defined by tool developers, and ACMG/AMP recommendations that require the consensus of multiple predictors, lack quantitative support. Previously, we described a probabilistic framework that quantified the strengths of evidence (supporting, moderate, strong, very strong) within ACMG/AMP recommendations. We have extended this framework to computational predictors and introduce a new standard that converts a tool's scores to PP3 and BP4 evidence strengths. Our approach is based on estimating the local positive predictive value and can calibrate any computational tool or other continuous-scale evidence on any variant type. We estimate thresholds (score intervals) corresponding to each strength of evidence for pathogenicity and benignity for thirteen missense variant interpretation tools, using carefully assembled independent data sets. Most tools achieved supporting evidence level for both pathogenic and benign classification using newly established thresholds. Multiple tools reached score thresholds justifying moderate and several reached strong evidence levels. One tool reached very strong evidence level for benign classification on some variants. Based on these findings, we provide recommendations for evidence-based revisions of the PP3 and BP4 ACMG/AMP criteria using individual tools and future assessment of computational methods for clinical interpretation.
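The calibration approach described above rests on the local positive predictive value: the fraction of pathogenic variants among all variants scoring near a given value, reweighted to a chosen prior. The sketch below is a simplified illustration under assumed names and defaults (window width, prior), not the authors' published implementation:

```python
import numpy as np

def local_ppv(score, path_scores, benign_scores, window=0.05, prior=0.1):
    """Fraction of pathogenic variants among all variants whose predictor
    score lies within +/- `window` of `score`, with class counts reweighted
    so the effective prior probability of pathogenicity is `prior`."""
    n_p = np.sum(np.abs(path_scores - score) <= window)
    n_b = np.sum(np.abs(benign_scores - score) <= window)
    if n_p + n_b == 0:
        return float("nan")                     # no variants near this score
    w_p = prior / len(path_scores)              # per-variant weight, pathogenic set
    w_b = (1 - prior) / len(benign_scores)      # per-variant weight, benign set
    return (n_p * w_p) / (n_p * w_p + n_b * w_b)
```

Score intervals for each PP3/BP4 evidence strength would then be the regions where this local PPV clears the posterior probability implied by that strength.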


Subject(s)
Calibration, Humans, Consensus, Educational Status, Virulence
2.
Biostatistics ; 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38981039

ABSTRACT

The goal of radiation therapy for cancer is to deliver the prescribed radiation dose to the tumor while minimizing the dose to the surrounding healthy tissues. To evaluate treatment plans, the dose distribution to healthy organs is commonly summarized as dose-volume histograms (DVHs). Normal tissue complication probability (NTCP) modeling has centered around making patient-level risk predictions with features extracted from the DVHs, but few studies have considered adopting a causal framework to evaluate the safety of alternative treatment plans. We propose causal estimands for NTCP based on deterministic and stochastic interventions, as well as estimators based on marginal structural models that impose bivariable monotonicity between dose, volume, and toxicity risk. The properties of these estimators are studied through simulations, and their use is illustrated in the context of radiotherapy treatment of anal canal cancer patients.

3.
Biostatistics ; 2024 Jul 30.
Article in English | MEDLINE | ID: mdl-39078115

ABSTRACT

Micro-randomized trials are commonly conducted for optimizing mobile health interventions such as push notifications for behavior change. In analyzing such trials, causal excursion effects are often of primary interest, and their estimation typically involves inverse probability weighting (IPW). However, in a micro-randomized trial, additional treatments can often occur during the time window over which an outcome is defined, and this can greatly inflate the variance of the causal effect estimator because IPW would involve a product of numerous weights. To reduce variance and improve estimation efficiency, we propose two new estimators using a modified version of IPW, which we call "per-decision IPW." The second estimator further improves efficiency using the projection idea from the semiparametric efficiency theory. These estimators are applicable when the outcome is binary and can be expressed as the maximum of a series of sub-outcomes defined over sub-intervals of time. We establish the estimators' consistency and asymptotic normality. Through simulation studies and real data applications, we demonstrate substantial efficiency improvement of the proposed estimator over existing estimators. The new estimators can be used to improve the precision of primary and secondary analyses for micro-randomized trials with binary outcomes.
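The variance problem motivating the per-decision idea can be illustrated with a toy calculation. The function names and the bare product form below are our own illustrative assumptions, not the authors' full estimator: the point is only that a single window-level weight is a product over every decision point, while per-decision weights for early sub-outcomes involve fewer factors and are therefore smaller.

```python
import numpy as np

def standard_ipw_weight(probs):
    """One weight for the whole outcome window: the product of inverse
    randomization probabilities over every decision point, which grows
    (and inflates variance) as the window lengthens."""
    return np.prod(1.0 / probs)

def per_decision_weights(probs):
    """One weight per sub-interval: only treatments delivered up to and
    including that sub-interval enter the product, so early sub-outcomes
    receive much smaller weights."""
    return np.cumprod(1.0 / probs)

p = np.array([0.5, 0.5, 0.5])       # randomization probs at 3 decision points
w_window = standard_ipw_weight(p)   # 8.0
w_per = per_decision_weights(p)     # array([2., 4., 8.])
```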

4.
Brief Bioinform ; 24(1)2023 01 19.
Article in English | MEDLINE | ID: mdl-36642411

ABSTRACT

Accurately predicting the interaction modes of metalloproteins remains extremely challenging in structure-based drug design and in mechanistic analysis of enzymatic catalysis, owing to the complexity of metal coordination in metalloproteins. Here, we report a docking method for metalloproteins based on geometric probability (GPDOCK) with unprecedented accuracy. Docking tests of 10 common metal ions with 9360 metalloprotein-ligand complexes demonstrate that GPDOCK predicts the binding pose with 94.3% accuracy. Moreover, it can accurately dock metalloproteins with ligands even when one or two water molecules are engaged in the metal ion coordination. Since GPDOCK depends only on the three-dimensional structures of the metalloprotein and ligand, a structure-based machine learning model is employed for scoring binding poses, which significantly improves computational efficiency. The proposed docking strategy can be an effective and efficient tool for drug design and for further study of the binding mechanisms of metalloproteins. The manual for GPDOCK and the code for the logistic regression model used to re-rank the docking results are available at https://github.com/wangkai-zhku/GPDOCK.git.


Subject(s)
Metalloproteins, Metalloproteins/chemistry, Metalloproteins/metabolism, Protein Binding, Ligands, Machine Learning, Catalysis, Molecular Docking Simulation, Binding Sites
5.
Methods ; 231: 15-25, 2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39218170

ABSTRACT

Predicting drug-target interactions (DTI) is a crucial stage in drug discovery and development. Understanding the interaction between drugs and targets is essential for pinpointing the specific relationship between drug molecules and targets, akin to solving a link prediction problem using information technology. While knowledge graph (KG) and knowledge graph embedding (KGE) methods have advanced rapidly and demonstrated impressive performance in drug discovery, they often lack authenticity and accuracy in identifying DTI, which leads to increased misjudgment rates and reduced efficiency in drug development. To address these challenges, we focus on refining the accuracy of DTI prediction models through KGE, with a specific emphasis on causal intervention confidence measures (CI). These measures assess triplet scores, enhancing the precision of the predictions. Comparative experiments conducted on three datasets with 9 KGE models reveal that our proposed confidence measure approach via causal intervention significantly improves the accuracy of DTI link prediction compared to traditional approaches. Furthermore, our experimental analysis delves deeper into the embedding of intervention values, offering valuable insights for guiding the design and development of subsequent drug development experiments. As a result, our predicted outcomes can serve as valuable guidance in the pursuit of more efficient drug development processes.

6.
Mol Cell Proteomics ; 22(3): 100509, 2023 03.
Article in English | MEDLINE | ID: mdl-36791992

ABSTRACT

Lysosomes, the main degradative organelles of mammalian cells, play a key role in the regulation of metabolism. It is becoming more and more apparent that they are highly active, diverse, and involved in a large variety of processes. The essential role of lysosomes is exemplified by the detrimental consequences of their malfunction, which can result in lysosomal storage disorders, neurodegenerative diseases, and cancer. Using lysosome enrichment and mass spectrometry, we investigated the lysosomal proteomes of HEK293, HeLa, HuH-7, SH-SY5Y, MEF, and NIH3T3 cells. We provide evidence on a large scale for cell type-specific differences of lysosomes, showing that levels of distinct lysosomal proteins are highly variable within one cell type, while expression of others is highly conserved across several cell lines. Using differentially stable isotope-labeled cells and bimodal distribution analysis, we furthermore identify a high confidence population of lysosomal proteins for each cell line. Multi-cell line correlation of these data reveals potential novel lysosomal proteins, and we confirm lysosomal localization for six candidates. All data are available via ProteomeXchange with identifier PXD020600.


Subject(s)
Neuroblastoma, Proteome, Mice, Animals, Humans, Proteome/metabolism, HEK293 Cells, NIH 3T3 Cells, Neuroblastoma/metabolism, Lysosomes/metabolism, Mammals/metabolism
7.
Eur Heart J ; 2024 Sep 01.
Article in English | MEDLINE | ID: mdl-39217444

ABSTRACT

BACKGROUND AND AIMS: Overtesting of low-risk patients with suspected chronic coronary syndrome (CCS) is widespread. The acoustic-based coronary artery disease (CAD) score has superior rule-out capabilities when added to pre-test probability (PTP). FILTER-SCAD tested whether providing a CAD score and PTP to cardiologists was superior to PTP alone in limiting testing. METHODS: At six Danish and Swedish outpatient clinics, patients with suspected new-onset CCS were randomised to either standard diagnostic examination (SDE) with PTP, or SDE plus CAD score, and cardiologists were provided with the corresponding recommended diagnostic flowcharts. The primary endpoint was the cumulative number of diagnostic tests at one year, and the key safety endpoint was major adverse cardiac events (MACE). RESULTS: In total, 2008 patients (46% male, median age 63 years) were randomised from October 2019 to September 2022. Among patients randomised to the CAD score arm (n=1002), the score was successfully measured in 94.5%. Overall, 13.5% had PTP ≤5%, and 39.5% had CAD score ≤20. Testing was deferred in 22%, with no difference in diagnostic tests between groups (p for superiority = 0.56). In the PTP ≤5% subgroup, the proportion with deferred testing increased from 28% to 52% (p<0.001). Overall MACE was 2.4 per 100 person-years. Non-inferiority regarding safety was established, with an absolute risk difference of 0.4% (95% CI -1.85 to 1.06) (p for non-inferiority = 0.005). No differences were seen in angina-related health status or quality of life. CONCLUSIONS: The implementation strategy of providing cardiologists with a CAD score alongside SDE did not reduce testing overall but indicated a possible role in patients with low CCS likelihood. Further strategies are warranted to address resistance to modifying diagnostic pathways in this patient population.

8.
Nano Lett ; 24(35): 11116-11123, 2024 Sep 04.
Article in English | MEDLINE | ID: mdl-39116042

ABSTRACT

Single-molecule surface-enhanced Raman spectroscopy (SM-SERS) holds great potential to revolutionize ultratrace quantitative analysis. However, achieving quantitative SM-SERS is challenging because of strong intensity fluctuation and blinking characteristics. In this study, we reveal the relation P = 1 - e^(-α) between the statistical SERS probability P and the microscopic average molecule number α in SERS spectra, which lays the physical foundation for a statistical route to SM-SERS quantitation. Utilizing SERS probability calibration, we achieve quantitative SERS analysis with batch-to-batch robustness, an extremely wide detection range of concentration covering 9 orders of magnitude, and an ultralow detection limit far below the single-molecule level. These results demonstrate the physical feasibility of robust SERS quantitation through a statistical route and open a new avenue for implementing SERS as a practical analysis tool in various application scenarios.
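The stated relation P = 1 - e^(-α) can be inverted to recover the average molecule number from an observed signal probability; since α scales with concentration, this inversion underlies the calibration curve. The function below is a minimal sketch of that inversion (the names are ours, not from the paper):

```python
import math

def average_molecule_number(p_sers):
    """Invert P = 1 - exp(-alpha): recover the mean number of analyte
    molecules per probed volume from the fraction of spectra that show
    a SERS signal."""
    if not 0.0 <= p_sers < 1.0:
        raise ValueError("P must lie in [0, 1)")
    return -math.log(1.0 - p_sers)

# alpha is proportional to concentration, so ratios of recovered alpha
# values estimate ratios of concentration between samples.
alpha = average_molecule_number(1 - math.exp(-2.0))   # recovers 2.0
```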

9.
J Neurosci ; 43(25): 4650-4663, 2023 06 21.
Article in English | MEDLINE | ID: mdl-37208178

ABSTRACT

An important open question in neuroeconomics is how the brain represents the value of offers in a way that is both abstract (allowing for comparison) and concrete (preserving the details of the factors that influence value). Here, we examine neuronal responses to risky and safe options in five brain regions that putatively encode value in male macaques. Surprisingly, we find no detectable overlap in the neural codes used for risky and safe options in any of the regions, even when the options have identical subjective values (as revealed by preference). Indeed, responses are weakly correlated and occupy distinct (semi-orthogonal) encoding subspaces. Notably, however, these subspaces are linked through a linear transform of their constituent encodings, a property that allows for comparison of dissimilar option types. This encoding scheme allows these regions to multiplex decision-related processes: they can encode the detailed factors that influence offer value (here, risk and safety) but also directly compare dissimilar offer types. Together these results suggest a neuronal basis for the qualitatively different psychological properties of risky and safe options and highlight the power of population geometry to resolve outstanding problems in neural coding. SIGNIFICANCE STATEMENT: To make economic choices, we must have some mechanism for comparing dissimilar offers. We propose that the brain uses distinct neural codes for risky and safe offers, but that these codes are linearly transformable. This encoding scheme has the dual advantage of allowing for comparison across offer types while preserving information about offer type, which in turn allows for flexibility in changing circumstances. We show that responses to risky and safe offers exhibit these predicted properties in five different reward-sensitive regions. Together, these results highlight the power of population coding principles for solving representation problems in economic choice.


Subject(s)
Choice Behavior, Neurons, Male, Animals, Choice Behavior/physiology, Neurons/physiology, Reward, Brain, Problem Solving, Decision Making/physiology, Prefrontal Cortex/physiology
10.
BMC Bioinformatics ; 25(1): 123, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38515011

ABSTRACT

BACKGROUND: Chromosomes are among the most fundamental structures in cell biology, where DNA holds hierarchically organized information. DNA compacts itself by forming loops, and these regions house various proteins, including CTCF, SMC3, and histone H3. Numerous sequencing methods, such as Hi-C, ChIP-seq, and Micro-C, have been developed to investigate these properties. Utilizing these data, scientists have developed a variety of loop prediction techniques that have greatly improved the characterization of chromatin loops and related features. RESULTS: In this study, we categorized 22 loop calling methods and conducted a comprehensive study of 11 of them. Additionally, we have provided detailed insights into the methodologies underlying these algorithms for loop detection, categorizing them into five distinct groups based on their fundamental approaches. Furthermore, we have included critical information such as resolution, input and output formats, and parameters. For this analysis, we utilized the GM12878 Hi-C datasets at 5 KB, 10 KB, 100 KB and 250 KB resolutions. Our evaluation criteria encompassed various factors, including memory usage, running time, sequencing depth, and recovery of protein-specific sites such as CTCF, H3K27ac, and RNAPII. CONCLUSION: This analysis offers insights into the loop detection process of each method, along with the strengths and weaknesses of each, enabling readers to effectively choose suitable methods for their datasets. We evaluate the capabilities of these tools and introduce a novel Biological, Consistency, and Computational robustness score (BCC score) to measure their overall robustness, ensuring a comprehensive evaluation of their performance.


Subject(s)
Chromatin, Chromosomes, Chromatin/genetics, DNA, Chromatin Immunoprecipitation Sequencing, Algorithms
11.
Clin Infect Dis ; 78(1): 164-171, 2024 01 25.
Article in English | MEDLINE | ID: mdl-37773767

ABSTRACT

BACKGROUND: Quantification of recurrence risk following successful treatment is crucial to evaluating regimens for multidrug- or rifampicin-resistant (MDR/RR) tuberculosis (TB). However, such analyses are complicated when some patients die or become lost during post-treatment follow-up. METHODS: We analyzed data on 1991 patients who successfully completed a longer MDR/RR-TB regimen containing bedaquiline and/or delamanid between 2015 and 2018 in 16 countries. Using 5 approaches for handling post-treatment deaths, we estimated 6-month post-treatment TB recurrence risk overall and by HIV status. We used inverse-probability weighting to account for patients with missing follow-up and investigated the impact of potential bias from excluding these patients without applying inverse-probability weights. RESULTS: The estimated TB recurrence risk was 7.4/1000 (95% credible interval: 3.3-12.8) when deaths were handled as non-recurrences and 7.6/1000 (3.3-13.0) when deaths were censored and inverse-probability weights were applied to account for the excluded deaths. The estimated risks of composite recurrence outcomes were 25.5 (15.3-38.1), 11.7 (6.4-18.2), and 8.6 (4.1-14.4) per 1000 for recurrence or (1) any death, (2) death with unknown or TB-related cause, or (3) TB-related death, respectively. Corresponding relative risks for HIV status varied in direction and magnitude. Exclusion of patients with missing follow-up without inverse-probability weighting had a small impact on estimates. CONCLUSIONS: The estimated 6-month TB recurrence risk was low, and the association with HIV status was inconclusive due to few recurrence events. Estimation of post-treatment recurrence will be enhanced by explicit assumptions about deaths and appropriate adjustment for missing follow-up data.
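The inverse-probability-weighting idea used above for missing follow-up can be sketched in a few lines: each patient who completed follow-up is upweighted by the inverse of their modelled probability of being observed, so that observed patients also stand in for those lost. The toy numbers and function name below are illustrative assumptions, not the study's data or exact estimator:

```python
import numpy as np

def ipw_risk(outcome, observed, p_observed):
    """Inverse-probability-weighted recurrence risk: each patient with
    complete follow-up stands in for 1 / P(observed) similar patients;
    patients lost to follow-up get weight zero."""
    w = observed / p_observed
    return np.sum(w * outcome) / np.sum(w)

# Toy cohort: 3 followed-up patients (one recurrence) and 1 lost patient,
# each with a 75% modelled probability of completing follow-up.
y   = np.array([1, 0, 0, 0])        # recurrence (value irrelevant if lost)
obs = np.array([1, 1, 1, 0])        # completed follow-up?
p   = np.array([0.75, 0.75, 0.75, 0.75])
risk = ipw_risk(y, obs, p)          # 1/3
```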


Subject(s)
HIV Infections, Multidrug-Resistant Tuberculosis, Humans, Antitubercular Agents/therapeutic use, Follow-Up Studies, HIV, Treatment Outcome, Multidrug-Resistant Tuberculosis/drug therapy, Multidrug-Resistant Tuberculosis/epidemiology, HIV Infections/complications, HIV Infections/drug therapy, HIV Infections/epidemiology
12.
Clin Infect Dis ; 2024 Sep 20.
Article in English | MEDLINE | ID: mdl-39302162

ABSTRACT

BACKGROUND: Treatment guidelines were developed early in the pandemic, when much about COVID-19 was unknown. Given the evolution of SARS-CoV-2, real-world data can provide clinicians with updated information. The objective of this analysis was to assess mortality risk in patients hospitalized for COVID-19 during the Omicron period receiving remdesivir+dexamethasone versus dexamethasone alone. METHODS: A large, multicenter US hospital database was used to identify hospitalized adult patients with a primary discharge diagnosis of COVID-19 flagged as "present on admission" who were treated with remdesivir+dexamethasone or dexamethasone alone from December 2021 to April 2023. Patients were matched 1:1 using propensity score matching and stratified by baseline oxygen requirements. A Cox proportional hazards model was used to assess time to 14- and 28-day in-hospital all-cause mortality. RESULTS: A total of 33,037 patients were matched, with most patients ≥65 years old (72%), White (78%), and non-Hispanic (84%). Remdesivir+dexamethasone was associated with lower mortality risk versus dexamethasone alone across all baseline oxygen requirements at 14 days (no supplemental oxygen charges: adjusted hazard ratio [95% CI] 0.79 [0.72-0.87]; low-flow oxygen: 0.70 [0.64-0.77]; high-flow oxygen/non-invasive ventilation: 0.69 [0.62-0.76]; invasive mechanical ventilation/extracorporeal membrane oxygenation (IMV/ECMO): 0.78 [0.64-0.94]), with similar results at 28 days. CONCLUSIONS: Remdesivir+dexamethasone was associated with a significant reduction in 14- and 28-day mortality compared to dexamethasone alone in patients hospitalized for COVID-19 across all levels of baseline respiratory support, including IMV/ECMO. However, remdesivir+dexamethasone still has low uptake in clinical practice. In addition, these data suggest a need to update the existing guidelines.
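The 1:1 propensity score matching step used above can be sketched with a greedy nearest-neighbour matcher. The caliper value and greedy strategy below are generic illustrative assumptions, not the study's exact matching procedure:

```python
def greedy_match(ps_treated, ps_control, caliper=0.05):
    """Greedy 1:1 nearest-neighbour matching on the propensity score;
    each control patient is used at most once, and a pair is kept only
    if the two scores differ by no more than the caliper."""
    available = list(range(len(ps_control)))
    pairs = []
    for i, p in enumerate(ps_treated):
        if not available:
            break
        j = min(available, key=lambda k: abs(ps_control[k] - p))
        if abs(ps_control[j] - p) <= caliper:
            pairs.append((i, j))
            available.remove(j)
    return pairs

# toy scores: both treated patients find a close control; the distant
# control (0.90) is left unmatched
pairs = greedy_match([0.30, 0.60], [0.31, 0.59, 0.90])
```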

13.
BMC Genomics ; 25(1): 300, 2024 Mar 21.
Article in English | MEDLINE | ID: mdl-38515040

ABSTRACT

BACKGROUND: The Assay for Transposase-Accessible Chromatin using sequencing (ATAC-seq) utilizes the transposase Tn5 to probe open chromatin, simultaneously revealing multiple transcription factor binding sites (TFBSs) compared to traditional technologies. Deep learning (DL) technology, including convolutional neural networks (CNNs), has successfully found motifs from ATAC-seq data. Due to the limited width of convolutional kernels, however, existing models only find motifs of fixed lengths. A graph neural network (GNN) can work on non-Euclidean data and thus has the potential to find ATAC-seq motifs of different lengths. However, existing GNN models ignore the relationships among ATAC-seq sequences, and their parameter settings need improvement. RESULTS: In this study, we propose a novel GNN model named GNNMF to find ATAC-seq motifs via GNN and background coexisting probability. Our experiments on 200 human datasets and 80 mouse datasets demonstrated that GNNMF improved the area of the eight-metric radar score by 4.92% and 6.81%, respectively, and found more motifs than the existing models. CONCLUSIONS: In this study, we developed a novel model named GNNMF for finding multiple ATAC-seq motifs. GNNMF builds a multi-view heterogeneous graph from ATAC-seq sequences, and utilizes background coexisting probability and the iterloss to find ATAC-seq motifs of different lengths and to optimize the parameter sets. Compared to existing models, GNNMF achieved the best performance on TFBS prediction and ATAC-seq motif finding, demonstrating that our improvements are effective for ATAC-seq motif finding.


Subject(s)
Chromatin Immunoprecipitation Sequencing, High-Throughput Nucleotide Sequencing, Humans, Animals, Mice, DNA Sequence Analysis, Chromatin/genetics, Neural Networks (Computer)
14.
Neuroimage ; 294: 120631, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38701993

ABSTRACT

INTRODUCTION: Spatial normalization is a prerequisite step for the quantitative analysis of SPECT or PET brain images using volume-of-interest (VOI) template or voxel-based analysis. MRI-guided spatial normalization is the gold standard, but the wide use of PET/CT or SPECT/CT in routine clinical practice makes CT-guided spatial normalization a necessary alternative. Ventricular enlargement is observed with aging, and it hampers the spatial normalization of the lateral ventricles and striatal regions, limiting their analysis. The aim of the present study was to propose a robust spatial normalization method based on CT scans that takes into account features of the aging brain to reduce bias in the CT-guided striatal analysis of SPECT images. METHODS: We propose an enhanced CT-guided spatial normalization pipeline based on SPM12. Performance of the proposed pipeline was assessed on visually normal [123I]-FP-CIT SPECT/CT images. SPM12 default CT-guided spatial normalization was used as reference method. The metrics assessed were the overlap between the spatially normalized lateral ventricles and caudate/putamen VOIs, and the computation of caudate and putamen specific binding ratios (SBR). RESULTS: In total 231 subjects (mean age ± SD = 61.9 ± 15.5 years) were included in the statistical analysis. The mean overlap between the spatially normalized lateral ventricles of subjects and the caudate VOI and the mean SBR of caudate were respectively 38.40 % (± SD = 19.48 %) of the VOI and 1.77 (± 0.79) when performing SPM12 default spatial normalization. The mean overlap decreased to 9.13 % (± SD = 1.41 %, P < 0.001) of the VOI and the SBR of caudate increased to 2.38 (± 0.51, P < 0.0001) when performing the proposed pipeline. Spatially normalized lateral ventricles did not overlap with putamen VOI using either method. The mean putamen SBR value derived from the proposed spatial normalization (2.75 ± 0.54) was not significantly different from that derived from the default SPM12 spatial normalization (2.83 ± 0.52, P > 0.05). CONCLUSION: The automatic CT-guided spatial normalization used herein led to a less biased spatial normalization of SPECT images, hence an improved semi-quantitative analysis. The proposed pipeline could be implemented in clinical routine to perform a more robust SBR computation using hybrid imaging.


Subject(s)
Corpus Striatum, Humans, Male, Female, Middle Aged, Aged, Adult, Corpus Striatum/diagnostic imaging, Corpus Striatum/metabolism, X-Ray Computed Tomography/methods, X-Ray Computed Tomography/standards, Single-Photon Emission Computed Tomography/methods, Cerebral Ventricles/diagnostic imaging, Cerebral Ventricles/metabolism, Computer-Assisted Image Processing/methods, Tropanes
15.
Mol Biol Evol ; 40(4)2023 04 04.
Article in English | MEDLINE | ID: mdl-37011142

ABSTRACT

New protein coding genes can emerge from genomic regions that previously did not contain any genes, via a process called de novo gene emergence. To synthesize a protein, DNA must be transcribed as well as translated. Both processes need certain DNA sequence features. Stable transcription requires promoters and a polyadenylation signal, while translation requires at least an open reading frame. We develop mathematical models based on mutation probabilities, and the assumption of neutral evolution, to find out how quickly genes emerge and are lost. We also investigate the effect of the order by which DNA features evolve, and if sequence composition is biased by mutation rate. We rationalize how genes are lost much more rapidly than they emerge, and how they preferentially arise in regions that are already transcribed. Our study not only answers some fundamental questions on the topic of de novo emergence but also provides a modeling framework for future studies.


Subject(s)
Molecular Evolution, Genomics, Mutation, Open Reading Frames, Genome
16.
Cancer Sci ; 2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39278260

ABSTRACT

Locoregional recurrence of non-small-cell lung cancer (NSCLC) after complete resection lacks standard treatment. Durvalumab after chemoradiotherapy (CRT) or CRT alone is often selected in daily clinical practice for patients with locoregional recurrence; however, the therapeutic efficacy of these treatments remains unclear, and we aimed to assess this. This retrospective observational study used data from patients with NSCLC diagnosed with locoregional recurrence after complete resection who subsequently underwent concurrent CRT followed by durvalumab (CRT-D group) or CRT alone (CRT group). We employed propensity score analysis with inverse probability treatment weighting (IPTW) to adjust for various confounders and evaluate efficacy in the CRT-D group. After IPTW adjustment, the CRT-D group contained 119 patients (64.7% male; 69.7% adenocarcinoma), and the CRT group contained 111 patients (60.5% male; 73.4% adenocarcinoma). Their mean ages were 66 and 65 years, respectively. The IPTW-adjusted median progression-free survival was 25.4 and 11.5 months for the CRT-D and CRT groups, respectively (hazard ratio, 0.44; 95% confidence interval, 0.30-0.64); the median overall survival was not reached in either group favoring CRT-D (hazard ratio, 0.49; 95% confidence interval, 0.24-0.99). Grade 3 or 4 adverse events were observed in 48.8% of patients during CRT, 10.7% after initiating durvalumab maintenance therapy in the CRT-D group, and 57.3% in the CRT group. Overall, the sequential approach of CRT followed by durvalumab is a promising treatment strategy for locoregional recurrence of NSCLC after complete resection.

17.
Am J Epidemiol ; 193(2): 389-403, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-37830395

ABSTRACT

Understanding the characteristics of patients whose propensity scores fall in the tails of the propensity score (PS) distribution is relevant to inverse-probability-of-treatment-weighted and PS-based estimation in observational studies. Here we outline a method for identifying the variables most responsible for extreme propensity scores. The approach is illustrated in 3 scenarios: 1) a plasmode simulation of adult patients in the National Ambulatory Medical Care Survey (2011-2015), 2) timing of dexamethasone initiation, and 3) timing of remdesivir initiation in patients hospitalized for coronavirus disease 2019 from February 2020 through January 2021. PS models were fitted using relevant baseline covariates, and tails of the PS distribution were defined using asymmetric first and 99th percentiles. After fitting the PS model in each original data set, values of each key covariate were permuted and model-agnostic variable importance measures were examined. Visualization and variable importance techniques were helpful in identifying the variables most responsible for extreme propensity scores and may help identify individual characteristics that make patients inappropriate for inclusion in a study (e.g., off-label use). Subsetting or restricting the study sample based on variables identified using this approach may help investigators avoid the need for trimming or overlap weights.
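The permutation step described above can be sketched as follows: permute one covariate at a time, re-score with the already-fitted PS model, and measure how far the predicted scores of tail patients move. The simulated data, tail definition (lowest decile), and function names are our own illustrative assumptions:

```python
import numpy as np

def tail_importance(predict_ps, X, tail_idx, rng):
    """Permute one covariate at a time and measure how much predicted
    propensity scores of tail patients shift: covariates whose permutation
    moves tail scores the most are driving the extreme values."""
    base = predict_ps(X)[tail_idx]
    imp = {}
    for j in range(X.shape[1]):
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])
        imp[j] = float(np.mean(np.abs(predict_ps(Xp)[tail_idx] - base)))
    return imp

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 2))
ps = lambda Z: 1 / (1 + np.exp(-3 * Z[:, 0]))   # only covariate 0 matters
tails = np.argsort(ps(X))[:20]                  # lowest-PS decile as the tail
imp = tail_importance(ps, X, tails, rng)        # covariate 0 dominates
```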


Subject(s)
Propensity Score, Humans, Computer Simulation
18.
Am J Epidemiol ; 2024 Jul 16.
Article in English | MEDLINE | ID: mdl-39010753

ABSTRACT

Etiologic heterogeneity occurs when distinct sets of events or exposures give rise to different subtypes of disease. Inference about subtype-specific exposure effects from two-phase outcome-dependent sampling data requires adjustment for both confounding and the sampling design. Common approaches to inference for these effects do not necessarily appropriately adjust for these sources of bias, or allow for formal comparisons of effects across different subtypes. Herein, using inverse probability weighting (IPW) to fit a multinomial model is shown to yield valid inference with this sampling design for subtype-specific exposure effects and contrasts thereof. The IPW approach is compared to common regression-based methods for assessing exposure effect heterogeneity using simulations. The methods are applied to estimate subtype-specific effects of various exposures on breast cancer risk in the Carolina Breast Cancer Study.

19.
Am J Epidemiol ; 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39168831

ABSTRACT

This study investigated the effectiveness of quitline service intensity (high vs. low) on past 30-day tobacco abstinence at 7-month follow-up, using observational data from the Oklahoma Tobacco Helpline (OTH) between April 2020 and December 2021. To assess the impact of loss to follow-up and non-random treatment assignment, we fit the parameters of a marginal structural model to estimate inverse probability weights for censoring (IPCW), treatment (IPTW), and both combined (IPCTW). The risk ratio (RR) was estimated using modified Poisson regression with a robust variance estimator. Of the 4,695 individuals included in the study, 64% received high-intensity cessation services, and 53% were lost to follow-up. Using the conventional complete case analysis (responders only), high-intensity cessation services were associated with abstinence (RR=1.18; 95% CI: 1.04, 1.34). The effect estimate was attenuated after accounting for censoring (RR=1.14; 95% CI: 1.00, 1.30). After adjusting for both baseline confounding and selection bias via IPCTW, high-intensity cessation services were associated with 1.23 times (95% CI: 1.08, 1.41) the probability of abstinence compared to low-intensity services. Despite relatively high loss to follow-up, accounting for selection bias and confounding did not notably impact quit rates or the relationship between intensity of quitline services and tobacco cessation among OTH participants.
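The combined-weight construction above (treatment weight times censoring weight) can be sketched in a few lines. The toy data and the use of simple weighted means in place of the study's modified Poisson regression are illustrative assumptions:

```python
import numpy as np

def combined_weights(treated, p_treat, responded, p_respond):
    """IPTW times IPCW: weight by the inverse probability of the treatment
    actually received and the inverse probability of completing follow-up;
    non-responders get weight zero."""
    iptw = treated / p_treat + (1 - treated) / (1 - p_treat)
    ipcw = responded / p_respond
    return iptw * ipcw

def weighted_risk_ratio(y, treated, w):
    """Weighted abstinence risk in each arm, then their ratio."""
    r1 = np.sum(w * treated * y) / np.sum(w * treated)
    r0 = np.sum(w * (1 - treated) * y) / np.sum(w * (1 - treated))
    return r1 / r0

# Toy data: equal 50% treatment probability and full response, so the
# weighted estimate reduces to the crude risk ratio.
t = np.array([1, 1, 0, 0])
y = np.array([1, 0, 1, 0])
w = combined_weights(t, np.full(4, 0.5), np.ones(4), np.ones(4))
rr = weighted_risk_ratio(y, t, w)   # 1.0
```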

20.
Am J Epidemiol ; 2024 Aug 21.
Article in English | MEDLINE | ID: mdl-39168837

ABSTRACT

Radon is a known cause of lung cancer. Protective standards for radon exposure are derived largely from studies of working populations that are prone to healthy worker survivor bias. This bias can lead to under-protection of workers and is a key barrier to understanding the health effects of many exposures. We apply inverse probability weighting to study a set of hypothetical exposure limits among 4,137 male White and American Indian radon-exposed uranium miners in the Colorado Plateau followed from 1950 to 2005. We estimate the cumulative risk of lung cancer through age 90 under hypothetical occupational limits. We estimate that earlier implementation of the current US Mine Safety and Health Administration annual standard of 4 working level months (implemented here as a monthly exposure limit) could have reduced lung cancer mortality from 16/100 workers to 6/100 workers (95% confidence interval: 3/100, 8/100), in contrast with previous estimates of 10/100 workers. Our estimate is similar to those among contemporaneous occupational cohorts. Inverse probability weighting is a simple and computationally efficient way to address healthy worker survivor bias, in order to contrast the health effects of exposure limits and estimate the number of excess health outcomes under workplace exposure limits.
