ABSTRACT
Advances in DNA sequencing technologies have enabled genotyping of complex genetic regions exhibiting copy number variation and high allelic diversity, yet exact genotypes cannot always be derived, resulting in ambiguous genotype calls, that is, partially missing data. One example of such a region is the killer-cell immunoglobulin-like receptor (KIR) gene cluster. These genes are of special interest in the context of allogeneic hematopoietic stem cell transplantation. For such complex gene regions, current haplotype reconstruction methods are not feasible because they cannot cope with the complexity of the data. We present an expectation-maximization (EM) algorithm to estimate haplotype frequencies (HTFs) that deals with the missing-data components and takes into account linkage disequilibrium (LD) between genes. To cope with the exponential increase in the number of haplotypes as genes are added, we extend a standard EM-algorithm implementation with three components. First, reconstruction is performed iteratively, adding one gene at a time. Second, after each step, haplotypes with frequencies below a threshold are collapsed into a rare-haplotype group. Third, the HTF of the rare-haplotype group is profiled in subsequent iterations to improve estimates. A simulation study evaluates the effect of combining information from multiple genes on the estimates of these frequencies. We show that the estimated HTFs are approximately unbiased. Our simulation study shows that the EM algorithm can combine information from multiple genes when LD is high, whereas increased ambiguity levels increase bias. Linear regression models based on this EM algorithm show that a large number of haplotypes can be problematic for unbiased effect-size estimation and that models need to be sparse. In a real-data analysis of KIR genotypes, we compare HTFs to those obtained in an independent study.
Our new EM-algorithm-based method is the first to account for the full genetic architecture of complex gene regions, such as the KIR gene region. This algorithm can handle the numerous observed ambiguities, and allows for the collapsing of haplotypes to perform implicit dimension reduction. Combining information from multiple genes improves haplotype reconstruction.
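The E- and M-steps described above can be sketched in a toy setting with two biallelic loci, where only double heterozygotes are phase-ambiguous. This is an illustrative sketch, not the paper's implementation: the iterative addition of genes, rare-haplotype collapsing, and profiling are omitted, and the function name is ours.

```python
def em_haplotype_freqs(resolved, n_ambiguous, iters=200):
    """Toy EM for haplotype frequencies at two biallelic loci (A/a, B/b).

    resolved: dict of directly observed haplotype counts.
    n_ambiguous: number of double-heterozygous (AaBb) individuals whose
    phase is unknown: each carries either AB + ab (cis) or Ab + aB (trans).
    """
    haps = ('AB', 'Ab', 'aB', 'ab')
    freq = {h: 0.25 for h in haps}
    total = sum(resolved.values()) + 2 * n_ambiguous  # total haplotype count
    for _ in range(iters):
        # E-step: posterior probability that an ambiguous individual
        # has cis phase, under the current frequency estimates
        p_cis = freq['AB'] * freq['ab']
        p_trans = freq['Ab'] * freq['aB']
        w = p_cis / (p_cis + p_trans) if p_cis + p_trans > 0 else 0.5
        # M-step: expected haplotype counts -> updated frequencies
        exp_counts = {h: resolved.get(h, 0) for h in haps}
        exp_counts['AB'] += n_ambiguous * w
        exp_counts['ab'] += n_ambiguous * w
        exp_counts['Ab'] += n_ambiguous * (1 - w)
        exp_counts['aB'] += n_ambiguous * (1 - w)
        freq = {h: exp_counts[h] / total for h in haps}
    return freq
```

With strong LD in the resolved counts, the ambiguous individuals are attributed almost entirely to the cis phase, which is exactly how LD lets the EM resolve ambiguity.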
Subjects
DNA Copy Number Variations, Genetic Models, Humans, Haplotypes, Gene Frequency, Genotype
ABSTRACT
BACKGROUND: Numerous studies have shown that older women with endometrial cancer have a higher risk of recurrence and cancer-related death. However, it remains unclear whether older age is a causal prognostic factor, or whether other risk factors become increasingly common with age. We aimed to address this question with a unique multimethod study design using state-of-the-art statistical and causal inference techniques on datasets of three large, randomised trials. METHODS: In this multimethod analysis, data from 1801 women participating in the randomised PORTEC-1, PORTEC-2, and PORTEC-3 trials were used for statistical analyses and causal inference. The cohort included 714 patients with intermediate-risk endometrial cancer, 427 patients with high-intermediate risk endometrial cancer, and 660 patients with high-risk endometrial cancer. Associations of age with clinicopathological and molecular features were analysed using non-parametric tests. Multivariable competing risk analyses were performed to determine the independent prognostic value of age. To analyse age as a causal prognostic variable, a deep learning causal inference model called AutoCI was used. FINDINGS: Median follow-up as estimated using the reversed Kaplan-Meier method was 12·3 years (95% CI 11·9-12·6) for PORTEC-1, 10·5 years (10·2-10·7) for PORTEC-2, and 6·1 years (5·9-6·3) for PORTEC-3. Both overall recurrence and endometrial cancer-specific death significantly increased with age. Moreover, older women had a higher frequency of deep myometrial invasion, serous tumour histology, and p53-abnormal tumours. Age was an independent risk factor for both overall recurrence (hazard ratio [HR] 1·02 per year, 95% CI 1·01-1·04; p=0·0012) and endometrial cancer-specific death (HR 1·03 per year, 1·01-1·05; p=0·0012) and was identified as a significant causal variable. 
INTERPRETATION: This study showed that advanced age was associated with more aggressive tumour features in women with endometrial cancer, and was independently and causally related to worse oncological outcomes. Therefore, our findings suggest that older women with endometrial cancer should not be excluded from diagnostic assessments, molecular testing, and adjuvant therapy based on their age alone. FUNDING: None.
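The competing risk analyses mentioned above rest on the nonparametric cumulative incidence function, which a naive 1 - Kaplan-Meier would overestimate in the presence of a competing cause. A minimal sketch (function name and data are illustrative, not the study's code):

```python
def cuminc(times, causes, t):
    """Nonparametric cumulative incidence of cause 1 by time t.

    causes: 0 = censored, 1 = cause of interest, 2 = competing cause.
    CIF_1(t) = sum over event times t_j <= t of S(t_j-) * d1_j / n_j,
    where S is the all-cause Kaplan-Meier survival just before t_j.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    surv = 1.0   # all-cause survival just before the current time
    cif = 0.0
    i = 0
    while i < len(order) and times[order[i]] <= t:
        tt = times[order[i]]
        tied = [j for j in order[i:] if times[j] == tt]
        d1 = sum(1 for j in tied if causes[j] == 1)
        d2 = sum(1 for j in tied if causes[j] == 2)
        cif += surv * d1 / at_risk
        surv *= 1 - (d1 + d2) / at_risk
        at_risk -= len(tied)
        i += len(tied)
    return cif
```

With no competing events the function reduces to 1 minus the Kaplan-Meier estimate.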
Subjects
Endometrial Neoplasms, Humans, Female, Endometrial Neoplasms/pathology, Endometrial Neoplasms/mortality, Age Factors, Aged, Middle Aged, Prognosis, Randomized Controlled Trials as Topic, Risk Factors, Local Neoplasm Recurrence/pathology, Local Neoplasm Recurrence/epidemiology, Aged 80 and over
ABSTRACT
OBJECTIVE: The arteriovenous fistula (AVF) is the first choice for gaining vascular access for hemodialysis. However, 20% to 50% of AVFs fail within 4 months after creation. Although demographic risk factors have been described, there is little evidence on the intraoperative predictors of AVF maturation failure. The aim of this study was to assess the predictive value of intraoperative transit time flow measurements (TTFMs) for AVF maturation failure. METHODS: In this retrospective cohort study, intraoperative blood flow, measured using TTFM, was compared with AVF maturation after 6 weeks in 55 patients. Owing to its significantly higher prevalence and risk of nonmaturation, the radiocephalic AVF (RCAVF) was the main focus of this study. A recommended cutoff point for high versus low intraoperative blood flow was determined for RCAVFs using a receiver operating characteristic curve. RESULTS: The average intraoperative blood flow in RCAVFs was 156 mL/min. Patients with an intraoperative blood flow equal to or lower than the determined cutoff point of 160 mL/min showed a 3.03 times increased risk of AVF maturation failure after 6 weeks compared with patients with a higher intraoperative blood flow (P < .001). CONCLUSIONS: Intraoperative blood flow in RCAVFs measured by TTFM provides an adequate means of predicting AVF nonmaturation 6 weeks after surgery. For RCAVFs, a cutoff point for intraoperative blood flow of 160 mL/min is recommended for maximum sensitivity and specificity to predict AVF maturation failure after 6 weeks.
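A cutoff that maximizes sensitivity and specificity on an ROC curve is commonly chosen via the Youden index (sensitivity + specificity - 1). The abstract does not state the exact criterion used, so this is a sketch under that assumption, with made-up flow values rather than the study's data:

```python
def youden_cutoff(values, failed):
    """Pick the cutoff maximizing the Youden index J.

    values: measurement per patient (e.g. intraoperative flow, mL/min);
    failed: True if maturation failed. Low flow predicts failure, so a
    patient is classified as 'failure' when value <= cutoff.
    Assumes both outcome groups are non-empty.
    """
    n_fail = sum(failed)
    n_ok = len(failed) - n_fail
    best_c, best_j = None, -1.0
    for c in sorted(set(values)):
        tp = sum(1 for v, f in zip(values, failed) if f and v <= c)
        fp = sum(1 for v, f in zip(values, failed) if not f and v <= c)
        j = tp / n_fail - fp / n_ok   # sensitivity + specificity - 1
        if j > best_j:
            best_c, best_j = c, j
    return best_c, best_j
```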
Assuntos
Derivação Arteriovenosa Cirúrgica , Valor Preditivo dos Testes , Artéria Radial , Fluxo Sanguíneo Regional , Diálise Renal , Grau de Desobstrução Vascular , Humanos , Derivação Arteriovenosa Cirúrgica/efeitos adversos , Estudos Retrospectivos , Feminino , Masculino , Velocidade do Fluxo Sanguíneo , Pessoa de Meia-Idade , Idoso , Fatores de Tempo , Artéria Radial/fisiopatologia , Artéria Radial/cirurgia , Fatores de Risco , Curva ROC , Falha de Tratamento , Extremidade Superior/irrigação sanguíneaRESUMO
BACKGROUND: There is divergence in the rate at which people age. The concept of biological age is postulated to capture this variability, and hence to better represent an individual's true global physiological state than chronological age. Biological age predictors are often generated based on cross-sectional data, using biochemical or molecular markers as predictor variables. It is assumed that the difference between chronological and predicted biological age is informative of one's chronological age-independent aging divergence ∆. METHODS: We investigated the statistical assumptions underlying the most popular cross-sectional biological age predictors, based on multiple linear regression, the Klemera-Doubal method, or principal component analysis. We used synthetic and real data to illustrate the consequences if this assumption does not hold. RESULTS: The most popular cross-sectional biological age predictors all rely on the same strong underlying assumption, namely that the association of a candidate marker of aging with chronological age is directly informative of its association with the aging rate ∆. We call this the identical-association assumption and prove that it is untestable in a cross-sectional setting. If this assumption does not hold, the weights assigned to candidate markers of aging are uninformative, and no more signal may be captured than if markers had been assigned weights at random. CONCLUSIONS: Cross-sectional methods for predicting biological age commonly rely on the untestable identical-association assumption, which previous literature in the field had never explicitly acknowledged. These methods have inherent limitations and may provide uninformative results, highlighting the importance of exercising caution in the development and interpretation of cross-sectional biological age predictors.
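The identical-association point can be illustrated with a small simulation: a marker that tracks chronological age but is independent of the aging rate ∆ still earns a large regression weight, while the resulting "age acceleration" (residual) carries essentially no ∆ signal. Fully synthetic data, not taken from the paper:

```python
import random

random.seed(1)
n = 2000
age = [random.uniform(20, 80) for _ in range(n)]
delta = [random.gauss(0, 5) for _ in range(n)]        # true aging divergence
# marker depends on chronological age only, NOT on delta
marker = [0.5 * a + random.gauss(0, 2) for a in age]

# ordinary least squares of age on marker (the cross-sectional recipe)
mx = sum(marker) / n
my = sum(age) / n
sxx = sum((x - mx) ** 2 for x in marker)
sxy = sum((x - mx) * (y - my) for x, y in zip(marker, age))
slope = sxy / sxx                                     # large, nonzero weight
pred_bioage = [my + slope * (x - mx) for x in marker]
resid = [p - a for p, a in zip(pred_bioage, age)]     # "age acceleration"

# correlation between the residual and the true delta: near zero
mr = sum(resid) / n
md = sum(delta) / n
cov = sum((r - mr) * (d - md) for r, d in zip(resid, delta))
corr = cov / (sum((r - mr) ** 2 for r in resid) ** 0.5 *
              sum((d - md) ** 2 for d in delta) ** 0.5)
```

The marker receives a substantial weight purely through its age association, yet the predicted acceleration is uncorrelated with the true ∆, which is exactly the failure mode the identical-association assumption hides.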
Subjects
Aging, Humans, Cross-Sectional Studies, Biomarkers, Linear Models, Multivariate Analysis
ABSTRACT
Aging is a multifaceted and intricate physiological process characterized by a gradual decline in functional capacity, leading to increased susceptibility to diseases and mortality. While chronological age serves as a strong risk factor for age-related health conditions, considerable heterogeneity exists in the aging trajectories of individuals, suggesting that biological age may provide a more nuanced understanding of the aging process. However, the concept of biological age lacks a clear operationalization, leading to the development of various biological age predictors without a solid statistical foundation. This paper addresses these limitations by proposing a comprehensive operationalization of biological age, introducing the "AccelerAge" framework for predicting biological age, and introducing previously underutilized evaluation measures for assessing the performance of biological age predictors. The AccelerAge framework, based on Accelerated Failure Time (AFT) models, directly models the effect of candidate predictors of aging on an individual's survival time, aligning with the prevalent metaphor of aging as a clock. We compare predictors based on the AccelerAge framework to a predictor based on the GrimAge predictor, which is considered one of the best-performing biological age predictors, using simulated data as well as data from the UK Biobank and the Leiden Longevity Study. Our approach seeks to establish a robust statistical foundation for biological age clocks, enabling a more accurate and interpretable assessment of an individual's aging status.
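The clock metaphor behind AFT models can be made concrete: with log T = mu - beta*x + W, a unit increase in x multiplies survival time (and hence its median) by exp(-beta). A simulation sketch with an arbitrarily chosen error distribution and parameters, not the fitted AccelerAge model:

```python
import math
import random
import statistics

# AFT intuition: exposure x = 1 rescales every survival time by
# exp(-beta), i.e. the aging "clock" runs faster by a fixed factor.
random.seed(7)
mu, beta = 3.0, 0.7
n = 20000
t_ref = [math.exp(mu) * random.expovariate(1.0) for _ in range(n)]        # x = 0
t_exp = [math.exp(mu - beta) * random.expovariate(1.0) for _ in range(n)]  # x = 1

accel = statistics.median(t_exp) / statistics.median(t_ref)
# accel should be close to exp(-beta): the whole time distribution,
# not just the hazard at one instant, is rescaled.
```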
Assuntos
Envelhecimento , Modelos Estatísticos , Humanos , Envelhecimento/fisiologia , Idoso , Pessoa de Meia-Idade , Feminino , Masculino , Longevidade , Adulto , Idoso de 80 Anos ou maisRESUMO
The prognosis of patients with mycosis fungoides is variable. As the current literature is scarce and shows mixed results, this study investigated the incidence of other primary malignancies in mycosis fungoides patients. A retrospective, nationwide, population-based cohort study was performed including patients diagnosed with mycosis fungoides between 2000 and 2020 in the Netherlands. All histopathology reports were requested from the Nationwide Network and Registry of Histo- and Cytopathology and screened for other primary malignancies. Lifelong incidence rates were used to compare the incidence of malignancies in mycosis fungoides patients with that in the general population. In total, 1,024 patients were included, with a mean follow-up of 10 years (SD 6). A total of 294 other primary malignancies were found, with 29% of the mycosis fungoides patients developing at least 1 other primary malignancy. Only cutaneous (odds ratio [OR] 2.54; CI 2.0-3.2) and haematological malignancies (OR 2.62; CI 2.00-3.42) had a significantly higher incidence than in the general Dutch population. Mycosis fungoides patients had a significantly increased risk of developing melanomas (OR 2.76; CI 2.11-3.59) and cutaneous squamous cell carcinomas (OR 2.34; CI 1.58-3.45). This study shows no association between mycosis fungoides and other solid organ tumours; however, these patients are at significantly increased risk of developing other haematological and cutaneous malignancies. Clinicians should be aware of this increased risk.
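As a generic illustration of how an odds ratio and a Woolf-type confidence interval are computed from a 2x2 table (the counts below are made up, not the study's raw data):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """2x2 table: a = exposed cases, b = exposed non-cases,
    c = unexposed cases, d = unexposed non-cases.
    Returns (OR, lower, upper) using the Woolf logit interval:
    SE(log OR) = sqrt(1/a + 1/b + 1/c + 1/d). Assumes all cells > 0.
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi
```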
Subjects
Mycosis Fungoides, Skin Neoplasms, Humans, Mycosis Fungoides/epidemiology, Mycosis Fungoides/pathology, Retrospective Studies, Skin Neoplasms/epidemiology, Skin Neoplasms/pathology, Netherlands/epidemiology, Male, Female, Middle Aged, Incidence, Aged, Adult, Risk Factors, Registries, Hematologic Neoplasms/epidemiology, Melanoma/epidemiology, Risk Assessment, Time Factors
ABSTRACT
The standard treatment regimen for esophageal cancer is chemoradiation followed by esophagectomy. However, neoadjuvant chemoradiotherapy damages the surrounding tissue, which potentially increases the risk of postoperative complications, including anastomotic leakage. The impact of definitive chemoradiotherapy (dCRT, 50.4 Gy radiotherapy) compared with the standard neoadjuvant scheme (nCRT, 41.4 Gy radiotherapy) prior to surgery on the incidence of anastomotic leakage remains poorly understood. To study this, all patients who received dCRT followed by esophagectomy between 2011 and 2021 were included. For each patient, two patients who received nCRT were selected as matched controls. Outcomes included postoperative anastomotic leakage, anastomotic stenosis, pulmonary and other postoperative complications (Clavien-Dindo classification ≥1), and overall survival. One hundred and eight patients were included, with a median follow-up of 28 months. The time between neoadjuvant treatment and surgery was longer in the dCRT group than in the nCRT group (65 vs. 48 days, P < 0.001). Postoperatively, significantly more patients in the dCRT group suffered from anastomotic leakage (11% vs. 1%, P = 0.04) and anastomotic stenosis (42% vs. 17%, P < 0.01). No differences were found in other complications or overall survival between the groups. In conclusion, preoperative dCRT is associated with a higher risk of anastomotic leakage and stenosis. These complications, however, can be treated effectively. Therefore, esophagectomy after dCRT is considered an appropriate treatment strategy in a selected patient group.
Subjects
Anastomotic Leak, Chemoradiotherapy, Esophageal Neoplasms, Esophagectomy, Neoadjuvant Therapy, Humans, Esophagectomy/methods, Esophageal Neoplasms/therapy, Esophageal Neoplasms/mortality, Male, Female, Middle Aged, Neoadjuvant Therapy/methods, Anastomotic Leak/etiology, Anastomotic Leak/epidemiology, Aged, Chemoradiotherapy/methods, Retrospective Studies, Postoperative Complications/epidemiology, Postoperative Complications/etiology, Treatment Outcome
ABSTRACT
BACKGROUND: Estimating the risk of revision after arthroplasty could inform patient and surgeon decision-making. However, there is a lack of well-performing prediction models assisting in this task, which may be due to current conventional modeling approaches such as traditional survivorship estimators (such as Kaplan-Meier) or competing risk estimators. Recent advances in machine learning survival analysis might improve decision support tools in this setting. Therefore, this study aimed to assess the performance of machine learning compared with that of conventional modeling to predict revision after arthroplasty. QUESTION/PURPOSE: Does machine learning perform better than traditional regression models for estimating the risk of revision for patients undergoing hip or knee arthroplasty? METHODS: Eleven datasets from published studies from the Dutch Arthroplasty Register reporting on factors associated with revision or survival after partial or total knee and hip arthroplasty between 2018 and 2022 were included in our study. The 11 datasets were observational registry studies, with a sample size ranging from 3038 to 218,214 procedures. We developed a set of time-to-event models for each dataset, leading to 11 comparisons. A set of predictors (factors associated with revision surgery) was identified based on the variables that were selected in the included studies. We assessed the predictive performance of two state-of-the-art statistical time-to-event models for 1-, 2-, and 3-year follow-up: a Fine and Gray model (which models the cumulative incidence of revision) and a cause-specific Cox model (which models the hazard of revision). These were compared with a machine-learning approach (a random survival forest model, which is a decision tree-based machine-learning algorithm for time-to-event analysis). 
Performance was assessed according to discriminative ability (time-dependent area under the receiver operating characteristic curve), calibration (slope and intercept), and overall prediction error (scaled Brier score). Discrimination, known as the area under the receiver operating characteristic curve, measures the model's ability to distinguish patients who achieved the outcome from those who did not, and ranges from 0.5 to 1.0, with 1.0 indicating the highest discrimination score and 0.50 the lowest. Calibration plots the predicted versus the observed probabilities; a perfect plot has an intercept of 0 and a slope of 1. The Brier score is a composite of discrimination and calibration, with 0 indicating perfect prediction and 1 the poorest. A scaled version of the Brier score, 1 - (model Brier score/null model Brier score), can be interpreted as the amount of overall prediction error. RESULTS: We found no differences in discriminative ability (distinguishing patients who received a revision from those who did not) between the random survival forest and the traditional regression models for patients undergoing arthroplasty. Differences in validated performance (time-dependent area under the receiver operating characteristic curve) were not consistent, ranging between -0.04 and 0.03 across the 11 datasets (the time-dependent area under the receiver operating characteristic curve of the models across the 11 datasets ranged from 0.52 to 0.68). In addition, the calibration metrics and scaled Brier scores produced comparable estimates, showing no advantage of machine learning over traditional regression models. CONCLUSION: Machine learning did not outperform traditional regression models.
CLINICAL RELEVANCE: Neither machine-learning models nor traditional regression methods were sufficiently accurate to offer prognostic information when predicting revision arthroplasty. The benefit of these modeling approaches may be limited in this context.
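The scaled Brier score defined above is straightforward to compute; a sketch with hypothetical predictions (for binary outcomes at a fixed horizon, ignoring the censoring weights a full time-to-event version would need):

```python
def brier(preds, outcomes):
    """Mean squared difference between predicted probability and outcome."""
    return sum((p - y) ** 2 for p, y in zip(preds, outcomes)) / len(preds)

def scaled_brier(preds, outcomes):
    """1 - (model Brier / null Brier), where the null model predicts the
    overall event rate for everyone. 1 = perfect, 0 = no better than null."""
    base_rate = sum(outcomes) / len(outcomes)
    null = brier([base_rate] * len(outcomes), outcomes)
    return 1 - brier(preds, outcomes) / null
```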
Subjects
Hip Arthroplasty, Knee Arthroplasty, Machine Learning, Reoperation, Humans, Reoperation/statistics & numerical data, Risk Assessment, Registries, Risk Factors, Prosthesis Failure, Female, Male, Aged, Predictive Value of Tests
ABSTRACT
Hazard ratios are prone to selection bias, compromising their use as causal estimands. On the other hand, if Aalen's additive hazard model applies, the hazard difference has been shown to remain unaffected by the selection of frailty factors over time. Then, in the absence of confounding, observed hazard differences are equal in expectation to the causal hazard differences. However, in the presence of heterogeneity of the effect on the hazard, the observed hazard difference is also affected by the selection of survivors. In this work, we formalize how the observed hazard difference (from a randomized controlled trial) evolves by selecting favourable levels of effect modifiers in the exposed group and thus deviates from the causal effect of interest. Such selection may result in a non-linear integrated hazard difference curve even when the individual causal effects are time-invariant. Therefore, a homogeneous time-varying causal additive effect on the hazard cannot be distinguished from a time-invariant but heterogeneous causal effect. We illustrate this causal issue by studying the effect of chemotherapy on the survival time of patients suffering from carcinoma of the oropharynx, using data from a clinical trial. The hazard difference can thus not be used as an appropriate measure of the causal effect without making untestable assumptions.
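The survivor-selection mechanism can be made concrete with a two-point effect distribution under an additive hazard model: the observed hazard difference starts at the mean effect and drifts toward the smaller effect, although every individual effect is time-invariant. A closed-form sketch with a constant baseline hazard and our own parameterization, not the paper's trial data:

```python
import math

def observed_hd(lam, b1, b2, t):
    """Observed hazard difference at time t when half the exposed have
    additive effect b1 and half b2 (both constant), against a homogeneous
    unexposed baseline hazard lam. Exposed survivors are reweighted by
    their survival probabilities, so the average observed effect shifts
    toward the smaller of b1, b2 over time."""
    w1 = 0.5 * math.exp(-(lam + b1) * t)   # survival weight, effect b1
    w2 = 0.5 * math.exp(-(lam + b2) * t)   # survival weight, effect b2
    h_exposed = (w1 * (lam + b1) + w2 * (lam + b2)) / (w1 + w2)
    return h_exposed - lam
```

At t = 0 the observed difference equals the mean effect (b1 + b2)/2; as t grows it approaches min(b1, b2), producing the non-linear integrated hazard difference curve described above.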
Subjects
Proportional Hazards Models, Humans, Bias, Selection Bias, Causality
ABSTRACT
It is known that the hazard ratio lacks a useful causal interpretation. Even for data from a randomized controlled trial, the hazard ratio suffers from so-called built-in selection bias as, over time, the individuals at risk among the exposed and unexposed are no longer exchangeable. In this paper, we formalize how the expectation of the observed hazard ratio evolves and deviates from the causal effect of interest in the presence of heterogeneity of the hazard rate of unexposed individuals (frailty) and heterogeneity in effect (individual effect modification). For the case of effect heterogeneity, we define the causal hazard ratio. We show that the expected observed hazard ratio equals the ratio of expectations of the latent variables (frailty and modifier) conditional on survival in the world with and without exposure, respectively. Examples with gamma, inverse Gaussian, and compound Poisson distributed frailty, and with categorical (harming, beneficial, or neutral) effect modifiers, are presented for illustration. This set of examples shows that an observed hazard ratio with a particular value can arise for all values of the causal hazard ratio. Therefore, the hazard ratio cannot be used as a measure of the causal effect without making untestable assumptions, stressing the importance of using more appropriate estimands, such as contrasts of the survival probabilities.
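For the gamma-frailty case mentioned above, the attenuation of the observed hazard ratio has a well-known closed form; the sketch below uses our own notation (frailty with mean 1 and variance v, baseline cumulative hazard L(t), individual-level hazard ratio r):

```python
def observed_hr(r, v, cum_hazard):
    """Marginal (observed) hazard ratio under gamma frailty.

    With Z ~ Gamma(mean 1, variance v) multiplying the hazard in both
    arms and a constant individual-level hazard ratio r, the marginal
    hazard in each arm is lambda(t) * r**x / (1 + v * r**x * L(t)), so
        HR_obs(t) = r * (1 + v * L(t)) / (1 + v * r * L(t)),
    which starts at r and shrinks toward 1 as high-frailty individuals
    are depleted faster in the higher-hazard arm.
    """
    return r * (1 + v * cum_hazard) / (1 + v * r * cum_hazard)
```

This is the built-in selection bias in miniature: a time-constant causal hazard ratio produces a time-varying observed one.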
Subjects
Frailty, Humans, Bias, Probability, Proportional Hazards Models, Selection Bias, Clinical Trials as Topic
ABSTRACT
OBJECTIVE: To analyze the risk and patterns of locoregional failure (LRF) in patients of the RAPIDO trial at 5 years. BACKGROUND: Multimodality treatment improves local control in rectal cancer. Total neoadjuvant treatment (TNT) aims to improve systemic control while local control is maintained. At 3 years, the LRF rate was comparable between TNT and chemoradiotherapy in the RAPIDO trial. METHODS: A total of 920 patients were randomized between an experimental (EXP; short-course radiotherapy, chemotherapy, and surgery) and a standard-care group (STD; chemoradiotherapy, surgery, and optional postoperative chemotherapy). LRFs, including early LRF (no resection except for organ preservation/R2 resection) and locoregional recurrence (LRR) after an R0/R1 resection, were analyzed. RESULTS: In total, 460 EXP and 446 STD patients were eligible. At 5.6 years (median follow-up), LRF was detected in 54/460 (12%) and 36/446 (8%) patients in the EXP and STD groups, respectively (P=0.07), among whom EXP patients were more often treated with 3-dimensional conformal radiotherapy (P=0.029). In the EXP group, LRR was detected more often [44/431 (10%) vs. 26/428 (6%); P=0.027], more often with a breached mesorectum [9/44 (21%) vs. 1/26 (4%); P=0.048]. The EXP treatment, enlarged lateral lymph nodes, positive circumferential resection margin, tumor deposits, and node positivity at pathology were significant predictors of developing LRR. The location of the LRRs was similar between groups. Overall survival after LRF was comparable [hazard ratio: 0.76 (95% CI, 0.46-1.26); P=0.29]. CONCLUSIONS: The EXP treatment was associated with an increased risk of LRR, whereas the reduction in disease-related treatment failure and distant metastases remained after 5 years. Further refinement of TNT in rectal cancer is mandated.
Subjects
Rectal Neoplasms, Humans, Antineoplastic Combined Chemotherapy Protocols/therapeutic use, Chemoradiotherapy, Follow-Up Studies, Neoadjuvant Therapy, Local Neoplasm Recurrence/pathology, Neoplasm Staging, Rectal Neoplasms/pathology
ABSTRACT
BACKGROUND & AIMS: Screening for celiac disease (CD) is recommended in children with affected first-degree relatives (FDR). However, the frequency of screening and at what age remain unknown. The aims of this study were to detect variables influencing the risk of CD development and develop and validate clinical prediction models to provide individualized screening advice. METHODS: We analyzed prospective data from the 10 years of follow-up of the PreventCD-birth cohort involving 944 genetically predisposed children with CD-FDR. Variables significantly influencing the CD risk were combined to determine a risk score. Landmark analyses were performed at different ages. Prediction models were created using multivariable Cox proportional hazards regression analyses, backward elimination, and Harrell's c-index for discrimination. Validation was done using data from the independent NeoCel cohort. RESULTS: In March 2019, the median follow-up was 8.3 years (22 days-12.0 years); 135/944 children developed CD (mean age, 4.3 years [range, 1.1-11.4]). CD developed significantly more often in girls (P = .005) and in Human Leukocyte Antigen (HLA)-DQ2 homozygous individuals (8-year cumulative incidence rate of 35.4% vs maximum of the other HLA-risk groups 18.2% [P < .001]). The effect of homozygosity DR3-DQ2/DR7-DQ2 on CD development was only present in girls (interaction P = .04). The prediction models showed good fit in the validation cohort (Cox regression 0.81 [0.54]). To calculate a personalized risk of CD development and provide screening advice, we designed the Prediction application https://hputter.shinyapps.io/preventcd/. CONCLUSION: Children with CD-FDR develop CD early in life, and their risk depends on gender, age and HLA-DQ, which are all factors that are important for sound screening advice. 
These children should be screened early in life, including HLA-DQ2/8-typing, and if genetically predisposed to CD, they should get further personalized screening advice using our Prediction application. TRIAL REGISTRATION NUMBER: ISRCTN74582487 (https://www.isrctn.com/search?q=ISRCTN74582487).
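Harrell's c-index, used above to assess discrimination of the Cox prediction models, can be computed by pair counting; a quadratic-time sketch (function name is ours, and data below are illustrative):

```python
def harrell_c(times, events, risk):
    """Harrell's concordance index.

    A pair (i, j) is usable when the earlier time is an observed event;
    the pair is concordant when the patient with the higher risk score
    fails first. Ties in risk contribute 1/2. Assumes at least one
    usable pair."""
    concordant = ties = usable = 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if events[i] and times[i] < times[j]:
                usable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    ties += 1
    return (concordant + 0.5 * ties) / usable
```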
Subjects
Celiac Disease, Celiac Disease/diagnosis, Celiac Disease/epidemiology, Celiac Disease/genetics, Child, Preschool Child, Cohort Studies, Female, Genetic Predisposition to Disease, Humans, Prospective Studies, Risk Factors
ABSTRACT
Multi-state models for event history analysis most commonly assume the process is Markov. This article considers tests of the Markov assumption that are applicable to general multi-state models. Two approaches using existing methodology are considered: a simple method based on including time of entry into each state as a covariate in Cox models for the transition intensities, and a method involving the detection of a shared frailty through a stratified Commenges-Andersen test. In addition, using the principle that under a Markov process the future rate of transitions at times $t > s$ should not be influenced by the state occupied at time $s$, a new class of general tests is developed by considering summaries from families of log-rank statistics in which patients are grouped by the state occupied at varying initial time $s$. An extended form of the test, applicable to models that are Markov conditional on observed covariates, is also derived. The null distributions of the proposed test statistics are approximated using wild bootstrap sampling. The approaches are compared in simulation and applied to a dataset on sleeping behavior. The most powerful test depends on the particular departure from a Markov process, although the Cox-based method maintained good power in a wide range of scenarios. The proposed class of log-rank-based tests is most useful in situations where the non-Markov behavior does not persist, or is not uniform in nature across patient time.
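The building block of the proposed class of tests is the two-sample log-rank statistic, with groups defined by the state occupied at a landmark time s. A sketch of the standard two-sample form (the paper summarizes whole families of such statistics across varying s, which is omitted here):

```python
def logrank_stat(times, events, group):
    """Two-sample log-rank chi-squared statistic.

    times: follow-up times; events: 1 = event, 0 = censored;
    group: 0/1 group labels. At each event time, the observed number of
    group-1 events is compared with its hypergeometric expectation."""
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    at_risk1 = sum(group)
    o_minus_e = 0.0
    var = 0.0
    i = 0
    while i < len(order):
        t = times[order[i]]
        tied = [j for j in order[i:] if times[j] == t]
        d = sum(events[j] for j in tied)                    # events at t
        d1 = sum(events[j] for j in tied if group[j] == 1)  # group-1 events
        if d > 0 and at_risk > 1:
            p1 = at_risk1 / at_risk
            o_minus_e += d1 - d * p1
            var += d * p1 * (1 - p1) * (at_risk - d) / (at_risk - 1)
        for j in tied:
            at_risk -= 1
            at_risk1 -= group[j]
        i += len(tied)
    return o_minus_e ** 2 / var
```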
Subjects
Statistical Models, Research Design, Computer Simulation, Humans, Markov Chains, Proportional Hazards Models
ABSTRACT
Rapidly detecting problems in the quality of care is of utmost importance for the well-being of patients. Without proper inspection schemes, such problems can go undetected for years. Cumulative sum (CUSUM) charts have proven to be useful for quality control, yet available methodology for survival outcomes is limited. The few available continuous time inspection charts usually require the researcher to specify an expected increase in the failure rate in advance, thereby requiring prior knowledge about the problem at hand. Misspecifying parameters can lead to false positive alerts and large detection delays. To solve this problem, we take a more general approach to derive the new Continuous time Generalized Rapid response CUSUM (CGR-CUSUM) chart. We find an expression for the approximate average run length (average time to detection) and illustrate the possible gain in detection speed by using the CGR-CUSUM over other commonly used monitoring schemes on a real-life data set from the Dutch Arthroplasty Register as well as in simulation studies. Besides the inspection of medical procedures, the CGR-CUSUM can also be used for other real-time inspection schemes such as industrial production lines and quality control of services.
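For contrast with the CGR-CUSUM, a basic one-sided CUSUM is sketched below; note that it requires the pre-specified allowance k, which is exactly the kind of prior knowledge the CGR-CUSUM is designed to avoid. Parameters and data are illustrative:

```python
def cusum_first_signal(observations, target, k, h):
    """One-sided CUSUM for an increase above `target`.

    S_i = max(0, S_{i-1} + (x_i - target - k)), with allowance k
    (half the shift deemed worth detecting) and control limit h.
    Returns the index of the first observation at which S exceeds h,
    or None if the chart never signals."""
    s = 0.0
    for i, x in enumerate(observations):
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i
    return None
```

Misspecifying k or h changes both the false-alarm rate and the detection delay, which is the pitfall motivating the more general chart above.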
ABSTRACT
BACKGROUND: Previous studies have reported conflicting results of prolonged antibiotic prophylaxis on infectious complications after pancreatoduodenectomy. This study evaluated the effect of prolonged antibiotics on surgical-site infections (SSIs) after pancreatoduodenectomy. METHODS: A systematic review and meta-analysis was undertaken of SSIs in patients with perioperative (within 24 h) versus prolonged (over 24 h) antibiotic prophylaxis after pancreatoduodenectomy. SSIs were classified as organ/space infections or superficial SSIs within 30 days after surgery. ORs were calculated using a Mantel-Haenszel fixed-effect model. RESULTS: Ten studies were included in the qualitative analysis, of which 8, reporting on 1170 patients, were included in the quantitative analysis. The duration of prolonged antibiotic prophylaxis varied between 2 and 10 days after surgery. Four studies reporting on 782 patients showed comparable organ/space infection rates in patients receiving perioperative and prolonged antibiotics (OR 1.35, 95 per cent c.i. 0.94 to 1.93). However, among patients with preoperative biliary drainage (5 studies reporting on 577 patients), organ/space infection rates were lower with prolonged compared with perioperative antibiotics (OR 2.09, 1.43 to 3.07). Three studies (633 patients) demonstrated comparable superficial SSI rates between patients receiving perioperative versus prolonged prophylaxis (OR 1.54, 0.97 to 2.44), as well as in patients with preoperative biliary drainage in 4 studies reporting on 431 patients (OR 1.60, 0.89 to 2.88). CONCLUSION: Prolonged antibiotic prophylaxis is associated with fewer organ/space infections in patients who undergo preoperative biliary drainage. However, the optimal duration of antibiotic prophylaxis after pancreatoduodenectomy remains to be determined and warrants confirmation in an RCT.
Almost 40 in 100 patients develop an infection after pancreatic surgery. This study collected research that studied the effect of prolonged antibiotics after pancreatic surgery on the number of infections after surgery. Research articles were selected if patients who received antibiotics only during surgery were compared with those who had prolonged antibiotics after surgery. Prolonged antibiotics means antibiotics for longer than 24 h after surgery. Comparing patients who had antibiotics during surgery and those who received prolonged antibiotics after surgery, this study focused on the number of abdominal infections and wound infections. Ten studies were selected, and these studies included 1170 patients in total. The duration of prolonged antibiotics ranged from 2 to 5 days after pancreatic surgery. Four studies (with 782 patients) showed comparable abdominal infections in patients who had antibiotics only during surgery and those who had prolonged antibiotics after surgery (OR 1.35, 95 per cent c.i. 0.94 to 1.93). However, for patients with a stent in the bile duct (5 studies on 577 patients), fewer abdominal infections were seen in patients who had prolonged antibiotics after surgery compared with patients who received antibiotics only during surgery (OR 2.09, 1.43 to 3.07). Three studies (633 patients) showed the same rate of wound infections in patients who had antibiotics only during surgery compared with those who received prolonged antibiotics after surgery (OR 1.54, 0.97 to 2.44). The number of wound infections was also the same in patients with a stent in the bile duct (OR 1.60, 0.89 to 2.88). Prolonged antibiotics after pancreatic surgery seem to lower abdominal infections in patients who have a stent placed in the bile duct. However, the best duration of antibiotics is unclear; a well-designed study is needed.
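The pooled odds ratios above come from the Mantel-Haenszel fixed-effect estimator; a minimal sketch (the example tables are made up, not the included studies' counts):

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel fixed-effect pooled odds ratio.

    tables: list of 2x2 tables (a, b, c, d) per study, with
    a = exposed cases, b = exposed non-cases, c = unexposed cases,
    d = unexposed non-cases.
    OR_MH = sum_i(a_i * d_i / n_i) / sum_i(b_i * c_i / n_i)."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

For a single study the estimator reduces to the ordinary odds ratio ad/bc; across studies it weights each table by its size.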
RESUMO
Primary plasma cell leukemia (pPCL) is a rare and challenging malignancy, and there are limited data regarding optimal transplant approaches. We therefore undertook a retrospective analysis, covering 1998-2014, of 751 patients with pPCL undergoing one of four transplant strategies: single autologous transplant (single auto), single allogeneic transplant (allo-first), or a combined tandem approach with either an allogeneic transplant following an autologous transplant (auto-allo) or a tandem autologous transplant (auto-auto). To avoid time bias, multiple analytic approaches were employed, including Cox models with time-dependent covariates and dynamic prediction by landmarking. Initial comparisons were made between patients undergoing allo-first (n=70) and auto-first (n=681), regardless of a subsequent second transplant. The allo-first group had a lower relapse rate (45.9%, 95% confidence interval [95% CI]: 33.2-58.6 vs. 68.4%, 64.4-72.4) but higher non-relapse mortality (27%, 95% CI: 15.9-38.1 vs. 7.3%, 5.2-9.4) at 36 months. Patients who underwent allo-first had a markedly higher risk in the first 100 days for both overall survival and progression-free survival. Patients undergoing auto-allo (n=122) had no increased risk in the short term and a significant benefit in progression-free survival after 100 days compared with those undergoing single auto (hazard ratio [HR]=0.69, 95% CI: 0.52-0.92; P=0.012). Auto-auto (n=117) was an effective option for patients achieving complete remission before their first transplant, whereas for patients who did not achieve complete remission before transplantation our modeling predicted that auto-allo was superior. This is the largest retrospective study of transplantation in pPCL to date. We confirm a significant mortality risk within the first 100 days for allo-first and suggest that tandem transplant strategies are superior. Disease status at the time of transplant influences outcome.
This knowledge may help to guide clinical decisions on transplant strategy.
Subjects
Hematopoietic Stem Cell Transplantation, Plasma Cell Leukemia, Humans, Retrospective Studies, Homologous Transplantation, Plasma Cell Leukemia/diagnosis, Plasma Cell Leukemia/therapy, Disease-Free Survival, Hematopoietic Stem Cell Transplantation/adverse effects, Autologous Transplantation, Recurrence
RESUMO
The additive hazards model specifies the effect of covariates on the hazard in an additive way, in contrast to the popular Cox model, in which it is multiplicative. In its non-parametric form, the additive hazards model offers a very flexible way of modeling time-varying covariate effects. It is most commonly estimated by ordinary least squares. In this paper, we consider the case where covariates are bounded, and derive the maximum likelihood estimator under the constraint that the hazard is non-negative for all covariate values in their domain. We show that the maximum likelihood estimator may be obtained by separately maximizing the log-likelihood contribution of each event time point, and that this maximization problem is equivalent to fitting a series of Poisson regression models with an identity link under non-negativity constraints. We derive an analytic solution to the maximum likelihood estimator. We contrast the maximum likelihood estimator with the ordinary least-squares estimator in a simulation study and show that it has smaller mean squared error than the ordinary least-squares estimator. An illustration with data on patients with carcinoma of the oropharynx is provided.
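The reduction to identity-link Poisson fits can be sketched numerically. The helper below is an illustrative sketch, not the paper's analytic solution: it assumes a design matrix with the intercept in the first column and uses a generic constrained optimizer (SciPy's SLSQP) to maximize one event-time contribution under the non-negativity constraints:

```python
import numpy as np
from scipy.optimize import minimize

def fit_event_time(X, d):
    """One event-time contribution of the additive-hazards log-likelihood:
    maximize sum_i d_i * log(mu_i) - mu_i with mu = X @ beta constrained
    to be non-negative for every at-risk subject (an identity-link
    Poisson fit). X: at-risk design matrix, intercept assumed in column 0;
    d: 0/1 event indicators at this event time."""
    n, p = X.shape

    def negloglik(beta):
        mu = np.clip(X @ beta, 1e-12, None)  # guard against log(0)
        return -(d * np.log(mu) - mu).sum()

    # Non-negativity of the fitted hazard at every observed covariate value
    cons = [{"type": "ineq", "fun": (lambda b, x=x: x @ b)} for x in X]
    beta0 = np.zeros(p)
    beta0[0] = max(d.mean(), 1e-3)  # feasible start: constant hazard
    res = minimize(negloglik, beta0, constraints=cons, method="SLSQP")
    return res.x
```

As a sanity check, with an intercept-only design the constrained maximum is simply the event proportion at that time point.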
Subjects
Proportional Hazards Models, Humans, Likelihood Functions, Least-Squares Analysis, Computer Simulation
RESUMO
Unobserved individual heterogeneity is a common challenge in population cancer survival studies. This heterogeneity is usually associated with the combination of model misspecification and the failure to record truly relevant variables. We investigate the effects of unobserved individual heterogeneity in the context of excess hazard models, one of the main tools in cancer epidemiology. We propose an individual excess hazard frailty model to account for individual heterogeneity. This represents an extension of frailty modeling to the relative survival framework. In order to facilitate the inference on the parameters of the proposed model, we select frailty distributions which produce closed-form expressions of the marginal hazard and survival functions. The resulting model allows for an intuitive interpretation, in which the frailties induce a selection of the healthier individuals among survivors. We model the excess hazard using a flexible parametric model with a general hazard structure which facilitates the inclusion of time-dependent effects. We illustrate the performance of the proposed methodology through a simulation study. We present a real-data example using data from lung cancer patients diagnosed in England, and discuss the impact of not accounting for unobserved heterogeneity on the estimation of net survival. The methodology is implemented in the R package IFNS.
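One frailty distribution with the closed-form property the abstract mentions is the gamma; as an illustration (the paper's exact distributional choices are not specified here), with frailty $Z \sim \text{Gamma}(1/\theta, 1/\theta)$ (so $E[Z]=1$, $\operatorname{Var}(Z)=\theta$) acting multiplicatively on the excess hazard $\lambda_E(t)$ with cumulative $\Lambda_E(t)$:

```latex
S_{\mathrm{marg}}(t) = \mathbb{E}\left[e^{-Z\,\Lambda_E(t)}\right]
  = \bigl(1 + \theta\,\Lambda_E(t)\bigr)^{-1/\theta},
\qquad
\lambda_{\mathrm{marg}}(t) = \frac{\lambda_E(t)}{1 + \theta\,\Lambda_E(t)}
```

As $\Lambda_E(t)$ grows, the marginal excess hazard falls below the conditional one, reflecting the selection of healthier (low-frailty) individuals among survivors.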
Subjects
Frailty, Lung Neoplasms, Humans, Proportional Hazards Models, Survival Analysis, Statistical Models
RESUMO
PURPOSE: Near-infrared (NIR) fluorescence imaging using indocyanine green (ICG) is gaining popularity for the quantification of tissue perfusion, including foot perfusion in patients with lower extremity arterial disease (LEAD). However, the absolute fluorescence intensity is influenced by patient- and system-related factors, limiting reliable and valid quantification. To enhance the quality of quantitative perfusion assessment using ICG NIR fluorescence imaging, normalization of the measured time-intensity curves seems useful. MATERIALS AND METHODS: In this cohort study, the effect of normalization on two aspects of ICG NIR fluorescence imaging in the assessment of foot perfusion was measured: repeatability and region selection. Following intravenous administration of ICG, the NIR fluorescence intensity in both feet was recorded for 10 min using the Quest Spectrum platform®. The effect of normalization on repeatability was measured pre- and postprocedurally in the nontreated foot of patients undergoing unilateral revascularization (repeatability group). The effect of normalization on region selection was assessed in patients without LEAD (region selection group). Absolute and normalized time-intensity curves were compared. RESULTS: Successful ICG NIR fluorescence imaging was performed in 54 patients (repeatability group, n = 38; region selection group, n = 16). For the repeatability group, normalization of the time-intensity curves displayed a comparable inflow pattern for repeated measurements. For the region selection group, the maximum fluorescence intensity (Imax) demonstrated significant differences between the 3 measured regions of the foot (P = .002). Following normalization, the time-intensity curves in both feet were comparable for all 3 regions. CONCLUSION: This study shows the effect of normalization of time-intensity curves on both repeatability and region selection in ICG NIR fluorescence imaging.
The significant difference between absolute parameters in various regions of the foot demonstrates the limitation of absolute intensity in interpreting tissue perfusion. Therefore, normalization and standardization of camera settings are essential steps toward reliable and valid quantification of tissue perfusion using ICG NIR fluorescence imaging.
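As an illustration of the idea, a generic min-max normalization of a time-intensity curve might look as follows (the actual normalization procedure used with the Quest Spectrum platform® is not specified in this abstract):

```python
import numpy as np

def normalize_curve(intensity):
    """Rescale a fluorescence time-intensity curve to [0, 1] using its
    own minimum (baseline) and maximum (Imax), so that inflow patterns
    can be compared across regions and repeated measurements."""
    intensity = np.asarray(intensity, dtype=float)
    lo, hi = intensity.min(), intensity.max()
    return (intensity - lo) / (hi - lo)
```

After this rescaling, curves from regions with very different absolute intensities share a common [0, 1] range, so only their shape (inflow pattern) is compared.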
Subjects
Indocyanine Green, Lower Extremity, Humans, Cohort Studies, Treatment Outcome, Optical Imaging/methods, Perfusion
RESUMO
BACKGROUND: In health research, several chronic diseases are susceptible to competing risks (CRs). Statistical models (SM) were initially developed to estimate the cumulative incidence of an event in the presence of CRs. As there has recently been growing interest in applying machine learning (ML) to clinical prediction, these techniques have also been extended to model CRs, but the literature is limited. Here, our aim is to investigate the potential role of ML versus SM for CRs within non-complex data (small/medium sample size, low-dimensional setting). METHODS: A dataset of 3826 retrospectively collected patients with extremity soft-tissue sarcoma (eSTS) and nine predictors is used to evaluate the models' predictive performance in terms of discrimination and calibration. Two SM (cause-specific Cox, Fine-Gray) and three ML techniques are compared for CRs in a simple clinical setting. The ML models include an original partial logistic artificial neural network for CRs (PLANNCR original), a PLANNCR with novel architectural specifications (PLANNCR extended), and a random survival forest for CRs (RSFCR). The clinical endpoint is the time in years between surgery and disease progression (event of interest) or death (competing event). Time points of interest are 2, 5, and 10 years. RESULTS: Based on the original eSTS data, 100 bootstrapped training datasets are drawn. Performance of the final models is assessed on validation data (the left-out samples) using the Brier score and the Area Under the Curve (AUC) with CRs. Miscalibration (absolute accuracy error) is also estimated. Results show that the ML models reach a performance comparable to the SM at 2, 5, and 10 years for both the Brier score and the AUC (95% confidence intervals overlap). However, the SM are frequently better calibrated.
CONCLUSIONS: Overall, ML techniques are less practical as they require substantial implementation time (data preprocessing, hyperparameter tuning, computational intensity), whereas regression methods can perform well without the additional workload of model training. As such, for non-complex real-life survival data, these techniques should be applied only as exploratory tools, complementary to SM, when assessing model performance. More attention to model calibration is urgently needed.
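For context, both model classes estimate the cumulative incidence of an event in the presence of CRs. A minimal nonparametric (Aalen-Johansen) sketch of that target quantity, purely illustrative and not one of the compared models:

```python
import numpy as np

def cumulative_incidence(time, event):
    """Aalen-Johansen cumulative incidence for event type 1 in the
    presence of a competing event type 2 (event code 0 = censored).
    Returns the distinct observed times and the CIF evaluated at them."""
    time = np.asarray(time, dtype=float)
    event = np.asarray(event)
    order = np.argsort(time)
    time, event = time[order], event[order]
    surv = 1.0        # overall event-free survival just before t
    cif = 0.0
    at_risk = len(time)
    times, cifs = [], []
    for t in np.unique(time):
        mask = time == t
        d1 = int(np.sum(event[mask] == 1))    # events of interest at t
        d_all = int(np.sum(event[mask] > 0))  # any event at t
        cif += surv * d1 / at_risk            # increment the CIF
        surv *= 1.0 - d_all / at_risk         # update overall survival
        at_risk -= int(mask.sum())
        times.append(t)
        cifs.append(cif)
    return np.array(times), np.array(cifs)
```

Unlike one minus a Kaplan-Meier curve for event 1 alone, this estimator only credits an event of interest to subjects who have not yet failed from the competing cause, which is why naive survival methods overestimate incidence under CRs.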