Results 1 - 20 of 464
1.
Genet Epidemiol ; 48(1): 3-26, 2024 Feb.
Article in English | MEDLINE | ID: mdl-37830494

ABSTRACT

Advances in DNA sequencing technologies have enabled genotyping of complex genetic regions exhibiting copy number variation and high allelic diversity, yet it is not always possible to derive exact genotypes, often resulting in ambiguous genotype calls, that is, partially missing data. An example of such a gene region is the killer-cell immunoglobulin-like receptor (KIR) gene cluster. These genes are of special interest in the context of allogeneic hematopoietic stem cell transplantation. For such complex gene regions, current haplotype reconstruction methods are not feasible, as they cannot cope with the complexity of the data. We present an expectation-maximization (EM) algorithm to estimate haplotype frequencies (HTFs) that deals with the missing-data components and takes linkage disequilibrium (LD) between genes into account. To cope with the exponential increase in the number of haplotypes as genes are added, we add three components to a standard EM implementation. First, reconstruction is performed iteratively, adding one gene at a time. Second, after each step, haplotypes with frequencies below a threshold are collapsed into a rare-haplotype group. Third, the HTF of the rare-haplotype group is profiled in subsequent iterations to improve estimates. A simulation study evaluates the effect of combining information from multiple genes on the estimates of these frequencies. We show that estimated HTFs are approximately unbiased. Our simulation study shows that the EM algorithm is able to combine information from multiple genes when LD is high, whereas increased ambiguity levels increase bias. Linear regression models based on this EM show that a large number of haplotypes can be problematic for unbiased effect-size estimation and that models need to be sparse. In a real-data analysis of KIR genotypes, we compare HTFs to those obtained in an independent study. Our new EM-algorithm-based method is the first to account for the full genetic architecture of complex gene regions, such as the KIR gene region. The algorithm can handle the numerous observed ambiguities and allows haplotypes to be collapsed for implicit dimension reduction. Combining information from multiple genes improves haplotype reconstruction.
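For intuition, here is a minimal sketch of the EM core described above: the E-step distributes each individual's ambiguous genotype call over its compatible haplotype pairs, the M-step re-estimates frequencies from the expected counts, and a helper collapses low-frequency haplotypes into a rare group. It is not the authors' algorithm (the iterative gene-by-gene reconstruction and the profiling of the rare-group frequency are omitted), and the input format, a list of compatible haplotype pairs per individual, is assumed purely for illustration.

```python
import numpy as np
from collections import defaultdict

def em_haplotype_freqs(compatible_pairs, n_iter=200, tol=1e-8):
    """Basic EM for haplotype frequencies from ambiguous genotype calls.

    compatible_pairs: one entry per individual, each a list of (h1, h2)
    tuples that are compatible with that individual's (possibly ambiguous)
    genotype call.  Returns a dict haplotype -> estimated frequency.
    """
    haplotypes = sorted({h for pairs in compatible_pairs for p in pairs for h in p})
    freq = {h: 1.0 / len(haplotypes) for h in haplotypes}  # uniform start
    n = len(compatible_pairs)

    for _ in range(n_iter):
        counts = defaultdict(float)
        # E-step: distribute each individual over its compatible pairs
        for pairs in compatible_pairs:
            weights = []
            for h1, h2 in pairs:
                w = freq[h1] * freq[h2]
                if h1 != h2:
                    w *= 2.0  # heterozygous pairs have two orderings
                weights.append(w)
            total = sum(weights)
            for (h1, h2), w in zip(pairs, weights):
                share = w / total if total > 0 else 1.0 / len(pairs)
                counts[h1] += share
                counts[h2] += share
        # M-step: expected haplotype counts divided by 2N chromosomes
        new_freq = {h: counts[h] / (2.0 * n) for h in haplotypes}
        if max(abs(new_freq[h] - freq[h]) for h in haplotypes) < tol:
            return new_freq
        freq = new_freq
    return freq

def collapse_rare(freq, threshold=0.01):
    """Collapse haplotypes below `threshold` into a single 'RARE' group."""
    common = {h: f for h, f in freq.items() if f >= threshold}
    common["RARE"] = sum(f for f in freq.values() if f < threshold)
    return common

# Toy example: two ambiguous calls and one unambiguous call
data = [
    [("A-1", "B-2"), ("A-2", "B-1")],   # ambiguous: two compatible pairs
    [("A-1", "A-1")],
    [("A-1", "B-2"), ("A-1", "B-1")],
]
print(collapse_rare(em_haplotype_freqs(data), threshold=0.05))
```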


Subject(s)
DNA Copy Number Variations, Models, Genetic, Humans, Haplotypes, Gene Frequency, Genotype
2.
Lancet Oncol ; 25(6): 779-789, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38701815

ABSTRACT

BACKGROUND: Numerous studies have shown that older women with endometrial cancer have a higher risk of recurrence and cancer-related death. However, it remains unclear whether older age is a causal prognostic factor, or whether other risk factors become increasingly common with age. We aimed to address this question with a unique multimethod study design using state-of-the-art statistical and causal inference techniques on datasets of three large, randomised trials. METHODS: In this multimethod analysis, data from 1801 women participating in the randomised PORTEC-1, PORTEC-2, and PORTEC-3 trials were used for statistical analyses and causal inference. The cohort included 714 patients with intermediate-risk endometrial cancer, 427 patients with high-intermediate risk endometrial cancer, and 660 patients with high-risk endometrial cancer. Associations of age with clinicopathological and molecular features were analysed using non-parametric tests. Multivariable competing risk analyses were performed to determine the independent prognostic value of age. To analyse age as a causal prognostic variable, a deep learning causal inference model called AutoCI was used. FINDINGS: Median follow-up as estimated using the reversed Kaplan-Meier method was 12·3 years (95% CI 11·9-12·6) for PORTEC-1, 10·5 years (10·2-10·7) for PORTEC-2, and 6·1 years (5·9-6·3) for PORTEC-3. Both overall recurrence and endometrial cancer-specific death significantly increased with age. Moreover, older women had a higher frequency of deep myometrial invasion, serous tumour histology, and p53-abnormal tumours. Age was an independent risk factor for both overall recurrence (hazard ratio [HR] 1·02 per year, 95% CI 1·01-1·04; p=0·0012) and endometrial cancer-specific death (HR 1·03 per year, 1·01-1·05; p=0·0012) and was identified as a significant causal variable. INTERPRETATION: This study showed that advanced age was associated with more aggressive tumour features in women with endometrial cancer, and was independently and causally related to worse oncological outcomes. Therefore, our findings suggest that older women with endometrial cancer should not be excluded from diagnostic assessments, molecular testing, and adjuvant therapy based on their age alone. FUNDING: None.


Subject(s)
Endometrial Neoplasms, Humans, Female, Endometrial Neoplasms/pathology, Endometrial Neoplasms/mortality, Age Factors, Aged, Middle Aged, Prognosis, Randomized Controlled Trials as Topic, Risk Factors, Neoplasm Recurrence, Local/pathology, Neoplasm Recurrence, Local/epidemiology, Aged, 80 and over
3.
J Vasc Surg ; 80(1): 232-239, 2024 Jul.
Article in English | MEDLINE | ID: mdl-38432488

ABSTRACT

OBJECTIVE: The arteriovenous fistula (AVF) is the first choice for gaining vascular access for hemodialysis. However, 20% to 50% of AVFs fail within 4 months after creation. Although demographic risk factors have been described, there is little evidence on the intraoperative predictors of AVF maturation failure. The aim of this study was to assess the predictive value of intraoperative transit time flow measurements (TTFMs) for AVF maturation failure. METHODS: In this retrospective cohort study, intraoperative blood flow, measured using TTFM, was compared with AVF maturation after 6 weeks in 55 patients. Owing to its significantly higher prevalence and risk of nonmaturation, the radiocephalic AVF (RCAVF) was the main focus of this study. A recommended cutoff point for high versus low intraoperative blood flow was determined for RCAVFs using a receiver operating characteristic curve. RESULTS: The average intraoperative blood flow in RCAVFs was 156 mL/min. Patients with an intraoperative blood flow equal to or lower than the determined cutoff point of 160 mL/min showed a 3.03-fold increased risk of AVF maturation failure after 6 weeks compared with patients with a higher intraoperative blood flow (P < .001). CONCLUSIONS: The intraoperative blood flow in RCAVFs measured by TTFM provides an adequate means of predicting AVF nonmaturation 6 weeks after surgery. For RCAVFs, a cutoff point for intraoperative blood flow of 160 mL/min is recommended for maximum sensitivity and specificity to predict AVF maturation failure after 6 weeks.
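As an illustration of how such a cutoff can be derived, the sketch below picks the flow threshold that maximizes sensitivity plus specificity (Youden's J) from a ROC curve. The data are hypothetical, and the exact cutoff-selection rule used in the study may differ.

```python
import numpy as np
from sklearn.metrics import roc_curve

# Hypothetical data: 1 = AVF maturation failure at 6 weeks, 0 = matured.
failure = np.array([1, 1, 0, 1, 0, 0, 1, 0, 0, 0, 1, 0])
flow_ml_min = np.array([90, 120, 240, 150, 310, 200, 110, 260, 180, 220, 140, 290])

# Lower flow predicts failure, so score with the negated flow.
fpr, tpr, thresholds = roc_curve(failure, -flow_ml_min)
youden_j = tpr - fpr
best = np.argmax(youden_j)
cutoff = -thresholds[best]  # undo the sign flip to get a flow cutoff

print(f"Flow cutoff maximizing sensitivity + specificity: {cutoff:.0f} mL/min")
print(f"Sensitivity {tpr[best]:.2f}, specificity {1 - fpr[best]:.2f}")
```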


Subject(s)
Arteriovenous Shunt, Surgical, Predictive Value of Tests, Radial Artery, Regional Blood Flow, Renal Dialysis, Vascular Patency, Humans, Arteriovenous Shunt, Surgical/adverse effects, Retrospective Studies, Female, Male, Blood Flow Velocity, Middle Aged, Aged, Time Factors, Radial Artery/physiopathology, Radial Artery/surgery, Risk Factors, ROC Curve, Treatment Failure, Upper Extremity/blood supply
4.
BMC Med Res Methodol ; 24(1): 58, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38459475

ABSTRACT

BACKGROUND: There is divergence in the rate at which people age. The concept of biological age is postulated to capture this variability and hence to better represent an individual's true global physiological state than chronological age. Biological age predictors are often generated based on cross-sectional data, using biochemical or molecular markers as predictor variables. It is assumed that the difference between chronological and predicted biological age is informative of one's chronological age-independent aging divergence ∆. METHODS: We investigated the statistical assumptions underlying the most popular cross-sectional biological age predictors, based on multiple linear regression, the Klemera-Doubal method, or principal component analysis. We used synthetic and real data to illustrate the consequences if this assumption does not hold. RESULTS: The most popular cross-sectional biological age predictors all rely on the same strong underlying assumption, namely that the association of a candidate marker of aging with chronological age is directly informative of its association with the aging rate ∆. We call this the identical-association assumption and prove that it is untestable in a cross-sectional setting. If this assumption does not hold, the weights assigned to candidate markers of aging are uninformative, and no more signal may be captured than if markers had been assigned weights at random. CONCLUSIONS: Cross-sectional methods for predicting biological age commonly rely on the untestable identical-association assumption, which previous literature in the field had not explicitly acknowledged. These methods have inherent limitations and may provide uninformative results, highlighting the importance of exercising caution in the development and interpretation of cross-sectional biological age predictors.
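The consequence described in the RESULTS can be made concrete with a small synthetic experiment (not taken from the paper): markers are generated to track chronological age but to carry no signal about the aging divergence ∆, violating the identical-association assumption, and the resulting "age acceleration" turns out to be essentially uncorrelated with the true ∆.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 2000, 10

chrono_age = rng.uniform(40, 80, n)          # chronological age C
delta = rng.normal(0, 5, n)                  # true aging divergence, independent of C

# Markers track chronological age but carry no signal about delta,
# violating the identical-association assumption.
loadings = rng.uniform(0.5, 1.5, p)
markers = chrono_age[:, None] * loadings + rng.normal(0, 10, (n, p))

# "Biological age" via multiple linear regression of C on the markers.
X = np.column_stack([np.ones(n), markers])
beta, *_ = np.linalg.lstsq(X, chrono_age, rcond=None)
predicted_age = X @ beta
age_accel = predicted_age - chrono_age       # putative age-independent signal

print("corr(predicted age, chronological age):",
      round(np.corrcoef(predicted_age, chrono_age)[0, 1], 3))
print("corr(age acceleration, true delta):",
      round(np.corrcoef(age_accel, delta)[0, 1], 3))   # close to 0: uninformative
```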


Subject(s)
Aging, Humans, Cross-Sectional Studies, Biomarkers, Linear Models, Multivariate Analysis
5.
Eur J Epidemiol ; 2024 Apr 06.
Article in English | MEDLINE | ID: mdl-38581608

ABSTRACT

Aging is a multifaceted and intricate physiological process characterized by a gradual decline in functional capacity, leading to increased susceptibility to diseases and mortality. While chronological age serves as a strong risk factor for age-related health conditions, considerable heterogeneity exists in the aging trajectories of individuals, suggesting that biological age may provide a more nuanced understanding of the aging process. However, the concept of biological age lacks a clear operationalization, leading to the development of various biological age predictors without a solid statistical foundation. This paper addresses these limitations by proposing a comprehensive operationalization of biological age, introducing the "AccelerAge" framework for predicting biological age, and presenting previously underutilized evaluation measures for assessing the performance of biological age predictors. The AccelerAge framework, based on Accelerated Failure Time (AFT) models, directly models the effect of candidate predictors of aging on an individual's survival time, aligning with the prevalent metaphor of aging as a clock. We compare predictors based on the AccelerAge framework to a predictor based on GrimAge, which is considered one of the best-performing biological age predictors, using simulated data as well as data from the UK Biobank and the Leiden Longevity Study. Our approach seeks to establish a robust statistical foundation for biological age clocks, enabling a more accurate and interpretable assessment of an individual's aging status.
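As a rough illustration of the modeling idea (not the AccelerAge implementation itself), the sketch below fits a Weibull accelerated failure time model with lifelines on synthetic data; covariate effects act multiplicatively on survival time, which is the sense in which candidate markers "speed up" or "slow down" the clock. The column names, the data-generating mechanism, and the use of predict_expectation as an acceleration summary are assumptions made for this sketch.

```python
import numpy as np
import pandas as pd
from lifelines import WeibullAFTFitter

rng = np.random.default_rng(1)
n = 500

# Hypothetical candidate markers of aging; in an AFT model they act
# multiplicatively on survival time: log(T) = intercept + X @ beta + error.
marker1 = rng.normal(size=n)
marker2 = rng.normal(size=n)
log_time = 4.0 - 0.3 * marker1 - 0.2 * marker2 + rng.gumbel(0, 0.5, n)
time = np.exp(log_time)
censor = rng.uniform(0, np.quantile(time, 0.9), n)

df = pd.DataFrame({
    "T": np.minimum(time, censor),
    "E": (time <= censor).astype(int),   # 1 = death observed, 0 = censored
    "marker1": marker1,
    "marker2": marker2,
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="T", event_col="E")
aft.print_summary()

# Expected survival time per individual; comparing it with that of a
# same-age reference could serve as an acceleration summary.
print(aft.predict_expectation(df).head())
```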

6.
Article in English | MEDLINE | ID: mdl-38470976

ABSTRACT

BACKGROUND: Estimating the risk of revision after arthroplasty could inform patient and surgeon decision-making. However, there is a lack of well-performing prediction models assisting in this task, which may be due to current conventional modeling approaches such as traditional survivorship estimators (such as Kaplan-Meier) or competing risk estimators. Recent advances in machine learning survival analysis might improve decision support tools in this setting. Therefore, this study aimed to assess the performance of machine learning compared with that of conventional modeling to predict revision after arthroplasty. QUESTION/PURPOSE: Does machine learning perform better than traditional regression models for estimating the risk of revision for patients undergoing hip or knee arthroplasty? METHODS: Eleven datasets from published studies from the Dutch Arthroplasty Register reporting on factors associated with revision or survival after partial or total knee and hip arthroplasty between 2018 and 2022 were included in our study. The 11 datasets were observational registry studies, with sample sizes ranging from 3038 to 218,214 procedures. We developed a set of time-to-event models for each dataset, leading to 11 comparisons. A set of predictors (factors associated with revision surgery) was identified based on the variables that were selected in the included studies. We assessed the predictive performance of two state-of-the-art statistical time-to-event models for 1-, 2-, and 3-year follow-up: a Fine and Gray model (which models the cumulative incidence of revision) and a cause-specific Cox model (which models the hazard of revision). These were compared with a machine-learning approach (a random survival forest model, a decision tree-based machine-learning algorithm for time-to-event analysis). Performance was assessed according to discriminative ability (time-dependent area under the receiver operating characteristic curve), calibration (slope and intercept), and overall prediction error (scaled Brier score). Discrimination, quantified as the area under the receiver operating characteristic curve, measures the model's ability to distinguish patients who experienced the outcome from those who did not and ranges from 0.5 to 1.0, with 1.0 indicating the highest and 0.5 the lowest discrimination. Calibration plots the predicted versus the observed probabilities; a perfect plot has an intercept of 0 and a slope of 1. The Brier score is a composite of discrimination and calibration, with 0 indicating perfect prediction and 1 the poorest. A scaled version of the Brier score, 1 - (model Brier score/null model Brier score), can be interpreted as the amount of overall prediction error. RESULTS: We found no differences in discriminative ability (distinguishing patients who received a revision from those who did not) between the machine-learning approach and the traditional regression models for patients undergoing arthroplasty. Differences in validated performance (time-dependent area under the receiver operating characteristic curve) between modeling approaches ranged from -0.04 to 0.03 across the 11 datasets and were therefore not consistent (the time-dependent area under the receiver operating characteristic curve of the models ranged from 0.52 to 0.68 across the datasets). In addition, the calibration metrics and scaled Brier scores produced comparable estimates, showing no advantage of machine learning over traditional regression models. CONCLUSION: Machine learning did not outperform traditional regression models. CLINICAL RELEVANCE: Neither machine-learning models nor traditional regression methods were sufficiently accurate to offer prognostic information when predicting revision arthroplasty; the benefit of these modeling approaches may be limited in this context.
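The scaled Brier score used above is simple to compute once predicted risks and observed outcomes at a time horizon are in hand; the sketch below follows the definition 1 - (model Brier / null-model Brier). A full time-to-event version additionally weights terms for censoring (for example, inverse probability of censoring weights), which is omitted here.

```python
import numpy as np

def scaled_brier(pred_risk, observed_event, null_risk=None):
    """Scaled Brier score: 1 - Brier(model) / Brier(null model).

    pred_risk:      predicted probability of revision by the time horizon
    observed_event: 1 if revision occurred by the horizon, 0 otherwise
    null_risk:      prediction of a covariate-free null model
                    (defaults to the observed event rate)

    Note: this simplified sketch assumes fully observed outcomes at the
    horizon; censoring-weighted versions are needed in practice.
    """
    pred_risk = np.asarray(pred_risk, dtype=float)
    observed_event = np.asarray(observed_event, dtype=float)
    if null_risk is None:
        null_risk = observed_event.mean()
    brier_model = np.mean((pred_risk - observed_event) ** 2)
    brier_null = np.mean((null_risk - observed_event) ** 2)
    return 1.0 - brier_model / brier_null

# Example: hypothetical 1-year revision risks for five patients
print(scaled_brier([0.05, 0.20, 0.10, 0.40, 0.15], [0, 0, 0, 1, 0]))
```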

7.
Lifetime Data Anal ; 30(2): 383-403, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38466520

ABSTRACT

Hazard ratios are prone to selection bias, compromising their use as causal estimands. On the other hand, if Aalen's additive hazard model applies, the hazard difference has been shown to remain unaffected by the selection of frailty factors over time. Then, in the absence of confounding, observed hazard differences are equal in expectation to the causal hazard differences. However, in the presence of effect (on the hazard) heterogeneity, the observed hazard difference is also affected by the selection of survivors. In this work, we formalize how the observed hazard difference (from a randomized controlled trial) evolves as favourable levels of effect modifiers are selected in the exposed group, and thus deviates from the causal effect of interest. Such selection may result in a non-linear integrated hazard difference curve even when the individual causal effects are time-invariant. Therefore, a homogeneous time-varying causal additive effect on the hazard cannot be distinguished from a time-invariant but heterogeneous causal effect. We illustrate this causal issue by studying the effect of chemotherapy on the survival time of patients suffering from carcinoma of the oropharynx, using data from a clinical trial. The hazard difference can thus not be used as an appropriate measure of the causal effect without making untestable assumptions.
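A small synthetic illustration of this selection effect (not taken from the paper): individual additive treatment effects are time-invariant but heterogeneous, yet the observed hazard difference estimated in successive time bins shrinks over time, because exposed survivors are increasingly those with the neutral effect.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200_000

# Time-invariant individual causal effects on the (additive) hazard:
# half the population gains +0.3, half +0.0 when exposed.
beta = rng.choice([0.0, 0.3], size=n)
exposed = rng.integers(0, 2, size=n)
baseline = 0.1
hazard = baseline + exposed * beta          # constant per-subject hazard
t = rng.exponential(1.0 / hazard)           # exponential survival times

def binned_hazard(times, edges):
    """Nonparametric hazard per time bin: events / person-time at risk."""
    haz = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        at_risk = times > lo
        person_time = np.clip(times[at_risk], lo, hi) - lo
        events = ((times > lo) & (times <= hi)).sum()
        haz.append(events / person_time.sum())
    return np.array(haz)

edges = np.linspace(0, 10, 11)
diff = binned_hazard(t[exposed == 1], edges) - binned_hazard(t[exposed == 0], edges)
for lo, d in zip(edges[:-1], diff):
    print(f"t in ({lo:4.1f}, {lo + 1:4.1f}]: observed hazard difference = {d:.3f}")
# The difference starts near the average effect (0.15) and shrinks with time,
# even though every individual causal effect is constant.
```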


Subject(s)
Proportional Hazards Models, Humans, Bias, Selection Bias, Causality
8.
Lifetime Data Anal ; 30(2): 404-438, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38358572

ABSTRACT

It is known that the hazard ratio lacks a useful causal interpretation. Even for data from a randomized controlled trial, the hazard ratio suffers from so-called built-in selection bias because, over time, the individuals at risk among the exposed and unexposed are no longer exchangeable. In this paper, we formalize how the expectation of the observed hazard ratio evolves and deviates from the causal effect of interest in the presence of heterogeneity of the hazard rate of unexposed individuals (frailty) and heterogeneity in effect (individual effect modification). For the case of effect heterogeneity, we define the causal hazard ratio. We show that the expected observed hazard ratio equals the ratio of expectations of the latent variables (frailty and modifier) conditional on survival in the world with and without exposure, respectively. Examples with gamma, inverse Gaussian and compound Poisson distributed frailty, and categorical (harming, beneficial or neutral) distributed effect modifiers are presented for illustration. This set of examples shows that an observed hazard ratio with a particular value can arise for all values of the causal hazard ratio. Therefore, the hazard ratio cannot be used as a measure of the causal effect without making untestable assumptions, stressing the importance of using more appropriate estimands, such as contrasts of survival probabilities.
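One way to write the central identity described above, under a simple multiplicative structure with frailty Z_0, effect modifier Z_1, and reference causal hazard ratio theta (this particular parametrization is an assumption of the sketch, not taken from the paper):

```latex
% Frailty Z_0, multiplicative effect modifier Z_1, reference causal hazard
% ratio \theta (notation assumed for this sketch).
\begin{align*}
  \lambda(t \mid A, Z_0, Z_1) &= Z_0\,(\theta Z_1)^{A}\,\lambda_0(t),\\[4pt]
  \mathrm{HR}_{\mathrm{obs}}(t)
    &= \frac{E\left[\lambda(t \mid A=1, Z_0, Z_1)\mid T\ge t,\, A=1\right]}
            {E\left[\lambda(t \mid A=0, Z_0, Z_1)\mid T\ge t,\, A=0\right]}
     = \theta\,\frac{E\left[Z_0 Z_1 \mid T^{a=1}\ge t\right]}
                     {E\left[Z_0 \mid T^{a=0}\ge t\right]}.
\end{align*}
```

With Z_1 degenerate at 1 this reduces to the classical frailty-induced attenuation of the hazard ratio; with heterogeneity in Z_1 the observed ratio mixes the causal effect with survivor-selection terms in both arms.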


Subject(s)
Frailty, Humans, Bias, Probability, Proportional Hazards Models, Selection Bias, Clinical Trials as Topic
9.
Ann Surg ; 278(4): e766-e772, 2023 10 01.
Article in English | MEDLINE | ID: mdl-36661037

ABSTRACT

OBJECTIVE: To analyze the risk and patterns of locoregional failure (LRF) in patients of the RAPIDO trial at 5 years. BACKGROUND: Multimodality treatment improves local control in rectal cancer. Total neoadjuvant treatment (TNT) aims to improve systemic control while local control is maintained. At 3 years, the LRF rate was comparable between TNT and chemoradiotherapy in the RAPIDO trial. METHODS: A total of 920 patients were randomized between an experimental group (EXP: short-course radiotherapy, chemotherapy, and surgery) and a standard-care group (STD: chemoradiotherapy, surgery, and optional postoperative chemotherapy). LRFs, including early LRF (no resection except for organ preservation/R2 resection) and locoregional recurrence (LRR) after an R0/R1 resection, were analyzed. RESULTS: In total, 460 EXP and 446 STD patients were eligible. At a median follow-up of 5.6 years, LRF was detected in 54/460 (12%) and 36/446 (8%) patients in the EXP and STD groups, respectively (P=0.07), among whom EXP patients had more often been treated with 3-dimensional conformal radiotherapy (P=0.029). In the EXP group, LRR was detected more often (44/431 [10%] vs. 26/428 [6%]; P=0.027), more often with a breached mesorectum (9/44 [21%] vs. 1/26 [4%]; P=0.048). The EXP treatment, enlarged lateral lymph nodes, positive circumferential resection margin, tumor deposits, and node positivity at pathology were significant predictors of LRR. The location of the LRRs was similar between groups. Overall survival after LRF was comparable (hazard ratio: 0.76 [95% CI, 0.46-1.26]; P=0.29). CONCLUSIONS: The EXP treatment was associated with an increased risk of LRR, whereas the reduction in disease-related treatment failure and distant metastases remained after 5 years. Further refinement of TNT in rectal cancer is mandated.


Subject(s)
Rectal Neoplasms, Humans, Antineoplastic Combined Chemotherapy Protocols/therapeutic use, Chemoradiotherapy, Follow-Up Studies, Neoadjuvant Therapy, Neoplasm Recurrence, Local/pathology, Neoplasm Staging, Rectal Neoplasms/pathology
10.
Gastroenterology ; 163(2): 426-436, 2022 08.
Article in English | MEDLINE | ID: mdl-35487291

ABSTRACT

BACKGROUND & AIMS: Screening for celiac disease (CD) is recommended in children with affected first-degree relatives (FDR). However, the optimal frequency of screening and at what age it should occur remain unknown. The aims of this study were to detect variables influencing the risk of CD development and to develop and validate clinical prediction models providing individualized screening advice. METHODS: We analyzed prospective data from 10 years of follow-up of the PreventCD birth cohort, involving 944 genetically predisposed children with CD-FDR. Variables significantly influencing the CD risk were combined to determine a risk score. Landmark analyses were performed at different ages. Prediction models were created using multivariable Cox proportional hazards regression analyses, backward elimination, and Harrell's c-index for discrimination. Validation was done using data from the independent NeoCel cohort. RESULTS: In March 2019, the median follow-up was 8.3 years (22 days to 12.0 years); 135/944 children developed CD (mean age, 4.3 years [range, 1.1-11.4]). CD developed significantly more often in girls (P = .005) and in Human Leukocyte Antigen (HLA)-DQ2 homozygous individuals (8-year cumulative incidence rate of 35.4% vs a maximum of 18.2% in the other HLA-risk groups [P < .001]). The effect of DR3-DQ2/DR7-DQ2 homozygosity on CD development was only present in girls (interaction P = .04). The prediction models showed good fit in the validation cohort (Cox regression 0.81 [0.54]). To calculate a personalized risk of CD development and provide screening advice, we designed the Prediction application https://hputter.shinyapps.io/preventcd/. CONCLUSION: Children with CD-FDR develop CD early in life, and their risk depends on gender, age, and HLA-DQ, all of which are important factors for sound screening advice. These children should be screened early in life, including HLA-DQ2/8 typing, and if genetically predisposed to CD, they should receive further personalized screening advice using our Prediction application. TRIAL REGISTRATION NUMBER: ISRCTN74582487 (https://www.isrctn.com/search?q=ISRCTN74582487).


Subject(s)
Celiac Disease, Celiac Disease/diagnosis, Celiac Disease/epidemiology, Celiac Disease/genetics, Child, Child, Preschool, Cohort Studies, Female, Genetic Predisposition to Disease, Humans, Prospective Studies, Risk Factors
11.
Biostatistics ; 23(2): 380-396, 2022 04 13.
Article in English | MEDLINE | ID: mdl-35417532

ABSTRACT

Multi-state models for event history analysis most commonly assume the process is Markov. This article considers tests of the Markov assumption that are applicable to general multi-state models. Two approaches using existing methodology are considered: a simple method based on including the time of entry into each state as a covariate in Cox models for the transition intensities, and a method involving detecting a shared frailty through a stratified Commenges-Andersen test. In addition, using the principle that under a Markov process the future rate of transitions at times t > s should not be influenced by the state occupied at time s, a new class of general tests is developed by considering summaries from families of log-rank statistics in which patients are grouped by the state occupied at varying initial times s. An extended form of the test applicable to models that are Markov conditional on observed covariates is also derived. The null distributions of the proposed test statistics are approximated using wild bootstrap sampling. The approaches are compared in simulation and applied to a dataset on sleeping behavior. The most powerful test depends on the particular departure from a Markov process, although the Cox-based method maintained good power in a wide range of scenarios. The proposed class of log-rank-based tests is most useful in situations where the non-Markov behavior does not persist, or is not uniform in nature across patient time.
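The first, covariate-based check is straightforward to set up; the sketch below does so for a single transition with lifelines, using hypothetical data and assumed column names. Under the Markov assumption the transition intensity does not depend on when the current state was entered, so the coefficient of the state-entry time should be close to zero.

```python
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical rows for one transition of a multi-state model (say,
# illness -> death), on the study-time clock with delayed entry.
df = pd.DataFrame({
    "state_entry": [0.5, 1.2, 2.0, 0.8, 3.1, 1.7, 2.6, 0.3],  # covariate: time of entry into the state
    "stop":        [1.6, 1.9, 4.3, 2.7, 3.5, 3.2, 3.5, 3.1],  # transition or censoring time
    "event":       [1,   1,   0,   1,   1,   0,   1,   0],
})
df["entry"] = df["state_entry"]  # same value reused for left truncation

# Cox model for the transition intensity with the entry time as covariate;
# its Wald test is the simple Markov check described above.
cph = CoxPHFitter()
cph.fit(df, duration_col="stop", event_col="event", entry_col="entry")
cph.print_summary()  # inspect the coefficient and p-value for state_entry
```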


Subject(s)
Models, Statistical, Research Design, Computer Simulation, Humans, Markov Chains, Proportional Hazards Models
12.
Biostatistics ; 2022 Sep 19.
Article in English | MEDLINE | ID: mdl-36124984

ABSTRACT

Rapidly detecting problems in the quality of care is of utmost importance for the well-being of patients. Without proper inspection schemes, such problems can go undetected for years. Cumulative sum (CUSUM) charts have proven to be useful for quality control, yet available methodology for survival outcomes is limited. The few available continuous time inspection charts usually require the researcher to specify an expected increase in the failure rate in advance, thereby requiring prior knowledge about the problem at hand. Misspecifying parameters can lead to false positive alerts and large detection delays. To solve this problem, we take a more general approach to derive the new Continuous time Generalized Rapid response CUSUM (CGR-CUSUM) chart. We find an expression for the approximate average run length (average time to detection) and illustrate the possible gain in detection speed by using the CGR-CUSUM over other commonly used monitoring schemes on a real-life data set from the Dutch Arthroplasty Register as well as in simulation studies. Besides the inspection of medical procedures, the CGR-CUSUM can also be used for other real-time inspection schemes such as industrial production lines and quality control of services.
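For context, the sketch below shows the kind of chart the abstract contrasts with: a classic Bernoulli CUSUM, which requires prespecifying both the in-control failure rate p0 and the expected out-of-control rate p1. It is not the CGR-CUSUM, whose construction avoids exactly this prespecification, and the numbers are hypothetical.

```python
import numpy as np

def bernoulli_cusum(outcomes, p0, p1, control_limit):
    """Classic one-sided Bernoulli CUSUM for 0/1 outcomes (e.g. failure
    within a fixed window after surgery).  Requires an in-control rate p0
    and a prespecified out-of-control rate p1.  Returns the chart values
    and the index of the first signal (or None)."""
    llr_event = np.log(p1 / p0)                 # log-likelihood ratio if failure
    llr_no_event = np.log((1 - p1) / (1 - p0))  # ... if no failure
    chart, w, signal_at = [], 0.0, None
    for i, x in enumerate(outcomes):
        w = max(0.0, w + (llr_event if x else llr_no_event))
        chart.append(w)
        if signal_at is None and w > control_limit:
            signal_at = i
    return np.array(chart), signal_at

rng = np.random.default_rng(3)
outcomes = np.concatenate([rng.binomial(1, 0.05, 150),   # in control
                           rng.binomial(1, 0.15, 100)])  # failure rate triples
chart, signal = bernoulli_cusum(outcomes, p0=0.05, p1=0.15, control_limit=3.5)
print("signal raised at procedure index:", signal)
```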

13.
Br J Surg ; 110(11): 1458-1466, 2023 Oct 10.
Article in English | MEDLINE | ID: mdl-37440361

ABSTRACT

BACKGROUND: Previous studies have reported conflicting results on the effect of prolonged antibiotic prophylaxis on infectious complications after pancreatoduodenectomy. This study evaluated the effect of prolonged antibiotics on surgical-site infections (SSIs) after pancreatoduodenectomy. METHODS: A systematic review and meta-analysis was undertaken of SSIs in patients with perioperative (within 24 h) versus prolonged (over 24 h) antibiotic prophylaxis after pancreatoduodenectomy. SSIs were classified as organ/space infections or superficial SSIs within 30 days after surgery. ORs were calculated using a Mantel-Haenszel fixed-effect model. RESULTS: Ten studies were included in the qualitative analysis, of which 8, reporting on 1170 patients, were included in the quantitative analysis. The duration of prolonged antibiotic prophylaxis varied between 2 and 10 days after surgery. Four studies reporting on 782 patients showed comparable organ/space infection rates in patients receiving perioperative and prolonged antibiotics (OR 1.35, 95 per cent c.i. 0.94 to 1.93). However, among patients with preoperative biliary drainage (5 studies reporting on 577 patients), organ/space infection rates were lower with prolonged compared with perioperative antibiotics (OR 2.09, 1.43 to 3.07). Three studies (633 patients) demonstrated comparable superficial SSI rates between patients receiving perioperative versus prolonged prophylaxis (OR 1.54, 0.97 to 2.44), as well as in patients with preoperative biliary drainage in 4 studies reporting on 431 patients (OR 1.60, 0.89 to 2.88). CONCLUSION: Prolonged antibiotic prophylaxis is associated with fewer organ/space infections in patients who undergo preoperative biliary drainage. However, the optimal duration of antibiotic prophylaxis after pancreatoduodenectomy remains to be determined and warrants confirmation in an RCT.
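The Mantel-Haenszel fixed-effect pooling used here is a simple weighted combination of study-level 2x2 tables; the sketch below shows the calculation on hypothetical counts (the studies' actual tables are not reproduced).

```python
import numpy as np

def mantel_haenszel_or(tables):
    """Mantel-Haenszel fixed-effect pooled odds ratio.

    tables: list of per-study 2x2 tables laid out as
            [[events_prolonged, no_events_prolonged],
             [events_perioperative, no_events_perioperative]]
    """
    num = den = 0.0
    for (a, b), (c, d) in tables:
        n = a + b + c + d
        num += a * d / n
        den += b * c / n
    return num / den

# Hypothetical 2x2 tables (illustrative counts, not the included studies)
studies = [
    [[12, 88], [20, 80]],
    [[ 8, 92], [15, 85]],
    [[10, 90], [18, 82]],
]
print(f"Pooled Mantel-Haenszel OR: {mantel_haenszel_or(studies):.2f}")
```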


Almost 40 in 100 patients develop an infection after pancreatic surgery. This study collected research that examined the effect of prolonged antibiotics after pancreatic surgery on the number of infections after surgery. Research articles were selected if patients who received antibiotics only during surgery were compared with those who had prolonged antibiotics after surgery. Prolonged antibiotics means antibiotics given for longer than 24 h after surgery. Comparing patients who had antibiotics only during surgery with those who received prolonged antibiotics after surgery, this study focused on the number of abdominal infections and wound infections. Ten studies were selected, and these studies included 1170 patients in total. The duration of prolonged antibiotics ranged from 2 to 5 days after pancreatic surgery. Four studies (with 782 patients) showed comparable numbers of abdominal infections in patients who had antibiotics only during surgery and those who had prolonged antibiotics after surgery (OR 1.35, 95 per cent c.i. 0.94 to 1.93). However, for patients with a stent in the bile duct (5 studies on 577 patients), fewer abdominal infections were seen in patients who had prolonged antibiotics after surgery compared with patients who received antibiotics only during surgery (OR 2.09, 1.43 to 3.07). Three studies (633 patients) showed the same rate of wound infections in patients who had antibiotics only during surgery compared with those who received prolonged antibiotics after the operation (OR 1.54, 0.97 to 2.44). The number of wound infections was also the same in patients with a stent in the bile duct (OR 1.60, 0.89 to 2.88). Prolonged antibiotics after pancreatic surgery seem to lower abdominal infections in patients who have a stent placed in the bile duct. However, the best duration of antibiotics is unclear; a well-designed trial is needed.

14.
Haematologica ; 108(4): 1105-1114, 2023 04 01.
Article in English | MEDLINE | ID: mdl-35770529

ABSTRACT

Primary plasma cell leukemia (pPCL) is a rare and challenging malignancy. There are limited data regarding optimum transplant approaches. We therefore undertook a retrospective analysis, covering 1998-2014, of 751 patients with pPCL undergoing one of four transplant strategies: single autologous transplant (single auto), single allogeneic transplant (allo-first), or a combined tandem approach with an allogeneic transplant following an autologous transplant (auto-allo) or a tandem autologous transplant (auto-auto). To avoid time bias, multiple analytic approaches were employed, including Cox models with time-dependent covariates and dynamic prediction by landmarking. Initial comparisons were made between patients undergoing allo-first (n=70) versus auto-first (n=681), regardless of a subsequent second transplant. The allo-first group had a lower relapse rate (45.9%, 95% confidence interval [95% CI]: 33.2-58.6 vs. 68.4%, 64.4-72.4) but higher non-relapse mortality (27%, 95% CI: 15.9-38.1 vs. 7.3%, 5.2-9.4) at 36 months. Patients who underwent allo-first had a markedly higher risk within the first 100 days for both overall survival and progression-free survival. Patients undergoing auto-allo (n=122) had no increased risk in the short term and a significant benefit in progression-free survival after 100 days compared with those undergoing single auto (hazard ratio [HR]=0.69, 95% CI: 0.52-0.92; P=0.012). Auto-auto (n=117) was an effective option for patients achieving complete remission prior to their first transplant, whereas in patients who did not achieve complete remission prior to transplantation our modeling predicted that auto-allo was superior. This is the largest retrospective study reporting on transplantation in pPCL to date. We confirm a significant mortality risk within the first 100 days for allo-first and suggest that tandem transplant strategies are superior. Disease status at the time of transplant influences outcome. This knowledge may help to guide clinical decisions on transplant strategy.


Subject(s)
Hematopoietic Stem Cell Transplantation, Leukemia, Plasma Cell, Humans, Retrospective Studies, Transplantation, Homologous, Leukemia, Plasma Cell/diagnosis, Leukemia, Plasma Cell/therapy, Disease-Free Survival, Hematopoietic Stem Cell Transplantation/adverse effects, Transplantation, Autologous, Recurrence
15.
Biometrics ; 79(3): 1646-1656, 2023 09.
Article in English | MEDLINE | ID: mdl-36124563

ABSTRACT

The additive hazards model specifies the effect of covariates on the hazard in an additive way, in contrast to the popular Cox model, in which it is multiplicative. As a non-parametric model, the additive hazards model offers a very flexible way of modeling time-varying covariate effects. It is most commonly estimated by ordinary least squares. In this paper, we consider the case where covariates are bounded, and derive the maximum likelihood estimator under the constraint that the hazard is non-negative for all covariate values in their domain. We show that the maximum likelihood estimator may be obtained by separately maximizing the log-likelihood contribution of each event time point, and that this maximization problem is equivalent to fitting a series of Poisson regression models with an identity link under non-negativity constraints. We derive an analytic solution for the maximum likelihood estimator. We contrast the maximum likelihood estimator with the ordinary least-squares estimator in a simulation study and show that the maximum likelihood estimator has smaller mean squared error than the ordinary least-squares estimator. An illustration with data on patients with carcinoma of the oropharynx is provided.
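The equivalence to identity-link Poisson regression under non-negativity constraints can be sketched directly: the snippet below maximizes a Poisson log-likelihood with mu_i = x_i' beta subject to mu_i >= 0 at the observed covariate values. It mirrors the building block described above (one such fit per event time), not the paper's full estimator or its analytic solution, and the data are synthetic.

```python
import numpy as np
from scipy.optimize import minimize

def poisson_identity_mle(X, y):
    """Poisson regression with an identity link, mu_i = x_i' beta, fitted by
    maximum likelihood under the constraint mu_i >= 0 for all observations."""
    n, p = X.shape

    def neg_loglik(beta):
        mu = np.clip(X @ beta, 1e-12, None)      # guard the log at the boundary
        return np.sum(mu - y * np.log(mu))

    constraint = {"type": "ineq", "fun": lambda beta: X @ beta}  # mu_i >= 0
    beta0 = np.full(p, 0.1)                       # feasible starting point
    res = minimize(neg_loglik, beta0, constraints=[constraint], method="SLSQP")
    return res.x

rng = np.random.default_rng(5)
n = 300
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])   # bounded covariate
true_beta = np.array([0.2, 0.5])
y = rng.poisson(X @ true_beta)
print("estimated beta:", poisson_identity_mle(X, y).round(3))
```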


Subject(s)
Proportional Hazards Models, Humans, Likelihood Functions, Least-Squares Analysis, Computer Simulation
16.
Stat Med ; 42(7): 1066-1081, 2023 03 30.
Article in English | MEDLINE | ID: mdl-36694108

ABSTRACT

Unobserved individual heterogeneity is a common challenge in population cancer survival studies. This heterogeneity is usually associated with the combination of model misspecification and the failure to record truly relevant variables. We investigate the effects of unobserved individual heterogeneity in the context of excess hazard models, one of the main tools in cancer epidemiology. We propose an individual excess hazard frailty model to account for individual heterogeneity. This represents an extension of frailty modeling to the relative survival framework. In order to facilitate the inference on the parameters of the proposed model, we select frailty distributions which produce closed-form expressions of the marginal hazard and survival functions. The resulting model allows for an intuitive interpretation, in which the frailties induce a selection of the healthier individuals among survivors. We model the excess hazard using a flexible parametric model with a general hazard structure which facilitates the inclusion of time-dependent effects. We illustrate the performance of the proposed methodology through a simulation study. We present a real-data example using data from lung cancer patients diagnosed in England, and discuss the impact of not accounting for unobserved heterogeneity on the estimation of net survival. The methodology is implemented in the R package IFNS.
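The abstract does not state which frailty distributions are selected, but a gamma frailty is one standard choice that yields the kind of closed-form marginal hazard and survival functions described; a sketch under that assumption:

```latex
% Gamma frailty Z with mean 1 and variance \theta acting on the excess hazard
% \lambda_E(t \mid x); \Lambda_E is the cumulative excess hazard.  Because the
% population hazard \lambda_{\mathrm{pop}}(t) does not involve Z, the frailty
% marginalizes in closed form over survivors:
\begin{align*}
  \lambda(t \mid x, Z) &= \lambda_{\mathrm{pop}}(t) + Z\,\lambda_E(t \mid x),\\[4pt]
  \bar\lambda_E(t \mid x) &= \frac{\lambda_E(t \mid x)}{1 + \theta\,\Lambda_E(t \mid x)},
  \qquad
  \bar S_E(t \mid x) = \bigl(1 + \theta\,\Lambda_E(t \mid x)\bigr)^{-1/\theta},
\end{align*}
% and E[Z \mid T \ge t] = (1 + \theta \Lambda_E(t \mid x))^{-1} < 1, which is the
% "selection of the healthier individuals among survivors" described above.
```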


Subject(s)
Frailty, Lung Neoplasms, Humans, Proportional Hazards Models, Survival Analysis, Models, Statistical
17.
J Endovasc Ther ; 30(3): 364-371, 2023 06.
Article in English | MEDLINE | ID: mdl-35236169

ABSTRACT

PURPOSE: Near-infrared (NIR) fluorescence imaging using indocyanine green (ICG) is gaining popularity for the quantification of tissue perfusion, including foot perfusion in patients with lower extremity arterial disease (LEAD). However, the absolute fluorescence intensity is influenced by patient- and system-related factors, limiting reliable and valid quantification. To enhance the quality of quantitative perfusion assessment using ICG NIR fluorescence imaging, normalization of the measured time-intensity curves seems useful. MATERIALS AND METHODS: In this cohort study, the effect of normalization on 2 aspects of ICG NIR fluorescence imaging in the assessment of foot perfusion was measured: repeatability and region selection. Following intravenous administration of ICG, the NIR fluorescence intensity in both feet was recorded for 10 minutes using the Quest Spectrum platform®. The effect of normalization on repeatability was measured preprocedurally and postprocedurally in the nontreated foot of patients undergoing unilateral revascularization (repeatability group). The effect of normalization on region selection was assessed in patients without LEAD (region selection group). Absolute and normalized time-intensity curves were compared. RESULTS: Successful ICG NIR fluorescence imaging was performed in 54 patients (repeatability group, n = 38; region selection group, n = 16). For the repeatability group, normalized time-intensity curves displayed a comparable inflow pattern across repeated measurements. For the region selection group, the maximum fluorescence intensity (Imax) demonstrated significant differences between the 3 measured regions of the foot (P = .002). Following normalization, the time-intensity curves in both feet were comparable for all 3 regions. CONCLUSION: This study shows the effect of normalization of time-intensity curves on both repeatability and region selection in ICG NIR fluorescence imaging. The significant difference between absolute parameters in various regions of the foot demonstrates the limitation of absolute intensity in interpreting tissue perfusion. Therefore, normalization and standardization of camera settings are essential steps toward reliable and valid quantification of tissue perfusion using ICG NIR fluorescence imaging.
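A minimal sketch of curve normalization is shown below (baseline subtraction followed by scaling to the curve maximum); the exact normalization applied to the Quest Spectrum recordings may differ, and the curves are synthetic.

```python
import numpy as np

def normalize_curve(intensity, baseline_frames=10):
    """Normalize an ICG fluorescence time-intensity curve so that curves from
    different patients, cameras, or regions become comparable: subtract the
    pre-inflow background and rescale to the curve's maximum (0..1)."""
    intensity = np.asarray(intensity, dtype=float)
    baseline = intensity[:baseline_frames].mean()
    shifted = intensity - baseline
    return shifted / shifted.max()

# Two hypothetical curves with very different absolute intensities but the
# same inflow pattern end up on a common 0-to-1 scale.
t = np.arange(0, 600)                                 # ~10 min at 1 frame/s
curve_a = 20 + 180 * (1 - np.exp(-t / 60.0))
curve_b = 5 + 40 * (1 - np.exp(-t / 60.0))
print(np.allclose(normalize_curve(curve_a), normalize_curve(curve_b)))  # True
```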


Subject(s)
Indocyanine Green, Lower Extremity, Humans, Cohort Studies, Treatment Outcome, Optical Imaging/methods, Perfusion
18.
BMC Med Res Methodol ; 23(1): 51, 2023 02 24.
Article in English | MEDLINE | ID: mdl-36829145

ABSTRACT

BACKGROUND: In health research, several chronic diseases are susceptible to competing risks (CRs). Statistical models (SM) were initially developed to estimate the cumulative incidence of an event in the presence of CRs. As there has recently been growing interest in applying machine learning (ML) to clinical prediction, these techniques have also been extended to model CRs, but the literature is limited. Here, our aim is to investigate the potential role of ML versus SM for CRs within non-complex data (small/medium sample size, low-dimensional setting). METHODS: A dataset of 3826 retrospectively collected patients with extremity soft-tissue sarcoma (eSTS) and nine predictors is used to evaluate model predictive performance in terms of discrimination and calibration. Two SM (cause-specific Cox, Fine-Gray) and three ML techniques are compared for CRs in a simple clinical setting. The ML models include an original partial logistic artificial neural network for CRs (PLANNCR original), a PLANNCR with novel specifications in terms of architecture (PLANNCR extended), and a random survival forest for CRs (RSFCR). The clinical endpoint is the time in years between surgery and disease progression (event of interest) or death (competing event). Time points of interest are 2, 5, and 10 years. RESULTS: Based on the original eSTS data, 100 bootstrapped training datasets are drawn. Performance of the final models is assessed on validation data (left-out samples) using the Brier score and the Area Under the Curve (AUC) with CRs as measures. Miscalibration (absolute accuracy error) is also estimated. The results show that the ML models are able to reach performance comparable to the SM at 2, 5, and 10 years for both the Brier score and the AUC (95% confidence intervals overlapped). However, the SM are frequently better calibrated. CONCLUSIONS: Overall, ML techniques are less practical as they require substantial implementation time (data preprocessing, hyperparameter tuning, computational intensity), whereas regression methods can perform well without the additional workload of model training. As such, for non-complex real-life survival data, these techniques should only be applied as a complement to SM, as exploratory tools of model performance. More attention to model calibration is urgently needed.


Subject(s)
Machine Learning, Models, Statistical, Humans, Prognosis, Retrospective Studies, Neural Networks, Computer
19.
Ann Vasc Surg ; 93: 308-318, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36773932

ABSTRACT

BACKGROUND: When introducing new techniques, attention must be paid to the learning curve. Besides quantitative outcomes, qualitative factors of influence should be taken into consideration. This retrospective cohort study describes the quantitative learning curve of complex endovascular aortic repair (EVAR) in a nonhigh-volume academic center and identifies qualitative factors that were perceived as contributors to this learning curve. With these factors, we aim to aid future implementation of new techniques. METHODS: All patients undergoing complex EVAR in the Leiden University Medical Center (LUMC) between July 2013 and April 2021 were included (n = 90). Quantitative outcomes were as follows: operating time, blood loss, volume of contrast, hospital stay, major adverse events (MAE), 30-day mortality, and complexity. Patients were divided into 3 temporal groups (n = 30 each) for dichotomous outcomes. Regression plots were used for continuous outcomes. In 2017, the treatment team was interviewed by an external researcher. These interviews were reanalyzed for factors that contributed to successful implementation. RESULTS: Length of hospital stay (P = 0.008) and operating time (P = 0.010) decreased significantly over time. Fewer cardiac complications occurred in the third group (3: 0% vs. 2: 17% vs. 1: 17%, P = 0.042). There was a trend toward increasing complexity (P = 0.076) and number of fenestrations (P = 0.060). No significant changes occurred in MAE or 30-day mortality. Qualitative factors that, according to the interviewees, positively influenced the learning curve were as follows: communication, mutual trust, a shared sense of responsibility and collective goals, clear authoritative structures, mutual learning, and team capabilities. CONCLUSIONS: In addition to factors previously identified in the literature, new learning curve factors were found (mutual learning and shared goals in the operating room [OR]) that should be taken into account when implementing new techniques.


Subject(s)
Aortic Aneurysm, Abdominal, Blood Vessel Prosthesis Implantation, Endovascular Procedures, Humans, Endovascular Aneurysm Repair, Risk Factors, Retrospective Studies, Aortic Aneurysm, Abdominal/surgery, Learning Curve, Endovascular Procedures/adverse effects, Time Factors, Treatment Outcome, Postoperative Complications
20.
Ann Vasc Surg ; 93: 283-290, 2023 Jul.
Article in English | MEDLINE | ID: mdl-36642169

ABSTRACT

BACKGROUND: The angiosome concept is defined as the anatomical territory of a source artery within all tissue layers. When applying this theory in vascular surgery, direct revascularization (DR) is preferred to achieve increased blood flow toward the targeted angiosome of the foot in patients with lower extremity arterial disease (LEAD). This study evaluates the applicability of the angiosome concept using quantified near-infrared (NIR) fluorescence imaging with indocyanine green (ICG). METHODS: This study included patients undergoing an endovascular or surgical revascularization of the leg between January 2019 and December 2021. Preinterventional and postinterventional ICG NIR fluorescence imaging was performed. Three angiosomes on the dorsum of the foot were determined: the posterior tibial artery (hallux), the anterior tibial artery (dorsum of the foot), and the combined angiosome (second to fifth digit). The revascularized angiosomes were identified from the electronic patient records, and the degree of collateralization was classified based on preprocedural computed tomography angiography and/or X-ray angiography. Fluorescence intensity was quantified in all angiosomes. Subgroup analyses were performed based on endovascular versus surgical revascularization of the angiosomes and within chronic limb-threatening ischemia (CLTI) patients. RESULTS: ICG NIR fluorescence measurements were obtained in 52 patients (54 limbs), covering a total of 157 angiosomes (121 DR and 36 indirect revascularizations [IR]). A significant improvement of all perfusion parameters in both the directly and indirectly revascularized angiosomes was found (P-values between <0.001 and 0.007). Within the indirectly revascularized angiosomes, 90.6% of the scored collaterals were classified as significant. When comparing the percentual change in perfusion parameters between the directly and indirectly revascularized angiosomes, no significant difference was seen in any perfusion parameter (P-values between 0.253 and 0.881). Similar results were shown in the subgroup analysis of CLTI patients, displaying a significant improvement of perfusion parameters in both the direct and indirect angiosome groups (P-values between <0.001 and 0.007), and no significant difference when comparing the percentual parameter improvement between both angiosome groups (P-values between 0.134 and 0.359). Furthermore, no significant differences were observed when comparing percentual changes of perfusion parameters in directly and indirectly revascularized angiosomes for both endovascular and surgical interventions (P-values between 0.053 and 0.899). CONCLUSIONS: This study shows that both DR and IR of an angiosome lead to an improvement in perfusion. This suggests that interventional strategies should not only focus on creating in-line flow to the supplying angiosome. One can argue that the angiosome concept is not applicable in patients with LEAD.


Subject(s)
Indocyanine Green, Limb Salvage, Humans, Treatment Outcome, Limb Salvage/methods, Foot/blood supply, Tibial Arteries, Ischemia, Regional Blood Flow