ABSTRACT
Polygenic risk scores (PRS) enhance population risk stratification and advance personalized medicine, but existing methods face several limitations, including computational burden, predictive accuracy, and adaptability to a wide range of genetic architectures. To address these issues, we propose Aggregated L0Learn using Summary-level data (ALL-Sum), a fast and scalable ensemble learning method for computing PRS using summary statistics from genome-wide association studies (GWAS). ALL-Sum leverages an L0L2 penalized regression and ensembles results across tuning parameters to flexibly model traits with diverse genetic architectures. In extensive large-scale simulations across a wide range of polygenicity and GWAS sample sizes, ALL-Sum consistently outperformed popular alternative methods in prediction accuracy, runtime, and memory usage by 10%, 20-fold, and threefold, respectively, and demonstrated robustness to diverse genetic architectures. We validated ALL-Sum in real-data analyses of 11 complex traits using GWAS summary statistics from nine data sources, including the Global Lipids Genetics Consortium, Breast Cancer Association Consortium, and FinnGen Biobank, with validation in the UK Biobank. On average, ALL-Sum obtained PRS with 25% higher accuracy, 15 times faster computation, and half the memory usage relative to current state-of-the-art methods, and performed robustly across a wide range of traits and diseases. Furthermore, prediction remained stable when linkage disequilibrium was computed from different data sources. ALL-Sum is available as a user-friendly R software package with publicly available reference data for streamlined analysis.
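The ensembling step described above (aggregating penalized solutions obtained at different L0L2 tuning parameters into a single score) can be pictured with a minimal sketch. The abstract does not specify the estimator details, so the effect-size matrix and the plain averaging below are illustrative assumptions, not the ALL-Sum implementation.

```python
import numpy as np

# Hypothetical inputs: genotype dosages (n individuals x p variants) and a
# matrix of penalized effect-size estimates, one column per (lambda0, lambda2)
# tuning-parameter pair returned by a summary-statistic L0L2 solver.
rng = np.random.default_rng(0)
genotypes = rng.integers(0, 3, size=(1000, 5000)).astype(float)   # 0/1/2 allele counts
betas = rng.normal(0, 0.01, size=(5000, 20)) * (rng.random((5000, 20)) < 0.01)

# One candidate PRS per tuning-parameter setting.
candidate_prs = genotypes @ betas                                  # shape: (1000, 20)

# Ensemble step: combine the candidate scores, e.g., by a simple average or by
# weights learned in a tuning sample (a plain average is used here as a placeholder).
ensemble_prs = candidate_prs.mean(axis=1)
print(ensemble_prs[:5])
```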
Subject(s)
Genome-Wide Association Study; Multifactorial Inheritance; Humans; Multifactorial Inheritance/genetics; Genome-Wide Association Study/methods; Machine Learning; Genetic Predisposition to Disease; Polymorphism, Single Nucleotide
ABSTRACT
INTRODUCTION: Despite the serious risks of diabetes with hepatitis C virus (HCV) infection, this preventable comorbidity is rarely a priority for HCV elimination. We aimed to examine how a shared care model could eliminate HCV in patients with diabetes (PwD) in primary care. METHODS: There were 27 community-based Diabetes Health Promotion Institutes in the townships/cities of Changhua, Taiwan. PwD from these institutes were enrolled from January 2018 to December 2020. HCV screening and treatment were integrated into structured diabetes care through collaboration between the diabetes care and HCV care teams. Outcome measures included HCV care continuum indicators. Township/city variation in HCV infection prevalence and care cascades was also examined. RESULTS: Of the 10,684 eligible PwD, 9,984 (93.4%) underwent HCV screening, revealing a 6.18% (n = 617) anti-HCV seroprevalence. Among the 597 eligible seropositive individuals, 507 (84.9%) completed RNA testing, of whom 71.8% were positive. Treatment was initiated by 327 (89.8%) of 364 viremic patients, and 315 (86.5%) completed it, resulting in a final cure rate of 79.4% (n = 289). Overall, with the introduction of antivirals in this cohort, the prevalence of viremic HCV infection dropped from 4.44% to 1.34%, a 69.70% (95% credible interval 63.64%-77.03%) relative reduction. DISCUSSION: Although HCV prevalence varied, the care cascades achieved consistent results across townships/cities. We have further successfully implemented the model in county-wide hospital-based diabetes clinics, eventually treating 89.6% of the total PwD. A collaborative effort between diabetes care and HCV elimination enhanced testing and treatment in PwD through an innovative shared care model.
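For illustration, the care-continuum indicators quoted above can be recomputed directly from the reported counts (a small arithmetic check, not study code):

```python
# Care-cascade proportions reported in the abstract, recomputed for illustration.
screened, eligible = 9_984, 10_684
anti_hcv_pos = 617
rna_tested, rna_eligible = 507, 597
viremic, treated, completed, cured = 364, 327, 315, 289

print(f"screening uptake: {screened / eligible:.1%}")              # ~93.4%
print(f"anti-HCV seroprevalence: {anti_hcv_pos / screened:.2%}")   # ~6.18%
print(f"RNA testing completed: {rna_tested / rna_eligible:.1%}")   # ~84.9%
print(f"treatment initiation: {treated / viremic:.1%}")            # ~89.8%
print(f"cure (of viremic patients): {cured / viremic:.1%}")        # ~79.4%
```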
ABSTRACT
OBJECTIVE: Exercise improves health, but illnesses can cause changes in exercise behavior, including starting or stopping. This study investigated the effects of chronic disease screening on inactive individuals' exercise behavior and analyzed the impact of age and chronic disease history on this relationship using stratified analysis. METHODS: Using a community-based prospective observational cohort design and data from the Changhua Community-Based Integrated Screening (CHCIS) dataset from 2005 to 2020, we examined 12,038 people who were screened at least twice and self-reported having never exercised at their first screening. Changes in exercise behavior were classified as "initiating exercise" or "remaining inactive." We obtained chronic disease screening results from CHCIS records, which included measurements of waist circumference, blood glucose, blood pressure, triglycerides, and high-density lipoproteins. Cox proportional hazards regression was performed using SAS version 9.4. RESULTS: The findings indicated that abnormal waist circumference and blood pressure increased the likelihood of initiating exercise compared to normal results. Age stratification showed that those aged 40-49 with abnormal results were more likely to start exercising than participants with normal results, but this was not observed in those under 40 or over 65. When stratified by chronic disease history, abnormal screening results correlated with exercise initiation only in groups without a chronic disease history, except for those with a history of hyperlipidemia. CONCLUSIONS: This is the first study to demonstrate that abnormal screening results may influence exercise initiation in individuals who have never exercised, and this association varies by screening item, age, and disease history.
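The analysis described above is a Cox proportional hazards regression of exercise initiation on screening results, stratified by age. A minimal sketch of that model class using the lifelines package follows; the variable names and simulated data are placeholders, not CHCIS fields.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 2000
df = pd.DataFrame({
    "abnormal_waist": rng.integers(0, 2, n),   # abnormal waist circumference (0/1)
    "abnormal_bp":    rng.integers(0, 2, n),   # abnormal blood pressure (0/1)
    "age_group":      rng.choice(["<40", "40-49", "50-64", "65+"], n),
})
# Simulated follow-up time and exercise-initiation event indicator (placeholder data only).
df["followup_years"] = rng.exponential(5, n)
df["initiated_exercise"] = rng.integers(0, 2, n)

cph = CoxPHFitter()
# Stratifying by age group mirrors the stratified analysis described above.
cph.fit(df, duration_col="followup_years", event_col="initiated_exercise",
        strata=["age_group"])
cph.print_summary()
```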
Subject(s)
Sedentary Behavior; Humans; Prospective Studies; Taiwan; Blood Pressure/physiology; Chronic Disease
ABSTRACT
PURPOSE: Despite the increase in outpatient total knee arthroplasty (TKA) procedures, many patients are still discharged to non-home locations following index surgery. The ability to accurately predict non-home discharge (NHD) following TKA has the potential to promote a reduction in associated adverse events and excess healthcare costs. This study aimed to evaluate whether a machine learning (ML) model could outperform the American College of Surgeons (ACS) Risk Calculator in predicting NHD following TKA, using the same set of clinical variables. We hypothesised that the ML model would outperform the ACS Risk Calculator. METHODS: Data from 365,240 patients who underwent a primary TKA between 2013 and 2020 were extracted from the ACS-National Surgical Quality Improvement Program database and used to develop an artificial neural network (ANN) to predict discharge disposition following primary TKA. The ANN and ACS calculator were assessed and compared using discrimination, calibration and decision curve analysis. RESULTS: Age (>68 years), BMI (>35.5 kg/m2) and ASA Class (≥2) were found to be the most important variables in predicting NHD following TKA. When compared to the ACS calculator, the ANN model demonstrated significantly superior discrimination of NHD patients, as measured by the area under the receiver operating characteristic curve (AUC), and provided probability predictions well aligned with the true outcomes (AUCANN = 0.69, AUCACS = 0.50, p = 0.002, slopeANN = 0.85, slopeACS = 4.46, interceptANN = 0.04, and interceptACS = 0.06). CONCLUSION: Our findings support the hypothesis that machine learning models outperform the ACS Risk Calculator in predicting non-home discharge after TKA, even when constrained to the same clinical variables. Our findings underscore the potential benefits of integrating machine learning models into clinical practice for improving preoperative patient risk identification, optimisation, counselling and clinical decision-making. LEVEL OF EVIDENCE: III.
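Discrimination (AUC), Brier score, and calibration slope/intercept recur throughout these prediction studies. Below is a generic sketch of how such metrics can be computed from predicted probabilities, shown on synthetic data; the joint recalibration regression is one common convention, not necessarily the exact procedure used in this study.

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score, brier_score_loss

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)                                        # observed NHD (0/1)
p = np.clip(0.3 * y + 0.35 + rng.normal(0, 0.15, 5000), 0.01, 0.99) # predicted risk

print("AUC:  ", round(roc_auc_score(y, p), 3))
print("Brier:", round(brier_score_loss(y, p), 3))

# Calibration slope/intercept: logistic regression of the outcome on the
# log-odds of the predicted probabilities (a standard recalibration model).
logit_p = np.log(p / (1 - p))
recal = sm.Logit(y, sm.add_constant(logit_p)).fit(disp=0)
intercept, slope = recal.params
print(f"calibration slope = {slope:.2f}, intercept = {intercept:.2f}")
```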
ABSTRACT
BACKGROUND: Although risk calculators are used to prognosticate postoperative outcomes following revision total hip and knee arthroplasty (total joint arthroplasty [TJA]), machine learning (ML) based predictive tools have emerged as a promising alternative for improved risk stratification. This study aimed to compare the predictive ability of ML models for 30-day mortality following revision TJA to that of traditional risk-assessment indices such as the CARDE-B score (congestive heart failure, albumin < 3.5 g/dL, renal failure on dialysis, dependence for daily living, elderly [> 65 years of age], and body mass index [BMI] < 25 kg/m2), the 5-item modified frailty index (5MFI), and the 6MFI. METHODS: Adult patients undergoing revision TJA between 2013 and 2020 were selected from the American College of Surgeons National Surgical Quality Improvement Program database and randomly split 80:20 to compose the training and validation cohorts. Three ML models (extreme gradient boosting, random forest, and elastic-net penalized logistic regression [NEPLR]) were developed and evaluated using discrimination, calibration metrics, and accuracy. The discrimination of the CARDE-B, 5MFI, and 6MFI scores was assessed individually and compared to that of the ML models. RESULTS: All models were equally accurate (Brier score = 0.005) and demonstrated outstanding discrimination with similar areas under the receiver operating characteristic curve (AUCs; extreme gradient boosting = 0.94, random forest = NEPLR = 0.93). The NEPLR was the best-calibrated model overall (slope = 0.54, intercept = -0.004). The CARDE-B had the highest discrimination among the scores (AUC = 0.89), followed by the 6MFI (AUC = 0.80) and 5MFI (AUC = 0.68). Albumin < 3.5 g/dL and BMI < 30.15 kg/m2 were the most important predictors of 30-day mortality following revision TJA. CONCLUSIONS: The ML models outperform traditional risk-assessment indices in predicting postoperative 30-day mortality after revision TJA. Our findings highlight the utility of ML for risk stratification in a clinical setting. The identification of hypoalbuminemia and BMI as prognostic markers may allow patient-specific perioperative optimization strategies to improve outcomes following revision TJA.
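One of the three models above is an elastic-net penalized logistic regression (NEPLR); a minimal scikit-learn sketch of that model class on synthetic data follows (the hyperparameters are arbitrary placeholders, not those of the study).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(5000, 20))                      # e.g., labs, vitals, demographics
y = (X[:, 0] - 0.8 * X[:, 1] + rng.normal(0, 1, 5000) > 1.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# The elastic-net penalty requires the saga solver; l1_ratio mixes the L1 and L2 terms.
neplr = LogisticRegression(penalty="elasticnet", solver="saga",
                           l1_ratio=0.5, C=1.0, max_iter=5000)
neplr.fit(X_train, y_train)
print("test AUC:", round(roc_auc_score(y_test, neplr.predict_proba(X_test)[:, 1]), 3))
```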
Subject(s)
Arthroplasty, Replacement, Hip; Arthroplasty, Replacement, Knee; Frailty; Machine Learning; Reoperation; Humans; Aged; Male; Female; Risk Assessment/methods; Frailty/mortality; Reoperation/statistics & numerical data; Middle Aged; Algorithms; Risk Factors; Aged, 80 and over
ABSTRACT
BACKGROUND: While the predictive capabilities of machine learning (ML) algorithms for hip and knee total joint arthroplasty (TJA) have been demonstrated in previous studies, their performance in racial and ethnic minority patients has not been investigated. This study aimed to assess the performance of ML algorithms in predicting 30-day complications following TJA in racial and ethnic minority patients. METHODS: A total of 267,194 patients undergoing primary TJA between 2013 and 2020 were identified from a national outcomes database. The patient cohort was stratified according to race, with further sub-stratification into Hispanic or non-Hispanic ethnicity. Two ML algorithms, histogram-based gradient boosting (HGB) and random forest (RF), were trained to predict 30-day complications following primary TJA in the overall population. They were subsequently assessed in each racial and ethnic subcohort using discrimination, calibration, accuracy, and potential clinical usefulness. RESULTS: Both models achieved excellent (area under the curve [AUC] > 0.8) discrimination (AUCHGB = AUCRF = 0.86), calibration, and accuracy (HGB: slope = 1.00, intercept = -0.03, Brier score = 0.12; RF: slope = 0.97, intercept = 0.02, Brier score = 0.12) in the non-Hispanic White population (N = 224,073). Discrimination decreased in the White Hispanic (N = 10,429; AUC = 0.75 to 0.76), Black (N = 25,116; AUC = 0.77), Black Hispanic (N = 240; AUC = 0.78), Asian non-Hispanic (N = 4,809; AUC = 0.78 to 0.79), and overall (N = 267,194; AUC = 0.75 to 0.76) cohorts, but the models remained well-calibrated. We noted the poorest model discrimination (N = 1,870; AUC = 0.67 to 0.68) and calibration in the American Indian cohort. CONCLUSION: The ML algorithms demonstrate inferior predictive ability for 30-day complications following primary TJA in racial and ethnic minorities when trained on existing healthcare big data. This may be attributed to the disproportionate underrepresentation of minority groups within these databases, as demonstrated by the smaller sample sizes available to train the models. ML models developed using smaller datasets (e.g., in racial and ethnic minorities) may not be as accurate as those developed using larger datasets, highlighting the need for equity-conscious model development.
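The subgroup evaluation described above amounts to computing discrimination separately within each racial and ethnic stratum of a held-out set; a hedged sketch on synthetic data follows (the group labels, prevalences, and predictions are illustrative only).

```python
import numpy as np
import pandas as pd
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 20000
df = pd.DataFrame({
    "group": rng.choice(["White non-Hispanic", "White Hispanic", "Black",
                         "Asian non-Hispanic", "American Indian"],
                        size=n, p=[0.84, 0.04, 0.09, 0.02, 0.01]),
    "y_true": rng.integers(0, 2, n),     # 30-day complication (0/1)
    "y_pred": rng.random(n),             # model-predicted probability
})
# Make the predictions mildly informative so the subgroup AUCs are non-trivial.
df["y_pred"] = np.clip(df["y_pred"] * 0.6 + df["y_true"] * 0.3, 0, 1)

# Discrimination evaluated separately within each racial/ethnic subcohort.
for group, sub in df.groupby("group"):
    auc = roc_auc_score(sub["y_true"], sub["y_pred"])
    print(f"{group:<22} n={len(sub):>6}  AUC={auc:.2f}")
```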
ABSTRACT
Importance: Effects of screening for Helicobacter pylori on gastric cancer incidence and mortality are unknown. Objective: To evaluate the effects of an invitation to screen for H pylori on gastric cancer incidence and mortality. Design, Setting, and Participants: A pragmatic randomized clinical trial of residents aged 50 to 69 years in Changhua County, Taiwan, eligible for biennial fecal immunochemical tests (FIT) for colon cancer screening. Participants were randomized to either an invitation for H pylori stool antigen (HPSA) + FIT assessment or FIT alone. The study was conducted between January 1, 2014, and September 27, 2018. Final follow-up occurred December 31, 2020. Intervention: Invitation for testing for H pylori stool antigen. Main Outcomes and Measures: The primary outcomes were gastric cancer incidence and gastric cancer mortality. All invited individuals were analyzed according to the groups to which they were randomized. Results: Of 240,000 randomized adults (mean age, 58.1 years [SD, 5.6]; 46.8% female), 63,508 were invited for HPSA + FIT, and 88,995 were invited for FIT alone. Of the 240,000 randomized, 38,792 who were unreachable and 48,705 who did not receive an invitation were excluded. Of those invited, screening participation rates were 49.6% (31,497/63,508) for HPSA + FIT and 35.7% (31,777/88,995) for FIT alone. Among 12,142 participants (38.5%) with positive HPSA results, 8664 (71.4%) received antibiotic treatment, and eradication occurred in 91.9%. Gastric cancer incidence rates were 0.032% in the HPSA + FIT group and 0.037% in the FIT-alone group (mean difference, -0.005% [95% CI, -0.013% to 0.003%]; P = .23). Gastric cancer mortality rates were 0.015% in the HPSA + FIT group and 0.013% in the FIT-alone group (mean difference, 0.002% [95% CI, -0.004% to 0.007%]; P = .57). After adjusting for differences in screening participation, length of follow-up, and patient characteristics in post hoc analyses, an invitation for HPSA + FIT was associated with lower rates of gastric cancer (0.79 [95% CI, 0.63-0.98]) but not with gastric cancer mortality (1.02 [95% CI, 0.73-1.40]), compared with FIT alone. Among participants who received antibiotics, the most common adverse effects were abdominal pain or diarrhea (2.1%) and dyspepsia or poor appetite (0.8%). Conclusions and Relevance: Among residents of Taiwan, an invitation to test for HPSA combined with FIT did not reduce rates of gastric cancer or gastric cancer mortality, compared with an invitation for FIT alone. However, when differences in screening participation and length of follow-up were accounted for, gastric cancer incidence, but not gastric cancer mortality, was lower in the HPSA + FIT group, compared with FIT alone. Trial Registration: ClinicalTrials.gov Identifier: NCT01741363.
ABSTRACT
INTRODUCTION: The rising demand for total knee arthroplasty (TKA) is expected to increase the total number of TKA-related readmissions, presenting a significant public health and economic burden. With the increasing use of Patient-Reported Outcomes Measurement Information System (PROMIS) scores to inform clinical decision-making, this study aimed to investigate whether preoperative PROMIS scores are predictive of 90-day readmissions following primary TKA. MATERIALS AND METHODS: We retrospectively reviewed a consecutive series of 10,196 patients with preoperative PROMIS scores who underwent primary TKA. Two comparison groups, readmissions (n = 79; 3.6%) and non-readmissions (n = 2,091; 96.4%), were established. Univariate and multivariate logistic regression analyses were then performed with readmission as the outcome variable to determine whether preoperative PROMIS scores could predict 90-day readmission. RESULTS: The study cohort consisted of 2,170 patients overall. Non-white patients (OR = 3.53, 95% CI [1.16, 10.71], p = 0.026) and patients with cardiovascular or cerebrovascular disease (CVD) (OR = 1.66, 95% CI [1.01, 2.71], p = 0.042) were found to have significantly higher odds of 90-day readmission after TKA. Preoperative PROMIS-PF10a (p = 0.25), PROMIS-GPH (p = 0.38), and PROMIS-GMH (p = 0.07) scores were not significantly associated with 90-day readmission. CONCLUSION: This study demonstrates that preoperative PROMIS scores may not be useful for predicting 90-day readmission following primary TKA. Non-white patients and patients with CVD were 3.53 and 1.66 times more likely to be readmitted, respectively, highlighting existing racial disparities and medical comorbidities contributing to readmission in patients undergoing TKA.
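A generic sketch of the multivariable logistic regression used to estimate odds ratios for 90-day readmission is shown below; the variable names and simulated data are hypothetical stand-ins for the study variables, and the event rate is set only to mimic the reported 3.6%.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 2170
df = pd.DataFrame({
    "readmit_90d":  rng.binomial(1, 0.036, n),    # 90-day readmission (0/1)
    "nonwhite":     rng.binomial(1, 0.15, n),
    "cvd":          rng.binomial(1, 0.30, n),
    "promis_pf10a": rng.normal(38, 7, n),         # preoperative PROMIS-PF10a score
})

model = smf.logit("readmit_90d ~ nonwhite + cvd + promis_pf10a", data=df).fit(disp=0)
odds_ratios = pd.DataFrame({
    "OR":       np.exp(model.params),
    "CI 2.5%":  np.exp(model.conf_int()[0]),
    "CI 97.5%": np.exp(model.conf_int()[1]),
    "p":        model.pvalues,
})
print(odds_ratios.round(3))
```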
Subject(s)
Arthroplasty, Replacement, Knee; Cardiovascular Diseases; Humans; Patient Readmission; Retrospective Studies; Comorbidity
ABSTRACT
INTRODUCTION: Prolonged length of stay (LOS) following revision total hip arthroplasty (THA) can lead to increased healthcare costs, higher rates of readmission, and lower patient satisfaction. In this study, we investigated the predictive power of machine learning (ML) models for prolonged LOS after revision THA using patient data from a national-scale patient repository. MATERIALS AND METHODS: We identified 11,737 revision THA cases from the American College of Surgeons National Surgical Quality Improvement Program database from 2013 to 2020. Prolonged LOS was defined as exceeding the 75th percentile of all LOS values in the study cohort. We developed four ML models (artificial neural network [ANN], random forest, histogram-based gradient boosting, and k-nearest neighbor) to predict prolonged LOS after revision THA. Each model's performance was assessed during training and testing sessions in terms of discrimination, calibration, and clinical utility. RESULTS: The ANN model was the most accurate, with an AUC of 0.82, calibration slope of 0.90, calibration intercept of 0.02, and Brier score of 0.140 during testing, indicating the model's competency in distinguishing patients at risk of prolonged LOS with minimal prediction error. All models showed clinical utility by producing net benefits in the decision curve analyses. The most significant predictors of prolonged LOS were preoperative blood tests (hematocrit, platelet count, and leukocyte count), preoperative transfusion, operation time, indications for revision THA (infection), and age. CONCLUSIONS: Our study demonstrated that ML models can accurately predict prolonged LOS after revision THA. The results highlighted the importance of the indications for revision surgery in determining the risk of prolonged LOS. With the model's aid, clinicians can stratify individual patients based on key factors, improve care coordination and discharge planning for those at risk of prolonged LOS, and increase cost efficiency.
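The prolonged-LOS outcome used here (and in several related studies below) is defined relative to the cohort's 75th percentile; a two-step sketch of that labeling on hypothetical LOS values:

```python
import numpy as np

los_days = np.array([1, 2, 2, 3, 3, 3, 4, 5, 6, 9, 14])   # hypothetical LOS values (days)
threshold = np.percentile(los_days, 75)                    # 75th percentile of the cohort
prolonged = (los_days > threshold).astype(int)             # 1 = prolonged-LOS label
print(threshold, prolonged)
```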
ABSTRACT
INTRODUCTION: Length of stay (LOS) has been extensively assessed as a marker for healthcare utilization, functional outcomes, and cost of care for patients undergoing arthroplasty. The notable patient-to-patient variation in LOS following revision hip and knee total joint arthroplasty (TJA) suggests a potential opportunity to reduce preventable discharge delays. Previous studies investigating the impact of social determinants of health (SDoH) on orthopaedic conditions and outcomes using deprivation indices have reported inconsistent findings. The aim of this study was to compare the association of three publicly available national indices of social deprivation with prolonged LOS in revision TJA patients. MATERIALS AND METHODS: 1,047 consecutive patients who underwent a revision TJA were included in this retrospective study. Patient demographics, comorbidities, and behavioral characteristics were extracted. The area deprivation index (ADI), social deprivation index (SDI), and social vulnerability index (SVI) were recorded for each patient, following which univariate and multivariate logistic regression analyses were performed to determine the relationship between deprivation measures and prolonged LOS (greater than five days postoperatively). RESULTS: 193 patients had a prolonged LOS following surgery. Categorical ADI was significantly associated with prolonged LOS following surgery (OR = 2.14; 95% CI = 1.30-3.54; p = 0.003). No association with LOS was found using the SDI or SVI. When accounting for other covariates, only ASA scores (OR range = 3.43-3.45; p < 0.001) and age (OR range = 1.00-1.03; p range = 0.025-0.049) were independently associated with prolonged LOS. CONCLUSION: The varying relationship observed between length of stay and socioeconomic markers in this study indicates that the choice of deprivation index could significantly affect the findings of studies investigating the association between socioeconomic deprivation and clinical outcomes. These results suggest that the ADI is a potential metric of social determinants of health that is applicable both clinically and in future policies related to hospital stays, including bundled payment plans following revision TJA.
Subject(s)
Arthroplasty, Replacement, Hip; Arthroplasty, Replacement, Knee; Length of Stay; Reoperation; Social Determinants of Health; Humans; Arthroplasty, Replacement, Hip/statistics & numerical data; Length of Stay/statistics & numerical data; Arthroplasty, Replacement, Knee/statistics & numerical data; Male; Female; Aged; Retrospective Studies; Middle Aged; Reoperation/statistics & numerical data; Aged, 80 and over
ABSTRACT
BACKGROUND: The benefits of mammographic screening have been shown to include a decrease in mortality due to breast cancer. Taiwan's Breast Cancer Screening Program is a national screening program that has offered biennial mammographic breast cancer screening for women aged 50-69 years since 2004 and for those aged 45-69 years since 2009, with the implementation of mobile units in 2010. The purpose of this study was to compare the performance of the program between the earlier (2004-2009) and latter (2010-2020) periods. METHODS: We assembled a cohort of 3,665,078 women who underwent biennial breast cancer mammography screening from 2004 to 2020, with data obtained from the Health Promotion Administration, Ministry of Health and Welfare of Taiwan. We compared the participation of screened women and survival rates from breast cancer between the earlier and latter periods of the national breast cancer screening program. RESULTS: Among the 3,665,078 women who underwent 8,169,869 examinations in the study population, the screened population increased from 3.9% in 2004 to 40% in 2019. The mean cancer detection rate was 4.76 and 4.08 cancers per 1000 screening mammograms in the earlier (2004-2009) and latter (2010-2020) periods, respectively. The 10-year survival rate increased from 89.68% in the earlier period to 97.33% in the latter period. The mean recall rate was 9.90% (95% CI: 9.83-9.97%) in the earlier period and decreased to 8.15% (95% CI: 8.13-8.17%) in the latter period. CONCLUSIONS: The evolution of breast cancer screening in Taiwan has yielded favorable outcomes: the screened population and the 10-year survival rate increased and the recall rate decreased, aided by the participation of younger women and the implementation of a mobile unit service and a quality assurance program, providing historical evidence for policy makers planning future needs.
Subject(s)
Breast Neoplasms; Female; Humans; Breast Neoplasms/diagnostic imaging; Breast Neoplasms/epidemiology; Taiwan/epidemiology; Early Detection of Cancer/methods; Mammography/methods; Survival Rate; Mass Screening/methods
ABSTRACT
BACKGROUND: Viral therapies developed for cancer treatment have classically prioritized direct oncolytic effects over their immune-activating properties. However, recent clinical insights have challenged this longstanding prioritization and have shifted the focus to more immune-based mechanisms. Through the potential utilization of novel, inherently immune-stimulating, oncotropic viruses, there is a therapeutic opportunity to improve anti-tumor outcomes through virus-mediated immune activation. PV001-DV is an attenuated strain of Dengue virus (DEN-1 #45AZ5) with a favorable clinical safety profile that also maintains the potent immune-stimulatory properties characteristic of Dengue virus infection. METHODS: In this study, we utilized in vitro tumor-killing and immune multiplex assays to examine the anti-tumor effects of PV001-DV as a potential novel cancer immunotherapy. RESULTS: In vitro assays demonstrated that PV001-DV possesses the ability to directly kill human melanoma cell lines as well as patient melanoma tissue ex vivo. Importantly, further work demonstrated that, when patient peripheral blood mononuclear cells (PBMCs) were exposed to PV001-DV, a substantial induction in the production of apoptotic factors and immunostimulatory cytokines was detected. When tumor cells were cultured with the resulting soluble mediators from these PBMCs, rapid cell death of melanoma and breast cancer cell lines was observed. These soluble mediators also increased the expression of dengue virus binding ligands and the immune checkpoint molecule PD-L1. CONCLUSIONS: The direct in vitro tumor killing and immune-mediated tumor cytotoxicity facilitated by PV001-DV support its upcoming clinical evaluation in patients with advanced melanoma who have failed prior therapy.
Subject(s)
Dengue Virus; Dengue; Melanoma; Oncolytic Virotherapy; Oncolytic Viruses; Humans; Dengue Virus/physiology; Leukocytes, Mononuclear; Melanoma/therapy; MCF-7 Cells; Immunity; Cell Death; Oncolytic Virotherapy/methods
ABSTRACT
BACKGROUND: The glycemic continuum often indicates a gradual decline in insulin sensitivity leading to an increase in glucose levels. Although prediabetes is an established risk factor for both macrovascular and microvascular diseases, whether prediabetes is independently associated with the risk of developing atrial fibrillation (AF), particularly the time of occurrence, has not been well studied using a high-quality research design in combination with statistical machine-learning algorithms. METHODS: Using data from electronic medical records collected at the National Taiwan University Hospital, a tertiary medical center in Taiwan, we conducted a retrospective cohort study consisting of 174,835 adult patients between 2014 and 2019 to investigate the relationship between prediabetes and AF. To render patients with prediabetes comparable to those with normal glucose test results, a propensity-score matching design was used to select matched pairs from the two groups at a 1:1 ratio. The Kaplan-Meier method was used to compare the cumulative risk of AF between prediabetes and normal glucose test results using the log-rank test. A multivariable Cox regression model was employed to estimate the adjusted hazard ratio (HR) for prediabetes versus normal glucose test results, stratified by three levels of glycosylated hemoglobin (HbA1c). A machine-learning algorithm, the random survival forest (RSF) method, was further used to identify the importance of clinical factors associated with AF in patients with prediabetes. RESULTS: A sample of 14,309 pairs of patients with prediabetes and normal glucose test results was selected. The incidence of AF was 11.6 cases per 1000 person-years during a median follow-up period of 47.1 months. The Kaplan-Meier analysis revealed that the risk of AF was significantly higher in patients with prediabetes (log-rank p < 0.001). The multivariable Cox regression model indicated that prediabetes was independently associated with a significantly increased risk of AF (HR 1.24, 95% confidence interval 1.11-1.39, p < 0.001), particularly for patients with HbA1c above 5.5%. The RSF method identified elevated N-terminal natriuretic peptide and altered left heart structure as the two most important risk factors for AF among patients with prediabetes. CONCLUSIONS: Our study found that prediabetes is independently associated with a higher risk of AF. Furthermore, alterations in left heart structure make a significant contribution to this elevated risk, and these structural changes may begin during the prediabetes stage.
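A hedged sketch of the 1:1 propensity-score matching step described above, using a logistic-regression propensity model and greedy nearest-neighbor matching; the covariates are synthetic placeholders, and the study's actual matching variables and algorithm may differ.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(11)
n = 10000
df = pd.DataFrame({
    "prediabetes": rng.binomial(1, 0.3, n),   # exposure of interest
    "age":         rng.normal(55, 12, n),
    "male":        rng.binomial(1, 0.5, n),
    "bmi":         rng.normal(25, 4, n),
})

# 1. Estimate the propensity of having prediabetes from baseline covariates.
X = df[["age", "male", "bmi"]]
df["ps"] = LogisticRegression(max_iter=1000).fit(X, df["prediabetes"]).predict_proba(X)[:, 1]

# 2. Greedy 1:1 nearest-neighbor matching on the propensity score
#    (matching with replacement for brevity; without replacement is more typical).
treated = df[df["prediabetes"] == 1]
control = df[df["prediabetes"] == 0]
nn = NearestNeighbors(n_neighbors=1).fit(control[["ps"]])
_, idx = nn.kneighbors(treated[["ps"]])
matched_controls = control.iloc[idx.ravel()]
print(len(treated), "treated matched to", len(matched_controls), "controls")
```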
Subject(s)
Atrial Fibrillation; Prediabetic State; Adult; Humans; Atrial Fibrillation/diagnosis; Atrial Fibrillation/epidemiology; Retrospective Studies; Glycated Hemoglobin; Prediabetic State/diagnosis; Prediabetic State/epidemiology; Prediabetic State/complications; Risk Factors; Glucose
ABSTRACT
AIMS: This study investigates how lumen roughness and urethral length influence urinary flow speed. METHODS: We used micro-computed tomography scans to measure the lumen roughness and dimensions for rabbits, cats, and pigs. We designed and fabricated three-dimensional-printed urethra mimics of varying roughness and length to perform flow experiments. We also developed a corresponding mathematical model to rationalize the observed flow speed. RESULTS: We update the previously reported relationship between body mass and urethra length and diameter, now including 41 measurements for urethra length and 10 measurements for diameter. We report the relationship between lumen diameter and roughness as a function of position down the urethra for rabbits, cats, and pigs. The time course of urinary speed from our mimics is reported, as well as the average speed as a function of urethra length. CONCLUSIONS: Based on the behavior of our mimics, we conclude that the lumen roughness in mammals reduces flow speed by up to 25% compared to smooth urethras. Urine flows fastest when the urethra length exceeds 25 times its diameter. Longer urethras do not drain faster due to viscous effects counteracting the additional gravitational head. However, flows with our urethra mimics are still 6 times faster than those observed in nature, suggesting that further work is needed to understand flow resistance in the urethra.
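One back-of-the-envelope way to rationalize why longer urethras need not drain faster: for purely gravity-driven laminar (Hagen-Poiseuille) flow in a tube of length L and diameter D, both the driving head and the viscous loss scale with L, so L cancels. This is only an illustrative estimate consistent with the conclusion above, not the authors' model; it ignores bladder pressure, entrance effects, and lumen roughness.

```latex
% Driving pressure from gravitational head vs. laminar viscous loss:
\Delta p \approx \rho g L, \qquad
\Delta p = \frac{32\,\mu\,L\,\bar{u}}{D^{2}}
\quad\Longrightarrow\quad
\bar{u} \approx \frac{\rho g D^{2}}{32\,\mu}
```

In this limit the mean speed is independent of L: the extra gravitational head of a longer urethra is offset by its extra viscous resistance.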
Subject(s)
Mammals; Urethra; Rabbits; Swine; Animals; Urethra/diagnostic imaging; X-Ray Microtomography
ABSTRACT
ABSTRACT: Basal cell carcinoma and melanoma are common cutaneous malignancies. However, the development of a basomelanocytic tumor that simultaneously includes elements of melanoma and basal cell carcinoma is extremely rare. We present the case of an 84-year-old man with a nonpigmented, nonulcerated pink nodule on his left upper back and discuss the current management recommendations for basomelanocytic tumors.
Subject(s)
Carcinoma, Basal Cell; Melanoma; Skin Neoplasms; Male; Humans; Aged, 80 and over; Skin Neoplasms/pathology; Carcinoma, Basal Cell/surgery; Carcinoma, Basal Cell/pathology; Melanoma/pathology; Back/pathology
ABSTRACT
BACKGROUND: Contact tracing for containing emerging infectious diseases such as COVID-19 is resource intensive and requires digital transformation to enable timely decision-making. OBJECTIVE: This study demonstrates the design and implementation of digital contact tracing using multimodal health informatics to efficiently collect personal information and contain community outbreaks. The implementation of digital contact tracing was further illustrated by 3 empirical SARS-CoV-2 infection clusters. METHODS: The implementation in Changhua, Taiwan, served as a demonstration of the multisectoral informatics and connectivity between electronic health systems needed for digital contact tracing. The framework incorporates traditional travel, occupation, contact, and cluster approaches and a dynamic contact process enabled by digital technology. A centralized registry system, accessible only to authorized health personnel, ensures privacy and data security. The efficiency of the digital contact tracing system was evaluated through a field study in Changhua. RESULTS: The digital contact tracing system integrates the immigration registry, communicable disease report system, and national health records to provide real-time information about travel, occupation, contact, and clusters for potential contacts and to facilitate a timely assessment of the risk of COVID-19 transmission. The digitalized system allows for informed decision-making regarding quarantine, isolation, and treatment, with a focus on personal privacy. In the first cluster infection, the system monitored 665 contacts and isolated 4 (0.6%) cases; none of the contacts (0/665, 0%) were infected during quarantine. The estimated reproduction number of 0.92 suggests an effective containment strategy for preventing community-acquired outbreak. The system was also used in a cluster investigation involving foreign workers, where none of the 462 contacts (0/462, 0%) tested positive for SARS-CoV-2. CONCLUSIONS: By integrating the multisectoral database, the contact tracing process can be digitalized to provide the information required for risk assessment and decision-making in a timely manner to contain community-acquired outbreaks of emerging infectious diseases.
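To make the TOCC (travel, occupation, contact, cluster) framework concrete, here is a purely hypothetical sketch of a linked contact record and a toy quarantine-decision helper; none of the field names or rules are taken from the actual Changhua system.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ContactRecord:
    """Hypothetical TOCC-style record assembled from linked registries."""
    person_id: str
    travel_history: list[str]      # e.g., entries from an immigration registry
    occupation: str                # e.g., healthcare worker, factory worker
    contact_with_case: bool        # e.g., flag from a communicable disease report system
    cluster_id: str | None         # cluster membership, if any
    last_exposure: date | None

def needs_quarantine(rec: ContactRecord, today: date) -> bool:
    """Toy decision rule for illustration only, not the system's actual logic."""
    if rec.contact_with_case and rec.last_exposure is not None:
        return (today - rec.last_exposure).days <= 14
    return False

rec = ContactRecord("A123", ["none"], "factory worker", True, "cluster-2", date(2021, 6, 1))
print(needs_quarantine(rec, date(2021, 6, 10)))   # True
```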
Subject(s)
COVID-19; Communicable Diseases, Emerging; Humans; COVID-19/epidemiology; COVID-19/prevention & control; Contact Tracing; SARS-CoV-2; Quarantine
ABSTRACT
BACKGROUND: Nonhome discharge disposition following primary total knee arthroplasty (TKA) is associated with a higher rate of complications and constitutes a socioeconomic burden on the health care system. While existing algorithms predicting nonhome discharge disposition vary in mathematical complexity and prediction power, their capacity to generalize predictions beyond the development dataset remains limited. Therefore, this study aimed to establish machine learning model generalizability by performing internal and external validations using nation-scale and institutional cohorts, respectively. METHODS: Four machine learning models were trained using the national cohort. Recursive feature elimination and hyperparameter tuning were applied. Internal validation was achieved through five-fold cross-validation during model training. The trained models' performance was externally validated using the institutional cohort and assessed by discrimination, calibration, and clinical utility. RESULTS: The national (424,354 patients) and institutional (10,196 patients) cohorts had nonhome discharge rates of 19.4% and 36.4%, respectively. The areas under the receiver operating characteristic curve (AUC) of the model predictions were 0.83 to 0.84 during internal validation and increased to 0.88 to 0.89 during external validation. The artificial neural network and histogram-based gradient boosting models performed best, with a mean AUC of 0.89, calibration slope of 1.39, and Brier score of 0.14, indicating that the two models were robust in distinguishing nonhome discharge and well calibrated, with accurate predictions of the probabilities. The low inter-dataset similarity indicated reliable external validation. Length of stay, age, body mass index, and sex were the strongest predictors of discharge destination after primary TKA. CONCLUSION: The machine learning models demonstrated excellent predictive performance during both internal and external validations, supporting their generalizability across different patient cohorts and potential applicability in the clinical workflow.
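The training pipeline described above combines recursive feature elimination, hyperparameter tuning, and five-fold cross-validation; below is a compact scikit-learn sketch of that general pattern on synthetic data (the grid and estimator choices are arbitrary, not the study's configuration).

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, StratifiedKFold
from sklearn.pipeline import Pipeline

X, y = make_classification(n_samples=5000, n_features=30, n_informative=8, random_state=0)

pipe = Pipeline([
    ("rfe", RFE(LogisticRegression(max_iter=2000))),   # recursive feature elimination
    ("clf", LogisticRegression(max_iter=2000)),        # downstream classifier
])
param_grid = {
    "rfe__n_features_to_select": [5, 10, 15],
    "clf__C": [0.1, 1.0, 10.0],
}
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)   # internal validation
search = GridSearchCV(pipe, param_grid, scoring="roc_auc", cv=cv)
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```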
Subject(s)
Arthroplasty, Replacement, Knee; Humans; Patient Discharge; Algorithms; Machine Learning; Knee Joint; Retrospective Studies
ABSTRACT
BACKGROUND: Postoperative discharges to facilities account for over 33% of the $2.7 billion in annual revision total knee arthroplasty (TKA)-associated expenditures and are associated with increased complications when compared to home discharges. Prior studies predicting discharge disposition using advanced machine learning (ML) have been limited by a lack of generalizability and validation. This study aimed to establish ML model generalizability by externally validating its prediction of nonhome discharge following revision TKA using national and institutional databases. METHODS: The national and institutional cohorts comprised 52,533 and 1,628 patients, with nonhome discharge rates of 20.6% and 19.4%, respectively. Five ML models were trained and internally validated (five-fold cross-validation) on the large national dataset. Subsequently, external validation was performed on our institutional dataset. Model performance was assessed using discrimination, calibration, and clinical utility. Global predictor importance plots and local surrogate models were used for interpretation. RESULTS: The strongest predictors of nonhome discharge were patient age, body mass index, and surgical indication. The area under the receiver operating characteristic curve increased from internal to external validation and ranged between 0.77 and 0.79. The artificial neural network was the best predictive model for identifying patients at risk of nonhome discharge (area under the receiver operating characteristic curve = 0.78) and also the most accurate (calibration slope = 0.93, intercept = 0.02, and Brier score = 0.12). CONCLUSION: All five ML models demonstrated good-to-excellent discrimination, calibration, and clinical utility on external validation, with the artificial neural network being the best model for predicting discharge disposition following revision TKA. Our findings establish the generalizability of ML models developed using data from a national database. The integration of these predictive models into the clinical workflow may assist in optimizing discharge planning, bed management, and cost containment associated with revision TKA.
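Decision curve analysis, used across these validation studies, compares a model's net benefit with default strategies over a range of threshold probabilities; here is a minimal sketch of the net-benefit calculation on synthetic predictions (not the study's data).

```python
import numpy as np

def net_benefit(y_true, y_prob, threshold):
    """Net benefit of intervening on patients whose predicted risk exceeds `threshold`."""
    n = len(y_true)
    treat = y_prob >= threshold
    tp = np.sum(treat & (y_true == 1))
    fp = np.sum(treat & (y_true == 0))
    return tp / n - (fp / n) * threshold / (1 - threshold)

rng = np.random.default_rng(5)
y = rng.integers(0, 2, 2000)                                  # observed outcome (0/1)
p = np.clip(0.25 + 0.4 * y + rng.normal(0, 0.15, 2000), 0, 1) # predicted risk

for t in (0.1, 0.2, 0.3, 0.4):
    nb_model = net_benefit(y, p, t)
    nb_treat_all = y.mean() - (1 - y.mean()) * t / (1 - t)    # "intervene on everyone"
    print(f"threshold {t:.1f}: model NB = {nb_model:.3f}, treat-all NB = {nb_treat_all:.3f}")
```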
Subject(s)
Arthroplasty, Replacement, Knee; Humans; Arthroplasty, Replacement, Knee/adverse effects; Patient Discharge; Machine Learning; Neural Networks, Computer; Databases, Factual; Retrospective Studies
ABSTRACT
BACKGROUND: The rates of blood transfusion following primary and revision total hip arthroplasty (THA) remain as high as 9% and 18%, respectively, contributing to patient morbidity and healthcare costs. Existing predictive tools are limited to specific populations, thereby diminishing their clinical applicability. This study aimed to externally validate our previous institutionally developed machine learning (ML) algorithms to predict the risk of postoperative blood transfusion following primary and revision THA using national inpatient data. METHODS: Five ML algorithms were trained and validated using data from 101,266 primary THA and 8,594 revision THA patients from a large national database to predict postoperative transfusion risk after primary and revision THA. Models were assessed and compared based on discrimination, calibration, and decision curve analysis. RESULTS: The most important predictors of transfusion following primary and revision THA were preoperative hematocrit (<39.4%) and operation time (>157 minutes), respectively. All ML models demonstrated excellent discrimination (area under the curve [AUC] >0.8) in primary and revision THA patients, with the artificial neural network (AUC = 0.84, slope = 1.11, intercept = -0.04, Brier score = 0.04) and elastic-net-penalized logistic regression (AUC = 0.85, slope = 1.08, intercept = -0.01, Brier score = 0.12) performing best, respectively. On decision curve analysis, all five models demonstrated a higher net benefit than the conventional strategy of intervening for all or no patients in both patient cohorts. CONCLUSIONS: This study successfully validated our previously developed institutional ML algorithms for the prediction of blood transfusion following primary and revision THA. Our findings highlight the potential generalizability of predictive ML tools developed using nationally representative data in THA patients.
Subject(s)
Arthroplasty, Replacement, Hip; Humans; Arthroplasty, Replacement, Hip/adverse effects; Machine Learning; Neural Networks, Computer; Algorithms; Blood Transfusion; Retrospective Studies
ABSTRACT
BACKGROUND: Existing machine learning models that predicted prolonged length of stay (LOS) following primary total hip arthroplasty (THA) were limited by small training volumes and the exclusion of important patient factors. This study aimed to develop machine learning models using a national-scale dataset and examine their performance in predicting prolonged LOS following THA. METHODS: A total of 246,265 THAs were analyzed from a large database. Prolonged LOS was defined as exceeding the 75th percentile of all LOSs in the cohort. Candidate predictors of prolonged LOS were selected by recursive feature elimination and used to construct four machine learning models: artificial neural network, random forest, histogram-based gradient boosting, and k-nearest neighbor. Model performance was assessed by discrimination, calibration, and utility. RESULTS: All models performed consistently, with acceptable discrimination (area under the receiver operating characteristic curve [AUC] = 0.72 to 0.74) and good calibration (slope: 0.83 to 1.18, intercept: -0.01 to 0.11, Brier score: 0.185 to 0.192) during both training and testing sessions. The artificial neural network was the best performer, with an AUC of 0.73, calibration slope of 0.99, calibration intercept of -0.01, and Brier score of 0.185. All models showed clinical utility by producing higher net benefits than the default treatment strategies in the decision curve analyses. Age, laboratory tests, and surgical variables were the strongest predictors of prolonged LOS. CONCLUSION: The prediction performance of these machine learning models demonstrated their capacity to identify patients prone to prolonged LOS. Many factors contributing to prolonged LOS can be optimized to minimize hospital stay for high-risk patients.
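Histogram-based gradient boosting, one of the four models above, is available directly in scikit-learn; a minimal sketch on synthetic data follows (the hyperparameters and features are placeholders, not the registry variables used in the study).

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score, brier_score_loss
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=20000, n_features=25, n_informative=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# Handles large tabular datasets efficiently and tolerates missing values natively.
model = HistGradientBoostingClassifier(max_iter=200, learning_rate=0.1, random_state=1)
model.fit(X_tr, y_tr)

prob = model.predict_proba(X_te)[:, 1]
print("AUC:  ", round(roc_auc_score(y_te, prob), 3))
print("Brier:", round(brier_score_loss(y_te, prob), 3))
```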