Results 1 - 16 of 16

1.
Am J Gastroenterol ; 2024 Aug 23.
Article in English | MEDLINE | ID: mdl-39177332

ABSTRACT

OBJECTIVES: With the increasing use of direct oral anticoagulants (DOACs), managing these agents around endoscopic submucosal dissection (ESD) is crucial. However, because such studies require a large number of cases, evidence on the timing of resumption is lacking, resulting in varied recommendations across international guidelines. We aimed to perform a comparative study on the timing of DOAC resumption after colorectal ESD using a nationwide database in Japan. METHODS: We conducted a retrospective cohort study on colorectal ESD using the Diagnosis Procedure Combination database from 2012 to 2023. Patients using anticoagulants other than DOACs were excluded, and only those who resumed DOACs within 3 days were included. We divided eligible patients into early (the day after ESD) and delayed (2 to 3 days after ESD) resumption groups. We used inverse probability of treatment weighting (IPTW) to assess delayed bleeding and thromboembolic events within 30 days. Delayed bleeding was defined as bleeding requiring endoscopic hemostasis or blood transfusion after ESD. RESULTS: Of 176,139 colorectal ESDs, 3,550 involved DOAC users, with 2,698 (76%) in the early resumption group and 852 (24%) in the delayed resumption group. After IPTW adjustment, the early resumption group did not have a significantly higher rate of delayed bleeding than the delayed resumption group (OR, 1.05; 95% CI, 0.78-1.42; P = 0.73), but it had a significantly lower risk of thromboembolic events (OR, 0.45; 95% CI, 0.25-0.82; P < 0.01). CONCLUSIONS: Resuming DOACs the day after colorectal ESD was associated with fewer thromboembolic events without a significant increase in the risk of delayed bleeding.
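The IPTW adjustment used above can be sketched as follows. This is a minimal illustration on synthetic data (the confounder, sample size, and values are hypothetical, not the study's), assuming scikit-learn is available for the propensity model:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
age = rng.normal(70, 8, n)  # hypothetical confounder
# Treatment assignment depends on the confounder (selection bias)
treated = (rng.random(n) < 1 / (1 + np.exp(-(age - 70) / 10))).astype(int)

# 1) Fit a propensity model: P(treatment | confounders)
ps_model = LogisticRegression().fit(age.reshape(-1, 1), treated)
ps = ps_model.predict_proba(age.reshape(-1, 1))[:, 1]

# 2) IPTW weights: 1/ps for treated, 1/(1-ps) for controls
weights = np.where(treated == 1, 1 / ps, 1 / (1 - ps))

# In the weighted pseudo-population, confounder distributions should balance
mean_treated = np.average(age[treated == 1], weights=weights[treated == 1])
mean_control = np.average(age[treated == 0], weights=weights[treated == 0])
print(round(abs(mean_treated - mean_control), 2))
```

The weighted mean difference in the confounder should be much smaller than the unweighted one, which is the balance property IPTW relies on before comparing outcomes.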

2.
Surg Endosc ; 38(5): 2699-2708, 2024 May.
Article in English | MEDLINE | ID: mdl-38528262

ABSTRACT

BACKGROUND: Drainage fluid amylase (DFA) is useful for predicting clinically relevant postoperative pancreatic fistula (CR-POPF) after distal pancreatectomy (DP). However, the difference in the optimal cutoff value of DFA for predicting CR-POPF between open DP (ODP) and laparoscopic DP (LDP) has not been investigated. This study aimed to identify the optimal cutoff values of DFA for predicting CR-POPF after ODP and LDP. METHODS: Data for 294 patients (ODP, n = 127; LDP, n = 167) undergoing DP at Kobe University Hospital between 2010 and 2021 were reviewed. Propensity score matching was performed to minimize treatment selection bias. Receiver operating characteristic (ROC) analysis was performed to determine the optimal cutoff values of DFA for predicting CR-POPF after ODP and LDP. Logistic regression analysis for CR-POPF was performed to investigate the diagnostic value of DFA on postoperative day (POD) three at the identified cutoff value. RESULTS: In the matched cohort, CR-POPF rates were 24.7% and 7.9% after ODP and LDP, respectively. DFA on POD one was significantly lower after ODP than after LDP (2263 U/L vs 4243 U/L, p < 0.001), while the difference was not significant on POD three (543 U/L vs 1221 U/L, p = 0.171). ROC analysis revealed that the optimal cutoff values of DFA on PODs one and three for predicting CR-POPF differed between ODP and LDP (ODP: 3697 U/L on POD one, 1114 U/L on POD three; LDP: 10564 U/L on POD one, 6020 U/L on POD three). Multivariate analysis showed that DFA on POD three at the identified cutoff value was an independent predictor of CR-POPF for both ODP and LDP. CONCLUSIONS: DFA on POD three is an independent predictor of CR-POPF after both ODP and LDP. However, the optimal cutoff value is significantly higher after LDP than after ODP, so the optimal DFA threshold for drain removal may differ between ODP and LDP.


Subject(s)
Amylases, Drainage, Laparoscopy, Pancreatectomy, Pancreatic Fistula, Postoperative Complications, Humans, Pancreatic Fistula/etiology, Pancreatic Fistula/diagnosis, Pancreatectomy/methods, Male, Female, Amylases/analysis, Amylases/metabolism, Drainage/methods, Middle Aged, Laparoscopy/methods, Aged, Retrospective Studies, Postoperative Complications/diagnosis, Postoperative Complications/etiology, Predictive Value of Tests, Propensity Score, Adult, ROC Curve
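Choosing an "optimal" biomarker cutoff from an ROC analysis is commonly done by maximizing Youden's J statistic (sensitivity + specificity − 1). The abstract does not state which criterion was used, so this is only a generic sketch on made-up DFA-like values:

```python
import numpy as np

def youden_cutoff(values, labels):
    """Return the candidate cutoff maximizing sensitivity + specificity - 1."""
    best_j, best_cut = -1.0, None
    for cut in np.unique(values):
        pred = values >= cut
        sens = (pred & (labels == 1)).sum() / (labels == 1).sum()
        spec = (~pred & (labels == 0)).sum() / (labels == 0).sum()
        j = sens + spec - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut, best_j

# Hypothetical drain-fluid amylase values (U/L); fistula cases run higher
values = np.array([200, 400, 600, 800, 3000, 4000, 5000, 6000], dtype=float)
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
cut, j = youden_cutoff(values, labels)
print(cut, j)  # 3000.0 1.0
```

In practice the cutoff search runs over every observed value, and a perfectly separating marker (as in this toy data) yields J = 1.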
3.
Clin Exp Nephrol ; 28(8): 784-792, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38506982

ABSTRACT

BACKGROUND: Magnesium deficiency is associated with various health conditions, but its impact on the progression of chronic kidney disease (CKD) remains unclear. This study aimed to investigate the association between serum magnesium levels and the prognosis of renal function in CKD patients. METHODS: This is an analysis of the Japan Chronic Kidney Disease Database Ex (J-CKD-DB-Ex), a multicenter prospective cohort including CKD patients enrolled from January 1, 2014 to December 31, 2020. We included adult outpatients with CKD stage G3 and G4 at the time of the initial magnesium measurement. Patients were classified by magnesium level as low (<1.7 mg/dl), normal (1.7-2.6 mg/dl), or high (>2.6 mg/dl). The primary outcome was the composite of an eGFR <15 ml/min/1.73 m² or a ≥30% reduction in eGFR from the initial measurement, defined as CKD progression. We applied Kaplan-Meier analysis and Cox proportional hazards models to examine the association between magnesium levels and CKD progression. RESULTS: The analysis included 9868 outpatients during the follow-up period. The low magnesium group was significantly more likely to reach CKD progression. Cox regression, adjusting for covariates and using the normal magnesium group as the reference, showed a hazard ratio of 1.20 (95% confidence interval, 1.08-1.34) for the low magnesium group. High magnesium was not significantly associated with poor renal outcomes compared with normal magnesium. CONCLUSION: Based on large real-world data, this study demonstrated that low magnesium levels are associated with poorer renal outcomes.


Subject(s)
Disease Progression, Glomerular Filtration Rate, Magnesium, Renal Insufficiency, Chronic, Humans, Magnesium/blood, Renal Insufficiency, Chronic/blood, Renal Insufficiency, Chronic/physiopathology, Renal Insufficiency, Chronic/diagnosis, Male, Female, Middle Aged, Prognosis, Aged, Prospective Studies, Magnesium Deficiency/blood, Magnesium Deficiency/complications, Japan/epidemiology, Kidney/physiopathology
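The Kaplan-Meier estimator used above multiplies, at each event time, the fraction of the at-risk set that survives. A compact sketch on made-up times (not the study's data), with censored subjects removed from the risk set without forcing a survival drop:

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] at event times; events: 1 = event, 0 = censored."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv, out = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(e for tt, e in data if tt == t)   # events at time t
        m = sum(1 for tt, _ in data if tt == t)   # subjects leaving at t
        if d > 0:
            surv *= 1 - d / n_at_risk             # S(t) = prod(1 - d_i/n_i)
            out.append((t, surv))
        n_at_risk -= m
        i += m
    return out

# 5 patients: events at t=2, t=4, t=5; censoring at t=3 and t=5
curve = kaplan_meier([2, 3, 4, 5, 5], [1, 0, 1, 0, 1])
print(curve)
```

With these inputs, S(2) = 4/5 = 0.8, S(4) = 0.8 × 2/3 = 8/15, and S(5) = 8/15 × 1/2 = 4/15, which is easy to verify by hand.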
4.
Dig Endosc ; 2024 Mar 11.
Article in English | MEDLINE | ID: mdl-38462957

ABSTRACT

OBJECTIVES: We aimed to develop and validate a simple scoring system to predict in-hospital mortality after endoscopic variceal ligation (EVL) for esophageal variceal bleeding. METHODS: Data from a 13-year study involving 46 Japanese institutions were split into development (initial 7 years) and validation (last 6 years) cohorts. The study subjects were patients hospitalized for esophageal variceal bleeding and treated with EVL. Variable selection was performed using least absolute shrinkage and selection operator regression, targeting in-hospital all-cause mortality as the outcome. We developed the Hospital Outcome Prediction following Endoscopic Variceal Ligation (HOPE-EVL) score from the β coefficients of multivariate logistic regression and assessed its discrimination and calibration. RESULTS: The study included 980 patients: 536 in the development cohort and 444 in the validation cohort. In-hospital mortality was 13.6% and 10.1% for the respective cohorts. The scoring system used five variables: systolic blood pressure (<80 mmHg: 2 points), Glasgow Coma Scale (≤12: 1 point), total bilirubin (≥5 mg/dL: 1 point), creatinine (≥1.5 mg/dL: 1 point), and albumin (<2.8 g/dL: 1 point). The risk groups (low: 0-1, middle: 2-3, high: ≥4) in the validation cohort corresponded to observed and predicted mortality probabilities of 2.0% and 2.5%, 19.0% and 22.9%, and 57.6% and 71.9%, respectively. In this cohort, the HOPE-EVL score demonstrated excellent discrimination ability (area under the curve [AUC] 0.890; 95% confidence interval [CI] 0.850-0.930) compared with the Model for End-stage Liver Disease score (AUC 0.853; 95% CI 0.794-0.912) and the Child-Pugh score (AUC 0.798; 95% CI 0.727-0.869). CONCLUSIONS: The HOPE-EVL score practically and effectively predicts in-hospital mortality. This score could facilitate the appropriate allocation of resources and effective communication with patients and their families.
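The five-variable point assignment above transcribes directly into code. This sketch follows the abstract's thresholds and risk-group bands exactly, but it is for illustration only, not a validated clinical tool:

```python
def hope_evl_score(sbp, gcs, bilirubin, creatinine, albumin):
    """HOPE-EVL score per the abstract's point assignments."""
    score = 0
    score += 2 if sbp < 80 else 0           # systolic BP < 80 mmHg: 2 points
    score += 1 if gcs <= 12 else 0          # Glasgow Coma Scale <= 12: 1 point
    score += 1 if bilirubin >= 5 else 0     # total bilirubin >= 5 mg/dL: 1 point
    score += 1 if creatinine >= 1.5 else 0  # creatinine >= 1.5 mg/dL: 1 point
    score += 1 if albumin < 2.8 else 0      # albumin < 2.8 g/dL: 1 point
    return score

def risk_group(score):
    """Risk bands: low 0-1, middle 2-3, high >= 4."""
    return "low" if score <= 1 else ("middle" if score <= 3 else "high")

s = hope_evl_score(sbp=75, gcs=11, bilirubin=6.0, creatinine=1.8, albumin=2.5)
print(s, risk_group(s))  # 6 high
```

The worst possible score is 6 (all five criteria met), which the hypothetical patient above illustrates.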

5.
J Clin Biochem Nutr ; 75(1): 60-64, 2024 Jul.
Article in English | MEDLINE | ID: mdl-39070532

ABSTRACT

Gastrointestinal bleeding (GIB) is a significant public health concern associated with substantial morbidity. However, there have been no reports investigating trends of GIB in Japan using nationwide data. This study aims to identify current trends and issues in the management of GIB by assessing Japan's national data. We analyzed National Database sampling data from 2012 to 2019, evaluating annual hospitalization rates for six major types of GIB: hemorrhagic gastric ulcers, duodenal ulcers, esophageal variceal bleeding, colonic diverticular bleeding, ischemic colitis, and rectal ulcers. Hospitalization rates per 100,000 showed a marked decline for hemorrhagic gastric ulcers, to approximately two-thirds of the initial rate (from 41.5 to 27.9), whereas rates for colonic diverticular bleeding more than doubled, from 15.1 to 34.0. Ischemic colitis rates increased 1.6-fold, from 20.8 to 34.9. In 2017, the hospitalization rates per 100,000 for colonic diverticular bleeding and ischemic colitis surpassed that for hemorrhagic gastric ulcers (31.1, 31.3, and 31.0, respectively). No significant changes were observed for duodenal ulcers, esophageal variceal bleeding, or rectal ulcers. These findings underscore a pivotal shift in hospitalization frequencies from upper GIB to lower GIB in 2017, indicating a potential shift in clinical focus and resource allocation.

7.
PLoS One ; 19(1): e0296319, 2024.
Article in English | MEDLINE | ID: mdl-38241403

ABSTRACT

Digital advancements can reduce the burden of recording clinical information. This intra-subject experimental study compared the time and error rates for recording vital signs and prescriptions between an optical character reader (OCR) and manual typing. This study was conducted at three community hospitals and two fire departments in Japan. Thirty-eight volunteers (15 paramedics, 10 nurses, and 13 physicians) participated in the study. We prepared six sample pictures: three ambulance monitors for vital signs (normal, abnormal, and shock) and three pharmacy notebooks that provided prescriptions (two, four, or six medications). The participants recorded the data for each picture using an OCR or by manually typing on a smartphone. The outcomes were recording time and error rate defined as the number of characters with omissions or misrecognitions/misspellings of the total number of characters. Data were analyzed using paired Wilcoxon signed-rank sum and McNemar's tests. The recording times for vital signs were similar between groups (normal state, 21 s [interquartile range (IQR), 17-26 s] for OCR vs. 23 s [IQR, 18-31 s] for manual typing). In contrast, prescription recording was faster with the OCR (e.g., six-medication list, 18 s [IQR, 14-21 s] for OCR vs. 144 s [IQR, 112-187 s] for manual typing). The OCR had fewer errors than manual typing for both vital signs and prescriptions (0/1056 [0%] vs. 14/1056 [1.32%]; p<0.001 and 30/4814 [0.62%] vs. 53/4814 [1.10%], respectively). In conclusion, the developed OCR reduced the recording time for prescriptions but not vital signs. The OCR showed lower error rates than manual typing for both vital signs and prescription data.


Subject(s)
Drug Prescriptions, Vital Signs, Humans, Smartphone, Japan
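The error-rate comparison above (errors per total characters) and the paired McNemar test can be sketched as follows. The character counts mirror the abstract's vital-sign figures, but the split of discordant pairs is a hypothetical assumption for illustration:

```python
def error_rate(errors, total):
    """Fraction of characters with omissions or misrecognitions."""
    return errors / total

def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-square for discordant pairs b and c."""
    return (abs(b - c) - 1) ** 2 / (b + c)

ocr_rate = error_rate(0, 1056)      # 0/1056 characters wrong with OCR
manual_rate = error_rate(14, 1056)  # 14/1056 wrong when typed manually
# Hypothetical discordant pairs: OCR-only errors (b=0) vs manual-only (c=14)
chi2 = mcnemar_statistic(0, 14)
print(round(manual_rate, 4), round(chi2, 2))  # 0.0133 12.07
```

A chi-square of about 12.1 on 1 degree of freedom corresponds to p < 0.001, consistent with the reported significance.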
8.
World J Gastroenterol ; 30(3): 238-251, 2024 Jan 21.
Article in English | MEDLINE | ID: mdl-38314133

ABSTRACT

BACKGROUND: Esophageal variceal bleeding is a severe complication of liver cirrhosis and typically necessitates endoscopic hemostasis. The current standard treatment is endoscopic variceal ligation (EVL), and Western guidelines recommend antibiotic prophylaxis following hemostasis. However, given the improved prognosis of variceal bleeding due to advances in the management of bleeding and the treatment of liver cirrhosis, and given global concerns regarding the emergence of multidrug-resistant bacteria, there is a need to reassess routine antibiotic prophylaxis after hemostasis. AIM: To evaluate the effectiveness of antibiotic prophylaxis in patients treated with EVL. METHODS: We conducted a 13-year observational study using the Tokushukai medical database across 46 hospitals. Patients were divided into the prophylaxis group (received antibiotics on admission or the next day) and the non-prophylaxis group (did not receive antibiotics within one day of admission). The primary outcome was a composite of 6-wk mortality, 4-wk rebleeding, and 4-wk spontaneous bacterial peritonitis (SBP). The secondary outcomes were each individual component and in-hospital mortality. Logistic regression with inverse probability of treatment weighting was used. A subgroup analysis was conducted based on the Child-Pugh classification to determine its influence on the primary outcome measures, and sensitivity analyses for antibiotic type and duration were also performed. RESULTS: Among 980 patients, 790 were included (prophylaxis: 232, non-prophylaxis: 558). Most patients were males under the age of 65 years with a median Child-Pugh score of 8. The composite primary outcome occurred in 11.2% of patients in the prophylaxis group and 9.5% in the non-prophylaxis group. No significant differences in outcomes were observed between the groups (adjusted odds ratio, 1.11; 95% confidence interval, 0.61-1.99; P = 0.74). Individual outcomes such as 6-wk mortality, 4-wk rebleeding, 4-wk onset of SBP, and in-hospital mortality did not differ significantly between the groups. The primary outcome did not differ between the Child-Pugh subgroups. Similar results were observed in the sensitivity analyses. CONCLUSION: No significant benefit of antibiotic prophylaxis for esophageal variceal bleeding treated with EVL was detected in this study. Global reassessment of routine antibiotic prophylaxis is imperative.


Subject(s)
Esophageal Diseases, Esophageal and Gastric Varices, Aged, Female, Humans, Male, Anti-Bacterial Agents/therapeutic use, Antibiotic Prophylaxis, Esophageal and Gastric Varices/surgery, Esophageal and Gastric Varices/complications, Gastrointestinal Hemorrhage/etiology, Gastrointestinal Hemorrhage/prevention & control, Ligation/adverse effects, Liver Cirrhosis/complications, Liver Cirrhosis/drug therapy, Treatment Outcome, Middle Aged
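For reference, an unadjusted odds ratio with a Woolf (log-scale) 95% confidence interval from a 2×2 table looks like this (the study above additionally applied IPTW adjustment, which is not reproduced here; the cell counts below are illustrative, not the paper's):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """a,b = outcome/no-outcome in group 1; c,d = same in group 2."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

or_, lo, hi = odds_ratio_ci(26, 206, 53, 505)  # hypothetical counts
print(round(or_, 2), round(lo, 2), round(hi, 2))  # 1.2 0.73 1.98
```

A confidence interval straddling 1.0, as here, matches the kind of null result the study reports.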
9.
Circ Cardiovasc Interv ; 17(6): e013156, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38629314

ABSTRACT

BACKGROUND: We assessed the safety profile of tricuspid transcatheter edge-to-edge repair (TEER) in patients with right ventricular (RV) dysfunction. METHODS: We identified patients undergoing TEER to treat tricuspid regurgitation from June 2015 to October 2021 and assessed tricuspid annular plane systolic excursion (TAPSE) and RV fractional area change (RVFAC). RV dysfunction was defined as TAPSE <17 mm and RVFAC <35%. The primary end point was 30-day mortality after TEER. We also investigated the change in the RV function in the early phase and clinical outcomes at 2 years. RESULTS: The study participants (n=262) were at high surgical risk (EuroSCORE II, 6.2% [interquartile range, 4.0%-10.3%]). Among them, 44 patients met the criteria of RV dysfunction. Thirty-day mortality was 3.2% in patients with normal RV function and 2.3% in patients with RV dysfunction (P=0.99). Tricuspid regurgitation reduction to ≤2+ was consistently achieved irrespective of RV dysfunction (76.5% versus 70.5%; P=0.44). TAPSE and RVFAC declined after TEER in patients with normal RV function (TAPSE, 19.0±4.7 to 17.9±4.5 mm; P=0.001; RVFAC, 46.2%±8.1% to 40.3%±9.7%; P<0.001). In contrast, those parameters were unchanged or tended to increase in patients with RV dysfunction (TAPSE, 13.2±2.3 to 15.3±4.7 mm; P=0.011; RVFAC, 29.6%±4.1% to 31.6%±8.3%; P=0.14). Two years after TEER, compared with patients with normal RV function, patients with RV dysfunction had significantly higher mortality (27.0% versus 56.3%; P<0.001). CONCLUSIONS: TEER was safe and feasible to treat tricuspid regurgitation in patients with RV dysfunction. The decline in the RV function was observed in patients with normal RV function but not in patients with RV dysfunction.


Subject(s)
Cardiac Catheterization, Recovery of Function, Tricuspid Valve Insufficiency, Tricuspid Valve, Ventricular Dysfunction, Right, Ventricular Function, Right, Humans, Tricuspid Valve Insufficiency/physiopathology, Tricuspid Valve Insufficiency/surgery, Tricuspid Valve Insufficiency/mortality, Tricuspid Valve Insufficiency/diagnostic imaging, Male, Female, Ventricular Dysfunction, Right/physiopathology, Ventricular Dysfunction, Right/mortality, Ventricular Dysfunction, Right/diagnostic imaging, Ventricular Dysfunction, Right/etiology, Tricuspid Valve/physiopathology, Tricuspid Valve/surgery, Tricuspid Valve/diagnostic imaging, Treatment Outcome, Aged, Cardiac Catheterization/adverse effects, Cardiac Catheterization/mortality, Cardiac Catheterization/instrumentation, Time Factors, Risk Factors, Middle Aged, Retrospective Studies, Heart Valve Prosthesis Implantation/adverse effects, Heart Valve Prosthesis Implantation/mortality, Heart Valve Prosthesis Implantation/instrumentation, Aged, 80 and over, Risk Assessment
10.
Gen Thorac Cardiovasc Surg ; 72(6): 417-425, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38294659

ABSTRACT

OBJECTIVE: To establish a risk-stratification system for predicting postoperative recurrence of esophageal squamous cell carcinoma, this study aimed to evaluate the prognostic value of clusters based on blood inflammation and coagulation markers and to investigate their correlation with serum cytokines and genetic alterations. METHODS: This single-center, retrospective cohort study enrolled 491 patients with esophageal cancer who underwent subtotal esophagectomy between 2004 and 2012. For cluster exploration, nonhierarchical cluster analysis with the k-means algorithm was applied using serum C-reactive protein, albumin, fibrinogen, and the platelet-lymphocyte ratio as variables. Multivariate survival analysis was then conducted to investigate the association of the clusters with recurrence-free survival. To characterize the clusters, serum interleukin-6 and interleukin-8 were measured, and genetic alterations in primary tumors were assessed with the PleSSision-Rapid panel, which can evaluate 160 representative driver genes. RESULTS: Patients were classified into clusters 1, 2, and 3, comprising 24 (5%), 161 (33%), and 306 (62%) patients, respectively. Compared with cluster 3, clusters 1 and 2 had significantly worse recurrence-free survival. In the multivariable analysis using cluster, pStage, and age as covariates, cluster was an independent prognostic factor for recurrence-free survival (hazard ratio, 1.55; 95% confidence interval, 1.08-2.21; P = 0.02). Serum interleukin-6 and interleukin-8 levels were highest in cluster 1, followed by clusters 2 and 3. In 23 patients with available genomic profiles, no significant difference in representative genomic alterations was observed. CONCLUSIONS: Non-biased clustering using inflammation and coagulation markers identified an intense inflammatory subtype with an independent prognostic effect on recurrence-free survival.


Subject(s)
Esophageal Neoplasms, Esophageal Squamous Cell Carcinoma, Esophagectomy, Humans, Male, Female, Esophageal Neoplasms/blood, Esophageal Neoplasms/mortality, Esophageal Neoplasms/pathology, Esophageal Neoplasms/surgery, Esophageal Neoplasms/genetics, Retrospective Studies, Middle Aged, Aged, Esophageal Squamous Cell Carcinoma/blood, Esophageal Squamous Cell Carcinoma/surgery, Esophageal Squamous Cell Carcinoma/mortality, Esophageal Squamous Cell Carcinoma/genetics, Biomarkers, Tumor/blood, Neoplasm Recurrence, Local/blood, Cluster Analysis, Risk Assessment, Risk Factors, Inflammation/blood, Inflammation Mediators/blood
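The k-means step above (clustering patients on standardized marker values) can be sketched with scikit-learn. All values below are synthetic stand-ins for the four markers (CRP, albumin, fibrinogen, platelet-lymphocyte ratio); the group sizes and separation are invented for illustration:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Three synthetic marker profiles: intense, moderate, low inflammation
X = np.vstack([
    rng.normal([8.0, 2.5, 550, 300], 0.5, (20, 4)),   # high CRP, low albumin
    rng.normal([2.0, 3.5, 400, 200], 0.5, (60, 4)),
    rng.normal([0.3, 4.2, 300, 120], 0.5, (120, 4)),
])
# Standardize so no single marker's scale dominates the distance metric
X_std = StandardScaler().fit_transform(X)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X_std)
print(sorted(np.bincount(labels).tolist()))  # cluster sizes
```

Standardization before k-means matters here: fibrinogen-scale values (hundreds) would otherwise swamp CRP-scale values (single digits) in the Euclidean distance.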
11.
J Intensive Care ; 12(1): 21, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840225

ABSTRACT

BACKGROUND: Patients who receive invasive mechanical ventilation (IMV) in the intensive care unit (ICU) have exhibited lower in-hospital mortality rates than those treated outside the ICU. However, the patient-, hospital-, and regional-level factors influencing ICU admission of patients receiving IMV have not been quantitatively examined. METHODS: This retrospective cohort study used data from a nationwide Japanese inpatient administrative database and medical facility statistics. We included patients aged ≥ 15 years who underwent IMV between April 2018 and March 2019. The primary outcome was ICU admission on the day of IMV initiation. Multilevel logistic regression analyses incorporating patient-, hospital-, or regional-level variables were used to assess cluster effects by calculating the intraclass correlation coefficient (ICC), median odds ratio (MOR), and proportional change in variance (PCV). RESULTS: Among 83,346 eligible patients from 546 hospitals across 140 areas, 40.4% were treated in ICUs on the day IMV was started. ICU admission rates varied widely between hospitals (median 0.7%, interquartile range 0-44.5%) and regions (median 28.7%, interquartile range 0.9-46.2%). Multilevel analyses revealed significant effects of hospital cluster (ICC 82.2%, MOR 41.4) and regional cluster (ICC 67.3%, MOR 12.0). Including patient-level variables left these ICCs and MORs essentially unchanged (PCV 2.3% and -1.0%, respectively), whereas further adjustment for hospital- and regional-level variables decreased the ICC and MOR (PCV 95.2% and 85.6%, respectively). Among the hospital- and regional-level variables, the presence of ICU beds at the hospital and in the region was strongly and significantly associated with ICU admission. CONCLUSIONS: Our results revealed that hospital and regional factors, rather than patient-related ones, primarily determined ICU admission for patients receiving IMV. This has important implications for healthcare policymakers planning interventions for optimal ICU resource allocation.
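The three cluster-effect measures reported above follow from the random-intercept variance of a multilevel logistic model. This sketch uses the standard latent-variable formulas; the variance values are illustrative, not the study's estimates:

```python
import math

def icc_logistic(var):
    """ICC for a multilevel logistic model: var / (var + pi^2/3)."""
    return var / (var + math.pi ** 2 / 3)

def median_odds_ratio(var):
    """MOR = exp(sqrt(2*var) * z_0.75), with z_0.75 ~ 0.6745."""
    return math.exp(math.sqrt(2 * var) * 0.6745)

def pcv(var_empty, var_adjusted):
    """Proportional change in variance after adding covariates."""
    return (var_empty - var_adjusted) / var_empty

var0, var1 = 4.0, 0.2  # hypothetical cluster variances, before/after adjustment
print(round(icc_logistic(var0), 3))      # 0.549
print(round(median_odds_ratio(var0), 2)) # 6.74
print(round(pcv(var0, var1), 2))         # 0.95
```

A PCV near 1, as in this toy example, mirrors the study's finding that hospital- and regional-level covariates explained most of the cluster variance.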

12.
Circ Rep ; 6(3): 74-79, 2024 Mar 08.
Article in English | MEDLINE | ID: mdl-38464986

ABSTRACT

Background: Alcohol septal ablation (ASA) and septal myectomy (SM) are two options for septal reduction therapy (SRT) to treat medication-resistant symptomatic obstructive hypertrophic cardiomyopathy (HCM). Because differences in mortality rates after these SRT methods have not been extensively investigated in real-world settings, this study compared 1-year mortality rates after ASA and SM using a population-based database. Methods and Results: Using New York Statewide Planning and Research Cooperative System (SPARCS) data from 2005 to 2016, we performed a comparative effectiveness study of ASA vs. SM in patients with HCM. The outcome was all-cause death up to 360 days after SRT. We constructed a multivariable logistic regression model and performed sensitivity analyses with propensity score (PS)-matching and inverse probability of treatment weighting (IPTW) methods. We identified 755 patients with HCM who underwent SRT: 348 with ASA and 407 with SM. The multivariable analysis showed that all-cause death at 360 days after SRT was significantly less frequent in the ASA group (adjusted odds ratio=0.34; 95% confidence interval [CI] 0.13-0.84; P=0.02). The PS-matching and IPTW methods also supported a lower mortality rate in the ASA group at 360 days post-SRT. Conclusions: In this population-based study of patients with HCM who underwent SRT in a real-world setting, the 1-year all-cause mortality rate was significantly lower with ASA than with SM.
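Propensity-score matching, named above as a sensitivity analysis, is often implemented as greedy 1:1 nearest-neighbor matching within a caliper. A minimal sketch on made-up scores (the matching variant and caliper actually used in the study may differ):

```python
def greedy_match(treated_ps, control_ps, caliper=0.05):
    """Return (treated_idx, control_idx) pairs within the caliper, 1:1 without replacement."""
    available = dict(enumerate(control_ps))
    pairs = []
    for i, ps in sorted(enumerate(treated_ps), key=lambda x: x[1]):
        if not available:
            break
        # Closest still-unmatched control by propensity-score distance
        j, cps = min(available.items(), key=lambda kv: abs(kv[1] - ps))
        if abs(cps - ps) <= caliper:
            pairs.append((i, j))
            del available[j]          # each control used at most once
    return pairs

pairs = greedy_match([0.30, 0.52, 0.90], [0.31, 0.50, 0.55, 0.10])
print(pairs)  # [(0, 0), (1, 1)]
```

Note that the treated subject with score 0.90 goes unmatched: no control lies within the caliper, which is exactly how calipers trade sample size for comparability.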

13.
Sci Rep ; 14(1): 14911, 2024 06 28.
Article in English | MEDLINE | ID: mdl-38942898

ABSTRACT

We aimed to identify the clinical subtypes in individuals starting long-term care in Japan and examined their association with prognoses. Using linked medical insurance claims data and survey data for care-need certification in a large city, we identified participants who started long-term care. Grouping them based on 22 diseases recorded in the past 6 months using fuzzy c-means clustering, we examined the longitudinal association between clusters and death or care-need level deterioration within 2 years. We analyzed 4,648 participants (median age 83 [interquartile range 78-88] years, female 60.4%) between October 2014 and March 2019 and categorized them into (i) musculoskeletal and sensory, (ii) cardiac, (iii) neurological, (iv) respiratory and cancer, (v) insulin-dependent diabetes, and (vi) unspecified subtypes. The results of clustering were replicated in another city. Compared with the musculoskeletal and sensory subtype, the adjusted hazard ratio (95% confidence interval) for death was 1.22 (1.05-1.42), 1.81 (1.54-2.13), and 1.21 (1.00-1.46) for the cardiac, respiratory and cancer, and insulin-dependent diabetes subtypes, respectively. The care-need levels more likely worsened in the cardiac, respiratory and cancer, and unspecified subtypes than in the musculoskeletal and sensory subtype. In conclusion, distinct clinical subtypes exist among individuals initiating long-term care.


Subject(s)
Long-Term Care, Humans, Female, Aged, Male, Japan/epidemiology, Cluster Analysis, Aged, 80 and over, Long-Term Care/statistics & numerical data, Prognosis, Neoplasms/mortality, Neoplasms/epidemiology, Neoplasms/classification
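Unlike k-means, the fuzzy c-means algorithm used above assigns each subject a membership degree in every cluster. A compact numpy sketch of the standard update rules, on synthetic two-feature data (fuzzifier m, iteration count, and data are all invented for illustration):

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=50, seed=0):
    """Return (memberships U, centers); each row of U sums to 1."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)             # normalize initial memberships
    for _ in range(iters):
        W = U ** m                                # fuzzified weights
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        # Distances of every point to every center (epsilon avoids divide-by-zero)
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))             # u_ik proportional to d_ik^(-2/(m-1))
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

# Two well-separated synthetic groups in a 2-feature space
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
U, centers = fuzzy_c_means(X, c=2)
print(np.round(U.sum(axis=1), 6))  # each row sums to 1
```

The soft memberships are what let a study like this describe borderline subjects as partially belonging to several clinical subtypes rather than forcing a hard label.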
14.
PLOS Digit Health ; 3(8): e0000578, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39163277

ABSTRACT

It is expected but unknown whether machine-learning models can outperform regression models, such as a logistic regression (LR) model, especially when the number and types of predictor variables increase in electronic health records (EHRs). We aimed to compare the predictive performance of gradient-boosted decision tree (GBDT), random forest (RF), deep neural network (DNN), and LR with the least absolute shrinkage and selection operator (LR-LASSO) for unplanned readmission. We used EHRs of patients discharged alive from 38 hospitals in 2015-2017 for derivation and in 2018 for validation, including basic characteristics, diagnosis, surgery, procedure, and drug codes, and blood-test results. The outcome was 30-day unplanned readmission. We created six patterns of data tables having different numbers of binary variables (that ≥5% or ≥1% of patients or ≥10 patients had) with and without blood-test results. For each pattern of data tables, we used the derivation data to establish the machine-learning and LR models, and used the validation data to evaluate the performance of each model. The incidence of outcome was 6.8% (23,108/339,513 discharges) and 6.4% (7,507/118,074 discharges) in the derivation and validation datasets, respectively. For the first data table with the smallest number of variables (102 variables that ≥5% of patients had, without blood-test results), the c-statistic was highest for GBDT (0.740), followed by RF (0.734), LR-LASSO (0.720), and DNN (0.664). For the last data table with the largest number of variables (1543 variables that ≥10 patients had, including blood-test results), the c-statistic was highest for GBDT (0.764), followed by LR-LASSO (0.755), RF (0.751), and DNN (0.720), suggesting that the difference between GBDT and LR-LASSO was small and their 95% confidence intervals overlapped. 
In conclusion, GBDT generally outperformed LR-LASSO in predicting unplanned readmission, but the difference in c-statistics became smaller as the number of variables increased and blood-test results were included.
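The c-statistic compared across models above is the probability that a randomly chosen readmitted patient receives a higher predicted score than a randomly chosen non-readmitted one (ties counting half). A direct pairwise sketch on toy scores:

```python
def c_statistic(scores, labels):
    """Rank-based AUC: P(score_pos > score_neg), ties count 0.5."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

scores = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4]
labels = [1,   1,   0,   1,   0,    0]
c = c_statistic(scores, labels)
print(c)
```

Here 8 of the 9 positive-negative pairs are correctly ordered, giving c = 8/9 ≈ 0.889, in the same range as the c-statistics reported above. The O(P×N) pairwise form is fine for illustration; production code uses rank sums instead.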

15.
Ann Clin Epidemiol ; 4(3): 63-71, 2022.
Article in English | MEDLINE | ID: mdl-38504945

ABSTRACT

Machine learning refers to a series of processes in which a computer finds rules from a vast amount of data. With recent advances in computer technology and the availability of a wide variety of health data, machine learning has rapidly developed and been applied in medical research. Currently, there are three types of machine learning: supervised, unsupervised, and reinforcement learning. In medical research, supervised learning is commonly used for diagnoses and prognoses, while unsupervised learning is used for phenotyping a disease, and reinforcement learning for maximizing favorable results, such as optimization of total patients' waiting time in the emergency department. The present article focuses on the concept and application of supervised learning in medicine, the most commonly used machine learning approach in medicine, and provides a brief explanation of four algorithms widely used for prediction (random forests, gradient-boosted decision tree, support vector machine, and neural network). Among these algorithms, the neural network has further developed into deep learning algorithms to solve more complex tasks. Along with simple classification problems, deep learning is commonly used to process medical imaging, such as retinal fundus photographs for diabetic retinopathy diagnosis. Although machine learning can bring new insights into medicine by processing a vast amount of data that are often beyond human capacity, algorithms can also fail when domain knowledge is neglected. The combination of algorithms and human cognitive ability is a key to the successful application of machine learning in medicine.
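Two of the four prediction algorithms the review names can be demonstrated end to end in a few lines. This is a generic supervised-learning sketch on a synthetic dataset, assuming scikit-learn; it is not code from the review:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic binary classification task standing in for a clinical outcome
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

accs = {}
for model in (RandomForestClassifier(random_state=0),
              GradientBoostingClassifier(random_state=0)):
    # Fit on the training split, score accuracy on the held-out split
    accs[type(model).__name__] = model.fit(X_tr, y_tr).score(X_te, y_te)
print(accs)
```

The held-out split is the essential part: evaluating on data the model never saw is what separates a prediction estimate from a description of the training set.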
