1.
Article in English | MEDLINE | ID: mdl-38652239

ABSTRACT

BACKGROUND: Whether hypoglycemic pharmacotherapy can alleviate the risk of dementia remains controversial, particularly regarding dipeptidyl peptidase 4 (DPP4) inhibitors versus metformin. Our objective was to investigate whether the initiation of DPP4 inhibitors, as opposed to metformin, was linked to a reduced risk of dementia. METHODS: We included individuals with type 2 diabetes over 40 years old who were new users of DPP4 inhibitors or metformin in the Chinese Renal Disease Data System (CRDS) database between 2009 and 2020. The study employed Kaplan-Meier and Cox regression for survival analysis and the Fine and Gray model for the competing risk of death. RESULTS: Following 1:1 propensity score matching, the analysis included 3626 new users of DPP4 inhibitors and an equal number of new users of metformin. After adjusting for potential confounders, the use of DPP4 inhibitors was associated with a decreased risk of all-cause dementia compared to metformin (hazard ratio (HR) 0.63, 95% confidence interval (CI) 0.45-0.89). Subgroup analysis revealed that DPP4 inhibitor use was associated with a reduced incidence of dementia in individuals who initiated drug therapy at the age of 60 years or older (HR 0.69, 95% CI 0.48-0.98), those without baseline macrovascular complications (HR 0.62, 95% CI 0.41-0.96), and those without baseline microvascular complications (HR 0.67, 95% CI 0.47-0.98). CONCLUSION: In this real-world study, DPP4 inhibitors were associated with a lower risk of dementia than metformin in individuals with type 2 diabetes, particularly in older people and those without diabetes-related comorbidities.
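The 1:1 propensity score matching step described in the METHODS can be sketched as a greedy nearest-neighbour pairing on estimated propensity scores. This is a minimal illustration with invented scores and an assumed 0.05 caliper; the study's actual matching procedure may differ.

```python
# Illustrative sketch of 1:1 greedy nearest-neighbour propensity score
# matching (hypothetical scores; not the CRDS analysis itself).

def match_one_to_one(treated, controls, caliper=0.05):
    """Greedily pair each treated score with the nearest unused control
    score within `caliper`; returns a list of (treated_idx, control_idx)."""
    pairs = []
    used = set()
    # Matching the highest-scoring (hardest-to-match) subjects first is a
    # common heuristic; the ordering is an assumption, not from the paper.
    order = sorted(range(len(treated)), key=lambda i: -treated[i])
    for i in order:
        best, best_dist = None, caliper
        for j, c in enumerate(controls):
            if j in used:
                continue
            d = abs(treated[i] - c)
            if d <= best_dist:
                best, best_dist = j, d
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

treated_ps = [0.81, 0.42, 0.55]        # invented propensity scores
control_ps = [0.80, 0.40, 0.57, 0.10]
pairs = match_one_to_one(treated_ps, control_ps)
```

Each treated subject is paired with at most one control, and unmatched controls (here the 0.10 score) are discarded, which is why matched cohorts like the 3626-vs-3626 one above end up the same size.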

2.
Lancet Reg Health West Pac ; 43: 100817, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38456090

ABSTRACT

Cardiometabolic diseases (CMDs) are major non-communicable diseases that contribute a huge disease burden in the Western Pacific region (WPR). Digital health (dHealth) technologies, such as wearable gadgets, mobile apps, and artificial intelligence (AI), facilitate interventions for CMD prevention and treatment. Currently, most studies on dHealth and CMDs in the WPR have been conducted in a few high- and middle-income countries, such as Australia, China, Japan, the Republic of Korea, and New Zealand. Evidence indicates that dHealth services promote early prevention through behavioral interventions, and AI-based innovations provide automated diagnosis and clinical decision support. dHealth tools have facilitated doctor-patient interaction, improving the effectiveness, experience, and communication of healthcare services, and developed rapidly during the coronavirus disease 2019 pandemic. In the future, improving dHealth services in the WPR will require greater policy support, enhanced technological innovation and privacy protection, and cost-effectiveness research.

3.
BMC Med ; 22(1): 56, 2024 02 05.
Article in English | MEDLINE | ID: mdl-38317226

ABSTRACT

BACKGROUND: A comprehensive overview of artificial intelligence (AI) for cardiovascular disease (CVD) prediction and a screening tool for the independent external validation of AI models (AI-Ms) are lacking. This systematic review aims to identify, describe, and appraise AI-Ms for CVD prediction in general and special populations and to develop a new independent validation score (IVS) for evaluating the replicability of AI-Ms. METHODS: PubMed, Web of Science, Embase, and the IEEE library were searched up to July 2021. Data extraction and analysis covered populations, distribution, predictors, algorithms, etc. The risk of bias was evaluated with the prediction model risk of bias assessment tool (PROBAST). Subsequently, we designed the IVS for model replicability evaluation, with five steps across five items: transparency of algorithms, performance of models, feasibility of reproduction, risk of reproduction, and clinical implication. The review is registered in PROSPERO (No. CRD42021271789). RESULTS: Of 20,887 screened references, 79 articles (82.5% from 2017-2021) were included, containing 114 datasets (67 from Europe and North America, and none from Africa). We identified 486 AI-Ms, the majority of which were in development (n = 380); none had undergone independent external validation. A total of 66 distinct algorithms were found; however, 36.4% were used only once and only 39.4% were used more than three times. A large number of different predictors (range 5-52,000, median 21) and a wide range of sample sizes (range 80-3,660,000, median 4466) were observed. All models were at high risk of bias according to PROBAST, primarily due to incorrect use of statistical methods. IVS analysis rated only 10 models as "recommended"; 281 and 187 were "not recommended" and "warning," respectively.
CONCLUSION: AI has led the digital revolution in the field of CVD prediction but is still at an early stage of development, owing to defects in research design, reporting, and evaluation systems. The IVS we developed may contribute to independent external validation and the development of this field.


Subject(s)
Artificial Intelligence , Cardiovascular Diseases , Humans , Cardiovascular Diseases/diagnosis , Cardiovascular Diseases/epidemiology , Algorithms , Africa , Europe
4.
Article in English | MEDLINE | ID: mdl-38262746

ABSTRACT

BACKGROUND AND HYPOTHESIS: Postoperative acute kidney injury (AKI) is a common condition after surgery; however, large, high-quality data on the nationwide epidemiology of postoperative AKI in China are limited. This study aimed to determine the incidence, risk factors, and outcomes of postoperative AKI among patients undergoing surgery in China. METHODS: This was a large, multicenter, retrospective study performed in 16 tertiary medical centers in China. Adult (at least 18 years old) patients who underwent surgical procedures from January 1, 2013 to December 31, 2019 were included. Postoperative AKI was defined by the Kidney Disease: Improving Global Outcomes creatinine criteria. The associations between AKI and in-hospital outcomes were investigated using logistic regression models adjusted for potential confounders. RESULTS: Among 520 707 patients included in our study, 25 830 (5.0%) developed postoperative AKI. The incidence of postoperative AKI varied by surgery type: it was highest in cardiac surgery (34.6%), followed by urologic (8.7%) and general (4.2%) surgeries. Of the postoperative AKI cases, 89.2% were detected within the first 2 postoperative days; however, only 584 (2.3%) patients with postoperative AKI had a diagnosis of AKI recorded at discharge. Risk factors for postoperative AKI included advanced age, male sex, lower baseline kidney function, pre-surgery hospital stay ≤ 3 days or > 7 days, hypertension, diabetes mellitus, and use of PPIs or diuretics. The risk of in-hospital death increased with the stage of AKI. In addition, patients with postoperative AKI had a longer hospital stay (12 vs 19 days) and were more likely to require intensive care (13.1% vs 45.0%) and renal replacement therapy (0.4% vs 7.7%). CONCLUSIONS: Postoperative AKI was common across surgery types in China, particularly among patients undergoing cardiac surgery. Implementation and evaluation of an alarm system will be important in the battle against postoperative AKI.

5.
Eur Urol ; 85(5): 457-465, 2024 May.
Article in English | MEDLINE | ID: mdl-37414703

ABSTRACT

BACKGROUND: Conservative management is an option for prostate cancer (PCa) patients, either with the objective of delaying or even avoiding curative therapy, or of waiting until palliative treatment is needed. PIONEER, funded by the European Commission Innovative Medicines Initiative, aims to improve PCa care across Europe through the application of big data analytics. OBJECTIVE: To describe the clinical characteristics and long-term outcomes of PCa patients on conservative management, using an international large network of real-world data. DESIGN, SETTING, AND PARTICIPANTS: From an initial cohort of >100 000 000 adult individuals included in eight databases evaluated during a virtual study-a-thon hosted by PIONEER, we identified newly diagnosed PCa cases (n = 527 311). Among those, we selected patients who did not receive curative or palliative treatment within 6 mo of diagnosis (n = 123 146). OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Patient and disease characteristics were reported. The number of patients who experienced the main study outcomes was quantified for each stratum and the overall cohort. Kaplan-Meier analyses were used to estimate the distribution of time-to-event data. RESULTS AND LIMITATIONS: The most common comorbidities were hypertension (35-73%), obesity (9.2-54%), and type 2 diabetes (11-28%). The rate of PCa-related symptomatic progression ranged between 2.6% and 6.2%. Hospitalization (12-25%) and emergency department visits (10-14%) were common events during the first year of follow-up. The probability of being free from both palliative and curative treatments decreased during follow-up. Limitations include a lack of information on patient and disease characteristics and on treatment intent. CONCLUSIONS: Our results allow us to better understand the current landscape of patients with PCa managed conservatively. PIONEER offers a unique opportunity to characterize the baseline features and outcomes of conservatively managed PCa patients using real-world data. PATIENT SUMMARY: Up to 25% of men with prostate cancer (PCa) managed conservatively experienced hospitalization and emergency department visits within the first year after diagnosis; 6% experienced PCa-related symptoms. The probability of receiving therapies for PCa decreased with time elapsed since diagnosis.
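The Kaplan-Meier analyses used above to estimate time-to-event distributions reduce to a simple product-limit calculation over distinct event times. A minimal sketch with invented follow-up data (not PIONEER data):

```python
# Minimal Kaplan-Meier (product-limit) estimator for time-to-event data.
# Toy numbers for illustration only.

def kaplan_meier(times, events):
    """times: follow-up time per subject; events: 1 = event, 0 = censored.
    Returns a list of (time, survival probability) at each event time."""
    at_risk = len(times)
    surv = 1.0
    curve = []
    for t in sorted(set(times)):
        # events observed exactly at time t
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        if d > 0:
            surv *= (at_risk - d) / at_risk   # product-limit update
            curve.append((t, surv))
        # everyone with time == t (event or censored) leaves the risk set
        at_risk -= sum(1 for ti in times if ti == t)
    return curve

times = [2, 3, 3, 5, 8, 8, 12]   # invented follow-up times
events = [1, 1, 0, 1, 0, 1, 0]   # 1 = event, 0 = censored
curve = kaplan_meier(times, events)
```

Censored subjects shrink the risk set without dropping the survival estimate, which is what lets cohorts with heavy loss to follow-up still yield unbiased curves.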


Subject(s)
Diabetes Mellitus, Type 2 , Prostatic Neoplasms , Male , Adult , Humans , Big Data , Prostatic Neoplasms/therapy , Prostatic Neoplasms/diagnosis , Disease-Free Survival , Europe
6.
Cancer Innov ; 2(3): 219-232, 2023 Jun.
Article in English | MEDLINE | ID: mdl-38089405

ABSTRACT

With the progress and development of computer technology, applying machine learning methods to cancer research has become an important research field. To analyze the research status and trends, main research topics, topic evolution, research collaborations, and potential directions of this field, this study conducts a bibliometric analysis of 6206 research articles worldwide, collected from PubMed between 2011 and 2021, concerning cancer research using machine learning methods. Python is used as the tool for bibliometric analysis, Gephi for social network analysis, and the Latent Dirichlet Allocation model for topic modeling. The trend analysis of articles not only reflects the innovative research at the intersection of machine learning and cancer but also demonstrates the field's vigorous development and growing impact. Among journals, Nature Communications is the most influential and Scientific Reports the most prolific. The United States and Harvard University have contributed the most to cancer research using machine learning methods. As for research topics, "support vector machine," "classification," and "deep learning" have been the core focuses of the field. These findings help scholars and related practitioners better understand the development status and trends of cancer research using machine learning methods, as well as its research hotspots.

7.
Kidney Dis (Basel) ; 9(6): 517-528, 2023 Dec.
Article in English | MEDLINE | ID: mdl-38089444

ABSTRACT

Introduction: Comprehensive data on the risk of hospital-acquired (HA) acute kidney injury (AKI) among adult users of opioid analgesics are lacking. This study aimed to systematically compare the risk of HA-AKI among users of various opioid analgesics. Methods: This multicenter, retrospective real-world study analyzed 255,265 adult hospitalized patients who received at least one prescription of an opioid analgesic during the first 30 days of hospitalization. The primary outcome was the time from the first opioid analgesic prescription to HA-AKI occurrence. Twelve subtypes of opioid analgesics were analyzed, including 9 for treating moderate-to-severe pain and 3 for mild-to-moderate pain. We examined the association between exposure to each subtype of opioid analgesic and the risk of HA-AKI using Cox proportional hazards models, with the most commonly used opioid analgesic as the reference group. Results: Compared to dezocine, the most commonly used opioid analgesic for treating moderate-to-severe pain, exposure to morphine, but not to the other 7 types of opioid analgesics, was associated with a significantly increased risk of HA-AKI (adjusted hazard ratio: 1.56, 95% confidence interval: 1.40-1.78). The association was consistent in stratified analyses and in a propensity-matched cohort. There were no significant differences in the risk of HA-AKI among opioid analgesic users with mild-to-moderate pain after adjusting for confounders. Conclusion: The use of morphine was associated with an increased risk of HA-AKI in adult patients with moderate-to-severe pain. Opioid analgesics other than morphine should be chosen preferentially when treating moderate-to-severe pain in adult patients at high risk of HA-AKI.

8.
Clin Kidney J ; 16(11): 2262-2270, 2023 Nov.
Article in English | MEDLINE | ID: mdl-37915920

ABSTRACT

Background: Acute kidney injury (AKI) has been associated with increased risks of new-onset and worsening proteinuria. However, epidemiologic data on post-AKI proteinuria are still lacking. This study aimed to determine the incidence, risk factors, and clinical correlates of post-AKI proteinuria among hospitalized patients. Methods: This study was conducted in a multicenter cohort including patients aged 18-100 years with hospital-acquired AKI (HA-AKI) hospitalized at 19 medical centers throughout China. The primary outcome was the incidence of post-AKI proteinuria. Secondary outcomes included AKI recovery and kidney disease progression. The results of both quantitative and qualitative urinary protein tests were used to define post-AKI proteinuria. A Cox proportional hazards model with stepwise regression was used to determine the risk factors for post-AKI proteinuria. Results: Of 6206 HA-AKI patients without proteinuria at baseline, 2102 (33.9%) had new-onset proteinuria, whereas of 5137 HA-AKI patients with baseline proteinuria, 894 (17.4%) had worsening proteinuria after AKI. Higher AKI stage and a preexisting CKD diagnosis were risk factors for both new-onset and worsening proteinuria, whereas treatment with renin-angiotensin system inhibitors was associated with an 11% lower risk of incident proteinuria. About 60% and 75% of patients with post-AKI new-onset and worsening proteinuria, respectively, recovered within 3 months. Worsening proteinuria was associated with a lower incidence of AKI recovery and a higher risk of kidney disease progression. Conclusions: Post-AKI proteinuria is common and usually transient among hospitalized patients. The risk profiles for new-onset and worsening post-AKI proteinuria differ markedly. Worsening proteinuria after AKI was associated with adverse kidney outcomes, which emphasizes the need for close monitoring of proteinuria after AKI.

9.
Front Public Health ; 11: 1219407, 2023.
Article in English | MEDLINE | ID: mdl-37546298

ABSTRACT

To comprehensively promote the development of medical institutions and address nationwide problems in healthcare, the government of China recently introduced an innovative national policy of "Trinity" smart hospital construction, which comprises "smart medicine," "smart services," and "smart management". A prototype of the evaluation system has been established, and a large number of construction achievements have emerged in many hospitals. This article summarizes the field to provide a reference for medical workers, hospital managers, and policymakers.


Subject(s)
Delivery of Health Care , Hospital Design and Construction , Humans , China , Policy , Hospitals
10.
Clin J Am Soc Nephrol ; 18(9): 1186-1194, 2023 09 01.
Article in English | MEDLINE | ID: mdl-37314777

ABSTRACT

BACKGROUND: The efficacy of immunosuppression in the management of immunoglobulin A (IgA) nephropathy remains highly controversial. This study was conducted to assess the effect of immunosuppression, compared with supportive care, in the real-world setting of IgA nephropathy. METHODS: A cohort of 3946 patients with IgA nephropathy, comprising 1973 new users of immunosuppressive agents and 1973 propensity score-matched recipients of supportive care, was analyzed from nationwide registry data covering January 2019 to May 2022 in China. The primary outcome was a composite of a 40% decrease in eGFR from baseline, kidney failure, and all-cause mortality. A Cox proportional hazards model was used to estimate the effects of immunosuppression on the composite outcome and its components in the propensity score-matched cohort. RESULTS: Among 3946 individuals (mean [SD] age 36 [10] years, mean [SD] eGFR 85 [28] ml/min per 1.73 m2, and mean [SD] proteinuria 1.4 [1.7] g/24 hours), 396 primary composite outcome events were observed, of which 156 (8%) were in the immunosuppression group and 240 (12%) in the supportive care group. Compared with supportive care, immunosuppression treatment was associated with a 40% lower risk of primary outcome events (adjusted hazard ratio, 0.60; 95% confidence interval, 0.48 to 0.75). Comparable effect sizes were observed for glucocorticoid monotherapy and mycophenolate mofetil alone. In the prespecified subgroup analysis, the treatment effects of immunosuppression were consistent across ages, sexes, levels of proteinuria, and values of eGFR at baseline. Serious adverse events were more frequent in the immunosuppression group than in the supportive care group. CONCLUSIONS: Immunosuppressive therapy, compared with supportive care, was associated with a 40% lower risk of clinically important kidney outcomes in patients with IgA nephropathy.


Subject(s)
Glomerulonephritis, IGA , Humans , Adult , Glomerulonephritis, IGA/complications , Glomerulonephritis, IGA/drug therapy , Glomerular Filtration Rate , Kidney , Immunosuppression Therapy/adverse effects , Immunosuppressive Agents/adverse effects , Proteinuria/drug therapy , Proteinuria/etiology
12.
Nat Commun ; 14(1): 3739, 2023 06 22.
Article in English | MEDLINE | ID: mdl-37349292

ABSTRACT

Acute kidney injury (AKI) is prevalent and a leading cause of in-hospital death worldwide. Early prediction of AKI-related clinical events and timely intervention for high-risk patients could improve outcomes. We develop a deep learning model, based on a nationwide multicenter cooperative network across China that includes 7,084,339 hospitalized patients, to dynamically predict the risk of in-hospital death (primary outcome) and dialysis (secondary outcome) for patients who developed AKI during hospitalization. A total of 137,084 eligible patients with AKI constitute the analysis set. In the derivation cohort, the areas under the receiver operating characteristic curve (AUROC) for 24-h, 48-h, 72-h, and 7-day death are 95.05%, 94.23%, 93.53%, and 93.09%, respectively. For the dialysis outcome, the AUROCs for the same time spans are 88.32%, 83.31%, 83.20%, and 77.99%, respectively. The predictive performance is consistent in both internal and external validation cohorts. The model can predict important outcomes of patients with AKI, which could be helpful for the early management of AKI.
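The AUROC figures reported above can be read through the Mann-Whitney interpretation of the statistic: the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one. A small sketch with invented labels and scores, not the study's model outputs:

```python
# AUROC computed pairwise as the Mann-Whitney probability that a positive
# case outscores a negative case (ties count half). Toy data only.

def auroc(labels, scores):
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5  # tied scores contribute half a "win"
    return wins / (len(pos) * len(neg))

labels = [1, 1, 0, 0, 0, 1]               # invented outcomes
scores = [0.9, 0.7, 0.6, 0.2, 0.7, 0.8]   # invented risk scores
a = auroc(labels, scores)
```

An AUROC of 95.05% therefore means the model ranks a patient who died above a survivor about 95 times out of 100, independent of any decision threshold.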


Subject(s)
Acute Kidney Injury , Renal Dialysis , Humans , Hospital Mortality , Risk Factors , Renal Dialysis/adverse effects , Acute Kidney Injury/diagnosis , Acute Kidney Injury/therapy , Acute Kidney Injury/etiology , Hospitals , Retrospective Studies
13.
CMAJ ; 195(21): E729-E738, 2023 05 29.
Article in English | MEDLINE | ID: mdl-37247880

ABSTRACT

BACKGROUND: The role of statin therapy in the development of kidney disease in patients with type 2 diabetes mellitus (DM) remains uncertain. We aimed to determine the relationships between statin initiation and kidney outcomes in patients with type 2 DM. METHODS: Through a new-user design, we conducted a multicentre retrospective cohort study using the China Renal Data System database (which includes inpatient and outpatient data from 19 urban academic centres across China). We included patients with type 2 DM who were aged 40 years or older and admitted to hospital between Jan. 1, 2000, and May 26, 2021, and excluded those with pre-existing chronic kidney disease and those who were already on statins or without follow-up at an affiliated outpatient clinic within 90 days after discharge. The primary exposure was initiation of a statin. The primary outcome was the development of diabetic kidney disease (DKD), defined as a composite of the occurrence of kidney dysfunction (estimated glomerular filtration rate [eGFR] < 60 mL/min/1.73 m2 and > 25% decline from baseline) and proteinuria (a urinary albumin-to-creatinine ratio ≥ 30 mg/g and > 50% increase from baseline), sustained for at least 90 days; secondary outcomes included development of kidney function decline (a sustained > 40% decline in eGFR). We used Cox proportional hazards regression to evaluate the relationships between statin initiation and kidney outcomes, as well as to conduct subgroup analyses according to patient characteristics, presence or absence of dyslipidemia, and pattern of dyslipidemia. For statin initiators, we explored the association between different levels of lipid control and outcomes. We conducted analyses using propensity overlap weighting to balance the participant characteristics. 
RESULTS: Among 7272 statin initiators and 12 586 noninitiators in the weighted cohort, statin initiation was associated with lower risks of incident DKD (hazard ratio [HR] 0.72, 95% confidence interval [CI] 0.62-0.83) and kidney function decline (HR 0.60, 95% CI 0.44-0.81). We obtained results similar to the primary analyses for participants with differing patterns of dyslipidemia, those prescribed different statins, and after stratification according to participant characteristics. Among statin initiators, those with intensive control of low-density lipoprotein cholesterol (LDL-C) (< 1.8 mmol/L) had a lower risk of incident DKD (HR 0.51, 95% CI 0.32-0.81) than those with inadequate lipid control (LDL-C ≥ 3.4 mmol/L). INTERPRETATION: For patients with type 2 DM admitted to and followed up in academic centres, statin initiation was associated with a lower risk of kidney disease development, particularly in those with intensive control of LDL-C. These findings suggest that statin initiation may be an effective and reasonable approach for preventing kidney disease in patients with type 2 DM.


Subject(s)
Diabetes Mellitus, Type 2 , Dyslipidemias , Hydroxymethylglutaryl-CoA Reductase Inhibitors , Renal Insufficiency, Chronic , Humans , Hydroxymethylglutaryl-CoA Reductase Inhibitors/adverse effects , Diabetes Mellitus, Type 2/drug therapy , Diabetes Mellitus, Type 2/epidemiology , Cholesterol, LDL , Retrospective Studies , Renal Insufficiency, Chronic/epidemiology , Dyslipidemias/drug therapy , Dyslipidemias/epidemiology
16.
Drug Saf ; 45(5): 511-519, 2022 05.
Article in English | MEDLINE | ID: mdl-35579814

ABSTRACT

With the rapid development of artificial intelligence (AI) technologies and the large amount of pharmacovigilance-related data stored electronically, data-driven automated methods urgently need to be applied to all aspects of pharmacovigilance to assist healthcare professionals. However, the quantity and quality of data directly affect the performance of AI, and there are particular challenges to implementing AI in limited-resource settings. Analyzing the challenges and solutions for AI-based pharmacovigilance in resource-limited settings can improve pharmacovigilance frameworks and capabilities in these settings. In this review, we group the challenges into four categories: establishing a database for an AI-based pharmacovigilance system, lack of human resources, weak AI technology, and insufficient government support. We also discuss possible solutions and future perspectives for AI-based pharmacovigilance in resource-limited settings.


Subject(s)
Artificial Intelligence , Pharmacovigilance , Databases, Factual , Health Personnel , Humans , Technology
17.
Clin Epidemiol ; 14: 369-384, 2022.
Article in English | MEDLINE | ID: mdl-35345821

ABSTRACT

Purpose: Routinely collected real-world data (RWD) have great utility in aiding the novel coronavirus disease (COVID-19) pandemic response. Here we present the international Observational Health Data Sciences and Informatics (OHDSI) Characterizing Health Associated Risks and Your Baseline Disease In SARS-COV-2 (CHARYBDIS) framework for the standardisation and analysis of COVID-19 RWD. Patients and Methods: We conducted a descriptive retrospective database study using a federated network of data partners in the United States, Europe (the Netherlands, Spain, the UK, Germany, France and Italy) and Asia (South Korea and China). The study protocol and analytical package were released on 11th June 2020 and are iteratively updated via GitHub. We identified three non-mutually exclusive cohorts: 4,537,153 individuals with a clinical COVID-19 diagnosis or positive test, 886,193 hospitalized with COVID-19, and 113,627 hospitalized with COVID-19 requiring intensive services. Results: We aggregated over 22,000 unique characteristics describing patients with COVID-19. All comorbidities, symptoms, medications, and outcomes are described by cohort in aggregate counts and are readily available online. Globally, we observed similarities between the USA and Europe: more women diagnosed than men but more men hospitalized than women, and most diagnosed cases aged 25-60 years versus most hospitalized cases aged 60-80 years. South Korea differed, with more women than men hospitalized. Common comorbidities included type 2 diabetes, hypertension, chronic kidney disease and heart disease. Common presenting symptoms were dyspnea, cough and fever. Symptom data were more often available in the hospitalized cohorts than in the diagnosed cohort. Conclusion: We constructed a global, multi-centre view to describe trends in COVID-19 progression, management and evolution over time. By characterising baseline variability in patients and geography, our work provides critical context that might otherwise be misconstrued as data quality issues. This is important as we perform studies on adverse events of special interest in COVID-19 vaccine surveillance.
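The aggregate characterisation described in the Results (comorbidity, symptom, and outcome counts reported by cohort) amounts to grouped counting over patient records. A toy sketch with invented records, not CHARYBDIS data:

```python
# Toy illustration of per-cohort aggregate characterisation: counting
# comorbidity occurrences within each cohort. All records are invented.
from collections import Counter, defaultdict

records = [
    {"cohort": "diagnosed",    "comorbidities": ["hypertension", "T2DM"]},
    {"cohort": "diagnosed",    "comorbidities": ["hypertension"]},
    {"cohort": "hospitalized", "comorbidities": ["T2DM", "CKD"]},
    {"cohort": "hospitalized", "comorbidities": ["hypertension", "T2DM"]},
]

# cohort name -> Counter of comorbidity frequencies
counts = defaultdict(Counter)
for rec in records:
    counts[rec["cohort"]].update(rec["comorbidities"])
```

Publishing only such aggregate counts, rather than patient-level rows, is what allows a federated network to share results across sites without moving identifiable data.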

18.
Front Cardiovasc Med ; 9: 845210, 2022.
Article in English | MEDLINE | ID: mdl-35321110

ABSTRACT

Background: There is currently no model for predicting the occurrence of venous thromboembolism (VTE) in patients with lung cancer. Machine learning (ML) techniques are increasingly being adopted in the medical field because of their capabilities for intelligent analysis and their scalability. This study aimed to develop and validate ML models to predict the incidence of VTE among lung cancer patients. Methods: Data from lung cancer patients, with and without VTE, at a Grade 3A cancer hospital in China were included. Patient characteristics and clinical predictors related to VTE were collected. The primary endpoint was a diagnosis of VTE during the index hospitalization. We calculated and compared the area under the receiver operating characteristic curve (AUROC) for the best-performing model (a random forest model) selected through multiple model comparisons, and investigated feature contributions during training using both permutation importance scores and the impurity-based feature importance scores of the random forest model. Results: In total, 3,398 patients were included in our study, 125 of whom experienced VTE during their hospital stay. The ROC curve and precision-recall curve (PRC) for the random forest model showed an AUROC of 0.91 (95% CI: 0.893-0.926) and an AUPRC of 0.43 (95% CI: 0.363-0.500). For the simplified model, the five most relevant features were selected: Karnofsky Performance Status (KPS), a history of VTE, recombinant human endostatin, EGFR-TKI, and platelet count. We re-trained a random forest classifier, which achieved an AUROC of 0.87 (95% CI: 0.802-0.917) and an AUPRC of 0.30 (95% CI: 0.265-0.358). Conclusion: Because there was no conspicuous decrease in the model's performance when predicting with fewer features, we concluded that our simplified model would be more applicable in real-life clinical settings. The model developed using ML algorithms in our study has the potential to improve the early detection and prediction of the incidence of VTE in patients with lung cancer.
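The permutation importance scores mentioned in the Methods measure how much a model's performance drops when one feature column is randomly shuffled, breaking its link to the outcome. A minimal sketch with a trivial hand-coded rule standing in for the study's random forest; all data are invented:

```python
# Sketch of permutation importance: shuffle one feature column and measure
# the drop in accuracy. The "model" is a trivial hand-coded rule, not the
# random forest from the study.
import random

def accuracy(model, X, y):
    return sum(model(row) == yi for row, yi in zip(X, y)) / len(y)

def permutation_importance(model, X, y, col, seed=0):
    base = accuracy(model, X, y)
    rng = random.Random(seed)
    shuffled = [row[col] for row in X]
    rng.shuffle(shuffled)
    # rebuild the data with only column `col` permuted
    Xp = [row[:col] + [v] + row[col + 1:] for row, v in zip(X, shuffled)]
    return base - accuracy(model, Xp, y)   # importance = performance drop

# feature 0 fully determines the label; feature 1 is pure noise
X = [[0, 5], [1, 3], [0, 1], [1, 4], [0, 2], [1, 0]]
y = [0, 1, 0, 1, 0, 1]
model = lambda row: row[0]                 # predicts directly from feature 0
imp0 = permutation_importance(model, X, y, col=0)
imp1 = permutation_importance(model, X, y, col=1)
```

Unlike impurity-based importance, this procedure treats the model as a black box, which is why the study could report both and compare them.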

19.
JMIR Med Inform ; 10(3): e28781, 2022 Mar 03.
Article in English | MEDLINE | ID: mdl-35238790

ABSTRACT

BACKGROUND: Modern clinical care in intensive care units is full of rich data, and machine learning has great potential to support clinical decision-making. The development of intelligent machine learning-based clinical decision support systems faces great opportunities and challenges. Clinical decision support systems may directly help clinicians accurately diagnose, predict outcomes, identify risk events, or decide treatments at the point of care. OBJECTIVE: We aimed to review the research and application of machine learning-enabled clinical decision support studies in intensive care units to help clinicians, researchers, developers, and policy makers better understand the advantages and limitations of machine learning-supported diagnosis, outcome prediction, risk event identification, and intensive care unit point-of-care recommendations. METHODS: We searched papers published in the PubMed database between January 1980 and October 2020. We defined selection criteria to identify papers that focused on machine learning-enabled clinical decision support studies in intensive care units and reviewed the following aspects: research topics, study cohorts, machine learning models, analysis variables, and evaluation metrics. RESULTS: A total of 643 papers were collected, and using our selection criteria, 97 studies were found. Studies were categorized into 4 topics: monitoring, detection, and diagnosis (13/97, 13.4%); early identification of clinical events (32/97, 33.0%); outcome prediction and prognosis assessment (46/97, 47.4%); and treatment decision (6/97, 6.2%). Of the 97 papers, 82 (84.5%) studies used data from adult patients, 9 (9.3%) from pediatric patients, and 6 (6.2%) from neonates.
We found that 65 (67.0%) studies used data from a single center and 32 (33.0%) used a multicenter data set; 88 (90.7%) studies used supervised learning, 3 (3.1%) used unsupervised learning, and 6 (6.2%) used reinforcement learning. Clinical variable categories, starting with the most frequently used, were demographics (n=74), laboratory values (n=59), vital signs (n=55), scores (n=48), ventilation parameters (n=43), comorbidities (n=27), medications (n=18), outcome (n=14), fluid balance (n=13), nonmedicine therapy (n=10), symptoms (n=7), and medical history (n=4). The most frequently adopted evaluation metrics for clinical data modeling studies included area under the receiver operating characteristic curve (n=61), sensitivity (n=51), specificity (n=41), accuracy (n=29), and positive predictive value (n=23). CONCLUSIONS: Early identification of clinical events, together with outcome prediction and prognosis assessment, accounted for approximately 80% of the studies included in this review. Developing reinforcement learning, active learning, and time-series analysis methods to solve intensive care unit clinical problems offers promising prospects for future clinical decision support.
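The evaluation metrics tallied in the Results (sensitivity, specificity, accuracy, positive predictive value) all derive from the same confusion matrix. A small sketch with made-up counts:

```python
# The classification metrics most often reported in the reviewed studies,
# computed from confusion-matrix counts (invented numbers).

def classification_metrics(tp, fp, fn, tn):
    """tp/fp/fn/tn: true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),            # recall, true-positive rate
        "specificity": tn / (tn + fp),            # true-negative rate
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "ppv":         tp / (tp + fp),            # positive predictive value
    }

m = classification_metrics(tp=40, fp=10, fn=20, tn=30)
```

Reporting several of these together matters because, in imbalanced ICU outcomes, a model can score high accuracy while its sensitivity or PPV remains clinically unusable.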

20.
JMIR Med Inform ; 10(1): e30363, 2022 Jan 27.
Article in English | MEDLINE | ID: mdl-35084343

ABSTRACT

BACKGROUND: Real-world data (RWD) and real-world evidence (RWE) are playing increasingly important roles in clinical research and health care decision-making. To leverage RWD and generate reliable RWE, data should be well defined and structured in a way that is semantically interoperable and consistent across stakeholders. The adoption of data standards is one of the cornerstones supporting high-quality evidence for the development of clinical medicine and therapeutics. Clinical Data Interchange Standards Consortium (CDISC) data standards are mature, globally recognized, and heavily used by the pharmaceutical industry for regulatory submissions. The CDISC RWD Connect Initiative aims to better understand the barriers to implementing CDISC standards for RWD and to identify the tools and guidance needed to implement them more easily. OBJECTIVE: The aim of this study is to understand the barriers to implementing CDISC standards for RWD and to identify the tools and guidance that may be needed to implement CDISC standards more easily for this purpose. METHODS: We conducted a qualitative Delphi survey involving an expert advisory board with multiple key stakeholders, with 3 rounds of input and review. RESULTS: Overall, 66 experts participated in round 1, 56 in round 2, and 49 in round 3 of the Delphi survey. Their inputs were collected and analyzed, culminating in group statements. It was widely agreed that the standardization of RWD is highly necessary, and the primary focus should be on its ability to improve data sharing and the quality of RWE. The priorities for RWD standardization included electronic health records, such as data shared using Health Level 7 Fast Healthcare Interoperability Resources (FHIR), and data stemming from observational studies.
With different standardization efforts already underway in these areas, a gap analysis should be performed to identify the areas where synergies and efficiencies are possible, followed by collaboration with stakeholders to create or extend existing mappings between CDISC and other standards, controlled terminologies, and models to represent data originating across different sources. CONCLUSIONS: There are many ongoing data standardization efforts around human health data-related activities, each with different definitions, levels of granularity, and purposes. Among these, CDISC has been successful in standardizing clinical trial-based data for regulatory submission worldwide. However, the complexity of the CDISC standards, the fact that they were developed for different purposes, the lack of awareness and incentives to use a new standard, and insufficient training and implementation support are significant barriers to adopting CDISC standards for RWD. The collection and dissemination of use cases, the development of tools and support systems for the RWD community, and collaboration with other standards development organizations are potential steps forward. Using CDISC will help link clinical trial data and RWD and promote innovation in health data science.
