Results 1 - 20 of 149
1.
J Biomed Inform ; 149: 104566, 2024 Jan.
Article in English | MEDLINE | ID: mdl-38070818

ABSTRACT

Modern hospitals implement clinical pathways to standardize patients' treatments. Conformance checking techniques provide an automated tool to assess whether the actual executions of clinical processes comply with the corresponding clinical pathways. However, clinical processes are typically characterized by a high degree of uncertainty, both in their execution and recording. This paper focuses on uncertainty related to logging clinical processes. The logging of the activities executed during a clinical process in the hospital information system is often performed manually by the involved actors (e.g., the nurses). However, such logging can occur at a different time than the actual execution time, which hampers the reliability of the diagnostics provided by conformance checking techniques. To address this issue, we propose a novel conformance checking algorithm that leverages principles of fuzzy set theory to incorporate experts' knowledge when generating conformance diagnostics. We exploit this knowledge to define a fuzzy tolerance within a time window, which is then used to assess the magnitude of timestamp violations of the recorded activities when evaluating the overall compliance of the process execution. Experiments conducted on a real-life case study in a Dutch hospital show that the proposed method obtains more accurate diagnostics than state-of-the-art approaches. We also consider how our diagnostics can be used to stimulate discussion with domain experts on possible strategies to mitigate logging uncertainty in clinical practice.
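
The following is a minimal Python sketch (not the paper's implementation) of how a fuzzy tolerance window could grade timestamp deviations: deviations within a fully tolerated window are considered compliant, deviations beyond a cut-off are full violations, and intermediate deviations receive a graded membership. The tolerance values and the aggregation into a trace-level score are illustrative assumptions.

    # Hypothetical sketch: fuzzy tolerance for timestamp deviations (hours).
    def fuzzy_timestamp_compliance(deviation_h: float,
                                   full_tolerance: float = 2.0,
                                   zero_tolerance: float = 12.0) -> float:
        """Return a compliance degree in [0, 1] for a logging time deviation."""
        d = abs(deviation_h)
        if d <= full_tolerance:
            return 1.0                       # within the fully tolerated window
        if d >= zero_tolerance:
            return 0.0                       # treated as a full violation
        return (zero_tolerance - d) / (zero_tolerance - full_tolerance)

    # Aggregate per-activity compliance into a trace-level fitness score.
    deviations = [0.5, 3.0, 15.0]            # recorded minus expected time, per activity
    fitness = sum(fuzzy_timestamp_compliance(d) for d in deviations) / len(deviations)
    print(f"trace-level timestamp fitness: {fitness:.2f}")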


Subject(s)
Algorithms , Hospital Information Systems , Humans , Reproducibility of Results , Uncertainty , Hospitals , Fuzzy Logic
2.
Rev Cardiovasc Med ; 24(11): 331, 2023 Nov.
Article in English | MEDLINE | ID: mdl-39076442

ABSTRACT

Background: Acute kidney injury (AKI) is a common complication after pediatric cardiac surgery, and autologous blood transfusion (ABT) is an important predictor of postoperative AKI. Unlike previous studies, which mainly focused on the correlation between ABT and AKI, the current study focuses on the causal relationship between them, thus providing guidance for the treatment of patients during hospitalization to reduce the occurrence of AKI. Methods: A retrospective cohort of 3386 patients extracted from the Pediatric Intensive Care database was used for statistical analysis, multifactorial analysis, and causal inference. Characteristics correlated with ABT and AKI were categorized as confounders, instrumental variables, and effect modifiers, and were entered into the DoWhy causal inference model to determine causality. The calculated average treatment effect (ATE) was compared with the results of the multifactorial analysis. Results: The adjusted odds ratio (OR) for ABT volume obtained by multifactorial analysis was 0.964. The DoWhy refutation tests supported a causal relationship between ABT and AKI. Receiving any ABT reduced the occurrence of AKI by approximately 15.3%-18.8%, depending on the estimation method. The ATE for the amount of ABT was -0.0088, suggesting that every 1 mL/kg of ABT reduced the risk of AKI by 0.88%. Conclusions: Intraoperative transfusion of autologous blood can have a protective effect against postoperative AKI.
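
As a rough illustration of the DoWhy workflow described above (and not the authors' code), the sketch below estimates an average treatment effect and runs one refutation test. The file name, column names, and the choice of confounders and effect modifiers are hypothetical placeholders.

    # A minimal DoWhy sketch under assumed column names; not the study's model.
    import pandas as pd
    from dowhy import CausalModel

    df = pd.read_csv("pediatric_cardiac_cohort.csv")      # hypothetical extract

    model = CausalModel(
        data=df,
        treatment="abt_ml_per_kg",                        # autologous transfusion volume
        outcome="aki",                                    # postoperative AKI (0/1)
        common_causes=["age", "weight", "cpb_minutes"],   # assumed confounders
        effect_modifiers=["preop_creatinine"])            # assumed effect modifier

    estimand = model.identify_effect(proceed_when_unidentifiable=True)
    estimate = model.estimate_effect(estimand,
                                     method_name="backdoor.linear_regression")
    print("ATE:", estimate.value)

    # A refutation check analogous to the refute tests reported in the abstract.
    print(model.refute_estimate(estimand, estimate,
                                method_name="placebo_treatment_refuter"))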

3.
J Biomed Inform ; 142: 104372, 2023 06.
Article in English | MEDLINE | ID: mdl-37105510

ABSTRACT

Phenotype-based prioritization of candidate genes and diseases has become a well-established approach for multi-omics diagnostics of rare diseases. Most current algorithms exploit semantic analysis and probabilistic statistics based on the Human Phenotype Ontology and are commonly superior to naive search methods. However, these algorithms tend to be less interpretable and do not perform well in real clinical scenarios because of the noise and imprecision of query terms and the fact that individuals may not display all phenotypes of the disease they have. We present a Phenotype-driven Likelihood Ratio analysis approach (PheLR) to assist interpretable clinical diagnosis of rare diseases. Within a likelihood ratio paradigm, PheLR estimates the posterior probability of candidate diseases and how much each phenotypic feature contributes to the prioritization result. Benchmarked on simulated and realistic patients, PheLR shows significant advantages over current approaches and is robust to noise and inaccuracy. To facilitate clinical practice and visualized differential diagnosis, PheLR is implemented as an online web tool (https://phelr.nbscn.org).
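
The likelihood ratio paradigm can be made concrete with a toy sketch: each observed phenotype contributes a likelihood ratio (LR), and the posterior odds of a candidate disease are the prior odds multiplied by the product of the LRs. The HPO terms, LR values, and prior below are illustrative, not PheLR's actual parameters.

    # Toy sketch of a likelihood-ratio posterior; not the PheLR implementation.
    def posterior_probability(prior: float, lrs: dict) -> float:
        """prior: pre-test probability of the disease; lrs: phenotype -> LR."""
        odds = prior / (1.0 - prior)
        for phenotype, lr in lrs.items():
            odds *= lr                     # each phenotype's multiplicative contribution
        return odds / (1.0 + odds)

    # Hypothetical LRs for three HPO terms under one candidate disease.
    lrs = {"HP:0001250": 12.0, "HP:0000252": 4.5, "HP:0001263": 0.8}
    print(f"posterior = {posterior_probability(1e-4, lrs):.4f}")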


Subject(s)
Algorithms , Rare Diseases , Humans , Rare Diseases/diagnosis , Phenotype , Diagnosis, Differential
4.
Thromb J ; 20(1): 18, 2022 Apr 12.
Article in English | MEDLINE | ID: mdl-35414086

ABSTRACT

BACKGROUND: An increase in the incidence of central venous catheter (CVC)-related thrombosis (CRT) has been reported in pediatric intensive care patients over the past decade. Risk factors for the development of CRT are not well understood, especially in children. The study objective was to identify potential clinical risk factors associated with CRT using novel fusion machine learning models. METHODS: Patients aged 0-18 who were admitted to intensive care units from December 2015 to December 2018 and underwent at least one CVC placement were included. Two fusion model approaches (stacking and blending) were used to build a better-performing model based on three widely used machine learning models (logistic regression, random forest and gradient boosting decision tree). High-impact risk factors were identified based on their contributions to both fusion models. RESULTS: A total of 478 factors from 3871 patients and 3927 lines were used to build the fusion models, one of which achieved satisfactory performance (AUC = 0.82, recall = 0.85, accuracy = 0.65) in 5-fold cross-validation. A total of 11 risk factors were identified based on their independent contributions to the two fusion models. Some risk factors, such as D-dimer, thrombin time, blood acid-base balance-related factors, dehydrating agents, lymphocytes and basophils, were identified or confirmed to play an important role in CRT in children. CONCLUSIONS: The fusion model, which achieves better performance in CRT prediction, provides a better understanding of the risk factors for CRT and suggests potential biomarkers and measures for thromboprophylaxis in pediatric intensive care settings.
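
A minimal sketch of the stacking variant (blending is not shown) with the three base learners named in the abstract is given below; the synthetic features and labels are placeholders, not the study cohort.

    # Stacking sketch with LR, RF and GBDT base learners; data are synthetic.
    from sklearn.ensemble import (StackingClassifier, RandomForestClassifier,
                                  GradientBoostingClassifier)
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.datasets import make_classification

    X, y = make_classification(n_samples=1000, n_features=30, weights=[0.9, 0.1],
                               random_state=0)        # stand-in for the CRT data

    stack = StackingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("rf", RandomForestClassifier(n_estimators=200)),
                    ("gbdt", GradientBoostingClassifier())],
        final_estimator=LogisticRegression(max_iter=1000),
        cv=5)

    print("AUC:", cross_val_score(stack, X, y, cv=5, scoring="roc_auc").mean())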

5.
BMC Med Inform Decis Mak ; 22(1): 37, 2022 02 10.
Article in English | MEDLINE | ID: mdl-35144618

ABSTRACT

BACKGROUND: One of the primary obstacles to measuring clinical quality is the lack of configurable solutions that enable computers to understand and compute clinical quality indicators. This paper presents a solution that helps clinical staff develop clinical quality measurements more easily and generate the corresponding data reports and visualizations through a configurable method based on openEHR and the Clinical Quality Language (CQL). METHODS: First, expression logic adopted from CQL was combined with openEHR to express clinical quality indicators. Archetype binding provides the clinical information models used in the expression logic, terminology binding keeps the medical concepts used in clinical quality artifacts consistent, and metadata is regarded as the essential component for sharing and management. Then, a systematic approach was put forward to facilitate the development of clinical quality indicators and the generation of corresponding data reports and visualizations. Finally, clinical physicians were invited to test our system and give their opinions. RESULTS: With the combination of openEHR and CQL, 64 indicators from the Centers for Medicare & Medicaid Services (CMS) were expressed for verification, and a complicated indicator is shown as an example. In addition, 68 indicators from 17 different scenarios in the local environment were expressed and computed in our system. A platform was built to support the development of indicators in a unified way, and an execution engine can parse and compute these indicators. Based on a clinical data repository (CDR), the indicators were used to generate data reports and visualizations shown in a dashboard. CONCLUSION: Our method is capable of expressing clinical quality indicators formally. With computer-interpretable indicators, this systematic approach makes it easier to define clinical indicators, generates medical data reports and visualizations, and facilitates the adoption of clinical quality measurement.


Subject(s)
Electronic Health Records , Language , Aged , Humans , Medicare , United States
6.
BMC Med Inform Decis Mak ; 22(1): 245, 2022 09 19.
Article in English | MEDLINE | ID: mdl-36123745

ABSTRACT

BACKGROUND: Lung cancer is the leading cause of cancer death worldwide. Prognostic prediction plays a vital role in the decision-making process for postoperative non-small cell lung cancer (NSCLC) patients. However, the high imbalance ratio of prognostic data limits the development of effective prognostic prediction models. METHODS: In this study, we present a novel approach, namely ensemble learning with active sampling (ELAS), to tackle the imbalanced data problem in NSCLC prognostic prediction. ELAS first applies an active sampling mechanism that queries the most informative samples to update the base classifier and give it a new perspective. This training process is repeated until not enough informative samples remain to be queried. Next, an internal validation set is employed to evaluate the base classifiers, and the ones with the best performance are integrated as the ensemble model. In addition, we set up multiple initial training data seeds and internal validation sets to ensure the stability and generalization of the model. RESULTS: We verified the effectiveness of ELAS on a real clinical dataset containing 1848 postoperative NSCLC patients. Experimental results showed that ELAS achieved the best average AUROC (0.736) and AUPRC (0.453) across the 6 prognostic tasks and obtained significant improvements over SVM, AdaBoost, Bagging, SMOTE and TomekLinks. CONCLUSIONS: We conclude that ELAS can effectively alleviate the imbalanced data problem in NSCLC prognostic prediction and demonstrates good potential for future postoperative NSCLC prognostic prediction.
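
A schematic sketch of the active-sampling-plus-ensembling idea is given below; it is not the authors' ELAS code, and the uncertainty-based query rule, batch size, and use of logistic regression as the base classifier are simplifying assumptions.

    # Schematic active-sampling ensemble; a sketch, not the published ELAS.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    def active_sampling_ensemble(X_seed, y_seed, X_pool, y_pool, X_val, y_val,
                                 batch=50, rounds=10, keep=3):
        X_train, y_train = X_seed.copy(), y_seed.copy()
        snapshots = []
        for _ in range(rounds):
            if len(X_pool) < batch:                  # stop when the pool is exhausted
                break
            clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
            auc = roc_auc_score(y_val, clf.predict_proba(X_val)[:, 1])
            snapshots.append((auc, clf))
            # query the most informative (most uncertain) pool samples
            uncertainty = np.abs(clf.predict_proba(X_pool)[:, 1] - 0.5)
            idx = np.argsort(uncertainty)[:batch]
            X_train = np.vstack([X_train, X_pool[idx]])
            y_train = np.concatenate([y_train, y_pool[idx]])
            X_pool = np.delete(X_pool, idx, axis=0)
            y_pool = np.delete(y_pool, idx)
        # keep the best-performing snapshots and average their predicted probabilities
        best = [clf for _, clf in sorted(snapshots, key=lambda t: t[0])[-keep:]]
        return lambda X: np.mean([m.predict_proba(X)[:, 1] for m in best], axis=0)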


Subject(s)
Carcinoma, Non-Small-Cell Lung , Lung Neoplasms , Algorithms , Carcinoma, Non-Small-Cell Lung/surgery , Humans , Lung Neoplasms/surgery , Machine Learning , Prognosis
7.
Genomics ; 113(4): 2683-2694, 2021 07.
Article in English | MEDLINE | ID: mdl-34129933

ABSTRACT

The AJCC staging system is considered the gold standard in clinical practice. However, it has pitfalls in assessing the prognosis of gastric cancer (GC) patients with similar clinicopathological characteristics. We aimed to develop a new clinical and genetic risk score (CGRS) to improve prognosis prediction for GC patients. We established a genetic risk score (GRS) based on a nine-gene signature (APOD, CCDC92, CYS1, GSDME, ST8SIA5, STARD3NL, TIMEM245, TSPYL5, and VAT1) derived from the gene expression profiles of the training set from the Asian Cancer Research Group (ACRG) cohort with the LASSO-Cox regression algorithm. The CGRS was established by integrating the GRS with a clinical risk score (CRS) derived from the Surveillance, Epidemiology, and End Results (SEER) database. GRS and CGRS dichotomized GC patients into high- and low-risk groups with significantly different prognoses in four independent cohorts with different data types, such as microarray, RNA sequencing and qRT-PCR (all HR > 1, all P < 0.001). Both GRS and CGRS were prognostic signatures independent of the AJCC staging system. Receiver operating characteristic (ROC) analysis showed that the area under the ROC curve of CGRS was larger than that of the AJCC staging system in most cohorts we studied. A nomogram and a web tool (http://39.100.117.92/CGRS/) based on CGRS were developed for clinicians to conveniently assess GC prognosis in clinical practice. CGRS, integrating a genetic signature with clinical features, shows strong robustness in predicting GC prognosis and can be easily applied in clinical practice through the web application.
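
The sketch below illustrates the general LASSO-Cox risk-score idea using the lifelines library as a stand-in; the input file, column names, penalty strength, and median dichotomization are assumptions, not the ACRG analysis.

    # Conceptual LASSO-Cox gene risk score sketch; not the published pipeline.
    import pandas as pd
    from lifelines import CoxPHFitter

    genes = ["APOD", "CCDC92", "CYS1", "GSDME", "ST8SIA5",
             "STARD3NL", "TIMEM245", "TSPYL5", "VAT1"]

    # Hypothetical file: one row per patient, the nine gene expression columns
    # plus survival time ("time") and event flag ("event").
    df = pd.read_csv("training_expression.csv")

    cph = CoxPHFitter(penalizer=0.1, l1_ratio=1.0)        # L1 (LASSO) penalty
    cph.fit(df[genes + ["time", "event"]], duration_col="time", event_col="event")

    # GRS = linear predictor from the fitted coefficients; dichotomize at the median.
    df["GRS"] = (df[genes] * cph.params_[genes]).sum(axis=1)
    df["risk_group"] = (df["GRS"] > df["GRS"].median()).map({True: "high", False: "low"})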


Subject(s)
Stomach Neoplasms , Transcriptome , Biomarkers, Tumor/genetics , Biomarkers, Tumor/metabolism , Humans , Nomograms , Nuclear Proteins/genetics , Prognosis , Stomach Neoplasms/genetics , Stomach Neoplasms/pathology
8.
BMC Musculoskelet Disord ; 22(1): 344, 2021 Apr 12.
Article in English | MEDLINE | ID: mdl-33845817

ABSTRACT

BACKGROUND: DDH (Developmental Dysplasia of the Hip) screening can potentially avert many morbidities and reduce costs. The debate about universal vs. selective DDH ultrasonography screening in different countries revolves to a large extent around effectiveness, cost, and the possibility of overdiagnosis and overtreatment. In this study, we proposed and evaluated a Z-score enhanced Graf method to optimize population-specific DDH screening. METHODS: A total of 39,710 historical hip ultrasonography examinations were collected to establish a sex-, side- and age-specific Z-score model using the local regression method. The correlation between Z-scores and classic Graf types was analyzed. A total of 4229 cases with follow-up ultrasonographic examinations and 5284 cases with follow-up X-ray examinations were used to evaluate the false positive rate of the first examination based on the subsequent examinations. The results using classic Graf types and the Z-score enhanced types were compared. RESULTS: The Z-score enhanced Graf types were highly correlated with the classic Graf classification (R = 0.67, p < 0.001). Using Z-scores ≥ 2 as a threshold reduced false positives by 86.56% and 80.44% in the left and right hips, respectively, based on the follow-up ultrasonographic examinations, and reduced false-positive cases by 78.99% based on the follow-up X-ray examinations. CONCLUSIONS: An age-, sex- and side-specific Z-score enhanced Graf method can better control the false positive rate in DDH screening among different populations.
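
A simplified sketch of the age-, sex- and side-specific standardization is shown below: the population mean of a hip index (here the Graf alpha angle) is smoothed over age with local regression, and each infant's measurement is converted to a Z-score. The column names, smoothing fraction, and the direction of the flagging threshold are assumptions.

    # Simplified Z-score sketch with local regression; not the study's exact model.
    import pandas as pd
    from statsmodels.nonparametric.smoothers_lowess import lowess

    df = pd.read_csv("hip_ultrasound.csv")   # hypothetical columns: age_days, sex, side, alpha

    def add_zscores(group: pd.DataFrame) -> pd.DataFrame:
        # local-regression estimate of the mean alpha angle at each age
        fitted = lowess(group["alpha"], group["age_days"], frac=0.3, return_sorted=False)
        resid = group["alpha"] - fitted
        group["alpha_z"] = resid / resid.std()
        return group

    df = df.groupby(["sex", "side"], group_keys=False).apply(add_zscores)
    # flag only if the standardized deviation is at least 2 SDs (direction simplified)
    df["z_flag"] = df["alpha_z"].abs() >= 2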


Subject(s)
Hip Dislocation, Congenital , China/epidemiology , Hip Dislocation, Congenital/diagnostic imaging , Hip Dislocation, Congenital/epidemiology , Humans , Infant , Infant, Newborn , Neonatal Screening , Retrospective Studies , Ultrasonography
9.
J Med Internet Res ; 23(9): e25630, 2021 09 28.
Article in English | MEDLINE | ID: mdl-34581680

ABSTRACT

BACKGROUND: Hypertension is a long-term medical condition. Electronic and mobile health care services can help patients to self-manage this condition. However, not all management is effective, possibly due to different levels of patient engagement (PE) with health care services. Health care provider follow-up is an intervention to promote PE and blood pressure (BP) control. OBJECTIVE: This study aimed to discover and characterize patterns of PE with a hypertension self-management app, investigate the effects of health care provider follow-up on PE, and identify the follow-up effects on BP in each PE pattern. METHODS: PE was represented as the number of days per week that a patient recorded self-measured BP. The study period was the first 4 weeks of a patient's engagement with the hypertension management service. The K-means algorithm was used to group patients by PE. Management included compliance follow-up, regular follow-up, and abnormal follow-up. The follow-up effect was calculated by the change in PE (CPE) and the change in systolic blood pressure (CSBP, SBP) before and after each follow-up. Chi-square tests and z scores were used to ascertain the distribution of gender, age, education level, SBP, and the number of follow-ups in each cluster. The follow-up effect was identified by analysis of variance. Once a significant effect was detected, Bonferroni multiple comparisons were further conducted to identify the difference between 2 clusters. RESULTS: Patients were grouped into 4 clusters according to PE: (1) PE started low and dropped even lower (PELL), (2) PE started high and remained high (PEHH), (3) PE started high and dropped to low (PEHL), and (4) PE started low and rose to high (PELH). Significantly more patients over 60 years old were found in the PEHH cluster (P≤.05). Abnormal follow-up was significantly less frequent (P≤.05) in the PELL cluster. Compliance follow-up and regular follow-up can improve PE. In the PEHH and PELH clusters, the improvement in PE in the first 3 weeks and the decrease in SBP in all 4 weeks were significant after follow-up. The SBP of the PELL and PELH clusters decreased more (-6.1 mmHg and -8.4 mmHg, respectively) after follow-up in the first week. CONCLUSIONS: Four distinct PE patterns were identified for patients engaging with the hypertension self-management app. Patients aged over 60 years had higher PE in terms of recording self-measured BP using the app. Once SBP was reduced, patients with low PE tended to stop using the app, and a continued decline in PE occurred simultaneously with an increase in SBP. The duration and depth of the effect of health care provider follow-up were greater in patients with high or increased engagement after follow-up.
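
The clustering step can be pictured with a small sketch: each patient is represented as a 4-week vector of days per week with a self-measured BP record and grouped with K-means (k = 4). The toy vectors below are illustrative, not the study cohort.

    # Toy K-means sketch of the engagement clustering; data are illustrative.
    import numpy as np
    from sklearn.cluster import KMeans

    # rows = patients, columns = weeks 1-4; values = days with a BP record (0-7)
    engagement = np.array([[1, 0, 0, 0],
                           [6, 7, 6, 7],
                           [6, 5, 2, 1],
                           [1, 2, 5, 6]])

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(engagement)
    print(kmeans.labels_)            # cluster assignment per patient (e.g., PELL/PEHH/PEHL/PELH)
    print(kmeans.cluster_centers_)   # average weekly engagement per cluster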


Subject(s)
Hypertension , Patient Participation , Aged , Blood Pressure , Cluster Analysis , Electronics , Follow-Up Studies , Health Personnel , Humans , Hypertension/therapy , Middle Aged
10.
BMC Med Inform Decis Mak ; 21(Suppl 9): 247, 2021 11 16.
Article in English | MEDLINE | ID: mdl-34789213

ABSTRACT

BACKGROUND: Standardized coding plays an important role in the secondary use of radiology reports, such as data analytics, data-driven decision support, and personalized medicine. RadLex, a standard radiological lexicon, can reduce subjective variability and improve clarity in radiology reports. RadLex coding of radiology reports is widely used in many countries, but translation and localization of RadLex in China are far from established. Although automatic RadLex coding is a common approach for non-standard radiology reports, high-accuracy cross-language RadLex coding is difficult to achieve due to the limitations of current auto-translation and text similarity algorithms, and it still requires further research. METHODS: We present an effective approach that combines a hybrid translation and a Multilayer Perceptron (MLP) weighted text similarity ensemble algorithm for automatic RadLex coding of Chinese structured radiology reports. First, a hybrid approach integrating Google neural machine translation (GNMT) and dictionary translation optimizes the translation of Chinese radiology phrases to English. The dictionary is made up of 21,863 Chinese-English radiological term pairs extracted from several free medical dictionaries. Second, four typical text similarity algorithms are introduced: Levenshtein distance, the Jaccard similarity coefficient, the Word2vec continuous bag-of-words (CBOW) model, and the WordNet Wup similarity algorithm. Lastly, the MLP model is used to synthesize the contextual, lexical, character and syntactical information of the four text similarity algorithms to improve precision; the four similarity scores of two terms are taken as input, and the output indicates whether the two terms are synonyms. RESULTS: The results show the effectiveness of the approach, with an F1-score of 90.15%, a precision of 91.78% and a recall of 88.59%. The hybrid translation algorithm has no negative effect on the final coding; the F1-score increased by 21.44% and 8.12% compared with the GNMT algorithm and dictionary translation, respectively. Compared with a single similarity measure, the result of the MLP weighted similarity algorithm is satisfactory, with a 4.48% increase over the best single similarity algorithm, WordNet Wup. CONCLUSIONS: This paper proposes an innovative automatic cross-language RadLex coding approach for standardizing Chinese structured radiology reports, which can serve as a reference for automatic cross-language coding.
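
The ensemble step can be sketched as follows: four precomputed per-pair similarity scores (Levenshtein, Jaccard, Word2vec CBOW, WordNet Wup) are fed to a small MLP that decides whether two terms are synonyms. The scores and labels below are toy values, not the paper's features.

    # Toy MLP-weighted similarity ensemble sketch; not the published model.
    import numpy as np
    from sklearn.neural_network import MLPClassifier

    # each row: [levenshtein_sim, jaccard_sim, word2vec_sim, wup_sim]
    X_train = np.array([[0.9, 0.8, 0.85, 0.90],
                        [0.2, 0.1, 0.30, 0.40],
                        [0.7, 0.6, 0.75, 0.80],
                        [0.3, 0.2, 0.25, 0.30]])
    y_train = np.array([1, 0, 1, 0])          # 1 = synonym pair, 0 = not

    mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
    mlp.fit(X_train, y_train)
    print(mlp.predict_proba([[0.8, 0.7, 0.8, 0.85]]))   # probability of synonymy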


Subject(s)
Radiology Information Systems , Radiology , Algorithms , China , Humans , Language , Natural Language Processing
11.
BMC Med Inform Decis Mak ; 21(1): 113, 2021 04 03.
Article in English | MEDLINE | ID: mdl-33812388

ABSTRACT

BACKGROUND: Ensuring data is of appropriate quality is essential for the secondary use of electronic health records (EHRs) in research and clinical decision support. An effective approach to data quality assessment (DQA) is automating the creation of data quality rules (DQRs), replacing a time-consuming, labor-intensive manual process that makes it difficult to guarantee standard and comparable DQA results. This paper presents a case study of automatically creating DQRs based on openEHR archetypes in a Chinese hospital to investigate the feasibility and challenges of automating DQA for EHR data. METHODS: The clinical data repository (CDR) of the Shanxi Dayi Hospital is an archetype-based relational database. Four steps were undertaken to automatically create DQRs in this CDR database. First, the keywords and features of archetypes relevant to DQA were identified by mapping them to a well-established DQA framework, Kahn's DQA framework. Second, templates of DQRs corresponding to these identified keywords and features were created in structured query language (SQL). Third, the quality constraints were retrieved from the archetypes. Fourth, these quality constraints were automatically converted to DQRs according to the pre-designed templates and the mapping relationships between archetypes and data tables. We utilized the archetypes of the CDR to automatically create DQRs meeting the quality requirements of the Chinese Application-Level Ranking Standard for EHR Systems (CARSES) and evaluated their coverage by comparison with expert-created DQRs. RESULTS: We used 27 archetypes to automatically create 359 DQRs, 319 of which are in agreement with the expert-created DQRs, covering 84.97% (311/366) of the CARSES requirements. The auto-created DQRs had varying levels of coverage of the four quality domains mandated by the CARSES: 100% (45/45) of consistency, 98.11% (208/212) of completeness, 54.02% (57/87) of conformity, and 50% (11/22) of timeliness. CONCLUSION: It is feasible to create DQRs automatically based on openEHR archetypes. This study evaluated the coverage of the auto-created DQRs for a typical DQA task of Chinese hospitals, the CARSES. The challenges of automating DQR creation were identified, such as quality requirements based on semantics and complex constraints involving multiple elements. This research can inform the exploration of DQR auto-creation and contribute to automatic DQA.
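
As a rough illustration of the template idea (not the hospital's implementation), the sketch below renders two archetype-style constraints, a mandatory element and a permitted value range, as SQL counting rules. The table, column, and range values are hypothetical.

    # Illustrative DQR templates rendered as SQL strings; names are hypothetical.
    def completeness_rule(table: str, column: str) -> str:
        """Count records where a mandatory element is missing."""
        return (f"SELECT COUNT(*) AS violations "
                f"FROM {table} WHERE {column} IS NULL;")

    def range_rule(table: str, column: str, lo: float, hi: float) -> str:
        """Count records outside the archetype's permitted value range."""
        return (f"SELECT COUNT(*) AS violations "
                f"FROM {table} WHERE {column} NOT BETWEEN {lo} AND {hi};")

    # e.g., a blood-pressure archetype mandating a systolic value within 0-1000 mmHg
    print(completeness_rule("obs_blood_pressure", "systolic"))
    print(range_rule("obs_blood_pressure", "systolic", 0, 1000))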


Subject(s)
Decision Support Systems, Clinical , Electronic Health Records , Data Accuracy , Humans , Language , Semantics
12.
BMC Med Inform Decis Mak ; 21(Suppl 2): 214, 2021 07 30.
Article in English | MEDLINE | ID: mdl-34330277

ABSTRACT

BACKGROUND: Computed tomography (CT) reports record a large volume of valuable information about patients' conditions and radiologists' interpretations of radiology images, which can be used for clinical decision-making and further academic study. However, the free-text nature of clinical reports is a critical barrier to using these data more effectively. In this study, we investigate a novel deep learning method to extract entities from Chinese CT reports for lung cancer screening and TNM staging. METHODS: The proposed approach presents a new named entity recognition algorithm, namely the BERT-based BiLSTM-Transformer network (BERT-BTN) with pre-training, to extract clinical entities for lung cancer screening and staging. Specifically, instead of traditional word embedding methods, BERT is applied to learn deep semantic representations of characters. Following the long short-term memory layer, a Transformer layer is added to capture the global dependencies between characters. In addition, a pre-training technique is employed to alleviate the problem of insufficient labeled data. RESULTS: We verify the effectiveness of the proposed approach on a clinical dataset containing 359 CT reports collected from the Department of Thoracic Surgery II of Peking University Cancer Hospital. The experimental results show that the proposed approach achieves an 85.96% macro-F1 score under the exact match scheme, improving performance by 1.38%, 1.84%, 3.81%, 4.29%, 5.12%, 5.29% and 8.84% compared to BERT-BTN (without pre-training), BERT-LSTM, BERT-fine-tune, BERT-Transformer, FastText-BTN, FastText-BiLSTM and FastText-Transformer, respectively. CONCLUSIONS: In this study, we developed a novel deep learning method, i.e., BERT-BTN with pre-training, to extract clinical entities from Chinese CT reports. The experimental results indicate that the proposed approach can efficiently recognize various clinical entities related to lung cancer screening and staging, which shows potential for further clinical decision-making and academic research.
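
A structural PyTorch sketch of the layer order described above (BERT encoder, BiLSTM, Transformer layer, per-character classifier) is given below. It is not the authors' code: the hidden sizes, head count, tag set, and omission of the pre-training step are assumptions.

    # Structural sketch of a BERT + BiLSTM + Transformer tagger; hyperparameters are placeholders.
    import torch.nn as nn
    from transformers import BertModel

    class BertBiLSTMTransformerTagger(nn.Module):
        def __init__(self, num_tags: int, hidden: int = 256):
            super().__init__()
            self.bert = BertModel.from_pretrained("bert-base-chinese")
            self.bilstm = nn.LSTM(self.bert.config.hidden_size, hidden,
                                  bidirectional=True, batch_first=True)
            encoder_layer = nn.TransformerEncoderLayer(d_model=2 * hidden, nhead=4,
                                                       batch_first=True)
            self.transformer = nn.TransformerEncoder(encoder_layer, num_layers=1)
            self.classifier = nn.Linear(2 * hidden, num_tags)

        def forward(self, input_ids, attention_mask):
            x = self.bert(input_ids, attention_mask=attention_mask).last_hidden_state
            x, _ = self.bilstm(x)            # local sequential context
            x = self.transformer(x)          # global dependencies between characters
            return self.classifier(x)        # per-character tag logits (e.g., BIO scheme)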


Subject(s)
Deep Learning , Lung Neoplasms , Algorithms , China , Early Detection of Cancer , Humans , Lung Neoplasms/diagnostic imaging
13.
BMC Med Inform Decis Mak ; 21(1): 332, 2021 11 27.
Article in English | MEDLINE | ID: mdl-34838025

ABSTRACT

BACKGROUND: An increase in the incidence of central venous catheter (CVC)-associated deep venous thrombosis (CADVT) has been reported in pediatric patients over the past decade. At the same time, current screening guidelines for venous thromboembolism risk have low sensitivity for CADVT in hospitalized children. This study utilized a multimodal deep learning model to predict CADVT before it occurs. METHODS: Children who were admitted to intensive care units (ICUs) between December 2015 and December 2018 and had a CVC in place for at least 3 days were included. The variables analyzed included demographic characteristics, clinical conditions, laboratory test results, vital signs and medications. A multimodal deep learning (MMDL) model that can handle temporal data using long short-term memory (LSTM) and gated recurrent units (GRUs) was proposed for this prediction task. Four benchmark models, namely logistic regression (LR), random forest (RF), gradient boosting decision tree (GBDT) and a published cutting-edge MMDL, were used to compare and evaluate the models with a fivefold cross-validation approach. Accuracy, recall, area under the ROC curve (AUC), and average precision (AP) were used to evaluate the discrimination of each model at three time points (24 h, 48 h and 72 h) before CADVT occurred. Brier scores and Spiegelhalter's z test were used to measure the calibration of these prediction models. RESULTS: A total of 1830 patients were included in this study, and approximately 15% developed CADVT. In the CADVT prediction task, the model proposed in this paper significantly outperformed both the traditional machine learning models and the existing multimodal deep learning model at all 3 time points. It achieved 77% accuracy and 90% recall 24 h before CADVT was discovered, and it can accurately predict the occurrence of CADVT 72 h in advance with an accuracy of greater than 75%, a recall of more than 87%, and an AUC value of 0.82. CONCLUSION: In this study, a machine learning method was successfully established to predict CADVT in advance. These findings demonstrate that artificial intelligence (AI) could provide measures for thromboprophylaxis in a pediatric intensive care setting.
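
A schematic sketch of one way such a multimodal network can be organized is shown below: static features pass through a small feed-forward branch, temporal vitals and labs pass through a GRU, and the two representations are fused for a risk prediction. The dimensions and fusion strategy are illustrative assumptions, not the published architecture.

    # Schematic multimodal (static + temporal) risk model; a sketch, not the paper's model.
    import torch
    import torch.nn as nn

    class MultimodalRiskNet(nn.Module):
        def __init__(self, n_static: int, n_temporal: int, hidden: int = 64):
            super().__init__()
            self.static_net = nn.Sequential(nn.Linear(n_static, hidden), nn.ReLU())
            self.gru = nn.GRU(n_temporal, hidden, batch_first=True)
            self.head = nn.Linear(2 * hidden, 1)          # CADVT risk logit

        def forward(self, static_x, temporal_x):
            s = self.static_net(static_x)                 # (batch, hidden)
            _, h = self.gru(temporal_x)                   # last hidden state of the GRU
            fused = torch.cat([s, h.squeeze(0)], dim=-1)  # concatenate the two modalities
            return torch.sigmoid(self.head(fused))        # predicted CADVT probability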


Subject(s)
Central Venous Catheters , Venous Thromboembolism , Venous Thrombosis , Anticoagulants , Artificial Intelligence , Central Venous Catheters/adverse effects , Child , Critical Care , Humans , Venous Thromboembolism/diagnosis , Venous Thromboembolism/epidemiology , Venous Thromboembolism/etiology , Venous Thrombosis/diagnostic imaging , Venous Thrombosis/epidemiology
14.
BMC Med Inform Decis Mak ; 21(1): 325, 2021 11 22.
Article in English | MEDLINE | ID: mdl-34809614

ABSTRACT

BACKGROUND: Patients with chronic obstructive pulmonary disease (COPD) experience deficits in exercise capacity and physical activity as their disease progresses. Pulmonary rehabilitation (PR) can enhance the exercise capacity of patients, and it is crucial for patients to maintain a long-term physically active lifestyle. This study aimed to develop a home-based rehabilitation mHealth system incorporating behavior change techniques (BCTs) for COPD patients and to evaluate its technology acceptance and feasibility. METHODS: Guided by the Medical Research Council (MRC) framework, the process of this study was divided into four steps. In the first step, the prescription was constructed. The second step was to formulate specific intervention functions based on behavior change wheel theory. Subsequently, in the third step, we conducted iterative system development. In the last step, two pilot studies were performed: the first for the improvement of system functions and the second to explore potential clinical benefits and validate the acceptance and usability of the system. RESULTS: A total of 17 participants were enrolled, among whom 12 COPD participants completed the 12-week study. For the clinical outcomes, the Six-Minute Walk Test (6MWT) showed a significant difference (P = .023) over time, with an improvement that exceeded the minimal clinically important difference (MCID). The change in respiratory symptoms (CAT score) was statistically significant (P = .031), with a decrease of 3 points. The mMRC levels decreased overall and showed a significant difference. The overall compliance of this study reached 82.20% (± 1.68%). The results of the questionnaire and interviews indicated good technology acceptance and functional usability, and the participants were satisfied with the mHealth-based intervention. CONCLUSIONS: This study developed a home-based PR mHealth system for COPD patients. We showed that a home-based PR mHealth system incorporating BCTs is a feasible and acceptable intervention for COPD patients, and COPD patients can benefit from the intervention delivered by the system. The proposed system played an important auxiliary role in offering exercise prescriptions according to the characteristics of patients and provided means and tools for further individualization of exercise prescriptions in the future.


Subject(s)
Pulmonary Disease, Chronic Obstructive , Telemedicine , Exercise Tolerance , Humans , Pulmonary Disease, Chronic Obstructive/therapy , Quality of Life , Walk Test
15.
J Pediatr ; 224: 146-149, 2020 09.
Article in English | MEDLINE | ID: mdl-32416087

ABSTRACT

The lower than expected rates of children affected by coronavirus disease 2019 do not mean that there was no impact on children's health. Using data on pediatric healthcare visits before and after the outbreak of coronavirus disease 2019, together with historical data, we identified the pediatric conditions that were most affected by the pandemic and by epidemic control measures during the pandemic.


Subject(s)
Child Health/statistics & numerical data , Coronavirus Infections/epidemiology , Hospitals, Pediatric/statistics & numerical data , Pneumonia, Viral/epidemiology , Betacoronavirus , COVID-19 , Child , China/epidemiology , Humans , Pandemics , SARS-CoV-2
16.
BMC Med Res Methodol ; 20(1): 9, 2020 01 14.
Article in English | MEDLINE | ID: mdl-31937265

ABSTRACT

BACKGROUND: Drug safety in children is a major concern; however, there is still a lack of methods for quantitatively measuring, let alone improving, drug safety in children under different clinical conditions. To assess pediatric drug safety under different clinical conditions, a computational method based on Electronic Medical Record (EMR) datasets was proposed. METHODS: In this study, a computational method was designed to extract significant drug-diagnosis associations (based on a Bonferroni-adjusted hypergeometric P-value < 0.05) from drug and diagnosis co-occurrences in EMR datasets. This allows differences between pediatric and adult drug use to be compared across EMR datasets. The drug-diagnosis associations were further used to generate drug clusters under specific clinical conditions using unsupervised clustering. A 5-layer quantitative pediatric drug safety level was proposed based on the drug safety statement in the pediatric labeling of each drug, and the drug safety levels under different pediatric clinical conditions were then calculated. Two EMR datasets, from a 1900-bed children's hospital and a 2000-bed general hospital, were used to test this method. RESULTS: The comparison between the children's hospital and the general hospital showed unique features of pediatric drug use and identified the drug treatment gap between children and adults. In total, 591 drugs were used in the children's hospital; 18 drug clusters associated with certain clinical conditions were generated with our method; and the quantitative drug safety levels of each drug cluster (under different clinical conditions) were calculated, analyzed, and visualized. CONCLUSION: With this method, quantitative drug safety levels under certain clinical conditions in pediatric patients can be evaluated and compared. If longitudinal data are available, improvements can also be measured. This method has the potential to be used in many population-level, health data-based drug safety studies.
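
The association test can be sketched as follows: for a given drug-diagnosis pair, the probability of observing at least the recorded number of co-occurrences is computed under a hypergeometric null and then Bonferroni-corrected across all tested pairs. The counts and the number of tests below are toy values, not the study's data.

    # Toy hypergeometric co-occurrence test with Bonferroni correction; counts are illustrative.
    from scipy.stats import hypergeom

    def cooccurrence_pvalue(n_total, n_drug, n_dx, n_both):
        """P(X >= n_both) co-occurrences among n_total encounters."""
        return hypergeom.sf(n_both - 1, n_total, n_drug, n_dx)

    n_tests = 591 * 1200                  # hypothetical drug x diagnosis grid size
    p = cooccurrence_pvalue(n_total=500_000, n_drug=4_000, n_dx=2_500, n_both=120)
    print("raw p:", p, "Bonferroni-adjusted:", min(1.0, p * n_tests))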


Subject(s)
Computational Biology/methods , Drug-Related Side Effects and Adverse Reactions/pathology , Electronic Health Records/statistics & numerical data , Pharmaceutical Preparations , Child , Female , Hospitals, Pediatric , Humans , Male
17.
J Med Internet Res ; 22(6): e20239, 2020 06 10.
Article in English | MEDLINE | ID: mdl-32496207

ABSTRACT

BACKGROUND: The coronavirus disease (COVID-19) was discovered in China in December 2019 and has developed into a threatening international public health emergency. With the exception of China, the number of cases continues to increase worldwide. A number of studies on disease diagnosis and treatment have been carried out, and many clinically effective results have been achieved. Although information technology can rapidly transfer such knowledge to clinical practice, data interoperability is still a challenge due to the heterogeneous nature of hospital information systems. This issue becomes even more serious if the knowledge for diagnosis and treatment is updated rapidly, as is the case for COVID-19. An open, semantic-sharing, and collaborative information modeling framework is needed to rapidly develop a shared data model for exchanging data among systems. openEHR is such a framework and is supported by many open software packages that help to promote information sharing and interoperability. OBJECTIVE: This study aims to develop a shared data model based on the openEHR modeling approach to improve interoperability among systems for the diagnosis and treatment of COVID-19. METHODS: The latest Guideline of COVID-19 Diagnosis and Treatment in China was selected as the knowledge source for modeling. First, the guideline was analyzed, and the data items used for diagnosis, treatment, and management were extracted. Second, the data items were classified and further organized into domain concepts with a mind map. Third, a search was performed in the international openEHR Clinical Knowledge Manager (CKM) to find existing archetypes that could represent the concepts; new archetypes were to be developed for concepts that could not be found. Fourth, these archetypes were organized into a template using the Ocean Template Editor. Fifth, a test case of data exchange between the clinical data repository and a clinical decision support system based on the template was conducted to verify the feasibility of the study. RESULTS: A total of 203 data items were extracted from the guideline, and 16 domain concepts (16 leaf nodes in the mind map) were organized. Twenty-two archetypes were used to develop the template covering all data items extracted from the guideline; all of them could be found in the CKM and reused directly. The archetypes and the template were reviewed and finally released in a public project within the CKM. The test case showed that the template can facilitate data exchange and meet the requirements of decision support. CONCLUSIONS: This study developed an openEHR template for COVID-19 based on the latest guideline from China using the openEHR modeling methodology. It demonstrates the capability of the methodology for rapid modeling and knowledge sharing through reuse of existing archetypes, which is especially useful in a new and fast-changing area such as COVID-19.


Subject(s)
Coronavirus Infections , Electronic Health Records/standards , Pandemics , Pneumonia, Viral , Practice Guidelines as Topic , COVID-19 , China/epidemiology , Coronavirus Infections/epidemiology , Decision Support Systems, Clinical , Humans , Pneumonia, Viral/epidemiology
18.
BMC Med Inform Decis Mak ; 19(1): 91, 2019 04 25.
Article in English | MEDLINE | ID: mdl-31023325

ABSTRACT

BACKGROUND: Many clinical concepts are standardized under categorical and hierarchical taxonomies such as ICD-10 and ATC. These taxonomic clinical concepts provide insight into semantic meaning and similarity among clinical concepts and have been applied to patient similarity measures. However, the effects of diverse set sizes of taxonomic clinical concepts contributing to similarity at the patient level have not been well studied. METHODS: In this paper, the most widely used taxonomic clinical concept system, ICD-10, was studied as a representative taxonomy. The distance between ICD-10-coded diagnosis sets is an integrated estimation of the information content of each concept, the similarity between each pair of concepts, and the similarity between the sets of concepts. We propose a novel set-level similarity method to calculate the distance between sets of hierarchical taxonomic clinical concepts to measure patient similarity. A real-world clinical dataset with ICD-10-coded diagnoses and hospital length of stay (HLOS) information was used to evaluate the performance of various algorithms and their combinations in predicting whether a patient needs long-term hospitalization. Four subpopulation prototypes defined based on age and HLOS with different diagnosis set sizes were used as the targets for similarity analysis. The F-score was used to evaluate the performance of different algorithms while controlling other factors. We also evaluated the effect of prototype set size on prediction precision. RESULTS: The results identified the strengths and weaknesses of different algorithms for computing information content, code-level similarity and set-level similarity under different contexts, such as set size and concept set background. The minimum weighted bipartite matching approach, which had not been fully recognized previously, showed unique advantages in measuring concept-based patient similarity. CONCLUSIONS: This study provides a systematic benchmark evaluation of previous and novel algorithms used in taxonomic concept-based patient similarity, and it provides a basis for selecting appropriate methods under different clinical scenarios.
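
The bipartite matching idea can be sketched in a few lines: a cost matrix of one-minus-similarity between every pair of codes from two diagnosis sets is built and solved as a minimum-weight assignment, with unmatched codes penalized. The prefix-based code similarity and the penalty rule here are toy stand-ins for the information-content-based measures evaluated in the paper.

    # Illustrative set-level distance via minimum-weight bipartite matching.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    def code_similarity(a: str, b: str) -> float:
        """Toy code-level similarity: shared-prefix ratio of two ICD-10 codes."""
        shared = sum(1 for x, y in zip(a, b) if x == y)
        return shared / max(len(a), len(b))

    def set_distance(set_a, set_b) -> float:
        set_a, set_b = list(set_a), list(set_b)
        cost = np.array([[1 - code_similarity(a, b) for b in set_b] for a in set_a])
        rows, cols = linear_sum_assignment(cost)          # minimum-weight matching
        penalty = abs(len(set_a) - len(set_b))            # unmatched codes count fully
        return (cost[rows, cols].sum() + penalty) / max(len(set_a), len(set_b))

    print(set_distance({"I10", "E11.9"}, {"I10", "E11.6", "N18.3"}))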


Subject(s)
International Classification of Diseases , Patients/classification , Semantics , Adolescent , Adult , Algorithms , Electronic Health Records , Humans , Middle Aged , Young Adult
19.
BMC Med Inform Decis Mak ; 19(1): 5, 2019 01 09.
Article in English | MEDLINE | ID: mdl-30626381

ABSTRACT

BACKGROUND: Major adverse cardiac events (MACE) are essentially composite endpoints for assessing the safety and efficacy of treatment processes for acute coronary syndrome (ACS) patients. Timely prediction of MACE is highly valuable for improving the effects of ACS treatments. Most existing tools predict MACE mainly from static patient features and neglect dynamic treatment information during learning. METHODS: We address this challenge by developing a deep learning-based approach that utilizes a large volume of heterogeneous electronic health record (EHR) data to predict MACE after ACS. Specifically, we obtain a deep representation of dynamic treatment features from EHR data using a bidirectional recurrent neural network. The extracted latent representation of treatment features is then used to predict whether a patient will experience MACE during his or her hospitalization. RESULTS: We validate the effectiveness of our approach on a clinical dataset containing 2930 ACS patient samples with 232 static feature types and 2194 dynamic feature types. The performance of our best model for predicting MACE after ACS remains robust, reaching 0.713 and 0.764 in terms of AUC and accuracy, respectively, with performance gains of over 11.9% (1.2%) and 1.9% (7.5%) in AUC (accuracy) compared with logistic regression and a boosted resampling model presented in our previous work, respectively. The results are statistically significant. CONCLUSIONS: We hypothesize that our proposed model, adapted to leverage dynamic treatment information in EHR data, boosts the performance of MACE prediction for ACS and can readily meet the demand for clinical prediction of other diseases from a large volume of EHR data in an open-ended fashion.


Subject(s)
Acute Coronary Syndrome/complications , Acute Coronary Syndrome/diagnosis , Electronic Health Records , Hospitalization , Models, Theoretical , Neural Networks, Computer , Acute Coronary Syndrome/therapy , Aged , Deep Learning , Female , Humans , Male , Middle Aged , Prognosis
20.
BMC Med Inform Decis Mak ; 19(Suppl 2): 61, 2019 04 09.
Article in English | MEDLINE | ID: mdl-30961585

ABSTRACT

BACKGROUND: Major adverse cardiac event (MACE) prediction plays a key role in providing efficient and effective treatment strategies for patients with acute coronary syndrome (ACS) during their hospitalizations. Existing prediction models have limitations in coping with imprecise and ambiguous clinical information, such that clinicians cannot obtain reliable MACE predictions for individuals. METHODS: To remedy this, this study proposes a hybrid method using Rough Set Theory (RST) and Dempster-Shafer Theory (DST) of evidence. In detail, four state-of-the-art models, including one traditional ACS risk scoring model, i.e., GRACE, and three machine learning-based models, i.e., Support Vector Machine, L1-Logistic Regression, and Classification and Regression Tree, are employed to generate initial MACE prediction results, and RST is then applied to determine the weights of the four single models. After that, the acquired prediction results are taken as basic beliefs for the problem propositions, and an evidential prediction result is generated based on DST in an integrative manner. RESULTS: Applying the proposed method to a clinical dataset consisting of 2930 ACS patient samples, our model achieves an AUC of 0.715 with a competitive standard deviation, which is the best prediction result compared with the four single base models and two baseline ensemble models. CONCLUSIONS: Facing the limitations of traditional ACS risk scoring models and machine learning models, and the uncertainties of EHR data, we present an ensemble approach via RST and DST to alleviate this problem. The experimental results reveal that our proposed method achieves better performance for the problem of MACE prediction than the single models.
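
The evidence-fusion step can be pictured with a toy sketch of Dempster's rule of combination over a binary frame {MACE, noMACE} plus an ignorance mass (Theta). The mass values below are illustrative, and the RST-based weighting of the base models is not shown.

    # Toy Dempster's rule of combination; masses are illustrative, not the study's values.
    def dempster_combine(m1: dict, m2: dict) -> dict:
        """Combine two mass functions over {'MACE', 'noMACE', 'Theta'} (Theta = ignorance)."""
        frames = ["MACE", "noMACE", "Theta"]
        combined = {f: 0.0 for f in frames}
        conflict = 0.0
        for a in frames:
            for b in frames:
                # intersection of the two focal elements (Theta intersects everything)
                inter = a if b == "Theta" else (b if a == "Theta" else (a if a == b else None))
                mass = m1[a] * m2[b]
                if inter is None:
                    conflict += mass           # contradictory evidence
                else:
                    combined[inter] += mass
        return {f: v / (1.0 - conflict) for f, v in combined.items()}

    # e.g., evidence derived from two weighted base predictors
    print(dempster_combine({"MACE": 0.6, "noMACE": 0.2, "Theta": 0.2},
                           {"MACE": 0.5, "noMACE": 0.3, "Theta": 0.2}))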


Subject(s)
Acute Coronary Syndrome/complications , Electronic Health Records , Machine Learning , Acute Coronary Syndrome/therapy , Hospitalization , Humans , Logistic Models , Predictive Value of Tests , Prognosis , Risk Assessment/methods