Results 1 - 3 of 3
1.
IEEE J Transl Eng Health Med ; 10: 2700414, 2022.
Article in English | MEDLINE | ID: mdl-36199984

ABSTRACT

This paper presents an integrated and scalable precision health service for health promotion and chronic disease prevention. Continuous real-time monitoring of lifestyle and environmental factors is implemented by integrating wearable devices, open environmental data, indoor air quality sensing devices, a location-based smartphone app, and an AI-assisted telecare platform. The AI-assisted telecare platform provided comprehensive insight into patients' clinical, lifestyle, and environmental data, and generated reliable predictions of future acute exacerbation events. All data from 1,667 patients were collected prospectively during a 24-month follow-up period, resulting in the detection of 386 abnormal episodes. Machine learning and deep learning algorithms were used to train modular chronic disease models. The modular chronic disease prediction models that passed external validation include obesity, panic disorder, and chronic obstructive pulmonary disease, with an average accuracy of 88.46%, a sensitivity of 75.6%, a specificity of 93.0%, and an F1 score of 79.8%. Compared with previous studies, we established an effective way to collect lifestyle, life trajectory, and symptom records, as well as environmental factors, and improved the performance of the prediction models by adding objective comprehensive data and feature selection. Our results also demonstrate that lifestyle and environmental factors are highly correlated with patient health and have the potential to predict future abnormal events better than questionnaire data alone. Furthermore, we constructed a cost-effective model that needs only a few features to support the prediction task, which is helpful for deploying real-world modular prediction models.


Subject(s)
Deep Learning , Wearable Electronic Devices , Chronic Disease , Cohort Studies , Humans , Machine Learning , Precision Medicine
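
The study above reports average accuracy, sensitivity, specificity, and F1 score for modular prediction models built on a small set of selected features. As a point of reference only, the sketch below shows one common way to combine feature selection with a classifier and compute those four metrics in scikit-learn; the classifier choice, the number of selected features, and the synthetic data are illustrative assumptions, not the authors' pipeline.

```python
# Minimal sketch: feature selection + classifier, evaluated with the metrics
# reported in the abstract (accuracy, sensitivity, specificity, F1).
# The RandomForest model, k=8 features, and synthetic data are placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score, recall_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for lifestyle, environmental, and clinical features.
X, y = make_classification(n_samples=1000, n_features=40, n_informative=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Keep only a few features to mimic the "cost-effective" modular model.
model = make_pipeline(
    SelectKBest(f_classif, k=8),
    RandomForestClassifier(n_estimators=200, random_state=0),
)
model.fit(X_train, y_train)
y_pred = model.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print("accuracy   :", accuracy_score(y_test, y_pred))
print("sensitivity:", recall_score(y_test, y_pred))   # TP / (TP + FN)
print("specificity:", tn / (tn + fp))                 # TN / (TN + FP)
print("F1 score   :", f1_score(y_test, y_pred))
```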
2.
JMIR Med Inform ; 10(11): e41342, 2022 Nov 10.
Article in English | MEDLINE | ID: mdl-36355417

ABSTRACT

BACKGROUND: The automatic coding of clinical text documents by using the International Classification of Diseases, 10th Revision (ICD-10) can be performed for statistical analyses and reimbursements. With the development of natural language processing models, new transformer architectures with attention mechanisms have outperformed previous models. Although multicenter training may increase a model's performance and external validity, the privacy of clinical documents should be protected. We used federated learning to train a model with multicenter data without sharing the data themselves. OBJECTIVE: This study aims to train a classification model via federated learning for ICD-10 multilabel classification. METHODS: Text data from discharge notes in electronic medical records were collected from the following three medical centers: Far Eastern Memorial Hospital, National Taiwan University Hospital, and Taipei Veterans General Hospital. After comparing the performance of different variants of bidirectional encoder representations from transformers (BERT), PubMedBERT was chosen for the word embeddings. During preprocessing, nonalphanumeric characters were retained because the model's performance decreased when they were removed. To explain the outputs of our model, we added a label attention mechanism to the model architecture. The model was trained with data from each of the three hospitals separately and via federated learning. The models trained via federated learning and the models trained with local data were compared on a testing set composed of data from all three hospitals. The micro F1 score was used to evaluate model performance across all three centers. RESULTS: The F1 scores of PubMedBERT, RoBERTa (Robustly Optimized BERT Pretraining Approach), ClinicalBERT, and BioBERT (BERT for Biomedical Text Mining) were 0.735, 0.692, 0.711, and 0.721, respectively. The F1 score of the model that retained nonalphanumeric characters was 0.8120, whereas the F1 score after removing these characters was 0.7875, a decrease of 0.0245 (3.11%). The F1 scores on the testing set were 0.6142, 0.4472, 0.5353, and 0.2522 for the federated learning, Far Eastern Memorial Hospital, National Taiwan University Hospital, and Taipei Veterans General Hospital models, respectively. The explainable predictions were displayed with highlighted input words via the label attention architecture. CONCLUSIONS: Federated learning was used to train the ICD-10 classification model on multicenter clinical text while protecting data privacy. The model's performance was better than that of models trained locally.
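
Federated learning here means that each hospital trains on its own discharge notes and only model parameters are aggregated by a central server. The following is a minimal FedAvg-style sketch of that idea, assuming a toy linear multilabel classifier and random tensors in place of the PubMedBERT label-attention model and the real clinical text.

```python
# Minimal FedAvg-style sketch: three "hospitals" train local copies of a model,
# and only the weights are averaged on a server; no raw text leaves a site.
# The linear classifier, tensor sizes, and random data are placeholders.
import copy
import torch
from torch import nn

def local_update(model, data, labels, epochs=1, lr=0.01):
    """Train a local copy on one site's data and return its parameters."""
    local = copy.deepcopy(model)
    opt = torch.optim.SGD(local.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # multilabel-style loss
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(local(data), labels)
        loss.backward()
        opt.step()
    return local.state_dict()

def fed_avg(states):
    """Average parameters across sites (equal weighting for simplicity)."""
    avg = copy.deepcopy(states[0])
    for key in avg:
        avg[key] = torch.stack([s[key] for s in states]).mean(dim=0)
    return avg

torch.manual_seed(0)
global_model = nn.Linear(128, 50)  # 128-dim features -> 50 ICD-10 labels (toy sizes)
sites = [(torch.randn(64, 128), torch.randint(0, 2, (64, 50)).float()) for _ in range(3)]

for _ in range(5):  # communication rounds
    states = [local_update(global_model, x, y) for x, y in sites]
    global_model.load_state_dict(fed_avg(states))
```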

3.
JMIR Med Inform ; 9(8): e23230, 2021 Aug 31.
Article in English | MEDLINE | ID: mdl-34463639

ABSTRACT

BACKGROUND: The International Classification of Diseases (ICD) code is widely used as a reference in medical systems and for billing purposes. However, classifying diseases into ICD codes still relies mainly on humans reading large amounts of written material, which makes coding laborious and time-consuming. Since the conversion of ICD-9 to ICD-10, the coding task has become much more complicated, and deep learning- and natural language processing-related approaches have been studied to assist disease coders. OBJECTIVE: This paper aims to construct a deep learning model for ICD-10 coding that automatically determines the corresponding diagnosis and procedure codes from free-text medical notes alone, to improve accuracy and reduce human effort. METHODS: We used diagnosis records from National Taiwan University Hospital and applied natural language processing techniques, including global vectors, word to vectors, embeddings from language models, bidirectional encoder representations from transformers (BERT), and a single-head attention recurrent neural network, within a deep neural network architecture to implement ICD-10 auto-coding. In addition, we introduced an attention mechanism into the classification model to extract keywords from diagnoses and to visualize the coding reference for training new ICD-10 coders. Sixty discharge notes were randomly selected to examine the change in coders' F1-scores and coding time before and after using our model. RESULTS: In experiments on the medical data set of National Taiwan University Hospital, our model achieved F1-scores of 0.715 and 0.618 for the ICD-10 Clinical Modification codes and Procedure Coding System codes, respectively, using BERT embeddings with a gated recurrent unit classification model. The trained models were deployed in an ICD-10 web service for coding and for training ICD-10 users. With this service, coders' F1-scores increased significantly from a median of 0.832 to 0.922 (P<.05), but coding time was not reduced. CONCLUSIONS: The proposed model significantly improved the F1-score but did not decrease the time consumed in coding by disease coders.
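
The best-performing configuration described above pairs BERT embeddings with a gated recurrent unit classifier and an attention mechanism that highlights keywords for coders. Below is a minimal sketch of that kind of architecture in PyTorch; the vocabulary size, dimensions, number of codes, and the trained-from-scratch embedding table are illustrative placeholders for the BERT-derived embeddings actually used.

```python
# Minimal sketch: token embeddings feed a GRU, an attention layer pools the
# hidden states (so per-token weights can highlight keywords), and a sigmoid
# head scores each ICD-10 code. All sizes are toy values.
import torch
from torch import nn

class GRUAttentionCoder(nn.Module):
    def __init__(self, vocab_size=30000, emb_dim=128, hidden=256, n_codes=100):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden, 1)       # one attention score per token
        self.out = nn.Linear(2 * hidden, n_codes)  # one logit per ICD-10 code

    def forward(self, token_ids):
        h, _ = self.gru(self.embed(token_ids))         # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(h), dim=1)   # (batch, seq, 1)
        context = (weights * h).sum(dim=1)             # attention-pooled summary
        return self.out(context), weights.squeeze(-1)  # logits + per-token weights

model = GRUAttentionCoder()
logits, attn = model(torch.randint(0, 30000, (2, 50)))  # 2 notes, 50 tokens each
probs = torch.sigmoid(logits)                           # multilabel code probabilities
```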
