1.
Appl Clin Inform; 14(4): 743-751, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37399838

ABSTRACT

OBJECTIVES: This study evaluated whether medical doctors could identify more hemorrhage events during chart review in a clinical setting when assisted by an artificial intelligence (AI) model, and assessed their perception of using the AI model.

METHODS: To develop the AI model, sentences from 900 electronic health records were labeled as positive or negative for hemorrhage and categorized into one of 12 anatomical locations. The AI model was evaluated on a test cohort of 566 admissions. Using eye-tracking technology, we investigated medical doctors' reading workflow during manual chart review. In addition, we performed a clinical use study in which medical doctors read two admissions with and without AI assistance, to evaluate their performance with, and perception of, the AI model.

RESULTS: The AI model had a sensitivity of 93.7% and a specificity of 98.1% on the test cohort. In the use studies, medical doctors missed more than 33% of relevant sentences when performing chart review without AI assistance. Hemorrhage events described in paragraphs were overlooked more often than bullet-pointed hemorrhage mentions. With AI-assisted chart review, medical doctors identified 48 and 49 percentage points more hemorrhage events than without assistance in the two admissions, and they were generally positive toward using the AI model as a supporting tool.

CONCLUSION: Medical doctors identified more hemorrhage events with AI-assisted chart review, and they were generally positive toward using the AI model.


Subject(s)
Artificial Intelligence, Physicians, Humans, Electronic Health Records, Hemorrhage/diagnosis, Hospitalization
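
The sensitivity and specificity reported for the AI model above are the standard sentence-level classification metrics. The following minimal Python sketch (illustrative only, not the authors' code; the labels are made up) shows how these two figures are computed from annotated chart-review data:

```python
# Illustrative sketch: sentence-level sensitivity and specificity
# from binary hemorrhage annotations (1 = hemorrhage, 0 = no hemorrhage).
def sensitivity_specificity(y_true, y_pred):
    """Return (sensitivity, specificity) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Toy example with made-up annotations, for illustration only.
truth      = [1, 1, 0, 0, 0, 1]
prediction = [1, 0, 0, 0, 1, 1]
print(sensitivity_specificity(truth, prediction))  # (0.667, 0.667)
```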
2.
Res Pract Thromb Haemost; 5(4): e12505, 2021 May.
Article in English | MEDLINE | ID: mdl-34013150

ABSTRACT

BACKGROUND: Bleeding is associated with significantly increased morbidity and mortality. Bleeding events are often described in the unstructured text of electronic health records, which makes them difficult to identify by manual inspection.

OBJECTIVES: To develop a deep learning model that detects and visualizes bleeding events in electronic health records.

PATIENTS/METHODS: Three hundred electronic health records with International Classification of Diseases, Tenth Revision diagnosis codes for bleeding or leukemia were extracted. Each sentence in the electronic health record was annotated as positive or negative for bleeding. The annotated sentences were used to develop a deep learning model that detects bleeding at sentence and note level.

RESULTS: On a balanced test set of 1178 sentences, the best-performing deep learning model achieved a sensitivity of 0.90, a specificity of 0.90, and a negative predictive value of 0.90. On a test set of 700 notes, of which 49 were positive for bleeding, the model achieved a note-level sensitivity of 1.00, specificity of 0.52, and negative predictive value of 1.00. By applying the sentence-level model at note level, the model can explain its predictions by highlighting the exact sentence in a note that contains information about bleeding. Moreover, the model performed consistently well across different types of bleeding.

CONCLUSIONS: A deep learning model can be used to detect and visualize bleeding events in the free text of electronic health records. The model can thus facilitate systematic assessment of bleeding risk and thereby optimize patient care and safety.
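
The abstract describes applying a sentence-level classifier at note level so that the sentence triggering a positive prediction can be shown to the reader. The minimal Python sketch below illustrates that aggregation pattern under stated assumptions: the keyword-based `score_sentence` stub is a hypothetical placeholder standing in for the trained deep learning model, not the published implementation.

```python
# Minimal sketch of note-level detection via a sentence-level scorer:
# a note is flagged if any sentence scores above a threshold, and the
# highest-scoring sentence is returned as the "evidence" to visualize.
import re

def score_sentence(sentence: str) -> float:
    # Placeholder scorer (assumption): a real system would call the
    # trained sentence-level deep learning model here.
    return 1.0 if re.search(r"bleed|hemorrhage|melena", sentence, re.I) else 0.0

def detect_bleeding_in_note(note: str, threshold: float = 0.5):
    # Naive sentence splitting on end-of-sentence punctuation.
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", note) if s.strip()]
    scored = [(score_sentence(s), s) for s in sentences]
    best_score, best_sentence = max(scored, default=(0.0, ""))
    return {
        "bleeding_detected": best_score >= threshold,
        "evidence_sentence": best_sentence if best_score >= threshold else None,
    }

# Usage example with a fabricated note, for illustration only.
note = "Patient admitted with anemia. Melena observed on day 2. Started on PPI."
print(detect_bleeding_in_note(note))
```

Aggregating by the maximum sentence score keeps the note-level decision traceable to a single sentence, which is what makes the prediction easy to visualize in context.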
