Results 1 - 12 of 12
1.
J Biomed Inform; 53: 73-80, 2015 Feb.
Article in English | MEDLINE | ID: mdl-25236952

ABSTRACT

BACKGROUND: Therapy for certain medical conditions occurs in a stepwise fashion, where one medication is recommended as initial therapy and other medications follow. Sequential pattern mining is a data mining technique used to identify patterns of ordered events. OBJECTIVE: To determine whether sequential pattern mining is effective for identifying temporal relationships between medications and accurately predicting the next medication likely to be prescribed for a patient. DESIGN: We obtained claims data from Blue Cross Blue Shield of Texas for patients prescribed at least one diabetes medication between 2008 and 2011, and divided these into a training set (90% of patients) and test set (10% of patients). We applied the cSPADE algorithm to mine sequential patterns of diabetes medication prescriptions at both the drug class and generic drug levels and ranked them by the support statistic. We then evaluated the accuracy of predictions made for which diabetes medication a patient was likely to be prescribed next. RESULTS: We identified 161,497 patients who had been prescribed at least one diabetes medication. We were able to mine stepwise patterns of pharmacological therapy that were consistent with guidelines. Within three attempts, we were able to predict the medication prescribed for 90.0% of patients when making predictions by drug class, and for 64.1% when making predictions at the generic drug level. These results were stable under 10-fold cross validation, ranging from 89.1% to 90.5% at the drug class level and from 63.5% to 64.9% at the generic drug level. Using 1 or 2 items in the patient's medication history led to more accurate predictions than not using any history, but using the entire history was sometimes worse. CONCLUSION: Sequential pattern mining is an effective technique to identify temporal relationships between medications and can be used to predict next steps in a patient's medication regimen. Accurate predictions can be made without using the patient's entire medication history.
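The prediction scheme described in this abstract can be illustrated with a small sketch. This is not the cSPADE implementation the authors used; it is a simplified prefix-counting stand-in, and the medication sequences below are hypothetical.

```python
from collections import Counter, defaultdict

def mine_patterns(sequences, history_len=2):
    """For each medication-history prefix (up to history_len items),
    count which medication was prescribed next -- a crude, support-style
    stand-in for the sequential patterns cSPADE would produce."""
    followers = defaultdict(Counter)
    for seq in sequences:
        for i in range(1, len(seq)):
            prefix = tuple(seq[max(0, i - history_len):i])
            followers[prefix][seq[i]] += 1
    return followers

def predict_next(followers, history, top_n=3, history_len=2):
    """Rank candidate next medications by how often they followed
    the same (truncated) history in the training sequences."""
    prefix = tuple(history[-history_len:])
    return [med for med, _ in followers[prefix].most_common(top_n)]

# Hypothetical training sequences of diabetes drug classes.
train = [
    ["metformin", "sulfonylurea", "insulin"],
    ["metformin", "sulfonylurea", "dpp4"],
    ["metformin", "sulfonylurea", "insulin"],
    ["metformin", "dpp4", "insulin"],
]
patterns = mine_patterns(train)
print(predict_next(patterns, ["metformin", "sulfonylurea"]))  # ['insulin', 'dpp4']
```

As in the paper, accuracy can then be measured as whether the true next medication appears among the top three candidates, and the effect of using only 1 or 2 history items versus the full history can be explored by varying `history_len`.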


Subjects
Drug Prescriptions/statistics & numerical data, Drug Therapy/methods, Insurance, Health/statistics & numerical data, Pattern Recognition, Automated, Algorithms, Data Mining, Decision Support Systems, Clinical, Diabetes Mellitus/drug therapy, Disease Progression, Humans, Programming Languages, Reproducibility of Results, Sulfonylurea Compounds/therapeutic use, Texas
2.
J Am Med Inform Assoc; 31(6): 1367-1379, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38497958

ABSTRACT

OBJECTIVE: This study aimed to develop and assess the performance of fine-tuned large language models for generating responses to patient messages sent via an electronic health record patient portal. MATERIALS AND METHODS: Utilizing a dataset of messages and responses extracted from the patient portal at a large academic medical center, we developed a model (CLAIR-Short) based on a pre-trained large language model (LLaMA-65B). In addition, we used the OpenAI API to update physician responses from an open-source dataset into a format with informative paragraphs that offered patient education while emphasizing empathy and professionalism. By combining this dataset with the portal data, we further fine-tuned our model (CLAIR-Long). To evaluate the fine-tuned models, we used 10 representative patient portal questions in primary care to generate responses. We asked primary care physicians to review the generated responses from our models and ChatGPT and rate them for empathy, responsiveness, accuracy, and usefulness. RESULTS: The dataset consisted of 499,794 pairs of patient messages and corresponding responses from the patient portal, with 5,000 patient messages and ChatGPT-updated responses from an online platform. Four primary care physicians participated in the survey. CLAIR-Short exhibited the ability to generate concise responses similar to providers' responses. CLAIR-Long responses provided increased patient educational content compared to CLAIR-Short and were rated similarly to ChatGPT's responses, receiving positive evaluations for responsiveness, empathy, and accuracy, while receiving a neutral rating for usefulness. CONCLUSION: This subjective analysis suggests that leveraging large language models to generate responses to patient messages demonstrates significant potential in facilitating communication between patients and healthcare providers.


Subjects
Patient Portals, Humans, Electronic Health Records, Physician-Patient Relations, Natural Language Processing, Empathy, Datasets as Topic
3.
J Am Med Inform Assoc; 31(8): 1665-1670, 2024 Aug 01.
Article in English | MEDLINE | ID: mdl-38917441

ABSTRACT

OBJECTIVE: This study aims to investigate the feasibility of using large language models (LLMs) to engage with patients while they are drafting a question to their healthcare providers, generating pertinent follow-up questions that the patient can answer before sending the message. The goal is to ensure that the healthcare provider receives all the information needed to answer the patient's question safely and accurately, eliminating back-and-forth messaging and the associated delays and frustrations. METHODS: We collected a dataset of patient messages sent between January 1, 2022 and March 7, 2023 at Vanderbilt University Medical Center. Two internal medicine physicians identified 7 common scenarios. We used 3 LLMs to generate follow-up questions: (1) Comprehensive LLM Artificial Intelligence Responder (CLAIR), a locally fine-tuned LLM; (2) GPT4 with a simple prompt; and (3) GPT4 with a complex prompt. Five physicians rated the generated follow-up questions, along with the actual follow-ups written by healthcare providers, on clarity, completeness, conciseness, and utility. RESULTS: For five scenarios, our CLAIR model had the best performance. The GPT4 model received higher scores for utility and completeness but lower scores for clarity and conciseness. CLAIR generated follow-up questions with clarity and conciseness similar to the actual follow-ups written by healthcare providers, with higher utility than both healthcare providers and GPT4, and with completeness lower than GPT4 but better than healthcare providers. CONCLUSION: LLMs can generate follow-up questions that clarify a patient's medical question and compare favorably to those written by healthcare providers.
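The study's actual "simple" and "complex" prompts are not reproduced in the abstract; the sketch below is a hypothetical illustration of how such a follow-up-question prompt might be assembled before being sent to an LLM.

```python
def build_followup_prompt(patient_message, max_questions=3):
    """Assemble an instruction for an LLM to elicit missing clinical
    details before the patient's message is sent.  The wording here is
    a hypothetical illustration, not the study's actual prompt."""
    return (
        "You are assisting a patient who is drafting a message to their "
        "healthcare provider. Read the draft below and list up to "
        f"{max_questions} concise follow-up questions the patient should "
        "answer before sending it, so the provider can reply safely and "
        "accurately in a single message. Ask only for clinically relevant "
        "details (timing, severity, current medications, allergies) and "
        "do not give medical advice.\n\n"
        f"Patient draft:\n{patient_message}"
    )

print(build_followup_prompt("My blood pressure has been high this week."))
```

The same patient draft can then be sent with this instruction to any chat-completion model; the study compared a locally fine-tuned model against GPT4 under prompts of differing complexity.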


Subjects
Artificial Intelligence, Humans, Physician-Patient Relations, Feasibility Studies, Text Messaging
4.
J Am Med Inform Assoc; 31(6): 1388-1396, 2024 May 20.
Article in English | MEDLINE | ID: mdl-38452289

ABSTRACT

OBJECTIVES: To evaluate the capability of generative artificial intelligence (AI) to summarize alert comments and to determine whether the AI-generated summaries could be used to improve clinical decision support (CDS) alerts. MATERIALS AND METHODS: We extracted user comments on alerts generated from September 1, 2022 to September 1, 2023 at Vanderbilt University Medical Center. For a subset of 8 alerts, comment summaries were generated independently by 2 physicians and then separately by GPT-4. We surveyed 5 CDS experts to rate the human-generated and AI-generated summaries on a scale from 1 (strongly disagree) to 5 (strongly agree) for 4 metrics: clarity, completeness, accuracy, and usefulness. RESULTS: Five CDS experts participated in the survey. A total of 16 human-generated summaries and 8 AI-generated summaries were assessed. Among the top 8 rated summaries, 5 were generated by GPT-4. AI-generated summaries demonstrated high levels of clarity, accuracy, and usefulness, similar to the human-generated summaries. Moreover, AI-generated summaries exhibited significantly higher completeness and usefulness compared to the human-generated summaries (AI: 3.4 ± 1.2, human: 2.7 ± 1.2, P = .001). CONCLUSION: End-user comments provide clinicians' immediate feedback on CDS alerts and can serve as a direct and valuable data resource for improving CDS delivery. Traditionally, these comments may not be considered in the CDS review process because of their unstructured nature, large volume, and redundant or irrelevant content. Our study demonstrates that GPT-4 is capable of distilling these comments into summaries characterized by high clarity, accuracy, and completeness. AI-generated summaries are equivalent to, and potentially better than, human-generated summaries. These AI-generated summaries could give CDS experts a novel means of reviewing user comments to rapidly optimize CDS alerts both online and offline.


Subjects
Artificial Intelligence, Decision Support Systems, Clinical, Medical Order Entry Systems, Humans, Electronic Health Records, Natural Language Processing
5.
medRxiv; 2023 Jul 16.
Article in English | MEDLINE | ID: mdl-37503263

ABSTRACT

Objective: This study aimed to develop and assess the performance of fine-tuned large language models for generating responses to patient messages sent via an electronic health record patient portal. Methods: Utilizing a dataset of messages and responses extracted from the patient portal at a large academic medical center, we developed a model (CLAIR-Short) based on a pre-trained large language model (LLaMA-65B). In addition, we used the OpenAI API to update physician responses from an open-source dataset into a format with informative paragraphs that offered patient education while emphasizing empathy and professionalism. By combining this dataset with the portal data, we further fine-tuned our model (CLAIR-Long). To evaluate the fine-tuned models, we used ten representative patient portal questions in primary care to generate responses. We asked primary care physicians to review the generated responses from our models and ChatGPT and rate them for empathy, responsiveness, accuracy, and usefulness. Results: The dataset consisted of a total of 499,794 pairs of patient messages and corresponding responses from the patient portal, with 5,000 patient messages and ChatGPT-updated responses from an online platform. Four primary care physicians participated in the survey. CLAIR-Short exhibited the ability to generate concise responses similar to providers' responses. CLAIR-Long responses provided increased patient educational content compared to CLAIR-Short and were rated similarly to ChatGPT's responses, receiving positive evaluations for responsiveness, empathy, and accuracy, while receiving a neutral rating for usefulness. Conclusion: Leveraging large language models to generate responses to patient messages demonstrates significant potential in facilitating communication between patients and primary care providers.

6.
medRxiv; 2023 Feb 23.
Article in English | MEDLINE | ID: mdl-36865144

ABSTRACT

Objective: To determine whether ChatGPT can generate useful suggestions for improving clinical decision support (CDS) logic and to assess noninferiority compared to human-generated suggestions. Methods: We supplied summaries of CDS logic to ChatGPT, an artificial intelligence (AI) tool for question answering that uses a large language model, and asked it to generate suggestions. We asked human clinician reviewers to review the AI-generated suggestions as well as human-generated suggestions for improving the same CDS alerts, and to rate the suggestions on usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy. Results: Five clinicians analyzed 36 AI-generated suggestions and 29 human-generated suggestions for 7 alerts. Of the 20 suggestions that scored highest in the survey, 9 were generated by ChatGPT. The suggestions generated by AI were found to offer unique perspectives and were evaluated as highly understandable and relevant, with moderate usefulness and low acceptance, bias, inversion, and redundancy. Conclusion: AI-generated suggestions could be an important complementary part of optimizing CDS alerts; they can identify potential improvements to alert logic and support their implementation, and may even assist experts in formulating their own suggestions for CDS improvement. ChatGPT shows great potential for using large language models and reinforcement learning from human feedback to improve CDS alert logic, and potentially other medical areas involving complex clinical logic, a key step in the development of an advanced learning health system.

7.
J Am Med Inform Assoc; 30(7): 1237-1245, 2023 Jun 20.
Article in English | MEDLINE | ID: mdl-37087108

ABSTRACT

OBJECTIVE: To determine whether ChatGPT can generate useful suggestions for improving clinical decision support (CDS) logic and to assess noninferiority compared to human-generated suggestions. METHODS: We supplied summaries of CDS logic to ChatGPT, an artificial intelligence (AI) tool for question answering that uses a large language model, and asked it to generate suggestions. We asked human clinician reviewers to review the AI-generated suggestions as well as human-generated suggestions for improving the same CDS alerts, and to rate the suggestions on usefulness, acceptance, relevance, understanding, workflow, bias, inversion, and redundancy. RESULTS: Five clinicians analyzed 36 AI-generated suggestions and 29 human-generated suggestions for 7 alerts. Of the 20 suggestions that scored highest in the survey, 9 were generated by ChatGPT. The suggestions generated by AI were found to offer unique perspectives and were evaluated as highly understandable and relevant, with moderate usefulness and low acceptance, bias, inversion, and redundancy. CONCLUSION: AI-generated suggestions could be an important complementary part of optimizing CDS alerts; they can identify potential improvements to alert logic and support their implementation, and may even assist experts in formulating their own suggestions for CDS improvement. ChatGPT shows great potential for using large language models and reinforcement learning from human feedback to improve CDS alert logic, and potentially other medical areas involving complex clinical logic, a key step in the development of an advanced learning health system.


Subjects
Decision Support Systems, Clinical, Learning Health System, Humans, Artificial Intelligence, Language, Workflow
8.
J Diabetes Sci Technol; 19322968221119788, 2022 Sep 01.
Article in English | MEDLINE | ID: mdl-36047538

ABSTRACT

BACKGROUND: The insulin ordering process is an opportunity to provide clinicians with hypoglycemia risk predictions, but few hypoglycemia models centered on the insulin ordering process exist. METHODS: We used data on adult patients, admitted in 2019 to non-ICU floors of a large teaching hospital, who had orders for subcutaneous insulin. Our outcome was hypoglycemia, defined as a blood glucose (BG) <70 mg/dL within 24 hours after ordering insulin. We trained and evaluated models to predict hypoglycemia at the time of placing an insulin order, using logistic regression, random forest, and extreme gradient boosting (XGBoost). We compared performance using areas under the receiver operating characteristic curve (AUCs) and precision-recall curves, and determined recall at our goal precision of 0.30. RESULTS: Of 21,052 included insulin orders, 1,839 (9%) were followed by a hypoglycemic event within 24 hours. The logistic regression, random forest, and XGBoost models had AUCs of 0.81, 0.80, and 0.79, and recall of 0.44, 0.49, and 0.32, respectively. The most significant predictor was the lowest BG value in the 24 hours preceding the order. Predictors related to the insulin order being placed at the time of the prediction were useful to the model but less important than the patient's history of BG values over time. CONCLUSIONS: Hypoglycemia within the next 24 hours can be predicted at the time an insulin order is placed, providing an opportunity to integrate decision support into the medication ordering process to make insulin therapy safer.
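The "recall at a goal precision" metric used above can be computed by sweeping the decision threshold over the model's scores. The sketch below is a minimal pure-Python illustration with hypothetical labels and scores, not the authors' pipeline.

```python
def recall_at_precision(y_true, scores, goal_precision=0.30):
    """Sweep the decision threshold from the highest score downward and
    return the best recall achievable while precision stays at or above
    the goal."""
    total_pos = sum(y_true)
    if total_pos == 0:
        return 0.0
    # Visit examples in order of decreasing score, i.e. lowering the threshold.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    tp = fp = 0
    best_recall = 0.0
    for i in order:
        if y_true[i]:
            tp += 1
        else:
            fp += 1
        if tp / (tp + fp) >= goal_precision:
            best_recall = max(best_recall, tp / total_pos)
    return best_recall

# Hypothetical labels (1 = hypoglycemia within 24 h) and model risk scores.
y = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
s = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
print(recall_at_precision(y, s))  # 1.0
```

Reporting recall at a fixed precision, rather than AUC alone, reflects the operational constraint that only a limited fraction of alerts shown to clinicians may be false positives.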

9.
J Am Med Inform Assoc; 29(6): 1050-1059, 2022 May 11.
Article in English | MEDLINE | ID: mdl-35244165

ABSTRACT

OBJECTIVE: We describe the Clickbusters initiative implemented at Vanderbilt University Medical Center (VUMC), which was designed to improve safety and quality and reduce burnout through the optimization of clinical decision support (CDS) alerts. MATERIALS AND METHODS: We developed a 10-step Clickbusting process and implemented a program that included a curriculum, CDS alert inventory, oversight process, and gamification. We carried out two 3-month rounds of the Clickbusters program at VUMC. We completed descriptive analyses of the changes made to alerts during the process, and of alert firing rates before and after the program. RESULTS: Prior to Clickbusters, VUMC had 419 CDS alerts in production, with 488,425 firings (42,982 interruptive) each week. After 2 rounds, the Clickbusters program resulted in detailed, comprehensive reviews of 84 CDS alerts and reduced the number of weekly alert firings by more than 70,000 (15.43%). In addition to the direct improvements in CDS, the initiative also increased user engagement and involvement in CDS. CONCLUSIONS: At VUMC, the Clickbusters program was successful in optimizing CDS alerts by reducing alert firings and resulting clicks. The program also involved more users in the process of evaluating and improving CDS and helped build a culture of continuous evaluation and improvement of clinical content in the electronic health record.


Subjects
Decision Support Systems, Clinical, Medical Order Entry Systems, Electronic Health Records, Humans
10.
J Am Med Inform Assoc; 25(11): 1552-1555, 2018 Nov 01.
Article in English | MEDLINE | ID: mdl-30060109

ABSTRACT

Clinical vocabularies allow for standard representation of clinical concepts and can also contain knowledge structures, such as hierarchy, that facilitate the creation of maintainable and accurate clinical decision support (CDS). A key architectural feature of clinical hierarchies is how they handle parent-child relationships: specifically, whether they are strict hierarchies (allowing a single parent per concept) or polyhierarchies (allowing multiple parents per concept). These structures handle subsumption relationships (ie, ancestor and descendant relationships) differently. In this paper, we describe three real-world malfunctions of clinical decision support related to incorrect assumptions about subsumption checking for β-blockers, specifically carvedilol, a non-selective β-blocker that also has α-blocker activity. We recommend that (1) CDS implementers should learn about the limitations of terminologies, hierarchies, and classification; (2) CDS implementers should thoroughly test CDS, with a focus on special or unusual cases; (3) CDS implementers should monitor feedback from users; and (4) electronic health record (EHR) and clinical content developers should offer and support polyhierarchical clinical terminologies, especially for medications.
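The carvedilol problem can be made concrete with a toy polyhierarchy. The concept names below are hypothetical simplifications, not an actual terminology; the point is that a correct subsumption check must follow all parents of a concept, which a strict single-parent hierarchy cannot represent.

```python
# Hypothetical medication polyhierarchy: each concept may have several parents.
PARENTS = {
    "carvedilol": {"beta_blocker_nonselective", "alpha1_blocker"},
    "metoprolol": {"beta_blocker_selective"},
    "beta_blocker_nonselective": {"beta_blocker"},
    "beta_blocker_selective": {"beta_blocker"},
    "beta_blocker": {"antihypertensive"},
    "alpha1_blocker": {"antihypertensive"},
}

def is_a(concept, ancestor):
    """Subsumption check that follows *all* parent links.
    A strict hierarchy would have to drop one of carvedilol's two
    parents, and CDS relying on the dropped branch would misfire."""
    stack = [concept]
    seen = set()
    while stack:
        c = stack.pop()
        if c == ancestor:
            return True
        if c in seen:
            continue
        seen.add(c)
        stack.extend(PARENTS.get(c, ()))
    return False

print(is_a("carvedilol", "beta_blocker"))    # True
print(is_a("carvedilol", "alpha1_blocker"))  # True
print(is_a("metoprolol", "alpha1_blocker"))  # False
```

If carvedilol were filed only under the β-blocker branch, an alert keyed to α-blocker activity would silently never fire for it, which is the class of malfunction the paper describes.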


Subjects
Adrenergic alpha-1 Receptor Antagonists/therapeutic use, Carvedilol/therapeutic use, Decision Support Systems, Clinical, Medication Errors, Terminology as Topic, Adrenergic beta-Antagonists/therapeutic use, Carvedilol/classification, Electronic Health Records, Humans, Knowledge Management, Vocabulary, Controlled
11.
Am J Clin Pathol; 140(6): 801-806, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24225746

ABSTRACT

OBJECTIVES: Blood samples for vancomycin levels are often drawn too early, leading to potential misinterpretation of results. However, only a few studies describe interventions to reduce mistimed vancomycin levels. METHODS: We implemented an information technology (IT)-based intervention that provided educational instructions to nurses and determined the percentage of levels drawn too early for 27 months before (n = 6,291) and 14 months after (n = 3,608) the intervention. In addition, we conducted nurse interviews (n = 40) and dataset analysis to assess the root causes of mistimed levels. RESULTS: The percentage of vancomycin timing errors decreased from 39% (2,438/6,291) to 32% (1,137/3,608), though in a time series analysis this decrease was not statistically significant (P = .64). Four common causes of mistimed levels were found: (1) unclear provider orders, (2) scheduling levels to be drawn with morning laboratory tests, (3) lack of communication between providers, and (4) failure to adjust the blood draw in relation to the previous dose. CONCLUSIONS: A real-time, IT-based intervention that links the timing of levels with medication administration might have a more substantial impact.


Subjects
Blood Specimen Collection/methods, Drug Monitoring/methods, Education, Medical/methods, Vancomycin/blood, Humans, Nurses, Time