Results 1 - 8 of 8
1.
J Med Libr Assoc ; 112(1): 13-21, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38911524

ABSTRACT

Objective: To evaluate the ability of DynaMedex, an evidence-based drug and disease Point of Care Information (POCI) resource, to answer clinical queries using keyword searches. Methods: Real-world disease-related questions compiled from clinicians at an academic medical center, DynaMedex search query data, and medical board review resources were categorized into five clinical categories (complications & prognosis, diagnosis & clinical presentation, epidemiology, prevention & screening/monitoring, and treatment) and six specialties (cardiology, endocrinology, hematology-oncology, infectious disease, internal medicine, and neurology). A total of 265 disease-related questions were evaluated by pharmacist reviewers based on whether an answer was found (yes, no), whether the answer was relevant (yes, no), difficulty in finding the answer (easy, not easy), whether the best available evidence was cited (yes, no), whether clinical practice guidelines were included (yes, no), and the level of detail provided (detailed, limited details). Results: An answer was found for 259/265 questions (98%). Both reviewers found an answer for 241 questions (91%), neither found an answer for 6 questions (2%), and only one reviewer found an answer for 18 questions (7%). When an answer was found, both reviewers found a relevant answer 97% of the time. Of all relevant answers found, 68% were easy to find, 97% cited the best quality of evidence available, 72% included clinical guidelines, and 95% were detailed. Recommendations for areas of resource improvement were identified. Conclusions: The resource enabled reviewers to answer most questions easily with the best available quality of evidence, providing detailed answers and clinical guidelines, with a high level of replication of results across users.


Subject(s)
Point-of-Care Systems , Humans , Evidence-Based Medicine
2.
JMIR Med Inform ; 12: e53625, 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38842167

ABSTRACT

Background: Despite restrictive opioid management guidelines, opioid use disorder (OUD) remains a major public health concern. Machine learning (ML) offers a promising avenue for identifying and alerting clinicians about OUD, thus supporting better clinical decision-making regarding treatment. Objective: This study aimed to assess the clinical validity of an ML application designed to identify and alert clinicians to different levels of OUD risk by comparing it to a structured review of medical records by clinicians. Methods: The ML application generated OUD risk alerts on outpatient data for 649,504 patients from 2 medical centers between 2010 and 2013. A random sample of 60 patients was selected from each of 3 OUD risk-level categories (n=180). An OUD risk classification scheme and a standardized data extraction tool were developed to evaluate the validity of the alerts. Clinicians independently conducted a systematic and structured review of medical records and reached a consensus on each patient's OUD risk level, which was then compared with the ML application's risk assignments. Results: A total of 78,587 patients without cancer with at least 1 opioid prescription were identified as follows: not high risk (n=50,405, 64.1%), high risk (n=16,636, 21.2%), and suspected OUD or OUD (n=11,546, 14.7%). The sample of 180 patients was representative of the total population in terms of age, sex, and race. The interrater reliability between the ML application and clinicians had a weighted kappa coefficient of 0.62 (95% CI 0.53-0.71), indicating good agreement. Combining the high risk and suspected OUD or OUD categories and using the review of medical records as a gold standard, the ML application had a corrected sensitivity of 56.6% (95% CI 48.7%-64.5%) and a corrected specificity of 94.2% (95% CI 90.3%-98.1%). The positive and negative predictive values were 93.3% (95% CI 88.2%-96.3%) and 60.0% (95% CI 50.4%-68.9%), respectively. Key themes for disagreements between the ML application and clinician reviews were identified. Conclusions: A systematic comparison was conducted between an ML application and clinicians for identifying OUD risk. The ML application generated clinically valid and useful alerts about patients' different OUD risk levels. ML applications hold promise for identifying patients at differing levels of OUD risk and will likely complement traditional rule-based approaches to generating alerts about opioid safety issues.
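The sensitivity, specificity, and predictive values reported in the abstract above follow the standard confusion-matrix definitions. A minimal sketch of those calculations, using hypothetical counts for illustration only (not the study's data), where "positive" would correspond to the combined high risk and suspected OUD or OUD group:

```python
def diagnostic_metrics(tp, fn, fp, tn):
    """Compute standard validation metrics from raw confusion-matrix counts.

    tp: gold-standard positives flagged by the application (true positives)
    fn: gold-standard positives missed (false negatives)
    fp: gold-standard negatives flagged (false positives)
    tn: gold-standard negatives not flagged (true negatives)
    """
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    ppv = tp / (tp + fp)          # positive predictive value
    npv = tn / (tn + fn)          # negative predictive value
    return sensitivity, specificity, ppv, npv

# Hypothetical counts, chosen only to illustrate the formulas:
sens, spec, ppv, npv = diagnostic_metrics(tp=56, fn=44, fp=6, tn=94)
print(f"sensitivity={sens:.1%} specificity={spec:.1%} "
      f"ppv={ppv:.1%} npv={npv:.1%}")
```

Note how sensitivity and PPV can diverge sharply (as in the study, 56.6% vs 93.3%) when the application flags few false positives but misses many true positives.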

3.
Appl Clin Inform ; 14(4): 632-643, 2023 08.
Article in English | MEDLINE | ID: mdl-37586414

ABSTRACT

OBJECTIVES: We assessed how clinician satisfaction with a vendor electronic health record (EHR) changed over the 4 years following the transition from a homegrown EHR system, in order to identify areas for improvement. METHODS: We conducted a multiyear survey of clinicians across a large health care system after transitioning to a vendor EHR. Eligible clinicians from the first institution to transition received a survey invitation by email in fall 2016, and eligible clinicians systemwide then received surveys in spring 2018 and spring 2019. The survey included items assessing ease/difficulty of completing tasks and items assessing perceptions of the EHR's value, usability, and impact. One item assessing overall satisfaction and one open-ended question were included. Frequencies and means were calculated, and a comparison of means was performed between 2018 and 2019 on all clinicians. A multivariable generalized linear model was performed to predict the outcome of overall satisfaction. RESULTS: Response rates for the surveys ranged from 14% to 19%. The mean response for overall satisfaction at one institution, Brigham and Women's Hospital, increased across the 3 years of surveys, from 2016 (2.85) to 2018 (3.01) to 2019 (3.21; p < 0.001). We found no significant difference in mean response for overall satisfaction between all responders of the 2018 survey (3.14) and those of the 2019 survey (3.19). Systemwide, the tasks rated most difficult included "Monitoring patient medication adherence," "Identifying when a referral has not been completed," and "Making a list of patients based on clinical information (e.g., problem, medication)." Clinicians disagreed the most with "The EHR helps me focus on patient care rather than the computer" and "The EHR allows me to complete tasks efficiently." CONCLUSION: Survey results indicate room for improvement in clinician satisfaction with the EHR. Usability of EHRs should continue to be an area of focus to ease clinician burden and improve clinician experience.


Subject(s)
Delivery of Health Care , Electronic Health Records , Humans , Female , Surveys and Questionnaires , Patient Care , Personal Satisfaction
5.
JMIR Hum Factors ; 10: e43960, 2023 04 17.
Article in English | MEDLINE | ID: mdl-37067858

ABSTRACT

BACKGROUND: Evidence-based point-of-care information (POCI) tools can facilitate patient safety and care by helping clinicians to answer disease state and drug information questions in less time and with less effort. However, these tools may also be visually challenging to navigate or lack the comprehensiveness needed to sufficiently address a medical issue. OBJECTIVE: This study aimed to collect clinicians' feedback and directly observe their use of the combined POCI tool DynaMed and Micromedex with Watson, now known as DynaMedex. EBSCO partnered with IBM Watson Health, now known as Merative, to develop the combined tool as a resource for clinicians. We aimed to identify areas for refinement based on participant feedback and examine participant perceptions to inform further development. METHODS: Participants (N=43) in varying clinical roles and specialties were recruited from Brigham and Women's Hospital and Massachusetts General Hospital in Boston, Massachusetts, United States, between August 10, 2021, and December 16, 2021, to take part in usability sessions aimed at evaluating the efficiency and effectiveness of, as well as satisfaction with, the DynaMed and Micromedex with Watson tool. Usability testing methods, including think aloud and observations of user behavior, were used to identify challenges regarding the combined tool. Data collection included measurements of time on task; task ease; satisfaction with the answer; posttest feedback on likes, dislikes, and perceived reliability of the tool; and interest in recommending the tool to a colleague. RESULTS: On a 7-point Likert scale, pharmacists rated ease (mean 5.98, SD 1.38) and satisfaction (mean 6.31, SD 1.34) with the combined POCI tool higher than the physicians, nurse practitioner, and physician assistants (ease: mean 5.57, SD 1.64; satisfaction: mean 5.82, SD 1.60). Pharmacists spent longer (mean 2 minutes, 26 seconds; SD 1 minute, 41 seconds) on average finding an answer to their question than the physicians, nurse practitioner, and physician assistants (mean 1 minute, 40 seconds; SD 1 minute, 23 seconds). CONCLUSIONS: Overall, the tool performed well, but this usability evaluation identified multiple opportunities for improvement that would help inexperienced users.

6.
J Am Med Inform Assoc ; 29(8): 1416-1424, 2022 07 12.
Article in English | MEDLINE | ID: mdl-35575780

ABSTRACT

OBJECTIVE: We developed a comprehensive, medication-related clinical decision support (CDS) software prototype for use in the operating room. The purpose of this study was to compare the usability of the CDS software to the current standard electronic health record (EHR) medication administration and documentation workflow. MATERIALS AND METHODS: The primary outcome was the time taken to complete all simulation tasks. Secondary outcomes were the total number of mouse clicks and the total distance traveled on the screen in pixels. Forty participants were randomized and assigned to complete 7 simulation tasks in 1 of 2 groups: (1) the CDS group (n = 20), who completed tasks using the CDS, and (2) the Control group (n = 20), who completed tasks using the standard medication workflow with retrospective manual documentation in our anesthesia information management system. Blinding was not possible. We video- and audio-recorded the participants to capture quantitative data (time on task, mouse clicks, and pixels traveled on the screen) and qualitative data (think-aloud verbalization). RESULTS: The CDS group's mean total task time (402.2 ± 85.9 s) was less than that of the Control group (509.8 ± 103.6 s), with a mean difference of 107.6 s (95% confidence interval [CI], 60.5-179.5 s; P < .001). The CDS group used fewer mouse clicks (26.4 ± 4.5 clicks) than the Control group (56.0 ± 15.0 clicks), with a mean difference of 29.6 clicks (95% CI, 23.2-37.6; P < .001). The CDS group traveled fewer pixels on the computer monitor (59.5 ± 20.0 thousand pixels) than the Control group (109.3 ± 40.8 thousand pixels), with a mean difference of 49.8 thousand pixels (95% CI, 33.0-73.7; P < .001). CONCLUSIONS: The perioperative medication-related CDS software prototype substantially outperformed the standard EHR workflow by decreasing task time and improving efficiency and quality of care in a simulation setting.


Subject(s)
Decision Support Systems, Clinical , Documentation , Electronic Health Records , Humans , Retrospective Studies , Software
7.
JMIR Cancer ; 8(2): e31461, 2022 Apr 07.
Article in English | MEDLINE | ID: mdl-35389353

ABSTRACT

As technology continues to improve, health care systems have the opportunity to use a variety of innovative tools for decision-making, including artificial intelligence (AI) applications. However, there has been little research on the feasibility and efficacy of integrating AI systems into real-world clinical practice, especially from the perspectives of the clinicians who use such tools. In this paper, we review physicians' perceptions of and satisfaction with an AI tool, Watson for Oncology, which is used for the treatment of cancer. Watson for Oncology has been implemented in several different settings, including Brazil, China, India, South Korea, and Mexico. By focusing on the implementation of an AI-based clinical decision support system for oncology, we aim to demonstrate how AI can be both beneficial and challenging for cancer management globally, and particularly for low- and middle-income countries. By doing so, we hope to highlight the need for additional research on user experience and on the unique social, cultural, and political barriers to the successful implementation of AI for cancer care in low- and middle-income countries.

8.
NPJ Digit Med ; 4(1): 54, 2021 Mar 19.
Article in English | MEDLINE | ID: mdl-33742085

ABSTRACT

Artificial intelligence (AI) represents a valuable tool that could be used to improve the safety of care. Major adverse events in healthcare include healthcare-associated infections, adverse drug events, venous thromboembolism, surgical complications, pressure ulcers, falls, decompensation, and diagnostic errors. The objective of this scoping review was to summarize the relevant literature and evaluate the potential of AI to improve patient safety in these eight harm domains. A structured search was used to query MEDLINE for relevant articles. The scoping review identified studies that described the application of AI for prediction, prevention, or early detection of adverse events in each of the harm domains. The AI literature was narratively synthesized for each domain, and findings were considered in the context of incidence, cost, and preventability to make projections about the likelihood of AI improving safety. Three hundred ninety-two studies were included in the scoping review. The literature provided numerous examples of how AI has been applied within each of the eight harm domains using various techniques. The most common novel data were collected using different types of sensing technologies: vital sign monitoring, wearables, pressure sensors, and computer vision. There are significant opportunities to leverage AI and novel data sources to reduce the frequency of harm across all domains. We expect AI to have the greatest impact in areas where current strategies are not effective and where integration and complex analysis of novel, unstructured data are necessary to make accurate predictions; this applies specifically to adverse drug events, decompensation, and diagnostic errors.
