Results 1 - 11 of 11
1.
Expert Opin Drug Saf ; 23(5): 547-552, 2024 May.
Article in English | MEDLINE | ID: mdl-38597245

ABSTRACT

INTRODUCTION: Artificial intelligence or machine learning (AI/ML) based systems can help personalize prescribing decisions for individual patients. The recommendations of these clinical decision support systems must relate to the "label" of the medicines involved. The label of a medicine is an approved guide that indicates how to prescribe the drug in a safe and effective manner. AREAS COVERED: The label for a medicine may evolve as new information on drug safety and effectiveness emerges, leading to the addition or removal of warnings or drug-drug interactions, or to the approval of new indications. However, updates to these AI/ML recommendation systems may be delayed, which could influence the safety of prescribing decisions. This article explores the need to keep AI/ML tools 'in sync' with any label changes. Additionally, challenges relating to medicine availability and geographical suitability are discussed. EXPERT OPINION: These considerations highlight the important role that pharmacoepidemiologists and drug safety professionals must play in the monitoring and use of these tools. Furthermore, these issues highlight the guiding role that regulators need to have in the planning and oversight of these tools.


Artificial intelligence or machine learning (AI/ML) based systems that guide the prescription of medications have the potential to vastly improve patient care, but these tools should only provide recommendations that are in line with the label of a medicine. With a constantly evolving medication label, this is likely to be a challenge, one that also has implications for the off-label use of medicines.


Subject(s)
Artificial Intelligence , Decision Support Systems, Clinical , Drug Labeling , Drug-Related Side Effects and Adverse Reactions , Machine Learning , Humans , Drug-Related Side Effects and Adverse Reactions/prevention & control , Drug Interactions , Pharmacoepidemiology/methods , Practice Patterns, Physicians'/standards , Precision Medicine
2.
Drug Saf ; 47(2): 117-123, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38019365

ABSTRACT

The use of artificial intelligence (AI)-based tools to guide prescribing decisions is full of promise and may enhance patient outcomes. These tools can perform actions such as choosing the 'safest' medication, choosing between competing medications, promoting de-prescribing or even predicting non-adherence. These tools can exist in a variety of formats; for example, they may be directly integrated into electronic medical records or they may exist as a stand-alone website accessible by a web browser. One potential impact of these tools is that they could alter our understanding of the benefit-risk of medicines in the real world. Currently, the benefit-risk of approved medications is assessed according to carefully planned agreements covering spontaneous reporting systems and planned surveillance studies. However, AI-based tools may limit or even block prescription to high-risk patients or prevent off-label use. The uptake and temporal availability of these tools may be uneven across healthcare systems and geographies, creating artefacts in data that are difficult to account for. It is also hard to estimate the 'true impact' that a tool had on a prescribing decision. International borders may also be highly porous to these tools, especially in cases where tools are available over the web. These tools already exist, and their use is likely to increase in the coming years. How they can be accounted for in benefit-risk decisions is yet to be seen.


Subject(s)
Artificial Intelligence , Delivery of Health Care , Humans , Drug Prescriptions , Electronic Health Records , Risk Assessment
3.
J Allergy Clin Immunol Pract ; 11(2): 519-526.e3, 2023 02.
Article in English | MEDLINE | ID: mdl-36581072

ABSTRACT

BACKGROUND: The quality of allergy documentation in electronic health records is frequently poor. OBJECTIVE: To compare the usability of 3 graphical user interfaces (GUIs) for drug allergy documentation. METHODS: Physicians tested 3 GUIs by means of 5 fictional drug allergy scenarios: the current GUI (GUI 0), using mainly free-text, and 2 new coded versions (GUI 1 and GUI 2) collecting information on allergen category, specific allergen, symptom(s), symptom onset, timing of initial reaction, and diagnosis status, with a semiautomatic delabeling feature. Satisfaction was measured by the System Usability Scale questionnaire, efficiency by time to complete the tasks, and effectiveness by a task completion score. Posttest interviews provided more in-depth qualitative feedback. RESULTS: Thirty physicians from 7 different medical specialties and with varying degrees of experience participated. The mean System Usability Scale scores for GUI 1 (77.25, adjective rating "Good") and GUI 2 (78.42, adjective rating "Good") were significantly higher than for GUI 0 (56.58, adjective rating "OK") (Z, 6.27, Padj < .001 and Z, 6.62, Padj < .001, respectively). There was no significant difference in task time between GUIs. Task completion scores of GUI 1 and GUI 2 were higher than for GUI 0 (Z, 9.59, Padj < .001 and Z, 11.87, Padj < .001, respectively). Quantitative and qualitative findings were combined to propose a GUI 3 with high usability. CONCLUSIONS: The usability and quality of allergy documentation were higher for the newly developed coded GUIs with a semiautomatic delabeling feature, without being more time-consuming.
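The SUS figures reported above follow the standard Brooke scoring scheme: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is scaled by 2.5 to a 0-100 score. A minimal sketch (an illustration of the standard scheme, not code from the study):

```python
def sus_score(responses):
    """Compute a System Usability Scale (SUS) score from ten 1-5 Likert
    responses using the standard Brooke scoring scheme.

    Odd-numbered items (1-indexed) contribute (response - 1), even-numbered
    items contribute (5 - response); the total is scaled by 2.5 so that the
    final score lies on a 0-100 scale.
    """
    if len(responses) != 10:
        raise ValueError("SUS requires exactly 10 item responses")
    total = 0
    for i, r in enumerate(responses, start=1):
        total += (r - 1) if i % 2 == 1 else (5 - r)
    return total * 2.5
```

A neutral all-3s response sheet yields the scale midpoint of 50, which makes the reported gap between GUI 0 (56.58) and the coded GUIs (~77-78) easy to interpret.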


Subject(s)
Drug Hypersensitivity , Hypersensitivity , Humans , User-Computer Interface , Electronic Health Records , Documentation , Drug Hypersensitivity/diagnosis
4.
Br J Clin Pharmacol ; 89(4): 1374-1385, 2023 04.
Article in English | MEDLINE | ID: mdl-36321834

ABSTRACT

AIMS: Many clinical decision support systems trigger warning alerts for drug-drug interactions potentially leading to QT prolongation and torsades de pointes (QT-DDIs). Unfortunately, there is both overalerting and underalerting, because stratification is based only on a fixed QT-DDI severity level. We aimed to improve QT-DDI alerting by developing and validating a risk prediction model considering patient- and drug-related factors. METHODS: We fitted 31 predictor candidates to a stepwise linear regression for 1000 bootstrap samples and selected the predictors present in 95% of the 1000 models. A final linear regression model with those variables was fitted on the original development sample (350 QT-DDIs). This model was validated on an external dataset (143 QT-DDIs). Both true QTc and predicted QTc were stratified into three risk levels (low, moderate and high). Stratification of QT-DDIs could be appropriate (predicted risk = true risk), acceptable (one risk level difference) or inappropriate (two risk levels difference). RESULTS: The final model included 11 predictors, the three most important being use of antiarrhythmics, age and baseline QTc. Comparing current practice to the prediction model, appropriate stratification increased significantly from 37% to 54% of QT-DDIs (increase of 17.5% on average [95% CI +5.4% to +29.6%], padj = 0.006) and inappropriate stratification decreased significantly from 13% to 1% of QT-DDIs (decrease of 11.2% on average [95% CI -17.7% to -4.7%], padj ≤ 0.001). CONCLUSION: The prediction model including patient- and drug-related factors outperformed QT alerting based on QT-DDI severity alone and is therefore a promising strategy to improve DDI alerting.
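The three-way grading of stratifications described above (appropriate, acceptable, inappropriate, depending on how far the predicted risk level deviates from the true one) can be sketched as a small helper; this is an illustrative reconstruction of the stated definition, not the study's code:

```python
# Ordinal encoding of the three QT-DDI risk levels used in the study.
LEVELS = {"low": 0, "moderate": 1, "high": 2}

def classify_stratification(true_risk, predicted_risk):
    """Grade a QT-DDI risk stratification by the distance between predicted
    and true risk level: equal -> 'appropriate', one level apart ->
    'acceptable', two levels apart -> 'inappropriate'."""
    diff = abs(LEVELS[true_risk] - LEVELS[predicted_risk])
    return ("appropriate", "acceptable", "inappropriate")[diff]
```

Counting the share of "appropriate" labels over a validation set then reproduces the 37%-versus-54% comparison reported in the results.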


Subject(s)
Decision Support Systems, Clinical , Long QT Syndrome , Torsades de Pointes , Humans , Long QT Syndrome/chemically induced , Long QT Syndrome/diagnosis , Drug Interactions , Torsades de Pointes/chemically induced , Torsades de Pointes/prevention & control , Anti-Arrhythmia Agents , Risk Factors , Electrocardiography
5.
J Med Syst ; 46(12): 100, 2022 Nov 23.
Article in English | MEDLINE | ID: mdl-36418746

ABSTRACT

In clinical practice, many drug therapies are associated with prolongation of the QT interval. In the literature, the risk of drug-induced QT prolongation has mainly been estimated by means of logistic regression; only one paper reported the use of machine learning techniques. In this paper, we compare the performance of both techniques on the same dataset. High risk for QT prolongation was defined as a corrected QT interval (QTc) ≥ 450 ms or ≥ 470 ms for male and female patients, respectively. Both conventional statistical methods (CSM) and machine learning techniques (MLT) were used. All algorithms were validated internally and on a hold-out dataset of 512 and 102 drug-drug interactions with possible drug-induced QTc prolongation, respectively. MLT outperformed the best CSM in both internal and hold-out validation. Random forest and Adaboost classification performed best in the hold-out set, with an equal harmonic mean of sensitivity and specificity (HMSS) of 81.2% and an equal accuracy of 82.4%. Sensitivity and specificity were both high (75.6% and 87.7%, respectively). The most important features were baseline QTc value, C-reactive protein level, heart rate at baseline, age, calcium level, renal function, serum potassium level and atrial fibrillation status. All CSM performed similarly, with HMSS varying between 60.3% and 66.3%. The overall performance of logistic regression was 62.0%. MLT (bagging and boosting) outperform CSM in predicting drug-induced QTc prolongation; random forest and Adaboost classification gained 19.2% in performance compared to logistic regression, the technique most used in the literature to estimate the risk of QTc prolongation.
Future research should focus on testing the classification on fully external data, on further exploring the potential of other (new) machine and deep learning models, and on generating data pipelines that automatically feed data to the classifier.
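The HMSS used as the overall performance measure above is simply the harmonic mean of sensitivity and specificity; a minimal sketch (note that the reported HMSS of 81.2% follows directly from the reported sensitivity of 75.6% and specificity of 87.7%):

```python
def hmss(sensitivity, specificity):
    """Harmonic mean of sensitivity and specificity (HMSS), the overall
    performance measure used to compare classifiers. Both inputs are
    proportions in [0, 1]; the result is 0 when either input is 0."""
    if sensitivity + specificity == 0:
        return 0.0
    return 2 * sensitivity * specificity / (sensitivity + specificity)
```

Unlike accuracy, the harmonic mean penalizes imbalance: a classifier with perfect specificity but near-zero sensitivity scores near zero, which is why HMSS suits screening problems with rare positives.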


Subject(s)
Long QT Syndrome , Machine Learning , Humans , Female , Male , Drug Interactions , Algorithms , Heart Rate , Long QT Syndrome/chemically induced
6.
Stud Health Technol Inform ; 290: 991-992, 2022 Jun 06.
Article in English | MEDLINE | ID: mdl-35673171

ABSTRACT

The current drug allergy documentation module in the electronic health record of our institution is in a free-text format. Two versions of a structured and coded drug allergy documentation module were developed. Twenty-five physicians tested the three interfaces, each with five test scenarios (3x5). The usability of each interface was measured with a System Usability Scale questionnaire. Both new versions scored significantly better than the current free-text version. User feedback will be used to further optimize the new module.


Subject(s)
Drug Hypersensitivity , Physicians , Documentation , Drug Hypersensitivity/diagnosis , Electronic Health Records , Humans , User-Computer Interface
7.
Stud Health Technol Inform ; 294: 435-439, 2022 May 25.
Article in English | MEDLINE | ID: mdl-35612117

ABSTRACT

Ensemble modeling is an increasingly popular data science technique that combines the knowledge of multiple base learners to enhance predictive performance. In this paper, the idea was to increase predictive performance by selecting three algorithms from a set of tested classifiers: (a) the best overall performing algorithm (based on the harmonic mean of sensitivity and specificity (HMSS) of that algorithm); (b) the most sensitive model; and (c) the most specific model. This approach boils down to majority voting between the predictions of these three base learners. As an exemplary study, the case of identifying a prolonged QT interval after administration of a drug combination with increased risk of QT prolongation (QT-DDI) is presented. Performance measures included accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV); overall performance was measured by calculating the HMSS. Results show an increase in all performance measures compared to the original best performing algorithm, except for specificity, which remained stable. The presented approach is fairly simple and shows potential to increase predictive performance, even without adjusting the default cut-offs that differentiate between high- and low-risk cases. Future research should look at ways of combining all tested algorithms instead of only three, and at testing this approach on a multiclass prediction problem.
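The majority-voting step between the three base learners can be sketched as follows for binary labels (an illustrative reconstruction under the assumption of 0/1 predictions; the paper's actual pipeline is not published here):

```python
from collections import Counter

def majority_vote(pred_best, pred_sensitive, pred_specific):
    """Combine the predictions of three base learners (best overall by HMSS,
    most sensitive model, most specific model) by simple majority voting.

    Each argument is a list of binary labels (1 = high-risk QT-DDI); with
    three voters and two classes, a strict majority always exists.
    """
    combined = []
    for votes in zip(pred_best, pred_sensitive, pred_specific):
        label, _ = Counter(votes).most_common(1)[0]
        combined.append(label)
    return combined
```

Pairing the most sensitive with the most specific learner means the best overall model effectively breaks ties when the two specialists disagree, which is the intuition behind this three-member committee.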


Subject(s)
Algorithms , Data Science , Humans , Sensitivity and Specificity
10.
Int J Med Inform ; 148: 104393, 2021 04.
Article in English | MEDLINE | ID: mdl-33486355

ABSTRACT

OBJECTIVE: To evaluate the effect of six optimization strategies in a clinical decision support system (CDSS) for drug-drug interaction (DDI) screening on alert burden and alert acceptance, and to describe the acceptance of clinical pharmacist interventions. METHODS: The optimizations in the new CDSS were customization of the knowledge base (with addition of 67 extra DDIs and changes in severity classification), a new alert design, required override reasons for the most serious alerts, the creation of DDI-specific screening intervals, patient-specific alerting, and a real-time follow-up system in which clinical pharmacists reviewed all alerts and intervened by telephone. Alert acceptance was evaluated both at the prescription level (i.e. prescription acceptance: was the DDI prescribed?) and at the administration level (i.e. administration acceptance: did the DDI actually take place?). Finally, the new follow-up system was evaluated by assessing the acceptance of the clinical pharmacists' interventions. RESULTS: In the pre-intervention period, 1087 alerts (92.0% level 1 alerts) were triggered, accounting for 19 different DDIs. In the post-intervention period, 2630 alerts (38.4% level 1 alerts) were triggered, representing 86 different DDIs. The relative risk for prescription acceptance in the post-intervention period compared to the pre-intervention period was 4.02 (95% confidence interval (CI) 3.17-5.10; 25.5% versus 6.3%). The relative risk for administration acceptance was 1.16 (95% CI 1.08-1.25; 54.4% versus 46.7%). Finally, 86.9% of the clinical pharmacist interventions were accepted. CONCLUSION: Six concurrently implemented CDSS optimization strategies resulted in high alert acceptance and clinical pharmacist intervention acceptance. Administration acceptance was remarkably higher than prescription acceptance.
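Relative risks with confidence intervals like those reported above can be computed from raw counts with the standard log-RR method; the sketch below is a generic illustration of that method, not the study's statistical code:

```python
import math

def relative_risk(events_exp, n_exp, events_ctrl, n_ctrl):
    """Relative risk of an event in an exposed versus a control group,
    with a 95% confidence interval from the standard log-RR method:
    the standard error of log(RR) is
    sqrt(1/a - 1/n1 + 1/c - 1/n2) for event counts a, c and group sizes
    n1, n2."""
    p_exp = events_exp / n_exp
    p_ctrl = events_ctrl / n_ctrl
    rr = p_exp / p_ctrl
    se_log = math.sqrt(1 / events_exp - 1 / n_exp
                       + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - 1.96 * se_log)
    upper = math.exp(math.log(rr) + 1.96 * se_log)
    return rr, (lower, upper)
```

Note that an RR of 1.0 inside the interval would indicate no detectable effect; both intervals reported in the abstract exclude 1.0.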


Subject(s)
Decision Support Systems, Clinical , Medical Order Entry Systems , Pharmaceutical Preparations , Drug Interactions , Humans , Pharmacists
11.
Int J Med Inform ; 133: 104013, 2020 01.
Article in English | MEDLINE | ID: mdl-31698230

ABSTRACT

OBJECTIVE: To investigate whether context-specific alerts for potassium-increasing drug-drug interactions (DDIs) in a clinical decision support system reduced the alert burden, increased alert acceptance, and had an effect on the occurrence of hyperkalemia. MATERIALS AND METHODS: In the pre-intervention period all alerts for potassium-increasing DDIs were level 1 alerts advising absolute contraindication, while in the post-intervention period the same drug combinations could trigger a level 1 (absolute contraindication), a level 2 (monitor potassium values), or a level 3 alert (informative, not shown to physicians) based on the patient's recent laboratory value of potassium. Alert acceptance was defined as non-prescription or non-administration of the interacting drug combination for level 1 alerts and as monitoring of the potassium levels for level 2 alerts. RESULTS: The alert burden decreased by 92.8%. The relative risk (RR) for alert acceptance based on prescription rates for level 1 alerts and monitoring rates for level 2 alerts was 15.048 (86.5% vs 5.7%; 95% CI 12.037-18.811; P < 0.001). With alert acceptance for level 1 alerts based on actual administration and for level 2 alerts on monitoring rates, the RR was 3.597 (87.6% vs 24.4%; 95% CI 3.192-4.053; P < 0.001). In the generalized linear mixed model the effect of the intervention on the occurrence of hyperkalemia was not significant (OR 1.091, 95% CI 0.172-6.919). CONCLUSION: The proposed strategy appears effective in managing the delicate balance between over- and under-alerting.
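The context-specific alerting described above can be sketched as a simple lookup on the patient's most recent potassium value. Note that the numeric cutoffs below are illustrative assumptions; the abstract does not report the exact thresholds used:

```python
def potassium_ddi_alert_level(recent_k_mmol_l):
    """Map a patient's most recent serum potassium value (mmol/L) to an
    alert level for a potassium-increasing DDI.

    Level 1: absolute contraindication (potassium already elevated).
    Level 2: monitor potassium values.
    Level 3: informative only, not shown to physicians.

    The cutoffs of 5.0 and 4.5 mmol/L are illustrative assumptions, not
    the study's published thresholds.
    """
    if recent_k_mmol_l is None:   # no recent lab value: err on caution
        return 2
    if recent_k_mmol_l >= 5.0:    # assumed hyperkalemia cutoff
        return 1
    if recent_k_mmol_l >= 4.5:    # assumed high-normal cutoff
        return 2
    return 3
```

Because most patients have normal potassium values, most alerts fall into level 3 and never reach the physician, which is consistent with the 92.8% reduction in alert burden reported above.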


Subject(s)
Potassium , Decision Support Systems, Clinical , Drug Interactions , Humans