Results 1 - 20 of 152
1.
BMJ Health Care Inform ; 31(1)2024 Nov 02.
Article in English | MEDLINE | ID: mdl-39488434

ABSTRACT

OBJECTIVES: Technology-related prescribing errors curtail the positive impacts of computerised provider order entry (CPOE) on medication safety. Understanding how technology-related errors (TREs) occur can inform CPOE optimisation. Previously, we developed a classification of the underlying mechanisms of TREs using prescribing error data from two adult hospitals. Our objective was to update the classification using paediatric prescribing error data and to assess the reliability with which reviewers could independently apply the classification. MATERIALS AND METHODS: Using data on 1696 prescribing errors identified by chart review in 2016 and 2017 at a tertiary paediatric hospital, we identified errors that were technology-related. These errors were investigated to classify their underlying mechanisms using our previously developed classification, and new categories were added based on the data. A two-step process was used to identify and classify TREs, involving a review of the error in the CPOE and simulation of the error in the CPOE testing environment. RESULTS: The technology-related error mechanism (TREM) classification comprises six mechanism categories, one contributing factor and 19 subcategories. The categories are as follows: (1) incorrect system configuration or system malfunction, (2) opening or using the wrong patient record, (3) selection errors, (4) construction errors, (5) editing errors, (6) errors that occur when using workflows that differ from a paper-based system, and (7) contributing factor: use of hybrid systems. CONCLUSION: TREs remain a critical issue for CPOE. The updated TREM classification provides a systematic means of assessing and monitoring TREs to inform and prioritise system improvements and has now been updated for the paediatric setting.
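
The top-level structure of the updated TREM classification can be illustrated as a simple data structure. The sketch below is only illustrative: it encodes the six mechanism categories and the contributing factor named in the abstract as a Python Enum, omits the 19 subcategories (which the abstract does not name), and the keyword-based tag_error helper is a hypothetical stand-in, not the study's two-step chart-review-and-simulation process.

```python
from enum import Enum
from typing import Optional

class TremCategory(Enum):
    """Top-level TREM categories listed in the abstract (the 19 subcategories are omitted)."""
    SYSTEM_CONFIGURATION_OR_MALFUNCTION = 1   # incorrect system configuration or system malfunction
    WRONG_PATIENT_RECORD = 2                  # opening or using the wrong patient record
    SELECTION_ERROR = 3
    CONSTRUCTION_ERROR = 4
    EDITING_ERROR = 5
    WORKFLOW_DIFFERS_FROM_PAPER = 6           # workflows that differ from a paper-based system
    HYBRID_SYSTEM_USE = 7                     # contributing factor rather than a mechanism

def tag_error(review_note: str) -> Optional[TremCategory]:
    """Hypothetical keyword helper only; in the study, errors were classified by manual
    review of the CPOE record plus simulation in the CPOE testing environment."""
    keywords = {
        "wrong patient": TremCategory.WRONG_PATIENT_RECORD,
        "drop-down": TremCategory.SELECTION_ERROR,
        "default dose": TremCategory.CONSTRUCTION_ERROR,
    }
    note = review_note.lower()
    for phrase, category in keywords.items():
        if phrase in note:
            return category
    return None

print(tag_error("Prescriber picked the adjacent product in the drop-down list"))
```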


Subject(s)
Medical Order Entry Systems, Medication Errors, Humans, Medication Errors/prevention & control, Pediatric Hospitals, Reproducibility of Results
2.
Int J Retina Vitreous ; 10(1): 79, 2024 Oct 17.
Article in English | MEDLINE | ID: mdl-39420407

ABSTRACT

PURPOSE: This scoping review aims to explore the current applications of ChatGPT in the retina field, highlighting its potential, challenges, and limitations. METHODS: A comprehensive literature search was conducted across multiple databases, including PubMed, Scopus, MEDLINE, and Embase, to identify relevant articles published from 2022 onwards. The inclusion criteria focused on studies evaluating the use of ChatGPT in retinal healthcare. Data were extracted and synthesized to map the scope of ChatGPT's applications in retinal care, categorizing articles into various practical application areas such as academic research, charting, coding, diagnosis, disease management, and patient counseling. RESULTS: A total of 68 articles were included in the review, distributed across several categories: 8 related to academics and research, 5 to charting, 1 to coding and billing, 44 to diagnosis, 49 to disease management, 2 to literature consulting, 23 to medical education, and 33 to patient counseling. Many articles were classified into multiple categories due to overlapping topics. The findings indicate that while ChatGPT shows significant promise in areas such as medical education and diagnostic support, concerns regarding accuracy, reliability, and the potential for misinformation remain prevalent. CONCLUSION: ChatGPT offers substantial potential in advancing retinal healthcare by supporting clinical decision-making, enhancing patient education, and automating administrative tasks. However, its current limitations, particularly in clinical accuracy and the risk of generating misinformation, necessitate cautious integration into practice, with continuous oversight from healthcare professionals. Future developments should focus on improving accuracy, incorporating up-to-date medical guidelines, and minimizing the risks associated with AI-driven healthcare tools.

3.
JAMIA Open ; 7(4): ooae123, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39473879

ABSTRACT

Objectives: To determine whether the addition of a primary aldosteronism (PA) predictive model to a secondary hypertension decision support tool increases screening for PA in a primary care setting. Materials and Methods: One hundred fifty-three primary care clinics were randomized to receive a secondary hypertension decision support tool with or without an integrated predictive model between August 2023 and April 2024. Results: For patients with risk scores in the top 1 percentile, 63/2896 (2.2%) patients where the alert was displayed in model clinics had the order set launched, while 12/1210 (1.0%) in no-model clinics had the order set launched (P = .014). Nineteen of 2896 (0.66%) of these highest risk patients in model clinics had an aldosterone-to-renin ratio (ARR) ordered compared to 0/1210 (0.0%) patients in no-model clinics (P = .010). For patients with scores not in the top 1 percentile, 438/20 493 (2.1%) patients in model clinics had the order set launched compared to 273/17 820 (1.5%) in no-model clinics (P < .001). One hundred twenty-four of 20 493 (0.61%) in model clinics had an ARR ordered compared to 34/17 820 (0.19%) in the no-model clinics (P < .001). Discussion: The addition of a PA predictive model to secondary hypertension alert displays and triggering criteria along with order set displays and order preselection criteria results in a statistically and clinically significant increase in screening for PA, a condition that clinicians insufficiently screen for currently. Conclusion: Addition of a predictive model for an under-screened condition to traditional clinical decision support may increase screening for these conditions.
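
The abstract reports order-set launches and ARR orders as proportions with p values. The sketch below shows how such a two-proportion comparison could be reproduced from the reported top-1-percentile counts, assuming a z-test from statsmodels; the authors' exact statistical method is not stated in the abstract, so the computed p value may differ from the reported one.

```python
from statsmodels.stats.proportion import proportions_ztest

# Reported counts of top-1-percentile patients with the order set launched
count = [63, 12]          # model clinics, no-model clinics
nobs = [2896, 1210]

stat, p_value = proportions_ztest(count, nobs)
print(f"{count[0]/nobs[0]:.1%} vs {count[1]/nobs[1]:.1%}: z = {stat:.2f}, p = {p_value:.3f}")
```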

4.
J Nurs Scholarsh ; 2024 Sep 11.
Article in English | MEDLINE | ID: mdl-39262027

ABSTRACT

INTRODUCTION: Accurate and rapid triage can reduce undertriage and overtriage, which may improve emergency department flow. This study aimed to identify the effects of applying artificial intelligence-based triage in the clinical field, as reported in prospective studies. DESIGN: Systematic review of prospective studies. METHODS: CINAHL, Cochrane, Embase, PubMed, ProQuest, KISS, and RISS were searched from March 9 to April 18, 2023. All the data were screened independently by three researchers. The review included prospective studies that measured outcomes related to AI-based triage. Three researchers extracted data and independently assessed the studies' quality using the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) protocol. RESULTS: Of 1633 studies, seven met the inclusion criteria for this review. Most studies applied machine learning to triage, and only one was based on fuzzy logic. All studies, except one, utilized a five-level triage classification system. Regarding model performance, the feed-forward neural network achieved a precision of 33% in the level 1 classification, whereas the fuzzy clip model achieved a specificity and sensitivity of 99%. The accuracy of the models' triage predictions ranged from 80.5% to 99.1%. Other outcomes included time reduction, overtriage and undertriage checks, mistriage factors, and patient care and prognosis outcomes. CONCLUSION: Triage nurses in the emergency department can use artificial intelligence as a supportive means for triage. Ultimately, we hope this review can serve as a resource that helps reduce undertriage and positively affects patient health. PROTOCOL REGISTRATION: We have registered our review in PROSPERO (registration number: CRD42023415232).

5.
Int J Med Inform ; 191: 105581, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39106772

ABSTRACT

INTRODUCTION: The management of chronic diabetes mellitus and its complications demands customized glycaemia control strategies. Polypharmacy is prevalent among people with diabetes and comorbidities, which increases the risk of adverse drug reactions. Clinical decision support systems (CDSSs) may constitute an innovative solution to these problems. The aim of our study was to conduct a systematic review assessing the value of CDSSs for the management of antidiabetic drugs (AD). MATERIALS AND METHODS: We systematically searched the scientific literature published between January 2010 and October 2023. The retrieved studies were categorized as non-specific or AD-specific. The studies' quality was assessed using the Mixed Methods Appraisal Tool. The review's results were reported in accordance with the PRISMA guidelines. RESULTS: Twenty studies met our inclusion criteria. The majority of AD-specific studies were conducted more recently (2020-2023) compared to non-specific studies (2010-2015). This trend hints at growing interest in more specialized CDSSs tailored for prescriptions of ADs. The nine AD-specific studies focused on metformin and insulin and demonstrated positive impacts of the CDSSs on different outcomes, including reductions in the proportion of inappropriate prescriptions of ADs and in hypoglycaemia events. The 11 non-specific studies showed similar trends for metformin and insulin prescriptions, although the CDSSs' impacts were not significant. There was a predominance of metformin and insulin in the studied CDSSs and a lack of studies on ADs such as sodium-glucose cotransporter-2 (SGLT-2) inhibitors and glucagon-like peptide-1 (GLP-1) receptor agonists. CONCLUSION: The limited number of studies, especially randomized clinical trials, evaluating the application of CDSSs in the management of ADs underscores the need for further investigation. Our findings suggest a potential benefit of applying CDSSs to the prescription of ADs, particularly in primary care settings and when targeting clinical pharmacists. Finally, establishing core outcome sets is crucial for ensuring consistent and standardized evaluation of these CDSSs.


Subject(s)
Clinical Decision Support Systems, Hypoglycemic Agents, Humans, Hypoglycemic Agents/therapeutic use, Diabetes Mellitus/drug therapy, Polypharmacy
6.
BMJ Health Care Inform ; 31(1)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39209331

ABSTRACT

BACKGROUND: Older patients with diabetic kidney disease (DKD) often do not receive optimal pharmacological treatment. Current clinical practice guidelines (CPGs) do not incorporate the concept of personalised care. Clinical decision support (CDS) algorithms that consider both evidence and personalised care to improve patient outcomes can improve the care of older adults. The aim of this research is to design and validate a CDS algorithm for prescribing renin-angiotensin-aldosterone system inhibitors (RAASi) for older patients with diabetes. METHODS: The design of the CDS tool included the following phases: (1) gathering evidence from systematic reviews and meta-analyses of randomised clinical trials to determine the number needed to treat (NNT) and time-to-benefit (TTB) values applicable to our target population for use in the algorithm. (2) Building a list of potential cases that addressed different prescribing scenarios (starting, adding or switching to RAASi). (3) Reviewing relevant guidelines and extracting all recommendations related to prescribing RAASi for DKD. (4) Matching NNT and TTB with specific clinical cases. (5) Validating the CDS algorithm using Delphi technique. RESULTS: We created a CDS algorithm that covered 15 possible scenarios and we generated 36 personalised and nine general recommendations based on the calculated and matched NNT and TTB values and considering the patient's life expectancy and functional capacity. The algorithm was validated by experts in three rounds of Delphi study. CONCLUSION: We designed an evidence-informed CDS algorithm that integrates considerations often overlooked in CPGs. The next steps include testing the CDS algorithm in a clinical trial.
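
The core of the algorithm is matching NNT and TTB values to a prescribing scenario while weighing life expectancy and functional capacity. The sketch below is a hypothetical illustration of that time-to-benefit check only; the thresholds, field names and recommendation wording are invented for illustration and are not the validated CDS algorithm.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    nnt: float        # number needed to treat for the matched scenario
    ttb_years: float  # time-to-benefit for the matched scenario

def raasi_advice(evidence: Evidence, life_expectancy_years: float,
                 functionally_independent: bool) -> str:
    """Illustrative rule only: treatment benefit is plausible when expected survival
    exceeds the time-to-benefit; otherwise a goals-of-care discussion takes priority."""
    if life_expectancy_years <= evidence.ttb_years:
        return "Benefit unlikely within remaining lifespan; discuss goals of care before starting or intensifying RAASi."
    if not functionally_independent:
        return f"Consider RAASi cautiously; weigh an NNT of {evidence.nnt:.0f} against treatment burden."
    return f"Starting or continuing RAASi is reasonable (NNT {evidence.nnt:.0f}, TTB {evidence.ttb_years:.1f} years)."

print(raasi_advice(Evidence(nnt=20, ttb_years=2.5), life_expectancy_years=1.5,
                   functionally_independent=True))
```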


Subject(s)
Algorithms, Clinical Decision Support Systems, Diabetic Nephropathies, Humans, Aged, Delphi Technique, Male, Female, Aged 80 and over, Angiotensin-Converting Enzyme Inhibitors/therapeutic use
7.
BMJ Health Care Inform ; 31(1)2024 Aug 24.
Article in English | MEDLINE | ID: mdl-39181544

ABSTRACT

INTRODUCTION: Digital healthcare innovation has yielded many prototype clinical decision support (CDS) systems; however, few are fully adopted into practice, despite successful research outcomes. We aimed to explore the characteristics of implementations in clinical practice to inform future innovation. METHODS: Web of Science, Trip Database, PubMed, NHS Digital and the BMA website were searched for examples of CDS systems in May 2022, and the search was updated in June 2023. Papers were included if they reported on a CDS system that gave pathway advice to a clinician, was adopted into regular clinical practice and had sufficient published information for analysis. Examples were excluded if they were only used in a research setting or intended for patients. Articles found in citation searches were assessed alongside a detailed hand search of the grey literature to gather all available information, including commercial information. Examples were excluded if there was insufficient information for analysis. The normalisation process theory (NPT) framework informed analysis. RESULTS: 22 implemented CDS projects were included, with 53 related publications or sources of information (40 peer-reviewed publications and 13 alternative sources). NPT framework analysis indicated organisational support was paramount to successful adoption of CDS. Ensuring that workflows were optimised for patient care, alongside iterative, mixed-methods implementation, was key to engaging clinicians. CONCLUSION: Extensive searches revealed few examples of CDS systems available for analysis, highlighting the implementation gap between research and healthcare innovation. Lessons from the included projects include the need for organisational support, an underpinning mixed-methods implementation strategy and an iterative approach to address clinician feedback.


Subject(s)
Clinical Decision Support Systems, Humans
8.
BMC Med Inform Decis Mak ; 24(1): 188, 2024 Jul 04.
Article in English | MEDLINE | ID: mdl-38965569

ABSTRACT

BACKGROUND: Medication errors and associated adverse drug events (ADE) are a major cause of morbidity and mortality worldwide. In recent years, the prevention of medication errors has become a high priority in healthcare systems. In order to improve medication safety, computerized Clinical Decision Support Systems (CDSS) are increasingly being integrated into the medication process. Accordingly, a growing number of studies have investigated the medication safety-related effectiveness of CDSS. However, the outcome measures used are heterogeneous, leading to unclear evidence. The primary aim of this study is to summarize and categorize the outcomes used in interventional studies evaluating the effects of CDSS on medication safety in primary and long-term care. METHODS: We systematically searched PubMed, Embase, CINAHL, and Cochrane Library for interventional studies evaluating the effects of CDSS targeting medication safety and patient-related outcomes. We extracted methodological characteristics, outcomes and empirical findings from the included studies. Outcomes were assigned to three main categories: process-related, harm-related, and cost-related. Risk of bias was assessed using the Evidence Project risk of bias tool. RESULTS: Thirty-two studies met the inclusion criteria. Almost all studies (n = 31) used process-related outcomes, followed by harm-related outcomes (n = 11). Only three studies used cost-related outcomes. Most studies used outcomes from only one category and no study used outcomes from all three categories. The definition and operationalization of outcomes varied widely between the included studies, even within outcome categories. Overall, evidence on CDSS effectiveness was mixed. A significant intervention effect was demonstrated by nine of fifteen studies with process-related primary outcomes (60%) but only one out of five studies with harm-related primary outcomes (20%). The included studies faced a number of methodological problems that limit the comparability and generalizability of their results. CONCLUSIONS: Evidence on the effectiveness of CDSS is currently inconclusive due in part to inconsistent outcome definitions and methodological problems in the literature. Additional high-quality studies are therefore needed to provide a comprehensive account of CDSS effectiveness. These studies should follow established methodological guidelines and recommendations and use a comprehensive set of harm-, process- and cost-related outcomes with agreed-upon and consistent definitions. PROSPERO REGISTRATION: CRD42023464746.


Subject(s)
Clinical Decision Support Systems, Long-Term Care, Medication Errors, Primary Health Care, Humans, Clinical Decision Support Systems/standards, Medication Errors/prevention & control, Long-Term Care/standards, Primary Health Care/standards, Patient Safety/standards, Drug-Related Side Effects and Adverse Reactions/prevention & control, Outcome Assessment in Health Care
9.
BMJ Health Care Inform ; 31(1)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38955390

ABSTRACT

BACKGROUND: The detrimental repercussions of the COVID-19 pandemic on the quality of care and clinical outcomes for patients with acute coronary syndrome (ACS) necessitate a rigorous re-evaluation of prognostic prediction models in the context of the pandemic environment. This study aimed to elucidate the adaptability of prediction models for 30-day mortality in patients with ACS during the pandemic periods. METHODS: A total of 2041 consecutive patients with ACS were included from 32 institutions between December 2020 and April 2023. The dataset comprised patients who were admitted for ACS and underwent coronary angiography for the diagnosis during hospitalisation. The prediction accuracy of the Global Registry of Acute Coronary Events (GRACE) score and a machine learning model, KOTOMI, was evaluated for 30-day mortality in patients with ST-elevation acute myocardial infarction (STEMI) and non-ST-elevation acute coronary syndrome (NSTE-ACS). RESULTS: The area under the receiver operating characteristic curve (AUROC) was 0.85 (95% CI 0.81 to 0.89) for the GRACE score and 0.87 (95% CI 0.82 to 0.91) for KOTOMI in STEMI. The difference of 0.020 (95% CI -0.098 to 0.13) was not significant. For NSTE-ACS, the respective AUROCs were 0.82 (95% CI 0.73 to 0.91) for the GRACE score and 0.83 (95% CI 0.74 to 0.91) for KOTOMI, also a non-significant difference of 0.010 (95% CI -0.023 to 0.25). The prediction accuracy of both models was consistent in patients with STEMI and showed no significant variation in patients with NSTE-ACS across the pandemic periods. CONCLUSIONS: Both prediction models maintained high accuracy for 30-day mortality in patients with ACS even during the pandemic periods, despite the marginal variation observed.
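
The abstract summarises model performance as AUROCs with 95% CIs. The sketch below shows how an AUROC and a percentile-bootstrap CI could be computed for any such risk score, using illustrative synthetic data rather than the study dataset; the study's own CI method is not described in the abstract.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Illustrative stand-ins: 1 = died within 30 days, plus a continuous predicted risk
y_true = rng.integers(0, 2, size=500)
y_score = np.clip(y_true * 0.3 + rng.normal(0.4, 0.25, size=500), 0, 1)

auc = roc_auc_score(y_true, y_score)

# Percentile bootstrap for the 95% CI
boot = []
for _ in range(2000):
    idx = rng.integers(0, len(y_true), len(y_true))
    if y_true[idx].min() == y_true[idx].max():   # resample must contain both classes
        continue
    boot.append(roc_auc_score(y_true[idx], y_score[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"AUROC {auc:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```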


Subject(s)
Acute Coronary Syndrome, COVID-19, Humans, Acute Coronary Syndrome/mortality, COVID-19/epidemiology, COVID-19/mortality, Female, Male, Prognosis, Aged, Middle Aged, Machine Learning, SARS-CoV-2, ST-Elevation Myocardial Infarction/mortality, Coronary Angiography, ROC Curve, Registries, Pandemics
10.
J Periodontol ; 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39007745

ABSTRACT

BACKGROUND: With recent advances in artificial intelligence, the use of this technology has begun to facilitate comprehensive tissue evaluation and planning of interventions. This study aimed to assess different convolutional neural networks (CNN) in deep learning algorithms to detect keratinized gingiva based on intraoral photos and evaluate the ability of networks to measure keratinized gingiva width. METHODS: Six hundred of 1200 photographs taken before and after applying a disclosing agent were used to compare the neural networks in segmenting the keratinized gingiva. Segmentation performances of networks were evaluated using accuracy, intersection over union, and F1 score. Keratinized gingiva width from a reference point was measured from ground truth images and compared with the measurements of clinicians and the DeepLab image that was generated from the ResNet50 model. The effect of measurement operators, phenotype, and jaw on differences in measurements was evaluated by three-factor mixed-design analysis of variance (ANOVA). RESULTS: Among the compared networks, ResNet50 distinguished keratinized gingiva at the highest accuracy rate of 91.4%. The measurements between deep learning and clinicians were in excellent agreement according to jaw and phenotype. When analyzing the influence of the measurement operators, phenotype, and jaw on the measurements performed according to the ground truth, there were statistically significant differences in measurement operators and jaw (p < 0.05). CONCLUSIONS: Automated keratinized gingiva segmentation with the ResNet50 model might be a feasible method for assisting professionals. The measurement results promise a potentially high performance of the model as it requires less time and experience. PLAIN LANGUAGE SUMMARY: With recent advances in artificial intelligence (AI), it is now possible to use this technology to evaluate tissues and plan medical procedures thoroughly. This study focused on testing different AI models, specifically CNN, to identify and measure a specific type of gum tissue called keratinized gingiva using photos taken inside the mouth. Out of 1200 photos, 600 were used in the study to compare the performance of different CNN in identifying gingival tissue. The accuracy and effectiveness of these models were measured and compared to human clinician ratings. The study found that the ResNet50 model was the most accurate, correctly identifying gingival tissue 91.4% of the time. When the AI model and clinicians' measurements of gum tissue width were compared, the results were very similar, especially when accounting for different jaws and gum structures. The study also analyzed the effect of various factors on the measurements and found significant differences based on who took the measurements and jaw type. In conclusion, using the ResNet50 model to identify and measure gum tissue automatically could be a practical tool for dental professionals, saving time and requiring less expertise.
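
Segmentation performance in this study was evaluated with accuracy, intersection over union and F1 score. The sketch below computes these pixel-wise metrics for a binary keratinized-gingiva mask against a ground-truth mask, using small illustrative NumPy arrays rather than study images.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray) -> dict:
    """Pixel-wise accuracy, IoU and F1 for two binary masks of equal shape."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    return {
        "accuracy": (tp + tn) / pred.size,
        "iou": tp / (tp + fp + fn),
        "f1": 2 * tp / (2 * tp + fp + fn),
    }

truth = np.zeros((64, 64), dtype=bool); truth[20:40, 10:50] = True   # ground-truth gingiva region
pred = np.zeros_like(truth);            pred[22:42, 12:52] = True    # model prediction, slightly shifted
print(segmentation_metrics(pred, truth))
```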

11.
BMJ Health Care Inform ; 31(1)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38901863

ABSTRACT

OBJECTIVES: Risk stratification tools that predict healthcare utilisation are extensively integrated into primary care systems worldwide, forming a key component of anticipatory care pathways, where high-risk individuals are targeted by preventative interventions. Existing work broadly focuses on comparing model performance in retrospective cohorts with little attention paid to efficacy in reducing morbidity when deployed in different global contexts. We review the evidence supporting the use of such tools in real-world settings, from retrospective dataset performance to pathway evaluation. METHODS: A systematic search was undertaken to identify studies reporting the development, validation and deployment of models that predict healthcare utilisation in unselected primary care cohorts, comparable to their current real-world application. RESULTS: Among 3897 articles screened, 51 studies were identified evaluating 28 risk prediction models. Half underwent external validation yet only two were validated internationally. No association between validation context and model discrimination was observed. The majority of real-world evaluation studies reported no change, or indeed significant increases, in healthcare utilisation within targeted groups, with only one-third of reports demonstrating some benefit. DISCUSSION: While model discrimination appears satisfactorily robust to application context there is little evidence to suggest that accurate identification of high-risk individuals can be reliably translated to improvements in service delivery or morbidity. CONCLUSIONS: The evidence does not support further integration of care pathways with costly population-level interventions based on risk prediction in unselected primary care cohorts. There is an urgent need to independently appraise the safety, efficacy and cost-effectiveness of risk prediction systems that are already widely deployed within primary care.


Subject(s)
Algorithms, Patient Acceptance of Health Care, Primary Health Care, Humans, Risk Assessment, Patient Acceptance of Health Care/statistics & numerical data
12.
Circ Cardiovasc Qual Outcomes ; : e010359, 2024 Feb 06.
Article in English | MEDLINE | ID: mdl-38318703

ABSTRACT

BACKGROUND: There are multiple risk assessment models (RAMs) for venous thromboembolism prophylaxis, but it is unknown whether they increase appropriate prophylaxis. METHODS: To determine the impact of a RAM embedded in the electronic health record, we conducted a stepped-wedge, hospital-level, cluster-randomized trial from October 1, 2017 to February 28, 2019 at 10 Cleveland Clinic hospitals. We included consecutive general medical patients aged 18 years or older. Patients were excluded if they had a contraindication to prophylaxis, including anticoagulation for another condition, acute bleeding, or comfort-only care. A RAM was embedded in the general admission order set and physicians were encouraged to use it. The decisions to use the RAM and act on the results were reserved to the treating physician. The primary outcome was the percentage of patients receiving appropriate prophylaxis (high-risk patients with pharmacological thromboprophylaxis plus low-risk patients without prophylaxis) within 48 hours of hospitalization. Secondary outcomes included total patients receiving prophylaxis, venous thromboembolism among high-risk patients at 14 and 45 days, major bleeding, heparin-induced thrombocytopenia, and length of stay. Mixed-effects models were used to analyze the study outcomes. RESULTS: A total of 26 506 patients (mean age, 61; 52% female; 73% White) were analyzed, including 11 134 before and 15 406 after implementation of the RAM. After implementation, the RAM was used for 24% of patients, and the percentage of patients receiving appropriate prophylaxis increased from 43.1% to 48.8% (adjusted odds ratio, 1.11 [1.00-1.23]), while overall prophylaxis use decreased from 73.5% to 65.2% (adjusted odds ratio, 0.87 [0.78-0.97]). Rates of venous thromboembolism among high-risk patients (adjusted odds ratio, 0.72 [0.38-1.36]), rates of bleeding and heparin-induced thrombocytopenia (adjusted odds ratio, 0.19 [0.02-1.47]), and length of stay were unchanged. CONCLUSIONS: Implementation of a RAM for venous thromboembolism increased appropriate prophylaxis use, but the RAM was used for a minority of patients. REGISTRATION: URL: https://www.clinicaltrials.gov/study/NCT03243708?term=nct03243708&rank=1; Unique identifier: NCT03243708.
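
The trial reports adjusted odds ratios from mixed-effects models. The sketch below computes only the corresponding unadjusted odds ratio and CI for appropriate prophylaxis before versus after implementation, from the percentages and cohort sizes given in the abstract; it ignores clustering and secular trends, so it is a simplification that will not reproduce the adjusted estimates.

```python
import numpy as np
from statsmodels.stats.contingency_tables import Table2x2

# Approximate 2x2 counts from the abstract: 43.1% of 11,134 pre vs 48.8% of 15,406 post
pre_yes, pre_n = round(0.431 * 11134), 11134
post_yes, post_n = round(0.488 * 15406), 15406

table = np.array([[post_yes, post_n - post_yes],   # after implementation: appropriate / not
                  [pre_yes,  pre_n - pre_yes]])    # before implementation: appropriate / not
t22 = Table2x2(table)
lo, hi = t22.oddsratio_confint()
print(f"Unadjusted OR {t22.oddsratio:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```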

13.
Br J Clin Pharmacol ; 90(4): 1152-1161, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38294057

ABSTRACT

AIMS: We aim to examine and understand the work processes of antimicrobial stewardship (AMS) teams across 2 hospitals that use the same digital intervention, and to identify the barriers and enablers to effective AMS in each setting. METHODS: Employing a contextual inquiry approach informed by the Systems Engineering Initiative for Patient Safety (SEIPS) model, observations and semistructured interviews were conducted with AMS team members (n = 15) in 2 Australian hospitals. Qualitative data analysis was conducted, mapping themes to the SEIPS framework. RESULTS: Both hospitals utilized similar systems, however, they displayed variations in AMS processes, particularly in postprescription review, interdepartmental AMS meetings and the utilization of digital tools. An antimicrobial dashboard was available at both hospitals but was utilized more at the hospital where the AMS team members were involved in the dashboard's development, and there were user champions. At the hospital where the dashboard was utilized less, participants were unaware of key features, and interoperability issues were observed. Establishing strong relationships between the AMS team and prescribers emerged as key to effective AMS at both hospitals. However, organizational and cultural differences were found, with 1 hospital reporting insufficient support from executive leadership, increased prescriber autonomy and resource constraints. CONCLUSION: Organizational and cultural elements, such as executive support, resource allocation and interdepartmental relationships, played a crucial role in achieving AMS goals. System interoperability and user champions further promoted the adoption of digital tools, potentially improving AMS outcomes through increased user engagement and acceptance.


Subject(s)
Anti-Infective Agents, Antimicrobial Stewardship, Humans, Australia, Hospitals, Qualitative Research
14.
Einstein (São Paulo, Online) ; 22: eAO0328, 2024. tab, graf
Article in English | LILACS-Express | LILACS | ID: biblio-1534330

ABSTRACT

Objective: To develop and validate predictive models to estimate the number of COVID-19 patients hospitalized in the intensive care units and general wards of a private not-for-profit hospital in São Paulo, Brazil. Methods: Two main models were developed. The first model calculated hospital occupation as the difference between predicted COVID-19 patient admissions, transfers between departments, and discharges, estimating admissions based on their weekly moving averages, segmented by general wards and intensive care units. Patient discharge predictions were based on a length of stay predictive model, assessing the clinical characteristics of patients hospitalized with COVID-19, including age group and usage of mechanical ventilation devices. The second model estimated hospital occupation based on the correlation with the number of telemedicine visits by patients diagnosed with COVID-19, utilizing correlational analysis to define the lag that maximized the correlation between the studied series. Both models were monitored for 365 days, from May 20th, 2021, to May 20th, 2022. Results: The first model predicted the number of hospitalized patients by department within an interval of up to 14 days. The second model estimated the total number of hospitalized patients for the following 8 days, considering calls attended by Hospital Israelita Albert Einstein's telemedicine department. Considering the average daily predicted values for the intensive care unit and general ward across a forecast horizon of 8 days, as limited by the second model, the first and second models obtained R² values of 0.900 and 0.996, respectively, and mean absolute errors of 8.885 and 2.524 beds, respectively. The performances of both models were monitored using the mean error, mean absolute error, and root mean squared error as a function of the forecast horizon in days. Conclusion: The model based on telemedicine use was the most accurate in the current analysis and was used to estimate COVID-19 hospital occupancy 8 days in advance, validating predictions of this nature in similar clinical contexts. The results encourage the expansion of this method to other pathologies, aiming to guarantee the standards of hospital care and conscious consumption of resources.
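
The second model rests on choosing the lag that maximises the correlation between telemedicine visits and hospital occupancy, and both models were monitored with mean absolute error and root mean squared error. The sketch below illustrates that lag search and error monitoring with pandas on synthetic daily series; the series, the naive scaling forecast and the lag range are assumptions for illustration, not the hospital's models or data.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
days = pd.date_range("2021-05-20", periods=365, freq="D")
telemedicine = pd.Series(100 + 30 * np.sin(np.arange(365) / 20) + rng.normal(0, 5, 365), index=days)
occupancy = telemedicine.shift(8).bfill() * 0.6 + rng.normal(0, 3, 365)   # occupancy trails visits by ~8 days

# Choose the lag (in days) that maximises the correlation between the two series
correlations = {lag: occupancy.corr(telemedicine.shift(lag)) for lag in range(1, 15)}
best_lag = max(correlations, key=correlations.get)

# Naive occupancy forecast from the lagged telemedicine series, monitored with MAE and RMSE
forecast = telemedicine.shift(best_lag) * (occupancy.mean() / telemedicine.mean())
errors = (occupancy - forecast).dropna()
print(f"best lag = {best_lag} days, MAE = {errors.abs().mean():.1f} beds, "
      f"RMSE = {np.sqrt((errors ** 2).mean()):.1f} beds")
```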

16.
Mult Scler Relat Disord ; 80: 105092, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37931489

ABSTRACT

BACKGROUND: Disease modifying therapies (DMTs) offer opportunities to improve the course of multiple sclerosis (MS), but decisions about treatment are difficult. People with multiple sclerosis (pwMS) want more involvement in decisions about DMTs, but new approaches are needed to support shared decision-making (SDM) because of the number of treatment options and the range of outcomes affected by treatment. We designed a patient-centered tool, MS-SUPPORT, to facilitate SDM for pwMS. We sought to evaluate the feasibility and impact of MS-SUPPORT on decisions about disease modifying treatments (DMTs), SDM processes, and quality-of-life. METHODS: This multisite randomized controlled trial compared the SDM intervention (MS-SUPPORT) to control (usual care) over a 12-month period. English-speaking adults with relapsing MS were eligible if they had an upcoming MS appointment and an email address. To evaluate clinician perspectives, participants' MS clinicians were invited to participate. Patients were referred between November 11, 2019 and October 23, 2020 by their MS clinician or a patient advocacy organization (the Multiple Sclerosis Association of America). MS-SUPPORT is an online, interactive, evidence-based decision aid that was co-created with pwMS. It clarifies patient treatment goals and values and provides tailored information about MS, DMTs, and adherence. Viewed by patients before their clinic appointment, MS-SUPPORT generates a personalized summary of the patient's treatment goals and preferences, adherence, DMT use, and clinical situation to share with their MS clinician. Outcomes (DMT utilization, adherence, quality-of-life, and SDM) were assessed at enrollment, post-MS-SUPPORT, post-appointment, and quarterly for 1 year. RESULTS: Participants included 501 adults with MS from across the USA (84.6% female, 83% white) and 34 of their MS clinicians (47% neurologists, 41% Nurse Practitioners, 12% Physician Assistants). Among the 203 patients who completed MS-SUPPORT, most (88.2%) reported they would recommend it to others and that it helped them talk to their doctor (85.2%), understand their options (82.3%) and the importance of taking DMTs as prescribed (82.3%). Among non-users of DMTs at baseline, the probability ratio of current DMT use consistently trended higher over one-year follow-up in the MS-SUPPORT group (1.30 [0.86-1.96]), as did the cumulative probability of starting a DMT within 6-months, with shorter time-to-start (46 vs 90 days, p=0.24). Among the 222 responses from 34 participating clinicians, more clinicians in the MS-SUPPORT group (vs control) trended towards recommending their patient start a DMT (9 of 108 (8%) vs 5 of 109 (5%), respectively, p=0.26). Adherence (no missed doses) to daily-dosed DMTs was higher in the MS-SUPPORT group (81.25% vs 56.41%, p=.026). Fewer patients forgot their doses (p=.046). The MS-SUPPORT group (vs control) reported 1.7 fewer days/month of poor mental health (p=0.02). CONCLUSIONS: MS-SUPPORT was strongly endorsed by patients and is feasible to use in clinical settings. MS-SUPPORT increased the short-term probability of taking and adhering to a DMT, and improved long-term mental health. Study limitations include selection bias, response bias, social desirability bias, and recall bias. Exploring approaches to reinforcement and monitoring its implementation in real-world settings should provide further insights into the value and utility of this new SDM tool.


Subject(s)
Multiple Sclerosis, Physicians, Adult, Humans, Female, Male, Multiple Sclerosis/drug therapy, Shared Decision Making, Quality of Life
17.
BMJ Health Care Inform ; 30(1)2023 Sep.
Article in English | MEDLINE | ID: mdl-37730251

ABSTRACT

OBJECTIVE: The study aimed to measure the validity of International Classification of Diseases, 10th Edition (ICD-10) code F44.5 for functional seizure disorder (FSD) in the Veterans Affairs Connecticut Healthcare System electronic health record (VA EHR). METHODS: The study used an informatics search tool, a natural language processing algorithm and a chart review to validate FSD coding. RESULTS: The positive predictive value (PPV) for code F44.5 was calculated to be 44%. DISCUSSION: ICD-10 introduced a specific code for FSD to improve coding validity. However, results revealed a meager (44%) PPV for code F44.5. Evaluation of the low diagnostic precision of FSD identified inconsistencies in the ICD-10 and VA EHR systems. CONCLUSION: Information system improvements may increase the precision of diagnostic coding by clinicians. Specifically, the EHR problem list should include commonly used diagnostic codes and an appropriately curated ICD-10 term list for 'seizure disorder,' and a single ICD code for FSD should be classified under neurology and psychiatry.
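
The validation turns on the positive predictive value of code F44.5. The sketch below is a worked PPV calculation from chart-review counts; the counts are illustrative numbers consistent with the reported 44%, since the abstract does not give the raw figures.

```python
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = TP / (TP + FP): the share of F44.5-coded charts confirmed as FSD on review."""
    return true_positives / (true_positives + false_positives)

# Illustrative counts only: e.g. 44 of 100 charts coded F44.5 confirmed as FSD on chart review
print(f"PPV = {positive_predictive_value(44, 56):.0%}")
```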


Subject(s)
Epilepsy, International Classification of Diseases, Humans, Algorithms, Electronic Health Records, Epilepsy/diagnosis, Natural Language Processing
18.
Trials ; 24(1): 577, 2023 Sep 09.
Article in English | MEDLINE | ID: mdl-37684688

ABSTRACT

INTRODUCTION: Multidisciplinary team meetings (MDMs), also known as tumor conferences, are a cornerstone of cancer treatments. However, barriers such as incomplete patient information or logistical challenges can postpone tumor board decisions and delay patient treatment, potentially affecting clinical outcomes. Therapeutic Assistance and Decision algorithms for hepatobiliary tumor Boards (ADBoard) aims to reduce this delay by providing automated data extraction and high-quality, evidence-based treatment recommendations. METHODS AND ANALYSIS: With the help of natural language processing, relevant patient information will be automatically extracted from electronic medical records and used to complete a classic tumor conference protocol. A machine learning model is trained on retrospective MDM data and clinical guidelines to recommend treatment options for patients in our inclusion criteria. Study participants will be randomized to either MDM with ADBoard (Arm A: MDM-AB) or conventional MDM (Arm B: MDM-C). The concordance of recommendations of both groups will be compared using interrater reliability. We hypothesize that the therapy recommendations of ADBoard would be in high agreement with those of the MDM-C, with a Cohen's kappa value of ≥ 0.75. Furthermore, our secondary hypotheses state that the completeness of patient information presented in MDM is higher when using ADBoard than without, and the explainability of tumor board protocols in MDM-AB is higher compared to MDM-C as measured by the System Causability Scale. DISCUSSION: The implementation of ADBoard aims to improve the quality and completeness of the data required for MDM decision-making and to propose therapeutic recommendations that consider current medical evidence and guidelines in a transparent and reproducible manner. ETHICS AND DISSEMINATION: The project was approved by the Ethics Committee of the Charité - Universitätsmedizin Berlin. REGISTRATION DETAILS: The study was registered on ClinicalTrials.gov (trial identifying number: NCT05681949; https://clinicaltrials.gov/study/NCT05681949 ) on 12 January 2023.
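
Concordance between ADBoard-supported and conventional MDM recommendations will be judged against a Cohen's kappa threshold of ≥ 0.75. The sketch below computes that agreement statistic with scikit-learn on hypothetical per-patient recommendation labels, not trial data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient treatment recommendations from the two tumour boards
mdm_ab = ["resection", "ablation", "systemic", "resection", "transplant", "systemic"]
mdm_c  = ["resection", "ablation", "systemic", "ablation",  "transplant", "systemic"]

kappa = cohen_kappa_score(mdm_ab, mdm_c)
print(f"Cohen's kappa = {kappa:.2f}  (prespecified agreement threshold: >= 0.75)")
```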


Subject(s)
Liver Neoplasms, Humans, Reproducibility of Results, Retrospective Studies, Liver Neoplasms/diagnosis, Liver Neoplasms/therapy, Algorithms, Patient Care Team, Randomized Controlled Trials as Topic
19.
BMJ Health Care Inform ; 30(1)2023 Sep.
Article in English | MEDLINE | ID: mdl-37751942

ABSTRACT

BACKGROUND: Treat-to-target (T2T) is a therapeutic strategy currently being studied for its application in systemic lupus erythematosus (SLE). Patients and rheumatologists have little support in making the best treatment decision in the context of a T2T strategy; thus, the use of information technology for systematically processing data and supporting information and knowledge may improve routine decision-making practices, helping to deliver value-based care. OBJECTIVE: To design and develop an online clinical decision support system (CDSS) tool, "SLE-T2T", and test its usability for the implementation of a T2T strategy in the management of patients with SLE. METHODS: A prototype CDSS was conceived as a web-based application with the task of generating appropriate treatment advice based on entered patient data. Once developed, a System Usability Scale (SUS) questionnaire was administered to test whether the eHealth tool was user-friendly, comprehensible, easy to deliver and workflow-oriented. Data from the participants' comments were synthesised, and the elements in need of improvement were identified. RESULTS: The beta version of the web-based system was developed based on the interim usability and acceptance evaluation. Seven participants completed the SUS survey. The median SUS score of SLE-T2T was 79 (scale 0 to 100), categorising the application as 'good' and indicating the need for minor improvements to the design. CONCLUSIONS: SLE-T2T is the first eHealth tool to be designed for the management of SLE patients in a T2T context. The SUS score and unstructured feedback showed high acceptance of this digital instrument for its future use in a clinical trial.
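
Usability was summarised with the System Usability Scale. The sketch below applies the standard SUS scoring rule (odd items contribute their rating minus 1, even items 5 minus their rating, summed and multiplied by 2.5) to one respondent's ten 1-5 ratings; the responses are illustrative, not the study's data.

```python
def sus_score(responses: list[int]) -> float:
    """Standard SUS scoring for ten items rated 1-5, in questionnaire order."""
    if len(responses) != 10 or not all(1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses on a 1-5 scale")
    contributions = [(r - 1) if i % 2 == 0 else (5 - r)   # items 1,3,5,7,9 vs items 2,4,6,8,10
                     for i, r in enumerate(responses)]
    return sum(contributions) * 2.5                       # rescales 0-40 raw points to 0-100

print(sus_score([4, 2, 4, 1, 5, 2, 4, 2, 4, 2]))          # one illustrative respondent
```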


Subject(s)
Clinical Decision Support Systems, Systemic Lupus Erythematosus, Mobile Applications, Telemedicine, Humans, Systemic Lupus Erythematosus/drug therapy, Internet
20.
BMJ Health Care Inform ; 30(1)2023 Sep.
Article in English | MEDLINE | ID: mdl-37709302

ABSTRACT

OBJECTIVE: To identify the risk of acute respiratory distress syndrome (ARDS) and in-hospital mortality using a long short-term memory (LSTM) framework in a mechanically ventilated (MV) non-COVID-19 cohort and a COVID-19 cohort. METHODS: We included MV ICU patients between 2017 and 2018 and reviewed patient records for ARDS and death. Using active learning, we enriched this cohort with MV patients from 2016 to 2019 (MV non-COVID-19, n=3905). We collected a second validation cohort of hospitalised patients with COVID-19 in 2020 (COVID-19+, n=5672). We trained an LSTM model using 132 structured features on the MV non-COVID-19 training cohort and validated it on the MV non-COVID-19 validation and COVID-19 cohorts. RESULTS: Applying the LSTM model (model score 0.9) to the MV non-COVID-19 validation cohort yielded a sensitivity of 86% and specificity of 57%. The model identified the risk of ARDS 10 hours before ARDS onset and 9.4 days before death. The sensitivity (70%) and specificity (84%) of the model in the COVID-19 cohort were lower than in the MV non-COVID-19 cohort. For the COVID-19+ cohort and MV COVID-19+ patients, the model identified the risk of in-hospital mortality 2.4 days and 1.54 days before death, respectively. DISCUSSION: Our LSTM algorithm accurately identified the risk of ARDS or death in MV non-COVID-19 and COVID-19+ patients in a timely manner. By alerting clinicians to the risk of ARDS or death, we can improve the implementation of evidence-based ARDS management and facilitate goals-of-care discussions in high-risk patients. CONCLUSION: Using the LSTM algorithm in hospitalised patients identifies the risk of ARDS or death.
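
The abstract describes an LSTM trained on 132 structured features to flag ARDS and mortality risk. The sketch below is a minimal Keras rendering of such an architecture, assuming hourly windows, a 64-unit LSTM layer and a single sigmoid risk output; the window length, layer sizes and training setup are illustrative assumptions, not the study's configuration.

```python
import numpy as np
import tensorflow as tf

TIMESTEPS, N_FEATURES = 24, 132   # e.g. 24 hourly observations of the 132 structured features

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(TIMESTEPS, N_FEATURES)),
    tf.keras.layers.Masking(mask_value=0.0),           # ignore zero-padded time steps
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),    # predicted risk of ARDS / in-hospital death
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC(name="auroc")])

# Smoke test on random data shaped like one batch of patient windows
x = np.random.rand(8, TIMESTEPS, N_FEATURES).astype("float32")
y = np.random.randint(0, 2, size=(8, 1))
model.fit(x, y, epochs=1, verbose=0)
print("predicted risks:", model.predict(x, verbose=0).ravel().round(2))
```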


Subject(s)
COVID-19, Respiratory Distress Syndrome, Humans, Hospital Mortality, Short-Term Memory, Algorithms