Results 1 - 20 of 2,451
1.
JMIR Public Health Surveill ; 10: e49127, 2024 Jul 03.
Article in English | MEDLINE | ID: mdl-38959048

ABSTRACT

BACKGROUND: Electronic health records (EHRs) play an increasingly important role in delivering HIV care in low- and middle-income countries. The data collected are used for direct clinical care, quality improvement, program monitoring, public health interventions, and research. Despite widespread EHR use for HIV care in African countries, challenges remain, especially in collecting high-quality data. OBJECTIVE: We aimed to assess data completeness, accuracy, and timeliness compared to paper-based records, and factors influencing data quality in a large-scale EHR deployment in Rwanda. METHODS: We randomly selected 50 health facilities (HFs) using OpenMRS, an EHR system that supports HIV care in Rwanda, and performed a data quality evaluation. All HFs were part of a larger randomized controlled trial, with 25 HFs receiving an enhanced EHR with clinical decision support systems. Trained data collectors visited the 50 HFs to collect 28 variables from the paper charts and the EHR system using the Open Data Kit app. We measured data completeness, timeliness, and the degree of matching of the data in paper and EHR records, and calculated concordance scores. Factors potentially affecting data quality were drawn from a previous survey of users in the 50 HFs. RESULTS: We randomly selected 3467 patient records, reviewing both paper and EHR copies (194,152 total data items). Data completeness exceeded the 85% threshold for all data elements except viral load (VL) results, second-line, and third-line drug regimens. Matching scores for data values were close to or above the 85% threshold, except for dates, particularly for drug pickups and VL. The mean data concordance was 10.2 (SD 1.28) for 15 (68%) variables. HF and user factors (eg, years of EHR use, technology experience, EHR availability and uptime, and intervention status) were tested for correlation with data quality measures. EHR system availability and uptime were positively correlated with concordance, whereas users' experience with technology was negatively correlated with concordance. The alerts for missing VL results implemented at 11 intervention HFs showed clear evidence of improving the timeliness and completeness of VL results, whose matching in the EHRs and paper records was initially low (11.9%-26.7%; P<.001). Similar effects were seen on the completeness of the recording of medication pickups (18.7%-32.6%; P<.001). CONCLUSIONS: The EHR records in the 50 HFs generally had high levels of completeness except for VL results. Matching results were close to or above the 85% threshold for nondate variables. Higher EHR stability and uptime, and alerts for entering VL results, both strongly improved data quality. Most data were considered fit for purpose, but more regular data quality assessments, training, and technical improvements in EHR forms, data reports, and alerts are recommended. The application of the quality improvement techniques described in this study should benefit a wide range of HFs and data uses for clinical care, public health, and disease surveillance.
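As an illustration of the kind of completeness and paper-versus-EHR matching calculation this abstract reports, the sketch below computes per-variable completeness and matching rates against the 85% threshold. It is a minimal sketch, not the study's code; the column names, toy data, and threshold handling are assumptions.

```python
# Minimal sketch (not the study's code): per-variable EHR completeness and
# paper-vs-EHR matching rates, checked against an 85% threshold.
import pandas as pd

def quality_summary(paper: pd.DataFrame, ehr: pd.DataFrame, variables: list) -> pd.DataFrame:
    """Both frames share an index (one row per patient) and the same variable columns."""
    rows = []
    for var in variables:
        p, e = paper[var], ehr[var]
        completeness = e.notna().mean()            # share of EHR entries that are filled in
        both = p.notna() & e.notna()
        matching = (p[both] == e[both]).mean()     # agreement where both sources recorded a value
        rows.append({"variable": var,
                     "ehr_completeness_pct": round(100 * completeness, 1),
                     "matching_pct": round(100 * matching, 1),
                     "meets_85pct": matching >= 0.85})
    return pd.DataFrame(rows)

# Toy example with hypothetical variables
paper_df = pd.DataFrame({"viral_load": [250, None, 1000], "regimen": ["TLD", "TLD", "AZT/3TC/EFV"]})
ehr_df = pd.DataFrame({"viral_load": [250, 400, None], "regimen": ["TLD", "TLD", "AZT/3TC/EFV"]})
print(quality_summary(paper_df, ehr_df, ["viral_load", "regimen"]))
```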


Subject(s)
Data Accuracy , Electronic Health Records , HIV Infections , Health Facilities , Rwanda , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Humans , Cross-Sectional Studies , HIV Infections/drug therapy , Health Facilities/statistics & numerical data , Health Facilities/standards
2.
J Med Internet Res ; 26: e57721, 2024 Jul 24.
Article in English | MEDLINE | ID: mdl-39047282

ABSTRACT

BACKGROUND: Discharge letters are a critical component in the continuity of care between specialists and primary care providers. However, these letters are time-consuming to write, underprioritized in comparison to direct clinical care, and are often tasked to junior doctors. Prior studies assessing the quality of discharge summaries written for inpatient hospital admissions show inadequacies in many domains. Large language models such as GPT have the ability to summarize large volumes of unstructured free text such as electronic medical records and have the potential to automate such tasks, providing time savings and consistency in quality. OBJECTIVE: The aim of this study was to assess the performance of GPT-4 in generating discharge letters written from urology specialist outpatient clinics to primary care providers and to compare their quality against letters written by junior clinicians. METHODS: Fictional electronic records were written by physicians simulating 5 common urology outpatient cases with long-term follow-up. Records comprised simulated consultation notes, referral letters and replies, and relevant discharge summaries from inpatient admissions. GPT-4 was tasked to write discharge letters for these cases with a specified target audience of primary care providers who would be continuing the patient's care. Prompts were written for safety, content, and style. Concurrently, junior clinicians were provided with the same case records and instructional prompts. GPT-4 output was assessed for instances of hallucination. A blinded panel of primary care physicians then evaluated the letters using a standardized questionnaire tool. RESULTS: GPT-4 outperformed human counterparts in information provision (mean 4.32, SD 0.95 vs 3.70, SD 1.27; P=.03) and had no instances of hallucination. There were no statistically significant differences in the mean clarity (4.16, SD 0.95 vs 3.68, SD 1.24; P=.12), collegiality (4.36, SD 1.00 vs 3.84, SD 1.22; P=.05), conciseness (3.60, SD 1.12 vs 3.64, SD 1.27; P=.71), follow-up recommendations (4.16, SD 1.03 vs 3.72, SD 1.13; P=.08), and overall satisfaction (3.96, SD 1.14 vs 3.62, SD 1.34; P=.36) between the letters generated by GPT-4 and humans, respectively. CONCLUSIONS: Discharge letters written by GPT-4 had equivalent quality to those written by junior clinicians, without any hallucinations. This study provides a proof of concept that large language models can be useful and safe tools in clinical documentation.
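For readers who want to reproduce this kind of group comparison of blinded panel ratings, the sketch below runs a Welch two-sample t-test for one questionnaire domain. The abstract does not state which statistical test was used, so the test choice and the toy scores are assumptions.

```python
# Minimal sketch (assumed test choice): comparing panel ratings for LLM-written
# vs clinician-written letters in one questionnaire domain.
from scipy import stats

def compare_domain(llm_scores, human_scores):
    """Return group means and a two-sided p-value (Welch's t-test) for one domain."""
    _, p_value = stats.ttest_ind(llm_scores, human_scores, equal_var=False)
    return {"llm_mean": sum(llm_scores) / len(llm_scores),
            "human_mean": sum(human_scores) / len(human_scores),
            "p_value": round(float(p_value), 3)}

# Toy ratings on the 1-5 scale used by the study's questionnaire
print(compare_domain([5, 4, 4, 5, 4, 5], [4, 3, 4, 3, 5, 4]))
```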


Subject(s)
Patient Discharge , Humans , Patient Discharge/standards , Electronic Health Records/standards , Single-Blind Method , Language
3.
BMC Med Inform Decis Mak ; 24(1): 192, 2024 Jul 09.
Article in English | MEDLINE | ID: mdl-38982465

ABSTRACT

BACKGROUND: As global aging intensifies, the prevalence of ocular fundus diseases continues to rise. In China, the strained doctor-patient ratio poses numerous challenges for the early diagnosis and treatment of ocular fundus diseases. To reduce the high risk of missed or misdiagnosed cases, avoid irreversible visual impairment for patients, and ensure good visual prognosis for patients with ocular fundus diseases, it is particularly important to enhance the growth and diagnostic capabilities of junior doctors. This study aims to leverage the value of electronic medical record data to develop a diagnostic intelligent decision support platform. This platform aims to assist junior doctors in diagnosing ocular fundus diseases quickly and accurately, expedite their professional growth, and prevent delays in patient treatment. An empirical evaluation will assess the platform's effectiveness in enhancing doctors' diagnostic efficiency and accuracy. METHODS: In this study, eight Chinese Named Entity Recognition (NER) models were compared, and the SoftLexicon-Glove-Word2vec model, achieving a high F1 score of 93.02%, was selected as the optimal recognition tool. This model was then used to extract key information from electronic medical records (EMRs) and generate feature variables based on diagnostic rule templates. Subsequently, an XGBoost algorithm was employed to construct an intelligent decision support platform for diagnosing ocular fundus diseases. The effectiveness of the platform in improving diagnostic efficiency and accuracy was evaluated through a controlled experiment comparing experienced and junior doctors. RESULTS: The use of the diagnostic intelligent decision support platform resulted in significant improvements in both diagnostic efficiency and accuracy for both experienced and junior doctors (P < 0.05). Notably, the gap in diagnostic speed and precision between junior doctors and experienced doctors narrowed considerably when the platform was used. Although the platform also provided some benefits to experienced doctors, the improvement was less pronounced compared to junior doctors. CONCLUSION: The diagnostic intelligent decision support platform established in this study, based on the XGBoost algorithm and NER, effectively enhances the diagnostic efficiency and accuracy of junior doctors in ocular fundus diseases. This has significant implications for optimizing clinical diagnosis and treatment.
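A minimal sketch of the second stage described here: rule-template features built from already-extracted entities feed an XGBoost classifier. The rule names, toy records, and labels are hypothetical, and the upstream NER stage is assumed to have run already; this is not the study's implementation.

```python
# Minimal sketch: binary rule-template features from NER output feeding XGBoost.
import numpy as np
import xgboost as xgb

RULES = ["macular_edema_mentioned", "drusen_mentioned", "vision_loss_gt_7_days"]  # hypothetical rules

def featurize(entities: dict) -> list:
    """Map NER output (finding -> presence flag) onto binary rule-template features."""
    return [int(bool(entities.get(rule))) for rule in RULES]

# Toy NER output per EMR and toy diagnosis labels (1 vs 0 for two hypothetical diseases)
extracted_records = [
    {"macular_edema_mentioned": True, "vision_loss_gt_7_days": True},
    {"drusen_mentioned": True},
    {"macular_edema_mentioned": True},
    {"drusen_mentioned": True, "vision_loss_gt_7_days": True},
]
labels = [1, 0, 1, 0]

X = np.array([featurize(rec) for rec in extracted_records])
y = np.array(labels)

model = xgb.XGBClassifier(n_estimators=100, max_depth=3, learning_rate=0.1)
model.fit(X, y)
print(model.predict(X))
```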


Subject(s)
Ophthalmologists , Humans , Clinical Decision-Making , Electronic Health Records/standards , Artificial Intelligence , China , Decision Support Systems, Clinical
4.
Front Public Health ; 12: 1379973, 2024.
Article in English | MEDLINE | ID: mdl-39040857

ABSTRACT

Introduction: This study is part of the U.S. Food and Drug Administration (FDA)'s Biologics Effectiveness and Safety (BEST) initiative, which aims to improve the FDA's postmarket surveillance capabilities by using real-world data (RWD). In the United States, using RWD for postmarket surveillance has been hindered by the inability to exchange clinical data between healthcare providers and public health organizations in an interoperable format. However, the Office of the National Coordinator for Health Information Technology (ONC) has recently enacted regulation requiring all healthcare providers to support seamless access, exchange, and use of electronic health information through the interoperable HL7 Fast Healthcare Interoperability Resources (FHIR) standard. To leverage the recent ONC changes, BEST designed a pilot platform to query and receive the clinical information necessary to analyze suspected adverse events (AEs). This study assessed the feasibility of using the RWD received through the data exchange of FHIR resources to study post-vaccination AE cases by evaluating the data volume, query response time, and data quality. Materials and methods: The study used RWD from 283 post-vaccination AE cases, which were received through the platform. We used descriptive statistics to report results and applied 322 data quality tests based on a data quality framework for EHRs. Results: The volume analysis indicated that, for the median partner, the average number of clinical resources per post-vaccination AE case was 983.9. The query response time analysis indicated that cases could be received by the platform at a median of 3 min and 30 s. The quality analysis indicated that most of the data elements and conformance requirements useful for postmarket surveillance were met. Discussion: This study describes the platform's data volume, data query response time, and data quality results from the queried post-vaccination adverse event cases and identifies updates to current standards to close data quality gaps.
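The sketch below shows how such a platform might retrieve the FHIR resources for one AE case through the standard FHIR REST search API. The base URL, the resource selection, and the case identifier are placeholders, not the BEST platform's actual endpoints or queries.

```python
# Minimal sketch: pulling the FHIR resources an AE review might need via standard
# FHIR REST searches. Endpoint and identifiers are hypothetical placeholders.
import requests

BASE = "https://example-partner.org/fhir"          # hypothetical partner endpoint
HEADERS = {"Accept": "application/fhir+json"}

def fetch_case_resources(patient_id: str) -> dict:
    """Return one FHIR Bundle per resource type for a given patient."""
    bundles = {}
    searches = [("Immunization", {"patient": patient_id}),
                ("Condition", {"patient": patient_id}),
                ("Observation", {"patient": patient_id, "category": "laboratory"})]
    for resource, params in searches:
        r = requests.get(f"{BASE}/{resource}", params=params, headers=HEADERS, timeout=30)
        r.raise_for_status()
        bundles[resource] = r.json()               # a FHIR searchset Bundle
    return bundles

# Example (placeholder id): resources = fetch_case_resources("pat-123")
```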


Subject(s)
Data Accuracy , United States Food and Drug Administration , Humans , United States , Pilot Projects , Product Surveillance, Postmarketing/standards , Product Surveillance, Postmarketing/statistics & numerical data , Adverse Drug Reaction Reporting Systems/standards , Vaccination/adverse effects , Health Information Exchange/standards , Male , Female , Adult , Time Factors , Electronic Health Records/standards , Electronic Health Records/statistics & numerical data , Middle Aged , Adolescent
5.
BMC Prim Care ; 25(1): 262, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39026167

ABSTRACT

BACKGROUND: Electronic health records (EHRs) can accelerate documentation and may enhance the detail of notes, or they can complicate documentation and introduce errors. Comprehensive assessment of documentation quality requires comparing documentation to what transpires during the clinical encounter itself. We assessed outpatient primary care notes and corresponding recorded encounters to determine accuracy, thoroughness, and several additional key measures of documentation quality. METHODS: Patients and primary care clinicians across five midwestern primary care clinics of the US Department of Veterans Affairs were recruited into a prospective observational study. Clinical encounters were video-recorded and transcribed verbatim. Using the Physician Documentation Quality Instrument (PDQI-9) supplemented with additional measures, reviewers scored the quality of the documentation by comparing transcripts to the corresponding encounter notes. PDQI-9 items were scored from 1 to 5, with higher scores indicating higher quality. RESULTS: Encounters (N = 49) among 11 clinicians were analyzed. Most issues that patients initiated in discussion were omitted from notes, and nearly half of notes referred to information or observations that could not be verified. Four notes lacked concluding assessments and plans; nine lacked information about when patients should return. Except for thoroughness, the PDQI-9 items assessed achieved quality scores exceeding 4 of 5 points. CONCLUSIONS: Among the outpatient primary care electronic records examined, most issues that patients initiated in discussion were absent from notes, and nearly half of notes referred to information or observations absent from transcripts. EHRs may contribute to certain kinds of errors. Approaches to improving documentation should consider the roles of the EHR, patient, and clinician together.


Subject(s)
Documentation , Electronic Health Records , Primary Health Care , United States Department of Veterans Affairs , Humans , Primary Health Care/standards , United States Department of Veterans Affairs/organization & administration , United States , Documentation/standards , Electronic Health Records/standards , Prospective Studies , Ambulatory Care/standards , Female , Male , Middle Aged , Outpatients , Aged
6.
BMC Med Inform Decis Mak ; 24(1): 155, 2024 Jun 05.
Article in English | MEDLINE | ID: mdl-38840250

ABSTRACT

BACKGROUND: Diagnoses can be recorded in electronic medical records (EMRs) as free text or as a term with a diagnosis code. Researchers, governments, and agencies, including organisations that deliver incentivised primary care quality improvement programs, frequently utilise coded data only and often ignore free-text entries. Diagnosis data are reported for population healthcare planning, including resource allocation for patient care. This study sought to determine whether diagnosis counts based on coded diagnosis data alone led to under-reporting of disease prevalence and, if so, to what extent for six common or important chronic diseases. METHODS: This cross-sectional data quality study used de-identified EMR data from 84 general practices in Victoria, Australia. Data represented 456,125 patients who attended one of the general practices three or more times in the two years between January 2021 and December 2022. We reviewed the percentage and proportional difference between patient counts of coded diagnosis entries alone and patient counts of clinically validated free-text entries for asthma, chronic kidney disease, chronic obstructive pulmonary disease, dementia, type 1 diabetes and type 2 diabetes. RESULTS: Undercounts were evident in all six diagnoses when using coded diagnoses alone (2.57%-36.72% undercount); of these, five were statistically significant. Overall, 26.4% of all patient diagnoses had not been coded. There was high variation between practices in the recording of coded diagnoses, but coding for type 2 diabetes was well captured by most practices. CONCLUSION: In Australia, clinical decision support and the reporting of aggregated patient diagnosis data to government that rely on coded diagnoses alone can lead to significant underreporting of diagnoses compared to counts that also incorporate clinically validated free-text diagnoses. Diagnosis underreporting can impact on population health, healthcare planning, resource allocation, and patient care. We propose the use of phenotypes derived from clinically validated text entries to enhance the accuracy of diagnosis and disease reporting. There are existing technologies and collaborations from which to build trusted mechanisms to provide greater reliability of general practice EMR data used for secondary purposes.
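A minimal sketch of the undercount comparison described in this abstract, expressed with pandas; the column names and toy data are illustrative only, not the study's dataset.

```python
# Minimal sketch: how much prevalence is missed when only coded diagnoses are counted.
import pandas as pd

records = pd.DataFrame({
    "patient_id": [1, 2, 3, 4, 5, 6],
    "coded_asthma": [True, False, False, True, False, False],
    "freetext_asthma": [True, True, False, True, True, False],  # clinically validated text mentions
})

coded_only = records["coded_asthma"].sum()
either_source = (records["coded_asthma"] | records["freetext_asthma"]).sum()
undercount_pct = 100 * (either_source - coded_only) / either_source
print(f"Coded-only counting misses {undercount_pct:.1f}% of asthma patients")
```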


Subject(s)
Electronic Health Records , General Practice , Humans , Cross-Sectional Studies , General Practice/statistics & numerical data , Electronic Health Records/standards , Victoria , Chronic Disease , Clinical Coding/standards , Data Accuracy , Population Health/statistics & numerical data , Male , Female , Middle Aged , Adult , Australia , Aged , Diabetes Mellitus, Type 2/diagnosis , Diabetes Mellitus, Type 2/epidemiology
7.
Pediatrics ; 154(1)2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38864111

ABSTRACT

OBJECTIVES: In 2005, the American Academy of Pediatrics founded the Partnership for Policy Implementation (PPI). The PPI has collaborated with authors to improve the quality of clinical guidelines, technical reports, and policies that standardize care delivery, improve care quality and patient outcomes, and reduce variation and costs. METHODS: In this article, we describe how PPI-trained informaticians apply a variety of tools and techniques to these guidance documents, eliminating ambiguity in clinical recommendations and allowing guideline recommendations to be implemented more easily by practicing clinicians and electronic health record (EHR) developers. RESULTS: Since its inception, the PPI has participated in the development of 45 published and 27 in-progress clinical practice guidelines, policy statements, technical and clinical reports, and other projects endorsed by the American Academy of Pediatrics. The partnership has trained informaticians to apply a variety of tools and techniques that eliminate ambiguity and lack of decidability so that recommendations can be implemented by practicing clinicians and EHR developers. CONCLUSIONS: With the increasing use of EHRs in pediatrics, the need for medical societies to improve the clarity, decidability, and actionability of their guidelines has become more important than ever.


Subject(s)
Pediatrics , Practice Guidelines as Topic , Humans , Pediatrics/standards , Pediatrics/organization & administration , United States , Societies, Medical , Electronic Health Records/standards , Health Policy
8.
Value Health ; 27(6): 692-701, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38871437

ABSTRACT

This ISPOR Good Practices report provides a framework for assessing the suitability of electronic health records data for use in health technology assessments (HTAs). Although electronic health record (EHR) data can fill evidence gaps and improve decisions, several important limitations can affect its validity and relevance. The ISPOR framework includes 2 components: data delineation and data fitness for purpose. Data delineation provides a complete understanding of the data and an assessment of its trustworthiness by describing (1) data characteristics; (2) data provenance; and (3) data governance. Fitness for purpose comprises (1) data reliability items, ie, how accurate and complete the estimates are for answering the question at hand and (2) data relevance items, which assess how well the data are suited to answer the particular question from a decision-making perspective. The report includes a checklist specific to EHR data reporting: the ISPOR SUITABILITY Checklist. It also provides recommendations for HTA agencies and policy makers to improve the use of EHR-derived data over time. The report concludes with a discussion of limitations and future directions in the field, including the potential impact from the substantial and rapid advances in the diffusion and capabilities of large language models and generative artificial intelligence. The report's immediate audiences are HTA evidence developers and users. We anticipate that it will also be useful to other stakeholders, particularly regulators and manufacturers, in the future.


Subject(s)
Checklist , Electronic Health Records , Technology Assessment, Biomedical , Electronic Health Records/standards , Humans , Reproducibility of Results , Advisory Committees , Decision Making
9.
BMJ Open Qual ; 13(2)2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38901878

ABSTRACT

BACKGROUND: Evaluation of quality of care in oncology is key in ensuring patients receive adequate treatment. The American Society of Clinical Oncology's (ASCO) Quality Oncology Practice Initiative (QOPI) Certification Program (QCP) is an international initiative that evaluates quality of care in outpatient oncology practices. METHODS: We retrospectively reviewed free-text electronic medical records from patients with breast cancer (BR), colorectal cancer (CRC) or non-small cell lung cancer (NSCLC). In a baseline measurement, high scores were obtained for the nine disease-specific measures of QCP Track (the 2021 version had 26 measures); thus, they were not further analysed. We evaluated two sets of measures: the remaining 17 QCP Track measures, and these plus another 17 measures selected by us (combined measures). Review of data from 58 patients (26 BR; 18 CRC; 14 NSCLC) seen in June 2021 revealed low overall quality scores (OQS), below ASCO's 75% threshold, for QCP Track measures (46%) and combined measures (58%). We developed a plan to improve OQS and monitored the impact of the intervention by abstracting data at subsequent time points. RESULTS: We evaluated potential causes for the low OQS and developed a plan to improve it over time by educating oncologists at our hospital on the importance of improving collection of measures and highlighting the goal of applying for QOPI certification. We conducted seven plan-do-study-act cycles and evaluated the scores at seven subsequent data abstraction time points from November 2021 to December 2022, reviewing 404 patients (199 BR; 114 CRC; 91 NSCLC). All measures improved. Four months after the intervention, OQS surpassed the quality threshold and was maintained for 10 months until the end of the study (range, 78-87% for QCP Track measures; 78-86% for combined measures). CONCLUSIONS: We developed an easy-to-implement intervention that achieved a fast improvement in OQS, enabling our Medical Oncology Department to aim for QOPI certification.
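As a simplified illustration of the overall quality score (OQS) used here, the sketch below scores a set of abstracted charts as the share of applicable measure checks that are met and compares it with the 75% threshold. The measure names and chart data are hypothetical, and the real QOPI scoring rules are more detailed.

```python
# Minimal sketch: overall quality score = met checks / applicable checks across charts.

def overall_quality_score(chart_results: list) -> float:
    """chart_results: one dict per abstracted chart, measure -> True/False/None (None = not applicable)."""
    met = applicable = 0
    for chart in chart_results:
        for result in chart.values():
            if result is None:
                continue
            applicable += 1
            met += int(result)
    return 100 * met / applicable

charts = [
    {"pain_assessed": True, "staging_documented": True, "smoking_status": None},
    {"pain_assessed": False, "staging_documented": True, "smoking_status": True},
]
oqs = overall_quality_score(charts)
print(f"OQS = {oqs:.0f}% (75% threshold {'met' if oqs >= 75 else 'not met'})")
```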


Subject(s)
Electronic Health Records , Quality Improvement , Humans , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Retrospective Studies , Female , Spain , Male , Middle Aged , Quality of Health Care/standards , Quality of Health Care/statistics & numerical data , Aged , Data Collection/methods , Data Collection/standards , Medical Oncology/standards , Medical Oncology/methods , Medical Oncology/statistics & numerical data , Colorectal Neoplasms/therapy , Adult , Breast Neoplasms/therapy , Carcinoma, Non-Small-Cell Lung/therapy
10.
BMC Med Res Methodol ; 24(1): 136, 2024 Jun 22.
Article in English | MEDLINE | ID: mdl-38909216

ABSTRACT

BACKGROUND: Generating synthetic patient data is crucial for medical research, but common approaches build on black-box models which do not allow for expert verification or intervention. We propose a highly available method which enables synthetic data generation from real patient records in a privacy-preserving and compliant fashion, is interpretable, and allows for expert intervention. METHODS: Our approach ties together two established tools in medical informatics, namely OMOP as a data standard for electronic health records and Synthea as a data synthesis method. For this study, data pipelines were built which extract data from OMOP, convert them into time series format, learn temporal rules using 2 statistical algorithms (Markov chain, TARM) and 3 causal discovery algorithms (DYNOTEARS, J-PCMCI+, LiNGAM), and map the outputs into Synthea graphs. The graphs are evaluated quantitatively by their individual and relative complexity and qualitatively by medical experts. RESULTS: The algorithms were found to learn qualitatively and quantitatively different graph representations. Whereas the Markov chain results in extremely large graphs, TARM, DYNOTEARS, and J-PCMCI+ were found to reduce the data dimension during learning. The MultiGroupDirect LiNGAM algorithm was found not to be applicable to the problem statement at hand. CONCLUSION: Only TARM and DYNOTEARS are practical algorithms for real-world data in this use case. As causal discovery is a method to debias purely statistical relationships, the gradient-based causal discovery algorithm DYNOTEARS was found to be most suitable.
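A minimal sketch of the simplest learner mentioned above: a first-order Markov chain estimated from ordered patient event sequences. The event codes are toy values rather than real OMOP concepts, and the other algorithms (TARM, DYNOTEARS, J-PCMCI+, LiNGAM) are not shown.

```python
# Minimal sketch: first-order Markov transition probabilities from event sequences.
from collections import defaultdict

sequences = [  # toy per-patient event sequences, ordered by date
    ["hypertension_dx", "ace_inhibitor_rx", "follow_up_visit"],
    ["hypertension_dx", "follow_up_visit", "ace_inhibitor_rx"],
    ["hypertension_dx", "ace_inhibitor_rx", "follow_up_visit"],
]

counts = defaultdict(lambda: defaultdict(int))
for seq in sequences:
    for current_event, next_event in zip(seq, seq[1:]):
        counts[current_event][next_event] += 1

transition_probs = {
    event: {nxt: n / sum(nexts.values()) for nxt, n in nexts.items()}
    for event, nexts in counts.items()
}
print(transition_probs["hypertension_dx"])  # roughly {'ace_inhibitor_rx': 0.67, 'follow_up_visit': 0.33}
```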


Subject(s)
Algorithms , Electronic Health Records , Humans , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Markov Chains , Medical Informatics/methods , Medical Informatics/statistics & numerical data
11.
BMC Med Inform Decis Mak ; 24(1): 162, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38915012

ABSTRACT

Many state-of-the-art results in natural language processing (NLP) rely on large pre-trained language models (PLMs). These models consist of large amounts of parameters that are tuned using vast amounts of training data. These factors cause the models to memorize parts of their training data, making them vulnerable to various privacy attacks. This is cause for concern, especially when these models are applied in the clinical domain, where data are very sensitive. Training data pseudonymization is a privacy-preserving technique that aims to mitigate these problems. This technique automatically identifies and replaces sensitive entities with realistic but non-sensitive surrogates. Pseudonymization has yielded promising results in previous studies. However, no previous study has applied pseudonymization to both the pre-training data of PLMs and the fine-tuning data used to solve clinical NLP tasks. This study evaluates the effects on the predictive performance of end-to-end pseudonymization of Swedish clinical BERT models fine-tuned for five clinical NLP tasks. A large number of statistical tests are performed, revealing minimal harm to performance when using pseudonymized fine-tuning data. The results also find no deterioration from end-to-end pseudonymization of pre-training and fine-tuning data. These results demonstrate that pseudonymizing training data to reduce privacy risks can be done without harming data utility for training PLMs.
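A minimal sketch of the surrogate-replacement step at the heart of the pseudonymization described here. The span detector is assumed to be a clinical NER model and is not shown, and the surrogate lists and example note are invented; this is not the study's pipeline.

```python
# Minimal sketch: replace NER-detected sensitive spans with realistic surrogates.
import random

SURROGATES = {"PERSON": ["Anna Berg", "Lars Ek"], "LOCATION": ["Uppsala", "Lund"]}  # invented surrogate pools

def pseudonymize(text: str, spans: list) -> str:
    """spans: non-overlapping (start, end, entity_type) tuples from an NER model."""
    out, cursor = [], 0
    for start, end, label in sorted(spans):
        out.append(text[cursor:start])
        out.append(random.choice(SURROGATES.get(label, ["[REDACTED]"])))
        cursor = end
    out.append(text[cursor:])
    return "".join(out)

note = "Pt Maria Svensson seen in Stockholm for follow-up."
print(pseudonymize(note, [(3, 17, "PERSON"), (26, 35, "LOCATION")]))
```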


Subject(s)
Natural Language Processing , Humans , Privacy , Sweden , Anonyms and Pseudonyms , Computer Security/standards , Confidentiality/standards , Electronic Health Records/standards
12.
BMC Med Inform Decis Mak ; 24(1): 178, 2024 Jun 24.
Article in English | MEDLINE | ID: mdl-38915008

ABSTRACT

OBJECTIVE: This study aimed to develop and validate a quantitative index system for evaluating the data quality of Electronic Medical Records (EMR) in disease risk prediction using Machine Learning (ML). MATERIALS AND METHODS: The index system was developed in four steps: (1) a preliminary index system was outlined based on a literature review; (2) we utilized the Delphi method to structure the indicators at all levels; (3) the weights of these indicators were determined using the Analytic Hierarchy Process (AHP) method; and (4) the developed index system was empirically validated using real-world EMR data in an ML-based disease risk prediction task. RESULTS: The synthesis of review findings and the expert consultations led to the formulation of a three-level index system with four first-level, 11 second-level, and 33 third-level indicators. The weights of these indicators were obtained through the AHP method. Results from the empirical analysis illustrated a positive relationship between the scores assigned by the proposed index system and the predictive performances of the datasets. DISCUSSION: The proposed index system for evaluating EMR data quality is grounded in extensive literature analysis and expert consultation. Moreover, the system's high reliability and suitability have been affirmed through empirical validation. CONCLUSION: The novel index system offers a robust framework for assessing the quality and suitability of EMR data in ML-based disease risk predictions. It can serve as a guide in building EMR databases, improving EMR data quality control, and generating reliable real-world evidence.
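For step (3), the sketch below derives indicator weights from a pairwise comparison matrix with the AHP principal-eigenvector method and reports a consistency ratio. The 3x3 matrix is a toy example, not the study's actual expert judgments.

```python
# Minimal sketch: AHP weights via the principal eigenvector of a Saaty-scale matrix.
import numpy as np

# A[i, j] = how much more important indicator i is than indicator j
A = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
principal = int(np.argmax(eigvals.real))
weights = np.abs(eigvecs[:, principal].real)
weights /= weights.sum()

n = A.shape[0]
ci = (eigvals[principal].real - n) / (n - 1)       # consistency index
cr = ci / 0.58                                      # random index for n = 3 is 0.58
print({"weights": weights.round(3).tolist(), "consistency_ratio": round(cr, 3)})
```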


Subject(s)
Data Accuracy , Electronic Health Records , Machine Learning , Electronic Health Records/standards , Humans , Risk Assessment/standards , Delphi Technique
13.
J Am Board Fam Med ; 37(2): 316-320, 2024.
Article in English | MEDLINE | ID: mdl-38740491

ABSTRACT

BACKGROUND: Creating useful clinical quality measure (CQM) reports in a busy primary care practice is known to depend on the capability of the electronic health record (EHR). Two other domains may also contribute: supportive leadership to prioritize the work and commit the necessary resources, and individuals with the necessary health information technology (IT) skills to do so. Here we describe the results of an assessment of the above 3 domains and their associations with successful CQM reporting during an initiative to improve smaller primary care practices' cardiovascular disease CQMs. METHODS: The study took place within an AHRQ EvidenceNOW initiative of external support for smaller practices across Washington, Oregon and Idaho. Practice facilitators who provided this support completed an assessment of the 3 domains previously described for each of their assigned practices. Practices submitted the following CQMs to the study team: appropriate aspirin prescribing, use of statins when indicated, blood pressure control, and tobacco screening/cessation. RESULTS: Practices with advanced EHR reporting capability were more likely to report 2 or more CQMs. Only one-third of practices were "advanced" in this domain, and this domain had the highest proportion of practices (39.1%) assessed as "basic." The presence of advanced leadership or advanced skills did not appreciably increase the proportion of practices that reported 2 or more CQMs. CONCLUSIONS: Our findings support previous reports of limited EHR reporting capabilities within smaller practices but extend these findings by demonstrating that practices with advanced capabilities in this domain are more likely to produce CQM reports.
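A minimal sketch of one of the cardiovascular CQMs named above (statin use when indicated) computed from a flat EHR extract. The eligibility rule is deliberately simplified and the columns and data are hypothetical, not the initiative's actual measure specification.

```python
# Minimal sketch: numerator/denominator for a simplified statin-use CQM.
import pandas as pd

patients = pd.DataFrame({
    "patient_id": [1, 2, 3, 4],
    "ascvd_diagnosis": [True, True, False, True],   # simplified eligibility criterion
    "statin_active": [True, False, False, True],
})

eligible = patients[patients["ascvd_diagnosis"]]
numerator = eligible["statin_active"].sum()
print(f"Statin CQM: {numerator}/{len(eligible)} = {100 * numerator / len(eligible):.0f}%")
```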


Subject(s)
Electronic Health Records , Primary Health Care , Humans , Primary Health Care/standards , Primary Health Care/organization & administration , Electronic Health Records/statistics & numerical data , Electronic Health Records/standards , Oregon , Cardiovascular Diseases/therapy , Cardiovascular Diseases/diagnosis , Washington , Quality of Health Care , Hydroxymethylglutaryl-CoA Reductase Inhibitors/therapeutic use , Idaho , Aspirin/administration & dosage , Quality Indicators, Health Care , Quality Improvement , Smoking Cessation/methods , Leadership
16.
Article in German | MEDLINE | ID: mdl-38748234

ABSTRACT

In order to achieve the goals of the Medical Informatics Initiative (MII), staff with skills in the field of medical informatics and data science are required. Each consortium has established training activities, and cross-consortium activities have also emerged. This article describes the concepts, implemented programs, and experiences in the consortia. Fifty-one new professorships have been established and 10 new study programs have been created: 1 bachelor's program, 6 consecutive master's programs, and 3 part-time master's programs. Further, learning and training opportunities can be used by all MII partners. Certification and recognition opportunities have been created. The educational offerings are aimed at target groups with a background in computer science, medicine, nursing, bioinformatics, biology, natural science, and data science. Additional qualifications for physicians in computer science and for computer scientists in medicine seem to be particularly important. They can lead to higher quality in software development and better support of treatment processes by application systems. Digital learning methods were important in all consortia. They offer flexibility for cross-location and interprofessional training. This enables learning at an individual pace and an exchange between professional groups. The success of the MII depends largely on society's acceptance of the multiple use of medical data in both healthcare and research. The information required for this is provided by the MII's public relations work. There is also an enormous need in society for medical and digital literacy.


Subject(s)
Curriculum , Medical Informatics , Humans , Computer Security/standards , Electronic Health Records/standards , Germany , Medical Informatics/education , Professional Competence/standards
17.
BMC Med Inform Decis Mak ; 24(1): 121, 2024 May 09.
Article in English | MEDLINE | ID: mdl-38724966

ABSTRACT

OBJECTIVE: Hospitals and healthcare providers should assess and compare the quality of care given to patients and, based on this, improve the care. In the Netherlands, hospitals provide data to national quality registries, which in return provide annual quality indicators. However, this process is time-consuming, resource intensive, and risks patient privacy and confidentiality. In this paper, we present a multicentric 'Proof of Principle' study for federated calculation of quality indicators in patients with colorectal cancer. The findings suggest that the proposed approach is highly time-efficient and consumes significantly fewer resources. MATERIALS AND METHODS: Two quality indicators were calculated in an efficient and privacy-preserving federated manner by (i) applying the Findable, Accessible, Interoperable, and Reusable (FAIR) data principles and (ii) using the Personal Health Train (PHT) infrastructure. Instead of sharing data with a centralized registry, PHT enables analysis by sending algorithms to the data and sharing only insights derived from the data. RESULTS: An ETL process extracted data from the hospitals' Electronic Health Record systems, converted them to FAIR data, and hosted them in RDF endpoints within each hospital. Finally, quality indicators from each center were calculated using PHT, and the mean result was plotted along with the individual results. DISCUSSION AND CONCLUSION: The PHT infrastructure and FAIR data principles can support efficient calculation of quality indicators in a privacy-preserving federated approach, and the work can be scaled up both nationally and internationally. Despite this, application of the methodology was largely hampered by ELSI (ethical, legal, and social implications) issues. However, the lessons learned from this study can help other hospitals and researchers adopt the process easily and take effective measures in building quality-of-care infrastructures.
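The sketch below illustrates the kind of query a PHT "algorithm" might run against a hospital's local RDF endpoint, returning only an aggregate count rather than patient-level data. The endpoint URL, the ontology IRIs, and the indicator definition are assumptions for illustration, not the study's actual artifacts.

```python
# Minimal sketch: a federated-style indicator query against a local RDF endpoint.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("http://localhost:7200/repositories/crc-fair")   # hypothetical station endpoint
sparql.setReturnFormat(JSON)
sparql.setQuery("""
PREFIX ex: <http://example.org/crc#>
SELECT (COUNT(DISTINCT ?patient) AS ?n) WHERE {
  ?patient a ex:ColorectalCancerPatient ;
           ex:daysFromDiagnosisToSurgery ?days .
  FILTER(?days <= 35)          # numerator of a hypothetical timeliness indicator
}
""")
result = sparql.query().convert()
numerator = int(result["results"]["bindings"][0]["n"]["value"])
print("Patients operated on within 35 days:", numerator)   # only this count leaves the hospital
```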


Subject(s)
Colorectal Neoplasms , Electronic Health Records , Quality Indicators, Health Care , Humans , Colorectal Neoplasms/therapy , Quality Indicators, Health Care/standards , Netherlands , Electronic Health Records/standards , Registries/standards
19.
Article in German | MEDLINE | ID: mdl-38639817

ABSTRACT

BACKGROUND: The digitalization in the healthcare sector promises a secondary use of patient data in the sense of a learning healthcare system. For this, the Medical Informatics Initiative's (MII) Consent Working Group has created an ethical and legal basis with standardized consent documents. This paper describes the systematically monitored introduction of these documents at the MII sites. METHODS: The monitoring of the introduction included regular online surveys, an in-depth analysis of the introduction processes at selected sites, and an assessment of the documents in use. In addition, inquiries and feedback from a large number of stakeholders were evaluated. RESULTS: The online surveys showed that 27 of the 32 sites have gradually introduced the consent documents productively, with a current total of 173,289 consents. The analysis of the implementation procedures revealed heterogeneous organizational conditions at the sites. The requirements of various stakeholders were met by developing and providing supplementary versions of the consent documents and additional information materials. DISCUSSION: The introduction of the MII consent documents at the university hospitals creates a uniform legal basis for the secondary use of patient data. However, the comprehensive implementation within the sites remains challenging. Therefore, minimum requirements for patient information and supplementary recommendations for best practice must be developed. The further development of the national legal framework for research will not render the participation and transparency mechanisms developed here obsolete.


Subject(s)
Informed Consent , Germany , Informed Consent/legislation & jurisprudence , Informed Consent/standards , Humans , Electronic Health Records/legislation & jurisprudence , Electronic Health Records/standards , Consent Forms/standards , Consent Forms/legislation & jurisprudence , National Health Programs/legislation & jurisprudence
20.
Jt Comm J Qual Patient Saf ; 50(8): 560-568, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38584053

ABSTRACT

BACKGROUND: Communication failures are among the most common causes of harmful medical errors. At one Comprehensive Cancer Center, patient handoffs varied among services. The authors describe the implementation and results of an organization-wide project to improve handoffs and implement an evidence-based handoff tool across all inpatient services. METHODS: The research team created a task force composed of members from 22 hospital services: advanced practice providers (APPs), trainees, some faculty members, electronic health record (EHR) staff, education and training specialists, and nocturnal providers. Over two years, the task force expanded to include consulting services and Anesthesiology. Factors contributing to ineffective handoffs were identified and organized into categories. The EHR I-PASS tool was used to standardize handoff documentation. Training was provided to staff on its use, and compliance was monitored using a customized dashboard. I-PASS champions in each service were responsible for the rollout of I-PASS in their respective services. The data were reported quarterly to the Quality Assessment and Performance Improvement (QAPI) governing committee. Provider handoff perception was assessed through the biennial Institution-wide safety culture survey. RESULTS: All fellows, residents, APPs, and physician assistants were trained in the use of I-PASS, either online or in person. Adherence to the I-PASS written tool improved from 41.6% in 2019 to 70.5% in 2022 (p < 0.05), with improvements seen in most services. The frequency of updating I-PASS elements and the action list in the handoff tool also increased over time. The handoff favorability score on the safety culture survey improved from 38% in 2018 to 59% in 2022. CONCLUSION: The implementation approach developed by the Provider Handoff Task Force led to increased use of the I-PASS EHR tool and improved safety culture survey handoff favorability.


Subject(s)
Advisory Committees , Cancer Care Facilities , Patient Handoff , Humans , Patient Handoff/standards , Patient Handoff/organization & administration , Cancer Care Facilities/organization & administration , Cancer Care Facilities/standards , Advisory Committees/organization & administration , Electronic Health Records/organization & administration , Electronic Health Records/standards , Quality Improvement/organization & administration , Communication , Patient Safety/standards