Results 1 - 11 of 11
1.
J Am Med Inform Assoc ; 28(1): 126-131, 2021 01 15.
Article in English | MEDLINE | ID: mdl-33120413

ABSTRACT

Identifying acute events as they occur is challenging in large hospital systems. Here, we describe an automated method to detect 2 rare adverse drug events (ADEs), drug-induced torsades de pointes and Stevens-Johnson syndrome/toxic epidermal necrolysis, in near real time for participant recruitment into prospective clinical studies. A text processing system searched clinical notes from the electronic health record (EHR) for relevant keywords and alerted study personnel via email to potential patients for chart review or in-person evaluation. Between 2016 and 2018, the automated recruitment system captured 138 true cases of drug-induced rare events, improving recall from 43% to 93%. Our focused electronic alert system sustained enrollment over the 2-year period, including across an EHR migration from a bespoke system to Epic. Real-time monitoring of EHR notes may accelerate research for certain conditions less amenable to conventional study recruitment paradigms.
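The abstract does not include the alerting pipeline itself; the sketch below is a minimal, hypothetical illustration of the kind of keyword-based note screening and email notification it describes. The keyword lists, email addresses, and function names are assumptions, not the study's actual implementation.

```python
import smtplib
from email.message import EmailMessage

# Hypothetical keyword lists for the two adverse drug events described in the abstract.
TDP_KEYWORDS = {"torsades", "torsade de pointes", "polymorphic vt"}
SJS_TEN_KEYWORDS = {"stevens-johnson", "toxic epidermal necrolysis", "epidermal necrolysis"}

def screen_note(note_text: str) -> list[str]:
    """Return the event types whose keywords appear in a clinical note."""
    text = note_text.lower()
    hits = []
    if any(kw in text for kw in TDP_KEYWORDS):
        hits.append("drug-induced torsades de pointes")
    if any(kw in text for kw in SJS_TEN_KEYWORDS):
        hits.append("SJS/TEN")
    return hits

def alert_study_staff(patient_id: str, events: list[str], smtp_host: str = "localhost") -> None:
    """Email study personnel so they can chart-review or evaluate the flagged patient."""
    msg = EmailMessage()
    msg["Subject"] = f"Potential ADE case: {', '.join(events)}"
    msg["From"] = "ade-screener@example.org"   # placeholder address
    msg["To"] = "study-team@example.org"       # placeholder address
    msg.set_content(f"Patient {patient_id} has notes mentioning: {', '.join(events)}.")
    with smtplib.SMTP(smtp_host) as smtp:
        smtp.send_message(msg)
```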


Subject(s)
Drug-Related Side Effects and Adverse Reactions/diagnosis , Electronic Health Records , Medical Order Entry Systems , Stevens-Johnson Syndrome/diagnosis , Torsades de Pointes/chemically induced , Adult , Data Mining , Female , Humans , Male , Middle Aged , Prospective Studies , Rare Diseases/diagnosis , Torsades de Pointes/diagnosis
2.
J Am Med Inform Assoc ; 25(11): 1540-1546, 2018 11 01.
Article in English | MEDLINE | ID: mdl-30124903

ABSTRACT

Electronic health record (EHR) algorithms for defining patient cohorts are commonly shared as free-text descriptions that require human intervention both to interpret and implement. We developed the Phenotype Execution and Modeling Architecture (PhEMA, http://projectphema.org) to author and execute standardized computable phenotype algorithms. With PhEMA, we converted an algorithm for benign prostatic hyperplasia, developed for the electronic Medical Records and Genomics network (eMERGE), into a standards-based computable format. Eight sites (7 within eMERGE) received the computable algorithm, and 6 successfully executed it against local data warehouses and/or i2b2 instances. Blinded random chart review of cases selected by the computable algorithm showed a PPV ≥90%, and 3 of 5 sites had >90% overlap of selected cases when comparing the computable algorithm to their original eMERGE implementation. This case study demonstrates potential use of PhEMA computable representations to automate phenotyping across different EHR systems, but also highlights some ongoing challenges.
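As a rough illustration of what a computable phenotype buys over a free-text description, the sketch below expresses a phenotype as data (a value set plus simple logic) and executes it against a generic patient-code table. This is not PhEMA's actual representation, and the diagnosis codes and count threshold are illustrative only, not the eMERGE BPH algorithm.

```python
from dataclasses import dataclass

@dataclass
class PhenotypeDefinition:
    """A phenotype as data rather than free text: a value set plus simple logic."""
    name: str
    diagnosis_codes: set[str]     # value set of qualifying diagnosis codes
    required_code_count: int = 2  # e.g., "two or more qualifying diagnoses"

# Illustrative only -- not the eMERGE BPH algorithm's real value set or logic.
bph = PhenotypeDefinition(
    name="benign prostatic hyperplasia (sketch)",
    diagnosis_codes={"600.00", "600.01", "N40.0", "N40.1"},
    required_code_count=2,
)

def evaluate(defn: PhenotypeDefinition, patient_codes: dict[str, list[str]]) -> set[str]:
    """Return patient IDs meeting the definition, given {patient_id: [codes]}."""
    cases = set()
    for pid, codes in patient_codes.items():
        hits = [c for c in codes if c in defn.diagnosis_codes]
        if len(hits) >= defn.required_code_count:
            cases.add(pid)
    return cases

print(evaluate(bph, {"p1": ["600.00", "N40.0"], "p2": ["I10"]}))  # -> {'p1'}
```

Because the definition is plain data, it can be serialized and shipped to another site, which only needs to supply its own patient-code extraction.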


Subject(s)
Algorithms , Electronic Health Records , Phenotype , Prostatic Hyperplasia/diagnosis , Data Warehousing , Databases, Factual , Genomics , Humans , Male , Organizational Case Studies , Prostatic Hyperplasia/genetics
3.
J Am Med Inform Assoc ; 23(6): 1046-1052, 2016 11.
Article in English | MEDLINE | ID: mdl-27026615

ABSTRACT

OBJECTIVE: Health care generated data have become an important source for clinical and genomic research. Often, investigators create and iteratively refine phenotype algorithms to achieve high positive predictive values (PPVs) or sensitivity, thereby identifying valid cases and controls. These algorithms achieve the greatest utility when validated and shared by multiple health care systems. MATERIALS AND METHODS: We report the current status and impact of the Phenotype KnowledgeBase (PheKB, http://phekb.org), an online environment supporting the workflow of building, sharing, and validating electronic phenotype algorithms. We analyze the most frequent components used in algorithms and their performance at authoring institutions and secondary implementation sites. RESULTS: As of June 2015, PheKB contained 30 finalized phenotype algorithms and 62 algorithms in development spanning a range of traits and diseases. Phenotypes have had over 3500 unique views in a 6-month period and have been reused by other institutions. International Classification of Disease codes were the most frequently used component, followed by medications and natural language processing. Among algorithms with published performance data, the median PPV was nearly identical when evaluated at the authoring institutions (n = 44; case 96.0%, control 100%) compared to implementation sites (n = 40; case 97.5%, control 100%). DISCUSSION: These results demonstrate that a broad range of algorithms to mine electronic health record data from different health systems can be developed with high PPV, and algorithms developed at one site are generally transportable to others. CONCLUSION: By providing a central repository, PheKB enables improved development, transportability, and validity of algorithms for research-grade phenotypes using health care generated data.
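The PPV figures above come from chart review of algorithm-selected records. A minimal sketch of that validation arithmetic is shown below; the counts are invented for illustration and are not the study's actual review tallies.

```python
def positive_predictive_value(confirmed_cases: int, reviewed_cases: int) -> float:
    """PPV = confirmed true cases / all algorithm-selected records that were reviewed."""
    if reviewed_cases == 0:
        raise ValueError("No reviewed cases")
    return confirmed_cases / reviewed_cases

# Invented counts for illustration: 48 of 50 algorithm-flagged charts confirmed on review.
print(f"PPV = {positive_predictive_value(48, 50):.1%}")  # PPV = 96.0%
```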


Subject(s)
Algorithms , Knowledge Bases , Phenotype , Data Mining/methods , Electronic Health Records , Genomics , Humans , International Classification of Diseases , Natural Language Processing
4.
J Am Med Inform Assoc ; 22(6): 1220-30, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26342218

ABSTRACT

BACKGROUND: Electronic health records (EHRs) are increasingly used for clinical and translational research through the creation of phenotype algorithms. Currently, phenotype algorithms are most commonly represented as noncomputable descriptive documents and knowledge artifacts that detail the protocols for querying diagnoses, symptoms, procedures, medications, and/or text-driven medical concepts, and are primarily meant for human comprehension. We present desiderata for developing a computable phenotype representation model (PheRM). METHODS: A team of clinicians and informaticians reviewed common features for multisite phenotype algorithms published in PheKB.org and existing phenotype representation platforms. We also evaluated well-known diagnostic criteria and clinical decision-making guidelines to encompass a broader category of algorithms. RESULTS: We propose 10 desired characteristics for a flexible, computable PheRM: (1) structure clinical data into queryable forms; (2) recommend use of a common data model, but also support customization for the variability and availability of EHR data among sites; (3) support both human-readable and computable representations of phenotype algorithms; (4) implement set operations and relational algebra for modeling phenotype algorithms; (5) represent phenotype criteria with structured rules; (6) support defining temporal relations between events; (7) use standardized terminologies and ontologies, and facilitate reuse of value sets; (8) define representations for text searching and natural language processing; (9) provide interfaces for external software algorithms; and (10) maintain backward compatibility. CONCLUSION: A computable PheRM is needed for true phenotype portability and reliability across different EHR products and healthcare systems. These desiderata are a guide to inform the establishment and evolution of EHR phenotype algorithm authoring platforms and languages.
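To make desiderata (4) and (6) concrete, here is a minimal sketch, not drawn from any actual PheRM implementation, of phenotype criteria expressed as set operations over coded events combined with a simple temporal constraint. The event codes, the toy data, and the 90-day window are illustrative assumptions.

```python
from datetime import date

# Toy event store: {patient_id: [(code, date), ...]} -- illustrative data only.
events = {
    "p1": [("E11.9", date(2020, 1, 5)), ("metformin", date(2020, 2, 1))],
    "p2": [("E11.9", date(2020, 1, 5))],
}

def patients_with(code: str) -> set[str]:
    """Set of patients having at least one event with the given code."""
    return {pid for pid, evs in events.items() if any(c == code for c, _ in evs)}

def within_days(pid: str, code_a: str, code_b: str, days: int) -> bool:
    """Temporal relation: some code_b event occurs within `days` after a code_a event."""
    dates_a = [d for c, d in events[pid] if c == code_a]
    dates_b = [d for c, d in events[pid] if c == code_b]
    return any(0 <= (db - da).days <= days for da in dates_a for db in dates_b)

# Set operations (desideratum 4) combined with a temporal constraint (desideratum 6):
diabetes_dx = patients_with("E11.9")
on_metformin = patients_with("metformin")
cases = {pid for pid in (diabetes_dx & on_metformin)
         if within_days(pid, "E11.9", "metformin", 90)}
print(cases)  # -> {'p1'}
```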


Subject(s)
Algorithms , Diagnosis, Computer-Assisted , Electronic Health Records , Humans , Phenotype
5.
AMIA Jt Summits Transl Sci Proc ; 2015: 127-31, 2015.
Article in English | MEDLINE | ID: mdl-26306254

ABSTRACT

Electronic clinical quality measures (eCQMs) based on the Quality Data Model (QDM) cannot currently be executed against non-standardized electronic health record (EHR) data. To address this gap, we prototyped an implementation of a QDM-based eCQM using KNIME, an open-source platform comprising a wide array of computational workflow tools that are collectively capable of executing QDM-based logic, while also giving users the flexibility to customize mappings from site-specific EHR data. To prototype this capability, we implemented eCQM CMS30 (titled: Statin Prescribed at Discharge) using KNIME. The implementation contains value set modules with connections to the National Library of Medicine's Value Set Authority Center, QDM Data Elements that can query a local EHR database, and logical and temporal operators. We successfully executed the KNIME implementation of CMS30 using data from the Vanderbilt University and Northwestern University EHR systems.
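Expressed outside of KNIME, the core of such a measure reduces to value-set membership tests over denominator and numerator populations. The sketch below is a hedged approximation of that shape; the value sets are invented placeholders rather than the VSAC value sets CMS30 actually references, and the encounter records are toy data.

```python
# Invented value sets for illustration; the real measure pulls these from VSAC.
QUALIFYING_DISCHARGE_DX = {"I21.09", "I21.4"}  # e.g., qualifying discharge diagnoses
STATIN_MEDICATIONS = {"atorvastatin", "rosuvastatin", "simvastatin"}

def in_denominator(encounter: dict) -> bool:
    """Discharge encounters whose diagnosis falls in the qualifying value set."""
    return encounter["discharge_dx"] in QUALIFYING_DISCHARGE_DX

def in_numerator(encounter: dict) -> bool:
    """Denominator encounters with a statin on the discharge medication list."""
    return any(m in STATIN_MEDICATIONS for m in encounter["discharge_meds"])

encounters = [
    {"id": "e1", "discharge_dx": "I21.4", "discharge_meds": ["atorvastatin", "aspirin"]},
    {"id": "e2", "discharge_dx": "I21.4", "discharge_meds": ["aspirin"]},
]
denominator = [e for e in encounters if in_denominator(e)]
numerator = [e for e in denominator if in_numerator(e)]
print(f"Measure rate: {len(numerator)}/{len(denominator)}")  # 1/2
```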

6.
AMIA Jt Summits Transl Sci Proc ; 2015: 147-51, 2015.
Article in English | MEDLINE | ID: mdl-26306258

ABSTRACT

Increasing interest in and experience with electronic health record (EHR)-driven phenotyping has yielded multiple challenges that are at present only partially addressed. Many solutions require the adoption of a single software platform, often with the additional cost of mapping existing patient and phenotypic data to multiple representations. We propose a set of guiding design principles and a modular software architecture to bridge the gap toward standardized phenotype representation, dissemination, and execution. Ongoing development leveraging this proposed architecture has shown its ability to address existing limitations.
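The abstract stays at the level of design principles. As one way to picture "modular," the sketch below (entirely hypothetical, not the authors' architecture) separates phenotype representation from execution behind a small interface, so either side can be swapped without touching the other; the class names and toy backend are assumptions.

```python
from abc import ABC, abstractmethod

class PhenotypeExecutor(ABC):
    """Interface any backend (SQL warehouse, i2b2, flat files, ...) could implement."""

    @abstractmethod
    def execute(self, phenotype_definition: dict) -> set[str]:
        """Return the IDs of patients matching a standardized phenotype definition."""

class InMemoryExecutor(PhenotypeExecutor):
    """Toy backend over an in-memory {patient_id: set_of_codes} mapping."""

    def __init__(self, data: dict[str, set[str]]):
        self.data = data

    def execute(self, phenotype_definition: dict) -> set[str]:
        wanted = set(phenotype_definition["codes"])
        return {pid for pid, codes in self.data.items() if codes & wanted}

backend = InMemoryExecutor({"p1": {"E11.9"}, "p2": {"I10"}})
print(backend.execute({"codes": ["E11.9"]}))  # -> {'p1'}
```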

7.
Stud Health Technol Inform ; 216: 1098, 2015.
Article in English | MEDLINE | ID: mdl-26262397

ABSTRACT

This study describes our efforts in developing a standards-based semantic metadata repository for supporting electronic health record (EHR)-driven phenotype authoring and execution. Our system comprises three layers: 1) a semantic data element repository layer; 2) a semantic services layer; and 3) a phenotype application layer. In a prototype implementation, we developed the repository and services by integrating the data elements from both the Quality Data Model (QDM) and HL7 Fast Healthcare Interoperability Resources (FHIR) models. We discuss the modeling challenges and the potential of our system to support EHR phenotype authoring and execution applications.
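As a rough illustration of the repository's data element layer, the sketch below models a link between a QDM data element and a FHIR resource as a plain record. The specific mapping, the placeholder OID, and the helper function are illustrative assumptions, not the paper's actual element definitions.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class DataElementMapping:
    """One repository entry linking a QDM data element to a FHIR resource element."""
    qdm_category: str   # e.g., "Medication, Order"
    fhir_resource: str  # e.g., "MedicationRequest"
    fhir_element: str   # attribute carrying the coded concept
    value_set_oid: str  # value set identifier bound to the element

# Illustrative mapping only -- not taken from the paper.
statin_order = DataElementMapping(
    qdm_category="Medication, Order",
    fhir_resource="MedicationRequest",
    fhir_element="medicationCodeableConcept",
    value_set_oid="2.16.840.1.113883.3.EXAMPLE",  # placeholder OID
)

def to_query_hint(m: DataElementMapping) -> str:
    """Render the mapping as a human-readable retrieval hint for a phenotype author."""
    return f"Query {m.fhir_resource}.{m.fhir_element} against value set {m.value_set_oid}"

print(to_query_hint(statin_order))
```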


Subject(s)
Databases, Factual/standards , Electronic Health Records/standards , Health Level Seven/standards , Semantics , Vocabulary, Controlled , Guidelines as Topic , Medical Record Linkage/standards , Natural Language Processing , United States
8.
J Am Med Inform Assoc ; 22(6): 1251-60, 2015 Nov.
Article in English | MEDLINE | ID: mdl-26224336

ABSTRACT

OBJECTIVE: To review and evaluate available software tools for electronic health record-driven phenotype authoring in order to identify gaps and needs for future development. MATERIALS AND METHODS: Candidate phenotype authoring tools were identified through (1) a literature search in four publication databases (PubMed, Embase, Web of Science, and Scopus) and (2) a web search. A collection of tools was compiled and reviewed after the searches. A survey was designed and distributed to the developers of the reviewed tools to discover their functionalities and features. RESULTS: Twenty-four different phenotype authoring tools were identified and reviewed. Developers of 16 of these identified tools completed the evaluation survey (67% response rate). The surveyed tools showed commonalities but also varied in their capabilities in algorithm representation, logic functions, data support and software extensibility, search functions, user interface, and data outputs. DISCUSSION: Positive trends identified in the evaluation included: algorithms can be represented in both computable and human-readable formats, and most tools offer a web interface for easy access. However, issues were also identified: many tools lacked advanced logic functions for authoring complex algorithms; the ability to construct queries that leveraged unstructured data was not widely implemented; and many tools had limited support for plug-ins or external analytic software. CONCLUSIONS: Existing phenotype authoring tools could enable clinical researchers to work with electronic health record data more efficiently, but gaps still exist in terms of the functionalities of such tools. The present work can serve as a reference point for the future development of similar tools.


Subject(s)
Algorithms , Biomedical Research , Electronic Health Records , Software , Humans , Translational Research, Biomedical
9.
J Biomed Inform ; 56: 292-9, 2015 Aug.
Article in English | MEDLINE | ID: mdl-26070431

ABSTRACT

OBJECTIVE: Assessment of medical trainee learning through pre-defined competencies is now commonplace in schools of medicine. We describe a novel electronic advisor system using natural language processing (NLP) to identify two geriatric medicine competencies from medical student clinical notes in the electronic medical record: advance directives (AD) and altered mental status (AMS). MATERIALS AND METHODS: Clinical notes from third year medical students were processed using a general-purpose NLP system to identify biomedical concepts and their section context. The system analyzed these notes for relevance to AD or AMS and generated custom email alerts to students with embedded supplemental learning material customized to their notes. Recall and precision of the two advisors were evaluated by physician review. Students were given pre- and post-tests of multiple-choice questions broadly covering geriatrics. RESULTS: Of 102 students approached, 66 students consented and enrolled. The system sent 393 email alerts to 54 students (82%), including 270 for AD and 123 for AMS. Precision was 100% for AD and 93% for AMS. Recall was 69% for AD and 100% for AMS. Students mentioned ADs for 43 patients, with all mentions occurring after first having received an AD reminder. Students accessed educational links 34 times from the 393 email alerts. There was no difference between pre-test (mean 62%) and post-test (mean 60%) scores. CONCLUSIONS: The system effectively identified two educational opportunities using NLP applied to clinical notes and demonstrated a small change in student behavior. Use of electronic advisors such as these may provide a scalable model to assess specific competency elements and deliver educational opportunities.
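The precision and recall figures quoted above come from physician review of the generated alerts. A minimal sketch of that calculation is shown below; the counts are invented for illustration and are not the study's actual review tallies.

```python
def precision_recall(true_positives: int, false_positives: int,
                     false_negatives: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); recall = TP / (TP + FN)."""
    precision = true_positives / (true_positives + false_positives)
    recall = true_positives / (true_positives + false_negatives)
    return precision, recall

# Invented counts for illustration only.
p, r = precision_recall(true_positives=90, false_positives=7, false_negatives=5)
print(f"precision={p:.0%}, recall={r:.0%}")
```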


Subject(s)
Advance Directives , Educational Measurement , Geriatrics/education , Mental Disorders/diagnosis , Natural Language Processing , Academic Medical Centers , Aged , Algorithms , Automation , Clinical Clerkship , Clinical Competence , Education, Medical , Electronic Health Records , Hospitals, Veterans , Humans , Learning , Middle Aged , Outcome Assessment, Health Care , Reproducibility of Results , Software , Students, Medical , Tennessee , User-Computer Interface
10.
Pharmacogenomics ; 13(4): 407-18, 2012 Mar.
Article in English | MEDLINE | ID: mdl-22329724

ABSTRACT

AIM: Warfarin pharmacogenomic algorithms reduce dosing error, but perform poorly in non-European-Americans. Electronic health record (EHR) systems linked to biobanks may allow for pharmacogenomic analysis, but they have not yet been used for this purpose. PATIENTS & METHODS: We used BioVU, the Vanderbilt EHR-linked DNA repository, to identify European-Americans (n = 1022) and African-Americans (n = 145) on stable warfarin therapy and evaluated the effect of 15 pharmacogenetic variants on stable warfarin dose. RESULTS: Associations between variants in VKORC1, CYP2C9 and CYP4F2 and weekly dose were observed in European-Americans, as well as additional variants in CYP2C9 and CALU in African-Americans. Compared with traditional 5 mg/day dosing, implementing the US FDA recommendations or the International Warfarin Pharmacogenomics Consortium (IWPC) algorithm reduced error in weekly dose in European-Americans (from 13.5 to 12.4 and 9.5 mg/week, respectively) but less so in African-Americans (from 15.2 to 15.0 and 13.8 mg/week, respectively). By further incorporating associated variants specific for European-Americans and African-Americans in an expanded algorithm, dose-prediction error was reduced to 9.1 mg/week (95% CI: 8.4-9.6) in European-Americans and 12.4 mg/week (95% CI: 10.0-13.2) in African-Americans. The expanded algorithm explained 41 and 53% of dose variation in African-Americans and European-Americans, respectively, compared with 29 and 50%, respectively, for the IWPC algorithm. Implementing these predictions via dispensable pill regimens similarly reduced dosing error. CONCLUSION: These results validate EHR-linked DNA biorepositories as real-world resources for pharmacogenomic validation and discovery.
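Dosing algorithms of this kind are essentially linear regressions on clinical and genotype covariates, typically fit to the square root of the weekly dose. The sketch below shows that general shape only; the coefficients, variable names, and example patient are entirely hypothetical and are not the published IWPC, FDA, or expanded-algorithm values.

```python
# Hypothetical coefficients for illustration; published algorithms use different values.
INTERCEPT = 5.6
COEFFS = {
    "age_decades": -0.26,             # age in decades
    "height_cm": 0.009,
    "weight_kg": 0.013,
    "vkorc1_1639_a_alleles": -0.85,   # 0, 1, or 2 copies of the variant allele
    "cyp2c9_variant_alleles": -0.50,  # 0, 1, or 2 reduced-function alleles
    "amiodarone": -0.60,              # 1 if on amiodarone, else 0
}

def predict_weekly_dose(patient: dict) -> float:
    """Linear model on the square root of the weekly dose, squared back to mg/week."""
    sqrt_dose = INTERCEPT + sum(COEFFS[k] * patient.get(k, 0) for k in COEFFS)
    return max(sqrt_dose, 0.0) ** 2

example = {"age_decades": 6, "height_cm": 170, "weight_kg": 80,
           "vkorc1_1639_a_alleles": 1, "cyp2c9_variant_alleles": 0, "amiodarone": 0}
print(f"{predict_weekly_dose(example):.1f} mg/week (hypothetical coefficients)")
```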


Subject(s)
Anticoagulants/administration & dosage , Black or African American/genetics , Dose-Response Relationship, Drug , Warfarin/administration & dosage , White People/genetics , Adult , Aged , Aged, 80 and over , Aryl Hydrocarbon Hydroxylases/genetics , Calcium-Binding Proteins/genetics , Cytochrome P-450 CYP2C9 , Cytochrome P-450 Enzyme System/genetics , Cytochrome P450 Family 4 , Drug Administration Schedule , Electronic Health Records , Female , Humans , Male , Middle Aged , Mixed Function Oxygenases/genetics , Polymorphism, Single Nucleotide/genetics , Substance-Related Disorders , Vitamin K Epoxide Reductases
11.
AMIA Annu Symp Proc ; 2010: 157-61, 2010 Nov 13.
Article in English | MEDLINE | ID: mdl-21346960

ABSTRACT

Accurate assessment and evaluation of medical curricula has long been a goal of medical educators. Current methods rely on manually entered keywords and trainee-recorded logs of case exposure. In this study, we used natural language processing to compare the clinical content coverage in a four-year medical curriculum to the electronic medical record notes written by clinical trainees. The content coverage was compared for each of 25 agreed-upon core clinical problems (CCPs) and seven categories of infectious diseases. Most CCPs were covered in both corpora. The lecture curriculum more frequently covered rare clinical problems, and several areas of low content coverage were identified, primarily related to outpatient complaints. Such methods may prove useful for future curriculum evaluations and revisions.
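At its core, the comparison amounts to mapping each corpus to a set of clinical concepts and checking, per core clinical problem, whether it appears in the lectures, the trainee notes, both, or neither. The toy sketch below illustrates that set comparison with invented concept assignments; it is not the study's NLP pipeline or its CCP list.

```python
# Invented concept-to-corpus assignments for illustration only.
lecture_concepts = {"chest pain", "pneumonia", "delirium", "rare metabolic disorder"}
trainee_note_concepts = {"chest pain", "pneumonia", "delirium", "low back pain"}

core_clinical_problems = {"chest pain", "pneumonia", "delirium", "low back pain",
                          "rare metabolic disorder"}

for ccp in sorted(core_clinical_problems):
    in_lectures = ccp in lecture_concepts
    in_notes = ccp in trainee_note_concepts
    status = ("both" if in_lectures and in_notes
              else "lectures only" if in_lectures
              else "notes only" if in_notes
              else "neither")
    print(f"{ccp}: {status}")
```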


Subject(s)
Curriculum , Natural Language Processing , Education, Medical, Undergraduate , Electronic Health Records , Humans