ABSTRACT
BACKGROUND: VCF-formatted files are the lingua franca of next-generation sequencing, whereas HL7 FHIR is emerging as a standard language for electronic health record interoperability, and a growing number of FHIR-based clinical genomics applications are being developed. Here, we describe an open source utility for converting variants from VCF format into HL7 FHIR format. RESULTS: vcf2fhir converts VCF variants into a FHIR Genomics Diagnostic Report. Conversion translates each VCF row into a corresponding FHIR-formatted variant in the generated report. In scope are simple variants (SNVs, MNVs, indels), along with zygosity and phase relationships, for autosomes, sex chromosomes, and mitochondrial DNA. Input parameters include the VCF file and genome build ('GRCh37' or 'GRCh38'), and optionally a conversion region that indicates the region(s) to convert, a studied region that lists genomic regions studied by the lab, and a non-callable region that lists studied regions deemed uncallable by the lab. Conversion can be limited to a subset of the VCF by supplying genomic coordinates of the conversion region(s). If studied and non-callable regions are also supplied, the output FHIR report will include 'region-studied' observations that detail which portions of the conversion region were studied, and of those studied regions, which portions were deemed uncallable. We illustrate the vcf2fhir utility via two case studies. The first, 'SMART Cancer Navigator', is a web application that offers clinical decision support by linking patient EHR information to cancerous gene variants. The second, 'Precision Genomics Integration Platform', intersects a patient's FHIR-formatted clinical and genomic data with knowledge bases in order to provide on-demand delivery of contextually relevant genomic findings and recommendations to the EHR. CONCLUSIONS: Experience to date shows that the vcf2fhir utility can be effectively woven into clinically useful genomic-EHR integration pipelines.
Additional testing will be a critical step towards the clinical validation of this utility, enabling it to be integrated into a variety of real-world data flow scenarios. For now, we propose the use of this utility primarily to accelerate FHIR Genomics understanding and to facilitate experimentation with further integration of genomics data into the EHR.
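The per-row conversion this abstract describes can be sketched as follows. This is an illustrative reduction, not the vcf2fhir utility itself: the LOINC component codes follow HL7 Genomics Reporting conventions, but the real tool handles many additional cases (MNVs, indels, phasing, and region-studied observations).

```python
def vcf_row_to_fhir_variant(vcf_line, assembly="GRCh37"):
    """Map one tab-delimited VCF data row to a FHIR-style variant Observation."""
    f = vcf_line.rstrip("\n").split("\t")
    ref, alt = f[3], f[4]
    # Zygosity from the sample's GT field (FORMAT column 8, sample column 9).
    gt = dict(zip(f[8].split(":"), f[9].split(":"))).get("GT", ".")
    alleles = gt.replace("|", "/").split("/")
    zygosity = "homozygous" if len(set(alleles)) == 1 else "heterozygous"

    def component(code, display, value):
        return {"code": {"coding": [{"system": "http://loinc.org",
                                     "code": code, "display": display}]},
                **value}

    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {"coding": [{"system": "http://loinc.org", "code": "69548-6",
                             "display": "Genetic variant assessment"}]},
        "component": [
            component("62374-4", "Human reference sequence assembly version",
                      {"valueCodeableConcept": {"text": assembly}}),
            component("69547-8", "Genomic ref allele", {"valueString": ref}),
            component("69551-0", "Genomic alt allele", {"valueString": alt}),
            component("53034-5", "Allelic state",
                      {"valueCodeableConcept": {"text": zygosity}}),
        ],
    }

row = "chr1\t12345\t.\tA\tG\t50\tPASS\t.\tGT\t0/1"
obs = vcf_row_to_fhir_variant(row)
```

In the full utility, such variant observations are bundled into the generated Genomics Diagnostic Report rather than emitted individually.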
Subjects
Clinical Decision Support Systems , Genomics , Electronic Health Records , Humans , Knowledge Bases , Oncogenes
ABSTRACT
We developed a prototype genomic archiving and communications system to securely store genome data and provide clinical decision support (CDS). This system operates on a client-server model. The client encrypts the data, and the server stores data and performs the computations necessary for CDS. Computations are directly performed on encrypted data, and the client decrypts results. The server cannot decrypt inputs or outputs, which provides strong guarantees of security. We have validated our system with three genomics-based CDS applications. The results demonstrate that it is possible to resolve a long-standing dilemma in genomic data privacy and accessibility, by using a principled cryptographical framework and a mathematical representation of genome data and CDS questions.
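The principle of computing directly on encrypted data can be made concrete with a toy additively homomorphic scheme. This is a textbook Paillier construction with tiny fixed primes, for illustration only: it is not secure at this size, and it is not the system's actual cryptographic framework.

```python
import math
import random

# Toy Paillier cryptosystem (tiny fixed primes -- NOT secure).  Its additive
# homomorphism lets a server combine encrypted per-variant risk-allele
# counts without ever seeing a plaintext genotype.
p, q = 293, 433
n, n2 = p * q, (p * q) ** 2
lam = math.lcm(p - 1, q - 1)          # Carmichael function of n
mu = pow(lam, -1, n)                  # valid because g = n + 1 below

def encrypt(m):
    r = random.randrange(1, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(n + 1, m, n2) * pow(r, n, n2)) % n2   # g = n + 1

def decrypt(c):
    return (pow(c, lam, n2) - 1) // n * mu % n

# Client encrypts genotype dosages at four loci; the server multiplies the
# ciphertexts, which corresponds to adding the underlying counts.
counts = [2, 1, 0, 1]
total_ct = 1
for c in (encrypt(m) for m in counts):
    total_ct = total_ct * c % n2
total = decrypt(total_ct)             # equals sum(counts), computed blind
```

The server here only ever multiplies ciphertexts; decryption of the aggregate happens back on the client, mirroring the client-server split the abstract describes.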
Subjects
Clinical Decision Support Systems , Computer Security , Genome-Wide Association Study , Genomics , Humans , Privacy
ABSTRACT
OBJECTIVES: Pharmacogenetics (PGx) is increasingly important in individualizing therapeutic management plans, but is often implemented apart from other types of medication clinical decision support (CDS). The lack of integration of PGx into existing CDS may result in incomplete interaction information, which may pose patient safety concerns. We sought to develop a cloud-based orchestrated medication CDS service that integrates PGx with a broad set of drug screening alerts and evaluate it through a clinician utility study. METHODS: We developed the PillHarmonics service for implementation per the CDS Hooks protocol, algorithmically integrating a wide range of drug interaction knowledge using cloud-based screening services from First Databank (drug-drug/allergy/condition), PharmGKB (drug-gene), and locally curated content (drug-renal/hepatic/race). We performed a user study, presenting 13 clinicians and pharmacists with a prototype of the system's usage in synthetic patient scenarios. We collected feedback via a standard questionnaire and structured interview. RESULTS: Clinician assessment of PillHarmonics via the Technology Acceptance Model questionnaire shows significant evidence of perceived utility. Thematic analysis of structured interviews revealed that aggregated knowledge, concise actionable summaries, and information accessibility were highly valued, and that clinicians would use the service in their practice. CONCLUSION: Medication safety and optimizing efficacy of therapy regimens remain significant issues. A comprehensive medication CDS system that leverages patient clinical and genomic data to perform a wide range of interaction checking and presents a concise and holistic view of medication knowledge back to the clinician is feasible and perceived as highly valuable for more informed decision-making. Such a system can potentially address many of the challenges identified with current medication-related CDS.
Subjects
Clinical Decision Support Systems , Pharmacogenetics , Humans , Cloud Computing
ABSTRACT
While VCF-formatted files are the lingua franca of next-generation sequencing, most EHRs do not provide native VCF support. As a result, labs often must send non-structured PDF reports to the EHR. On the other hand, while FHIR adoption is growing, most EHRs support HL7 interoperability standards, particularly those based on the HL7 Version 2 (HL7v2) standard. The HL7 Version 2 genomics component of the HL7 Laboratory Results Interface (HL7v2 LRI) standard specifies a formalism for the structured communication of genomic data from lab to EHR. We previously described an open-source tool (vcf2fhir) that converts VCF files into HL7 FHIR format. In this report, we describe how the utility has been extended to output HL7v2 LRI data that contains both variants and variant annotations (e.g., predicted phenotypes and therapeutic implications). Using this HL7v2 converter, we implemented an automated pipeline for moving structured genomic data from the clinical laboratory to the EHR. We developed an open source hl7v2GenomicsExtractor that converts genomic interpretation report files into a series of HL7v2 observations conformant to HL7v2 LRI. We further enhanced the converter to produce output conformant to Epic's genomic import specification and to support alternative input formats. An automated pipeline for pushing standards-based structured genomic data directly into the EHR was successfully implemented; genetic variant data and clinical annotations are now both available to view in the EHR through Epic's genomics module. Issues encountered in the development and deployment of the HL7v2 converter revolved primarily around data variability, chiefly the lack of a standardized representation of data elements across the various genomic interpretation report files. The technical implementation of an HL7v2 message transformation to feed genomic variant and clinical annotation data into an EHR has been successful.
In addition to genetic variant data, the implementation described here releases the valuable asset of clinically relevant genomic annotations provided by labs from static PDFs to calculable, structured data in EHR systems.
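The variant portion of such an HL7v2 feed can be pictured as pipe-delimited OBX segments. A minimal illustration, assuming LOINC-coded observation identifiers for gene studied and DNA change; real LRI-conformant messages carry many more fields, segments, and grouping conventions.

```python
def variant_to_obx(sub_id, gene, hgvs_c):
    """Emit simplified OBX segments for one variant, grouped by a shared sub-ID."""
    return [
        # OBX|set-ID|value type|observation identifier|sub-ID|value
        f"OBX|1|CWE|48018-6^Gene studied [ID]^LN|{sub_id}|{gene}^^HGNC",
        f"OBX|2|CWE|48004-6^DNA change (c.HGVS)^LN|{sub_id}|{hgvs_c}^^HGVS.c",
    ]

segments = variant_to_obx("1a", "BRAF", "NM_004333.4:c.1799T>A")
message_chunk = "\r".join(segments)  # HL7v2 segments are CR-delimited
```

Clinical annotations (predicted phenotypes, therapeutic implications) would travel as further observation segments sharing the same sub-ID, which is how the variant and its annotations stay linked after import.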
ABSTRACT
OBJECTIVE: Enabling clinicians to formulate individualized clinical management strategies from the sea of molecular data remains a fundamentally important but daunting task. Here, we describe efforts towards a new paradigm in genomics-electronic health record (EHR) integration, using a standardized suite of FHIR Genomics Operations that encapsulates the complexity of molecular data so that precision medicine solution developers can focus on building applications. MATERIALS AND METHODS: FHIR Genomics Operations essentially "wrap" a genomics data repository, presenting a uniform interface to applications. More importantly, operations encapsulate the complexity of data within a repository and normalize redundant data representations-particularly relevant in genomics, where a tremendous amount of raw data exists in often-complex non-FHIR formats. RESULTS: Fifteen FHIR Genomics Operations have been developed, designed to support a wide range of clinical scenarios, such as variant discovery; clinical trial matching; hereditary condition and pharmacogenomic screening; and variant reanalysis. Operations are being matured through the HL7 balloting process, connectathons, pilots, and the HL7 FHIR Accelerator program. DISCUSSION: Next-generation sequencing can identify thousands to millions of variants, whose clinical significance can change over time as our knowledge evolves. To manage such a large volume of dynamic and complex data, new models of genomics-EHR integration are needed. Qualitative observations to date suggest that freeing application developers from the need to understand the nuances of genomic data, and instead basing applications on standardized APIs, can not only accelerate integration but also dramatically expand the applications of omics data in driving precision care at scale for all.
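From an application developer's perspective, one such operation reduces to a parameterized REST call. A sketch of invoking $find-subject-variants: the base URL is a placeholder, and while the operation name and its subject/ranges parameters follow the published FHIR Genomics Operations work, consult the specification for the authoritative signature.

```python
from urllib.parse import urlencode

def find_subject_variants_url(base, patient_id, ranges):
    """Build the request URL for a $find-subject-variants operation call."""
    query = urlencode({"subject": patient_id, "ranges": ",".join(ranges)})
    return f"{base}/$find-subject-variants?{query}"

url = find_subject_variants_url(
    "https://genomics.example.org/fhir",   # placeholder endpoint
    "patient-123",                         # hypothetical subject ID
    ["NC_000007.14:140713327-140924929"])  # BRAF locus, GRCh38
```

The application never touches the underlying VCF or other raw formats; the server behind the operation resolves the query against its repository and returns FHIR resources.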
Subjects
Electronic Health Records , Genomics , Time , Health Level Seven
ABSTRACT
Clinical Document Architecture, Release One (CDA R1), became an American National Standards Institute (ANSI)-approved HL7 Standard in November 2000, representing the first specification derived from the Health Level 7 (HL7) Reference Information Model (RIM). CDA, Release Two (CDA R2), became an ANSI-approved HL7 Standard in May 2005 and is the subject of this article, where the focus is primarily on how the standard has evolved since CDA R1, particularly in the area of semantic representation of clinical events. CDA is a document markup standard that specifies the structure and semantics of a clinical document (such as a discharge summary or progress note) for the purpose of exchange. A CDA document is a defined and complete information object that can include text, images, sounds, and other multimedia content. It can be transferred within a message and can exist independently, outside the transferring message. CDA documents are encoded in Extensible Markup Language (XML), and they derive their machine processable meaning from the RIM, coupled with terminology. The CDA R2 model is richly expressive, enabling the formal representation of clinical statements (such as observations, medication administrations, and adverse events) such that they can be interpreted and acted upon by a computer. On the other hand, CDA R2 offers a low bar for adoption, providing a mechanism for simply wrapping a non-XML document with the CDA header or for creating a document with a structured header and sections containing only narrative content. The intent is to facilitate widespread adoption, while providing a mechanism for incremental semantic interoperability.
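The "low bar for adoption" shape described above, a coded header plus sections containing only narrative, can be sketched in a few lines. The LOINC document-type code is real; everything else here is a placeholder skeleton, not a conformant CDA R2 document.

```python
import xml.etree.ElementTree as ET

# Build a minimal CDA-like skeleton: coded header, one narrative-only section.
NS = "urn:hl7-org:v3"
ET.register_namespace("", NS)
doc = ET.Element(f"{{{NS}}}ClinicalDocument")
ET.SubElement(doc, f"{{{NS}}}code", code="18842-5",
              codeSystem="2.16.840.1.113883.6.1",   # LOINC OID
              displayName="Discharge summary")
ET.SubElement(doc, f"{{{NS}}}title").text = "Discharge Summary"
body = ET.SubElement(ET.SubElement(doc, f"{{{NS}}}component"),
                     f"{{{NS}}}structuredBody")
section = ET.SubElement(ET.SubElement(body, f"{{{NS}}}component"),
                        f"{{{NS}}}section")
ET.SubElement(section, f"{{{NS}}}title").text = "Hospital Course"
ET.SubElement(section, f"{{{NS}}}text").text = (
    "Narrative only: patient admitted with chest pain, ruled out for MI.")
xml_out = ET.tostring(doc, encoding="unicode")
```

Incremental semantic interoperability then means progressively replacing such narrative-only sections with coded clinical statements drawn from the RIM and terminology.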
Subjects
Medical Records Systems, Computerized/standards , Programming Languages , Computer Communication Networks/standards , Forms and Records Control/standards , Medical Record Linkage , Semantics , Terminology as Topic
ABSTRACT
The deployment of sophisticated software tools and electronic health records offers many new opportunities and challenges to support care delivery. One of the key opportunities is to enhance the quality of care with evidence-based medicine (EBM). One of the key challenges is to embed EBM in tools that directly facilitate the process of documentation and care delivery. Since clinicians typically have the option of using free text for most of their documentation, the tools that provide embedded EBM must be at least as efficient as free text. Many requirements must be met in order to effectively embed EBM within clinical content tools and enhance both the usability and the actual use of such tools and clinical content: (1) facilitate the documentation process; (2) facilitate the care delivery process, e.g., make order entry faster; (3) contain recommendations that are highly relevant to the clinical context of an encounter; (4) aid in the capture of discrete coded data. Support for local variation is often key to meeting these objectives and becomes a central factor in helping clinicians shift from unstructured free text to the use of these tools, which support the delivery of EBM. This document describes the central tension between the objective of national standardization and delivery of EBM and the need for regional localization of clinical content. This tension must be thoughtfully managed to maximize the quality of care delivery and associated workflow practices. The key elements of legitimate local variation that must be recognized in order to achieve these goals are described in this document, and the key principles for managing the tensions between generalization and localization are identified.
Subjects
Clinical Decision Support Systems , Evidence-Based Medicine/standards , Practice Guidelines as Topic/standards , Delivery of Health Care, Integrated , Health Maintenance Organizations , Medical Records Systems, Computerized , Organizational Culture , Regional Health Planning
ABSTRACT
This paper describes Kaiser Permanente's (KP) enterprise-wide medical terminology solution, referred to as our Convergent Medical Terminology (CMT). Initially developed to serve the needs of a regional electronic health record, CMT has evolved into a core KP asset, serving as the common terminology across all applications. CMT serves as the definitive source of concept definitions for the organization, provides a consistent structure and access method to all codes used by the organization, and is KP's language of interoperability, with cross-mappings to regional ancillary systems and administrative billing codes. The core of CMT comprises SNOMED CT, laboratory LOINC, and First DataBank drug terminology. These are integrated into a single poly-hierarchically structured knowledge base. Cross map sets provide bi-directional translations between CMT and ancillary applications and administrative billing codes. Context sets provide subsets of CMT for use in specific contexts. Our experience with CMT has led us to conclude that a successful terminology solution requires that: (1) usability considerations are an organizational priority; (2) "interface" terminology is differentiated from "reference" terminology; (3) it be easy for clinicians to find the concepts they need; (4) the immediate value of coded data be apparent to the clinician user; (5) there be a well-defined approach to terminology extensions. Over the past several years, there has been substantial progress made in the domain coverage and standardization of medical terminology. KP has learned to exploit that terminology in ways that are clinician-acceptable and that provide powerful options for data analysis and reporting.
Subjects
Health Maintenance Organizations , Vocabulary, Controlled , Logical Observation Identifiers Names and Codes , Systematized Nomenclature of Medicine , Terminology as Topic , United States
ABSTRACT
BACKGROUND AND OBJECTIVE: Upgrades to electronic health record (EHR) systems scheduled to be introduced in the USA in 2014 will advance document interoperability between care providers. Specifically, the second stage of the federal incentive program for EHR adoption, known as Meaningful Use, requires use of the Consolidated Clinical Document Architecture (C-CDA) for document exchange. In an effort to examine and improve C-CDA based exchange, the SMART (Substitutable Medical Applications and Reusable Technology) C-CDA Collaborative brought together a group of certified EHR and other health information technology vendors. MATERIALS AND METHODS: We examined the machine-readable content of collected samples for semantic correctness and consistency. This included parsing with the open-source BlueButton.js tool, testing with a validator used in EHR certification, scoring with an automated open-source tool, and manual inspection. We also conducted group and individual review sessions with participating vendors to understand their interpretation of C-CDA specifications and requirements. RESULTS: We contacted 107 health information technology organizations and collected 91 C-CDA sample documents from 21 distinct technologies. Manual and automated document inspection led to 615 observations of errors and data expression variation across represented technologies. Based upon our analysis and vendor discussions, we identified 11 specific areas that represent relevant barriers to the interoperability of C-CDA documents. CONCLUSIONS: We identified errors and permissible heterogeneity in C-CDA documents that will limit semantic interoperability. Our findings also point to several practical opportunities to improve C-CDA document quality and exchange in the coming years.
Subjects
Electronic Health Records/standards , Meaningful Use , Medical Record Linkage , Certification , Diffusion of Innovation , Meaningful Use/legislation & jurisprudence , Medical Records Systems, Computerized , Systems Integration , United States
ABSTRACT
Underutilization of glucose data and lack of easy and standardized glucose data collection, analysis, visualization, and guided clinical decision making are key contributors to poor glycemic control among individuals with type 1 diabetes mellitus. An expert panel of diabetes specialists, facilitated by the International Diabetes Center and sponsored by the Helmsley Charitable Trust, met in 2012 to discuss recommendations for standardizing the analysis and presentation of glucose monitoring data, with the initial focus on data derived from continuous glucose monitoring systems. The panel members were introduced to a universal software report, the Ambulatory Glucose Profile, and asked to provide feedback on its content and functionality, both as a research tool and in clinical settings. This article provides a summary of the topics and issues discussed during the meeting and presents recommendations from the expert panel regarding the need to standardize glucose profile summary metrics and the value of a uniform glucose report to aid clinicians, researchers, and patients.
Subjects
Blood Glucose/analysis , Decision Making , Diabetes Mellitus, Type 1/blood , Monitoring, Ambulatory/methods , Practice Guidelines as Topic , Research Design/standards , Blood Glucose Self-Monitoring/standards , Data Display/standards , Decision Making/physiology , Diabetes Mellitus, Type 1/therapy , Humans , Models, Biological , Monitoring, Ambulatory/statistics & numerical data , Reference Standards , Research Design/legislation & jurisprudence , Statistics as Topic/legislation & jurisprudence , Statistics as Topic/standards
ABSTRACT
Underutilization of glucose data and lack of easy and standardized glucose data collection, analysis, visualization, and guided clinical decision making are key contributors to poor glycemic control among individuals with type 1 diabetes. An expert panel of diabetes specialists, facilitated by the International Diabetes Center and sponsored by the Helmsley Charitable Trust, met in 2012 to discuss recommendations for standardization of analysis and presentation of glucose monitoring data, with the initial focus on data derived from continuous glucose monitoring (CGM) systems. The panel members were introduced to a universal software report, the Ambulatory Glucose Profile (AGP), and asked to provide feedback on its content and functionality, both as a research tool and in clinical settings. This paper provides a summary of the topics and issues discussed during the meeting and presents recommendations from the expert panel regarding the need to standardize glucose profile summary metrics and the value of a uniform glucose report to aid clinicians, researchers, and patients.
Subjects
Blood Glucose Self-Monitoring/standards , Blood Glucose/metabolism , Diabetes Mellitus/blood , Hyperglycemia/blood , Hypoglycemia/blood , Monitoring, Ambulatory/standards , Decision Making , Female , Humans , Male , Reference Standards , Software , United States
Subjects
Information Services , Medical Records Systems, Computerized , Information Services/economics , Information Services/organization & administration , Medical Records Systems, Computerized/economics , Medical Records Systems, Computerized/organization & administration , United States
ABSTRACT
'Semantic Interoperability' is a driving objective behind many of Health Level Seven's standards. The objective in this paper is to take a step back, and consider what semantic interoperability means, assess whether or not it has been achieved, and, if not, determine what concrete next steps can be taken to get closer. A framework for measuring semantic interoperability is proposed, using a technique called the 'Single Logical Information Model' framework, which relies on an operational definition of semantic interoperability and an understanding that interoperability improves incrementally. Whether semantic interoperability tomorrow will enable one computer to talk to another, much as one person can talk to another person, is a matter for speculation. It is assumed, however, that what gets measured gets improved, and in that spirit this framework is offered as a means to improvement.
Subjects
Information Systems/standards , International Cooperation , Semantics , Systems Integration , Vocabulary, Controlled , Humans , Information Systems/organization & administration , Program Evaluation , Reference Standards
ABSTRACT
We sought to determine how well the HL7/ASTM Continuity of Care Document (CCD) standard supports the requirements underlying the Joint Commission medication reconciliation recommendations. In particular, the Joint Commission emphasizes that transition points in the continuum of care are vulnerable to communication breakdowns, and that these breakdowns are a common source of medication errors. These transition points are the focus of communication standards, suggesting that CCD can support and enable medication related patient safety initiatives. Data elements needed to support the Joint Commission recommendations were identified and mapped to CCD, and a detailed clinical scenario was constructed. The mapping identified minor gaps, and identified fields present in CCD not specifically identified by Joint Commission, but useful nonetheless when managing medications across transitions of care, suggesting that a closer collaboration between the Joint Commission and standards organizations will be mutually beneficial. The nationally recognized CCD specification provides a standards-based solution for enabling Joint Commission medication reconciliation objectives.
Subjects
Continuity of Patient Care/standards , Medical Record Linkage/standards , Medication Systems/standards , Computer Communication Networks/standards , Drug Therapy , Humans
ABSTRACT
In general, it is very straightforward to store concept identifiers in electronic medical records and represent them in messages. Information models typically specify the fields that can contain coded entries. For each of these fields there may be additional constraints governing exactly which concept identifiers are applicable. However, because modern terminologies such as SNOMED CT are compositional, allowing concept expressions to be pre-coordinated within the terminology or post-coordinated within the medical record, there remains the potential to express a concept in more than one way. Oftentimes, the various representations are similar, but not equivalent. This paper describes an approach for retrieving these pre- and post-coordinated concept expressions: (1) Create concept expressions using a logically well-structured terminology (e.g., SNOMED CT) according to the rules of a well-specified information model (in this paper we use the HL7 RIM); (2) Transform pre- and post-coordinated concept expressions into a normalized form; (3) Transform queries into the same normalized form. The normalized instances can then be directly compared to the query. Several implementation considerations have been identified. Transformations into a normal form and execution of queries that require traversal of hierarchies need to be optimized. A detailed understanding of the information model and the terminology model are prerequisites. Queries based on the semantic properties of concepts are only as complete as the semantic information contained in the terminology model. Despite these considerations, the approach appears powerful and will continue to be refined.
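The three-step approach can be miniaturized as follows. In this toy sketch the concept names and the definitions table are invented, not real SNOMED CT content, and the "normal form" is simply a focus concept plus a frozen set of attribute-value pairs; but it shows how a pre-coordinated code and a post-coordinated expression reduce to directly comparable instances.

```python
# Toy definitions table: expands a pre-coordinated concept into its
# defining focus concept and attribute-value pairs (invented content).
DEFINITIONS = {
    "left-hip-fracture": ("fracture", {("finding-site", "hip"),
                                       ("laterality", "left")}),
}

def normalize(expression):
    """Reduce either representation to (focus, frozenset of attribute pairs)."""
    if isinstance(expression, str):            # pre-coordinated concept code
        focus, attrs = DEFINITIONS.get(expression, (expression, set()))
    else:                                      # post-coordinated (focus, attrs)
        focus, attrs = expression
    return (focus, frozenset(attrs))

# A query in one form matches a record stored in the other form once both
# are normalized, regardless of attribute ordering.
pre = normalize("left-hip-fracture")
post = normalize(("fracture", {("laterality", "left"),
                               ("finding-site", "hip")}))
```

Real normalization must additionally traverse subsumption hierarchies and apply the terminology's concept model, which is where the optimization concerns noted above arise.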