Results 1 - 4 of 4
1.
AMIA Annu Symp Proc ; 2020: 1140-1149, 2020.
Article in English | MEDLINE | ID: mdl-33936490

ABSTRACT

This study developed and evaluated a JSON-LD 1.1 approach to automate the Resource Description Framework (RDF) serialization and deserialization of Fast Healthcare Interoperability Resources (FHIR) data, in preparation for updating the FHIR RDF standard. We first demonstrated that this JSON-LD 1.1 approach can produce the same output as the current FHIR RDF standard. We then used it to test, document, and validate several proposed changes to the FHIR RDF specification, to address usability issues that were uncovered during trial use. This JSON-LD 1.1 approach was found to be effective and more declarative than the existing custom-code-based approach in converting FHIR data from JSON to RDF and vice versa. This approach should enable future FHIR RDF servers to be implemented and maintained more easily.
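The JSON-to-RDF round trip this abstract describes can be illustrated with a minimal sketch in plain Python. This is a hedged stand-in, not the JSON-LD 1.1 @context machinery the study actually uses: the predicate IRIs below follow the FHIR namespace convention, but the triple shapes are simplified for illustration.

```python
# Minimal sketch: serialize a flat FHIR-style JSON resource to N-Triples
# lines and parse them back. Real FHIR RDF is driven by a JSON-LD 1.1
# @context; this hand-rolled mapping only illustrates the round trip.
FHIR = "http://hl7.org/fhir/"

def to_ntriples(resource: dict) -> list[str]:
    """Emit one triple per scalar field of a FHIR-style JSON resource."""
    subject = f"<{FHIR}{resource['resourceType']}/{resource['id']}>"
    triples = []
    for key, value in resource.items():
        if key in ("resourceType", "id"):
            continue
        triples.append(f'{subject} <{FHIR}{key}> "{value}" .')
    return triples

def from_ntriples(triples: list[str], resource_type: str, rid: str) -> dict:
    """Inverse direction: rebuild the JSON object from the triples."""
    resource = {"resourceType": resource_type, "id": rid}
    for t in triples:
        _, predicate, rest = t.split(" ", 2)
        key = predicate.strip("<>").removeprefix(FHIR)
        resource[key] = rest.split('"')[1]
    return resource

patient = {"resourceType": "Patient", "id": "p1",
           "gender": "female", "birthDate": "1980-02-14"}
assert from_ntriples(to_ntriples(patient), "Patient", "p1") == patient
```

The final assertion is the point of the abstract in miniature: a lossless JSON-to-RDF-to-JSON round trip, which the study achieves declaratively via a @context rather than custom code like this.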


Subject(s)
Electronic Health Records/standards, Health Information Interoperability/standards, Programming Languages, Algorithms, Delivery of Health Care, Electronic Health Records/organization & administration, Health Facilities, Health Level Seven, Humans, Information Dissemination, Semantics
2.
J Biomed Semantics ; 7: 10, 2016.
Article in English | MEDLINE | ID: mdl-26949508

ABSTRACT

BACKGROUND: The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in standards development organizations (SDOs). The increasing sophistication and complexity of the BRIDG model require new approaches to managing and using the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates supporting the development of clinical study metadata standards. METHODS: We developed a template generation and visualization system based on an open-source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map"-based tool for visualizing the generated domain-specific templates. We also developed a RESTful web service, informed by the Clinical Information Modeling Initiative (CIMI) reference model, for access to the generated domain-specific templates. RESULTS: A preliminary usability study was performed; all reviewers (n = 3) responded very positively to the evaluation questions on usability and on the system's ability to meet the requirements (average score 4.6). CONCLUSIONS: Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models at the intersection of health care and clinical research.
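The template-generation step over an RDF store can be sketched with a toy in-memory triple store. The bridg:* and iso21090:* terms below are hypothetical illustrations, not real BRIDG or ISO 21090 IRIs, and the pattern matcher is a minimal stand-in for a SPARQL query against the store backend.

```python
# Toy triple store with wildcard pattern matching, sketching how a
# domain-specific template ({attribute: datatype}) might be derived
# from a BRIDG-like RDF model. All term names here are hypothetical.
TRIPLES = [
    ("bridg:StudySubject", "rdf:type", "bridg:Class"),
    ("bridg:StudySubject", "bridg:hasAttribute", "bridg:statusCode"),
    ("bridg:StudySubject", "bridg:hasAttribute", "bridg:birthDate"),
    ("bridg:statusCode", "bridg:hasDatatype", "iso21090:CD"),
    ("bridg:birthDate", "bridg:hasDatatype", "iso21090:TS"),
]

def match(s=None, p=None, o=None):
    """Return all triples matching the pattern; None is a wildcard."""
    return [(ts, tp, to) for ts, tp, to in TRIPLES
            if s in (None, ts) and p in (None, tp) and o in (None, to)]

def template_for(cls):
    """Build an {attribute: datatype} template for one domain class."""
    template = {}
    for _, _, attr in match(s=cls, p="bridg:hasAttribute"):
        datatypes = match(s=attr, p="bridg:hasDatatype")
        template[attr] = datatypes[0][2] if datatypes else None
    return template

print(template_for("bridg:StudySubject"))
# {'bridg:statusCode': 'iso21090:CD', 'bridg:birthDate': 'iso21090:TS'}
```

In the study, a template like the returned dictionary would then be rendered in the "mind map" visualization tool or served through the CIMI-informed REST interface.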


Subject(s)
Internet, Medical Informatics/methods, Medical Informatics/standards, Semantics, Biomedical Research, Humans, Models, Theoretical, Reference Standards
3.
J Am Med Inform Assoc ; 20(e2): e341-8, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24190931

ABSTRACT

RESEARCH OBJECTIVE: To develop a scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. MATERIALS AND METHODS: Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems, Mayo Clinic and Intermountain Healthcare, were used for development and validation. Extracted information was standardized and normalized to Meaningful Use (MU)-conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. RESULTS: Using CEMs and open-source natural language processing and terminology services engines, namely the Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services 2 (CTS2), we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing, on a randomly selected cohort of 273 Mayo Clinic patients, a QDM-based MU quality measure that determines the percentage of patients aged 18-75 with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation confirmed that all identified patients met the QDM-based criteria.
CONCLUSIONS: End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular, open-source resources for enabling secondary use of EHR data through normalization into a standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts.
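The denominator/numerator logic of the quality measure described above can be sketched directly: patients aged 18-75 with diabetes form the denominator, and those whose most recent LDL result in the measurement year was <100 mg/dL form the numerator. The record layout below is a hypothetical stand-in for normalized CEM instances, not the platform's actual data model.

```python
# Sketch of the QDM-based quality-measure logic on normalized records.
# Each patient dict is a hypothetical stand-in for CEM-normalized data;
# ldl_results holds (ISO date, mg/dL) pairs from the measurement year.
def evaluate_measure(patients):
    denominator, numerator = [], []
    for p in patients:
        if not (18 <= p["age"] <= 75 and p["has_diabetes"]):
            continue  # fails the denominator criteria
        denominator.append(p["id"])
        ldl = p["ldl_results"]
        # max over (date, value) pairs picks the most recent result,
        # since ISO 8601 dates sort lexicographically
        if ldl and max(ldl)[1] < 100:
            numerator.append(p["id"])
    return denominator, numerator

cohort = [
    {"id": "a", "age": 52, "has_diabetes": True,
     "ldl_results": [("2012-03-01", 130), ("2012-11-15", 92)]},
    {"id": "b", "age": 67, "has_diabetes": True,
     "ldl_results": [("2012-06-10", 118)]},
    {"id": "c", "age": 80, "has_diabetes": True, "ldl_results": []},
]
den, num = evaluate_measure(cohort)
assert (den, num) == (["a", "b"], ["a"])
```

Patient "a" qualifies for both counts because the most recent result (92 mg/dL) beats the earlier one (130 mg/dL); patient "b" fails the numerator; patient "c" fails the age criterion, mirroring how the platform's 273-patient run split into 21 denominator and 18 numerator cases.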


Subject(s)
Data Mining, Electronic Health Records/standards, Medical Informatics Applications, Natural Language Processing, Phenotype, Algorithms, Biomedical Research, Computer Security, Humans, Software, Vocabulary, Controlled
4.
AMIA Annu Symp Proc ; 2012: 532-41, 2012.
Article in English | MEDLINE | ID: mdl-23304325

ABSTRACT

With increasing adoption of electronic health records (EHRs), the need for formal representations of EHR-driven phenotyping algorithms has been recognized for some time. The recently proposed Quality Data Model (QDM) from the National Quality Forum (NQF) provides an information model and a grammar intended to represent data collected during routine clinical care in EHRs, as well as the basic logic required to express the algorithmic criteria for phenotype definitions. The QDM is further aligned with Meaningful Use standards to ensure that the clinical data and algorithmic criteria are represented in a consistent, unambiguous, and reproducible manner. However, phenotype definitions represented in QDM, while structured, cannot be executed readily on existing EHRs; human interpretation and subsequent implementation are required. To address this need, the current study investigates the open-source JBoss® Drools rules engine for automatic translation of QDM criteria into rules for execution over EHR data. In particular, using the Apache Foundation's Unstructured Information Management Architecture (UIMA) platform, we developed a translator tool for converting QDM-defined phenotyping algorithm criteria into executable Drools rules scripts, and demonstrated their execution on real patient data from Mayo Clinic to identify cases of coronary artery disease and diabetes. To the best of our knowledge, this is the first study illustrating a framework and an approach for executing phenotyping criteria modeled in QDM using the Drools business rules management system.
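The criteria-to-rules translation this abstract describes can be mimicked in miniature: compile a declarative list of criteria into a single executable predicate, much as the UIMA-based translator emits Drools rules from QDM. The criterion encoding below is hypothetical, not real QDM XML or DRL syntax.

```python
# Sketch: compile declarative QDM-style criteria into an executable
# rule, mimicking in plain Python what the study's translator does for
# Drools. The (field, op, arg) encoding is a hypothetical illustration.
OPS = {
    "present": lambda value, _: value is not None,   # element exists
    "lt": lambda value, threshold: value is not None and value < threshold,
}

def compile_criteria(criteria):
    """Turn a list of (field, op, arg) criteria into one predicate."""
    def rule(patient):
        return all(OPS[op](patient.get(field), arg)
                   for field, op, arg in criteria)
    return rule

# Hypothetical phenotype: a recorded CAD diagnosis plus LDL < 100 mg/dL.
cad_rule = compile_criteria([("cad_diagnosis", "present", None),
                             ("ldl_mg_dl", "lt", 100)])

patients = [{"id": 1, "cad_diagnosis": "414.01", "ldl_mg_dl": 88},
            {"id": 2, "cad_diagnosis": None, "ldl_mg_dl": 88}]
cases = [p["id"] for p in patients if cad_rule(p)]
assert cases == [1]
```

A production rules engine like Drools adds what this sketch lacks: pattern matching over working memory, conflict resolution, and incremental re-evaluation as facts change, which is why the study targets Drools rather than ad hoc code.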


Subject(s)
Algorithms, Electronic Health Records, Expert Systems, Coronary Artery Disease/diagnosis, Diabetes Mellitus/diagnosis, Humans, Knowledge Bases, Software