Results 1 - 20 of 48
1.
J Biomed Inform ; 117: 103755, 2021 05.
Article in English | MEDLINE | ID: mdl-33781919

ABSTRACT

Resource Description Framework (RDF) is one of the three standardized data formats in the HL7 Fast Healthcare Interoperability Resources (FHIR) specification and is being used by healthcare and research organizations to join FHIR and non-FHIR data. However, RDF had not previously been integrated into popular FHIR tooling packages, hindering the adoption of FHIR RDF in the semantic web and other communities. The objective of this study is to develop and evaluate a Java-based FHIR RDF data transformation toolkit to facilitate the use and validation of FHIR RDF data. We extended the popular HAPI FHIR tooling to add RDF support, thus enabling FHIR data in XML or JSON to be transformed to or from RDF. We also developed an RDF Shape Expressions (ShEx)-based validation framework to verify conformance of FHIR RDF data to the ShEx schemas provided in the FHIR specification for FHIR versions R4 and R5. The effectiveness of ShEx validation was demonstrated by testing it against the 2693 FHIR R4 examples and 2197 FHIR R5 examples included in the FHIR specification. Five types of errors were revealed in the R5 examples (missing property, unknown element, missing resourceType, invalid attribute value, and unknown resource name), demonstrating the value of ShEx in quality assurance during the evolving R5 development. This FHIR RDF data transformation and validation framework, based on HAPI and ShEx, is robust and ready for community use in adopting FHIR RDF, improving FHIR data quality, and evolving the FHIR specification.
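The JSON-to-RDF transformation described above can be sketched very loosely in a few lines. This is an illustrative, stdlib-only Python sketch of the general idea, using a toy Patient fragment and handling only top-level scalar properties; it is not the HAPI FHIR implementation, which is Java-based and follows the full FHIR RDF mapping rules.

```python
import json

# Toy FHIR Patient resource (illustrative fragment, not a complete resource).
patient_json = '{"resourceType": "Patient", "id": "example", "gender": "female"}'

FHIR = "http://hl7.org/fhir/"  # FHIR namespace used by the specification

def fhir_json_to_triples(doc: str):
    """Emit (subject, predicate, object) triples for top-level scalar
    properties, mirroring the flavor of the FHIR JSON-to-RDF mapping.
    Real tooling handles nesting, datatypes, and ordered lists; this
    sketch deliberately does not."""
    res = json.loads(doc)
    rtype = res["resourceType"]
    subject = f"<{FHIR}{rtype}/{res['id']}>"
    triples = [(subject, "rdf:type", f"fhir:{rtype}")]
    for key, value in res.items():
        if key in ("resourceType", "id"):
            continue
        if isinstance(value, (str, int, float, bool)):
            triples.append((subject, f"fhir:{rtype}.{key}", json.dumps(value)))
    return triples

for t in fhir_json_to_triples(patient_json):
    print(" ".join(t), ".")
```

A real round-trip (RDF back to JSON) additionally requires the ordering and typing metadata that the FHIR RDF representation carries, which is exactly what the extended HAPI tooling manages.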


Subjects
Delivery of Health Care , Electronic Health Records
2.
J Biomed Inform ; 100S: 100002, 2019.
Article in English | MEDLINE | ID: mdl-34384571

ABSTRACT

The word "ontology" was introduced to information systems when only closed-world reasoning systems were available. It was "borrowed" from philosophy, but literal links to its philosophical meaning were explicitly disavowed. Since then, open-world reasoning systems based on description logics have been developed, OWL has become a standard, and philosophical issues have been raised. The result has too often been confusion. The question "What statements are ontological?" receives a variety of answers. A clearer vocabulary, better suited to today's information systems, is needed. The project to base ICD-11 on a "Common Ontology" required addressing this confusion. This paper sets out to systematise the lessons of that experience and subsequent discussions. We explore the semantics of open-world and closed-world systems. For specifying knowledge bases and software, we propose "invariants" or, more fully, "the first-order invariant part of the background domain knowledge base" as an alternative to the words "ontology" and "ontological." We discuss the role and limitations of OWL and description logics and how they are complementary to closed-world systems such as frames and to less formal "knowledge organisation systems". We illustrate why the conventions of classifications such as ICD cannot be formulated directly in OWL but can be linked to OWL knowledge bases by queries. We contend that while OWL and description logics are major advances for representing invariants and terminologies, they must be combined with other technologies to represent broader background knowledge faithfully. The ICD-11 architecture is one approach. We argue that such hybrid architectures can and should be developed further.

3.
J Biomed Inform ; 67: 90-100, 2017 03.
Article in English | MEDLINE | ID: mdl-28213144

ABSTRACT

BACKGROUND: HL7 Fast Healthcare Interoperability Resources (FHIR) is an emerging open standard for the exchange of electronic healthcare information. FHIR resources are defined in a specialized modeling language. FHIR instances can currently be represented in either XML or JSON. The FHIR and Semantic Web communities are developing a third FHIR instance representation format in Resource Description Framework (RDF). Shape Expressions (ShEx), a formal RDF data constraint language, is a candidate for describing and validating the FHIR RDF representation. OBJECTIVE: Create a FHIR to ShEx model transformation and assess its ability to describe and validate FHIR RDF data. METHODS: We created the methods and tools that generate the ShEx schemas modeling the FHIR to RDF specification being developed by HL7 ITS/W3C RDF Task Force, and evaluated the applicability of ShEx in the description and validation of FHIR to RDF transformations. RESULTS: The ShEx models contributed significantly to workgroup consensus. Algorithmic transformations from the FHIR model to ShEx schemas and FHIR example data to RDF transformations were incorporated into the FHIR build process. ShEx schemas representing 109 FHIR resources were used to validate 511 FHIR RDF data examples from the Standards for Trial Use (STU 3) Ballot version. We were able to uncover unresolved issues in the FHIR to RDF specification and detect 10 types of errors and root causes in the actual implementation. The FHIR ShEx representations have been included in the official FHIR web pages for the STU 3 Ballot version since September 2016. DISCUSSION: ShEx can be used to define and validate the syntax of a FHIR resource, which is complementary to the use of RDF Schema (RDFS) and Web Ontology Language (OWL) for semantic validation. CONCLUSION: ShEx proved useful for describing a standard model of FHIR RDF data. The combination of a formal model and a succinct format enabled comprehensive review and automated validation.
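To illustrate the kind of checking that ShEx-based validation performs, here is a drastically simplified, hypothetical shape check in Python. The shape table and the error strings are invented for illustration; real ShEx schemas express triple constraints, cardinalities, value sets, and recursive shape references, none of which is modeled here.

```python
# A drastically simplified, closed-shape check in the spirit of ShEx:
# each shape lists required and optional properties; anything else is
# flagged as an "unknown element". Shape contents are invented.
SHAPES = {
    "Patient": {"required": {"gender"}, "optional": {"birthDate"}},
}

def validate(resource_type: str, properties: set):
    """Return a list of human-readable validation errors."""
    shape = SHAPES[resource_type]
    errors = []
    for prop in shape["required"] - properties:
        errors.append(f"missing property: {prop}")
    for prop in properties - shape["required"] - shape["optional"]:
        errors.append(f"unknown element: {prop}")
    return errors

print(validate("Patient", {"gender", "favoriteColor"}))
# -> ['unknown element: favoriteColor']
```

The value of the real thing is that the schemas are generated from the FHIR model itself, so validation failures point at genuine divergences between instances and the specification.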


Subjects
Algorithms , Internet , Semantics , Electronic Health Records , Humans
4.
J Biomed Inform ; 62: 232-42, 2016 08.
Article in English | MEDLINE | ID: mdl-27392645

ABSTRACT

The Quality Data Model (QDM) is an information model developed by the National Quality Forum for representing electronic health record (EHR)-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing EHR-driven phenotype algorithms for clinical research. However, the current QDM specification is available only as descriptive documents suitable for human readability and interpretation, not for machine consumption. The objective of the present study is to develop and evaluate a data element repository (DER) providing machine-readable QDM data element service APIs to support phenotype algorithm authoring and execution. We used the ISO/IEC 11179 metadata standard to capture the structure of each data element, and leveraged Semantic Web technologies to facilitate semantic representation of these metadata. We observed a number of underspecified areas in the QDM, including the lack of model constraints and pre-defined value sets. We propose harmonization with the models developed in HL7 Fast Healthcare Interoperability Resources (FHIR) and the Clinical Information Modeling Initiative (CIMI) to enhance the QDM specification and enable extensibility and better coverage of the DER. We also compared the DER with the existing QDM implementation utilized within the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach.


Subjects
Algorithms , Electronic Health Records , Phenotype , Biomedical Research , Databases, Factual , Humans , Semantics
5.
J Biomed Inform ; 46(1): 128-38, 2013 Feb.
Article in English | MEDLINE | ID: mdl-23026232

ABSTRACT

Terminologies and ontologies are increasingly prevalent in healthcare and biomedicine. However, they suffer from inconsistent renderings, distribution formats, and syntax that make building applications on common terminology services challenging. To address the problem, one could posit a shared representation syntax, associated schema, and tags. We identified a set of commonly used elements in biomedical ontologies and terminologies based on our experience with the Common Terminology Services 2 (CTS2) Specification as well as the Lexical Grid (LexGrid) project. We propose guidelines for precisely such a shared terminology model, and recommend tags assembled from SKOS, OWL, Dublin Core, RDF Schema, and DCMI meta-terms. We divide these guidelines into lexical information (e.g., synonyms and definitions) and semantic information (e.g., hierarchies). The latter we distinguish for use by informal terminologies vs. formal ontologies. We then evaluate the guidelines against a spectrum of widely used terminologies and ontologies to examine how the lexical guidelines are implemented, and whether our proposed guidelines would enhance interoperability.
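As an illustration of the lexical guidelines, a terminology entry's preferred name, synonyms, and definition might be rendered with SKOS tags along these lines. This is a hedged sketch: the concept URI, labels, and definition are invented, and a real serialization would use a proper RDF library rather than string formatting.

```python
# Illustrative terminology entry; URI and content are invented examples.
concept = {
    "uri": "http://example.org/term/12345",
    "preferred": "Myocardial infarction",
    "synonyms": ["Heart attack", "MI"],
    "definition": "Necrosis of the myocardium caused by an obstruction.",
}

def to_skos(c: dict) -> list:
    """Render lexical information as SKOS statements (Turtle-flavored
    strings), matching the prefLabel/altLabel/definition split the
    guidelines recommend for lexical content."""
    s = f"<{c['uri']}>"
    lines = [f'{s} skos:prefLabel "{c["preferred"]}"@en .']
    for syn in c["synonyms"]:
        lines.append(f'{s} skos:altLabel "{syn}"@en .')
    lines.append(f'{s} skos:definition "{c["definition"]}"@en .')
    return lines

print("\n".join(to_skos(concept)))
```

The semantic side (hierarchies) would use `skos:broader`/`skos:narrower` for informal terminologies and `rdfs:subClassOf` for formal ontologies, which is exactly the distinction the guidelines draw.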


Subjects
Internet , Semantics , Vocabulary, Controlled , Guidelines as Topic
6.
Clin Transl Sci ; 15(8): 1848-1855, 2022 08.
Article in English | MEDLINE | ID: mdl-36125173

ABSTRACT

Within clinical, biomedical, and translational science, an increasing number of projects are adopting graphs for knowledge representation. Graph-based data models elucidate the interconnectedness among core biomedical concepts, enable data structures to be easily updated, and support intuitive queries, visualizations, and inference algorithms. However, knowledge discovery across these "knowledge graphs" (KGs) has remained difficult. Data set heterogeneity and complexity; the proliferation of ad hoc data formats; poor compliance with guidelines on findability, accessibility, interoperability, and reusability; and, in particular, the lack of a universally accepted, open-access model for standardization across biomedical KGs has left the task of reconciling data sources to downstream consumers. Biolink Model is an open-source data model that can be used to formalize the relationships between data structures in translational science. It incorporates object-oriented classification and graph-oriented features. The core of the model is a set of hierarchical, interconnected classes (or categories) and relationships between them (or predicates) representing biomedical entities such as gene, disease, chemical, anatomic structure, and phenotype. The model provides class and edge attributes and associations that guide how entities should relate to one another. Here, we highlight the need for a standardized data model for KGs, describe Biolink Model, and compare it with other models. We demonstrate the utility of Biolink Model in various initiatives, including the Biomedical Data Translator Consortium and the Monarch Initiative, and show how it has supported easier integration and interoperability of biomedical KGs, bringing together knowledge from multiple sources and helping to realize the goals of translational science.


Subjects
Pattern Recognition, Automated , Translational Science, Biomedical , Knowledge
7.
Database (Oxford) ; 2022, 2022 05 25.
Article in English | MEDLINE | ID: mdl-35616100

ABSTRACT

Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM) which addresses these problems by: (i) Introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit. (ii) Defining an easy-to-use simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles. (iii) Implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices. (iv) Providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec. 
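Because SSSOM mappings are a simple table, they can be consumed with nothing more than a TSV reader. A minimal sketch follows; the rows are invented examples, and the column names follow SSSOM's core slots (`subject_id`, `predicate_id`, `object_id`, `mapping_justification`).

```python
import csv, io

# A minimal SSSOM-style mapping table. Rows are invented examples;
# the predicates make the match precision explicit (exact vs. broad).
tsv = """subject_id\tpredicate_id\tobject_id\tmapping_justification
MESH:D009203\tskos:exactMatch\tDOID:5844\tsemapv:ManualMappingCuration
MESH:D001943\tskos:broadMatch\tDOID:1612\tsemapv:LexicalMatching
"""

def load_mappings(text: str):
    """Parse an SSSOM-style TSV into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text), delimiter="\t"))

# Because the format is a plain table, filtering by mapping precision
# needs no ontology tooling at all:
exact = [m for m in load_mappings(tsv) if m["predicate_id"] == "skos:exactMatch"]
print(len(exact))  # -> 1
```

This is the point of design choice (ii) above: the mapping metadata travels in the table itself, so a pipeline that cannot parse OWL can still distinguish an exact match from a merely broad one.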
Database URL: http://w3id.org/sssom/spec.


Subjects
Metadata , Semantic Web , Data Management , Databases, Factual , Workflow
8.
J Biomed Inform ; 44 Suppl 1: S78-S85, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21840422

ABSTRACT

The binding of controlled terminology has been regarded as important for standardization of Common Data Elements (CDEs) in cancer research. However, the potential of such binding has not yet been fully explored, especially its quality assurance aspect. The objective of this study is to explore whether there is a relationship between terminological annotations and the UMLS Semantic Network (SN) that can be exploited to improve those annotations. We profiled the terminological concepts associated with the standard structure of the CDEs of the NCI Cancer Data Standards Repository (caDSR) using the UMLS SN. We processed 17798 data elements and extracted 17526 primary object class/property concept pairs. We identified dominant semantic types for the categories "object class" and "property" and determined that the preponderance of the instances were disjoint (i.e., the intersection of semantic types between the two categories is empty). We then performed a preliminary evaluation of the data elements whose asserted primary object class/property concept pairs conflicted with this observation, i.e., where the semantic type of the object class fell into an SN category typically used by properties, or vice versa. In conclusion, the UMLS SN-based profiling approach is feasible for quality assurance and accessibility of the cancer study CDEs. This approach could provide useful insight into how to build quality assurance mechanisms into a metadata repository.
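The disjointness observation above can be illustrated with plain set operations. The semantic-type sets below are invented stand-ins, not the actual UMLS SN groupings; the point is only the shape of the check that flags suspicious object class/property pairs.

```python
# Invented stand-in sets: semantic types that predominantly annotate
# "object class" concepts vs. "property" concepts. In the study these
# were derived empirically from the caDSR profiling, not hand-written.
OBJECT_CLASS_TYPES = {"Disease or Syndrome", "Anatomical Structure"}
PROPERTY_TYPES = {"Qualitative Concept", "Temporal Concept"}

# The observed pattern: the two categories do not share semantic types.
assert OBJECT_CLASS_TYPES.isdisjoint(PROPERTY_TYPES)

def flag_pair(object_class_type: str, property_type: str) -> bool:
    """Return True when a pair conflicts with the disjointness pattern,
    i.e. the object class carries a property-side type or vice versa."""
    return (object_class_type in PROPERTY_TYPES
            or property_type in OBJECT_CLASS_TYPES)

print(flag_pair("Qualitative Concept", "Temporal Concept"))  # -> True
```

Flagged pairs are exactly the candidates the abstract describes sending to preliminary manual review.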


Subjects
Data Mining/methods , Neoplasms , Unified Medical Language System , Databases, Factual , Humans , Quality Control , Semantics , Vocabulary, Controlled
9.
J Am Med Inform Assoc ; 28(3): 427-443, 2021 03 01.
Article in English | MEDLINE | ID: mdl-32805036

ABSTRACT

OBJECTIVE: Coronavirus disease 2019 (COVID-19) poses societal challenges that require expeditious data and knowledge sharing. Though organizational clinical data are abundant, these are largely inaccessible to outside researchers. Statistical, machine learning, and causal analyses are most successful with large-scale data beyond what is available in any given organization. Here, we introduce the National COVID Cohort Collaborative (N3C), an open science community focused on analyzing patient-level data from many centers. MATERIALS AND METHODS: The Clinical and Translational Science Award Program and scientific community created N3C to overcome technical, regulatory, policy, and governance barriers to sharing and harmonizing individual-level clinical data. We developed solutions to extract, aggregate, and harmonize data across organizations and data models, and created a secure data enclave to enable efficient, transparent, and reproducible collaborative analytics. RESULTS: Organized in inclusive workstreams, we created legal agreements and governance for organizations and researchers; data extraction scripts to identify and ingest positive, negative, and possible COVID-19 cases; a data quality assurance and harmonization pipeline to create a single harmonized dataset; population of the secure data enclave with data, machine learning, and statistical analytics tools; dissemination mechanisms; and a synthetic data pilot to democratize data access. CONCLUSIONS: The N3C has demonstrated that a multisite collaborative learning health network can overcome barriers to rapidly build a scalable infrastructure incorporating multiorganizational clinical data for COVID-19 analytics. We expect this effort to save lives by enabling rapid collaboration among clinicians, researchers, and data scientists to identify treatments and specialized care and thereby reduce the immediate and long-term impacts of COVID-19.


Subjects
COVID-19 , Data Science/organization & administration , Information Dissemination , Intersectoral Collaboration , Computer Security , Data Analysis , Ethics Committees, Research , Government Regulation , Humans , National Institutes of Health (U.S.) , United States
10.
AMIA Annu Symp Proc ; 2020: 1140-1149, 2020.
Article in English | MEDLINE | ID: mdl-33936490

ABSTRACT

This study developed and evaluated a JSON-LD 1.1 approach to automate the Resource Description Framework (RDF) serialization and deserialization of Fast Healthcare Interoperability Resources (FHIR) data, in preparation for updating the FHIR RDF standard. We first demonstrated that this JSON-LD 1.1 approach can produce the same output as the current FHIR RDF standard. We then used it to test, document and validate several proposed changes to the FHIR RDF specification, to address usability issues that were uncovered during trial use. This JSON-LD 1.1 approach was found to be effective and more declarative than the existing custom-code-based approach, in converting FHIR data from JSON to RDF and vice versa. This approach should enable future FHIR RDF servers to be implemented and maintained more easily.
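The contrast with custom conversion code can be sketched as follows: in the JSON-LD approach, the mapping from JSON keys to RDF predicates lives in data (the `@context`), not in code. This toy expander handles only flat, string-valued term definitions; a real JSON-LD 1.1 processor additionally handles nesting, `@type` coercion, and containers. The context entry shown is an illustrative assumption, not the published FHIR context.

```python
import json

# Declarative mapping: which IRI each JSON key expands to is data,
# so changing the FHIR RDF spec means editing the context, not code.
context = {"gender": "http://hl7.org/fhir/Patient.gender"}  # invented entry
doc = {"gender": "female"}

def expand(doc: dict, context: dict) -> dict:
    """Expand JSON keys to IRIs using a flat term map; keys without a
    term definition pass through unchanged."""
    return {context.get(k, k): v for k, v in doc.items()}

print(json.dumps(expand(doc, context)))
```

Deserialization is the inverse (compaction against the same context), which is why the approach is symmetric between JSON and RDF.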


Subjects
Electronic Health Records/standards , Health Information Interoperability/standards , Programming Languages , Algorithms , Delivery of Health Care , Electronic Health Records/organization & administration , Health Facilities , Health Level Seven , Humans , Information Dissemination , Semantics
11.
J Am Med Inform Assoc ; 16(3): 305-15, 2009.
Article in English | MEDLINE | ID: mdl-19261933

ABSTRACT

Many biomedical terminologies, classifications, and ontological resources such as the NCI Thesaurus (NCIT), International Classification of Diseases (ICD), Systematized Nomenclature of Medicine (SNOMED), Current Procedural Terminology (CPT), and Gene Ontology (GO) have been developed and used to build a variety of IT applications in biology, biomedicine, and health care settings. However, virtually all these resources involve incompatible formats, are based on different modeling languages, and lack appropriate tooling and programming interfaces (APIs), which hinders their wide-scale adoption and usage in a variety of application contexts. The Lexical Grid (LexGrid) project introduced in this paper is an ongoing community-driven initiative, coordinated by the Mayo Clinic Division of Biomedical Statistics and Informatics, designed to bridge this gap using a common terminology model called the LexGrid model. A key aspect of the model is that it accommodates multiple vocabulary and ontology distribution formats and supports multiple data stores for federated vocabulary distribution. The model provides a foundation for building consistent and standardized APIs to access multiple vocabularies that support lexical search queries, hierarchy navigation, and a rich set of features such as recursive subsumption (e.g., get all the children of the concept penicillin). Existing LexGrid implementations include the LexBIG API as well as a reference implementation of the HL7 Common Terminology Services (CTS) specification providing programmatic access via Java, Web, and Grid services.
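The recursive-subsumption query mentioned above ("get all the children of the concept penicillin") reduces to a tree walk. A sketch over an invented in-memory hierarchy follows; LexGrid/LexBIG expose this through service APIs backed by a data store, not a Python dict.

```python
# Invented parent -> children hierarchy for illustration only.
CHILDREN = {
    "penicillin": ["penicillin G", "penicillin V"],
    "penicillin G": ["benzathine penicillin G"],
}

def descendants(concept: str) -> list:
    """Depth-first recursive subsumption: all transitive children."""
    result = []
    for child in CHILDREN.get(concept, []):
        result.append(child)
        result.extend(descendants(child))
    return result

print(descendants("penicillin"))
# -> ['penicillin G', 'benzathine penicillin G', 'penicillin V']
```

A common terminology model makes exactly this kind of query uniform across sources that otherwise ship in incompatible formats.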


Subjects
Information Storage and Retrieval/methods , Information Systems/standards , Software , Vocabulary, Controlled , Information Storage and Retrieval/standards , Models, Theoretical , Systems Integration
12.
AMIA Annu Symp Proc ; 2018: 979-988, 2018.
Article in English | MEDLINE | ID: mdl-30815141

ABSTRACT

HL7 Fast Healthcare Interoperability Resources (FHIR) is rapidly becoming the de facto standard for the exchange of clinical and healthcare-related information. Major EHR vendors and healthcare providers are actively developing transformations between existing EHR databases and their corresponding FHIR representations. Many of these organizations are concurrently creating a second set of transformations from the same sources into integrated data repositories (IDRs). Considerable cost savings could be realized and overall quality could be improved were it possible to transform primary FHIR EHR data directly into an IDR. We developed a FHIR-to-i2b2 transformation toolkit and evaluated the viability of such an approach.


Subjects
Data Warehousing , Datasets as Topic , Electronic Health Records/standards , Health Information Interoperability/standards , Health Level Seven , Biological Ontologies , Humans , Software
13.
Stud Health Technol Inform ; 245: 1327, 2017.
Article in English | MEDLINE | ID: mdl-29295408

ABSTRACT

The OHDSI Common Data Model (CDM) is a deep information model, in which the vocabulary component plays a critical role in enabling consistent coding and querying of clinical data. The objective of this study is to create methods and tools that expose the OHDSI vocabularies and mappings as vocabulary mapping services using two HL7 FHIR core terminology resources, ConceptMap and ValueSet. We discuss the benefits and challenges of building the FHIR-based terminology services.
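A sketch of how OMOP-style mapping rows could be rendered as a FHIR R4 ConceptMap: the single mapping row is invented, the field names follow the ConceptMap resource's group/element/target layout, and a production service would populate many more ConceptMap fields (url, version, publisher, and so on).

```python
import json

# One invented OHDSI-style mapping row: a SNOMED code mapped to ICD-10.
mapping_rows = [
    {"source_code": "44054006", "source_system": "http://snomed.info/sct",
     "target_code": "E11", "target_system": "http://hl7.org/fhir/sid/icd-10"},
]

def to_concept_map(rows: list) -> dict:
    """Group mapping rows by (source system, target system) and emit a
    skeletal FHIR R4 ConceptMap resource."""
    groups = {}
    for r in rows:
        key = (r["source_system"], r["target_system"])
        groups.setdefault(key, []).append(
            {"code": r["source_code"],
             "target": [{"code": r["target_code"], "equivalence": "equivalent"}]})
    return {
        "resourceType": "ConceptMap",
        "status": "draft",
        "group": [{"source": s, "target": t, "element": els}
                  for (s, t), els in groups.items()],
    }

print(json.dumps(to_concept_map(mapping_rows), indent=2))
```

Hard-coding `"equivalence": "equivalent"` is itself a simplification; a faithful service would derive the equivalence from the mapping relationship recorded in the OHDSI vocabulary tables.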


Subjects
Electronic Health Records , Vocabulary, Controlled , Humans , Vocabulary
14.
AMIA Jt Summits Transl Sci Proc ; 2017: 259-267, 2017.
Article in English | MEDLINE | ID: mdl-28815140

ABSTRACT

In this paper, we present a platform known as D2Refine for facilitating clinical research study data element harmonization and standardization. D2Refine is developed on top of OpenRefine (formerly Google Refine) and leverages the simple interface and extensible architecture of OpenRefine. D2Refine empowers the tabular representation of clinical research study data element definitions by allowing them to be easily organized and standardized using reconciliation services. D2Refine builds on the valuable built-in data transformation features of OpenRefine to bring source data sets to a refined state quickly. We implemented the reconciliation services and search capabilities based on the standard Common Terminology Services 2 (CTS2) and the serialization of clinical research study data element definitions into a standard representation using clinical information modeling technology for semantic interoperability. We demonstrate that D2Refine is a useful and promising platform that can help address the emergent need for clinical research study data element harmonization and standardization.

15.
Stud Health Technol Inform ; 245: 887-891, 2017.
Article in English | MEDLINE | ID: mdl-29295227

ABSTRACT

A variety of data models have been developed to provide a standardized data interface that supports organizing clinical research data into a standard structure for building integrated data repositories. HL7 Fast Healthcare Interoperability Resources (FHIR) is emerging as a next-generation standards framework for facilitating health care and electronic health records-based data exchange. The objective of this study was to design and assess a consensus-based approach for harmonizing the OHDSI CDM with HL7 FHIR. We leveraged the FHIR W5 (Who, What, When, Where, and Why) Classification System to design the harmonization approaches, and assessed their utility in achieving consensus among curators using a standard inter-rater agreement measure. Moderate agreement was achieved for model-level harmonization (kappa = 0.50), whereas only fair agreement was achieved for property-level harmonization (kappa = 0.21). FHIR W5 is a useful tool for designing harmonization approaches between data models and FHIR, and for facilitating consensus achievement.
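The inter-rater agreement measure used here, Cohen's kappa, is straightforward to compute: kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed agreement and p_e the agreement expected by chance from each rater's label frequencies. A self-contained sketch with invented labels:

```python
from collections import Counter

def cohens_kappa(rater_a: list, rater_b: list) -> float:
    """Cohen's kappa for two raters labeling the same items."""
    n = len(rater_a)
    # Observed agreement: fraction of items where the raters concur.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement from each rater's marginal label frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Invented curator decisions on four candidate harmonizations.
a = ["map", "map", "no-map", "map"]
b = ["map", "no-map", "no-map", "map"]
print(round(cohens_kappa(a, b), 2))  # -> 0.5
```

On the conventional Landis-Koch scale, 0.50 sits at the boundary of "moderate" agreement, which matches how the abstract characterizes the model-level result.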


Subjects
Consensus , Electronic Health Records , Humans
16.
J Biomed Semantics ; 8(1): 19, 2017 Jun 05.
Article in English | MEDLINE | ID: mdl-28583204

ABSTRACT

BACKGROUND: Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capturing and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of this study is to develop and evaluate a cancer genome study metadata management system that serves as a key infrastructure in supporting clinical information modeling in cancer genome study domains. METHODS: We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model is used to represent reusable model elements (mini-Archetypes). RESULTS: We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-Archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful in support of building detailed clinical models. CONCLUSION: Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.


Subjects
Genomics/methods , Metadata , Neoplasms/genetics , Semantic Web , Humans
17.
OMICS ; 10(2): 185-98, 2006.
Article in English | MEDLINE | ID: mdl-16901225

ABSTRACT

The National Center for Biomedical Ontology is a consortium that comprises leading informaticians, biologists, clinicians, and ontologists, funded by the National Institutes of Health (NIH) Roadmap, to develop innovative technology and methods that allow scientists to record, manage, and disseminate biomedical information and knowledge in machine-processable form. The goals of the Center are (1) to help unify the divergent and isolated efforts in ontology development by promoting high-quality open-source, standards-based tools to create, manage, and use ontologies, (2) to create new software tools so that scientists can use ontologies to annotate and analyze biomedical data, (3) to provide a national resource for the ongoing evaluation, integration, and evolution of biomedical ontologies and associated tools and theories in the context of driving biomedical projects (DBPs), and (4) to disseminate the tools and resources of the Center and to identify, evaluate, and communicate best practices of ontology development to the biomedical community. Through the research activities within the Center, collaborations with the DBPs, and interactions with the biomedical community, our goal is to help scientists work more effectively in the e-science paradigm, enhancing experiment design, experiment execution, data analysis, information synthesis, hypothesis generation and testing, and the understanding of human disease.


Subjects
Biomedical Research/standards , National Institutes of Health (U.S.) , Software , Internet , Semantics , United States
18.
J Biomed Semantics ; 7: 10, 2016.
Article in English | MEDLINE | ID: mdl-26949508

ABSTRACT

BACKGROUND: The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in the standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model requires new approaches to the management and utilization of the underlying semantics to harmonize domain-specific standards. The objective of this study is to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. METHODS: We developed a template generation and visualization system based on an open-source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map"-based tool for the visualization of generated domain-specific templates. We also developed a RESTful Web Service, informed by the Clinical Information Modeling Initiative (CIMI) reference model, for access to the generated domain-specific templates. RESULTS: A preliminary usability study was performed, and all reviewers (n = 3) responded very positively to the evaluation questions regarding usability and the capability of meeting the system requirements (average score 4.6). CONCLUSIONS: Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models in the intersection of health care and clinical research.


Subjects
Internet , Medical Informatics/methods , Medical Informatics/standards , Semantics , Biomedical Research , Humans , Models, Theoretical , Reference Standards
19.
Stud Health Technol Inform ; 223: 267-72, 2016.
Article in English | MEDLINE | ID: mdl-27139413

ABSTRACT

The goal of this work is to contribute to smooth and semantically sound interoperability between ICD-11 (International Classification of Diseases, 11th revision, Joint Linearization for Mortality, Morbidity and Statistics) and SNOMED CT (SCT). To guarantee such interoperation between a classification characterized by a single hierarchy of mutually exclusive and exhaustive classes, as is the JLMMS successor of ICD-10, on the one hand, and the multi-hierarchical, ontology-based clinical terminology SCT on the other hand, we use ontology axioms that logically express generalizable truths. This is expressed by the compositional grammar of SCT, together with queries on axioms of SCT. We test the feasibility of the method on the circulatory chapter of ICD-11 JLMMS and present limitations and results.


Subjects
International Classification of Diseases/standards , Systematized Nomenclature of Medicine , Linguistics
20.
AMIA Annu Symp Proc ; 2016: 1119-1128, 2016.
Article in English | MEDLINE | ID: mdl-28269909

ABSTRACT

Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionaries' metadata elements presents challenges for harmonizing similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries in the form of standardized archetypes can help overcome this problem. The Archetype Modeling Language (AML), as developed by the Clinical Information Modeling Initiative (CIMI), can serve as a common format for the representation of data dictionary models. We mapped three different data dictionaries (identified from dbGaP, PheKB and TCGA) onto AML archetypes by aligning dictionary variable definitions with AML archetype elements. The near-complete alignment of the data dictionaries helped map them into valid AML models that captured all data dictionary model metadata. The outcome of this work will help subject matter experts harmonize data models for quality, semantic interoperability and better downstream data integration.


Subjects
Biomedical Research/standards , Databases, Factual/standards , Metadata/standards , Software