Results 1 - 20 of 33
1.
J Biomed Inform; 117: 103755, 2021 May.
Article in English | MEDLINE | ID: mdl-33781919

ABSTRACT

Resource Description Framework (RDF) is one of the three standardized data formats in the HL7 Fast Healthcare Interoperability Resources (FHIR) specification and is being used by healthcare and research organizations to join FHIR and non-FHIR data. However, RDF previously had not been integrated into popular FHIR tooling packages, hindering the adoption of FHIR RDF in the Semantic Web and other communities. The objective of this study was to develop and evaluate a Java-based FHIR RDF data transformation toolkit to facilitate the use and validation of FHIR RDF data. We extended the popular HAPI FHIR tooling to add RDF support, enabling FHIR data in XML or JSON to be transformed to or from RDF. We also developed an RDF Shape Expressions (ShEx)-based validation framework to verify the conformance of FHIR RDF data to the ShEx schemas provided in the FHIR specification for FHIR versions R4 and R5. The effectiveness of ShEx validation was demonstrated by testing it against the 2693 FHIR R4 examples and 2197 FHIR R5 examples included in the FHIR specification. Five types of errors were revealed in the R5 examples: missing property, unknown element, missing resourceType, invalid attribute value, and unknown resource name, demonstrating the value of ShEx in the quality assurance of the evolving R5 development. This FHIR RDF data transformation and validation framework, based on HAPI and ShEx, is robust and ready for community use in adopting FHIR RDF, improving FHIR data quality, and evolving the FHIR specification.
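
As a rough illustration of the validation step, here is a minimal sketch using the PyShEx package rather than the Java toolkit described above; the file names, focus node, and shape IRI are placeholder assumptions, not the toolkit's actual inputs.

```python
# A conformance check of FHIR RDF instance data against a ShEx schema.
from pyshex import ShExEvaluator

rdf_data = open("patient-example.ttl").read()   # hypothetical instance data
shex_schema = open("fhir-r4.shex").read()       # hypothetical FHIR ShEx schema

results = ShExEvaluator(
    rdf=rdf_data,
    schema=shex_schema,
    focus="http://hl7.org/fhir/Patient/example",  # node to validate (assumed)
    start="http://hl7.org/fhir/shape/Patient",    # shape to test against (assumed)
).evaluate()

for r in results:
    # r.result is True on conformance; r.reason explains any failure
    print(r.focus, "conforms" if r.result else f"fails: {r.reason}")
```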


Subjects
Delivery of Health Care, Electronic Health Records
2.
AMIA Annu Symp Proc; 2020: 1140-1149, 2020.
Article in English | MEDLINE | ID: mdl-33936490

ABSTRACT

This study developed and evaluated a JSON-LD 1.1 approach to automate the Resource Description Framework (RDF) serialization and deserialization of Fast Healthcare Interoperability Resources (FHIR) data, in preparation for updating the FHIR RDF standard. We first demonstrated that the JSON-LD 1.1 approach can produce the same output as the current FHIR RDF standard. We then used it to test, document, and validate several proposed changes to the FHIR RDF specification, addressing usability issues uncovered during trial use. The JSON-LD 1.1 approach was found to be effective and more declarative than the existing custom-code-based approach in converting FHIR data from JSON to RDF and vice versa, and should enable future FHIR RDF servers to be implemented and maintained more easily.
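
A minimal sketch of the general JSON-LD-to-RDF idea, using rdflib (version 6 or later bundles a JSON-LD parser); the inline @context below is a toy stand-in for the much richer FHIR JSON-LD 1.1 contexts, not the actual specification artifact.

```python
import json
from rdflib import Graph

# Toy FHIR-like JSON with a hand-written JSON-LD context; the real FHIR
# contexts map every resource property, not just these two fields.
fhir_json = {
    "@context": {
        "fhir": "http://hl7.org/fhir/",
        "gender": "fhir:Patient.gender",
        "birthDate": "fhir:Patient.birthDate",
    },
    "@id": "http://hl7.org/fhir/Patient/example",
    "gender": "male",
    "birthDate": "1974-12-25",
}

g = Graph()
g.parse(data=json.dumps(fhir_json), format="json-ld")  # JSON-LD in
print(g.serialize(format="turtle"))                    # Turtle (RDF) out
```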


Subjects
Electronic Health Records/standards, Health Information Interoperability/standards, Programming Languages, Algorithms, Delivery of Health Care, Electronic Health Records/organization & administration, Health Facilities, Health Level Seven, Humans, Information Dissemination, Semantics
3.
AMIA Annu Symp Proc; 2018: 979-988, 2018.
Article in English | MEDLINE | ID: mdl-30815141

ABSTRACT

HL7 Fast Healthcare Interoperability Resources (FHIR) is rapidly becoming the de facto standard for the exchange of clinical and healthcare-related information. Major EHR vendors and healthcare providers are actively developing transformations between existing EHR databases and their corresponding FHIR representation. Many of these organizations are concurrently creating a second set of transformations from the same sources into integrated data repositories (IDRs). Considerable cost savings could be realized, and overall quality improved, were it possible to transform primary FHIR EHR data directly into an IDR. We developed a FHIR-to-i2b2 transformation toolkit and evaluated the viability of such an approach.
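
To make the transformation concrete, here is a minimal sketch of one mapping step, from a FHIR Observation to an i2b2 observation_fact row; the column names follow the i2b2 star schema, while the concept-code prefix and the mapping choices are illustrative assumptions rather than the toolkit's actual rules.

```python
# Map a FHIR Observation (JSON) to an i2b2 observation_fact row.
def observation_to_fact(obs: dict, patient_num: int, encounter_num: int) -> dict:
    coding = obs["code"]["coding"][0]
    qty = obs.get("valueQuantity", {})
    return {
        "encounter_num": encounter_num,
        "patient_num": patient_num,
        "concept_cd": f"LOINC:{coding['code']}",   # assumed code prefix
        "start_date": obs.get("effectiveDateTime"),
        "valtype_cd": "N",                         # numeric result
        "nval_num": qty.get("value"),
        "units_cd": qty.get("unit"),
    }

fact = observation_to_fact(
    {
        "resourceType": "Observation",
        "code": {"coding": [{"system": "http://loinc.org", "code": "2093-3"}]},
        "effectiveDateTime": "2018-03-02",
        "valueQuantity": {"value": 185, "unit": "mg/dL"},
    },
    patient_num=1,
    encounter_num=42,
)
print(fact)
```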


Subjects
Data Warehousing, Datasets as Topic, Electronic Health Records/standards, Health Information Interoperability/standards, Health Level Seven, Biological Ontologies, Humans, Software
4.
AMIA Jt Summits Transl Sci Proc; 2017: 259-267, 2017.
Article in English | MEDLINE | ID: mdl-28815140

ABSTRACT

In this paper, we present D2Refine, a platform for facilitating the harmonization and standardization of clinical research study data elements. D2Refine is developed on top of OpenRefine (formerly Google Refine) and leverages OpenRefine's simple interface and extensible architecture. D2Refine empowers the tabular representation of clinical research study data element definitions by allowing them to be easily organized and standardized using reconciliation services, and builds on OpenRefine's valuable built-in data transformation features to bring source data sets to a refined state quickly. We implemented the reconciliation services and search capabilities based on the Common Terminology Services 2 (CTS2) standard, and the serialization of clinical research study data element definitions into a standard representation using clinical information modeling technology for semantic interoperability. We demonstrate that D2Refine is a useful and promising platform that can help address the emerging need for harmonization and standardization of clinical research study data elements.
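
As a sketch of the kind of lookup a reconciliation service performs, the snippet below queries a CTS2-style REST endpoint for candidate concept matches; the base URL and the response field are hypothetical, not D2Refine's actual service.

```python
# Look up candidate coded concepts for a free-text data-element label.
import requests

CTS2_BASE = "https://example.org/cts2"  # hypothetical terminology server

def reconcile(term: str, max_results: int = 5) -> list:
    resp = requests.get(
        f"{CTS2_BASE}/entities",
        params={"matchvalue": term, "maxtoreturn": max_results, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("entries", [])  # hypothetical response field

# print(reconcile("myocardial infarction"))  # requires a live CTS2 endpoint
```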

5.
J Biomed Semantics; 8(1): 19, 2017 Jun 05.
Article in English | MEDLINE | ID: mdl-28583204

ABSTRACT

BACKGROUND: Detailed Clinical Models (DCMs) have been regarded as the basis for retaining computable meaning when data are exchanged between heterogeneous computer systems. To better support clinical cancer data capture and reporting, there is an emerging need to develop informatics solutions for standards-based clinical models in cancer study domains. The objective of this study was to develop and evaluate a cancer genome study metadata management system that serves as key infrastructure for supporting clinical information modeling in cancer genome study domains. METHODS: We leveraged a Semantic Web-based metadata repository enhanced with both the ISO 11179 metadata standard and the Clinical Information Modeling Initiative (CIMI) Reference Model. We used the common data elements (CDEs) defined in The Cancer Genome Atlas (TCGA) data dictionary, and extracted the metadata of the CDEs using the NCI Cancer Data Standards Repository (caDSR) CDE dataset rendered in the Resource Description Framework (RDF). The ITEM/ITEM_GROUP pattern defined in the latest CIMI Reference Model was used to represent reusable model elements (mini-archetypes). RESULTS: We produced a metadata repository with 38 clinical cancer genome study domains, comprising a rich collection of mini-archetype pattern instances. We performed a case study of the domain "clinical pharmaceutical" in the TCGA data dictionary and demonstrated that the enriched data elements in the metadata repository are very useful in support of building detailed clinical models. CONCLUSION: Our informatics approach leveraging Semantic Web technologies provides an effective way to build a CIMI-compliant metadata repository that would facilitate detailed clinical modeling to support use cases beyond TCGA in clinical cancer study domains.
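
A minimal sketch of how one CDE might be recorded in such a repository with rdflib, loosely following the ITEM pattern described above; all namespace IRIs, property names, and the CDE identifier are illustrative assumptions.

```python
# Record one common data element (CDE) as RDF triples.
from rdflib import Graph, Literal, Namespace, RDF

MDR = Namespace("http://example.org/mdr/")    # hypothetical repository namespace
CIMI = Namespace("http://example.org/cimi/")  # hypothetical CIMI namespace

g = Graph()
cde = MDR["cde/0001"]                         # illustrative CDE identifier
g.add((cde, RDF.type, CIMI.ITEM))
g.add((cde, MDR.dataElementConcept, MDR["dec/therapy-type"]))
g.add((cde, MDR.preferredLabel, Literal("Pharmaceutical Therapy Type")))
g.add((cde, MDR.domain, Literal("clinical pharmaceutical")))
print(g.serialize(format="turtle"))
```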


Subjects
Genomics/methods, Metadata, Neoplasms/genetics, Semantic Web, Humans
6.
J Biomed Inform; 67: 90-100, 2017 Mar.
Article in English | MEDLINE | ID: mdl-28213144

ABSTRACT

BACKGROUND: HL7 Fast Healthcare Interoperability Resources (FHIR) is an emerging open standard for the exchange of electronic healthcare information. FHIR resources are defined in a specialized modeling language, and FHIR instances can currently be represented in either XML or JSON. The FHIR and Semantic Web communities are developing a third FHIR instance representation format in the Resource Description Framework (RDF). Shape Expressions (ShEx), a formal RDF data constraint language, is a candidate for describing and validating the FHIR RDF representation. OBJECTIVE: Create a FHIR-to-ShEx model transformation and assess its ability to describe and validate FHIR RDF data. METHODS: We created the methods and tools that generate ShEx schemas modeling the FHIR-to-RDF specification being developed by the HL7 ITS/W3C RDF Task Force, and evaluated the applicability of ShEx in describing and validating FHIR-to-RDF transformations. RESULTS: The ShEx models contributed significantly to workgroup consensus. Algorithmic transformations from the FHIR model to ShEx schemas and from FHIR example data to RDF were incorporated into the FHIR build process. ShEx schemas representing 109 FHIR resources were used to validate 511 FHIR RDF data examples from the Standard for Trial Use (STU 3) ballot version. We were able to uncover unresolved issues in the FHIR-to-RDF specification and detect 10 types of errors and their root causes in the actual implementation. The FHIR ShEx representations have been included in the official FHIR web pages for the STU 3 ballot version since September 2016. DISCUSSION: ShEx can be used to define and validate the syntax of a FHIR resource, which is complementary to the use of RDF Schema (RDFS) and the Web Ontology Language (OWL) for semantic validation. CONCLUSION: ShEx proved useful for describing a standard model of FHIR RDF data. The combination of a formal model and a succinct format enabled comprehensive review and automated validation.
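
The following sketch conveys the flavor of the model-to-ShEx generation on a drastically simplified input; real generation consumes full FHIR StructureDefinitions, and the element tuples below are a toy approximation.

```python
# Emit a ShEx shape from simplified (property, datatype, cardinality) tuples.
def to_shex(resource: str, elements: list) -> str:
    lines = [f"<{resource}> {{"]
    lines += [f"  fhir:{prop} @<{dtype}>{card} ;" for prop, dtype, card in elements]
    lines.append("}")
    return "\n".join(lines)

print(to_shex("Patient", [
    ("Patient.gender", "code", "?"),        # optional element
    ("Patient.birthDate", "date", "?"),
    ("Patient.name", "HumanName", "*"),     # repeating element
]))
```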


Subjects
Algorithms, Internet, Semantics, Electronic Health Records, Humans
7.
Stud Health Technol Inform; 245: 887-891, 2017.
Article in English | MEDLINE | ID: mdl-29295227

ABSTRACT

A variety of data models have been developed to provide a standardized data interface that supports organizing clinical research data into a standard structure for building integrated data repositories. HL7 Fast Healthcare Interoperability Resources (FHIR) is emerging as a next-generation standards framework for facilitating health care and electronic health record-based data exchange. The objective of this study was to design and assess a consensus-based approach for harmonizing the OHDSI CDM with HL7 FHIR. We leveraged the FHIR W5 (Who, What, When, Where, and Why) classification system in designing the harmonization approaches and assessed their utility in achieving consensus among curators, using a standard inter-rater agreement measure. Moderate agreement was achieved for model-level harmonization (kappa = 0.50), whereas only fair agreement was achieved for property-level harmonization (kappa = 0.21). FHIR W5 is a useful tool for designing harmonization approaches between data models and FHIR and for facilitating consensus.
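
For reference, inter-rater agreement of this kind can be computed with scikit-learn's Cohen's kappa, as in the minimal sketch below; the curator judgments are fabricated placeholders.

```python
# Cohen's kappa over two curators' harmonization judgments.
from sklearn.metrics import cohen_kappa_score

curator_a = ["map", "map", "no-map", "map", "no-map", "map"]
curator_b = ["map", "no-map", "no-map", "map", "map", "map"]
print(round(cohen_kappa_score(curator_a, curator_b), 2))
```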


Subjects
Consensus, Electronic Health Records, Humans
8.
Stud Health Technol Inform; 245: 1327, 2017.
Article in English | MEDLINE | ID: mdl-29295408

ABSTRACT

The OHDSI Common Data Model (CDM) is a deep information model whose vocabulary component plays a critical role in enabling consistent coding and querying of clinical data. The objective of this study was to create methods and tools to expose the OHDSI vocabularies and mappings as vocabulary mapping services using two HL7 FHIR core terminology resources, ConceptMap and ValueSet. We discuss the benefits and challenges of building FHIR-based terminology services.
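
A minimal sketch of the ConceptMap idea: packaging OHDSI-style "Maps to" rows as a FHIR ConceptMap resource. The canonical URL is an assumption, and the single mapping row is merely illustrative.

```python
# Package vocabulary mapping rows as a FHIR ConceptMap resource (as a dict).
def to_concept_map(rows: list) -> dict:
    return {
        "resourceType": "ConceptMap",
        "url": "http://example.org/fhir/ConceptMap/ohdsi-maps-to",  # assumed URL
        "status": "draft",
        "group": [{
            "source": rows[0]["source_system"],
            "target": rows[0]["target_system"],
            "element": [{
                "code": r["source_code"],
                "target": [{"code": r["target_code"], "equivalence": "equivalent"}],
            } for r in rows],
        }],
    }

print(to_concept_map([{
    "source_system": "http://hl7.org/fhir/sid/icd-10-cm",
    "target_system": "http://snomed.info/sct",
    "source_code": "E11.9",       # type 2 diabetes without complications
    "target_code": "44054006",    # type 2 diabetes mellitus (SNOMED CT)
}]))
```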


Subjects
Electronic Health Records, Controlled Vocabulary, Humans, Vocabulary
9.
J Biomed Inform; 62: 232-42, 2016 Aug.
Article in English | MEDLINE | ID: mdl-27392645

ABSTRACT

The Quality Data Model (QDM) is an information model developed by the National Quality Forum for representing electronic health record (EHR)-based electronic clinical quality measures (eCQMs). In conjunction with the HL7 Health Quality Measures Format (HQMF), QDM contains core elements that make it a promising model for representing EHR-driven phenotype algorithms for clinical research. However, the current QDM specification is available only as descriptive documents suitable for human readability and interpretation, not for machine consumption. The objective of the present study was to develop and evaluate a data element repository (DER) providing machine-readable QDM data element service APIs to support phenotype algorithm authoring and execution. We used the ISO/IEC 11179 metadata standard to capture the structure of each data element, and leveraged Semantic Web technologies to facilitate the semantic representation of these metadata. We observed a number of underspecified areas in the QDM, including the lack of model constraints and pre-defined value sets. We propose harmonization with the models developed by HL7 Fast Healthcare Interoperability Resources (FHIR) and the Clinical Information Modeling Initiative (CIMI) to enhance the QDM specification and enable the extensibility and better coverage of the DER. We also compared the DER with the existing QDM implementation used within the Measure Authoring Tool (MAT) to demonstrate the scalability and extensibility of our DER-based approach.
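
As an illustration of what a machine-readable DER enables, the sketch below runs a SPARQL query over a repository graph with rdflib; the graph file and the vocabulary IRIs are hypothetical.

```python
# Query a data element repository graph for elements in one QDM category.
from rdflib import Graph

g = Graph()
g.parse("qdm-der.ttl", format="turtle")  # hypothetical repository dump

query = """
PREFIX mdr: <http://example.org/mdr/>
SELECT ?cde ?label WHERE {
  ?cde a mdr:DataElement ;
       mdr:category "Medication, Administered" ;
       mdr:preferredLabel ?label .
}
"""
for row in g.query(query):
    print(row.cde, row.label)
```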


Subjects
Algorithms, Electronic Health Records, Phenotype, Biomedical Research, Factual Databases, Humans, Semantics
10.
J Biomed Semantics; 7: 10, 2016.
Article in English | MEDLINE | ID: mdl-26949508

ABSTRACT

BACKGROUND: The Biomedical Research Integrated Domain Group (BRIDG) model is a formal domain analysis model for protocol-driven biomedical research, and serves as a semantic foundation for application and message development in standards developing organizations (SDOs). The increasing sophistication and complexity of the BRIDG model require new approaches to managing and utilizing the underlying semantics to harmonize domain-specific standards. The objective of this study was to develop and evaluate a Semantic Web-based approach that integrates the BRIDG model with ISO 21090 data types to generate domain-specific templates to support clinical study metadata standards development. METHODS: We developed a template generation and visualization system based on an open-source Resource Description Framework (RDF) store backend, a SmartGWT-based web user interface, and a "mind map"-based tool for visualizing the generated domain-specific templates. We also developed a RESTful web service, informed by the Clinical Information Modeling Initiative (CIMI) reference model, for access to the generated domain-specific templates. RESULTS: A preliminary usability study was performed, and all reviewers (n = 3) responded very positively to the evaluation questions regarding usability and the system's ability to meet the requirements (average score 4.6). CONCLUSIONS: Semantic Web technologies provide a scalable infrastructure and have great potential to enable computable semantic interoperability of models at the intersection of health care and clinical research.
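
A minimal sketch of a template-access REST service of the kind described, using Flask; the route, the in-memory store, and the payload shape are illustrative assumptions rather than the actual CIMI-informed service.

```python
# Serve generated domain templates over a simple REST endpoint.
from flask import Flask, jsonify

app = Flask(__name__)
TEMPLATES = {  # in-memory stand-in for the RDF-store-backed template source
    "study-design": {"name": "Study Design", "items": ["arm", "phase"]},
}

@app.get("/templates/<domain>")
def get_template(domain: str):
    template = TEMPLATES.get(domain)
    if template is None:
        return jsonify(error="template not found"), 404
    return jsonify(template)

if __name__ == "__main__":
    app.run()
```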


Subjects
Internet, Medical Informatics/methods, Medical Informatics/standards, Semantics, Biomedical Research, Humans, Theoretical Models, Reference Standards
11.
J Am Med Inform Assoc; 23(2): 248-56, 2016 Mar.
Article in English | MEDLINE | ID: mdl-26568604

ABSTRACT

OBJECTIVE: The objective of the Strategic Health IT Advanced Research Project area four (SHARPn) was to develop open-source tools that could be used for the normalization of electronic health record (EHR) data for secondary use, specifically for high-throughput phenotyping. We describe the role of Intermountain Healthcare's Clinical Element Models ([CEMs] Intermountain Healthcare Health Services, Inc, Salt Lake City, Utah) as normalization "targets" within the project. MATERIALS AND METHODS: Intermountain's CEMs were either repurposed or created for the SHARPn project. A CEM describes "valid" structure and semantics for a particular kind of clinical data. CEMs are expressed in a computable syntax that can be compiled into implementation artifacts. The modeling team and SHARPn colleagues agilely gathered requirements and developed and refined models. RESULTS: Twenty-eight "statement" models (analogous to "classes") and numerous "component" CEMs and their associated terminology were repurposed or developed to satisfy SHARPn high-throughput phenotyping requirements. Model (structural) mappings and terminology (semantic) mappings were also created. Source data instances were normalized to CEM-conformant data and stored in CEM instance databases. A model browser and request site were built to facilitate the development. DISCUSSION: The modeling efforts demonstrated the need to address context differences and granularity choices and highlighted the inevitability of iso-semantic models. The need for content expertise and "intelligent" content tooling was also underscored. We discuss scalability and sustainability expectations for a CEM-based approach and describe the place of CEMs relative to other current efforts. CONCLUSIONS: The SHARPn effort demonstrated the normalization and secondary use of EHR data. CEMs proved capable of capturing data originating from a variety of sources within the normalization pipeline and of serving as suitable normalization targets.


Subjects
Electronic Health Records/standards, Information Storage and Retrieval, Medical Record Linkage/methods, Health Information Systems/standards, Semantics, Utah, Controlled Vocabulary
12.
AMIA Annu Symp Proc; 2016: 1119-1128, 2016.
Article in English | MEDLINE | ID: mdl-28269909

ABSTRACT

Researchers commonly use a tabular format to describe and represent clinical study data. The lack of standardization of data dictionaries' metadata elements presents challenges for harmonizing similar studies and impedes interoperability outside the local context. We propose that representing data dictionaries in the form of standardized archetypes can help overcome this problem. The Archetype Modeling Language (AML), developed by the Clinical Information Modeling Initiative (CIMI), can serve as a common format for the representation of data dictionary models. We mapped three different data dictionaries (identified from dbGaP, PheKB, and TCGA) onto AML archetypes by aligning dictionary variable definitions with AML archetype elements. The near-complete alignment of the data dictionaries helped map them into valid AML models that captured all data dictionary model metadata. This work should help subject matter experts harmonize data models for quality, semantic interoperability, and better downstream data integration.


Subjects
Biomedical Research/standards, Factual Databases/standards, Metadata/standards, Software
13.
Stud Health Technol Inform; 216: 1098, 2015.
Article in English | MEDLINE | ID: mdl-26262397

ABSTRACT

This study describes our efforts in developing a standards-based semantic metadata repository for supporting electronic health record (EHR)-driven phenotype authoring and execution. Our system comprises three layers: 1) a semantic data element repository layer; 2) a semantic services layer; and 3) a phenotype application layer. In a prototype implementation, we developed the repository and services by integrating data elements from both the Quality Data Model (QDM) and HL7 Fast Healthcare Interoperability Resources (FHIR) models. We discuss the modeling challenges and the potential of our system to support EHR phenotype authoring and execution applications.


Subjects
Factual Databases/standards, Electronic Health Records/standards, Health Level Seven/standards, Semantics, Controlled Vocabulary, Guidelines as Topic, Medical Record Linkage/standards, Natural Language Processing, United States
14.
Stud Health Technol Inform; 216: 1097, 2015.
Article in English | MEDLINE | ID: mdl-26262396

ABSTRACT

The lack of a standards-based information model has been recognized as a major barrier to representing computable diagnostic criteria. In this paper we describe our efforts in examining the feasibility of the Quality Data Model (QDM), developed by the National Quality Forum (NQF), for representing computable diagnostic criteria. We collected the diagnostic criteria for a number of diseases and disorders (n = 12) from textbooks and profiled the data elements of the criteria using the QDM data elements. We identified a number of common patterns informed by the QDM. In conclusion, the common patterns informed by the QDM are useful and feasible for building a standards-based information model for computable diagnostic criteria.


Subjects
Data Accuracy, Diagnosis, International Classification of Diseases/standards, Natural Language Processing, Practice Guidelines as Topic, Terminology as Topic, Feasibility Studies, Theoretical Models, United States
15.
BioData Min; 8: 12, 2015.
Article in English | MEDLINE | ID: mdl-25829948

ABSTRACT

BACKGROUND: Drug-drug interactions (DDIs) are a major contributing factor to unexpected adverse drug events (ADEs). However, few knowledge resources cover the severity of ADEs, which is critical for prioritizing medical need. The objective of this study was to develop and evaluate a Semantic Web-based approach for mining severe DDI-induced ADEs. METHODS: We utilized a normalized FDA Adverse Event Reporting System (AERS) dataset and performed a case study of three frequently prescribed cardiovascular drugs: warfarin, clopidogrel, and simvastatin. We extracted putative DDI-ADE pairs and their associated outcome codes. We developed a pipeline to filter the associations using ADE datasets from SIDER and PharmGKB, and also performed signal enrichment using electronic medical record (EMR) data. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the DDI-induced ADEs into the CTCAE in the Web Ontology Language (OWL). RESULTS: We identified 601 DDI-ADE pairs for the three drugs using the filtering pipeline, of which 61 pairs were in Grade 5, 56 in Grade 4, and 484 in Grade 3. Among the 601 pairs, signals for 59 DDI-ADE pairs were identified in the EMR data. CONCLUSIONS: The approach could be generalized to detect signals of putative severe ADEs induced by DDIs in other drug domains, and would be useful for supporting translational and pharmacovigilance studies of severe ADEs.
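
A minimal sketch of the filtering step: a candidate DDI-ADE pair is kept only if one of its component drug-ADE links is attested in a reference set such as SIDER. All data below are fabricated placeholders.

```python
# Keep DDI-ADE candidates whose drug-ADE links appear in a reference set.
candidates = [
    ("warfarin", "simvastatin", "rhabdomyolysis"),
    ("warfarin", "clopidogrel", "gastrointestinal haemorrhage"),
]
sider_pairs = {
    ("simvastatin", "rhabdomyolysis"),
    ("clopidogrel", "gastrointestinal haemorrhage"),
}

filtered = [
    (d1, d2, ade) for d1, d2, ade in candidates
    if (d1, ade) in sider_pairs or (d2, ade) in sider_pairs
]
print(filtered)
```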

16.
AMIA Annu Symp Proc; 2015: 659-68, 2015.
Article in English | MEDLINE | ID: mdl-26958201

ABSTRACT

Domain-specific common data elements (CDEs) are emerging as an effective approach to standards-based clinical research data storage and retrieval. A limiting factor, however, is the lack of robust automated quality assurance (QA) tools for CDEs in clinical study domains. The objectives of the present study were to prototype and evaluate a QA tool for clinical cancer study CDEs using a post-coordination approach. The study started by integrating the NCI caDSR CDEs and The Cancer Genome Atlas (TCGA) data dictionaries in a single Resource Description Framework (RDF) data store. We designed a compositional expression pattern based on the Data Element Concept model structure informed by ISO/IEC 11179, and developed a transformation tool that converts the pattern-based compositional expressions into Web Ontology Language (OWL) syntax. Invoking reasoning and explanation services, we tested the system using CDEs extracted from two TCGA clinical cancer study domains. The system could automatically identify duplicate CDEs and detect CDE modeling errors. In conclusion, compositional expressions not only enable the reuse of existing ontology codes to define new domain concepts, but also provide an automated mechanism for QA of terminological annotations of CDEs.
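
A minimal sketch of the post-coordination idea: rendering pattern-based compositional expressions as OWL class expressions (Manchester syntax here). String identity catches only the simplest duplicates; detecting logical equivalence requires a reasoner, as in the study. The property and value names are illustrative, not actual caDSR content.

```python
# Render a compositional expression as an OWL class expression string
# (Manchester syntax); identical expressions expose duplicate CDEs.
def to_manchester(base: str, properties: dict) -> str:
    parts = [f"{p} some {v}" for p, v in sorted(properties.items())]
    return f"{base} and " + " and ".join(parts)

cde_a = to_manchester("sct:RadiationTherapy", {"sct:hasIntent": "sct:Adjuvant"})
cde_b = to_manchester("sct:RadiationTherapy", {"sct:hasIntent": "sct:Adjuvant"})
print(cde_a)
print(cde_a == cde_b)  # True: both CDEs post-coordinate to the same class
```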


Subjects
Common Data Elements/standards, Information Storage and Retrieval, Neoplasms, Biological Ontologies, Humans, Registries/standards, Systematized Nomenclature of Medicine
17.
Article in English | MEDLINE | ID: mdl-24303245

ABSTRACT

A standardized adverse drug event (ADE) knowledge base that encodes known ADE knowledge can be very useful in improving ADE detection for drug safety surveillance. In a previous study, we developed ADEpedia, a standardized knowledge base of ADEs based on drug product labels. The objectives of the present study were 1) to integrate normalized ADE knowledge from the Unified Medical Language System (UMLS) into ADEpedia; and 2) to enrich the knowledge base with drug-disorder co-occurrence data from a 51-million-document electronic medical record (EMR) system. We extracted 266,832 drug-disorder concept pairs from the UMLS, covering 14,256 (1.69%) distinct drug concepts and 19,006 (3.53%) distinct disorder concepts. Of these, 71,626 (26.8%) UMLS concept pairs co-occurred in the EMRs. We performed a preliminary evaluation of the utility of the UMLS ADE data. In conclusion, we have built an ADEpedia 2.0 framework that integrates known ADE knowledge from disparate sources. The UMLS is a useful source of standardized ADE knowledge relevant to indications, contraindications, and adverse effects, complementary to the ADE data from drug product labels. The co-occurrence statistics from EMRs would enable the meaningful use of ADE data for drug safety surveillance.
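
A minimal sketch of the co-occurrence enrichment step, intersecting UMLS drug-disorder pairs with pairs observed in EMR data; all CUIs below are fabricated placeholders.

```python
# Which UMLS drug-disorder pairs are also attested in EMR co-occurrence data?
umls_pairs = {("C0000001", "C0000002"), ("C0000003", "C0000004")}
emr_pairs = {("C0000001", "C0000002"), ("C0000005", "C0000006")}

supported = umls_pairs & emr_pairs
print(f"{len(supported)}/{len(umls_pairs)} UMLS pairs co-occur in the EMR")
```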

18.
J Am Med Inform Assoc; 20(e2): e341-8, 2013 Dec.
Article in English | MEDLINE | ID: mdl-24190931

ABSTRACT

RESEARCH OBJECTIVE: To develop scalable informatics infrastructure for normalization of both structured and unstructured electronic health record (EHR) data into a unified, concept-based model for high-throughput phenotype extraction. MATERIALS AND METHODS: Software tools and applications were developed to extract information from EHRs. Representative and convenience samples of both structured and unstructured data from two EHR systems (Mayo Clinic and Intermountain Healthcare) were used for development and validation. Extracted information was standardized and normalized to meaningful use (MU)-conformant terminology and value set standards using Clinical Element Models (CEMs). These resources were used to demonstrate semi-automatic execution of MU clinical quality measures modeled using the Quality Data Model (QDM) and an open-source rules engine. RESULTS: Using CEMs and open-source natural language processing and terminology services engines, namely the Apache clinical Text Analysis and Knowledge Extraction System (cTAKES) and Common Terminology Services 2 (CTS2), we developed a data-normalization platform that ensures data security, end-to-end connectivity, and reliable data flow within and across institutions. We demonstrated the applicability of this platform by executing a QDM-based MU quality measure, which determines the percentage of patients between 18 and 75 years of age with diabetes whose most recent low-density lipoprotein cholesterol test result during the measurement year was <100 mg/dL, on a randomly selected cohort of 273 Mayo Clinic patients. The platform identified 21 and 18 patients for the denominator and numerator of the quality measure, respectively. Validation results indicate that all identified patients meet the QDM-based criteria. CONCLUSIONS: End-to-end automated systems for extracting clinical information from diverse EHR systems require extensive use of standardized vocabularies and terminologies, as well as robust information models for storing, discovering, and processing that information. This study demonstrates the application of modular and open-source resources for enabling secondary use of EHR data through normalization into a standards-based, comparable, and consistent format for high-throughput phenotyping to identify patient cohorts.
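
A minimal sketch of evaluating such a measure over already-normalized patient records; the field names and the tiny cohort are illustrative assumptions, not CEM or QDM artifacts.

```python
# Compute denominator and numerator of a diabetes LDL quality measure.
def in_denominator(p: dict) -> bool:
    return 18 <= p["age"] <= 75 and p["has_diabetes"]

def in_numerator(p: dict) -> bool:
    return (in_denominator(p)
            and p["last_ldl_mg_dl"] is not None
            and p["last_ldl_mg_dl"] < 100)

cohort = [
    {"age": 64, "has_diabetes": True, "last_ldl_mg_dl": 92},
    {"age": 70, "has_diabetes": True, "last_ldl_mg_dl": 130},
    {"age": 41, "has_diabetes": False, "last_ldl_mg_dl": 88},
]
den = [p for p in cohort if in_denominator(p)]
num = [p for p in cohort if in_numerator(p)]
print(len(den), len(num))  # 2 1
```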


Subjects
Data Mining, Electronic Health Records/standards, Medical Informatics Applications, Natural Language Processing, Phenotype, Algorithms, Biomedical Research, Computer Security, Humans, Software, Controlled Vocabulary
19.
Stud Health Technol Inform; 192: 496-500, 2013.
Article in English | MEDLINE | ID: mdl-23920604

ABSTRACT

A semantically coded knowledge base of adverse drug events (ADEs) with severity information is critical for clinical decision support systems and translational research applications. However, it remains challenging to measure and identify the severity information of ADEs. The objective of this study was to develop and evaluate a Semantic Web-based approach for building a knowledge base of severe ADEs based on FDA Adverse Event Reporting System (AERS) reporting data. We utilized a normalized AERS reporting dataset and extracted putative drug-ADE pairs and their associated outcome codes in the domain of cardiac disorders. We validated the drug-ADE associations using ADE datasets from the Side Effect Resource (SIDER) and the UMLS. We leveraged the Common Terminology Criteria for Adverse Events (CTCAE) grading system and classified the ADEs into the CTCAE in the Web Ontology Language (OWL). We identified and validated 2,444 unique drug-ADE pairs in the domain of cardiac disorders, of which 760 pairs were in Grade 5, 775 in Grade 4, and 2,196 in Grade 3.


Subjects
Adverse Drug Reaction Reporting Systems/statistics & numerical data, Database Management Systems, Factual Databases/statistics & numerical data, Drug-Related Side Effects and Adverse Reactions/classification, Internet, Natural Language Processing, Controlled Vocabulary, Humans, Knowledge Bases, Semantics
20.
J Biomed Semantics; 4(1): 11, 2013 Apr 21.
Article in English | MEDLINE | ID: mdl-23601451

ABSTRACT

The beta phase of the 11th revision of the International Classification of Diseases (ICD-11) intends to accept public input through a distributed model of authoring. One of the core use cases is creating textual definitions for the ICD categories. The objective of the present study was to design, develop, and evaluate approaches for supporting the authoring of ICD-11 textual definitions using Semantic Web technology. We investigated a number of heterogeneous resources related to disease definitions, including linked open data (LOD) from DBpedia, textual definitions from the Unified Medical Language System (UMLS), and formal definitions from the Systematized Nomenclature of Medicine Clinical Terms (SNOMED CT). We integrated them in a Semantic Web framework (i.e., linked data in a Resource Description Framework [RDF] triple store), which is being proposed as a backend in a prototype platform for collaborative authoring of the ICD-11 beta. We performed a preliminary evaluation of the usefulness of our approaches and discuss the potential challenges from both technical and clinical perspectives.
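
As a sketch of tapping one of these LOD sources, the snippet below pulls a candidate textual definition from DBpedia's public SPARQL endpoint with SPARQLWrapper; the choice of disease resource is arbitrary.

```python
# Fetch an English abstract for a disease from DBpedia as definition input.
from SPARQLWrapper import SPARQLWrapper, JSON

sparql = SPARQLWrapper("https://dbpedia.org/sparql")
sparql.setQuery("""
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?abstract WHERE {
  <http://dbpedia.org/resource/Measles> dbo:abstract ?abstract .
  FILTER (lang(?abstract) = "en")
}
""")
sparql.setReturnFormat(JSON)
for b in sparql.query().convert()["results"]["bindings"]:
    print(b["abstract"]["value"][:200], "...")
```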
