Results 1 - 20 of 49
1.
Nucleic Acids Res ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38967009

ABSTRACT

Knowledge about transcription factor binding and regulation, target genes, cis-regulatory modules and topologically associating domains is not only defined by functional associations like biological processes or diseases but also has a determinative genome location aspect. Here, we exploit these location and functional aspects together to develop new strategies to enable advanced data querying. Many databases have been developed to provide information about enhancers, but a schema that allows the standardized representation of data, securing interoperability between resources, has been lacking. In this work, we use knowledge graphs for the standardized representation of enhancers and topologically associating domains, together with data about their target genes, transcription factors, location on the human genome, and functional data about diseases and gene ontology annotations. We used this schema to integrate twenty-five enhancer datasets and two domain datasets, creating the most powerful integrative resource in this field to date. The knowledge graphs have been implemented using the Resource Description Framework and integrated within the open-access BioGateway knowledge network, generating a resource that contains an interoperable set of knowledge graphs (enhancers, TADs, genes, proteins, diseases, GO terms, and interactions between domains). We show how advanced queries, which combine functional and location restrictions, can be used to develop new hypotheses about functional aspects of gene expression regulation.
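The kind of combined query described above — a functional restriction (a disease) joined with a genomic location restriction — can be illustrated with a minimal sketch. The entity names, predicates, and coordinates below are invented for illustration and are not the actual BioGateway schema, which is served as RDF and queried with SPARQL.

```python
# Toy triple store: enhancers with genomic locations, target genes, and
# disease associations. All identifiers are hypothetical.
triples = {
    ("enh:E1", "located_in", "chr1:1000-2000"),
    ("enh:E1", "regulates", "gene:G1"),
    ("enh:E2", "located_in", "chr2:500-900"),
    ("enh:E2", "regulates", "gene:G2"),
    ("gene:G1", "associated_with", "disease:D1"),
}

def query(pred, obj):
    """Return subjects s such that (s, pred, obj) is in the store."""
    return {s for (s, p, o) in triples if p == pred and o == obj}

def enhancers_for_disease_on_chrom(disease, chrom):
    """Combine a functional restriction (disease) with a location one (chromosome)."""
    genes = query("associated_with", disease)
    hits = set()
    for (s, p, o) in triples:
        if p == "regulates" and o in genes:
            # location restriction: the enhancer must lie on the given chromosome
            loc = next(o2 for (s2, p2, o2) in triples
                       if s2 == s and p2 == "located_in")
            if loc.startswith(chrom + ":"):
                hits.add(s)
    return hits
```

In the real resource the same join would be expressed as a single SPARQL query over the integrated knowledge graphs rather than in application code.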

2.
Brief Bioinform ; 21(2): 473-485, 2020 03 23.
Article in English | MEDLINE | ID: mdl-30715146

ABSTRACT

The development and application of biological ontologies have increased significantly in recent years. These ontologies can be retrieved from different repositories, which do not provide much information about the quality of the ontologies. Some ontology structural metrics have been proposed in past years, but their validity as measurement instruments has not been sufficiently studied to date. In this work, we evaluate a set of reproducible and objective ontology structural metrics. Given the lack of standard methods for this purpose, we have applied an evaluation method based on the stability and goodness of the classifications of ontologies produced by each metric on an ontology corpus. The evaluation has been done using ontology repositories as corpora. More concretely, we have used 119 ontologies from the OBO Foundry repository and 78 ontologies from AgroPortal. First, we study the correlations between the metrics. Second, we study whether the clusters produced by a given metric are stable and have a good structure. The results show that the existing correlations do not bias the evaluation, no metric generates unstable clusterings, and all the metrics evaluated provide at least a reasonable clustering structure. Furthermore, our work permits reviewing and suggesting the most reliable ontology structural metrics in terms of the stability and goodness of their classifications. Availability: http://sele.inf.um.es/ontology-metrics.


Subjects
Biological Ontologies, Database Management Systems, Public Sector
3.
BMC Med Inform Decis Mak ; 20(Suppl 10): 284, 2020 12 15.
Article in English | MEDLINE | ID: mdl-33319711

ABSTRACT

BACKGROUND: The increasing adoption of ontologies in biomedical research and the growing number of available ontologies have made it necessary to assure the quality of these resources. Most well-established ontologies, such as the Gene Ontology or SNOMED CT, have their own quality assurance processes. These have demonstrated their usefulness for the maintenance of the resources but are unable to detect all of the modelling flaws in the ontologies. Consequently, the development of efficient and effective quality assurance methods is needed. METHODS: Here, we propose a series of quantitative metrics, based on the processing of the lexical regularities existing in the content of the ontology, to analyse readability and structural accuracy. The readability metrics account for the ratio of labels, descriptions, and synonyms associated with the ontology entities. The structural accuracy metrics evaluate how well two ontology modelling best practices are followed: (1) lexically suggest, locally define (LSLD), that is, whether what is expressed in natural language for humans is available as logical axioms for machines; and (2) systematic naming, which accounts for the amount of label content shared by the classes in a given taxonomy. RESULTS: We applied the metrics to different versions of SNOMED CT. Both readability and structural accuracy metrics remained stable over time but could capture some changes in the modelling decisions in SNOMED CT. The value of the LSLD metric increased from 0.27 to 0.31, and the value of the systematic naming metric was around 0.17. We analysed readability and structural accuracy in the SNOMED CT July 2019 release. The results showed that the fulfilment of the structural accuracy criteria varied among the SNOMED CT hierarchies. The value of the metrics for the hierarchies was in the range of 0-0.92 (LSLD) and 0.08-1 (systematic naming). We also identified the cases that did not meet the best practices.
CONCLUSIONS: We generated useful information about the engineering of the ontology, making the following contributions: (1) a set of readability metrics, (2) the use of lexical regularities to define structural accuracy metrics, and (3) the generation of quality assurance information for SNOMED CT.
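The two metric families above can be sketched on toy data: a readability metric as the ratio of concepts carrying a given lexical field, and systematic naming as the fraction of child labels sharing the parent's label content. The concepts and field names below are invented; real SNOMED CT content is distributed as RF2 release files, not Python dicts.

```python
# Invented toy concepts with the three lexical fields named in the abstract.
concepts = {
    "C1": {"label": "Fracture of femur",
           "description": "A break in the femur.",
           "synonyms": ["Femoral fracture"]},
    "C2": {"label": "Fracture of tibia",
           "description": "",
           "synonyms": []},
}

def readability(concepts, field):
    """Ratio of concepts carrying a non-empty value for a lexical field."""
    filled = sum(1 for c in concepts.values() if c[field])
    return filled / len(concepts)

def systematic_naming(parent_label, child_labels):
    """Fraction of child labels that contain the parent's label content."""
    key = parent_label.lower()
    return sum(1 for l in child_labels if key in l.lower()) / len(child_labels)
```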


Subjects
Biological Ontologies, Systematized Nomenclature of Medicine, Comprehension, Gene Ontology, Humans, Language, Natural Language Processing
4.
Bioinformatics ; 34(22): 3788-3794, 2018 11 15.
Article in English | MEDLINE | ID: mdl-29868922

ABSTRACT

Motivation: Translation is a key biological process, controlled in eukaryotes by the initiation AUG codon. Variations affecting this codon may have pathological consequences by disturbing the correct initiation of translation. Unfortunately, there is no systematic study describing these variations in the human genome. Moreover, we aimed to develop new tools for in silico prediction of the pathogenicity of gene variations affecting AUG codons, because to date these gene defects have been wrongly classified as missense. Results: Whole-exome analysis revealed a mean of 12 gene variations per person affecting initiation codons, mostly with high (>0.01) minor allele frequency (MAF). Moreover, analysis of Ensembl data (December 2017) revealed 11,261 genetic variations affecting the initiation AUG codon of 7,205 genes. Most of these variations (99.5%) have low or unknown MAF, probably reflecting deleterious consequences. Only 62 variations had high MAF. Genetic variations with high MAF had closer alternative downstream AUG codons than those with low MAF. In addition, the high-MAF group better maintained both the signal peptide and the reading frame. These differentiating elements could help to determine the pathogenicity of this kind of variation. Availability and implementation: Data and scripts in Perl and R are freely available at https://github.com/fanavarro/hemodonacion. Supplementary information: Supplementary data are available at Bioinformatics online.
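One of the differentiating features above — the distance from a lost initiation codon to the next in-frame downstream AUG — can be sketched as a simple scan over a coding sequence. The sequence below is invented for illustration; the paper's own scripts are in Perl and R.

```python
# Scan a coding sequence (DNA alphabet, so AUG appears as ATG) for the next
# in-frame downstream start codon after the original one at position 0.
def next_inframe_aug(cds):
    """Distance in nucleotides to the next in-frame ATG, or None if absent.

    Stepping by 3 keeps the scan in frame, so any hit preserves the
    original reading frame, one of the features the abstract highlights.
    """
    for i in range(3, len(cds) - 2, 3):
        if cds[i:i + 3] == "ATG":
            return i
    return None

cds = "ATGGCCACCATGGTT"  # invented: initiation ATG lost by a variant
```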


Subjects
Initiation Codon, Computational Biology, Human Genome, Codon, Humans, Protein Biosynthesis
5.
Bioinformatics ; 34(2): 323-329, 2018 Jan 15.
Article in English | MEDLINE | ID: mdl-28968857

ABSTRACT

The Quest for Orthologs (QfO) is an open collaboration framework for experts in comparative phylogenomics and related research areas who have an interest in highly accurate orthology predictions and their applications. We here report highlights and discussion points from the QfO meeting 2015 held in Barcelona. Achievements in recent years have established a basis to support developments for improved orthology prediction and to explore new approaches. Central to the QfO effort is proper benchmarking of methods and services, as well as design of standardized datasets and standardized formats to allow sharing and comparison of results. Simultaneously, analysis pipelines have been improved, evaluated and adapted to handle large datasets. All this would not have occurred without the long-term collaboration of Consortium members. Meeting regularly to review and coordinate complementary activities from a broad spectrum of innovative researchers clearly benefits the community. Highlights of the meeting include addressing sources of and legitimacy of disagreements between orthology calls, the context dependency of orthology definitions, special challenges encountered when analyzing very anciently rooted orthologies, orthology in the light of whole-genome duplications, and the concept of orthologous versus paralogous relationships at different levels, including domain-level orthology. Furthermore, particular needs for different applications (e.g. plant genomics, ancient gene families and others) and the infrastructure for making orthology inferences available (e.g. interfaces with model organism databases) were discussed, with several ongoing efforts that are expected to be reported on during the upcoming 2017 QfO meeting.

6.
J Biomed Inform ; 84: 59-74, 2018 08.
Article in English | MEDLINE | ID: mdl-29908358

ABSTRACT

Ontologies and terminologies have been identified as key resources for the achievement of semantic interoperability in biomedical domains. Ontologies are developed jointly by domain experts and knowledge engineers. The maintenance and auditing of these resources is also the responsibility of such experts, and it is usually a time-consuming, mostly manual task. Manual auditing is impractical and ineffective for most biomedical ontologies, especially larger ones. An example is SNOMED CT, a key resource in many countries for codifying medical information, which contains more than 300,000 concepts; consequently, its auditing requires the support of automatic methods. Many biomedical ontologies contain natural language content for humans and logical axioms for machines. The 'lexically suggest, logically define' principle states that there should be a relation between what is expressed in natural language and what is expressed as logical axioms, and that such a relation should be useful for auditing and quality assurance. The principle also implies that the natural language content for humans could be used to generate the logical axioms for machines. In this work, we propose a method that combines lexical analysis and clustering techniques to (1) identify regularities in the natural language content of ontologies; (2) cluster, by similarity, labels exhibiting a regularity; (3) extract relevant information from those clusters; and (4) propose logical axioms for each cluster with the support of axiom templates. These logical axioms can then be evaluated against the existing axioms in the ontology to check their correctness and completeness, which are two fundamental objectives in auditing and quality assurance.
In this paper, we describe the application of the method to two SNOMED CT modules: a 'congenital' module, obtained using concepts exhibiting the attribute Occurrence - Congenital, and a 'chronic' module, using concepts exhibiting the attribute Clinical course - Chronic. We obtained a precision of 75% and a recall of 28% for the 'congenital' module, and a precision of 64% and a recall of 40% for the 'chronic' one. We consider these results promising, so our method can contribute to supporting content editors by using automatic methods for assuring the quality of biomedical ontologies and terminologies.
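The four-step pipeline above can be sketched in miniature: cluster labels by a lexical regularity, then fill an axiom template for each cluster. The labels, the regularity (shared first word), and the attribute name below are invented for illustration, not actual SNOMED CT content or model attributes.

```python
from collections import defaultdict

# Invented labels exhibiting a simple lexical regularity (shared first word).
labels = [
    "Chronic bronchitis",
    "Chronic gastritis",
    "Acute bronchitis",
]

def cluster_by_regularity(labels):
    """Group labels by their first word, a crude stand-in for lexical regularities."""
    clusters = defaultdict(list)
    for label in labels:
        clusters[label.split()[0]].append(label)
    return dict(clusters)

def propose_axioms(cluster_key, members, attribute="clinicalCourse"):
    """Fill a template: each member SubClassOf (attribute some cluster_key)."""
    return [f"'{m}' SubClassOf {attribute} some {cluster_key}" for m in members]

clusters = cluster_by_regularity(labels)
axioms = propose_axioms("Chronic", clusters["Chronic"])
```

The proposed axioms would then be compared against the ontology's existing axioms to assess correctness and completeness.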


Subjects
Biological Ontologies, Computational Biology/methods, Systematized Nomenclature of Medicine, Algorithms, Cluster Analysis, Language, Medical Informatics, Natural Language Processing, Automated Pattern Recognition, Programming Languages, Quality Control, Reproducibility of Results, Software, Terminology as Topic
7.
J Biomed Inform ; 46(2): 304-17, 2013 Apr.
Article in English | MEDLINE | ID: mdl-23246613

ABSTRACT

Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which many researchers consider to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations, and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, the two largest publicly available repositories, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis reveals that different patterns of errors are found in each repository. This result reinforces the need for serious efforts to improve archetype design processes.


Subjects
Electronic Health Records, Medical Informatics/methods, Controlled Vocabulary, Humans, Theoretical Models, Reproducibility of Results, Semantics
8.
Front Plant Sci ; 14: 1120183, 2023.
Article in English | MEDLINE | ID: mdl-36778675

ABSTRACT

Short-term experiments have identified heat shock and cold response elements in many biological systems. However, the effect of long-term low or high temperatures is not well documented. To address this gap, we grew Antirrhinum majus plants from two weeks old until maturity under control (22/16°C), cold (15/5°C), and hot (30/23°C) conditions for a period of two years. Flower size, petal anthocyanin content and pollen viability reached higher values in cold conditions, decreasing at intermediate and high temperatures. Leaf chlorophyll content was higher in cold conditions and stable in control and hot temperatures, while pedicel length increased under hot conditions. The control conditions were optimal for scent emission and seed production. Scent complexity was low at cold temperatures. The transcriptomic analysis of mature flowers, followed by gene enrichment analysis and CNET plot visualization, showed two groups of genes. One group comprised genes controlling the affected traits, and a second group appeared as long-term adaptation to non-optimal temperatures. These included hypoxia, unsaturated fatty acid metabolism, ribosomal proteins, carboxylic acid, sugar and organic ion transport, and protein folding. We found a differential expression of floral organ identity functions, supporting the flower size data. Pollinator-related traits such as scent and color followed opposite trends, indicating an equilibrium for keeping the organs for pollination attractive under changing climate conditions. Prolonged heat or cold causes structural adaptations in protein synthesis and folding, membrane composition, and transport. Thus, adaptations to cope with non-optimal temperatures occur in basic cellular processes.

9.
iScience ; 26(11): 108214, 2023 Nov 17.
Article in English | MEDLINE | ID: mdl-37953943

ABSTRACT

Repetitive sequences represent about 45% of the human genome. Some are transposable elements (TEs) with the ability to change their position in the genome, creating genetic variability through insertions or deletions, with potential pathogenic consequences. We used long-read nanopore sequencing to identify TE variants in the genomes of 24 patients with antithrombin deficiency. We identified 7,344 TE insertions and 3,056 TE deletions, of which 2,926 were not previously described in publicly available databases. The insertions affected 3,955 genes, with 6 insertions located in exons, 3,929 in introns, and 147 in promoters. Potential functional impact was evaluated with gene annotation and enrichment analysis, which suggested a strong relationship with neuron-related functions and autism. We conclude that this study encourages the generation of a complete map of TEs in the human genome, which will be useful for identifying new TEs involved in genetic disorders.

10.
PLoS One ; 18(8): e0290372, 2023.
Article in English | MEDLINE | ID: mdl-37616197

ABSTRACT

The World Health Organization has estimated that air pollution will be one of the most significant environmental challenges in the coming years, and air quality monitoring and climate change mitigation actions have been promoted by the Paris Agreement because of their impact on mortality risk. Thus, it is essential to generate a methodology that supports experts in making decisions based on exposure data, identifying exposure-related activities, and proposing mitigation scenarios. In this context, the emergence of Interactive Process Mining, a discipline that has progressed in recent years in healthcare, could help to develop a methodology based on human knowledge. For this reason, we propose a new methodology for a sequence-oriented sensitivity analysis to identify the best activities and parameters for a mitigation policy. This methodology is innovative in the following points: (i) we present the first application of Interactive Process Mining to personal pollution exposure mitigation; (ii) our solution reduces the computational cost and time of traditional sensitivity analysis; (iii) the methodology is human-oriented, in the sense that the process should be carried out with the environmental expert; and (iv) our solution has been tested with synthetic data to explore its viability before moving to physical exposure measurements, taking the city of Valencia as the use case and overcoming the difficulty of performing exposure measurements. This dataset has been generated with a model that considers the demographic and epidemiological statistics of the city of Valencia. We have demonstrated that assessments done using sequence-oriented sensitivity analysis can identify target activities. The proposed scenarios can improve the initial KPIs: in the best scenario, we reduce the population exposure by 18% and the relative risk by 12%.
Consequently, our proposal could be used with real data in future steps, becoming an innovative approach to air pollution mitigation and environmental improvement.
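The core idea of varying one activity at a time and measuring the change in an exposure KPI can be sketched minimally. The activities, exposure levels, durations, and KPI below are invented for illustration; they are not the paper's model of Valencia or its actual KPIs.

```python
# One-at-a-time sensitivity sketch over a daily activity sequence:
# reduce one activity's exposure level and measure the KPI change.
baseline = {"commute": 40.0, "office": 10.0, "outdoor_sport": 60.0}  # invented levels
durations = {"commute": 1.0, "office": 8.0, "outdoor_sport": 1.0}    # hours per day

def exposure_kpi(levels):
    """Time-weighted mean exposure over the daily activity sequence."""
    total_time = sum(durations.values())
    return sum(levels[a] * durations[a] for a in levels) / total_time

def sensitivity(activity, reduction=0.5):
    """KPI improvement when one activity's exposure is reduced by a fraction."""
    scenario = dict(baseline)
    scenario[activity] *= (1 - reduction)
    return exposure_kpi(baseline) - exposure_kpi(scenario)

# The most sensitive activity is the best target for a mitigation policy.
best = max(baseline, key=sensitivity)
```

Here the long-duration activity dominates the KPI, so mitigating it yields the largest improvement, which is the kind of target activity the methodology aims to identify.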


Subjects
Air Pollution, Humans, Risk Assessment, Climate Change, Decision Making, Particulate Matter
11.
J Biomed Inform ; 45(4): 746-62, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22142945

ABSTRACT

Possibly the most important requirement to support cooperative work among health professionals and institutions is the ability to share EHRs in a meaningful way, and it is widely acknowledged that standardization of data and concepts is a prerequisite for achieving semantic interoperability in any domain. Different international organizations are working on the definition of EHR architectures, but the lack of tools that implement them hinders their broad adoption. In this paper we present ResearchEHR, a software platform whose objective is to facilitate the practical application of EHR standards as a way of reaching the desired semantic interoperability. This platform is suitable not only for developing new systems but also for increasing the standardization of existing ones. The work reported here describes how the platform allows for the editing, validation, and search of archetypes; converts legacy data into normalized archetype extracts; generates applications from archetypes; and, finally, transforms archetypes and data extracts into other EHR standards. We also describe how ResearchEHR has made possible the application of the CEN/ISO 13606 standard in a real environment and the lessons learned from this experience.


Subjects
Database Management Systems, Electronic Health Records/standards, Semantics, Humans, Reproducibility of Results, Systems Integration
12.
Stud Health Technol Inform ; 180: 963-7, 2012.
Article in English | MEDLINE | ID: mdl-22874336

ABSTRACT

Linking Electronic Healthcare Record (EHR) content to educational materials has been considered a key international recommendation to enable clinical engagement and to promote patient safety. This would help citizens access reliable information available on the web and guide them properly. In this paper, we describe an approach in that direction, based on the use of dual model EHR standards and standardized educational contents. The recommendation method is based on the semantic coverage of the learning content repository for a particular archetype, which is calculated by applying semantic web technologies such as ontologies and semantic annotations.


Subjects
Computer-Assisted Instruction/standards, Medical Education/methods, Medical Education/standards, Electronic Health Records, Personal Health Records, Medical Informatics/standards, Medical Record Linkage/standards, Internet/standards, Semantics, Spain
13.
Comput Struct Biotechnol J ; 20: 2728-2744, 2022.
Article in English | MEDLINE | ID: mdl-35685360

ABSTRACT

The process of gene regulation extends as a network in which both genetic sequences and proteins are involved. The levels of regulation and the mechanisms involved are multiple. Transcription is the main control mechanism for most genes, with downstream steps responsible for refining transcription patterns. In turn, gene transcription is mainly controlled by regulatory events that occur at promoters and enhancers. Several studies focus on analyzing the contribution of enhancers to the development of diseases and their possible use as therapeutic targets. The study of regulatory elements has advanced rapidly in recent years with the development and use of next-generation sequencing techniques. This has generated a large volume of information that has been transferred to a growing number of public repositories. In this article, we analyze the content of the public repositories that contain information about human enhancers, with the aim of detecting whether the knowledge generated by scientific research is contained in those databases in a way that could be computationally exploited. The analysis is based on three main aspects identified in the literature: types of enhancers, types of evidence about enhancers, and methods for detecting enhancer-promoter interactions. Our results show that no single database facilitates the optimal exploitation of enhancer data, most types of enhancers are not represented in the databases, and there is a need for a standardized model for enhancers. We have identified major gaps and challenges in the computational exploitation of enhancer data.

14.
J Biomed Semantics ; 13(1): 19, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35841031

ABSTRACT

BACKGROUND: Ontology matching should contribute to the interoperability aspect of FAIR data (Findable, Accessible, Interoperable, and Reusable). Multiple data sources can use different ontologies for annotating their data and, thus, creating the need for dynamic ontology matching services. In this experimental study, we assessed the performance of ontology matching systems in the context of a real-life application from the rare disease domain. Additionally, we present a method for analyzing top-level classes to improve precision. RESULTS: We included three ontologies (NCIt, SNOMED CT, ORDO) and three matching systems (AgreementMakerLight 2.0, FCA-Map, LogMap 2.0). We evaluated the performance of the matching systems against reference alignments from BioPortal and the Unified Medical Language System Metathesaurus (UMLS). Then, we analyzed the top-level ancestors of matched classes, to detect incorrect mappings without consulting a reference alignment. To detect such incorrect mappings, we manually matched semantically equivalent top-level classes of ontology pairs. AgreementMakerLight 2.0, FCA-Map, and LogMap 2.0 had F1-scores of 0.55, 0.46, 0.55 for BioPortal and 0.66, 0.53, 0.58 for the UMLS respectively. Using vote-based consensus alignments increased performance across the board. Evaluation with manually created top-level hierarchy mappings revealed that on average 90% of the mappings' classes belonged to top-level classes that matched. CONCLUSIONS: Our findings show that the included ontology matching systems automatically produced mappings that were modestly accurate according to our evaluation. The hierarchical analysis of mappings seems promising when no reference alignments are available. All in all, the systems show potential to be implemented as part of an ontology matching service for querying FAIR data. 
Future research should focus on developing methods for the evaluation of mappings used in such mapping services, leading to their implementation in a FAIR data ecosystem.
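The evaluation described above — scoring each system's mappings against a reference alignment with F1, and building a vote-based consensus — can be sketched on toy data. The mapping pairs below are invented; they stand in for class-to-class correspondences between ontologies such as NCIt, SNOMED CT, and ORDO.

```python
from collections import Counter

# Invented reference alignment and per-system mapping sets (pairs of class IDs).
ref = {("a1", "b1"), ("a2", "b2"), ("a3", "b3")}
systems = {
    "sysA": {("a1", "b1"), ("a2", "b2"), ("a4", "b9")},
    "sysB": {("a1", "b1"), ("a3", "b3")},
    "sysC": {("a1", "b1"), ("a2", "b2"), ("a5", "b5")},
}

def f1(pred, ref):
    """Harmonic mean of precision and recall against a reference alignment."""
    tp = len(pred & ref)
    precision = tp / len(pred)
    recall = tp / len(ref)
    return 2 * precision * recall / (precision + recall)

def consensus(systems, min_votes=2):
    """Vote-based consensus: keep mappings proposed by at least min_votes systems."""
    votes = Counter(m for preds in systems.values() for m in preds)
    return {m for m, v in votes.items() if v >= min_votes}
```

Requiring agreement between systems filters out the singleton (likely incorrect) mappings, which is why the vote-based consensus increased performance across the board.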


Subjects
Biological Ontologies, Ecosystem, Consensus, Information Storage and Retrieval, Systematized Nomenclature of Medicine, Unified Medical Language System
15.
J Biomed Inform ; 44(5): 869-80, 2011 Oct.
Article in English | MEDLINE | ID: mdl-21645637

ABSTRACT

The semantic interoperability between health information systems is a major challenge to improve the quality of clinical practice and patient safety. In recent years many projects have faced this problem and provided solutions based on specific standards and technologies in order to satisfy the needs of a particular scenario. Most of such solutions cannot be easily adapted to new scenarios, thus more global solutions are needed. In this work, we have focused on the semantic interoperability of electronic healthcare records standards based on the dual model architecture and we have developed a solution that has been applied to ISO 13606 and openEHR. The technological infrastructure combines reference models, archetypes and ontologies, with the support of Model-driven Engineering techniques. For this purpose, the interoperability infrastructure developed in previous work by our group has been reused and extended to cover the requirements of data transformation.


Subjects
Computerized Medical Records Systems, Semantics, Factual Databases, Humans, Theoretical Models
16.
J Biomed Inform ; 44(6): 1020-31, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21864715

ABSTRACT

Semantic Web technologies such as RDF and OWL are currently applied in the life sciences to improve knowledge management by integrating disparate information. Many of the systems that perform this task, however, only offer a SPARQL query interface, which is difficult for life scientists to use. We present the OGO system, which consists of a knowledge base that integrates information on orthologous sequences and genetic diseases, providing an easy-to-use, ontology-constraint-driven query interface. This interface allows users to define SPARQL queries through a graphical process, therefore not requiring SPARQL expertise.


Subjects
Disease/genetics, Information Storage and Retrieval/methods, Knowledge Bases, Semantics, Animals, Factual Databases, Humans, Internet, Neoplasms/genetics, Controlled Vocabulary
17.
Stud Health Technol Inform ; 169: 789-93, 2011.
Article in English | MEDLINE | ID: mdl-21893855

ABSTRACT

Electronic Health Record architectures based on the dual model architecture use archetypes for representing clinical knowledge. Therefore, ensuring their correctness and consistency is a fundamental research goal. In this work, we explore how an approach based on OWL technologies can be used for this purpose. The method has been applied to the openEHR archetype repository, which is currently the largest one available. The results of this validation are also reported in this study.


Subjects
Electronic Health Records, Medical Informatics/methods, Medical Record Linkage/standards, Algorithms, Humans, Internet, Computerized Medical Records Systems, Programming Languages, Health Care Quality Assurance, Reproducibility of Results, Semantics, Software, Systematized Nomenclature of Medicine, Systems Integration
18.
Biochim Biophys Acta Gene Regul Mech ; 1864(11-12): 194766, 2021.
Article in English | MEDLINE | ID: mdl-34710644

ABSTRACT

Gene regulation computational research requires handling and integrating large amounts of heterogeneous data. The Gene Ontology has demonstrated that ontologies play a fundamental role in biological data interoperability and integration. Ontologies help to express data and knowledge in a machine-processable way, which enables complex querying and advanced exploitation of distributed data. Contributing to improved data interoperability in gene regulation is a major objective of the GREEKC Consortium, which aims to develop a standardized gene regulation knowledge commons. GREEKC proposes the use of ontologies and semantic tools for developing interoperable gene regulation knowledge models, which should support data annotation. In this work, we study how such knowledge models can be generated from cartoons of gene regulation scenarios. The proposed method consists of generating descriptions in natural language of the cartoons; extracting the entities from the texts; finding those entities in existing ontologies to reuse as much content as possible, especially from well-known and maintained ontologies such as the Gene Ontology, the Sequence Ontology, the Relations Ontology and ChEBI; and implementing the knowledge models. The models have been implemented using Protégé, a general ontology editor, and Noctua, the tool developed by the Gene Ontology Consortium for building causal activity models, which capture more comprehensive annotations of genes and link their activities in a causal framework for Gene Ontology annotations. We applied the method to two gene regulation scenarios and illustrate how the resulting models can support the annotation of data from research articles.


Subjects
Gene Expression Regulation, Genetic Models, Data Curation, Gene Ontology, Molecular Sequence Annotation
19.
J Biomed Inform ; 43(5): 736-46, 2010 Oct.
Article in English | MEDLINE | ID: mdl-20561912

ABSTRACT

The communication between the health information systems of hospitals and primary care organizations is currently an important challenge for improving the quality of clinical practice and patient safety. However, clinical information is usually distributed among several independent systems that may be syntactically or semantically incompatible. This fact prevents healthcare professionals from accessing the clinical information of patients in an understandable and normalized way. In this work, we address the semantic interoperability of two EHR standards: openEHR and ISO EN 13606. Both standards follow the dual model approach, which distinguishes information from knowledge, the latter being represented through archetypes. The solution presented here is capable of transforming openEHR archetypes into ISO EN 13606 and vice versa by combining Semantic Web and Model-driven Engineering technologies. The resulting software implementation has been tested using publicly available collections of archetypes for both standards.


Subjects
Computer Communication Networks, Database Management Systems, Electronic Health Records, Information Storage and Retrieval, Theoretical Models
20.
Stud Health Technol Inform ; 155: 129-35, 2010.
Article in English | MEDLINE | ID: mdl-20543320

ABSTRACT

In this paper, we present the ResearchEHR project. It focuses on the usability of Electronic Health Record (EHR) sources and EHR standards for building advanced clinical systems. The aim is to support healthcare professionals, institutions and authorities by providing a set of generic methods and tools for the capture, standardization, integration, description and dissemination of health-related information. ResearchEHR combines several tools to manage EHRs at two different levels: an internal level, which deals with the normalization and semantic upgrading of existing EHRs by using archetypes, and an external level, which uses Semantic Web technologies to specify clinical archetypes for advanced EHR architectures and systems.


Subjects
Biomedical Research/methods, Electronic Health Records/organization & administration, Medical Record Linkage/methods, Semantics, Biomedical Research/standards, Electronic Health Records/standards, Humans, Systems Integration