Results 1 - 20 of 22
1.
Nucleic Acids Res ; 2024 Jul 05.
Article in English | MEDLINE | ID: mdl-38967009

ABSTRACT

Knowledge about transcription factor binding and regulation, target genes, cis-regulatory modules and topologically associating domains is not only defined by functional associations like biological processes or diseases but also has a determinative genome location aspect. Here, we exploit these location and functional aspects together to develop new strategies to enable advanced data querying. Many databases have been developed to provide information about enhancers, but a schema that allows the standardized representation of data, securing interoperability between resources, has been lacking. In this work, we use knowledge graphs for the standardized representation of enhancers and topologically associating domains, together with data about their target genes, transcription factors, location on the human genome, and functional data about diseases and gene ontology annotations. We used this schema to integrate twenty-five enhancer datasets and two domain datasets, creating the most powerful integrative resource in this field to date. The knowledge graphs have been implemented using the Resource Description Framework and integrated within the open-access BioGateway knowledge network, generating a resource that contains an interoperable set of knowledge graphs (enhancers, TADs, genes, proteins, diseases, GO terms, and interactions between domains). We show how advanced queries, which combine functional and location restrictions, can be used to develop new hypotheses about functional aspects of gene expression regulation.
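To make the location-plus-function querying concrete, here is a minimal SPARQL sketch run from Python with SPARQLWrapper. The endpoint URL, prefix, and property names (ex:locatedOnChromosome, ex:regulates, ex:associatedWithDisease) are invented placeholders, not the actual BioGateway schema.

```python
# Sketch of a combined location + function query over an RDF knowledge graph.
# Endpoint, prefix, and predicates are illustrative placeholders only.
from SPARQLWrapper import SPARQLWrapper, JSON

ENDPOINT = "https://example.org/sparql"  # hypothetical endpoint

query = """
PREFIX ex: <http://example.org/regulation#>
SELECT ?enhancer ?gene WHERE {
  ?enhancer a ex:Enhancer ;
            ex:locatedOnChromosome "chr17" ;
            ex:regulates ?gene .
  ?gene ex:associatedWithDisease ?disease .
  ?disease ex:label "breast carcinoma" .
}
LIMIT 50
"""

sparql = SPARQLWrapper(ENDPOINT)
sparql.setQuery(query)
sparql.setReturnFormat(JSON)
results = sparql.query().convert()

for row in results["results"]["bindings"]:
    print(row["enhancer"]["value"], "->", row["gene"]["value"])
```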

2.
J Biomed Inform ; 139: 104297, 2023 03.
Article in English | MEDLINE | ID: mdl-36736448

ABSTRACT

SNOMED CT postcoordination is an underused mechanism that can help to implement advanced systems for the automatic extraction and encoding of clinical information from text. It allows concepts that do not exist in SNOMED CT to be defined by their relationships with existing ones. Manually building postcoordinated expressions is a difficult task: it requires deep knowledge of the terminology and the support of specialized tools, which barely exist. In order to support the building of postcoordinated expressions, we have implemented KGE4SCT, a method that suggests the corresponding SNOMED CT postcoordinated expression for a given clinical term. We leverage the SNOMED CT ontology and its graph-like structure and use knowledge graph embeddings (KGEs). The objective of such embeddings is to represent knowledge graph components (e.g. entities and relations) in a vector space in a way that captures the structure of the graph. We then use vector similarity and analogies to obtain the postcoordinated expression of a given clinical term. We obtained a semantic type accuracy of 98%, a relationship accuracy of 90%, and an analogy accuracy of 60%, with an overall postcoordination completeness of 52% for the Spanish SNOMED CT version. We have also applied the method to the English SNOMED CT version and outperformed state-of-the-art methods in both corpus generation for language model training for this task (an improvement of 6% in analogy accuracy) and automatic postcoordination of SNOMED CT expressions, with an increase of 17% in partial conversion rate.
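The analogy step can be illustrated with a small numpy sketch. The embeddings below are random placeholders and the concept names are invented, so the resulting ranking is not meaningful; the sketch only shows the vector arithmetic that KGE-based analogy methods of this kind rely on.

```python
# Toy analogy over knowledge graph embeddings: given a known pair
# (term_a -> target_a) and a new term_b, rank candidate targets by
# cosine similarity to (v(target_a) - v(term_a) + v(term_b)).
import numpy as np

rng = np.random.default_rng(0)
# Placeholder embeddings for a handful of invented SNOMED CT-like concepts.
emb = {name: rng.normal(size=50) for name in
       ["fracture of femur", "femur", "fracture of tibia", "tibia", "skin", "lung"]}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def analogy(a, b, c, candidates):
    """Return candidates ranked so that a : b ~ c : result."""
    target = emb[b] - emb[a] + emb[c]
    return sorted(candidates, key=lambda x: cosine(emb[x], target), reverse=True)

# "fracture of femur" relates to "femur" as "fracture of tibia" relates to ... ?
print(analogy("fracture of femur", "femur", "fracture of tibia",
              ["tibia", "skin", "lung"]))
```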


Subject(s)
Semantics , Systematized Nomenclature of Medicine , Pattern Recognition, Automated , Language , Natural Language Processing
3.
BMC Med Inform Decis Mak ; 15: 12, 2015 Feb 22.
Article in English | MEDLINE | ID: mdl-25880555

ABSTRACT

BACKGROUND: Every year, hundreds of thousands of patients experience treatment failure or adverse drug reactions (ADRs), many of which could be prevented by pharmacogenomic testing. However, the primary knowledge needed for clinical pharmacogenomics is currently dispersed over disparate data structures and captured in unstructured or semi-structured formalizations. This is a source of potential ambiguity and complexity, making it difficult to create reliable information technology systems for enabling clinical pharmacogenomics. METHODS: We developed Web Ontology Language (OWL) ontologies and automated reasoning methodologies to meet the following goals: 1) provide a simple and concise formalism for representing pharmacogenomic knowledge, 2) find errors and insufficient definitions in pharmacogenomic knowledge bases, 3) automatically assign alleles and phenotypes to patients, 4) match patients to clinically appropriate pharmacogenomic guidelines and clinical decision support messages and 5) facilitate the detection of inconsistencies and overlaps between pharmacogenomic treatment guidelines from different sources. We evaluated different reasoning systems and tested our approach with a large collection of publicly available genetic profiles. RESULTS: Our methodology proved to be a novel and useful choice for representing, analyzing and using pharmacogenomic data. The Genomic Clinical Decision Support (Genomic CDS) ontology represents 336 SNPs with 707 variants; 665 haplotypes related to 43 genes; 22 rules related to drug-response phenotypes; and 308 clinical decision support rules. OWL reasoning identified CDS rules with overlapping target populations but differing treatment recommendations. Only a modest number of clinical decision support rules were triggered for a collection of 943 public genetic profiles. We found significant performance differences across available OWL reasoners. CONCLUSIONS: The ontology-based framework we developed can be used to represent, organize and reason over the growing wealth of pharmacogenomic knowledge, as well as to identify errors, inconsistencies and insufficient definitions in source data sets or individual patient data. Our study highlights both advantages and potential practical issues with such an ontology-based approach.
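As a rough sketch of the reasoning workflow (not the actual Genomic CDS ontology, whose file name, class names, and individuals are unknown here and are therefore placeholders), one could load an OWL file with owlready2, run a DL reasoner, and read back the inferred classes of patient individuals:

```python
# Load a pharmacogenomics-style ontology, classify individuals with a DL
# reasoner, and print the inferred classes. File name and the "*patient*"
# search pattern are placeholders for illustration only.
from owlready2 import get_ontology, sync_reasoner

onto = get_ontology("file://genomic_cds_demo.owl").load()

with onto:
    sync_reasoner()  # runs the bundled HermiT reasoner; inferences are added in place

# After reasoning, each individual's is_a list also contains inferred classes
# (e.g. allele, phenotype, or triggered CDS-rule classes in such an ontology).
for patient in onto.search(iri="*patient*"):
    inferred = [c.name for c in patient.is_a if hasattr(c, "name")]
    print(patient.name, "->", inferred)
```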


Subject(s)
Biological Ontologies , Decision Support Systems, Clinical , Drug-Related Side Effects and Adverse Reactions/prevention & control , Pharmacogenetics/methods , Practice Guidelines as Topic , Precision Medicine/methods , Artificial Intelligence , Clinical Decision-Making , Humans
4.
J Biomed Inform ; 45(4): 746-62, 2012 Aug.
Article in English | MEDLINE | ID: mdl-22142945

ABSTRACT

Possibly the most important requirement for supporting co-operative work among health professionals and institutions is the ability to share EHRs in a meaningful way, and it is widely acknowledged that standardization of data and concepts is a prerequisite for achieving semantic interoperability in any domain. Different international organizations are working on the definition of EHR architectures, but the lack of tools that implement them hinders their broad adoption. In this paper we present ResearchEHR, a software platform whose objective is to facilitate the practical application of EHR standards as a way of reaching the desired semantic interoperability. This platform is suitable not only for developing new systems but also for increasing the standardization of existing ones. The work reported here describes how the platform supports the editing, validation, and search of archetypes; converts legacy data into normalized archetype extracts; is able to generate applications from archetypes; and, finally, transforms archetypes and data extracts into other EHR standards. We also describe how ResearchEHR has made possible the application of the CEN/ISO 13606 standard in a real environment and the lessons learned from this experience.


Subject(s)
Database Management Systems , Electronic Health Records/standards , Semantics , Humans , Reproducibility of Results , Systems Integration
5.
J Biomed Semantics ; 13(1): 19, 2022 07 15.
Article in English | MEDLINE | ID: mdl-35841031

ABSTRACT

BACKGROUND: Ontology matching should contribute to the interoperability aspect of FAIR data (Findable, Accessible, Interoperable, and Reusable). Multiple data sources can use different ontologies for annotating their data, thus creating the need for dynamic ontology matching services. In this experimental study, we assessed the performance of ontology matching systems in the context of a real-life application from the rare disease domain. Additionally, we present a method for analyzing top-level classes to improve precision. RESULTS: We included three ontologies (NCIt, SNOMED CT, ORDO) and three matching systems (AgreementMakerLight 2.0, FCA-Map, LogMap 2.0). We evaluated the performance of the matching systems against reference alignments from BioPortal and the Unified Medical Language System Metathesaurus (UMLS). Then, we analyzed the top-level ancestors of matched classes to detect incorrect mappings without consulting a reference alignment; to do so, we manually matched semantically equivalent top-level classes of ontology pairs. AgreementMakerLight 2.0, FCA-Map, and LogMap 2.0 had F1-scores of 0.55, 0.46, and 0.55 for BioPortal and 0.66, 0.53, and 0.58 for the UMLS, respectively. Using vote-based consensus alignments increased performance across the board. Evaluation with manually created top-level hierarchy mappings revealed that, on average, 90% of the mappings' classes belonged to top-level classes that matched. CONCLUSIONS: Our findings show that the included ontology matching systems automatically produced mappings that were modestly accurate according to our evaluation. The hierarchical analysis of mappings seems promising when no reference alignments are available. All in all, the systems show potential to be implemented as part of an ontology matching service for querying FAIR data. Future research should focus on developing methods for the evaluation of mappings used in such mapping services, leading to their implementation in a FAIR data ecosystem.
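The evaluation against a reference alignment and the vote-based consensus can both be expressed in a few lines; the mappings below are invented identifier pairs used only to illustrate the computation, not results from the study.

```python
# Precision/recall/F1 of candidate mappings against a reference alignment,
# plus a simple majority-vote consensus across several matching systems.
from collections import Counter

def prf(candidate, reference):
    tp = len(candidate & reference)
    precision = tp / len(candidate) if candidate else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

# Toy mappings: (source class, target class) pairs proposed by three systems.
aml       = {("ncit:C0001", "ordo:ORPHA_100"), ("ncit:C0002", "ordo:ORPHA_200")}
fcamap    = {("ncit:C0001", "ordo:ORPHA_100"), ("ncit:C0009", "ordo:ORPHA_900")}
logmap    = {("ncit:C0001", "ordo:ORPHA_100"), ("ncit:C0002", "ordo:ORPHA_200")}
reference = {("ncit:C0001", "ordo:ORPHA_100"), ("ncit:C0002", "ordo:ORPHA_200")}

for name, system in [("AML", aml), ("FCA-Map", fcamap), ("LogMap", logmap)]:
    print(name, prf(system, reference))

# Vote-based consensus: keep mappings proposed by at least 2 of the 3 systems.
votes = Counter(m for system in (aml, fcamap, logmap) for m in system)
consensus = {m for m, n in votes.items() if n >= 2}
print("consensus", prf(consensus, reference))
```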


Subject(s)
Biological Ontologies , Ecosystem , Consensus , Information Storage and Retrieval , Systematized Nomenclature of Medicine , Unified Medical Language System
6.
J Biomed Inform ; 44(6): 1020-31, 2011 Dec.
Article in English | MEDLINE | ID: mdl-21864715

ABSTRACT

Semantic Web technologies such as RDF and OWL are currently applied in the life sciences to improve knowledge management by integrating disparate information. Many of the systems that perform this task, however, only offer a SPARQL query interface, which is difficult for life scientists to use. We present the OGO system, which consists of a knowledge base that integrates information about orthologous sequences and genetic diseases, providing an easy-to-use, ontology-constraint-driven query interface. This interface allows users to define SPARQL queries through a graphical process, therefore not requiring SPARQL expertise.


Subject(s)
Disease/genetics , Information Storage and Retrieval/methods , Knowledge Bases , Semantics , Animals , Databases, Factual , Humans , Internet , Neoplasms/genetics , Vocabulary, Controlled
7.
BMC Bioinformatics ; 10 Suppl 10: S13, 2009 Oct 01.
Article in English | MEDLINE | ID: mdl-19796397

ABSTRACT

BACKGROUND: Several information resources about orthology of genes and proteins exist, as do systems for querying those resources in an integrated way. However, current approaches suffer from a lack of integration: results are shown sequentially by resource, so information is redundant and users are required to combine the results manually. RESULTS: In this paper we have applied the Ontological Gene Orthology approach, which makes use of a domain ontology to integrate the information output from selected orthology resources. The integrated information is stored in a knowledge base, which can be queried through semantic languages. A friendly user interface has been developed to facilitate the search; consequently, users do not need knowledge of ontologies or ontological languages to obtain the relevant information. CONCLUSION: The development and application of our approach allows users to retrieve integrated results when querying orthology information, providing a gene-product-oriented output instead of the traditional resource-oriented one. Besides this benefit for users, it also allows better exploitation and management of orthology information and knowledge.


Subject(s)
Computational Biology/methods , Information Storage and Retrieval/methods , Software , Databases, Factual , Natural Language Processing
8.
Stud Health Technol Inform ; 247: 666-670, 2018.
Article in English | MEDLINE | ID: mdl-29678044

ABSTRACT

Organised repositories of published scientific literature represent a rich source for research in knowledge representation. MEDLINE, one of the largest and most popular biomedical literature databases, provides metadata for over 24 million articles, each of which is indexed using the MeSH controlled vocabulary. In order to reuse MeSH annotations for knowledge construction, we processed these data and extracted the most relevant patterns of assigned descriptors over time. The patterns consist of UMLS semantic groups related to the MeSH headings together with their associated MeSH subheadings. Then, we connected the patterns with the most frequent predicates in their corresponding MEDLINE abstracts. Thereafter, we conducted a time-series analysis of the extracted patterns from MEDLINE records and their associated predicates in order to study the evolution of manual MeSH indexing. The results show an increasing diversity of the assigned MeSH terms over time, along with the increase in scientific publications per year. We obtained evidence of consistency of the relevant predicates associated with the extracted patterns. Moreover, for the most frequent patterns some predicates predominate over others, such as Treats between substances and disorders, Causes between pairs of disorders, or Interacts between pairs of substances.
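A schematic of the pattern counting over time, assuming the MEDLINE records have already been reduced to (year, semantic-group pattern) tuples; the field layout and group labels below are assumptions made for illustration, not the study's actual data.

```python
# Count, per publication year, how often each pattern of UMLS semantic groups
# (derived from the MeSH headings/subheadings of one record) occurs.
from collections import Counter

# Toy input: one tuple per record, already mapped from MeSH headings to
# semantic-group/subheading labels; real data would come from MEDLINE + UMLS.
records = [
    (2001, ("CHEM/pharmacology", "DISO/drug therapy")),
    (2001, ("CHEM/pharmacology", "DISO/drug therapy")),
    (2015, ("DISO/genetics", "GENE")),
]

patterns_by_year = Counter((year, frozenset(groups)) for year, groups in records)

for (year, pattern), count in sorted(patterns_by_year.items()):
    print(year, sorted(pattern), count)
```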


Subject(s)
Data Mining , MEDLINE , Medical Subject Headings , Databases, Factual , Humans , Semantics
9.
PLoS One ; 13(12): e0209547, 2018.
Article in English | MEDLINE | ID: mdl-30589855

ABSTRACT

SNOMED CT provides about 300,000 codes with fine-grained concept definitions to support the interoperability of health data. Coding clinical texts with medical terminologies is not a trivial task and is prone to disagreements between coders. We conducted a qualitative analysis to identify sources of disagreement in an annotation experiment which used a subset of SNOMED CT with some restrictions. A corpus of 20 English clinical text fragments from diverse origins and languages was annotated independently by two medically trained domain annotators following a specific annotation guideline. By following this guideline, the annotators had to assign sets of SNOMED CT codes to noun phrases, together with concept and term coverage ratings. Then, the annotations were manually examined against a reference standard to determine sources of disagreement. Five categories were identified. In our results, the most frequent cause of inter-annotator disagreement was related to human issues. In several cases disagreements revealed gaps in the annotation guidelines and a lack of training of annotators. The remaining issues can be influenced by some SNOMED CT features.


Subject(s)
Data Curation , Systematized Nomenclature of Medicine , Evaluation Studies as Topic , Guidelines as Topic , Humans
10.
Stud Health Technol Inform ; 235: 446-450, 2017.
Article in English | MEDLINE | ID: mdl-28423832

ABSTRACT

SNOMED CT supports post-coordination, a technique for combining clinical concepts to ontologically define more complex concepts. This technique follows the validity restrictions defined in the SNOMED CT Concept Model. Pre-coordinated expressions are compositional expressions already in SNOMED CT, whereas post-coordinated expressions extend its content. In this project we aim to evaluate the suitability of existing pre-coordinated expressions to provide the patterns for composing typical clinical information, based on a defined list of sets of interrelated SNOMED CT concepts. The method achieves 9.3% precision and 95.9% recall. Consequently, further investigation is needed to develop heuristics for selecting the most meaningful matched patterns in order to improve precision.


Subject(s)
Information Storage and Retrieval , Systematized Nomenclature of Medicine , Vocabulary, Controlled
11.
Stud Health Technol Inform ; 228: 582-6, 2016.
Article in English | MEDLINE | ID: mdl-27577450

ABSTRACT

Big data resources are difficult to process without a scaled hardware environment that is specifically adapted to the problem. The emergence of flexible cloud-based virtualization techniques promises solutions to this problem. This paper demonstrates how a billion lines can be processed in a reasonable amount of time in a cloud-based environment. Our use case addresses the accumulation of concept co-occurrence data in MEDLINE annotations as a series of MapReduce jobs, which can be scaled and executed in the cloud. Besides showing an efficient way of solving this problem, we generated an additional resource for the scientific community to be used for advanced text-mining approaches.
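A co-occurrence count of this kind maps naturally onto a MapReduce job. The sketch below uses the mrjob library and assumes an invented input layout (one record per line, with a tab-separated PMID and semicolon-separated descriptors); it is not the paper's actual pipeline or infrastructure.

```python
# Count pairwise co-occurrences of MeSH descriptors within one MEDLINE record.
# Assumes each input line is "PMID<TAB>descriptor1;descriptor2;..." which is
# an invented layout used only for illustration.
from itertools import combinations
from mrjob.job import MRJob

class MRMeshCooccurrence(MRJob):

    def mapper(self, _, line):
        _pmid, descriptors = line.split("\t", 1)
        terms = sorted(set(descriptors.split(";")))
        for a, b in combinations(terms, 2):
            yield (a, b), 1

    def reducer(self, pair, counts):
        yield pair, sum(counts)

if __name__ == "__main__":
    MRMeshCooccurrence.run()
```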


Subject(s)
Cloud Computing , MEDLINE , Medical Subject Headings , Data Mining , Humans , MEDLINE/statistics & numerical data
12.
J Biomed Semantics ; 7: 32, 2016 Jun 03.
Article in English | MEDLINE | ID: mdl-27255189

ABSTRACT

BACKGROUND: Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources, which makes the integrated exploitation of such data difficult. The Semantic Web paradigm offers a natural technological space for data integration and exploitation by generating content readable by machines. Linked Open Data is a Semantic Web initiative that promotes the publication and sharing of data in machine-readable semantic formats. METHODS: We present an approach for the transformation and integration of heterogeneous biomedical data with the objective of generating open biomedical datasets in Semantic Web formats. The transformation of the data is based on the mappings between the entities of the data schema and the ontological infrastructure that gives meaning to the content. Our approach permits different types of mappings and includes the possibility of defining complex transformation patterns. Once the mappings are defined, they can be automatically applied to datasets to generate logically consistent content, and the mappings can be reused in further transformation processes. RESULTS: The results of our research are (1) a common transformation and integration process for heterogeneous biomedical data; (2) the application of Linked Open Data principles to generate interoperable, open, biomedical datasets; (3) a software tool, called SWIT, that implements the approach. In this paper we also describe how we have applied SWIT in different biomedical scenarios and some lessons learned. CONCLUSIONS: We have presented an approach that is able to generate open biomedical repositories in Semantic Web formats. SWIT is able to apply the Linked Open Data principles in the generation of the datasets, thus allowing their content to be linked to external repositories and creating linked open datasets. SWIT datasets may contain data from multiple sources and schemas, thus becoming integrated datasets.
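A toy version of the mapping idea, using rdflib: table columns are mapped to ontology properties and emitted as RDF. The namespace and property names are placeholders, not SWIT's actual mapping language or schema.

```python
# Map rows of a source table to RDF triples through a small declarative mapping,
# then serialize as Turtle. Namespace and property names are placeholders.
from rdflib import Graph, Literal, Namespace, RDF, URIRef

EX = Namespace("http://example.org/biomed#")

rows = [
    {"gene_id": "BRCA1", "disease": "breast carcinoma"},
    {"gene_id": "CFTR", "disease": "cystic fibrosis"},
]

# Declarative mapping: column name -> (RDF property, object constructor)
mapping = {
    "disease": (EX.associatedWithDisease, lambda v: Literal(v)),
}

g = Graph()
g.bind("ex", EX)
for row in rows:
    subject = URIRef(EX[row["gene_id"]])
    g.add((subject, RDF.type, EX.Gene))
    for column, (prop, build) in mapping.items():
        g.add((subject, prop, build(row[column])))

print(g.serialize(format="turtle"))
```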


Subject(s)
Biological Ontologies , Biomedical Research , Databases, Factual , Semantics , Electronic Health Records , Humans , Internet
13.
Stud Health Technol Inform ; 228: 765-9, 2016.
Article in English | MEDLINE | ID: mdl-27577489

ABSTRACT

The construction and publication of predications from scientific literature databases like MEDLINE is necessary due to the large amount of resources available. The main goal is to infer meaningful predicates between relevant co-occurring MeSH concepts manually annotated in MEDLINE records. The resulting predications take the form of subject-predicate-object triples. We exploit the content of the MRCOC file to extract the MeSH indexing terms (main headings and subheadings) of MEDLINE. The predications were inferred by combining the semantic predicates from SemMedDB, the clustering of MeSH terms by their associated MeSH subheadings, and the frequency of relevant terms in the abstracts of MEDLINE records. The inference process also obtains and associates a weight with each generated predication. As a result, we published the generated dataset of predications using the Linked Data principles to make it available for future projects.
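A schematic of how weighted predications might be assembled from co-occurrence counts and candidate predicates; the counts, predicate frequencies, and the weighting formula are invented for illustration and do not reproduce the paper's pipeline.

```python
# Combine MeSH co-occurrence counts with candidate predicates (in the spirit of
# SemMedDB-derived predicates) into weighted subject-predicate-object triples.
cooccurrence = {
    ("Metformin", "Diabetes Mellitus, Type 2"): 1450,
    ("Aspirin", "Myocardial Infarction"): 980,
}

# Candidate predicates with a relative frequency per subject/object pair (toy values).
candidate_predicates = {
    ("Metformin", "Diabetes Mellitus, Type 2"): {"TREATS": 0.8, "AFFECTS": 0.2},
    ("Aspirin", "Myocardial Infarction"): {"PREVENTS": 0.6, "TREATS": 0.4},
}

predications = []
for pair, count in cooccurrence.items():
    for predicate, freq in candidate_predicates.get(pair, {}).items():
        weight = count * freq  # toy weighting scheme
        predications.append((pair[0], predicate, pair[1], weight))

for s, p, o, w in sorted(predications, key=lambda t: -t[3]):
    print(f"{s} --{p}--> {o}  (weight={w:.0f})")
```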


Subject(s)
MEDLINE , Medical Subject Headings , Cluster Analysis , Semantics
14.
Stud Health Technol Inform ; 210: 597-601, 2015.
Article in English | MEDLINE | ID: mdl-25991218

ABSTRACT

Translating huge medical terminologies like SNOMED CT is costly and time-consuming. We present a methodology that acquires substring substitution rules for single words, based on the known similarity between medical words and their translations due to their common Latin/Greek origin. Character translation rules are automatically acquired from pairs of English words and their automated translations to German. Using a training set of single words extracted from SNOMED CT as input, we obtained a list of 268 translation rules. In the evaluation, these rules improved the translation of 60% of words compared to Google Translate, with 55% of translated words exactly matching the correct translations. On a subset of words where machine translation had failed, our method improves translation in 56% of cases, with 27% exactly matching the gold standard.
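To make the rule idea concrete, the sketch below applies a few hand-picked, ordered substring substitution rules to English medical words. These example rules illustrate the English/German cognate pattern; they are not the 268 learned rules from the study.

```python
# Apply ordered substring substitution rules to turn English medical words into
# German-looking cognates. Rules are illustrative, not the learned rule set.
RULES = [
    ("ce", "ze"),    # e.g. "encephalo..." -> "enzephalo..."
    ("ca", "ka"),    # e.g. "carcin..."    -> "karcin..."
    ("ci", "zi"),    # e.g. "...cinoma"    -> "...zinoma"
    ("oma", "om"),   # drop the final English -a
    ("thy", "thie"), # e.g. "...pathy"     -> "...pathie"
]

def apply_rules(word, rules):
    out = word.lower()
    for src, dst in rules:
        out = out.replace(src, dst)
    return out

for w in ["carcinoma", "bronchitis", "encephalopathy"]:
    print(w, "->", apply_rules(w, RULES))
# carcinoma -> karzinom, bronchitis -> bronchitis, encephalopathy -> enzephalopathie
```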


Subject(s)
Algorithms , Natural Language Processing , Pattern Recognition, Automated/methods , Semantics , Systematized Nomenclature of Medicine , Translating , Europe , Forms and Records Control/methods , Germany , Machine Learning , Medical Record Linkage/methods , Terminology as Topic
15.
Stud Health Technol Inform ; 216: 716-20, 2015.
Article in English | MEDLINE | ID: mdl-26262145

ABSTRACT

The massive accumulation of biomedical knowledge is reflected by the growth of the literature database MEDLINE with over 23 million bibliographic records. All records are manually indexed by MeSH descriptors, many of them refined by MeSH subheadings. We use subheading information to cluster types of MeSH descriptor co-occurrences in MEDLINE by processing co-occurrence information provided by the UMLS. The goal is to infer plausible predicates to each resulting cluster. In an initial experiment this was done by grouping disease-pharmacologic substance co-occurrences into six clusters. Then, a domain expert manually performed the assignment of meaningful predicates to the clusters. The mean accuracy of the best ten generated biomedical facts of each cluster was 85%. This result supports the evidence of the potential of MeSH subheadings for extracting plausible medical predications from MEDLINE.


Subject(s)
Knowledge Bases , MEDLINE/statistics & numerical data , Medical Subject Headings , Natural Language Processing , Periodicals as Topic/statistics & numerical data , Cluster Analysis , Data Mining/methods , Machine Learning , Terminology as Topic
16.
Stud Health Technol Inform ; 210: 165-9, 2015.
Article in English | MEDLINE | ID: mdl-25991123

ABSTRACT

Biomedical research usually requires combining large volumes of data from multiple heterogeneous sources. Such heterogeneity makes difficult not only the generation of research-oriented datasets but also their exploitation. In recent years, the Open Data paradigm has proposed new ways of making data available so that sharing and integration are facilitated. Open Data approaches may pursue the generation of content readable only by humans or readable by both humans and machines; the latter is the case of interest in our work. The Semantic Web provides a natural technological space for data integration and exploitation and offers a range of technologies for generating not only Open Datasets but also Linked Datasets, that is, open datasets linked to other open datasets. According to Berners-Lee's classification, each open dataset can be given a rating of between one and five stars. In recent years, we have developed and applied our SWIT tool, which automates the generation of semantic datasets from heterogeneous data sources. SWIT produces four-star datasets; the fifth star can be obtained by having the dataset linked from external ones. In this paper, we describe how we have applied the tool in two projects related to health care records and orthology data, as well as the major lessons learned from these efforts.
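The five-star scheme referred to above is cumulative, which can be summarized as a small check (the boolean inputs are, of course, a simplification of how a real dataset would be assessed):

```python
# Cumulative Berners-Lee star rating for an open dataset.
def star_rating(on_web_open_license, structured, non_proprietary,
                uses_uris_rdf, linked_to_others):
    criteria = [on_web_open_license, structured, non_proprietary,
                uses_uris_rdf, linked_to_others]
    stars = 0
    for met in criteria:
        if not met:
            break  # criteria are cumulative: a missing one stops the count
        stars += 1
    return stars

# A SWIT-style output: RDF with URIs (4 stars) but not yet linked externally.
print(star_rating(True, True, True, True, False))   # -> 4
# The same dataset once it is linked to external resources.
print(star_rating(True, True, True, True, True))    # -> 5
```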


Subject(s)
Biological Ontologies , Biomedical Research/classification , Databases, Factual , Information Storage and Retrieval/methods , Internet , Natural Language Processing , Semantics , Software , Spain , Terminology as Topic
17.
Stud Health Technol Inform ; 205: 261-5, 2014.
Article in English | MEDLINE | ID: mdl-25160186

ABSTRACT

The availability of pharmacogenomic data of individual patients can significantly improve physicians' prescribing behavior, lead to a reduced incidence of adverse drug events and an improvement of effectiveness of treatment. The Medicine Safety Code (MSC) initiative is an effort to improve the ability of clinicians and patients to share pharmacogenomic data and to use it at the point of care. The MSC is a standardized two-dimensional barcode that captures individual pharmacogenomic data. The system is backed by a web service that allows the decoding and interpretation of anonymous MSCs without requiring the installation of dedicated software. The system is based on a curated, ontology-based knowledge base representing pharmacogenomic definitions and clinical guidelines. The MSC system performed well in preliminary tests. To evaluate the system in realistic health care settings and to translate it into practical applications, the future participation of stakeholders in clinical institutions, researchers, pharmaceutical companies, genetic testing providers, health IT companies and health insurance organizations will be essential.


Subject(s)
Drug-Related Side Effects and Adverse Reactions/genetics , Drug-Related Side Effects and Adverse Reactions/prevention & control , Electronic Health Records/standards , Health Records, Personal , Information Storage and Retrieval/standards , Pharmacogenetics/standards , Precision Medicine/standards , Adverse Drug Reaction Reporting Systems/standards , Databases, Genetic/standards , Humans , Internationality , Patient Identification Systems/standards
18.
Stud Health Technol Inform ; 205: 584-8, 2014.
Article in English | MEDLINE | ID: mdl-25160253

ABSTRACT

With the rapidly growing amount of biomedical literature it becomes increasingly difficult to find relevant information quickly and reliably. In this study we applied the word2vec deep learning toolkit to medical corpora to test its potential for improving the accessibility of medical knowledge. We evaluated the efficiency of word2vec in identifying properties of pharmaceuticals based on mid-sized, unstructured medical text corpora without any additional background knowledge. Properties included relationships to diseases ('may treat') or physiological processes ('has physiological effect'). We evaluated the relationships identified by word2vec through comparison with the National Drug File - Reference Terminology (NDF-RT) ontology. The results of our first evaluation were mixed, but helped us identify further avenues for employing deep learning technologies in medical information retrieval, as well as using them to complement curated knowledge captured in ontologies and taxonomies.
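A minimal gensim sketch of this kind of experiment: train word2vec on a tokenized medical corpus and inspect the nearest neighbours of a drug name. The corpus below is a tiny stand-in and the parameters are illustrative; with a real mid-sized corpus, the resulting neighbour lists could be compared against NDF-RT relations such as 'may treat'.

```python
# Train word2vec on a (tiny, toy) tokenized corpus and query nearest neighbours
# of a drug term. The sentences are invented; a real run would use a medical
# text corpus and larger vector_size/min_count values.
from gensim.models import Word2Vec

sentences = [
    ["metformin", "lowers", "blood", "glucose", "in", "type", "2", "diabetes"],
    ["insulin", "is", "used", "to", "treat", "diabetes"],
    ["aspirin", "reduces", "the", "risk", "of", "myocardial", "infarction"],
] * 50  # repeat so the toy vocabulary gets enough training examples

model = Word2Vec(sentences, vector_size=50, window=5, min_count=1, epochs=20, seed=1)

for word, score in model.wv.most_similar("metformin", topn=5):
    print(f"{word}\t{score:.3f}")
```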


Subject(s)
Algorithms , Artificial Intelligence , Manuscripts, Medical as Topic , Natural Language Processing , Pattern Recognition, Automated/methods , Software , Vocabulary, Controlled , Semantics
19.
Stud Health Technol Inform ; 198: 25-31, 2014.
Article in English | MEDLINE | ID: mdl-24825681

ABSTRACT

The availability of pharmacogenomic data of individual patients can significantly improve physicians' prescribing behavior, lead to a reduced incidence of adverse drug events and an improvement of effectiveness of treatment. The Medicine Safety Code (MSC) initiative is an effort to improve the ability of clinicians and patients to share pharmacogenomic data and to use it at the point of care. The MSC is a standardized two-dimensional barcode that captures individual pharmacogenomic data. The system is backed by a web service that allows the decoding and interpretation of anonymous MSCs without requiring the installation of dedicated software. The system is based on a curated, ontology-based knowledge base representing pharmacogenomic definitions and clinical guidelines. The MSC system performed well in preliminary tests. To evaluate the system in realistic health care settings and to translate it into practical applications, the future participation of stakeholders in clinical institutions, medical researchers, pharmaceutical companies, genetic testing providers, health IT companies and health insurance organizations will be essential.


Subject(s)
Biological Ontologies , DNA Barcoding, Taxonomic/methods , Drug-Related Side Effects and Adverse Reactions/genetics , Electronic Health Records/organization & administration , Patient Identification Systems/methods , Pharmacogenetics/organization & administration , Precision Medicine/methods , Adverse Drug Reaction Reporting Systems/organization & administration , Confidentiality , Databases, Genetic , Decision Support Systems, Clinical/organization & administration , Drug-Related Side Effects and Adverse Reactions/prevention & control , Humans , Internationality
20.
PLoS One ; 9(5): e93769, 2014.
Article in English | MEDLINE | ID: mdl-24787444

ABSTRACT

BACKGROUND: The development of genotyping and genetic sequencing techniques and their evolution towards low costs and quick turnaround have encouraged a wide range of applications. One of the most promising applications is pharmacogenomics, where genetic profiles are used to predict the most suitable drugs and drug dosages for the individual patient. This approach aims to ensure appropriate medical treatment and avoid, or properly manage, undesired side effects. RESULTS: We developed the Medicine Safety Code (MSC) service, a novel pharmacogenomics decision support system, to provide physicians and patients with the ability to represent pharmacogenomic data in computable form and to provide pharmacogenomic guidance at the point-of-care. Pharmacogenomic data of individual patients are encoded as Quick Response (QR) codes and can be decoded and interpreted with common mobile devices without requiring a centralized repository for storing genetic patient data. In this paper, we present the first fully functional release of this system and describe its architecture, which utilizes Web Ontology Language 2 (OWL 2) ontologies to formalize pharmacogenomic knowledge and to provide clinical decision support functionalities. CONCLUSIONS: The MSC system provides a novel approach for enabling the implementation of personalized medicine in clinical routine.
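As a rough illustration of the encoding side only, a compact genotype payload can be turned into a QR code with the qrcode library. The payload string and URL below are invented and are not the actual Medicine Safety Code format.

```python
# Encode a compact pharmacogenomic payload as a QR image. The URL and the
# payload string are placeholders, not the real Medicine Safety Code format.
import qrcode

payload = "https://example.org/msc#CYP2C19*1/*2;CYP2D6*1/*4;TPMT*1/*1"

img = qrcode.make(payload)          # returns a PIL image
img.save("medicine_safety_code_demo.png")
print("QR code written to medicine_safety_code_demo.png")
```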


Subject(s)
Biological Ontologies , Cell Phone , Decision Support Techniques , Pharmacogenetics/methods , Point-of-Care Systems , Genotype , Humans , Medication Errors/prevention & control