Results 1 - 11 of 11
1.
Bioinformatics; 31(9): 1505-7, 2015 May 01.
Article in English | MEDLINE | ID: mdl-25505093

ABSTRACT

MOTIVATION: The field of toxicogenomics (the application of '-omics' technologies to the risk assessment of compound toxicities) has expanded in the last decade, partly driven by new legislation aimed at reducing animal testing in chemical risk assessment, but mainly as a result of a paradigm change in toxicology towards the use and integration of genome-wide data. Many research groups worldwide have generated large amounts of such toxicogenomics data. However, there has been no centralized repository for archiving these data and making them, together with the associated analysis tools, easily available. RESULTS: The Data Infrastructure for Chemical Safety Assessment (diXa) is a robust and sustainable infrastructure for storing toxicogenomics data. A central data warehouse is connected to a portal with links to chemical information and molecular and phenotype data. diXa is publicly available through a user-friendly web interface. New data can be readily deposited into diXa using guidelines and templates available online. Analysis descriptions and tools for interrogating the data are available via the diXa portal. AVAILABILITY AND IMPLEMENTATION: http://www.dixa-fp7.eu. CONTACT: d.hendrickx@maastrichtuniversity.nl; info@dixa-fp7.eu. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.


Subject(s)
Databases, Chemical; Toxicogenetics; Animals; Gene Expression Profiling; Humans; Metabolomics; Proteomics; Rats
2.
Biochim Biophys Acta; 1844(11): 2002-2015, 2014 Nov.
Article in English | MEDLINE | ID: mdl-25110827

ABSTRACT

More and more antibody therapeutics are being approved every year, mainly owing to their high efficacy and antigen selectivity. However, it remains difficult to identify the antigen, and thereby the function, of an antibody if no other information is available. Every antibody drug discovery project also faces obstacles inherent to antibody science. Recent experimental technologies allow the rapid generation of large-scale data on antibody sequences, affinity, potency, structures, and biological functions; this should accelerate drug discovery research. A robust bioinformatics infrastructure for these large data sets has therefore become necessary. In this article, we first identify and discuss the typical obstacles faced during the antibody drug discovery process. We then summarize the current status of three sub-fields of antibody informatics: (i) recent progress in technologies for rational antibody design, using computational approaches to improve affinity and stability as well as ab initio and homology-based antibody modeling; (ii) resources for antibody sequences, structures, and immune epitopes, and open drug discovery resources for the development of antibody drugs; and (iii) antibody numbering and IMGT. Finally, we review "antibody informatics," which may integrate these three sub-fields and thereby help bridge the gaps between industrial needs and academic solutions. This article is part of a Special Issue entitled: Recent advances in molecular engineering of antibody.

3.
Nat Rev Drug Discov; 18(6): 463-477, 2019 Jun.
Article in English | MEDLINE | ID: mdl-30976107

ABSTRACT

Drug discovery and development pipelines are long, complex and depend on numerous factors. Machine learning (ML) approaches provide a set of tools that can improve discovery and decision making for well-specified questions with abundant, high-quality data. Opportunities to apply ML occur at all stages of drug discovery. Examples include target validation, identification of prognostic biomarkers and analysis of digital pathology data in clinical trials. Applications have ranged in context and methodology, with some approaches yielding accurate predictions and insights. The challenges of applying ML lie primarily in the lack of interpretability and repeatability of ML-generated results, which may limit their application. In all areas, systematic and comprehensive high-dimensional data still need to be generated. With ongoing efforts to tackle these issues, as well as increasing awareness of the factors needed to validate ML approaches, the application of ML can promote data-driven decision making and has the potential to speed up the process and reduce failure rates in drug discovery and development.


Subject(s)
Drug Design; Drug Discovery/methods; Machine Learning; Animals; Humans; Neural Networks, Computer
4.
BMC Bioinformatics; 8 Suppl 1: S18, 2007 Mar 08.
Article in English | MEDLINE | ID: mdl-17430562

ABSTRACT

BACKGROUND: The SYMBIOmatics Specific Support Action (SSA) is "an information gathering and dissemination activity" that seeks "to identify synergies between the bioinformatics and the medical informatics" domains in order to improve collaborative progress between the two fields (see http://www.symbiomatics.org). As part of the project, experts in both research fields will be identified and approached through a survey. To provide input to the survey, the scientific literature was analysed to extract topics relevant to both medical informatics and bioinformatics. RESULTS: This paper presents the results of a systematic analysis of the scientific literature from medical informatics research and bioinformatics research. In the analysis, pairs of words (bigrams) from the leading bioinformatics and medical informatics journals were used as indicators of existing and emerging technologies and topics over the periods 2000-2005 ("recent") and 1990-1990 ("past"). We identified emerging topics that were equally important to bioinformatics and medical informatics in recent years, such as microarray experiments, ontologies, open source, text mining and support vector machines. Emerging topics that evolved only in bioinformatics were systems biology, protein interaction networks and statistical methods for microarray analyses, whereas emerging topics in medical informatics were grid technology and tissue microarrays. CONCLUSION: We conclude that although both fields have their own specific domains of interest, they share common technological developments that tend to be initiated by new developments in biotechnology and computer science.
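To make the bigram comparison described above concrete, here is a minimal sketch of how bigrams could be counted in two literature corpora and ranked by their gain in relative frequency. The function names, the simple frequency-difference score, and the corpus variables are illustrative assumptions, not the procedure actually used in the paper.

```python
# Illustrative sketch only: rank bigrams that became more frequent in a
# "recent" corpus than in a "past" corpus. Each corpus is a list of plain-text
# documents (e.g. abstracts from the selected journals).
from collections import Counter
import re

def bigram_counts(text):
    """Count adjacent lower-cased word pairs (bigrams) in one document."""
    words = re.findall(r"[a-z]+", text.lower())
    return Counter(zip(words, words[1:]))

def emerging_bigrams(past_docs, recent_docs, top_n=20):
    """Score each bigram by its relative-frequency gain in the recent corpus."""
    past, recent = Counter(), Counter()
    for doc in past_docs:
        past.update(bigram_counts(doc))
    for doc in recent_docs:
        recent.update(bigram_counts(doc))
    n_past, n_recent = sum(past.values()) or 1, sum(recent.values()) or 1
    gain = {bg: recent[bg] / n_recent - past[bg] / n_past for bg in recent}
    return sorted(gain, key=gain.get, reverse=True)[:top_n]

# Usage with hypothetical corpora: emerging_bigrams(abstracts_past, abstracts_recent)
# might surface pairs such as ("support", "vector") or ("text", "mining").
```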


Subject(s)
Biotechnology/statistics & numerical data; Computational Biology/statistics & numerical data; Medical Informatics/statistics & numerical data; Natural Language Processing; Periodicals as Topic/statistics & numerical data; Science/statistics & numerical data; Technology Assessment, Biomedical; Biotechnology/trends; Computational Biology/trends; Forecasting; Medical Informatics/trends; Periodicals as Topic/trends; Science/trends; Systems Integration
5.
Cancer Res; 77(21): e62-e66, 2017 Nov 01.
Article in English | MEDLINE | ID: mdl-29092942

ABSTRACT

Patient-derived tumor xenograft (PDX) mouse models have emerged as an important oncology research platform for studying tumor evolution and mechanisms of drug response and resistance, and for tailoring chemotherapeutic approaches to individual patients. The lack of robust standards for reporting on PDX models has hampered the ability of researchers to find relevant PDX models and associated data. Here we present the PDX models minimal information standard (PDX-MI) for reporting on the generation, quality assurance, and use of PDX models. PDX-MI defines the minimal information for describing the clinical attributes of a patient's tumor, the processes of implantation and passaging of tumors in a host mouse strain, quality assurance methods, and the use of PDX models in cancer research. Adherence to PDX-MI standards will facilitate accurate search results for oncology models and their associated data across distributed repository databases and promote reproducibility in research studies using these models. Cancer Res; 77(21); e62-66. ©2017 AACR.
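As an illustration of what a minimal information record of this kind might look like in practice, the sketch below encodes the broad PDX-MI categories named in the abstract as a small Python data structure. The field names and example values are hypothetical and do not reproduce the official PDX-MI attribute list.

```python
# Illustrative sketch only: a PDX-model record covering the broad PDX-MI
# categories named in the abstract. Field names are hypothetical, not the
# official PDX-MI attributes.
from dataclasses import dataclass, field, asdict

@dataclass
class PdxModelRecord:
    patient_tumor: dict                 # clinical attributes of the patient's tumor
    host_strain: str                    # mouse strain used for engraftment
    implantation_site: str              # where the tumor fragment was implanted
    passage_number: int                 # number of in vivo passages of this model
    quality_assurance: list = field(default_factory=list)  # QA checks performed

example = PdxModelRecord(
    patient_tumor={"diagnosis": "colorectal adenocarcinoma", "grade": "G2"},
    host_strain="NSG",
    implantation_site="subcutaneous flank",
    passage_number=3,
    quality_assurance=["STR profiling", "histology review"],
)
print(asdict(example))  # a dict like this could be serialized for deposition
```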


Subject(s)
Neoplasms; Xenograft Model Antitumor Assays/statistics & numerical data; Animals; Databases as Topic; Disease Models, Animal; Humans; Mice; Neoplasms/drug therapy; Neoplasms/genetics; Patients
6.
J Biotechnol; 98(2-3): 269-83, 2002 Sep 25.
Article in English | MEDLINE | ID: mdl-12141992

ABSTRACT

Expression arrays facilitate the monitoring of changes in the expression patterns of large collections of genes. The analysis of expression array data has become a computationally intensive task that requires the development of bioinformatics technology for a number of key stages in the process, such as image analysis, database storage, gene clustering and information extraction. Here, we review the current trends in each of these areas, with particular emphasis on the related technology being developed within our groups.
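Of the analysis stages listed above, gene clustering is the easiest to illustrate briefly. The following sketch, which is not taken from the review, applies standard hierarchical clustering to a genes-by-samples expression matrix; the random matrix merely stands in for real, image-processed array data.

```python
# Illustrative sketch only: hierarchical clustering of gene expression
# profiles. The matrix below is random placeholder data standing in for a
# genes x hybridizations table produced by the image-analysis stage.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
expression = rng.normal(size=(100, 12))      # 100 genes, 12 hybridizations

# Group genes whose expression profiles are correlated across samples.
tree = linkage(expression, method="average", metric="correlation")
labels = fcluster(tree, t=5, criterion="maxclust")   # cut tree into 5 clusters

print(labels[:10])   # cluster assignment of the first 10 genes
```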


Subject(s)
Cluster Analysis; Database Management Systems; Gene Expression Profiling/methods; Information Storage and Retrieval/methods; Oligonucleotide Array Sequence Analysis/methods; Abstracting and Indexing/methods; Databases, Genetic; Gene Expression; Image Processing, Computer-Assisted/methods; Internet; MEDLINE; National Library of Medicine (U.S.); United States
7.
Drug Discov Today; 19(7): 882-9, 2014 Jul.
Article in English | MEDLINE | ID: mdl-24201223

ABSTRACT

In the Semantic Enrichment of the Scientific Literature (SESL) project, researchers from academia and from life science and publishing companies collaborated in a pre-competitive way to integrate and share information on type 2 diabetes mellitus (T2DM) in adults. This case study demonstrates the benefits of semantic interoperability gained by integrating the scientific literature with biomedical data resources such as the UniProt Knowledgebase (UniProtKB) and the Gene Expression Atlas (GXA). We annotated scientific documents in a standardized way, applying public terminological resources for diseases and proteins as well as other text-mining approaches. Finally, we compared the genetic causes of T2DM across the data resources to demonstrate the benefits of the SESL triple store. Our solution enables publishers to distribute their content with little overhead into remote data infrastructures, such as any Virtual Knowledge Broker.
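The core SESL idea of annotating documents and interrogating them through a triple store can be illustrated with a small RDF example. The sketch below does not use the actual SESL schema; the namespace, predicate names, and document identifier are assumptions made purely for illustration (the UniProtKB URI is a real accession for the human insulin receptor).

```python
# Illustrative sketch only: representing literature annotations as RDF triples
# and querying them. The namespace, predicates and document URI below are
# made up for this example and are not the SESL schema.
from rdflib import Graph, Namespace, URIRef, Literal

EX = Namespace("http://example.org/sesl/")        # placeholder namespace
g = Graph()

doc = EX["doc-1"]                                 # a hypothetical annotated article
insr = URIRef("http://purl.uniprot.org/uniprot/P06213")  # UniProtKB: insulin receptor
g.add((doc, EX.mentionsProtein, insr))
g.add((doc, EX.mentionsDisease, Literal("type 2 diabetes mellitus")))

# Ask which proteins co-occur with T2DM annotations across the corpus.
query = """
SELECT DISTINCT ?protein WHERE {
  ?d <http://example.org/sesl/mentionsDisease> "type 2 diabetes mellitus" .
  ?d <http://example.org/sesl/mentionsProtein> ?protein .
}
"""
for row in g.query(query):
    print(row.protein)
```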


Subject(s)
Biomedical Research/methods; Data Mining/methods; Diabetes Mellitus, Type 2/genetics; Semantics; Systems Integration; Animals; Diabetes Mellitus, Type 2/diagnosis; Humans; Knowledge Bases
8.
J Gastrointest Cancer; 43(3): 405-12, 2012 Sep.
Article in English | MEDLINE | ID: mdl-22733564

ABSTRACT

OBJECTIVE: Small bowel tumours account for only 2-5% of gastrointestinal neoplasms but are an important source of morbidity and mortality. This article presents the features demonstrated by a wide range of small bowel tumours across different imaging modalities. CONCLUSION: Early and accurate diagnosis by radiological means is an important factor in overall survival for malignant tumours, and a thorough understanding of the common imaging features is essential for all radiologists.


Subject(s)
Diagnostic Imaging; Intestinal Neoplasms/diagnosis; Intestinal Neoplasms/mortality; Intestine, Small/pathology; Humans; Prognosis; Survival Rate
9.
ALTEX; 29(2): 129-37, 2012.
Article in English | MEDLINE | ID: mdl-22562486

ABSTRACT

Foreign substances can have a dramatic and unpredictable adverse effect on human health. In the development of new therapeutic agents, it is essential that the potential adverse effects of all candidates be identified as early as possible. The field of predictive toxicology strives to profile the potential adverse effects of novel chemical substances before they occur, both with traditional in vivo experimental approaches and increasingly through the development of in vitro and computational methods that can supplement, and reduce the need for, animal testing. To be maximally effective, the field needs access to the largest possible knowledge base of previous toxicology findings, and such results need to be made available in a form that is interoperable, comparable, and compatible with standard toolkits. This necessitates the development of open, public, computable, and standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. Such ontology development will support data management, model building, integrated analysis, validation and reporting, including regulatory reporting and alternative-testing submissions such as those required by the REACH legislation, and will lead to new scientific advances in mechanistically based predictive toxicology. Numerous existing ontology and standards initiatives can contribute to the creation of a toxicology ontology supporting the needs of predictive toxicology and risk assessment. Additionally, new ontologies are needed to satisfy practical use cases and scenarios where gaps currently exist. Developing and integrating these resources will require a well-coordinated and sustained effort across numerous stakeholders engaged in a public-private partnership. In this communication, we set out a roadmap for the development of an integrated toxicology ontology, harnessing existing resources where applicable. We describe the stakeholder requirements analysis from academic and industry perspectives, the timelines, and the expected benefits of this initiative, with a view to engaging the wider community.


Subject(s)
Toxicology/methods; Vocabulary, Controlled; Animal Testing Alternatives; Animals; Computational Biology; Databases, Factual; Humans; Research; Risk Assessment; Toxicology/economics; Toxicology/legislation & jurisprudence
10.
ALTEX; 29(2): 139-56, 2012.
Article in English | MEDLINE | ID: mdl-22562487

ABSTRACT

The field of predictive toxicology requires the development of open, public, computable, standardized toxicology vocabularies and ontologies to support the applications required by in silico, in vitro, and in vivo toxicology methods and related analysis and reporting activities. In this article we review ontology developments from a set of perspectives showing how ontologies are being used in predictive toxicology initiatives and applications. The resources and initiatives reviewed include OpenTox, eTOX, Pistoia Alliance, ToxWiz, Virtual Liver, EU-ADR, BEL, ToxML, and Bioclipse. We also review existing ontology developments in neighboring fields that can contribute to establishing an ontological framework for predictive toxicology. A significant set of resources is already available to provide a foundation for an ontological framework for 21st-century mechanistically based toxicology research. Ontologies such as ToxWiz provide a basis for application to toxicology investigations, whereas other ontologies under development in the biological, chemical, and biomedical communities could be incorporated into an extended future framework. OpenTox has provided a semantic web framework for the implementation of such ontologies in software applications and linked data resources. Bioclipse developers have shown the benefit of ontology-based interoperability by linking their workbench application with remote OpenTox web services. Although these developments are promising, greater international coordination of efforts is needed to develop a more unified, standardized, and open toxicology ontology framework.


Subject(s)
Toxicology/methods; Vocabulary, Controlled; Animals; Databases, Factual; Gene Expression Regulation/drug effects; Humans
11.
Nat Rev Drug Discov; 10(9): 661-9, 2011 Aug 31.
Article in English | MEDLINE | ID: mdl-21878981

ABSTRACT

Bioactive molecules such as drugs, pesticides and food additives are produced in large numbers by many commercial and academic groups around the world. Enormous quantities of data are generated on the biological properties and quality of these molecules. Access to such data - both on licensed and commercially available compounds, and also on those that fail during development - is crucial for understanding how improved molecules could be developed. For example, computational analysis of aggregated data on molecules that are investigated in drug discovery programmes has led to a greater understanding of the properties of successful drugs. However, the information required to perform these analyses is rarely published, and when it is made available it is often missing crucial data or is in a format that is inappropriate for efficient data-mining. Here, we propose a solution: the definition of reporting guidelines for bioactive entities - the Minimum Information About a Bioactive Entity (MIABE) - which has been developed by representatives of pharmaceutical companies, data resource providers and academic groups.


Subject(s)
Chemical Industry/standards; Drug Industry/standards; Information Dissemination; Animals; Biomarkers; Chemistry, Physical; Communication; Data Collection; Drug Design; Guidelines as Topic; Humans; Pesticides; Pharmaceutical Preparations; Pharmacokinetics; Terminology as Topic; Toxicology