Results 1 - 6 of 6
1.
Proteomics; 17(10): e1700056, 2017 May.
Article in English | MEDLINE | ID: mdl-28397356

ABSTRACT

Focusing on the interactomes of Homo sapiens, Saccharomyces cerevisiae, and Escherichia coli, we investigated interactions between controlling proteins. In particular, we classified proteins as critical, intermittent, or redundant based on their tendency to participate in minimum dominating sets. Independently of the organism considered, we found that interactions involving critical nodes had the most prominent effects on the topology of their corresponding networks. Furthermore, we observed that phosphorylation and regulatory events were considerably enriched when the corresponding transcription factors and kinases were critical proteins, while such interactions were depleted when they were redundant proteins. Moreover, interactions involving critical proteins were enriched with essential genes, disease genes, and drug targets, suggesting that such characteristics may be key for detecting novel drug targets as well as for assessing their efficacy.
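The critical/intermittent/redundant distinction above can be sketched in a few lines of Python: a node is critical if it appears in every minimum dominating set (MDS), redundant if it appears in none, and intermittent otherwise. This is an illustrative brute-force sketch only (exact MDS enumeration is NP-hard, so it is feasible just for tiny toy graphs); the graph and node names are invented, not data from the paper.

```python
# Sketch: classify nodes by their participation in minimum dominating
# sets (MDS). Brute force -- only suitable for tiny illustrative graphs.
from itertools import combinations

def dominates(graph, subset):
    """True if every node is in `subset` or adjacent to a member of it."""
    covered = set(subset)
    for node in subset:
        covered.update(graph[node])
    return covered == set(graph)

def min_dominating_sets(graph):
    """Enumerate all dominating sets of minimum size."""
    nodes = list(graph)
    for size in range(1, len(nodes) + 1):
        hits = [set(c) for c in combinations(nodes, size)
                if dominates(graph, c)]
        if hits:
            return hits
    return []

def classify(graph):
    """Label each node critical / intermittent / redundant w.r.t. MDSs."""
    mds = min_dominating_sets(graph)
    labels = {}
    for node in graph:
        count = sum(node in s for s in mds)
        if count == len(mds):
            labels[node] = "critical"
        elif count == 0:
            labels[node] = "redundant"
        else:
            labels[node] = "intermittent"
    return labels

# Toy "interaction network": hub A plus a pendant pair D-E (invented).
toy = {
    "A": {"B", "C", "D"},
    "B": {"A"},
    "C": {"A"},
    "D": {"A", "E"},
    "E": {"D"},
}
print(classify(toy))
```

Here the two MDSs are {A, D} and {A, E}, so the hub A is critical, D and E are intermittent, and the leaves B and C are redundant.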

2.
Drug Discov Today; 27(5): 1441-1447, 2022 May.
Article in English | MEDLINE | ID: mdl-35066138

ABSTRACT

Over recent years, there has been exciting growth in collaboration between academia and industry in the life sciences to make data more Findable, Accessible, Interoperable and Reusable (FAIR) to achieve greater value. Despite considerable progress, the transformative shift from an application-centric to a data-centric perspective, enabled by FAIR implementation, remains very much a work in progress on the 'FAIR journey'. In this review, we consider use cases for FAIR implementation. These can be deployed alongside assessment of data quality to maximize the value of data generated from research, clinical trials, and real-world healthcare data, which are essential for the discovery and development of new medical treatments by biopharma.


Subject(s)
Biological Science Disciplines , Data Accuracy , Industry
3.
Food Chem; 391: 133243, 2022 Oct 15.
Article in English | MEDLINE | ID: mdl-35623276

ABSTRACT

Determining food attributes such as classification, taxonomy, and nutrient content can be a challenging and resource-intensive task, albeit an important one for better understanding foods. In this study, a novel dataset, 134k BFPD, was collected from the USDA Branded Food Products Database with modifications and labeled with three food taxonomies and nutrient values, yielding an artificial intelligence (AI) dataset that covers the largest range of food types to date. Overall, the Multi-Layer Perceptron (MLP)-TF-SE method obtained the highest learning efficiency for food natural language processing tasks using AI, achieving up to 99% accuracy for food classification and an R² of 0.98 for calcium estimation (0.93–0.97 for calories, protein, sodium, total carbohydrate, total lipids, etc.). This deep learning approach has great potential to be embedded in other food classification and regression tasks and extended to other applications in the food and nutrient scope.
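The task above (mapping branded-food text to a taxonomy class) can be sketched, in a much-simplified form, with scikit-learn: term-frequency features feed a small MLP, loosely echoing the paper's MLP-TF idea. The product names, categories, and model settings below are invented for illustration; this is not the paper's MLP-TF-SE pipeline or the 134k BFPD dataset.

```python
# Sketch: food-name text -> coarse category via term frequencies + MLP.
# All data and hyperparameters are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# Toy branded-food descriptions and categories (made-up data).
names = [
    "cheddar cheese block", "mozzarella shredded cheese",
    "whole wheat bread loaf", "multigrain sandwich bread",
    "orange juice no pulp", "apple juice from concentrate",
]
labels = ["dairy", "dairy", "bakery", "bakery", "beverage", "beverage"]

model = make_pipeline(
    CountVectorizer(),                       # term-frequency features
    MLPClassifier(hidden_layer_sizes=(16,),  # small MLP head
                  solver="lbfgs",            # suits tiny datasets
                  max_iter=500, random_state=0),
)
model.fit(names, labels)
print(model.predict(["swiss cheese slices"]))
```

A real pipeline would add held-out evaluation and, for nutrient values such as calcium, a regression head (e.g. `MLPRegressor`) instead of a classifier.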


Subject(s)
Artificial Intelligence , Deep Learning , Food , Neural Networks, Computer , Nutrients
4.
Methods Mol Biol; 1939: 49-69, 2019.
Article in English | MEDLINE | ID: mdl-30848456

ABSTRACT

Technological advancements in many fields have led to huge increases in data production, including data volume, diversity, and the speed at which new data become available. At the same time, there is little conformity in how such data are interpreted. This era of "big data" provides unprecedented opportunities for data-driven research and "big picture" models. However, in-depth analyses that draw on various data types and data sources to extract knowledge have become a more daunting task. This is especially the case in the life sciences, where simplifying and flattening diverse data types often leads to incorrect predictions. Effective applications of big data approaches in the life sciences require better, knowledge-based, semantic models that are suitable as a framework for big data integration while avoiding oversimplifications, such as reducing various biological data types to the gene level. A major hurdle in developing such semantic knowledge models, or ontologies, is the knowledge acquisition bottleneck: automated methods are still very limited, and significant human expertise is required. In this chapter, we describe a methodology to systematize this knowledge acquisition and representation challenge, termed the KNowledge Acquisition and Representation Methodology (KNARM). We then describe the application of the methodology in implementing the Drug Target Ontology (DTO). We aimed to create an approach, involving domain experts and knowledge engineers, to build useful, comprehensive, consistent ontologies that will enable big data approaches in the domain of drug discovery, without the currently common simplifications.


Subject(s)
Big Data , Biological Ontologies , Drug Discovery/methods , Semantic Web , Animals , Databases, Factual , Humans , Molecular Targeted Therapy
5.
J Biomed Semantics; 8(1): 50, 2017 Nov 09.
Article in English | MEDLINE | ID: mdl-29122012

ABSTRACT

BACKGROUND: One of the most successful approaches to developing new small-molecule therapeutics has been to start from a validated druggable protein target. However, only a small subset of potentially druggable targets has attracted significant research and development resources. The Illuminating the Druggable Genome (IDG) project develops resources to catalyze the development of likely targetable, yet currently understudied, prospective drug targets. A central component of the IDG program is a comprehensive knowledge resource of the druggable genome. RESULTS: As part of that effort, we have developed a framework to integrate, navigate, and analyze drug discovery data based on formalized and standardized classifications and annotations of druggable protein targets, the Drug Target Ontology (DTO). DTO was constructed by extensive curation and consolidation of various resources. DTO classifies the four major drug target protein families, GPCRs, kinases, ion channels, and nuclear receptors, based on phylogenicity, function, target development level, disease association, tissue expression, chemical ligand and substrate characteristics, and target-family-specific characteristics. The formal ontology was built using a new software tool to auto-generate most axioms from a database while supporting manual knowledge acquisition. A modular, hierarchical implementation facilitates ontology development and maintenance and makes use of various external ontologies, thus integrating the DTO into the ecosystem of biomedical ontologies. As a formal OWL-DL ontology, DTO contains asserted and inferred axioms. Modeling data from the Library of Integrated Network-based Cellular Signatures (LINCS) program illustrates the potential of DTO for contextual data integration and nuanced definition of important drug target characteristics. DTO has been implemented in the IDG user interface portal, Pharos, and the TIN-X explorer of protein target disease relationships.
CONCLUSIONS: DTO was built to address the need for a formal semantic model of druggable targets, including related information such as protein, gene, protein domain, protein structure, binding site, small-molecule drug, mechanism of action, protein tissue localization, disease association, and many other types of information. DTO will further facilitate the otherwise challenging integration of, and formal linking to, biological assays, phenotypes, disease models, drug polypharmacology, binding kinetics, and many other processes, functions, and qualities that are at the core of drug discovery. The first version of DTO is publicly available via the website http://drugtargetontology.org/ , GitHub ( http://github.com/DrugTargetOntology/DTO ), and the NCBO BioPortal ( http://bioportal.bioontology.org/ontologies/DTO ). The long-term goal of DTO is to provide such an integrative framework and to populate the ontology with this information as a community resource.
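The distinction between asserted and inferred axioms in an OWL-DL ontology can be illustrated with a tiny stand-in for a DL reasoner: asserted subclass edges plus the transitive closure that a reasoner would entail. The class names below are invented, loosely echoing DTO's kinase branch, and the closure computation is a sketch, not DTO's actual reasoning machinery.

```python
# Sketch: asserted rdfs:subClassOf edges (class names invented) plus a
# transitive-closure step standing in for a DL reasoner's entailments.
asserted = {
    ("Tyrosine protein kinase", "Protein kinase"),
    ("Protein kinase", "Kinase"),
    ("Kinase", "Drug target"),
}

def infer_subclass_closure(edges):
    """All (sub, super) pairs entailed by transitivity of subclass-of."""
    closure = set(edges)
    changed = True
    while changed:
        changed = False
        for a, b in list(closure):
            for c, d in list(closure):
                if b == c and (a, d) not in closure:
                    closure.add((a, d))
                    changed = True
    return closure

inferred = infer_subclass_closure(asserted) - asserted
print(sorted(inferred))
```

In practice one would load the published OWL file with a library such as owlready2 or rdflib and run a real reasoner; this sketch only shows why the inferred axiom set is strictly larger than the asserted one.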


Subject(s)
Biological Ontologies , Computational Biology/methods , Drug Delivery Systems/methods , Drug Discovery/methods , Humans , Proteins/classification , Proteins/genetics , Proteins/metabolism , Semantics , Software
6.
J Biomed Semantics; 5(Suppl 1, Proceedings of the Bio-Ontologies Special Interest Group): S5, 2014.
Article in English | MEDLINE | ID: mdl-25093074

ABSTRACT

The lack of established standards for describing and annotating biological assays and screening outcomes in the domain of drug and chemical probe discovery severely limits the use of public and proprietary drug screening data to their maximum potential. We created the BioAssay Ontology (BAO) project (http://bioassayontology.org) to develop the common reference metadata terms and definitions required for describing relevant information about low- and high-throughput drug and probe screening assays and results. The main objectives of BAO are to enable effective integration, aggregation, retrieval, and analysis of drug screening data. Since we first released BAO on the BioPortal in 2010, we have considerably expanded and enhanced it, and we have applied the ontology in several internal and external collaborative projects, for example the BioAssay Research Database (BARD). We describe the evolution of BAO, with a design that enables the modeling of complex assays, including profile and panel assays such as those in the Library of Integrated Network-based Cellular Signatures (LINCS). One of the critical questions in evolving BAO is how specific parts of our ontologies can be efficiently reused and shared among various research projects without violating the integrity of the ontology and without creating redundancies. This paper provides a comprehensive answer to this question with a description of a methodology for ontology modularization using a layered architecture. Our modularization approach defines several distinct BAO components and separates internal from external modules and domain-level from structural components. This approach facilitates the generation and extraction of derived ontologies (or perspectives) that suit particular use cases or software applications. We describe the evolution of BAO in terms of its formal structures, engineering approaches, and content, enabling the modeling of complex assays and integration with other ontologies and datasets.
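The layered modularization idea above can be sketched as an import graph: each module declares what it imports, and a derived "perspective" is simply the transitive import closure reachable from a seed module. The module names below are invented, loosely echoing BAO's separation of domain-level from structural (vocabulary) components; this is a conceptual sketch, not BAO's actual module list.

```python
# Sketch: modules declare imports; a "perspective" is the transitive
# import closure of a seed module. Module names are invented.
imports = {
    "bao_core": ["bao_vocabulary_assay", "bao_vocabulary_biology"],
    "bao_vocabulary_assay": [],
    "bao_vocabulary_biology": ["external_go_slim"],  # external module
    "external_go_slim": [],
}

def perspective(seed, graph):
    """Return the set of modules loaded when `seed` is the entry point."""
    seen, stack = set(), [seed]
    while stack:
        mod = stack.pop()
        if mod not in seen:
            seen.add(mod)
            stack.extend(graph[mod])
    return seen

print(sorted(perspective("bao_core", imports)))
```

Because each application imports only the modules its perspective needs, shared vocabulary modules can be reused across projects without duplicating axioms or breaking the integrity of the full ontology.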
