Results 1 - 5 of 5
1.
AMIA Annu Symp Proc; 2019: 681-690, 2019.
Article in English | MEDLINE | ID: mdl-32308863

ABSTRACT

Developing promising treatments in biomedicine often requires aggregation and analysis of data from disparate sources across the healthcare and research spectrum. To facilitate these approaches, there is a growing focus on supporting interoperation of datasets by standardizing data-capture and reporting requirements. Common Data Elements (CDEs), precise specifications of questions and the set of allowable answers to each question, are increasingly being adopted to help meet these standardization goals. While CDEs can provide a strong conceptual foundation for interoperation, there are no widely recognized serialization or interchange formats to describe and exchange their definitions. As a result, CDEs defined in one system cannot easily be reused by other systems. An additional problem is that current CDE-based systems tend to be rather heavyweight and cannot be easily adopted and used by third parties. To address these problems, we developed extensions to a metadata management system called the CEDAR Workbench to provide a platform that simplifies the creation, exchange, and use of CDEs. We show how the resulting system allows users to quickly define and share CDEs and to immediately use these CDEs to build and deploy Web-based forms to acquire conforming metadata. We also show how we incorporated a large CDE library from the National Cancer Institute's caDSR system and made these CDEs publicly available for general use.
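As an aside, the sketch below illustrates the kind of lightweight, serializable CDE the abstract describes: a question plus its set of permissible values, with a small conformance check. The field names, identifier, and validation logic are invented for illustration and are not the CEDAR Workbench template model or the caDSR format.

```python
# Illustrative sketch only: a minimal, hypothetical serialization of a Common
# Data Element (a question plus its allowable answers) and a conformance check.
# The field names below are invented and do not reflect the actual CEDAR
# Workbench or caDSR schemas.

cde = {
    "id": "example-cde-0001",          # hypothetical identifier
    "question": "What is the patient's smoking status?",
    "datatype": "enumeration",
    "permissible_values": ["Current smoker", "Former smoker", "Never smoker"],
}

def validate_answer(cde: dict, answer: str) -> bool:
    """Return True if the answer conforms to the CDE's permissible values."""
    if cde["datatype"] == "enumeration":
        return answer in cde["permissible_values"]
    return isinstance(answer, str)

print(validate_answer(cde, "Former smoker"))   # True
print(validate_answer(cde, "Occasional"))      # False
```

Because such a definition is plain data, it can be exchanged between systems and used both to render a web form field and to validate the metadata captured through it.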


Subjects
Biomedical Research, Common Data Elements, Data Collection/standards, Data Management/methods, Common Data Elements/standards, Data Management/standards, Humans, Internet, Metadata, National Institutes of Health (U.S.), Registries, United States, User-Computer Interface
2.
Front Immunol; 9: 1877, 2018.
Article in English | MEDLINE | ID: mdl-30166985

ABSTRACT

The adaptation of high-throughput sequencing to the B cell receptor and T cell receptor has made it possible to characterize the adaptive immune receptor repertoire (AIRR) at unprecedented depth. These AIRR sequencing (AIRR-seq) studies offer tremendous potential to increase the understanding of adaptive immune responses in vaccinology, infectious disease, autoimmunity, and cancer. The increasingly wide application of AIRR-seq is leading to a critical mass of studies being deposited in the public domain, offering the possibility of novel scientific insights through secondary analyses and meta-analyses. However, effective sharing of these large-scale data remains a challenge. The AIRR community has proposed Minimal Information about Adaptive Immune Receptor Repertoire (MiAIRR), a standard for reporting AIRR-seq studies. The MiAIRR standard has been operationalized using the National Center for Biotechnology Information (NCBI) repositories. Submissions of AIRR-seq data to the NCBI repositories typically use a combination of web-based and flat-file templates and include only a minimal amount of terminology validation. As a result, AIRR-seq studies at the NCBI are often described using inconsistent terminologies, limiting scientists' ability to access, find, interoperate, and reuse the data sets. To improve metadata quality and ease submission of AIRR-seq studies to the NCBI, we have leveraged the software framework developed by the Center for Expanded Data Annotation and Retrieval (CEDAR), which builds technologies that use data standards and ontologies to improve metadata quality. The resulting CEDAR-AIRR (CAIRR) pipeline enables data submitters to: (i) create web-based templates whose entries are controlled by ontology terms, (ii) generate and validate metadata, and (iii) submit the ontology-linked metadata and sequence files (FASTQ) to the NCBI BioProject, BioSample, and Sequence Read Archive databases. Overall, CAIRR provides a web-based metadata submission interface that supports compliance with the MiAIRR standard. The pipeline is available at http://cairr.miairr.org and will facilitate the NCBI submission process and improve the metadata quality of AIRR-seq studies.
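For intuition only, here is a minimal sketch of the kind of terminology check an ontology-linked submission pipeline might perform before packaging metadata for the NCBI repositories; the field names, allowed terms, and record are hypothetical and do not reflect the MiAIRR schema or CAIRR's actual implementation.

```python
# Illustrative sketch only: checking metadata field values against controlled
# terms before submission. Field names and terms are invented examples, not
# the MiAIRR standard or the CAIRR pipeline's real validation rules.

ALLOWED_TERMS = {
    "organism": {"Homo sapiens", "Mus musculus"},
    "cell_subset": {"naive B cell", "memory B cell", "plasmablast"},
}

record = {
    "organism": "Homo sapiens",
    "cell_subset": "memory B cell",
    "sequencing_files": ["sample1_R1.fastq.gz", "sample1_R2.fastq.gz"],
}

def validate_metadata(record: dict) -> list:
    """Collect human-readable errors for fields whose values are not controlled terms."""
    errors = []
    for field, allowed in ALLOWED_TERMS.items():
        value = record.get(field)
        if value not in allowed:
            errors.append(f"{field}: '{value}' is not an allowed term")
    return errors

problems = validate_metadata(record)
print("ready to submit" if not problems else problems)
```

Enforcing controlled terms at entry time, rather than after deposition, is what keeps records findable and comparable across studies.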


Subjects
Computational Biology/methods, Nucleic Acid Databases, B-Cell Antigen Receptors/genetics, T-Cell Antigen Receptors/genetics, Software, Computational Biology/organization & administration, Data Mining, Gene Ontology, Humans, Metadata, Reproducibility of Results, User-Computer Interface, Workflow
3.
Transl Oncol; 7(1): 23-35, 2014 Feb.
Article in English | MEDLINE | ID: mdl-24772204

ABSTRACT

There are two key challenges hindering effective use of quantitative imaging assessment in cancer response assessment: 1) radiologists usually describe the cancer lesions in imaging studies subjectively and sometimes ambiguously, and 2) it is difficult to repurpose imaging data, because lesion measurements are not recorded in a format that permits machine interpretation and interoperability. We have developed a freely available software platform based on open standards, the electronic Physician Annotation Device (ePAD), to tackle these challenges in two ways. First, ePAD helps the radiologist carry out cancer lesion measurements as part of the routine clinical trial image interpretation workflow. Second, ePAD records all image measurements and annotations in a data format that permits repurposing image data for analyses of alternative imaging biomarkers of treatment response. To determine the impact of ePAD on radiologist efficiency in quantitative assessment of imaging studies, a radiologist evaluated computed tomography (CT) imaging studies from 20 subjects, each with one baseline and three consecutive follow-up imaging studies, with and without ePAD. The radiologist made measurements of target lesions in each imaging study using Response Evaluation Criteria in Solid Tumors (RECIST) 1.1, initially with the aid of ePAD, and then, after a 30-day washout period, reread the exams without ePAD. The mean total time required to review the images and summarize measurements of target lesions was 15% (P < .039) shorter using ePAD than without the tool. In addition, it was possible to rapidly reanalyze the images to explore lesion cross-sectional area as an alternative imaging biomarker to linear measurement. We conclude that ePAD shows promise for improving reader efficiency in quantitative assessment of CT examinations, and it may enable discovery of novel image-based biomarkers of cancer treatment response.
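To make the measurement workflow concrete, the sketch below works through the RECIST 1.1 target-lesion arithmetic that a tool such as ePAD automates (sum of longest diameters, percent change, response category), plus a simple elliptical approximation of cross-sectional area as one possible alternative measure. The lesion values are invented, and progression is computed against baseline rather than the nadir to keep the example short.

```python
# Illustrative sketch only: RECIST 1.1 target-lesion arithmetic with made-up
# measurements. RECIST 1.1 defines progression relative to the nadir sum; the
# baseline is used here only to keep the example compact.

import math

baseline_diameters_mm = [32.0, 21.0, 15.0]   # longest diameters of target lesions
followup_diameters_mm = [20.0, 14.0, 10.0]

baseline_sum = sum(baseline_diameters_mm)    # 68.0 mm
followup_sum = sum(followup_diameters_mm)    # 44.0 mm
change = (followup_sum - baseline_sum) / baseline_sum * 100   # about -35%

if followup_sum == 0:
    response = "Complete response"
elif change <= -30:
    response = "Partial response"
elif change >= 20 and (followup_sum - baseline_sum) >= 5:
    response = "Progressive disease"
else:
    response = "Stable disease"

print(f"Sum of diameters: {baseline_sum} -> {followup_sum} mm ({change:.1f}%): {response}")

# A simple elliptical approximation of lesion cross-sectional area from long-
# and short-axis measurements, shown only as one conceivable alternative metric.
long_axis, short_axis = 20.0, 12.0
area_mm2 = math.pi * (long_axis / 2) * (short_axis / 2)
print(f"Approximate cross-sectional area: {area_mm2:.1f} mm^2")
```

Because the annotations are stored in a machine-readable form, recomputing an alternative metric like area amounts to rerunning a calculation over recorded measurements rather than rereading the images.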

4.
AMIA Annu Symp Proc; 2009: 359-63, 2009 Nov 14.
Article in English | MEDLINE | ID: mdl-20351880

ABSTRACT

Identifying, tracking and reasoning about tumor lesions is a central task in cancer research and clinical practice that could potentially be automated. However, information about tumor lesions in imaging studies is not easily accessed by machines for automated reasoning. The Annotation and Image Markup (AIM) information model recently developed for the cancer Biomedical Informatics Grid provides a method for encoding the semantic information related to imaging findings, enabling their storage and transfer. However, it is currently not possible to apply automated reasoning methods to image information encoded in AIM. We have developed a methodology and a suite of tools for transforming AIM image annotations into OWL, and an ontology for reasoning with the resulting image annotations for tumor lesion assessment. Our methods enable automated inference of semantic information about cancer lesions in images.
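As a rough illustration of the general idea, not the authors' actual AIM-to-OWL mapping, the sketch below expresses a single lesion annotation as RDF triples that could be loaded into an OWL reasoner or queried with SPARQL; the namespace, classes, and properties are invented, and the rdflib package is assumed.

```python
# Illustrative sketch only: representing an image annotation as RDF so that it
# becomes accessible to reasoners and query engines. The namespace and the
# class/property names are invented; they are not the AIM model or the
# ontology described in the paper. Requires the rdflib package.

from rdflib import Graph, Literal, Namespace, RDF, URIRef
from rdflib.namespace import XSD

EX = Namespace("http://example.org/imaging#")

g = Graph()
annotation = URIRef(EX["annotation-001"])
lesion = URIRef(EX["lesion-001"])

g.add((annotation, RDF.type, EX.ImageAnnotation))
g.add((annotation, EX.describesLesion, lesion))
g.add((lesion, RDF.type, EX.TargetLesion))
g.add((lesion, EX.longestDiameterMm, Literal(32.0, datatype=XSD.decimal)))
g.add((lesion, EX.anatomicSite, Literal("liver")))

# Serialize to Turtle; the triples could then be classified by an OWL reasoner
# or queried with SPARQL (e.g., find all target lesions in the liver).
print(g.serialize(format="turtle"))
```

The point of such a transformation is that assessments which would otherwise require a human reading free text (for example, identifying which lesions qualify as target lesions) can be inferred automatically over the graph.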


Subjects
Computer-Assisted Image Interpretation, Natural Language Processing, Neoplasms/diagnosis, Programming Languages, Clinical Trials as Topic, Diagnostic Imaging, Humans, Semantics
5.
AMIA Annu Symp Proc; : 614-9, 2007 Oct 11.
Article in English | MEDLINE | ID: mdl-18693909

ABSTRACT

Many biomedical research databases contain time-oriented data resulting from longitudinal, time-series, and time-dependent study designs, yet most data-analytic methods do not handle this temporal knowledge explicitly. To make use of such knowledge about research data, we have developed an ontology-driven temporal mining method called ChronoMiner. Most mining algorithms require that data be provided in a single table; ChronoMiner, in contrast, can search for interesting temporal patterns across multiple input tables and at different levels of hierarchical representation. In this paper, we present the application of our method to the discovery of temporal associations between newly arising mutations in the HIV genome and past drug regimens. We discuss the various components of ChronoMiner, including its user interface, and provide results of a study indicating the efficiency and potential value of ChronoMiner on an existing HIV drug resistance data repository.
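For illustration only, the toy sketch below counts how often a mutation is first detected after exposure to a given drug across two input tables; it is a drastically simplified stand-in for temporal association mining, not ChronoMiner's algorithm, and the patient data are invented.

```python
# Illustrative sketch only: a toy temporal co-occurrence count across two
# tables (drug exposures and observed mutations). This is not ChronoMiner's
# ontology-driven method; the records are fabricated for the example.

from collections import Counter
from datetime import date

regimens = [  # (patient_id, drug, start_date)
    ("p1", "AZT", date(2001, 1, 10)),
    ("p1", "3TC", date(2001, 6, 1)),
    ("p2", "AZT", date(2002, 3, 5)),
]

mutations = [  # (patient_id, mutation, first_detected)
    ("p1", "M184V", date(2001, 9, 15)),
    ("p2", "T215Y", date(2002, 8, 20)),
]

pair_counts = Counter()
for pid, drug, start in regimens:
    for mpid, mutation, detected in mutations:
        if pid == mpid and detected > start:   # mutation observed after exposure began
            pair_counts[(drug, mutation)] += 1

for (drug, mutation), n in pair_counts.most_common():
    print(f"{drug} -> {mutation}: {n} patient(s)")
```

A real temporal miner would additionally weigh how often the pattern could have occurred but did not, and would exploit hierarchical groupings (for example, drug classes rather than individual drugs) when searching for patterns.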


Subjects
Algorithms, Anti-HIV Agents/therapeutic use, Viral Drug Resistance/genetics, HIV Infections/drug therapy, HIV/genetics, Information Storage and Retrieval/methods, Databases as Topic, Viral Genome, HIV Infections/virology, Humans, Knowledge Bases, Longitudinal Studies, Mutation, Time, User-Computer Interface, Viral Load, Controlled Vocabulary