1.
Phys Rev Lett ; 133(7): 071801, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39213562

ABSTRACT

Millicharged particles appear in several extensions of the standard model, but have not yet been detected. These hypothetical particles could be produced by an intense proton beam striking a fixed target. We use data collected in 2020 by the SENSEI experiment in the MINOS cavern at the Fermi National Accelerator Laboratory to search for ultrarelativistic millicharged particles produced in collisions of protons in the NuMI beam with a fixed graphite target. The absence of any ionization events with 3 to 6 electrons in the SENSEI data allows us to place world-leading constraints on millicharged particles for masses between 30 and 380 MeV. This work also demonstrates the potential of using low-threshold detectors to investigate new particles in beam-dump experiments, and motivates a future experiment designed specifically for this purpose.

2.
bioRxiv ; 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-38979347

ABSTRACT

The large-scale experimental measures of variant functional assays submitted to MaveDB have the potential to provide key information for resolving variants of uncertain significance, but the reporting of results relative to the assayed sequence hinders their downstream utility. The Atlas of Variant Effects Alliance mapped multiplexed assays of variant effect data to human reference sequences, creating a robust set of machine-readable homology mappings. This method processed approximately 2.5 million protein and genomic variants in MaveDB, successfully mapping 98.61% of examined variants and disseminating data to resources such as the UCSC Genome Browser and Ensembl Variant Effect Predictor.

3.
Genome Biol ; 25(1): 100, 2024 04 19.
Article in English | MEDLINE | ID: mdl-38641812

ABSTRACT

Multiplexed assays of variant effect (MAVEs) have emerged as a powerful approach for interrogating thousands of genetic variants in a single experiment. The flexibility and widespread adoption of these techniques across diverse disciplines have led to a heterogeneous mix of data formats and descriptions, which complicates the downstream use of the resulting datasets. To address these issues and promote reproducibility and reuse of MAVE data, we define a set of minimum information standards for MAVE data and metadata and outline a controlled vocabulary aligned with established biomedical ontologies for describing these experimental designs.


Subject(s)
Metadata, Research Design, Reproducibility of Results
4.
Nucleic Acids Res ; 52(D1): D1227-D1235, 2024 Jan 05.
Article in English | MEDLINE | ID: mdl-37953380

ABSTRACT

The Drug-Gene Interaction Database (DGIdb, https://dgidb.org) is a publicly accessible resource that aggregates genes or gene products, drugs and drug-gene interaction records to drive hypothesis generation and discovery for clinicians and researchers. DGIdb 5.0 is the latest release and includes substantial architectural and functional updates to support integration into clinical and drug discovery pipelines. The DGIdb service architecture has been split into separate client and server applications, enabling consistent data access for users of both the application programming interface (API) and web interface. The new interface was developed in ReactJS and includes dynamic visualizations and a consistent display of user interface elements. A GraphQL API has been added to support customizable queries for all drugs, genes, annotations and associated data. Updated documentation provides users with example queries and detailed usage instructions for these new features. In addition, six sources have been added and many existing sources have been updated. Newly added sources include ChemIDplus, HemOnc, NCIt (National Cancer Institute Thesaurus), Drugs@FDA, HGNC (HUGO Gene Nomenclature Committee) and RxNorm. These new sources have been incorporated into DGIdb to provide additional records and enhance annotations of regulatory approval status for therapeutics. Methods for grouping drugs and genes have been expanded and implemented as independent, modular normalizers applied during import. The updates to these sources and grouping methods have resulted in improved FAIR (findability, accessibility, interoperability and reusability) data representation in DGIdb.
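To make the GraphQL access pattern concrete, the sketch below shows the general shape of such a query from Python. The endpoint path and the field names (genes, nodes, interactions, drug, interactionScore) are assumptions for illustration only; the actual schema is documented on dgidb.org.

```python
# Hypothetical sketch of querying a DGIdb-style GraphQL endpoint for
# drug-gene interactions. Endpoint and field names are assumptions;
# consult the DGIdb documentation for the real schema.
import requests

GRAPHQL_URL = "https://dgidb.org/api/graphql"  # assumed endpoint

QUERY = """
query GeneInteractions($names: [String!]!) {
  genes(names: $names) {
    nodes {
      name
      interactions {
        drug { name }
        interactionScore
      }
    }
  }
}
"""

def fetch_interactions(gene_symbols):
    """Return the raw JSON response for a list of gene symbols."""
    resp = requests.post(
        GRAPHQL_URL,
        json={"query": QUERY, "variables": {"names": gene_symbols}},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    data = fetch_interactions(["BRAF"])
    for gene in data["data"]["genes"]["nodes"]:
        for ix in gene["interactions"]:
            print(gene["name"], ix["drug"]["name"], ix["interactionScore"])
```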


Subject(s)
Precision Medicine, Humans, Pharmaceutical Databases, Drug Discovery, Internet, User-Computer Interface, Controlled Vocabulary
5.
JAMIA Open ; 6(4): ooad093, 2023 Dec.
Article in English | MEDLINE | ID: mdl-37954974

ABSTRACT

Objective: The diversity of nomenclature and naming strategies makes therapeutic terminology difficult to manage and harmonize. As the number and complexity of available therapeutic ontologies continues to increase, the need for harmonized cross-resource mappings is becoming increasingly apparent. This study creates harmonized concept mappings that link like concepts despite source-dependent differences in data structure or semantic representation. Materials and Methods: For this study, we created Thera-Py, a Python package and web API that constructs searchable concepts for drugs and therapeutic terminologies using 9 public resources and thesauri. By using a directed graph approach, Thera-Py captures commonly used aliases, trade names, annotations, and associations for any given therapeutic and combines them under a single concept record. Results: We highlight the creation of 16 069 unique merged therapeutic concepts from 9 distinct sources using Thera-Py and observe an increase in the overlap of therapeutic concepts found in 2 or more knowledge bases after harmonization with Thera-Py (from 9.8% to 41.8%). Conclusion: We observe that Thera-Py tends to normalize therapeutic concepts to their underlying active ingredients (excluding nondrug therapeutics, eg, radiation therapy, biologics), and unifies all available descriptors regardless of ontological origin.
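The directed-graph grouping idea can be sketched as follows: records from different sources become nodes, shared aliases or trade names link them, and each connected group is emitted as one merged concept. This is an illustrative sketch with made-up records, not Thera-Py's actual implementation.

```python
# Illustrative sketch (not Thera-Py's code): merge therapy records from
# multiple sources into single concepts by linking records that share a
# normalized label, alias, or trade name.
from collections import defaultdict

records = [
    {"source": "rxnorm", "id": "rxcui:1234", "label": "imatinib",
     "aliases": {"imatinib mesylate", "gleevec"}},
    {"source": "chembl", "id": "chembl:941", "label": "IMATINIB",
     "aliases": {"sti-571", "gleevec"}},
    {"source": "ncit", "id": "ncit:C1687", "label": "Aspirin",
     "aliases": {"acetylsalicylic acid"}},
]

def merge_concepts(records):
    """Group records whose normalized labels or aliases overlap."""
    by_name = defaultdict(set)              # normalized name -> record indices
    for i, rec in enumerate(records):
        for name in {rec["label"], *rec["aliases"]}:
            by_name[name.lower()].add(i)

    parent = list(range(len(records)))      # union-find over record indices
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    def union(a, b):
        parent[find(a)] = find(b)

    for indices in by_name.values():
        ordered = sorted(indices)
        for other in ordered[1:]:
            union(ordered[0], other)

    groups = defaultdict(list)
    for i, rec in enumerate(records):
        groups[find(i)].append(rec)
    return list(groups.values())

for concept in merge_concepts(records):
    print([r["id"] for r in concept])   # imatinib records merge; aspirin stands alone
```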

6.
Learn Health Syst ; 7(4): e10385, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37860057

ABSTRACT

Introduction: Variant annotation is a critical component in next-generation sequencing, enabling a sequencing lab to comb through a sea of variants in order to home in on those likely to be most significant, and providing clinicians with necessary context for decision-making. But with the rapid evolution of genomics knowledge, reported annotations can quickly become out-of-date. Under the ONC Sync for Genes program, our team sought to standardize the sharing of dynamically annotated variants (i.e., variants annotated on demand, based on current knowledge). The computable biomedical knowledge artifacts that were developed enable a clinical decision support (CDS) application to surface up-to-date annotations to clinicians. Methods: The work reported in this article relies on the Health Level 7 Fast Healthcare Interoperability Resources (FHIR) Genomics and Global Alliance for Genomics and Health (GA4GH) Variant Annotation (VA) standards. We developed a CDS pipeline that dynamically annotates a patient's variants through an intersection with current knowledge and serves up the FHIR-encoded variants and annotations (diagnostic and therapeutic implications, molecular consequences, population allele frequencies) via FHIR Genomics Operations. ClinVar, CIViC, and PharmGKB were used as knowledge sources, encoded as per the GA4GH VA specification. Results: Primary public artifacts from this project include a GitHub repository with all source code, a Swagger interface that allows anyone to visualize and interact with the code using only a web browser, and a backend database where all (synthetic and anonymized) patient data and knowledge are housed. Conclusions: We found that variant annotation varies in complexity based on the variant type, and that various bioinformatics strategies can greatly improve automated annotation fidelity. More importantly, we demonstrated the feasibility of an ecosystem where genomic knowledge bases have standardized knowledge (e.g., based on the GA4GH VA spec), and CDS applications can dynamically leverage that knowledge to provide real-time decision support, based on current knowledge, to clinicians at the point of care.
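A minimal sketch of the "intersection with current knowledge" step is shown below: a patient's variants are matched against knowledge-base records keyed on a normalized variant identifier, so the returned annotations always reflect the knowledge base as it stands at query time. The identifiers, field names, and records are illustrative assumptions, not the project's actual pipeline, which serves results via FHIR Genomics Operations.

```python
# Illustrative sketch (assumed field names): annotate a patient's variants on
# demand by intersecting them with knowledge-base records keyed on a
# normalized variant identifier.
knowledge_base = {
    # normalized variant id -> annotations (implications, consequences,
    # population allele frequencies); contents are illustrative only
    "NC_000007.14:g.140753336A>T": [
        {"source": "CIViC", "type": "therapeutic-implication",
         "summary": "BRAF V600E predicts sensitivity to BRAF inhibitors"},
        {"source": "gnomAD", "type": "population-allele-frequency",
         "value": 0.0},
    ],
}

patient_variants = [
    {"patient_id": "example-123", "variant_id": "NC_000007.14:g.140753336A>T"},
    {"patient_id": "example-123", "variant_id": "NC_000017.11:g.7674220C>T"},
]

def annotate(variants, kb):
    """Return (variant, annotations) pairs computed at request time, so the
    result always reflects the current state of the knowledge base."""
    return [(v, kb.get(v["variant_id"], [])) for v in variants]

for variant, annotations in annotate(patient_variants, knowledge_base):
    print(variant["variant_id"], "->", len(annotations), "annotations")
```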

7.
ArXiv ; 2023 Jun 26.
Article in English | MEDLINE | ID: mdl-37426450

ABSTRACT

Multiplexed Assays of Variant Effect (MAVEs) have emerged as a powerful approach for interrogating thousands of genetic variants in a single experiment. The flexibility and widespread adoption of these techniques across diverse disciplines have led to a heterogeneous mix of data formats and descriptions, which complicates the downstream use of the resulting datasets. To address these issues and promote reproducibility and reuse of MAVE data, we define a set of minimum information standards for MAVE data and metadata and outline a controlled vocabulary aligned with established biomedical ontologies for describing these experimental designs.

8.
PLoS One ; 18(5): e0285433, 2023.
Article in English | MEDLINE | ID: mdl-37196000

ABSTRACT

The Global Alliance for Genomics and Health (GA4GH) is a standards-setting organization that is developing a suite of coordinated standards for genomics. The GA4GH Phenopacket Schema is a standard for sharing disease and phenotype information that characterizes an individual person or biosample. The Phenopacket Schema is flexible and can represent clinical data for any kind of human disease including rare disease, complex disease, and cancer. It also allows consortia or databases to apply additional constraints to ensure uniform data collection for specific goals. We present phenopacket-tools, an open-source Java library and command-line application for construction, conversion, and validation of phenopackets. Phenopacket-tools simplifies construction of phenopackets by providing concise builders, programmatic shortcuts, and predefined building blocks (ontology classes) for concepts such as anatomical organs, age of onset, biospecimen type, and clinical modifiers. Phenopacket-tools can be used to validate the syntax and semantics of phenopackets as well as to assess adherence to additional user-defined requirements. The documentation includes examples showing how to use the Java library and the command-line tool to create, convert, and validate phenopackets. Source code, API documentation, a comprehensive user guide, and a tutorial can be found at https://github.com/phenopackets/phenopacket-tools. The library can be installed from the public Maven Central artifact repository and the application is available as a standalone archive. The phenopacket-tools library helps developers implement and standardize the collection and exchange of phenotypic and other clinical data for use in phenotype-driven genomic diagnostics, translational research, and precision medicine applications.


Subject(s)
Neoplasms, Software, Humans, Genomics, Factual Databases, Gene Library
9.
Adv Genet (Hoboken) ; 4(1): 2200016, 2023 Mar.
Article in English | MEDLINE | ID: mdl-36910590

ABSTRACT

The Global Alliance for Genomics and Health (GA4GH) is developing a suite of coordinated standards for genomics in healthcare. The Phenopacket is a new GA4GH standard for sharing disease and phenotype information that characterizes an individual person, linking that individual to detailed phenotypic descriptions, genetic information, diagnoses, and treatments. A detailed example is presented that illustrates how to use the schema to represent the clinical course of a patient with retinoblastoma, including demographic information, the clinical diagnosis, phenotypic features and clinical measurements, an examination of the extirpated tumor, therapies, and the results of genomic analysis. The Phenopacket Schema, together with other GA4GH data and technical standards, will enable data exchange and provide a foundation for the computational analysis of disease and phenotype information to improve our ability to diagnose and conduct research on all types of disorders, including cancer and rare diseases.
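As a rough illustration of what such a representation looks like, the sketch below builds a heavily abridged, hypothetical phenopacket-like JSON document in Python. The field names follow the general shape of the Phenopacket Schema (v2), but the ontology terms and values are illustrative and this is not the detailed example from the article.

```python
# Abridged, illustrative phenopacket-like JSON for a retinoblastoma case,
# built as a plain Python dict. Field names follow the general shape of the
# GA4GH Phenopacket Schema v2; terms and values are illustrative only.
import json

phenopacket = {
    "id": "example-retinoblastoma-1",
    "subject": {"id": "proband-1", "sex": "FEMALE",
                "timeAtLastEncounter": {"age": {"iso8601duration": "P6M"}}},
    "phenotypicFeatures": [
        {"type": {"id": "HP:0009919", "label": "Retinoblastoma"}},
        {"type": {"id": "HP:0000555", "label": "Leukocoria"}},
    ],
    "diseases": [
        {"term": {"id": "NCIT:C7541", "label": "Retinoblastoma"},
         "onset": {"age": {"iso8601duration": "P4M"}}},
    ],
    "metaData": {
        "created": "2023-01-01T00:00:00Z",
        "phenopacketSchemaVersion": "2.0",
        "resources": [{"id": "hp", "name": "Human Phenotype Ontology",
                       "namespacePrefix": "HP"}],
    },
}

print(json.dumps(phenopacket, indent=2))
```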

10.
Pac Symp Biocomput ; 28: 383-394, 2023.
Article in English | MEDLINE | ID: mdl-36540993

ABSTRACT

As the diversity of genomic variation data increases with our growing understanding of the role of variation in health and disease, it is critical to develop standards for precise inter-system exchange of these data for research and clinical applications. The Global Alliance for Genomics and Health (GA4GH) Variation Representation Specification (VRS) meets this need through a technical terminology and information model for disambiguating and concisely representing variation concepts. Here we discuss the recent Genotype model in VRS, which may be used to represent the allelic composition of a genetic locus. We demonstrate the use of the Genotype model and the constituent Haplotype model for the precise and interoperable representation of pharmacogenomic diplotypes, HGVS variants, and VCF records using VRS and discuss how this can be leveraged to enable interoperable exchange and search operations between assayed variation and genomic knowledgebases.
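A sketch of that nesting is shown below: a Genotype whose members are GenotypeMembers carrying Alleles (in full VRS, a pharmacogenomic star allele would typically be a Haplotype of its defining Alleles). The structure approximates the VRS information model; the sequence identifiers and coordinates are illustrative placeholders rather than computed VRS digests.

```python
# Illustrative sketch of a VRS-style Genotype for a CYP2C19-like diplotype.
# The structure approximates the VRS Allele/GenotypeMember/Genotype models;
# identifiers and coordinates are placeholders, not real VRS digests.
def allele(sequence_id, start, end, alt):
    """A minimal VRS-like Allele: a SequenceLocation plus a sequence state."""
    return {
        "type": "Allele",
        "location": {
            "type": "SequenceLocation",
            "sequence_id": sequence_id,
            "interval": {"type": "SequenceInterval",
                         "start": {"type": "Number", "value": start},
                         "end": {"type": "Number", "value": end}},
        },
        "state": {"type": "LiteralSequenceExpression", "sequence": alt},
    }

# Placeholder coordinates for a defining SNV (alt allele) and its reference base.
star2_allele = allele("refseq:NC_000010.11", 94781858, 94781859, "A")
star1_allele = allele("refseq:NC_000010.11", 94781858, 94781859, "G")

genotype = {
    "type": "Genotype",
    "count": 2,                               # total copies at the locus
    "members": [
        {"type": "GenotypeMember", "count": 1, "variation": star2_allele},
        {"type": "GenotypeMember", "count": 1, "variation": star1_allele},
    ],
}

print(genotype["members"][0]["variation"]["state"]["sequence"])  # "A"
```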


Subject(s)
Computational Biology, Genetic Variation, Humans, Genetic Databases, Genomics, Genotype
11.
Pac Symp Biocomput ; 28: 531-535, 2023.
Article in English | MEDLINE | ID: mdl-36541006

ABSTRACT

The Clinical Genome Resource (ClinGen) serves as an authoritative resource on the clinical relevance of genes and variants. In order to support our curation activities and to disseminate our findings to the community, we have developed a Data Platform of informatics resources backed by standardized data models. In this workshop, we demonstrate our publicly available resources, including curation interfaces (Variant Curation Interface, CIViC), supporting infrastructure (Allele Registry, Genegraph), and data models (SEPIO, GA4GH VRS, VA).


Subject(s)
Computational Biology, Genetic Variation, Humans, Genetic Databases, Human Genome, Genomics
12.
Bioinformatics ; 38(23): 5279-5287, 2022 11 30.
Article in English | MEDLINE | ID: mdl-36222570

ABSTRACT

MOTIVATION: Despite the increasing evidence of the utility of genomic medicine in clinical practice, systematically integrating genomic medicine information and knowledge into clinical systems with a high level of consistency, scalability and computability remains challenging. A comprehensive terminology for the relevant concepts is required, along with an associated knowledge model for representing their relationships. In this study, we leveraged PharmGKB, a comprehensive pharmacogenomics (PGx) knowledgebase, to formulate a terminology for drug response phenotypes that can represent relationships between genetic variants and treatments. We evaluated coverage of the terminology through manual review of a randomly selected subset of 200 sentences extracted from genetic reports that contained concepts for 'Genes and Gene Products' and 'Treatments'. RESULTS: Results showed that our proposed drug response phenotype terminology could cover 96% of the drug response phenotypes in genetic reports. Among 18 653 sentences that contained both 'Genes and Gene Products' and 'Treatments', 3011 sentences could be mapped to a drug response phenotype in our proposed terminology; the most frequently discussed drug response phenotypes were response (994), sensitivity (829) and survival (332). In addition, we were able to re-analyze the genetic report context by incorporating the proposed terminology and to enrich our previously proposed PGx knowledge model, revealing relationships between genetic variants and treatments. In conclusion, we propose a drug response phenotype terminology that enhances structured knowledge representation in genomic medicine. SUPPLEMENTARY INFORMATION: Supplementary data are available at Bioinformatics online.
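A toy sketch of the sentence-to-terminology mapping step is shown below; the terms and synonym sets are illustrative stand-ins, not the terminology proposed in the paper.

```python
# Toy sketch: map free-text mentions in genetic-report sentences to a small
# drug response phenotype terminology. Terms and synonyms are illustrative
# only, not the terminology formulated from PharmGKB in the paper.
import re

TERMINOLOGY = {
    "response":    {"response", "overall response", "responder"},
    "sensitivity": {"sensitivity", "sensitive"},
    "resistance":  {"resistance", "resistant", "refractory"},
    "survival":    {"survival", "overall survival", "progression-free survival"},
}

def map_sentence(sentence):
    """Return the set of drug response phenotype terms mentioned in a sentence."""
    text = sentence.lower()
    hits = set()
    for term, synonyms in TERMINOLOGY.items():
        if any(re.search(r"\b" + re.escape(s) + r"\b", text) for s in synonyms):
            hits.add(term)
    return hits

sentence = ("EGFR L858R is associated with sensitivity to erlotinib and "
            "improved progression-free survival.")
print(map_sentence(sentence))   # {'sensitivity', 'survival'}
```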


Subject(s)
Genomic Medicine, Pharmacogenetics, Pharmacogenetics/methods, Knowledge Bases, Phenotype
14.
Br J Cancer ; 127(8): 1540-1549, 2022 11.
Article in English | MEDLINE | ID: mdl-35871236

ABSTRACT

BACKGROUND: Cholangiocarcinoma (CCA) is a primary malignancy of the biliary tract with a dismal prognosis. Recently, several actionable genetic aberrations were identified with significant enrichment in intrahepatic CCA (iCCA), including FGFR2 gene fusions with a prevalence of 10-15%. Recent clinical data demonstrate that these fusions are druggable in a second-line setting in advanced/metastatic disease, and their efficacy in earlier lines of therapy is being evaluated in ongoing clinical trials. This scenario warrants standardised molecular profiling of these tumours. METHODS: A detailed analysis was conducted of the original genetic data from the FIGHT-202 trial, on which the approval of pemigatinib was based. RESULTS: Comparing different detection approaches and presenting representative cases, we describe the genetic landscape and architecture of FGFR2 fusions in iCCA and show the biological and technical aspects to be considered for their detection. We elaborate parameters, including a suggestion for annotation, that should be stated in a molecular diagnostic FGFR2 report to allow a complete understanding of the analysis performed and the information provided. CONCLUSION: This study provides a detailed presentation and dissection of the technical and biological aspects of FGFR2 fusion detection, which aims to support molecular pathologists, pathologists and clinicians in diagnostics, reporting of results and decision-making.


Subject(s)
Bile Duct Neoplasms, Cholangiocarcinoma, Bile Duct Neoplasms/drug therapy, Intrahepatic Bile Ducts/pathology, Cholangiocarcinoma/drug therapy, Genomics, Humans, Molecular Diagnostic Techniques, Fibroblast Growth Factor Receptor 2/genetics
16.
Database (Oxford) ; 2022, 2022 05 25.
Article in English | MEDLINE | ID: mdl-35616100

ABSTRACT

Despite progress in the development of standards for describing and exchanging scientific information, the lack of easy-to-use standards for mapping between different representations of the same or similar objects in different databases poses a major impediment to data integration and interoperability. Mappings often lack the metadata needed to be correctly interpreted and applied. For example, are two terms equivalent or merely related? Are they narrow or broad matches? Or are they associated in some other way? Such relationships between the mapped terms are often not documented, which leads to incorrect assumptions and makes them hard to use in scenarios that require a high degree of precision (such as diagnostics or risk prediction). Furthermore, the lack of descriptions of how mappings were done makes it hard to combine and reconcile mappings, particularly curated and automated ones. We have developed the Simple Standard for Sharing Ontological Mappings (SSSOM) which addresses these problems by: (i) Introducing a machine-readable and extensible vocabulary to describe metadata that makes imprecision, inaccuracy and incompleteness in mappings explicit. (ii) Defining an easy-to-use simple table-based format that can be integrated into existing data science pipelines without the need to parse or query ontologies, and that integrates seamlessly with Linked Data principles. (iii) Implementing open and community-driven collaborative workflows that are designed to evolve the standard continuously to address changing requirements and mapping practices. (iv) Providing reference tools and software libraries for working with the standard. In this paper, we present the SSSOM standard, describe several use cases in detail and survey some of the existing work on standardizing the exchange of mappings, with the goal of making mappings Findable, Accessible, Interoperable and Reusable (FAIR). The SSSOM specification can be found at http://w3id.org/sssom/spec. Database URL: http://w3id.org/sssom/spec.
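To make the table-based format concrete, the sketch below parses a small, hypothetical SSSOM-style TSV with pandas and filters it by predicate and confidence. The column names follow commonly used SSSOM slots, but the rows are invented for illustration; the normative slot list is in the specification.

```python
# A small, hypothetical SSSOM-style mapping table parsed with pandas.
# Column names follow commonly used SSSOM slots; rows are invented.
# See http://w3id.org/sssom/spec for the normative specification.
import io
import pandas as pd

sssom_tsv = """\
subject_id\tsubject_label\tpredicate_id\tobject_id\tobject_label\tmapping_justification\tconfidence
HP:0009919\tRetinoblastoma\tskos:exactMatch\tNCIT:C7541\tRetinoblastoma\tsemapv:ManualMappingCuration\t0.99
HP:0000555\tLeukocoria\tskos:broadMatch\tEX:0001\tExample ocular term\tsemapv:LexicalMatching\t0.70
"""

mappings = pd.read_csv(io.StringIO(sssom_tsv), sep="\t")

# Keep only exact matches with high confidence, e.g. for precision-critical use.
exact = mappings[(mappings["predicate_id"] == "skos:exactMatch")
                 & (mappings["confidence"] >= 0.9)]
print(exact[["subject_id", "object_id"]])
```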


Subject(s)
Metadata, Semantic Web, Data Management, Factual Databases, Workflow
18.
Genet Med ; 24(5): 986-998, 2022 05.
Article in English | MEDLINE | ID: mdl-35101336

ABSTRACT

PURPOSE: Several professional societies have published guidelines for the clinical interpretation of somatic variants, which specifically address diagnostic, prognostic, and therapeutic implications. Although these guidelines for the clinical interpretation of variants include data types that may be used to determine the oncogenicity of a variant (eg, population frequency, functional, and in silico data or somatic frequency), they do not provide a direct, systematic, and comprehensive set of standards and rules to classify the oncogenicity of a somatic variant. This insufficient guidance leads to inconsistent classification of rare somatic variants in cancer, generates variability in their clinical interpretation, and, importantly, affects patient care. Therefore, it is essential to address this unmet need. METHODS: The Clinical Genome Resource (ClinGen) Somatic Cancer Clinical Domain Working Group and ClinGen Germline/Somatic Variant Subcommittee, the Cancer Genomics Consortium, and the Variant Interpretation for Cancer Consortium used a consensus approach to develop a standard operating procedure (SOP) for the classification of the oncogenicity of somatic variants. RESULTS: This comprehensive SOP has been developed to improve consistency in somatic variant classification and has been validated on 94 somatic variants in 10 common cancer-related genes. CONCLUSION: The comprehensive SOP is now available for classification of the oncogenicity of somatic variants.
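The SOP itself defines specific evidence criteria and how they combine; the sketch below shows only the general shape of a point-based classifier of this kind. The evidence codes, point values, and thresholds are illustrative inventions, not those of the published SOP.

```python
# Illustrative point-based classifier for somatic-variant oncogenicity.
# Evidence codes, points, and thresholds are invented to show the general
# shape of such a scheme; they are NOT the values defined in the SOP.
ILLUSTRATIVE_POINTS = {
    "hotspot_recurrence": 4,            # e.g., recurrent at a cancer hotspot
    "functional_evidence_supports": 2,
    "absent_from_population_dbs": 1,
    "high_population_frequency": -4,
    "functional_evidence_benign": -2,
}

def classify(evidence_codes):
    """Sum points for observed evidence and map the total to an illustrative tier."""
    score = sum(ILLUSTRATIVE_POINTS[code] for code in evidence_codes)
    if score >= 6:
        return score, "Oncogenic"
    if score >= 3:
        return score, "Likely oncogenic"
    if score <= -3:
        return score, "Likely benign"
    return score, "Variant of uncertain significance"

print(classify(["hotspot_recurrence", "functional_evidence_supports"]))
# -> (6, 'Oncogenic')  -- thresholds here are illustrative only
```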


Subject(s)
Human Genome, Neoplasms, Genetic Testing/methods, Genetic Variation/genetics, Human Genome/genetics, Genomics/methods, Humans, Neoplasms/genetics, Virulence
19.
Semin Cancer Biol ; 84: 129-143, 2022 09.
Article in English | MEDLINE | ID: mdl-33631297

ABSTRACT

The complexity of diagnostic (surgical) pathology has increased substantially over the last decades with respect to histomorphological and molecular profiling. Pathology has steadily expanded its role in tumor diagnostics and beyond, from disease entity identification via prognosis estimation to precision therapy prediction. It is therefore not surprising that pathology is among the disciplines in medicine with high expectations for the application of artificial intelligence (AI) or machine learning approaches, given their capability to analyze complex data in a quantitative and standardized manner and thereby further enhance the scope and precision of diagnostics. While an obvious application is the analysis of histological images, recent applications for the analysis of molecular profiling data from different sources and of clinical data support the notion that AI will enhance both histopathology and molecular pathology in the future. At the same time, the current literature should not be misunderstood to mean that pathologists are likely to be replaced by AI applications in the foreseeable future. Although AI will transform pathology in the coming years, recent studies reporting AI algorithms that diagnose cancer or predict certain molecular properties deal with relatively simple diagnostic problems that fall short of the diagnostic complexity pathologists face in clinical routine. Here, we review the pertinent literature on AI methods and their applications to pathology, and place the current achievements, and what can be expected in the future, in the context of the requirements for research and routine diagnostics.


Subject(s)
Artificial Intelligence, Neoplasms, Humans, Machine Learning, Neoplasms/diagnosis, Neoplasms/genetics, Prognosis