Results 1 - 20 of 59
1.
Metabolomics ; 19(2): 11, 2023 Feb 06.
Article in English | MEDLINE | ID: mdl-36745241

ABSTRACT

BACKGROUND: Liquid chromatography-high resolution mass spectrometry (LC-HRMS) is a popular approach for metabolomics data acquisition and requires many data processing software tools. The FAIR Principles - Findability, Accessibility, Interoperability, and Reusability - were proposed to promote open science and reusable data management, and to maximize the benefit obtained from contemporary and formal scholarly digital publishing. More recently, the FAIR principles were extended to cover Research Software (FAIR4RS). AIM OF REVIEW: This study facilitates open science in metabolomics by providing an implementation solution for adopting FAIR4RS in LC-HRMS metabolomics data processing software. We believe our evaluation guidelines and results can help improve the FAIRness of research software. KEY SCIENTIFIC CONCEPTS OF REVIEW: We evaluated 124 LC-HRMS metabolomics data processing software tools identified in a systematic review and selected 61 tools for detailed evaluation using FAIR4RS-related criteria, which were extracted from the literature and refined through internal discussion. We assigned each criterion one or more FAIR4RS categories through discussion. The minimum, median, and maximum percentages of criteria fulfillment across tools were 21.6%, 47.7%, and 71.8%, respectively. Statistical analysis revealed no significant improvement in FAIRness over time. We identified four criteria that cover multiple FAIR4RS categories but had low fulfillment: (1) no software had semantic annotation of key information; (2) only 6.3% of the evaluated tools were registered on Zenodo and received DOIs; (3) only 14.5% of the selected tools provided an official software container or virtual machine image; (4) only 16.7% of the evaluated tools had fully documented functions in code. Based on these results, we discuss improvement strategies and future directions.
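The fulfillment statistics reported above are straightforward to reproduce for any evaluation matrix. A minimal sketch in Python, using an invented matrix (the tool names, criteria counts, and values below are illustrative, not the study's data):

```python
from statistics import median

# Hypothetical evaluation matrix: tool name -> one boolean per
# FAIR4RS-derived criterion (True = criterion fulfilled).
evaluations = {
    "tool_a": [True, True, False, True, False],
    "tool_b": [True, False, False, False, False],
    "tool_c": [True, True, True, True, False],
}

def fulfillment_pct(criteria):
    """Percentage of criteria a tool fulfills."""
    return 100.0 * sum(criteria) / len(criteria)

pcts = sorted(fulfillment_pct(c) for c in evaluations.values())
summary = {"min": pcts[0], "median": median(pcts), "max": pcts[-1]}
```

With the invented matrix, the minimum, median, and maximum fulfillment come out to 20%, 60%, and 80%.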


Subject(s)
Metabolomics , Software , Metabolomics/methods , Chromatography, Liquid/methods , Mass Spectrometry/methods , Data Management
2.
BMC Bioinformatics ; 20(Suppl 21): 708, 2019 Dec 23.
Article in English | MEDLINE | ID: mdl-31865907

ABSTRACT

BACKGROUND: The Drug Ontology (DrOn) is a modular, extensible ontology of drug products, their ingredients, and their biological activity, created to enable comparative effectiveness and health services researchers to query National Drug Codes (NDCs) that represent products by ingredient, by molecular disposition, by therapeutic disposition, and by physiological effect (e.g., diuretic). It is based on the RxNorm drug terminology maintained by the U.S. National Library of Medicine and on the Chemical Entities of Biological Interest ontology. Both NDCs and RxNorm unique concept identifiers (RXCUIs) can change over time in ways that obscure their meaning when these identifiers occur in historic data. We present a new approach to modeling these entities within DrOn that allows users working with historic prescription data to interpret that data more easily and correctly. RESULTS: We have implemented a full accounting of NDCs and RXCUIs as information content entities, and of the processes involved in managing their creation and changes. This includes an OWL file that implements and defines the classes necessary to model these entities. A separate file contains an instance-level prototype in OWL that demonstrates the feasibility of this approach by retrieving and representing several individual NDCs, both active and inactive, and the RXCUIs to which they are connected. We also demonstrate how historic information about these identifiers in DrOn can be retrieved using a simple SPARQL query. CONCLUSIONS: An accurate model of how these identifiers operate in reality is a valuable addition to DrOn that enhances its usefulness as a knowledge management resource for working with historic data.
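The kind of historic lookup the SPARQL query supports can be illustrated with a toy model. The sketch below is ours, not DrOn's actual representation: the NDC identifiers, event names, and dates are invented, and real DrOn data would be queried over RDF rather than Python dicts.

```python
# Illustrative only: a toy record of NDC lifecycle events, mimicking the
# kind of historic information DrOn models. Identifiers and dates invented.
ndc_history = {
    "0001-0002-03": [("created", "2005-01-01"), ("inactivated", "2012-06-30")],
    "0004-0005-06": [("created", "2010-03-15")],
}

def status_on(ndc, date):
    """Return the latest lifecycle event for an NDC on or before a date.
    ISO date strings compare correctly as plain strings."""
    events = [e for e in ndc_history.get(ndc, []) if e[1] <= date]
    return events[-1][0] if events else None
```

A query for the first NDC in 2010 returns "created", while the same query in 2020 returns "inactivated", which is the distinction a user working with historic prescription data needs.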


Subject(s)
Vocabulary, Controlled , Biological Ontologies , National Library of Medicine (U.S.) , RxNorm , Semantics , United States
4.
J Biomed Inform ; 55: 206-17, 2015 Jun.
Article in English | MEDLINE | ID: mdl-25917055

ABSTRACT

Although potential drug-drug interactions (PDDIs) are a significant source of preventable drug-related harm, there is currently no single complete source of PDDI information. In the current study, all publicly available sources of PDDI information that could be identified using a comprehensive and broad search were combined into a single dataset. The combined dataset merged fourteen different sources, including 5 clinically-oriented information sources, 4 Natural Language Processing (NLP) corpora, and 5 bioinformatics/pharmacovigilance information sources. As a comprehensive PDDI source, the merged dataset might benefit the pharmacovigilance text mining community by making it possible to compare the representativeness of NLP corpora for PDDI text extraction tasks, and by specifying elements that can be useful for future PDDI extraction purposes. An analysis of the overlap between and across the data sources showed that there was little overlap: even comprehensive PDDI lists such as DrugBank, KEGG, and the NDF-RT had less than 50% overlap with each other. Moreover, all of the comprehensive lists had incomplete coverage of two data sources that focus on PDDIs of interest in most clinical settings. Based on this information, we think that systems that provide access to the comprehensive lists, such as APIs into RxNorm, should be careful to inform users that the lists may be incomplete with respect to PDDIs that drug experts suggest clinicians be aware of. In spite of the low degree of overlap, several dozen cases were identified where PDDI information provided in drug product labeling might be augmented by the merged dataset. Moreover, the combined dataset was shown to improve the performance of an existing PDDI NLP pipeline and a recently published PDDI pharmacovigilance protocol. Future work will focus on improving the methods for mapping between PDDI information sources, identifying methods to improve the use of the merged dataset in PDDI NLP algorithms, integrating high-quality PDDI information from the merged dataset into Wikidata, and making the combined dataset accessible as Semantic Web Linked Data.
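Overlap between PDDI lists is typically computed over unordered drug pairs, so that (warfarin, aspirin) and (aspirin, warfarin) count as the same interaction. A minimal sketch with invented interactions (the sources and drug pairs are illustrative, and the measure shown, intersection over the smaller list, is one of several reasonable overlap definitions):

```python
# Hypothetical PDDI lists: each source is a set of unordered drug pairs.
# Drug pairs are illustrative, not drawn from DrugBank, KEGG, or NDF-RT.
def pddi(a, b):
    """An unordered drug pair; order of arguments does not matter."""
    return frozenset((a, b))

source_x = {pddi("warfarin", "aspirin"),
            pddi("simvastatin", "amiodarone"),
            pddi("digoxin", "verapamil")}
source_y = {pddi("aspirin", "warfarin"),  # same pair as in source_x
            pddi("lithium", "ibuprofen")}

def overlap(s1, s2):
    """Share of the smaller list's interactions also found in the other."""
    return len(s1 & s2) / min(len(s1), len(s2))
```

Here the two toy sources share one of the smaller list's two interactions, an overlap of 50%, which is the kind of figure the abstract reports for the comprehensive lists.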


Subject(s)
Adverse Drug Reaction Reporting Systems/organization & administration , Data Mining/methods , Database Management Systems/organization & administration , Databases, Factual , Drug Interactions , Natural Language Processing , Internet/organization & administration , Machine Learning , Medical Record Linkage/methods , Pharmacovigilance
5.
Stud Health Technol Inform ; 314: 3-13, 2024 May 23.
Article in English | MEDLINE | ID: mdl-38784996

ABSTRACT

Health and social care systems around the globe are currently undergoing a transformation towards personalized, preventive, predictive, participative precision medicine (5PM), considering the individual health status, conditions, genetic and genomic dispositions, etc., in personal, social, occupational, environmental and behavioral context. This transformation is strongly supported by technologies such as micro- and nanotechnologies, advanced computing, artificial intelligence, edge computing, etc. To enable communication and cooperation between actors from different domains who use different methodologies, languages and ontologies based on different education and experiences, we have to understand the transformed health ecosystems and all their components, in structure, function and relationships, in the necessary detail, ranging from elementary particles up to the universe. In that way, we advance the design and management of the complex and highly dynamic ecosystem from the data level to the knowledge level. The challenge is the consistent, correct and formalized representation of the transformed health ecosystem from the perspectives of all domains involved, and the representation and management of those perspectives based on related ontologies. The resulting business view of the real-world ecosystem must be interrelated using the ISO/IEC 21838 Top Level Ontologies standard. Thereafter, the outcome can be transformed into implementable solutions using the ISO/IEC 10746 Open Distributed Processing Reference Model. The model and framework for this system-oriented, architecture-centric, ontology-based, policy-driven approach were developed by the first author and have meanwhile been standardized as ISO 23903, the Interoperability and Integration Reference Architecture.


Subject(s)
Precision Medicine , Humans , Artificial Intelligence
6.
J Clin Transl Sci ; 7(1): e32, 2023.
Article in English | MEDLINE | ID: mdl-36845317

ABSTRACT

Background: The murder of George Floyd created a national outcry that echoed down to national institutions, including universities and academic systems, to take a hard look at systematic and systemic racism in higher education. This motivated the creation of a fear- and tension-minimizing curricular offering, "Courageous Conversations," collaboratively engaging students, staff, and faculty in matters of diversity, equity, and inclusion (DEI) in the Department of Health Outcomes and Biomedical Informatics at the University of Florida. Methods: A qualitative design was employed, assessing narrative feedback from participants during the Fall semester of 2020. Additionally, the ten-factor model implementation framework was applied and assessed. Data collection included two focus groups and document analysis with member-checking. Thematic analysis (i.e., organizing, coding, synthesizing) was used to analyze a priori themes based on the four agreements of the Courageous Conversations framework: stay engaged, expect to experience discomfort, speak your truth, and expect and accept non-closure. Results: There were 41 participants in total, of whom 20 (48.78%) were department staff members, 11 (26.83%) were department faculty members, and 10 (24.39%) were graduate students. The thematic analysis revealed (1) that many participants credited their learning experiences to what their peers had said about their own personal lived experiences during group sessions, and (2) that several participants said they would either retake the course or recommend it to a colleague. Conclusion: With structured implementation, courageous conversations can be an effective approach to creating more diverse, equitable, and inclusive spaces in training programs with similar DEI ecosystems.

7.
Front Med (Lausanne) ; 10: 1073313, 2023.
Article in English | MEDLINE | ID: mdl-37007792

ABSTRACT

This paper provides an overview of the current linguistic and ontological challenges that must be met to fully support the transformation of health ecosystems toward precision medicine (5PM) standards. It highlights standardization and interoperability aspects of formal, controlled representations of clinical and research data, as well as requirements for smart support to produce and encode content in a way that humans and machines can understand and process. Starting from the current text-centered communication practices in healthcare and biomedical research, it addresses the state of the art in information extraction using natural language processing (NLP). An important aspect of the language-centered perspective on managing health data is the integration of heterogeneous data sources employing different natural languages and different terminologies. This is where biomedical ontologies, in the sense of formal, interchangeable representations of types of domain entities, come into play. The paper discusses the state of the art of biomedical ontologies, addresses their importance for standardization and interoperability, and sheds light on current misconceptions and shortcomings. Finally, the paper points out next steps and possible synergies between the fields of NLP and Applied Ontology and the Semantic Web to foster data interoperability for 5PM.

8.
J Pers Med ; 13(8)2023 Jul 30.
Article in English | MEDLINE | ID: mdl-37623460

ABSTRACT

The ongoing transformation of health systems around the world aims at personalized, preventive, predictive, participative precision medicine, supported by technology. It considers individual health status, conditions, and genetic and genomic dispositions in personal, social, occupational, environmental and behavioral contexts. In this way, it transforms health and social care from art to science by fully understanding the pathology of diseases and turning health and social care from reactive to proactive. The challenge is understanding, and formally and consistently representing, the world of sciences and practices, i.e., multidisciplinary and dynamic systems in variable contexts. This enables mapping between the different disciplines, methodologies, perspectives, intentions, languages, etc., as philosophy or the cognitive sciences do. The approach requires the deployment of advanced technologies, including autonomous systems and artificial intelligence, which poses important ethical and governance challenges. This paper describes the aforementioned transformation of health and social care ecosystems as well as the related challenges and solutions, resulting in a sophisticated, formal reference architecture. This reference architecture provides a system-theoretical, architecture-centric, ontology-based, policy-driven model and framework for designing and managing intelligent and ethical ecosystems in general and health ecosystems in particular.

9.
J Clin Transl Sci ; 7(1): e3, 2023.
Article in English | MEDLINE | ID: mdl-36755541

ABSTRACT

Background/Objective: Informed consent forms (ICFs) and practices vary widely across institutions. This project expands on previous work at the University of Arkansas for Medical Sciences (UAMS) Center for Health Literacy to develop a plain language ICF template. Our interdisciplinary team of researchers, composed of biomedical informaticists, health literacy experts, and stakeholders in the Institutional Review Board (IRB) process, has developed the ICF Navigator, a novel tool to facilitate the creation of plain language ICFs that comply with all relevant regulatory requirements. Methods: Our team first developed requirements for the ICF Navigator tool. The tool was then implemented by a technical team of informaticists and software developers, in consultation with an informed consent legal expert. We developed and formalized a detailed knowledge map modeling regulatory requirements for ICFs, which drives workflows within the tool. Results: The ICF Navigator is a web-based tool that guides researchers through creating an ICF as they answer questions about their project. The navigator uses those responses to produce a clear and compliant ICF, displaying a real-time preview of the final form as content is added. Versioning and edits can be tracked to facilitate collaborative revisions by the research team and communication with the IRB. The navigator helps guide the creation of study-specific language, ensures compliance with regulatory requirements, and ensures that the resulting ICF is easy to read and understand. Conclusion: The ICF Navigator is an innovative, customizable, open-source software tool that helps researchers produce custom readable and compliant ICFs for research studies involving human subjects.

10.
J Biomed Semantics ; 14(1): 14, 2023 Sep 20.
Article in English | MEDLINE | ID: mdl-37730667

ABSTRACT

BACKGROUND: Clinical early warning scoring systems have improved patient outcomes in a range of specializations and global contexts. These systems are used to predict patient deterioration. A multitude of patient-level physiological decompensation data has been made available through the widespread integration of early warning scoring systems within EHRs across national and international health care organizations. These data can be used to promote secondary research. The diversity of early warning scoring systems and EHR systems is one barrier to secondary analysis of early warning score data. Because early warning score parameters vary, it is difficult to query across providers and EHR systems, and mapping and merging the parameters is challenging. To overcome these problems, we develop and validate the Early Warning System Scores Ontology (EWSSO), representing three commonly used early warning scores: the National Early Warning Score (NEWS), the six-item modified Early Warning Score (MEWS), and the quick Sequential Organ Failure Assessment (qSOFA). METHODS: We apply the Software Development Lifecycle Framework (conceived by Winston Royce in 1970) to model the activities involved in organizing, producing, and evaluating the EWSSO. We also follow OBO Foundry principles and the principles of best practice for domain ontology design, terms, definitions, and classifications to meet BFO requirements for ontology building. RESULTS: We developed twenty-nine new classes and reused four classes and four object properties to create the EWSSO. When we queried the data, our ontology-based process differentiated between necessary and unnecessary features for score calculation 100% of the time. Further, our process applied the proper temperature conversions for the early warning score calculator 100% of the time.
CONCLUSIONS: Using synthetic datasets, we demonstrate that the EWSSO can be used to generate and query health system data on vital signs and provide input to calculate the NEWS, six-item MEWS, and qSOFA. Future work includes extending the EWSSO by introducing additional early warning scores for adult and pediatric patient populations and creating patient profiles that contain clinical, demographic, and outcomes data regarding the patient.
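As an illustration of the kind of score calculation and unit harmonization the ontology supports, here is a minimal sketch of qSOFA plus a temperature conversion helper. The thresholds follow the published qSOFA definition (one point each for respiratory rate >= 22/min, systolic blood pressure <= 100 mmHg, and Glasgow Coma Scale < 15); the function names are ours, and this is not code from the EWSSO project.

```python
def fahrenheit_to_celsius(temp_f):
    """Harmonize temperature units (e.g., for MEWS) before scoring."""
    return (temp_f - 32.0) * 5.0 / 9.0

def qsofa(respiratory_rate, systolic_bp, gcs):
    """quick Sequential Organ Failure Assessment (qSOFA), range 0-3.
    One point each for RR >= 22/min, SBP <= 100 mmHg, GCS < 15."""
    score = 0
    if respiratory_rate >= 22:
        score += 1
    if systolic_bp <= 100:
        score += 1
    if gcs < 15:
        score += 1
    return score
```

For example, a patient with a respiratory rate of 24/min, systolic pressure of 95 mmHg, and GCS of 14 scores the maximum of 3, while normal vitals score 0.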


Subject(s)
Early Warning Score , Adult , Child , Humans , Software
11.
J Am Soc Mass Spectrom ; 34(12): 2857-2863, 2023 Dec 06.
Article in English | MEDLINE | ID: mdl-37874901

ABSTRACT

Liquid chromatography-mass spectrometry (LC-MS) metabolomics studies produce high-dimensional data that must be processed by a complex network of informatics tools to generate analysis-ready data sets. As the first computational step in metabolomics, data processing poses a growing challenge: researchers must develop customized computational workflows applicable to LC-MS metabolomics analysis. Ontology-based automated workflow composition (AWC) systems provide a feasible approach for developing computational workflows that consume high-dimensional molecular data. We used the Automated Pipeline Explorer (APE) to create an AWC system for LC-MS metabolomics data processing across three use cases. Our results show that APE predicted 145 data processing workflows across the three use cases. We identified six traditional workflows and six novel workflows. Through manual review, we found that one-third of the novel workflows were executable, meaning the data processing function completed without error. When selecting the top six workflows from each use case, the computationally viable rate of our predicted workflows reached 45%. Collectively, our study demonstrates the feasibility of developing an AWC system for LC-MS metabolomics data processing.
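The "computationally viable rate" reported above is simply the share of reviewed workflows that ran without error. A minimal sketch with invented review outcomes (the counts below are illustrative and do not reproduce the study's 45% figure exactly):

```python
def viable_rate(outcomes):
    """Fraction of reviewed workflows that executed without error."""
    return sum(outcomes) / len(outcomes)

# Illustrative: top 6 predicted workflows per use case, 3 use cases;
# True = the workflow executed without error on manual review.
top_workflows = [True, False, True, False, False, True,   # use case 1
                 False, True, True, False, False, False,  # use case 2
                 True, False, False, True, False, True]   # use case 3
```

With these invented outcomes, 8 of 18 workflows run cleanly, a viable rate of about 44%.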


Subject(s)
Hominidae , Software , Animals , Workflow , Metabolomics/methods , Mass Spectrometry , Chromatography, Liquid/methods
12.
Stud Health Technol Inform ; 180: 1087-9, 2012.
Article in English | MEDLINE | ID: mdl-22874362

ABSTRACT

Comprehensive interoperability between distributed eHealth/pHealth environments requires that the systems involved be based on a common architectural framework and share common knowledge. This paper deals with the representation of systems by related ontologies. The architectural principles governing a system's design and the interrelations of its components therefore also govern the design of those ontologies and their management, as exemplified here.


Subject(s)
Database Management Systems/organization & administration , Electronic Health Records/organization & administration , Health Information Management/methods , Health Records, Personal , Information Storage and Retrieval/methods , Medical Record Linkage/methods , Models, Theoretical , Germany
13.
Stud Health Technol Inform ; 295: 302-303, 2022 Jun 29.
Article in English | MEDLINE | ID: mdl-35773868

ABSTRACT

Integrating the clinical-pathological information of biobanks with genomics and epidemiological data and inferences in a structured and consistent manner, while mitigating the inherent heterogeneity of data and sample collection sites, processing methods, and information storage, is essential to achieving an automated surveillance system. The Genomics Integrated Biobanking Ontology (GIBO) preserves the contextual meaning of heterogeneous data while interlinking genomics and epidemiological concepts, in a machine-comprehensible format, with the biobank framework. GIBO, an OWL ontology, introduces 84 new classes to integrate genomics data relevant to public health.


Subject(s)
Biological Specimen Banks , Genomics , Information Storage and Retrieval , Public Health , Specimen Handling
14.
Phys Med Biol ; 68(1), 2022 Dec 23.
Article in English | MEDLINE | ID: mdl-36279873

ABSTRACT

The Cancer Imaging Archive (TCIA) receives and manages an ever-increasing quantity of clinical (non-image) data containing valuable information about subjects in imaging collections. To harmonize and integrate these data, we first cataloged the types of information occurring across public TCIA collections. We then produced mappings for these diverse instance data using ontology-based representation patterns and transformed the data into a knowledge graph in a semantic database. This repository combined the transformed instance data with relevant background knowledge from domain ontologies. The resulting repository of semantically integrated data is a rich source of information about subjects that can be queried across imaging collections. Building on this work, we have implemented and deployed a REST API and a user-facing semantic cohort builder tool. This tool allows researchers and other users to search for and identify groups of subject-level records based on non-image data that were not queryable prior to this work. The search results produced by this interface link to images, allowing users to quickly identify and view images matching the selection criteria and to export the harmonized clinical data.


Subject(s)
Neoplasms , Software , Humans , Semantics , Neoplasms/diagnostic imaging , Diagnostic Imaging , Databases, Factual
15.
Metabolites ; 12(1)2022 Jan 17.
Article in English | MEDLINE | ID: mdl-35050209

ABSTRACT

Clinical metabolomics has emerged as a novel approach for biomarker discovery, with the translational potential to guide next-generation therapeutics and precision health interventions. However, reproducibility in clinical research employing metabolomics data is challenging. Checklists are a helpful tool for promoting reproducible research. Existing checklists that promote reproducible metabolomics research have primarily focused on metadata and may not be sufficient to ensure reproducible metabolomics data processing. This paper provides a checklist of actions researchers need to take to make the computational steps of clinical metabolomics studies reproducible. We developed an eight-item checklist that includes criteria related to reusable data sharing and reproducible computational workflow development. We also provide recommended tools and resources for completing each item, as well as a GitHub project template to guide the process. The checklist is concise and easy to follow. Studies that follow this checklist and use the recommended resources may make it easier and more efficient for other researchers to reproduce their metabolomics results.

16.
J Pers Med ; 12(5)2022 May 07.
Article in English | MEDLINE | ID: mdl-35629179

ABSTRACT

To improve patient outcomes after trauma, the need to decrypt the post-traumatic immune response has been identified. One prerequisite to driving advancement in that domain is the implementation of surgical biobanks. This paper focuses on the outcomes of patients with one of two diagnoses: post-traumatic arthritis and osteomyelitis. Currently, many obstacles must be overcome in creating surgical biobanks. Roadblocks exist around the scoping of the data to be collected and the semantic integration of these data. In this paper, the generic component model and the Semantic Web technology stack are used to solve issues related to data integration. The results are twofold: (a) a scoping analysis of the data and the ontologies required to harmonize and integrate them, and (b) resolution of common data integration issues in integrating data relevant to trauma surgery.

17.
J Biomed Inform ; 44(1): 8-25, 2011 Feb.
Article in English | MEDLINE | ID: mdl-20438862

ABSTRACT

OBJECTIVE: This paper introduces the objectives, methods and results of ontology development in the EU co-funded project Advancing Clinico-genomic Trials on Cancer-Open Grid Services for Improving Medical Knowledge Discovery (ACGT). While the available data in the life sciences has recently grown both in amount and quality, its full exploitation is hindered by the use of different underlying technologies, coding systems, category schemes and reporting methods on the part of different research groups. The goal of the ACGT project is to contribute to the resolution of these problems by developing an ontology-driven, semantic grid services infrastructure that will enable efficient execution of discovery-driven scientific workflows in the context of multi-centric, post-genomic clinical trials. The focus of the present paper is the ACGT Master Ontology (MO). METHODS: ACGT project researchers undertook a systematic review of existing domain and upper-level ontologies, as well as of existing ontology design software, implementation methods, and end-user interfaces. This included the careful study of best practices, design principles and evaluation methods for ontology design, maintenance, implementation, and versioning, as well as for use on the part of domain experts and clinicians.
RESULTS: To date, the results of the ACGT project include (i) the development of a master ontology (the ACGT-MO) based on clearly defined principles of ontology development and evaluation; (ii) the development of a technical infrastructure (the ACGT Platform) that implements the ACGT-MO utilizing independent tools, components and resources that have been developed based on open architectural standards, and which includes an application updating and evolving the ontology efficiently in response to end-user needs; and (iii) the development of an Ontology-based Trial Management Application (ObTiMA) that integrates the ACGT-MO into the design process of clinical trials in order to guarantee automatic semantic integration without the need to perform a separate mapping process.


Subject(s)
Computational Biology , Database Management Systems , Medical Informatics , Medical Oncology , Neoplasms , Animals , Databases, Factual , Humans , Vocabulary, Controlled
18.
Stud Health Technol Inform ; 169: 739-43, 2011.
Article in English | MEDLINE | ID: mdl-21893845

ABSTRACT

The representation of multiple relations is one of the defining features of ontologies. In formalizing both ontologies and terminologies in biomedicine, relations are used to encode axioms for the classes of the ontology. However, a large number of the relations represented in medical ontologies and terminologies are derived from natural language, and their formal definitions are omitted. We present a strategy based on an architectural approach to facilitate the formal analysis of relations for use in ontology systems, in biomedicine and in general.


Subject(s)
Computational Biology/methods , Algorithms , Artificial Intelligence , Humans , Informatics , Information Services , Integrated Advanced Information Management Systems , Medical Informatics Applications , Neoplasms/diagnosis , Software Design , Systems Integration , Systems Theory , Terminology as Topic , Vocabulary, Controlled
19.
Stud Health Technol Inform ; 169: 734-8, 2011.
Article in English | MEDLINE | ID: mdl-21893844

ABSTRACT

The challenges of seamlessly integrating the distributed, heterogeneous and multilevel data arising in contemporary, post-genomic clinical trials cannot be effectively addressed with current methodologies. An urgent need exists to access data in a uniform manner, to share information among different clinical and research centers, and to store data in secure repositories that assure patient privacy. Advancing Clinico-Genomic Trials (ACGT) was a European Commission funded Integrated Project that aimed at providing tools and methods to enhance the efficiency of clinical trials in the -omics era. The project, now completed after four years of work, involved the development of both methodological approaches and tools and services, as well as their testing in the context of real-world clinico-genomic scenarios. This paper describes the main experiences of using the ACGT platform and its tools within one such scenario and highlights the very promising results obtained.


Subject(s)
Computational Biology/organization & administration , Medical Informatics/organization & administration , Biomedical Research , Clinical Trials as Topic , Computer Systems , Computers , Europe , Genomics , Humans , Neoplasms/genetics , Program Development , User-Computer Interface , Workflow
20.
Stud Health Technol Inform ; 285: 3-14, 2021 Oct 27.
Article in English | MEDLINE | ID: mdl-34734847

ABSTRACT

To meet the challenges of aging, multi-diseased societies, cost containment, workforce development and consumerism through improved care quality and patient safety as well as more effective and efficient care processes, health and social care systems around the globe are undergoing an organizational, methodological and technological transformation towards personalized, preventive, predictive, participative precision medicine (P5 medicine). This paper addresses the opportunities, challenges and risks of specific disruptive methodologies and technologies for the transformation of health and social care systems, focusing especially on the deployment of intelligent and autonomous systems.


Subject(s)
Artificial Intelligence , Precision Medicine , Humans