Results 1 - 20 of 105
1.
Health Informatics J ; 30(4): 14604582241287010, 2024.
Article in English | MEDLINE | ID: mdl-39367798

ABSTRACT

Objective: A comprehensive understanding of professional and technical terms is essential to achieving practical results in multidisciplinary projects dealing with health informatics and digital health. The medical informatics multilingual ontology (MIMO) initiative has been created through international cooperation. MIMO is continuously updated and comprises over 3700 concepts in 37 languages on the Health Terminology/Ontology Portal (HeTOP). Methods: We conducted case studies to assess the feasibility and impact of integrating MIMO into real-world healthcare projects. In HosmartAI, MIMO is used to index technological tools in a dedicated marketplace and improve partners' communication. Then, in SaNuRN, MIMO underpins the development of a "Catalog and Index of Digital Health Teaching Resources" (CIDHR) supporting digital health resource retrieval for health and allied health students. Results: In HosmartAI, MIMO facilitates the indexation of technological tools and smooths partners' interactions. In SaNuRN, within CIDHR, MIMO ensures that students and practitioners access up-to-date, multilingual, and high-quality resources to enhance their learning endeavors. Conclusion: Integrating MIMO into training in smart hospital projects allows healthcare students and experts worldwide, with different mother tongues and backgrounds, to tackle the challenges facing the health informatics and digital health landscape and to find innovative solutions that improve initial and continuing education.


Subject(s)
Artificial Intelligence, Medical Informatics, Humans, Artificial Intelligence/trends, Medical Informatics/education, Medical Informatics/methods, Hospitals, Digital Health
2.
J Microsc ; 2024 Sep 14.
Article in English | MEDLINE | ID: mdl-39275979

ABSTRACT

Modern bioimaging core facilities at research institutions are essential for managing and maintaining high-end instruments, providing training and support for researchers in experimental design, image acquisition and data analysis. An important task for these facilities is the professional management of complex multidimensional bioimaging data, which are often produced in large quantity and very different file formats. This article details the process that led to successfully implementing the OME Remote Objects system (OMERO) for bioimage-specific research data management (RDM) at the Core Facility Cellular Imaging (CFCI) at the Technische Universität Dresden (TU Dresden). Ensuring compliance with the FAIR (findable, accessible, interoperable, reusable) principles, we outline here the challenges that we faced in adapting data handling and storage to a new RDM system. These challenges included the introduction of a standardised group-specific naming convention, metadata curation with tagging and Key-Value pairs, and integration of existing image processing workflows. By sharing our experiences, this article aims to provide insights and recommendations for both individual researchers and educational institutions intending to implement OMERO as a management system for bioimaging data. We showcase how tailored decisions and structured approaches lead to successful outcomes in RDM practices.
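A standardised group-specific naming convention like the one described above can be enforced programmatically before images are uploaded to OMERO. The sketch below is purely illustrative: the convention, its fields, and the example filename are invented for this example, not the CFCI's actual scheme. It validates a filename and derives OMERO-style Key-Value pairs from its parts.

```python
import re

# Hypothetical naming convention (illustrative only):
# <project>_<user-initials>_<YYYYMMDD>_<sample>_<modality>.<ext>
NAME_PATTERN = re.compile(
    r"^(?P<project>[A-Za-z0-9]+)_"
    r"(?P<user>[A-Z]{2,3})_"
    r"(?P<date>\d{8})_"
    r"(?P<sample>[A-Za-z0-9-]+)_"
    r"(?P<modality>[A-Za-z0-9]+)\.(?P<ext>\w+)$"
)

def parse_image_name(filename: str) -> dict:
    """Validate a filename against the convention and return the
    metadata fields it encodes as Key-Value pairs."""
    match = NAME_PATTERN.match(filename)
    if match is None:
        raise ValueError(f"'{filename}' does not follow the naming convention")
    return match.groupdict()

# Example filename following the hypothetical convention:
kv_pairs = parse_image_name("neuroD_AB_20240115_ctrl-01_confocal.czi")
```

In a real deployment the resulting dictionary would be attached to the image as an OMERO map annotation, so that the same fields become searchable metadata rather than being locked inside the filename.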

3.
J Exp Biol ; 227(18)2024 Sep 15.
Article in English | MEDLINE | ID: mdl-39287119

ABSTRACT

JEB has broadened its scope to include non-hypothesis-led research. In this Perspective, based on our lab's lived experience, I argue that this is excellent news, because truly novel insights can occur from 'blue skies' idea-led experiments. Hypothesis-led and hypothesis-free experimentation are not philosophically antagonistic; rather, the latter can provide a short-cut to an unbiased view of organism function, and is intrinsically hypothesis generating. Insights derived from hypothesis-free research are commonly obtained by the generation and analysis of big datasets - for example, by genetic screens - or from omics-led approaches (notably transcriptomics). Furthermore, meta-analyses of existing datasets can also provide a lower-cost means to formulating new hypotheses, specifically if researchers take advantage of the FAIR principles (findability, accessibility, interoperability and reusability) to access relevant, publicly available datasets. The broadened scope will thus bring new, original work and novel insights to our journal, by expanding the range of fundamental questions that can be asked.


Subject(s)
Big Data
4.
J Appl Microbiol ; 135(9)2024 Sep 02.
Article in English | MEDLINE | ID: mdl-39113269

ABSTRACT

Public sector data associated with health are a highly valuable resource with multiple potential end-users, from health practitioners, researchers, public bodies, and policy makers to industry. Data for infectious disease agents are used for epidemiological investigations, disease tracking and assessing emerging biological threats. Yet, there are challenges in collating and re-using them. Data may be derived from multiple sources, generated and collected for different purposes. While public sector data should be open access, providers from public health settings or from agriculture, food, or environment sources have sensitivity criteria to meet, with ethical restrictions on how the data can be reused. Yet, sharable datasets need to describe the pathogens with sufficient contextual metadata for maximal utility, e.g. associated disease or disease potential and the pathogen source. As data comprise the physical resources of pathogen collections and potentially associated sequences, there is an added emerging technical issue of integrating omics 'big data'. Thus, there is a need to identify suitable means to integrate and safely access diverse data for pathogens. Established genomics alliances and platforms interpret and meet these challenges in different ways depending on their own context. Nonetheless, their templates and frameworks provide a solution for adaptation to pathogen datasets.


Subject(s)
Genomics, Information Dissemination, Public Health, Humans, Communicable Diseases
5.
Stud Health Technol Inform ; 316: 200-201, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176707

ABSTRACT

Transforming the population based biomedical cohort into the Common Data Model (OMOP-CDM) empowers researchers to access direct sources of information, enabling a deeper understanding of how genetic profiles relate to clinical outcomes and providing new knowledge that can significantly influence health care practices around the world.
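As a rough illustration of what such a transformation involves, a single cohort record can be mapped into OMOP-CDM `person` and `condition_occurrence` rows. This is a simplified sketch, not the study's actual ETL: the source field names and record layout are invented, although the concept IDs shown (8507/8532 for gender, 201826 for type 2 diabetes) are standard OMOP concepts.

```python
from datetime import date

def to_omop(record: dict) -> dict:
    """Map a flat cohort record (hypothetical layout) into rows for two
    OMOP-CDM tables: person and condition_occurrence."""
    person = {
        "person_id": record["subject_id"],
        "gender_concept_id": 8507 if record["sex"] == "M" else 8532,
        "year_of_birth": record["birth_year"],
    }
    condition = {
        "person_id": record["subject_id"],
        "condition_concept_id": record["condition_concept_id"],
        "condition_start_date": record["diagnosis_date"],
    }
    return {"person": person, "condition_occurrence": condition}

rows = to_omop({
    "subject_id": 1001,
    "sex": "F",
    "birth_year": 1980,
    "condition_concept_id": 201826,  # standard OMOP concept: type 2 diabetes
    "diagnosis_date": date(2022, 3, 4),
})
```

The point of the exercise is that once every source field is expressed against the common vocabulary, the same analysis code runs unchanged against any OMOP-CDM site.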


Subject(s)
Electronic Health Records, Humans, Spain
6.
Stud Health Technol Inform ; 316: 1449-1450, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176654

ABSTRACT

This paper presents ongoing work on the modeling of different datasets using the ART-DECOR modeling tool, with a focus on adherence to the FAIR principles (Findable, Accessible, Interoperable, and Reusable). The successful modeling of the French minimal dataset for rare diseases (Set de données minimal des maladies rares (SDM-MR.fr)) should provide inspiration for the development of the German minimal dataset for rare diseases (Minimalbasisdatensatz für Seltene Erkrankungen (MBDS-SE.de)).


Subject(s)
Rare Diseases, Humans, Datasets as Topic, Germany, Software, Electronic Health Records, France, Databases, Factual
7.
Data Brief ; 55: 110687, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39049974

ABSTRACT

This data article presents a set of primary, analyzed, and digitalized mechanical testing datasets for nine copper alloys. The mechanical testing methods including the Brinell and Vickers hardness, tensile, stress relaxation, and low-cycle fatigue (LCF) testing were performed according to the DIN/ISO standards. The obtained primary testing data (84 files) mainly contain the raw measured data along with the testing metadata of the processes, materials, and testing machines. Five secondary datasets were also provided for each testing method by collecting the main meta- and measurement data from the primary data and the outputs of data analyses. These datasets give materials scientists beneficial data for comparative material selection analyses by clarifying the wide range of mechanical properties of copper alloys, including Brinell and Vickers hardness, yield and tensile strengths, elongation, reduction of area, relaxed and residual stresses, and LCF fatigue life. Furthermore, both the primary and secondary datasets were digitalized by the approach introduced in the research article entitled "Toward a digital materials mechanical testing lab" [1]. The resulting open-linked data are the machine-processable semantic descriptions of data and their generation processes and can be easily queried by semantic searches to enable advanced data-driven materials research.

8.
J Biomed Inform ; 157: 104700, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39079607

ABSTRACT

BACKGROUND: The future European Health Research and Innovation Cloud (HRIC), as a fundamental part of the European Health Data Space (EHDS), will promote the secondary use of data and the capabilities to push the boundaries of health research within an ethical and legally compliant framework that reinforces the trust of patients and citizens. OBJECTIVE: This study aimed to analyse health data management mechanisms in Europe to determine their alignment with FAIR principles and data discovery, generating best practices for new data hubs joining the HRIC ecosystem. Accordingly, the compliance of health data hubs with FAIR principles and data discovery was assessed, and a set of best practices for health data hubs was concluded. METHODS: A survey was conducted in January 2022, involving 99 representative health data hubs from multiple countries, and 42 responses were obtained by June 2022. Stratification methods were employed to cover different levels of granularity. The survey data were analysed to assess compliance with FAIR and data discovery principles. The study started with a general analysis of survey responses, followed by the creation of specific profiles based on three categories: organization type, function, and level of data aggregation. RESULTS: The study produced specific best practices for data hubs regarding the adoption of FAIR principles and data discoverability. It also provided an overview of the survey study and specific profiles derived from category analysis, considering different types of data hubs. CONCLUSIONS: The study concluded that a significant number of health data hubs in Europe did not fully comply with FAIR and data discovery principles. However, the study identified specific best practices that can guide new data hubs in adhering to these principles. The study highlighted the importance of aligning health data management mechanisms with FAIR principles to enhance interoperability and reusability in the future HRIC.


Subject(s)
Cloud Computing, Humans, Europe, Surveys and Questionnaires, Data Management/methods, Electronic Health Records, Medical Informatics/methods
9.
ACS Synth Biol ; 13(8): 2621-2624, 2024 Aug 16.
Article in English | MEDLINE | ID: mdl-39051984

ABSTRACT

The BioRECIPE (Biological system Representation for Evaluation, Curation, Interoperability, Preserving, and Execution) knowledge representation format was introduced to standardize and facilitate human-machine interaction while creating, verifying, evaluating, curating, and expanding executable models of intra- and intercellular signaling. This format allows a human user to easily preview and modify any model component, while it is at the same time readable by machines and can be processed by a suite of model development and analysis tools. The BioRECIPE format is compatible with multiple representation formats, natural language processing tools, modeling tools, and databases that are used by the systems and synthetic biology communities.


Subject(s)
Synthetic Biology, Humans, Synthetic Biology/methods, Natural Language Processing, Software, Models, Biological, Databases, Factual, Systems Biology/methods
10.
JMIR Public Health Surveill ; 10: e54281, 2024 07 23.
Article in English | MEDLINE | ID: mdl-39042429

ABSTRACT

Infectious disease (ID) cohorts are key to advancing public health surveillance, public policies, and pandemic responses. Unfortunately, ID cohorts often lack funding to store and share clinical-epidemiological (CE) data and high-dimensional laboratory (HDL) data long term, which is evident when the link between these data elements is not kept up to date. This becomes particularly apparent when smaller cohorts fail to successfully address the initial scientific objectives due to limited case numbers, which also limits the potential to pool these studies to monitor long-term cross-disease interactions within and across populations. CE data from 9 arbovirus (arthropod-borne viruses) cohorts in Latin America were retrospectively harmonized using the Maelstrom Research methodology and standardized to Clinical Data Interchange Standards Consortium (CDISC). We created a harmonized and standardized meta-cohort that contains CE and HDL data from 9 arbovirus studies from Latin America. To facilitate advancements in cross-population inference and reuse of cohort data, the Reconciliation of Cohort Data for Infectious Diseases (ReCoDID) Consortium harmonized and standardized CE and HDL from 9 arbovirus cohorts into 1 meta-cohort. Interested parties will be able to access data dictionaries that include information on variables across the data sets via BioStudies. After consultation with each cohort, linked harmonized and curated human cohort data (CE and HDL) will be made accessible through the European Genome-phenome Archive platform to data users after their requests are evaluated by the ReCoDID Data Access Committee. This meta-cohort can facilitate various joint research projects (eg, on immunological interactions between sequential flavivirus infections and for the evaluation of potential biomarkers for severe arboviral disease).


Subject(s)
Arbovirus Infections, Humans, Arbovirus Infections/epidemiology, Cohort Studies, Latin America/epidemiology, Male, Female, Child, Arboviruses, Retrospective Studies, Adolescent, Child, Preschool, Adult
11.
Plant Methods ; 20(1): 103, 2024 Jul 13.
Article in English | MEDLINE | ID: mdl-39003455

ABSTRACT

BACKGROUND: Genotyping of individuals plays a pivotal role in various biological analyses, with technology choice influenced by multiple factors including genomic constraints, number of targeted loci and individuals, cost considerations, and the ease of sample preparation and data processing. Target enrichment capture of specific polymorphic regions has emerged as a flexible and cost-effective genomic reduction method for genotyping, especially adapted to the case of very large genomes. However, this approach necessitates complex bioinformatics treatment to extract genotyping data from raw reads. Existing workflows predominantly cater to phylogenetic inference, leaving a gap in user-friendly tools for genotyping analysis based on capture methods. In response to these challenges, we have developed GeCKO (Genotyping Complexity Knocked-Out). To assess the effectiveness of combining target enrichment capture with GeCKO, we conducted a case study on durum wheat domestication history, involving sequencing, processing, and analyzing variants in four relevant durum wheat groups. RESULTS: GeCKO encompasses four distinct workflows, each designed for specific steps of genomic data processing: (i) read demultiplexing and trimming for data cleaning, (ii) read mapping to align sequences to a reference genome, (iii) variant calling to identify genetic variants, and (iv) variant filtering. Each workflow in GeCKO can be easily configured and is executable across diverse computational environments. The workflows generate comprehensive HTML reports including key summary statistics and illustrative graphs, ensuring traceable, reproducible results and facilitating straightforward quality assessment. A specific innovation within GeCKO is its 'targeted remapping' feature, specifically designed for efficient treatment of targeted enrichment capture data. This process consists of extracting reads mapped to the targeted regions, constructing a smaller sub-reference genome, and remapping the reads to this sub-reference, thereby enhancing the efficiency of subsequent steps. CONCLUSIONS: The case study results showed the expected intra-group diversity and inter-group differentiation levels, confirming the method's effectiveness for genotyping and analyzing genetic diversity in species with complex genomes. GeCKO streamlined the data processing, significantly improving computational performance and efficiency. The targeted remapping enabled straightforward SNP calling in durum wheat, a task otherwise complicated by the species' large genome size. This illustrates its potential applications in various biological research contexts.
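The targeted remapping idea can be sketched in a few lines. This is a toy model with invented data structures (tuples and dictionaries standing in for BAM alignments and FASTA references), not GeCKO's actual implementation: it keeps only reads aligned inside targeted regions and builds a smaller sub-reference restricted to those regions.

```python
def targeted_remap_inputs(alignments, reference, targets):
    """alignments: list of (read_id, chrom, pos) tuples;
    reference: {chrom: sequence}; targets: {chrom: (start, end)}
    half-open intervals. All structures are simplified stand-ins."""
    # 1) Extract reads that mapped inside a targeted region.
    kept = [
        (rid, chrom, pos)
        for rid, chrom, pos in alignments
        if chrom in targets and targets[chrom][0] <= pos < targets[chrom][1]
    ]
    # 2) Build the sub-reference from the targeted slices only;
    #    remapping the kept reads to it is then much cheaper.
    sub_reference = {
        chrom: reference[chrom][start:end]
        for chrom, (start, end) in targets.items()
    }
    return kept, sub_reference

kept, subref = targeted_remap_inputs(
    alignments=[("r1", "chr1", 150), ("r2", "chr1", 900), ("r3", "chr2", 10)],
    reference={"chr1": "A" * 1000, "chr2": "C" * 500},
    targets={"chr1": (100, 300)},
)
```

The payoff is proportional to how small the targeted regions are relative to the full genome, which is why the feature matters most for very large genomes like durum wheat's.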

12.
Brief Bioinform ; 25(4)2024 May 23.
Article in English | MEDLINE | ID: mdl-38836701

ABSTRACT

Biomedical data are generated and collected from various sources, including medical imaging, laboratory tests and genome sequencing. Sharing these data for research can help address unmet health needs, contribute to scientific breakthroughs, accelerate the development of more effective treatments and inform public health policy. Due to the potential sensitivity of such data, however, privacy concerns have led to policies that restrict data sharing. In addition, sharing sensitive data requires a secure and robust infrastructure with appropriate storage solutions. Here, we examine and compare the centralized and federated data sharing models through the prism of five large-scale and real-world use cases of strategic significance within the European data sharing landscape: the French Health Data Hub, the BBMRI-ERIC Colorectal Cancer Cohort, the federated European Genome-phenome Archive, the Observational Medical Outcomes Partnership/OHDSI network and the EBRAINS Medical Informatics Platform. Our analysis indicates that centralized models facilitate data linkage, harmonization and interoperability, while federated models facilitate scaling up and legal compliance, as the data typically reside on the data generator's premises, allowing for better control of how data are shared. This comparative study thus offers guidance on the selection of the most appropriate sharing strategy for sensitive datasets and provides key insights for informed decision-making in data sharing efforts.


Subject(s)
Biological Science Disciplines, Information Dissemination, Humans, Medical Informatics/methods
13.
Acta Crystallogr D Struct Biol ; 80(Pt 6): 439-450, 2024 Jun 01.
Article in English | MEDLINE | ID: mdl-38832828

ABSTRACT

The expansive scientific software ecosystem, characterized by millions of titles across various platforms and formats, poses significant challenges in maintaining reproducibility and provenance in scientific research. The diversity of independently developed applications, evolving versions and heterogeneous components highlights the need for rigorous methodologies to navigate these complexities. In response to these challenges, the SBGrid team builds, installs and configures over 530 specialized software applications for use in the on-premises and cloud-based computing environments of SBGrid Consortium members. To address the intricacies of supporting this diverse application collection, the team has developed the Capsule Software Execution Environment, generally referred to as Capsules. Capsules rely on a collection of programmatically generated bash scripts that work together to isolate the runtime environment of one application from all other applications, thereby providing a transparent cross-platform solution without requiring specialized tools or elevated account privileges for researchers. Capsules facilitate modular, secure software distribution while maintaining a centralized, conflict-free environment. The SBGrid platform, which combines Capsules with the SBGrid collection of structural biology applications, aligns with FAIR goals by enhancing the findability, accessibility, interoperability and reusability of scientific software, ensuring seamless functionality across diverse computing environments. Its adaptability enables application beyond structural biology into other scientific fields.


Subject(s)
Software, Computational Biology/methods
14.
Int J Digit Humanit ; 6(1): 23-43, 2024.
Article in English | MEDLINE | ID: mdl-38799025

ABSTRACT

Reproducibility has become a requirement in the hard sciences, and its adoption is gradually extending to the digital humanities. The FAIR criteria and the publication of data papers are both indicative of this trend. However, the question that arises is whether the strict prerequisites of digital reproducibility serve only to exclude digital humanities from broader humanities scholarship. Instead of adopting a binary approach, an alternative method acknowledges the unique features of the objects, inquiries, and techniques of the humanities, including digital humanities, as well as the social and historical contexts in which the concept of reproducibility has developed in the human sciences. In the first part of this paper, I propose to examine the historical and disciplinary context in which the concept of reproducibility has developed within the human sciences, and the disciplinary struggles involved in this process, especially for art history and literature studies. In the second part, I will explore the question of reproducibility through two art history research projects that utilize various computational methods. I argue that issues of corpus, method, and interpretation cannot be separated, rendering a procedural definition of reproducibility impractical. Consequently, I propose the adoption of 'post-computational reproducibility', which is based on FAIREST criteria as far as digital corpora are concerned (FAIR + Ethics and Expertise, Source mention + Time-Stamp), but extended to include further sources that confirm computational results with other non-computational methodologies.

15.
J Biomed Inform ; 154: 104647, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38692465

ABSTRACT

OBJECTIVE: To use software, datasets, and data formats in the domain of Infectious Disease Epidemiology as a test collection to evaluate a novel M1 use case, which we introduce in this paper. M1 is a machine that upon receipt of a new digital object of research exhaustively finds all valid compositions of it with existing objects. METHOD: We implemented a data-format-matching-only M1 using exhaustive search, which we refer to as M1DFM. We then ran M1DFM on the test collection and used error analysis to identify needed semantic constraints. RESULTS: Precision of M1DFM search was 61.7%. Error analysis identified needed semantic constraints and needed changes in handling of data services. Most semantic constraints were simple, but one data format was sufficiently complex to be practically impossible to represent semantic constraints over, from which we conclude limitatively that software developers will have to meet the machines halfway by engineering software whose inputs are sufficiently simple that their semantic constraints can be represented, akin to the simple APIs of services. We summarize these insights as M1-FAIR guiding principles for composability and suggest a roadmap for progressively capable devices in the service of reuse and accelerated scientific discovery. CONCLUSION: Algorithmic search of digital repositories for valid workflow compositions has potential to accelerate scientific discovery but requires a scalable solution to the problem of knowledge acquisition about semantic constraints on software inputs. Additionally, practical limitations on the logical complexity of semantic constraints must be respected, which has implications for the design of software.
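A data-format-matching-only search in the spirit of M1DFM can be illustrated with a toy composition finder. The tool names and formats below are invented for the example (the real M1DFM ran over a curated test collection); the sketch enumerates every valid pairwise composition where one tool's output format matches another's input format.

```python
def find_compositions(tools):
    """tools: {name: {"in": set_of_formats, "out": set_of_formats}}.
    Exhaustively return (producer, consumer, format) triples for every
    hand-off where an output format matches an input format."""
    compositions = []
    for a, spec_a in tools.items():
        for b, spec_b in tools.items():
            if a == b:
                continue
            # Any shared format makes the composition a -> b valid.
            for fmt in spec_a["out"] & spec_b["in"]:
                compositions.append((a, b, fmt))
    return sorted(compositions)

pairs = find_compositions({
    "aligner": {"in": {"fastq"}, "out": {"sam"}},
    "sorter":  {"in": {"sam"},   "out": {"bam"}},
    "caller":  {"in": {"bam"},   "out": {"vcf"}},
})
```

Matching on formats alone is exactly why the paper reports 61.7% precision: many format-compatible pairs are semantically invalid, which is the gap the proposed semantic constraints are meant to close.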


Subject(s)
Software, Humans, Semantics, Machine Learning, Algorithms, Databases, Factual
16.
Front Cell Infect Microbiol ; 14: 1384809, 2024.
Article in English | MEDLINE | ID: mdl-38774631

ABSTRACT

Introduction: Sharing microbiome data among researchers fosters new innovations and reduces costs for research. Practically, this means that the (meta)data will have to be standardized, transparent and readily available for researchers. The microbiome data and associated metadata will then be described with regard to composition and origin, in order to maximize the possibilities for application in various contexts of research. Here, we propose a set of tools and protocols to develop a real-time FAIR (Findable, Accessible, Interoperable and Reusable) compliant database for the handling and storage of human microbiome and host-associated data. Methods: The conflicts arising from privacy laws with respect to metadata, possible human genome sequences in the metagenome shotgun data and FAIR implementations are discussed. Alternate pathways for achieving compliance in such conflicts are analyzed. Sample-traceable and sensitive microbiome data, such as DNA sequences or geolocalized metadata, are identified, and the role of the GDPR (General Data Protection Regulation) data regulations is considered. For the construction of the database, procedures have been realized to make data FAIR compliant, while preserving the privacy of the participants providing the data. Results and discussion: An open-source development platform, Supabase, was used to implement the microbiome database. Researchers can deploy this real-time database to access, upload, download and interact with human microbiome data in a FAIR compliant manner. In addition, a large language model (LLM) powered by ChatGPT is developed and deployed to enable knowledge dissemination and non-expert usage of the database.


Subject(s)
Microbiota, Humans, Microbiota/genetics, Databases, Factual, Metadata, Metagenome, Information Dissemination, Computational Biology/methods, Metagenomics/methods, Databases, Genetic
17.
PeerJ Comput Sci ; 10: e1951, 2024.
Article in English | MEDLINE | ID: mdl-38660149

ABSTRACT

Software plays a fundamental role in research as a tool, an output, or even as an object of study. This special issue on software citation, indexing, and discoverability brings together five papers examining different aspects of how the use of software is recorded and made available to others. It describes new work on datasets that enable large-scale analysis of the evolution of software usage and citation, that presents evidence of increased citation rates when software artifacts are released, that provides guidance for registries and repositories to support software citation and findability, and that shows there are still barriers to improving and formalising software citation and publication practice. As the use of software increases further, driven by modern research methods, addressing the barriers to software citation and discoverability will encourage greater sharing and reuse of software, in turn enabling research progress.

19.
J Biomed Semantics ; 15(1): 1, 2024 Mar 04.
Article in English | MEDLINE | ID: mdl-38438913

ABSTRACT

The increasing number of articles on adverse interactions that may occur when specific foods are consumed with certain drugs makes it difficult to keep up with the latest findings. Conflicting information is available in the scientific literature and specialized knowledge bases because interactions are described in an unstructured or semi-structured format. The FIDEO ontology aims to integrate and represent information about food-drug interactions in a structured way. This article reports on the new version of this ontology in which more than 1700 interactions are integrated from two online resources: DrugBank and Hedrine. These food-drug interactions have been represented in FIDEO in the form of precompiled concepts, each of which specifies both the food and the drug involved. Additionally, competency questions that can be answered are reviewed, and avenues for further enrichment are discussed.


Subject(s)
Food-Drug Interactions, Knowledge Bases
20.
Stud Health Technol Inform ; 310: 154-158, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269784

ABSTRACT

Decision-making in healthcare is heavily reliant on data that are findable, accessible, interoperable and reusable (FAIR). Evolving advancements in genomics also heavily rely on FAIR data to steer reliable research for the future. For practical purposes, ensuring the FAIRness of a clinical data set can be challenging but could be aided by using FAIR validators. This study describes the test of two open-access web tools, in their demo versions, to determine the FAIR levels of three submitted genomic data files with different formats (JSON, TXT, CSV). The F-UJI and FAIR-Checker tools provided similar FAIR scores for the three submitted files. However, the F-UJI tool assigned a total rating, whereas FAIR-Checker gave scores clustered by FAIR principles. Neither tool was suited to determining the FAIR levels of a FHIR® JSON metadata file. Despite their early developmental status, FAIR validator tools have great potential to assist clinicians in the FAIRification of their research data.
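The per-principle clustering described for FAIR-Checker can be mimicked with a toy scorer. The checks below are simplified placeholders invented for illustration, not the actual metrics of F-UJI or FAIR-Checker: each check is tagged with the principle it serves, and passing checks are aggregated per principle and in total.

```python
# Hypothetical mapping of checks to the FAIR principle each one serves.
CHECKS = {
    "has_persistent_identifier": "F",
    "has_rich_metadata": "F",
    "retrievable_over_standard_protocol": "A",
    "uses_open_formats": "I",
    "has_machine_readable_license": "R",
}

def fair_scores(results: dict) -> dict:
    """results: {check_name: bool}. Returns scores clustered by FAIR
    principle, plus a total rating in the style of a single-number score."""
    scores = {"F": 0, "A": 0, "I": 0, "R": 0}
    for check, passed in results.items():
        if passed:
            scores[CHECKS[check]] += 1
    scores["total"] = sum(scores[p] for p in "FAIR")
    return scores

report = fair_scores({
    "has_persistent_identifier": True,
    "has_rich_metadata": False,
    "retrievable_over_standard_protocol": True,
    "uses_open_formats": True,
    "has_machine_readable_license": False,
})
```

Reporting both views, as the two tools do in different ways, helps a researcher see not just how FAIR a file is overall but which principle needs attention.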


Subject(s)
Genomics, Health Facilities, Metadata, Records