Results 1 - 20 of 11,005
2.
Med Ref Serv Q ; 43(2): 130-151, 2024.
Article in English | MEDLINE | ID: mdl-38722608

ABSTRACT

While LibGuides are widely used in libraries to curate resources for users, there are a number of common problems, including maintenance, design and layout, and curating relevant and concise content. One health sciences library sought to improve our LibGuides, consulting usage statistics, user feedback, and recommendations from the literature to inform decision making. Our team recommended a number of changes to make LibGuides more usable, including creating robust maintenance and content guidelines, scheduling regular updates, and various changes to the format of the guides themselves to make them more user-friendly.


Subjects
Libraries, Medical; Organizational Case Studies; Libraries, Medical/organization & administration; Humans; Information Storage and Retrieval/methods
3.
Med Ref Serv Q ; 43(2): 182-190, 2024.
Article in English | MEDLINE | ID: mdl-38722607

ABSTRACT

Created by the NIH in 2015, the Common Data Elements (CDE) Repository provides free online access to search and use Common Data Elements. This tool helps to ensure consistent data collection, saves time and resources, and ultimately improves the accuracy of and interoperability among datasets. The purpose of this column is to provide an overview of the database, discuss why it is important for researchers and relevant for health sciences librarians, and review the basic layout of the website, including sample searches that will demonstrate how it can be used.


Assuntos
Elementos de Dados Comuns , Estados Unidos , Humanos , Bases de Dados Factuais , Armazenamento e Recuperação da Informação/métodos , National Institutes of Health (U.S.)
4.
Med Ref Serv Q ; 43(2): 196-202, 2024.
Article in English | MEDLINE | ID: mdl-38722609

ABSTRACT

Named entity recognition (NER) refers to computer systems that, since the early 1990s, have used a variety of computing strategies to extract information from raw text. With rapid advances in AI and computing, NER models have gained significant attention and now serve as foundational tools across numerous professional domains for organizing unstructured data for research and practical applications. This is particularly evident in medicine and healthcare, where NER models are essential for efficiently extracting critical information from complex documents that are challenging to review manually. Despite these successes, NER still has limitations in fully comprehending the nuances of natural language. However, the development of more advanced and user-friendly models promises to significantly improve the work experience of professional users.
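The extraction idea the abstract describes can be illustrated with a minimal dictionary (gazetteer) matcher. Real NER systems use trained statistical or neural models; the terms and labels below are invented purely for illustration.

```python
import re

# Hypothetical gazetteer mapping surface forms to entity labels.
GAZETTEER = {
    "aspirin": "DRUG",
    "type 2 diabetes": "DISEASE",
    "hypertension": "DISEASE",
}

def extract_entities(text: str):
    """Return (span, label) pairs for gazetteer terms found in the text."""
    entities = []
    lowered = text.lower()
    for term, label in GAZETTEER.items():
        for match in re.finditer(re.escape(term), lowered):
            entities.append((text[match.start():match.end()], label))
    return entities

entities = extract_entities("Patient with type 2 diabetes was given aspirin.")
```

Dictionary matching is brittle (no context, no unseen terms), which is precisely the gap the trained models discussed above address.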


Assuntos
Armazenamento e Recuperação da Informação , Processamento de Linguagem Natural , Armazenamento e Recuperação da Informação/métodos , Humanos , Inteligência Artificial
5.
Bioinformatics ; 40(5)2024 May 02.
Article in English | MEDLINE | ID: mdl-38648049

ABSTRACT

MOTIVATION: As data storage challenges grow and existing technologies approach their limits, synthetic DNA emerges as a promising storage solution due to its remarkable density and durability advantages. While cost remains a concern, emerging sequencing and synthesis technologies aim to mitigate it, yet they introduce challenges such as errors in the storage and retrieval process. One crucial task in a DNA storage system is clustering numerous DNA reads into groups that represent the original input strands. RESULTS: In this paper, we review different methods for evaluating clustering algorithms and introduce a novel clustering algorithm for DNA storage systems, named Gradual Hash-based Clustering (GradHC). The primary strength of GradHC lies in its ability to cluster various types of designs with excellent accuracy, including varying strand lengths, cluster sizes (including extremely small clusters), and different error ranges. Benchmark analysis demonstrates that GradHC is significantly more stable and robust than other clustering algorithms previously proposed for DNA storage, while also producing highly reliable clustering results. AVAILABILITY AND IMPLEMENTATION: https://github.com/bensdvir/GradHC.
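The core clustering task described here, grouping noisy reads that originate from the same strand, can be sketched with a toy hash-based bucketing scheme. This is not the GradHC algorithm, only an illustration of the hash-signature idea; the reads and k-mer size are made up.

```python
from collections import defaultdict

def minimizer(read: str, k: int = 4) -> str:
    """Lexicographically smallest k-mer of a read (its hash signature)."""
    return min(read[i:i + k] for i in range(len(read) - k + 1))

def cluster_reads(reads):
    """Bucket reads by their minimizer; noisy copies of the same strand
    often keep the same minimizer and so land in the same bucket."""
    buckets = defaultdict(list)
    for read in reads:
        buckets[minimizer(read)].append(read)
    return dict(buckets)

# Two noisy copies of one strand (differing in the last base) plus an
# unrelated strand.
reads = ["ACGTTTTT", "ACGTTTTA", "GGGGCCCC"]
clusters = cluster_reads(reads)
```

A single minimizer is fragile when errors hit the signature itself, which is one reason practical schemes such as GradHC use gradual, multi-signature strategies.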


Assuntos
Algoritmos , DNA , Análise de Sequência de DNA , DNA/química , Análise por Conglomerados , Análise de Sequência de DNA/métodos , Software , Armazenamento e Recuperação da Informação/métodos
6.
BMC Med Inform Decis Mak ; 24(1): 109, 2024 Apr 25.
Article in English | MEDLINE | ID: mdl-38664792

ABSTRACT

BACKGROUND: A blockchain can be described as a distributed ledger database where, under a consensus mechanism, data are permanently stored in records, called blocks, linked together with cryptography. Each block contains a cryptographic hash of the previous block, a timestamp, and transaction data, which are permanently stored in thousands of nodes and never altered. This offers a real-world application for generating a permanent, decentralized record of scientific data, taking advantage of blockchain features such as timestamping and immutability. IMPLEMENTATION: Here, we propose INNBC DApp, a Web3 decentralized application providing a simple front-end user interface connected with a smart contract for recording scientific data on a modern, proof-of-stake (POS) blockchain such as BNB Smart Chain. Unlike previously proposed blockchain tools that store only a hash of the data on-chain, here the data are stored fully on-chain within the transaction itself as "transaction input data", a truly decentralized storage solution. In addition to plain text, the DApp can record various types of files, such as documents, images, audio, and video, by using Base64 encoding. In this study, we describe how to use the DApp and perform real-world transactions storing different kinds of data from previously published research articles, describing the advantages and limitations of using such a technology, analyzing the cost in terms of transaction fees, and discussing possible use cases. RESULTS: We were able to store several different types of data on the BNB Smart Chain: raw text, documents, images, audio, and video. Notably, we stored several complete research articles at a reasonable cost. We found a limit of 95 KB for each single file upload. Considering that Base64 encoding increases file size by approximately 33%, this gives a theoretical limit of 126 KB. We successfully overcame this limitation by splitting larger files into smaller chunks and uploading them as multi-volume archives. Additionally, we propose AES encryption to protect sensitive data. Accordingly, we show that it is possible to include enough data to make storing and sharing scientific documents and images on the blockchain useful, at a reasonable cost for users. CONCLUSION: INNBC DApp represents a real use case for blockchain technology in decentralizing biomedical data storage and sharing, providing features such as immutability, timestamping, and identity that can be used to ensure permanent availability of the data, to provide proof of existence, and to protect authorship. It is a freely available decentralized science (DeSci) tool aiming to help bring mass adoption of blockchain technology to the scientific community.
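The size arithmetic in the RESULTS section (Base64 adds roughly one third, so a 95 KB file grows to about 126 KB once encoded, and larger files must be split) can be checked with a short sketch. The chunking helper is hypothetical, not the DApp's actual code.

```python
import base64

FILE_LIMIT = 95 * 1024          # per-upload file-size limit reported above

def to_chunks(data: bytes, limit: int = FILE_LIMIT):
    """Split raw data into upload-sized pieces and Base64-encode each one,
    mimicking the multi-volume-archive workaround described in the study."""
    return [base64.b64encode(data[i:i + limit])
            for i in range(0, len(data), limit)]

blob = bytes(200 * 1024)        # dummy 200 KB file
chunks = to_chunks(blob)        # 95 KB + 95 KB + 10 KB pieces

# Base64 maps every 3 raw bytes to 4 output bytes, hence the ~33% growth.
encoded_95k = base64.b64encode(bytes(FILE_LIMIT))
```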


Assuntos
Blockchain , Humanos , Armazenamento e Recuperação da Informação/métodos , Segurança Computacional/normas
7.
Stud Health Technol Inform ; 313: 74-80, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682508

ABSTRACT

While adherence to clinical guidelines improves the quality and consistency of care, personalized healthcare also requires a deep understanding of individual disease models and treatment plans. The structured preparation of routine medical data in a given clinical context, e.g. a treatment pathway outlined in a medical guideline, is currently a challenging task. Medical data are often stored in diverse formats and systems, and the relevant clinical knowledge defining the context is not available in machine-readable formats. We present an approach to extract information from medical free-text documentation by using structured clinical knowledge to guide information extraction into a structured and encoded format, overcoming known challenges for natural language processing algorithms. Preliminary results have been encouraging: one of our methods extracted 100% of all data points, with 85% accuracy on the details. These advancements show the potential of our approach to effectively use unstructured clinical data to elevate the quality of patient care and reduce the workload of medical personnel.
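The guided-extraction idea, with structured clinical knowledge supplying the slots that free text is mapped into, might look like the following sketch. The guideline item, slot names, and patterns are all invented for illustration and are not the authors' method.

```python
import re

# Hypothetical context: a treatment-pathway item expecting a TNM stage.
# The expected slots come from the structured knowledge, not from the text.
PATTERNS = {
    "tnm_t": re.compile(r"\b(T[0-4])\b"),
    "tnm_n": re.compile(r"\b(N[0-3])\b"),
}

def extract(note: str) -> dict:
    """Fill the knowledge-defined slots from a free-text note."""
    result = {}
    for slot, pattern in PATTERNS.items():
        match = pattern.search(note)
        if match:
            result[slot] = match.group(1)
    return result

coded = extract("Histology confirmed a T2 N1 adenocarcinoma.")
```

The point of the design is that the clinical context defines *what* to look for, so the extractor only has to decide *where* it appears in the note.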


Assuntos
Registros Eletrônicos de Saúde , Processamento de Linguagem Natural , Humanos , Mineração de Dados/métodos , Armazenamento e Recuperação da Informação/métodos , Algoritmos
8.
Stud Health Technol Inform ; 313: 198-202, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682530

ABSTRACT

Secondary use of clinical health data implies a prior integration of mostly heterogeneous and multidimensional data sets. A clinical data warehouse addresses the technological and organizational framework required for this by making any data available for analysis. However, users of a data warehouse often do not have a comprehensive overview of all available data and only know about their own data in their own systems, a situation also referred to as a 'data siloed state'. This problem can be addressed, and ultimately solved, by implementing a data catalog. Its core function is a search engine that searches the metadata collected from different data sources and thereby provides access to all available data. With this in mind, we conducted an exploratory online market survey followed by a vendor comparison as a prerequisite for selecting a data catalog system. Vendor performance was assessed against seven predetermined and weighted selection criteria. Although three vendors achieved the highest scores, the results lay close together. Detailed investigations and test installations are needed to narrow down the selection further.
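The weighted-criteria comparison described above can be sketched as a simple scoring function. The criteria names, weights, and ratings below are invented, not those used in the survey.

```python
def weighted_score(ratings: dict, weights: dict) -> float:
    """Sum of rating x weight over all selection criteria."""
    return sum(ratings[criterion] * w for criterion, w in weights.items())

# Hypothetical criteria (the study used seven) with weights summing to 1.
weights = {"search": 0.3, "metadata": 0.25, "integration": 0.25, "cost": 0.2}

# Hypothetical vendor ratings on a 1-5 scale.
vendors = {
    "A": {"search": 4, "metadata": 3, "integration": 5, "cost": 2},
    "B": {"search": 5, "metadata": 4, "integration": 3, "cost": 3},
}

scores = {name: weighted_score(r, weights) for name, r in vendors.items()}
best = max(scores, key=scores.get)
```

As the abstract notes, top scores can lie close together, which is why test installations follow the paper-based scoring.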


Assuntos
Data Warehousing , Registros Eletrônicos de Saúde , Ferramenta de Busca , Humanos , Armazenamento e Recuperação da Informação/métodos , Metadados
9.
Stud Health Technol Inform ; 313: 215-220, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38682533

ABSTRACT

BACKGROUND: Tele-ophthalmology is gaining recognition for its role in improving eye care accessibility via cloud-based solutions. The Google Cloud Platform (GCP) Healthcare API enables secure and efficient management of medical image data such as high-resolution ophthalmic images. OBJECTIVES: This study investigates cloud-based solutions' effectiveness in tele-ophthalmology, with a focus on GCP's role in data management, annotation, and integration for a novel imaging device. METHODS: Leveraging the Integrating the Healthcare Enterprise (IHE) Eye Care profile, the cloud platform was utilized as a PACS and integrated with the Open Health Imaging Foundation (OHIF) Viewer for image display and annotation capabilities for ophthalmic images. RESULTS: The setup of a GCP DICOM storage and the OHIF Viewer facilitated remote image data analytics. Prolonged loading times and relatively large individual image file sizes indicated system challenges. CONCLUSION: Cloud platforms have the potential to ease distributed data analytics, as needed for efficient tele-ophthalmology scenarios in research and clinical practice, by providing scalable and secure image management solutions.


Assuntos
Computação em Nuvem , Oftalmologia , Telemedicina , Humanos , Sistemas de Informação em Radiologia , Armazenamento e Recuperação da Informação/métodos
10.
Biotechniques ; 76(5): 203-215, 2024 May.
Article in English | MEDLINE | ID: mdl-38573592

ABSTRACT

In the absence of a DNA template, the ab initio production of long double-stranded DNA molecules of predefined sequences is particularly challenging. The DNA synthesis step remains a bottleneck for many applications such as functional assessment of ancestral genes, analysis of alternative splicing, or DNA-based data storage. In this report we propose a fully in vitro protocol to generate very long double-stranded DNA molecules starting from commercially available short DNA blocks in less than 3 days using Golden Gate assembly. This innovative application allowed us to streamline the process to produce a 24 kb-long DNA molecule storing part of the Declaration of the Rights of Man and of the Citizen of 1789. The DNA molecule produced can be readily cloned into a suitable host/vector system for amplification and selection.
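Storing text in DNA, as done here for the 1789 Declaration, rests on mapping bits onto bases. Below is a minimal two-bits-per-nucleotide sketch; real encoding schemes add biochemical constraints (GC content, homopolymer limits) and error correction, which this ignores.

```python
BASES = "ACGT"   # two bits per base: A=00, C=01, G=10, T=11

def bytes_to_dna(data: bytes) -> str:
    """Map each byte onto four nucleotides, two bits at a time."""
    return "".join(BASES[(byte >> shift) & 0b11]
                   for byte in data for shift in (6, 4, 2, 0))

def dna_to_bytes(seq: str) -> bytes:
    """Invert bytes_to_dna to recover the original bytes."""
    out = bytearray()
    for i in range(0, len(seq), 4):
        byte = 0
        for base in seq[i:i + 4]:
            byte = (byte << 2) | BASES.index(base)
        out.append(byte)
    return bytes(out)

strand = bytes_to_dna(b"1789")   # 4 bytes -> 16 nucleotides
```

At 4 nucleotides per byte, a 24 kb molecule holds on the order of 6 KB of raw payload before any redundancy is added.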


Assuntos
DNA , DNA/genética , DNA/química , Armazenamento e Recuperação da Informação/métodos , Humanos , Sequência de Bases/genética , Clonagem Molecular/métodos
11.
J Biomed Semantics ; 15(1): 3, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38654304

ABSTRACT

BACKGROUND: Systematic reviews of Randomized Controlled Trials (RCTs) are an important part of the evidence-based medicine paradigm. However, the creation of such systematic reviews by clinical experts is costly as well as time-consuming, and results can quickly become outdated after publication. Most RCTs are structured according to the Patient, Intervention, Comparison, Outcomes (PICO) framework, and many approaches exist that aim to extract PICO elements automatically. The automatic extraction of PICO information from RCTs has the potential to significantly speed up the creation of systematic reviews and thereby benefit the field of evidence-based medicine. RESULTS: Previous work has addressed the extraction of PICO elements as the task of identifying relevant text spans or sentences, but without populating a structured representation of a trial. In contrast, in this work, we treat PICO elements as structured templates with slots to do justice to the complex nature of the information they represent. We present two different approaches to extract this structured information from the abstracts of RCTs. The first is an extractive approach based on our previous work, extended to capture full document representations as well as by a clustering step that infers the number of instances of each template type. The second is a generative approach based on a seq2seq model that encodes the abstract describing the RCT and uses a decoder to infer a structured representation of a trial, including its arms, treatments, endpoints, and outcomes. Both approaches are evaluated with different base models on an existing manually annotated dataset comprising 211 clinical trial abstracts on Type 2 Diabetes and Glaucoma. For both diseases, the extractive approach (with flan-t5-base) reached the best F1 score, i.e., 0.547 (±0.006) for type 2 diabetes and 0.636 (±0.006) for glaucoma. Generally, the F1 scores were higher for glaucoma than for type 2 diabetes, and the standard deviation was higher for the generative approach. CONCLUSION: In our experiments, both approaches show promising performance in extracting structured PICO information from RCTs, especially considering that most related work focuses on the far easier task of predicting less structured objects. In our experimental results, the extractive approach performs best in both cases, although its lead is greater for glaucoma than for type 2 diabetes. For future work, it remains to be investigated how base model size affects the performance of the two approaches in comparison. Although the extractive approach currently leaves more room for direct improvements, the generative approach might benefit from larger models.
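The "structured templates with slots" representation can be sketched as plain data classes. The field names below are illustrative, not the schema used in the paper.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Arm:
    """One trial arm: an intervention slot plus an optional dose slot."""
    intervention: str
    dose: Optional[str] = None

@dataclass
class TrialTemplate:
    """A trial template whose slots are filled by the extraction system;
    a clustering step would decide how many Arm instances to create."""
    population: str
    arms: List[Arm] = field(default_factory=list)
    outcomes: List[str] = field(default_factory=list)

# Filling the template as an extractor might, with made-up values.
trial = TrialTemplate(population="adults with type 2 diabetes")
trial.arms.append(Arm("metformin", "500 mg twice daily"))
trial.arms.append(Arm("placebo"))
trial.outcomes.append("HbA1c change at 24 weeks")
```

Compared with span labeling, this representation forces the system to commit to how many arms a trial has and which values belong together.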


Assuntos
Indexação e Redação de Resumos , Ensaios Clínicos Controlados Aleatórios como Assunto , Humanos , Processamento de Linguagem Natural , Armazenamento e Recuperação da Informação/métodos
12.
PLoS One ; 19(4): e0301277, 2024.
Article in English | MEDLINE | ID: mdl-38662720

ABSTRACT

Outsourcing data to remote cloud providers is becoming increasingly popular amongst organizations and individuals. A semi-trusted server uses Searchable Symmetric Encryption (SSE) to keep the search information within acceptable leakage levels while searching an encrypted database. A dynamic SSE (DSSE) scheme enables documents to be added and removed through update queries, where some information is leaked to the server each time a record is added or removed. The complexity of the structures and cryptographic primitives in most existing DSSE schemes makes them inefficient in terms of storage, and query requests generate overhead costs on the Smart Device Client (SDC) side. Achieving constant storage cost for SDCs enhances the viability, efficiency, and ease of use of smart devices, promoting their widespread adoption in various applications while upholding robust privacy and security standards. DSSE schemes must address two important privacy requirements: forward and backward privacy. As the number of keywords grows, the client-side storage cost also grows at a linear rate. This article introduces an innovative, secure, and lightweight DSSE scheme that ensures Type-II backward and forward privacy without incurring ongoing storage costs or high-cost query generation for the SDC. The proposed scheme, based on an inverted index structure, merges the hash table with linked nodes, linking encrypted keywords across all hash tables. Achieving a one-time O(1) storage cost without keyword counters on the SDC side, the scheme enhances security by generating a fresh key for each update. Experimental results show low-cost query generation on the SDC side (6,460 nanoseconds), making it compatible with resource-limited devices. The scheme outperforms existing ones, reducing server-side search costs significantly.
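The inverted-index idea behind such schemes can be illustrated with a toy searchable index in which the server stores postings under an HMAC-derived token rather than the plaintext keyword. This omits the paper's per-update fresh keys, linked nodes, and forward/backward-privacy machinery; it is only a sketch of the basic concept, with a made-up key and documents.

```python
import hashlib
import hmac
from collections import defaultdict

KEY = b"client-secret-key"      # held only by the smart-device client

def token(keyword: str) -> bytes:
    """Deterministic search token; the server never sees the keyword."""
    return hmac.new(KEY, keyword.encode(), hashlib.sha256).digest()

index = defaultdict(list)       # server-side index: token -> doc ids

def add_document(doc_id: str, keywords):
    for kw in keywords:
        index[token(kw)].append(doc_id)   # server learns only the token

def search(keyword: str):
    return index.get(token(keyword), [])

add_document("doc1", ["diabetes", "insulin"])
add_document("doc2", ["insulin"])
```

A real DSSE scheme would also encrypt the posting lists and rotate keys per update so that past tokens cannot be replayed against new documents.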


Assuntos
Segurança Computacional , Humanos , Computação em Nuvem , Armazenamento e Recuperação da Informação/métodos , Algoritmos , Privacidade
13.
Cell Rep ; 43(4): 113699, 2024 Apr 23.
Article in English | MEDLINE | ID: mdl-38517891

ABSTRACT

Over the past decade, the rapid development of DNA synthesis and sequencing technologies has enabled preliminary use of DNA molecules for digital data storage, overcoming the capacity and persistence bottlenecks of silicon-based storage media. DNA storage has now been fully accomplished in the laboratory through existing biotechnology, which again demonstrates the viability of carbon-based storage media. However, the high cost and latency of data reconstruction pose challenges that hinder the practical implementation of DNA storage beyond the laboratory. In this article, we review existing advanced DNA storage methods, analyze the characteristics and performance of biotechnological approaches at various stages of data writing and reading, and discuss potential factors influencing DNA storage from the perspective of data reconstruction.


Assuntos
DNA , DNA/metabolismo , Armazenamento e Recuperação da Informação/métodos , Humanos
14.
Res Synth Methods ; 15(3): 372-383, 2024 May.
Article in English | MEDLINE | ID: mdl-38185812

ABSTRACT

Literature screening is the process of identifying all relevant records from a pool of candidate records in systematic reviews, meta-analyses, and other research synthesis tasks. This process is time consuming, expensive, and prone to human error. Screening prioritization methods attempt to help reviewers identify the most relevant records while screening only a proportion of high-priority candidate records. In previous studies, screening prioritization is often referred to as automatic literature screening or automatic literature identification. Numerous screening prioritization methods have been proposed in recent years. However, there is a lack of screening prioritization methods with reliable performance. Our objective is to develop a screening prioritization algorithm with reliable performance for practical use, for example, an algorithm that guarantees an 80% chance of identifying at least 80% of the relevant records. Based on a target-based method proposed by Cormack and Grossman, we propose a screening prioritization algorithm using sampling with replacement. The algorithm is a wrapper that can work with any current screening prioritization algorithm to guarantee performance. We prove, using probability theory, that the algorithm guarantees the performance. We also run numeric experiments to test the performance of our algorithm when applied in practice. The results show that the algorithm achieves reliable performance under different circumstances. The proposed screening prioritization algorithm can be reliably used in real-world research synthesis tasks.
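The evaluation setting, screening records in descending priority order and measuring how many relevant records are found after a given screening fraction, can be sketched as follows. This illustrates the setting only, not the paper's wrapper algorithm, and the priority scores are invented.

```python
def recall_at_fraction(priorities: dict, relevant: set, fraction: float) -> float:
    """Screen the top `fraction` of records by priority and return the
    proportion of relevant records found among them."""
    order = sorted(priorities, key=priorities.get, reverse=True)
    screened = set(order[:int(len(order) * fraction)])
    return len(screened & relevant) / len(relevant)

# Hypothetical output of some prioritization algorithm over five records.
priorities = {"r1": 0.9, "r2": 0.8, "r3": 0.4, "r4": 0.2, "r5": 0.1}
relevant = {"r1", "r2", "r4"}

recall = recall_at_fraction(priorities, relevant, 0.4)
```

A guarantee like "an 80% chance of at least 80% recall" is a statement about the distribution of this recall value, which is what the paper's sampling-with-replacement wrapper controls.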


Assuntos
Algoritmos , Automação , Armazenamento e Recuperação da Informação/métodos , Metanálise como Assunto , Modelos Estatísticos , Probabilidade , Reprodutibilidade dos Testes , Literatura de Revisão como Assunto , Revisões Sistemáticas como Assunto/métodos
15.
Res Synth Methods ; 15(3): 441-449, 2024 May.
Article in English | MEDLINE | ID: mdl-38098285

ABSTRACT

The literature search underpins data collection for all systematic reviews (SRs). The SR reporting guideline PRISMA, and its extensions, aim to facilitate research transparency and reproducibility, and ultimately improve the quality of research, by instructing authors to provide specific research materials and data upon publication of the manuscript. Search strategies are one item of data explicitly included in PRISMA and in the critical appraisal tool AMSTAR2. Yet some authors use search availability statements implying that the search strategies are available upon request instead of providing the strategies up front. We sought out reviews with search availability statements, characterized them, and requested the search strategies from the authors via email. Over half of the included reviews cited PRISMA, but fewer than a third included any search strategies. After requesting the strategies via email as instructed, we received replies from 46% of authors and eventually received at least one search strategy from 36% of authors. Requesting search strategies via email has a low chance of success. Ask and you might receive, but you probably will not. SRs that do not make search strategies available are, at best, low quality according to AMSTAR2; journal editors can and should enforce the requirement for authors to include their search strategies alongside their SR manuscripts.


Assuntos
Revisões Sistemáticas como Assunto , Humanos , Armazenamento e Recuperação da Informação/métodos , Reprodutibilidade dos Testes , Guias como Assunto , Projetos de Pesquisa , Disseminação de Informação , Literatura de Revisão como Assunto , Correio Eletrônico , Bases de Dados Bibliográficas
17.
J Am Med Inform Assoc ; 30(4): 718-725, 2023 03 16.
Article in English | MEDLINE | ID: mdl-36688534

ABSTRACT

OBJECTIVE: Convert the Medical Information Mart for Intensive Care (MIMIC)-IV database into Health Level 7 Fast Healthcare Interoperability Resources (FHIR). Additionally, generate and publish an openly available demo of the resources, and create a FHIR Implementation Guide to support and clarify the usage of MIMIC-IV on FHIR. MATERIALS AND METHODS: FHIR profiles and terminology system of MIMIC-IV were modeled from the base FHIR R4 resources. Data and terminology were reorganized from the relational structure into FHIR according to the profiles. Resources generated were validated for conformance with the FHIR profiles. Finally, FHIR resources were published as newline delimited JSON files and the profiles were packaged into an implementation guide. RESULTS: The modeling of MIMIC-IV in FHIR resulted in 25 profiles, 2 extensions, 35 ValueSets, and 34 CodeSystems. An implementation guide encompassing the FHIR modeling can be accessed at mimic.mit.edu/fhir/mimic. The generated demo dataset contained 100 patients and over 915 000 resources. The full dataset contained 315 000 patients covering approximately 5 840 000 resources. The final datasets in NDJSON format are accessible on PhysioNet. DISCUSSION: Our work highlights the challenges and benefits of generating a real-world FHIR store. The challenges arise from terminology mapping and profiling modeling decisions. The benefits come from the extensively validated openly accessible data created as a result of the modeling work. CONCLUSION: The newly created MIMIC-IV on FHIR provides one of the first accessible deidentified critical care FHIR datasets. The extensive real-world data found in MIMIC-IV on FHIR will be invaluable for research and the development of healthcare applications.
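The NDJSON publication format mentioned in the RESULTS (one FHIR resource serialized per line) can be sketched as follows. The patient resources are made-up minimal examples, not MIMIC data.

```python
import io
import json

# Two minimal FHIR Patient resources (illustrative content only).
patients = [
    {"resourceType": "Patient", "id": "example-1", "gender": "female"},
    {"resourceType": "Patient", "id": "example-2", "gender": "male"},
]

# Newline-delimited JSON: one complete resource per line, no enclosing array.
buf = io.StringIO()
for resource in patients:
    buf.write(json.dumps(resource) + "\n")

ndjson = buf.getvalue()
lines = ndjson.strip().split("\n")
```

NDJSON suits bulk FHIR exports because each line can be parsed independently, so multi-gigabyte files can be streamed without loading everything at once.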


Assuntos
Nível Sete de Saúde , Disseminação de Informação , Armazenamento e Recuperação da Informação , Pacientes , Armazenamento e Recuperação da Informação/métodos , Armazenamento e Recuperação da Informação/normas , Humanos , Conjuntos de Dados como Assunto , Reprodutibilidade dos Testes , Registros Eletrônicos de Saúde , Disseminação de Informação/métodos
18.
IEEE/ACM Trans Comput Biol Bioinform ; 20(3): 1864-1875, 2023.
Article in English | MEDLINE | ID: mdl-36331640

ABSTRACT

Retrieval Question Answering (ReQA) is an essential mechanism of information sharing that aims to find the answer to a posed question from large-scale candidates. Currently, the most efficient solution is the Dual-Encoder, which has shown great potential in the general domain, while research on biomedical ReQA is still lacking. Obtaining a robust Dual-Encoder from biomedical datasets is challenging, as scarce annotated data are not enough to sufficiently train the model, which results in over-fitting problems. In this work, we first build ReQA BioASQ datasets for retrieving answers to biomedical questions, which can facilitate the corresponding research. On that basis, we propose a framework to solve the over-fitting issue for robust biomedical answer retrieval. Under the proposed framework, we first pre-train the Dual-Encoder on a natural language inference (NLI) task before training on biomedical ReQA, where we appropriately change the pre-training objective of NLI to improve the consistency between NLI and biomedical ReQA, which significantly improves transferability. Moreover, to eliminate the feature redundancies of the Dual-Encoder, consistent post-whitening is proposed to decorrelate the training and trained sentence embeddings. With extensive experiments, the proposed framework achieves promising results and exhibits significant improvement compared with various competitive methods.
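The post-whitening step, decorrelating sentence embeddings so their features have identity covariance, can be sketched with NumPy. This shows the generic whitening transform; the paper's "consistent" variant is approximated here by fitting the transform once and reusing it, and the embeddings are random stand-ins.

```python
import numpy as np

def fit_whitening(embeddings: np.ndarray):
    """Fit mean and whitening matrix W = U diag(1/sqrt(s)) from the SVD
    of the sample covariance, so whitened features are decorrelated."""
    mu = embeddings.mean(axis=0)
    cov = np.cov(embeddings - mu, rowvar=False)
    u, s, _ = np.linalg.svd(cov)
    w = u @ np.diag(1.0 / np.sqrt(s))
    return mu, w

def whiten(embeddings: np.ndarray, mu: np.ndarray, w: np.ndarray):
    return (embeddings - mu) @ w

rng = np.random.default_rng(0)
emb = rng.normal(size=(200, 8))      # stand-in sentence embeddings
mu, w = fit_whitening(emb)
white = whiten(emb, mu, w)
cov_white = np.cov(white, rowvar=False)   # should be close to identity
```

Applying the same (mu, w) to both train and test embeddings keeps the two distributions aligned, which is the "consistency" concern the abstract raises.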


Assuntos
Armazenamento e Recuperação da Informação , Armazenamento e Recuperação da Informação/métodos , Aprendizado de Máquina , Curadoria de Dados , Inteligência Artificial
19.
Rev. cuba. inform. méd ; 14(2): e480, jul.-dic. 2022. graf
Article in English | LILACS, CUMED | ID: biblio-1408545

ABSTRACT

Anemia is the most common blood disorder in the world, affecting millions of people yearly. It has multiple causes and jeopardizes development, growth, and learning. Current tools provide non-hematologist doctors with just numbers. The present paper proposes a web application intended to provide doctors at Celia Sánchez Manduley Hospital in Manzanillo with a tool for determining the morphologic type of anemia and creating a list of possible causes, while also storing patient data in a database for future research. As an information system, the website is a powerful aid to decision making, in particular the diagnostic process, providing more detailed information on anemia and fostering information management at the hospital. For the application, HTML, CSS, JavaScript, and PHP were used as languages; Apache as the web server; CodeIgniter as the PHP framework; MariaDB as the database management system; and Visual Studio Code as the development environment.
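The morphologic classification such a tool performs is conventionally based on mean corpuscular volume (MCV) cut-offs. A sketch follows, in Python rather than the PHP the authors used; the abbreviated cause lists are illustrative, not the application's actual content.

```python
def morphologic_type(mcv_fl: float) -> str:
    """Classify anemia by MCV using the standard cut-offs (in fL)."""
    if mcv_fl < 80:
        return "microcytic"
    if mcv_fl <= 100:
        return "normocytic"
    return "macrocytic"

# Abbreviated, illustrative cause lists per morphologic type.
CAUSES = {
    "microcytic": ["iron deficiency", "thalassemia"],
    "normocytic": ["acute blood loss", "chronic kidney disease"],
    "macrocytic": ["vitamin B12 deficiency", "folate deficiency"],
}

result = morphologic_type(72)        # e.g. a patient's MCV of 72 fL
possible_causes = CAUSES[result]
```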




Assuntos
Humanos , Masculino , Feminino , Redes de Comunicação de Computadores , Aplicações da Informática Médica , Linguagens de Programação , Armazenamento e Recuperação da Informação/métodos , Anemia/diagnóstico , Anemia/epidemiologia
20.
Rev. cuba. inform. méd ; 14(2): e520, jul.-dic. 2022. graf
Article in Spanish | LILACS, CUMED | ID: biblio-1408543

ABSTRACT



For neuroscientists, it is a challenge to keep track of the data and metadata generated in each investigation and to extract all the relevant information accurately, which is crucial for interpreting results and a minimum requirement for researchers to build their investigations on previous findings. As much information as possible should be kept from the start, even if it may seem irrelevant, and data should be recorded and stored with their metadata clearly and concisely. A preliminary analysis of the specialized literature revealed an absence of detailed research on how to incorporate data and metadata management into clinical brain research, in terms of organizing data and metadata completely in digital repositories, collecting and entering them with attention to completeness, and taking advantage of such collection during data analysis. This research aims to characterize neuroscience data and metadata conceptually and technically in order to facilitate the development of computing solutions for their management and processing. Different bibliographic sources were consulted, as well as databases and repositories such as PubMed, SciELO, Nature, and ResearchGate. The analysis of the collection, organization, processing, and storage of neuroscience data and metadata for each data acquisition technique (EEG, iEEG, MEG, PET), as well as their link to the Brain Imaging Data Structure (BIDS), yielded a general characterization of how to manage and process the information they contain.
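The BIDS convention mentioned above encodes metadata (subject, session, task, modality) directly in file names and folder structure, so the metadata travels with the data. A small sketch of building such a path; the entity values are invented.

```python
from pathlib import PurePosixPath

def bids_path(sub: str, ses: str, task: str,
              modality: str, ext: str) -> PurePosixPath:
    """Build a BIDS-style path: sub-<X>/ses-<Y>/<modality>/<entities><ext>."""
    name = f"sub-{sub}_ses-{ses}_task-{task}_{modality}{ext}"
    return PurePosixPath(f"sub-{sub}", f"ses-{ses}", modality, name)

# A hypothetical resting-state EEG recording for subject 01, session 01.
p = bids_path("01", "01", "rest", "eeg", ".edf")
```

Because every entity appears in the file name itself, a recording remains interpretable even when detached from its repository.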


Assuntos
Humanos , Masculino , Feminino , Informática Médica , Aplicações da Informática Médica , Linguagens de Programação , Armazenamento e Recuperação da Informação/métodos , Metadados , Neurociências