Results 1 - 20 of 11,032
1.
BMC Med ; 22(1): 276, 2024 Jul 02.
Article in English | MEDLINE | ID: mdl-38956666

ABSTRACT

BACKGROUND: Pregnancy acts as a cardiovascular stress test. Although many complications resolve following birth, women with hypertensive disorders of pregnancy have an increased risk of developing cardiovascular disease (CVD) long-term. Monitoring postnatal health can reduce this risk but requires better methods to identify high-risk women for timely interventions. METHODS: Employing a qualitative descriptive study design, focus groups and/or interviews were conducted, separately engaging public contributors and clinical professionals. Diverse participants were recruited through social media convenience sampling. Semi-structured, facilitator-led discussions explored perspectives of current postnatal assessment and attitudes towards linking patient electronic healthcare data to develop digital tools for identifying postpartum women at risk of CVD. Participant perspectives were gathered using post-it notes or a facilitator scribe and analysed thematically. RESULTS: From 27 public and seven clinical contributors, five themes regarding postnatal check expectations versus reality were developed, including 'limited resources', 'low maternal health priority', 'lack of knowledge', 'ineffective systems' and 'new mum syndrome'. Despite some concerns, all supported data linkage to identify women postnatally, targeting intervention to those at greater risk of CVD. Participants outlined potential benefits of digitalisation and risk prediction, highlighting design and communication needs for diverse communities. CONCLUSIONS: Current health system constraints in England contribute to suboptimal postnatal care. Integrating data linkage and improving education on data and digital tools for maternal healthcare shows promise for enhanced monitoring and improved future health. Recognised for streamlining processes and risk prediction, digital tools may enable more person-centred care plans, addressing the gaps in current postnatal care practice.


Subjects
Postnatal Care , Qualitative Research , Humans , Female , Postnatal Care/methods , Pregnancy , Information Storage and Retrieval/methods , Adult , Risk Assessment , Focus Groups , Cardiovascular Diseases/prevention & control , Interviews as Topic , Postpartum Period
2.
PLoS One ; 19(7): e0304915, 2024.
Article in English | MEDLINE | ID: mdl-38950045

ABSTRACT

A trademark's image is usually the first type of indirect contact between a consumer and a product or a service. Companies rely on graphical trademarks as a symbol of quality and instant recognition, seeking to protect them from copyright infringements. A popular defense mechanism is graphical searching, where an image is compared to a large database to find potential conflicts with similar trademarks. Although image retrieval is not a new subject, its state of the art lacks reliable solutions in the Industrial Property (IP) sector, where datasets are practically unrestricted in content, with abstract images for which modeling human perception is a challenging task. Existing Content-based Image Retrieval (CBIR) systems still present several problems, particularly in terms of efficiency and reliability. In this paper, we propose a new CBIR system that overcomes these major limitations. It follows a modular methodology, composed of a set of individual components tasked with the retrieval, maintenance and gradual optimization of trademark image searching, working on large-scale, unlabeled datasets. Its generalization capacity is achieved using multiple feature descriptions, weighted separately, and combined to represent a single similarity score. Images are evaluated for general features, edge maps, and regions of interest, using a method based on Watershedding K-Means segments. We propose an image recovery process that relies on a new similarity measure between all feature descriptions. New trademark images are added every day to ensure up-to-date results. The proposed system showcases a timely retrieval speed, with 95% of searches presented within 10 seconds and a mean average precision of 93.7%, supporting its applicability to real-world IP protection scenarios.
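The multi-descriptor design described above — separate similarity scores per feature type, weighted separately and combined into a single score — can be sketched as follows. The feature names and weights are illustrative assumptions, not the paper's actual parameters.

```python
def combined_similarity(scores, weights):
    """Weighted combination of per-descriptor similarity scores (each in [0, 1])."""
    total_w = sum(weights.values())
    return sum(weights[k] * scores[k] for k in scores) / total_w

# Hypothetical per-descriptor scores for one query/candidate pair.
query_vs_candidate = {"general": 0.92, "edges": 0.78, "regions": 0.85}
weights = {"general": 0.5, "edges": 0.3, "regions": 0.2}

score = combined_similarity(query_vs_candidate, weights)  # single ranking score
```

Candidates would then be ranked by this combined score, so a weak match on one descriptor (e.g., edges) can be compensated by strong matches on the others.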


Subjects
Intellectual Property , Humans , Information Storage and Retrieval/methods , Databases, Factual , Algorithms , Image Processing, Computer-Assisted/methods
3.
PLoS One ; 19(7): e0304009, 2024.
Article in English | MEDLINE | ID: mdl-38985790

ABSTRACT

The burgeoning field of fog computing introduces a transformative computing paradigm with extensive applications across diverse sectors. At the heart of this paradigm lies the pivotal role of edge servers, which are entrusted with critical computing and storage functions. The optimization of these servers' storage capacities emerges as a crucial factor in augmenting the efficacy of fog computing infrastructures. This paper presents a novel storage optimization algorithm, dubbed LIRU (Low Interference Recently Used), which synthesizes the strengths of the LIRS (Low Interference Recency Set) and LRU (Least Recently Used) replacement algorithms. Set against the backdrop of constrained storage resources, this research endeavours to formulate an algorithm that optimizes storage space utilization, elevates data access efficiency, and diminishes access latencies. The investigation initiates a comprehensive analysis of the storage resources available on edge servers, pinpointing the essential considerations for optimization algorithms: storage resource utilization and data access frequency. The study then constructs an optimization model that harmonizes data frequency with cache capacity, employing optimization theory to discern the optimal solution for storage maximization. Subsequent experimental validations of the LIRU algorithm underscore its superiority over conventional replacement algorithms, showcasing significant improvements in storage utilization, data access efficiency, and reduced access delays. Notably, the LIRU algorithm registers a 5% increment in one-hop hit ratio relative to the LFU algorithm, a 66% enhancement over the LRU algorithm, and a 14% elevation in system hit ratio against the LRU algorithm. Moreover, it curtails the average system response time by 2.4% and 16.5% compared to the LRU and LFU algorithms, respectively, particularly in scenarios involving large cache sizes. 
This research not only sheds light on the intricacies of edge server storage optimization but also significantly propels the performance and efficiency of the broader fog computing ecosystem. Through these insights, the study contributes a valuable framework for enhancing data management strategies within fog computing architectures, marking a noteworthy advancement in the field.
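LIRU builds on LRU-style recency eviction. Since the LIRU implementation itself is not published with the abstract, the following is a minimal sketch of the LRU baseline it extends, assuming a simple key-value cache with a fixed capacity:

```python
from collections import OrderedDict

class LRUCache:
    """Least Recently Used cache: evicts the entry that was accessed longest ago."""

    def __init__(self, capacity):
        self.capacity = capacity
        self._store = OrderedDict()  # insertion order tracks recency

    def get(self, key):
        if key not in self._store:
            return None  # cache miss
        self._store.move_to_end(key)  # mark as most recently used
        return self._store[key]

    def put(self, key, value):
        if key in self._store:
            self._store.move_to_end(key)
        self._store[key] = value
        if len(self._store) > self.capacity:
            self._store.popitem(last=False)  # evict least recently used
```

LIRS-style schemes refine this by also tracking inter-reference recency, so that items accessed once in a long scan do not evict frequently reused items; LIRU, per the abstract, additionally weighs access frequency against cache capacity.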


Subjects
Algorithms , Information Storage and Retrieval/methods , Cloud Computing
4.
Medwave ; 24(5): e2781, 2024 Jun 17.
Article in English | MEDLINE | ID: mdl-38885522

ABSTRACT

Introduction: Updating recommendations for guidelines requires a comprehensive and efficient literature search. Although new information platforms are available for developing groups, their relative contributions to this purpose remain uncertain. Methods: As part of a review/update of eight selected evidence-based recommendations for type 2 diabetes, we evaluated the following five literature search approaches (targeting systematic reviews, using predetermined criteria): PubMed for MEDLINE, Epistemonikos database basic search, Epistemonikos database using a structured search strategy, Living overview of evidence (L.OVE) platform, and TRIP database. Three reviewers independently classified the retrieved references as definitely eligible, probably eligible, or not eligible. Those falling in the same "definitely" categories for all reviewers were labelled as "true" positives/negatives. The rest went to re-assessment and if found eligible/not eligible by consensus became "false" negatives/positives, respectively. We described the yield for each approach and computed "diagnostic accuracy" measures and agreement statistics. Results: Altogether, the five approaches identified 318 to 505 references for the eight recommendations, from which reviewers considered 4.2 to 9.4% eligible after the two rounds. While PubMed outperformed the other approaches (diagnostic odds ratio 12.5 versus 2.6 to 5.3), no single search approach returned eligible references for all recommendations. Individually, searches found up to 40% of all eligible references (n = 71), and no combination of any three approaches could find over 80% of them. Kappa statistics for retrieval between searches were very poor (9 out of 10 paired comparisons did not surpass the chance-expected agreement). Conclusion: Among the information platforms assessed, PubMed appeared to be more efficient in updating this set of recommendations.
However, the very poor agreement among search approaches in the reference yield demands that developing groups add information from several (probably more than three) sources for this purpose. Further research is needed to replicate our findings and enhance our understanding of how to efficiently update recommendations.
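The "diagnostic accuracy" measures reported above derive from a 2x2 table of true/false positives and negatives for each search approach; a minimal sketch, with illustrative counts rather than the study's data:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard 2x2 retrieval metrics; the diagnostic odds ratio is (tp*tn)/(fp*fn)."""
    return {
        "sensitivity": tp / (tp + fn),   # recall: eligible references found
        "specificity": tn / (tn + fp),   # ineligible references excluded
        "precision": tp / (tp + fp),     # retrieved references that were eligible
        "dor": (tp * tn) / (fp * fn),    # diagnostic odds ratio
    }

# Illustrative counts for one hypothetical search approach.
metrics = diagnostic_metrics(tp=8, fp=2, fn=2, tn=88)
```

A higher diagnostic odds ratio, as reported for PubMed, means the search discriminates eligible from ineligible references more strongly overall.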


Subjects
Diabetes Mellitus, Type 2 , Evidence-Based Medicine , Practice Guidelines as Topic , Humans , Colombia , Databases, Bibliographic , Information Storage and Retrieval/methods , Information Storage and Retrieval/standards
5.
Am J Nurs ; 124(7): 40-50, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38900123

ABSTRACT

This is the third article in a new series designed to provide readers with insight into educating nurses about evidence-based decision-making (EBDM). It builds on AJN's award-winning previous series-Evidence-Based Practice, Step by Step and EBP 2.0: Implementing and Sustaining Change (to access both series, go to http://links.lww.com/AJN/A133). This follow-up series on EBDM will address how to teach and facilitate learning about the evidence-based practice (EBP) and quality improvement (QI) processes and how they impact health care quality. This series is relevant for all nurses interested in EBP and QI, especially DNP faculty and students. The brief case scenario included in each article describes one DNP student's journey. To access previous articles in this EBDM series, go to http://links.lww.com/AJN/A256.


Subjects
Evidence-Based Nursing , Humans , Quality Improvement , Education, Nursing, Graduate , Information Storage and Retrieval/methods
6.
Bioinformatics ; 40(Supplement_1): i119-i129, 2024 Jun 28.
Article in English | MEDLINE | ID: mdl-38940167

ABSTRACT

SUMMARY: Recent proprietary large language models (LLMs), such as GPT-4, have achieved a milestone in tackling diverse challenges in the biomedical domain, ranging from multiple-choice questions to long-form generations. To address challenges that still cannot be handled with the encoded knowledge of LLMs, various retrieval-augmented generation (RAG) methods have been developed by searching documents from the knowledge corpus and appending them unconditionally or selectively to the input of LLMs for generation. However, when applying existing methods to different domain-specific problems, poor generalization becomes apparent, leading to fetching incorrect documents or making inaccurate judgments. In this paper, we introduce Self-BioRAG, a reliable framework for biomedical text that specializes in generating explanations, retrieving domain-specific documents, and self-reflecting on generated responses. We utilize 84k filtered biomedical instruction sets to train Self-BioRAG so that it can assess its generated explanations with customized reflective tokens. Our work shows that domain-specific components, such as a retriever, a domain-related document corpus, and instruction sets, are necessary for adhering to domain-related instructions. Using three major medical question-answering benchmark datasets, experimental results of Self-BioRAG demonstrate significant performance gains, achieving a 7.2% absolute improvement on average over the state-of-the-art open-foundation model with a parameter size of 7B or less. Similarly, Self-BioRAG outperforms RAG by 8% in Rouge-1 score on average in generating more proficient answers on two long-form question-answering benchmarks. Overall, our analysis shows that Self-BioRAG finds the clues in the question, retrieves relevant documents if needed, and understands how to answer with information from retrieved documents and encoded knowledge, as a medical expert does.
We release our data and code for training our framework components and model weights (7B and 13B) to enhance capabilities in biomedical and clinical domains. AVAILABILITY AND IMPLEMENTATION: Self-BioRAG is available at https://github.com/dmis-lab/self-biorag.
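A selective RAG loop of the kind described — retrieve documents, keep them only when relevant, and append them to the model input — can be sketched as follows. The naive token-overlap retriever stands in for Self-BioRAG's trained retriever and reflective tokens; the corpus and thresholds are illustrative assumptions.

```python
def retrieve(query, corpus, k=2):
    """Rank documents by token overlap with the query (stand-in for a trained retriever)."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query, corpus, k=2, min_overlap=1):
    """Selective augmentation: append evidence only when it actually overlaps the query."""
    q = set(query.lower().split())
    docs = [d for d in retrieve(query, corpus, k)
            if len(q & set(d.lower().split())) >= min_overlap]
    context = "\n".join(f"[doc] {d}" for d in docs)
    return f"{context}\nQuestion: {query}" if docs else f"Question: {query}"
```

The "selective" step matters: appending irrelevant documents unconditionally is exactly the failure mode (noisy context, incorrect judgments) that the abstract attributes to generic RAG.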


Subjects
Information Storage and Retrieval , Humans , Information Storage and Retrieval/methods , Natural Language Processing
7.
PLoS One ; 19(6): e0306291, 2024.
Article in English | MEDLINE | ID: mdl-38941309

ABSTRACT

This study explores the application of deep learning (DL) network models to Internet of Things (IoT) database query and optimization. It first analyzes the architecture of IoT database queries, then explores the DL network model, and finally optimizes the DL network model through optimization strategies. The advantages of the optimized model are verified through experiments. Experimental results show that the optimized model has higher efficiency than other models in the model training and parameter optimization stages. Especially when the data volume is 2000, the model training time and parameter optimization time of the optimized model are remarkably lower than those of the traditional model. In terms of resource consumption, the Central Processing Unit (CPU) and Graphics Processing Unit (GPU) usage and memory usage of all models increase as the data volume rises. However, the optimized model exhibits better performance on energy consumption. In throughput analysis, the optimized model can maintain high transaction numbers and data volumes per second when handling large data requests, especially at a data volume of 4000, and its peak-time processing capacity exceeds that of other models. Regarding latency, although the latency of all models increases with data volume, the optimized model performs better in database query response time and data processing latency. The results of this study reveal the optimized model's superior performance in processing and optimizing IoT database queries and provide a valuable reference for IoT data processing and DL model optimization. These findings help promote the application of DL technology in the IoT field, especially in scenarios that involve large-scale data and require efficient processing.


Subjects
Databases, Factual , Deep Learning , Internet of Things , Neural Networks, Computer , Humans , Information Storage and Retrieval/methods
8.
PLoS One ; 19(6): e0305690, 2024.
Article in English | MEDLINE | ID: mdl-38917118

ABSTRACT

This study aims to develop a digital retrieval system for art museums to solve the problems of inaccurate information and low retrieval efficiency in the digital management of cultural heritage. By introducing an improved Genetic Algorithm (GA), digital management and access efficiency are enhanced, bringing substantial optimization and innovation to the digital management of cultural heritage. Based on the collection of art museums, this study first integrates the collection's images, texts, and metadata with multi-source intelligent information to achieve a more accurate and comprehensive description of digital content. Second, a GA is introduced, and a GA-based Convolutional Neural Network (GA2CNN) optimization model combining domain knowledge is proposed. Moreover, the convergence speed of the traditional GA is improved to adapt to the characteristics of cultural heritage data. Lastly, the Convolutional Neural Network (CNN), GA, and GA2CNN are compared to verify the proposed system's superiority. The results show that the actual observed value of the sample output is 2.62 across all models. For sample number 5, compared with the actual value of 2.62, the predicted values of the GA2CNN and GA models are 2.6177 and 2.6313, and their errors are 0.0023 and 0.0113. The CNN model's predicted value is 2.6237, with an error of 0.0037. The network fitting accuracy of the optimized GA2CNN model is thus high, with predicted values very close to the actual value. The digital retrieval system integrated with the GA2CNN model performs well in enhancing retrieval efficiency and accuracy. This study provides technical support for the digital organization and display of cultural heritage and offers valuable references for innovative exploration of museum information management in the digital era.


Subjects
Algorithms , Museums , Neural Networks, Computer , Information Storage and Retrieval/methods , Art , Humans
9.
BMC Med Res Methodol ; 24(1): 139, 2024 Jun 25.
Article in English | MEDLINE | ID: mdl-38918736

ABSTRACT

BACKGROUND: Large language models (LLMs) that can efficiently screen and identify studies meeting specific criteria would streamline literature reviews. Additionally, those capable of extracting data from publications would enhance knowledge discovery by reducing the burden on human reviewers. METHODS: We created an automated pipeline utilizing OpenAI GPT-4 32K API version "2023-05-15" to evaluate the accuracy of the LLM GPT-4 responses to queries about published papers on HIV drug resistance (HIVDR) with and without an instruction sheet. The instruction sheet contained specialized knowledge designed to assist a person trying to answer questions about an HIVDR paper. We designed 60 questions pertaining to HIVDR and created markdown versions of 60 published HIVDR papers in PubMed. We presented the 60 papers to GPT-4 in four configurations: (1) all 60 questions simultaneously; (2) all 60 questions simultaneously with the instruction sheet; (3) each of the 60 questions individually; and (4) each of the 60 questions individually with the instruction sheet. RESULTS: GPT-4 achieved a mean accuracy of 86.9%, which was 24.0% higher than when the answers to papers were permuted. The overall recall and precision were 72.5% and 87.4%, respectively. The standard deviation of three replicates for the 60 questions ranged from 0 to 5.3%, with a median of 1.2%. The instruction sheet did not significantly increase GPT-4's accuracy, recall, or precision. GPT-4 was more likely to provide false positive answers when the 60 questions were submitted individually compared to when they were submitted together. CONCLUSIONS: GPT-4 reproducibly answered 3600 questions about 60 papers on HIVDR with moderately high accuracy, recall, and precision. The instruction sheet's failure to improve these metrics suggests that more sophisticated approaches are necessary. Either enhanced prompt engineering or fine-tuning an open-source model could further improve an LLM's ability to answer questions about highly specialized HIVDR papers.


Subjects
HIV Infections , Humans , Reproducibility of Results , HIV Infections/drug therapy , PubMed , Publications/statistics & numerical data , Publications/standards , Information Storage and Retrieval/methods , Information Storage and Retrieval/standards , Software
10.
J Med Libr Assoc ; 112(1): 42-47, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38911529

ABSTRACT

Background: By defining search strategies and related database exports as code/scripts and data, librarians and information professionals can expand the mandate of research data management (RDM) infrastructure to include this work. This new initiative aimed to create a space in McGill University's institutional data repository for our librarians to deposit and share their search strategies for knowledge syntheses (KS). Case Presentation: The authors, a health sciences librarian and an RDM specialist, created a repository collection of librarian-authored knowledge synthesis (KS) searches in McGill University's Borealis Dataverse collection. We developed and hosted a half-day "Dataverse-a-thon" where we worked with a team of health sciences librarians to develop a standardized KS data management plan (DMP), search reporting documentation, Dataverse software training, and how-to guidance for the repository. Conclusion: In addition to better documentation and tracking of KS searches at our institution, the KS Dataverse collection enables sharing of searches among colleagues with discoverable metadata fields for searching within deposited searches. While the initial creation of the DMP and documentation took about six hours, the subsequent deposit of search strategies into the institutional data repository requires minimal effort (e.g., 5-10 minutes on average per deposit). The Dataverse collection also empowers librarians to retain intellectual ownership over search strategies as valuable stand-alone research outputs and raise the visibility of their labor. Overall, institutional data repositories provide specific benefits in facilitating compliance both with PRISMA-S guidance and with RDM best practices.


Subjects
Information Storage and Retrieval , Humans , Information Storage and Retrieval/methods , Information Dissemination/methods , Data Management/methods , Libraries, Medical/organization & administration , Librarians/statistics & numerical data
11.
J Med Libr Assoc ; 112(1): 33-41, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38911530

ABSTRACT

Objective: With exponential growth in the publication of interprofessional education (IPE) research studies, it has become more difficult to find relevant literature and stay abreast of the latest research. To address this gap, we developed, evaluated, and validated search strategies for IPE studies in PubMed, to improve future access to and synthesis of IPE research. These search strategies, or search hedges, provide comprehensive, validated sets of search terms for IPE publications. Methods: The search strategies were created for PubMed using relative recall methodology. The research methods followed the guidance of previous search hedge and search filter validation studies in creating a gold standard set of relevant references using systematic reviews, having expert searchers identify and test search terms, and using relative recall calculations to validate the searches' performance against the gold standard set. Results: The three recommended search hedges for IPE studies presented had recall of 71.5%, 82.7%, and 95.1%; the first more focused for efficient literature searching, the last with high recall for comprehensive literature searching, and the remaining hedge as a middle ground between the other two options. Conclusion: These validated search hedges can be used in PubMed to expedite finding relevant scholarship, staying up to date with IPE research, and conducting literature reviews and evidence syntheses.
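Relative recall validation, as used above, compares each search's retrieved set against a gold-standard set of known relevant references assembled from systematic reviews; as a sketch:

```python
def relative_recall(retrieved_ids, gold_ids):
    """Fraction of the gold-standard relevant set captured by a search strategy."""
    gold = set(gold_ids)
    return len(set(retrieved_ids) & gold) / len(gold)

# Illustrative PMIDs: the search finds two of the four gold-standard references.
recall = relative_recall(retrieved_ids=[101, 102, 999], gold_ids=[101, 102, 103, 104])
```

A hedge with 95.1% relative recall, for instance, captures that fraction of the gold-standard references; the trade-off is that higher-recall hedges typically retrieve more irrelevant records to screen.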


Subjects
Information Storage and Retrieval , Interprofessional Education , PubMed , Humans , Information Storage and Retrieval/methods , Interprofessional Education/methods
12.
J Med Libr Assoc ; 112(1): 22-32, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38911528

ABSTRACT

Objective: There is a need for additional comprehensive and validated filters to find relevant references more efficiently in the growing body of research on immigrant populations. Our goal was to create reliable search filters that direct librarians and researchers to pertinent studies indexed in PubMed about health topics specific to immigrant populations. Methods: We applied a systematic and multi-step process that combined information from expert input, authoritative sources, automation, and manual review of sources. We established a focused scope and eligibility criteria, which we used to create the development and validation sets. We formed a term ranking system that resulted in the creation of two filters: an immigrant-specific and an immigrant-sensitive search filter. Results: When tested against the validation set, the specific filter had a sensitivity of 88.09%, specificity of 97.26%, precision of 97.88%, and NNR of 1.02. The sensitive filter had a sensitivity of 97.76% when tested against the development set, and a sensitivity of 97.14%, specificity of 82.05%, precision of 88.59%, accuracy of 90.94%, and NNR of 1.13 (see Table 1) when tested against the validation set. Conclusion: We accomplished our goal of developing PubMed search filters to help researchers retrieve studies about immigrants. The specific and sensitive PubMed search filters give information professionals and researchers options to maximize the specificity and precision or increase the sensitivity of their search for relevant studies in PubMed. Both search filters generated strong performance measurements and can be used as-is, to capture a subset of immigrant-related literature, or adapted and revised to fit the unique research needs of specific project teams (e.g., remove US-centric language, add location-specific terminology, or expand the search strategy to include terms for the topics being investigated in the immigrant population identified by the filter). There is also a potential for teams to employ the search filter development process described here for their own topics and use.


Assuntos
Emigrantes e Imigrantes , PubMed , Emigrantes e Imigrantes/estatística & dados numéricos , Humanos , Armazenamento e Recuperação da Informação/métodos , Armazenamento e Recuperação da Informação/normas , Ferramenta de Busca/normas
13.
Int J Mol Sci ; 25(12)2024 Jun 11.
Article in English | MEDLINE | ID: mdl-38928155

ABSTRACT

Polymerase Chain Reaction (PCR) amplification is widely used for retrieving information from DNA storage. During the PCR amplification process, nonspecific pairing between the 3' end of the primer and the DNA sequence can cause cross-talk in the amplification reaction, leading to the generation of interfering sequences and reduced amplification accuracy. To address this issue, we propose an efficient coding algorithm for PCR amplification information retrieval (ECA-PCRAIR). This algorithm employs variable-length scanning and pruning optimization to construct a codebook that maximizes storage density while satisfying traditional biological constraints. Subsequently, a codeword search tree is constructed based on the primer library to optimize the codebook, and a variable-length interleaver is used for constraint detection and correction, thereby minimizing the likelihood of nonspecific pairing. Experimental results demonstrate that ECA-PCRAIR can reduce the probability of nonspecific pairing between the 3' end of the primer and the DNA sequence to 2-25%, enhancing the robustness of the DNA sequences. Additionally, ECA-PCRAIR achieves a storage density of 2.14-3.67 bits per nucleotide (bits/nt), significantly improving storage capacity.
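The "traditional biological constraints" that a DNA-storage codebook must satisfy typically include balanced GC content and bounded homopolymer runs; a sketch of such a constraint check follows. The thresholds are illustrative assumptions, not ECA-PCRAIR's actual parameters.

```python
def satisfies_constraints(seq, gc_range=(0.4, 0.6), max_run=3):
    """Check common DNA-encoding constraints on a candidate codeword:
    GC content within gc_range and no homopolymer run longer than max_run."""
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    if not (gc_range[0] <= gc <= gc_range[1]):
        return False
    run = 1
    for prev, cur in zip(seq, seq[1:]):
        run = run + 1 if prev == cur else 1
        if run > max_run:
            return False
    return True
```

A codebook builder of the kind the abstract describes would enumerate or prune candidate codewords with a check like this, then additionally screen them against the primer library to avoid 3' end complementarity that causes nonspecific pairing.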


Subjects
Algorithms , Polymerase Chain Reaction , Polymerase Chain Reaction/methods , DNA/genetics , Information Storage and Retrieval/methods , DNA Primers/genetics , Base Sequence
14.
BMC Med Res Methodol ; 24(1): 135, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38907198

ABSTRACT

BACKGROUND: As evidence related to the COVID-19 pandemic surged, databases, platforms, and repositories evolved with features and functions to assist users in promptly finding the most relevant evidence. In response, research synthesis teams adopted novel searching strategies to sift through the vast amount of evidence to synthesize and disseminate the most up-to-date evidence. This paper explores the key database features that facilitated systematic searching for rapid evidence synthesis during the COVID-19 pandemic to inform knowledge management infrastructure during future global health emergencies. METHODS: This paper outlines the features and functions of previously existing and newly created evidence sources routinely searched as part of the NCCMT's Rapid Evidence Service methods, including databases, platforms, and repositories. Specific functions of each evidence source were assessed as they pertain to searching in the context of a public health emergency, including the topics of indexed citations, the level of evidence of indexed citations, and specific usability features of each evidence source. RESULTS: Thirteen evidence sources were assessed, of which four were newly created and nine were either pre-existing or adapted from previously existing resources. Evidence sources varied in topics indexed, level of evidence indexed, and specific searching functions. CONCLUSION: This paper offers insights into which features enabled systematic searching for the completion of rapid reviews to inform decision makers within 5-10 days. These findings provide guidance for knowledge management strategies and evidence infrastructures during future public health emergencies.


Subjects
COVID-19 , Databases, Factual , Public Health , SARS-CoV-2 , COVID-19/epidemiology , Humans , Public Health/methods , Pandemics , Emergencies , Information Storage and Retrieval/methods
15.
Nutr Clin Pract ; 39(4): 743-750, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38864650

ABSTRACT

From its first printing in 1879 to when publication ceased in 2004, the Index Medicus had proved invaluable for persons wishing to conduct healthcare-related research. With the loss of this resource and the rapid expansion of alternative, online sources, it is vital that persons understand how to appropriately search for and use this information. The purpose of this review is to outline the information sources available, discuss how to use current search technology to best obtain relevant information while minimizing nonproductive references, and give the author's opinion on the reliability of the various informational sources available. Topics to be discussed will include Medical Subject Headings and PICO searches and sources ranging from the National Library of Medicine and Cochrane Reviews to Wikipedia and other sites, such as associations and commercial interest sites.


Subjects
Internet , Humans , Information Storage and Retrieval/methods , Reproducibility of Results
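The PICO search strategy mentioned in this review can be illustrated with a small sketch: the synonyms for each PICO concept (free-text phrases and MeSH headings) are ORed together, and the resulting concept blocks are ANDed. The function and the example terms below are invented for illustration; they are not taken from the review.

```python
# Hypothetical sketch: composing a PubMed-style Boolean query from PICO
# components. Synonyms within a concept are ORed; concepts are ANDed.

def pico_query(population, intervention, comparison, outcome):
    """Join each PICO concept's terms with OR, then AND the non-empty blocks."""
    def block(terms):
        return "(" + " OR ".join(terms) + ")" if terms else ""
    blocks = [block(t) for t in (population, intervention, comparison, outcome)]
    return " AND ".join(b for b in blocks if b)

query = pico_query(
    population=['"postpartum women"', '"puerperium"[MeSH]'],
    intervention=['"blood pressure monitoring"'],
    comparison=[],  # empty concepts are simply omitted
    outcome=['"cardiovascular diseases"[MeSH]', '"hypertension"'],
)
```

An empty comparison block is dropped rather than ANDed in, which mirrors the common practice of leaving the C of PICO out of the search string when no comparator is specified.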
16.
Bioinformatics ; 40(6)2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830083

ABSTRACT

MOTIVATION: Answering and solving complex problems with a large language model (LLM) in a domain such as biomedicine is a challenging task that requires both factual consistency and logic. LLMs often suffer from major limitations, such as hallucinating false or irrelevant information or being influenced by noisy data. These issues can compromise the trustworthiness, accuracy, and compliance of LLM-generated text and insights. RESULTS: Knowledge Retrieval Augmented Generation ENgine (KRAGEN) is a new tool that combines knowledge graphs, Retrieval Augmented Generation (RAG), and advanced prompting techniques to solve complex problems with natural language. KRAGEN converts knowledge graphs into a vector database and uses RAG to retrieve relevant facts from it. KRAGEN applies an advanced prompting technique, graph-of-thoughts (GoT), to dynamically break a complex problem down into smaller subproblems, solves each subproblem using the relevant knowledge retrieved through the RAG framework (which limits hallucinations), and finally consolidates the subproblems into a solution. KRAGEN's graph visualization allows the user to interact with and evaluate the quality of the solution's GoT structure and logic. AVAILABILITY AND IMPLEMENTATION: KRAGEN is deployed by running its custom Docker containers. KRAGEN is available as open source from GitHub at: https://github.com/EpistasisLab/KRAGEN.


Subjects
Software , Natural Language Processing , Problem Solving , Algorithms , Information Storage and Retrieval/methods , Humans , Computational Biology/methods , Databases, Factual
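The retrieval step this abstract describes (facts pulled from a knowledge graph to ground a prompt) can be approximated with a toy sketch. This is an illustrative stand-in, not KRAGEN's code: the triples are invented, and bag-of-words cosine similarity stands in for the learned embeddings a real vector database would use.

```python
# Toy RAG retrieval over verbalized knowledge-graph triples.
# All data and the similarity measure are illustrative only.
import math
import re
from collections import Counter

triples = [
    ("aspirin", "treats", "inflammation"),
    ("aspirin", "interacts_with", "warfarin"),
    ("metformin", "treats", "type 2 diabetes"),
]
# Verbalize each triple into a short retrievable fact.
facts = [f"{s} {p.replace('_', ' ')} {o}" for s, p, o in triples]

def vec(text):
    """Bag-of-words vector (a crude stand-in for an embedding)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, k=2):
    """Return the k facts most similar to the question."""
    q = vec(question)
    return sorted(facts, key=lambda f: cosine(q, vec(f)), reverse=True)[:k]

context = retrieve("What does aspirin interact with?")
prompt = "Answer using only these facts:\n" + "\n".join(context)
```

The retrieved facts are then prepended to the question as grounding context, which is the step that constrains the model and limits hallucination.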
17.
SLAS Technol ; 29(3): 100135, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38703999

ABSTRACT

Laboratory management automation is essential for achieving interoperability in the domain of experimental research and accelerating scientific discovery. The integration of resources and the sharing of knowledge across organisations enable scientific discoveries to be accelerated by increasing the productivity of laboratories, optimising funding efficiency, and addressing emerging global challenges. This paper presents a novel framework for digitalising and automating the administration of research laboratories through The World Avatar, an all-encompassing dynamic knowledge graph. This Digital Laboratory Framework serves as a flexible tool, enabling users to efficiently leverage data from diverse systems and formats without being confined to a specific software or protocol. Establishing dedicated ontologies and agents and combining them with technologies such as QR codes, RFID tags, and mobile apps, enabled us to develop modular applications that tackle some key challenges related to lab management. Here, we showcase an automated tracking and intervention system for explosive chemicals as well as an easy-to-use mobile application for asset management and information retrieval. Implementing these, we have achieved semantic linking of BIM and BMS data with laboratory inventory and chemical knowledge. Our approach can capture the crucial data points and reduce inventory processing time. All data provenance is recorded following the FAIR principles, ensuring its accessibility and interoperability.


Subjects
Automation, Laboratory , Automation, Laboratory/methods , Laboratories , Information Storage and Retrieval/methods
18.
J Med Internet Res ; 26: e52655, 2024 May 30.
Article in English | MEDLINE | ID: mdl-38814687

ABSTRACT

BACKGROUND: Since the beginning of the COVID-19 pandemic, >1 million studies have been collected within the COVID-19 Open Research Dataset, a corpus of manuscripts created to accelerate research against the disease. Their related abstracts hold a wealth of information that remains largely unexplored and difficult to search due to its unstructured nature. Keyword-based search is the standard approach, which allows users to retrieve the documents of a corpus that contain (all or some of) the words in a target list. This type of search, however, does not provide visual support to the task and is not suited to expressing complex queries or compensating for missing specifications. OBJECTIVE: This study aims to consider small graphs of concepts and exploit them for expressing graph searches over existing COVID-19-related literature, leveraging the increasing use of graphs to represent and query scientific knowledge and providing a user-friendly search and exploration experience. METHODS: We considered the COVID-19 Open Research Dataset corpus and summarized its content by annotating the publications' abstracts using terms selected from the Unified Medical Language System and the Ontology of Coronavirus Infectious Disease. Then, we built a co-occurrence network that includes all relevant concepts mentioned in the corpus, establishing connections when their mutual information is relevant. A sophisticated graph query engine was built to allow the identification of the best matches of graph queries on the network. It also supports partial matches and suggests potential query completions using shortest paths. 
RESULTS: We built a large co-occurrence network, consisting of 128,249 entities and 47,198,965 relationships; the GRAPH-SEARCH interface allows users to explore the network by formulating or adapting graph queries; it produces a bibliography of publications, which are globally ranked; and each publication is further associated with the specific parts of the query that it explains, thereby allowing the user to understand each aspect of the matching. CONCLUSIONS: Our approach supports the process of query formulation and evidence search upon a large text corpus; it can be reapplied to any scientific domain where documents corpora and curated ontologies are made available.


Subjects
Algorithms , COVID-19 , SARS-CoV-2 , COVID-19/epidemiology , Humans , Pandemics , Information Storage and Retrieval/methods , Biomedical Research/methods , Unified Medical Language System , Search Engine
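The network construction described above (connecting concepts when their mutual information is relevant) can be sketched with pointwise mutual information over a toy corpus. The documents, terms, and the positive-PMI threshold below are illustrative, not the paper's data or parameters.

```python
# Toy co-occurrence network: link two concepts with an edge when their
# pointwise mutual information (PMI) across the documents is positive.
import math
from itertools import combinations

docs = [
    {"covid-19", "fever", "cough"},
    {"covid-19", "fever", "anosmia"},
    {"influenza", "fever"},
]
n = len(docs)

def p(*terms):
    """Fraction of documents containing all the given terms."""
    return sum(1 for d in docs if all(t in d for t in terms)) / n

def pmi(a, b):
    pa, pb, pab = p(a), p(b), p(a, b)
    return math.log2(pab / (pa * pb)) if pab else float("-inf")

vocab = sorted(set().union(*docs))
edges = {
    (a, b): pmi(a, b)
    for a, b in combinations(vocab, 2)
    if pmi(a, b) > 0  # keep only pairs that co-occur more than chance
}
```

Note that a ubiquitous term such as "fever" here gets PMI 0 with everything, so it forms no edges: raw co-occurrence counts alone would have linked it to every concept, which is exactly what the mutual-information criterion filters out.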
19.
J Am Med Inform Assoc ; 31(7): 1569-1577, 2024 Jun 20.
Article in English | MEDLINE | ID: mdl-38718216

ABSTRACT

OBJECTIVE: Social media-based public health research is crucial for epidemic surveillance, but most studies identify relevant corpora with keyword matching. This study develops a system to streamline the process of curating colloquial medical dictionaries. We demonstrate the pipeline by curating a Unified Medical Language System (UMLS)-colloquial symptom dictionary from COVID-19-related tweets as proof of concept. METHODS: COVID-19-related tweets from February 1, 2020, to April 30, 2022 were used. The pipeline includes three modules: a named entity recognition module to detect symptoms in tweets; an entity normalization module to aggregate detected entities; and a mapping module that iteratively maps entities to Unified Medical Language System concepts. A random sample of 500 entities was drawn from the final dictionary for accuracy validation. Additionally, we conducted a symptom frequency distribution analysis to compare our dictionary to a pre-defined lexicon from previous research. RESULTS: We identified 498,480 unique symptom entity expressions from the tweets. Pre-processing reduced the number to 18,226. The final dictionary contains 38,175 unique expressions of symptoms that can be mapped to 966 UMLS concepts (accuracy = 95%). Symptom distribution analysis found that our dictionary detects more symptoms and is effective at identifying psychiatric disorders such as anxiety and depression, often missed by pre-defined lexicons. CONCLUSIONS: This study advances public health research by implementing a novel, systematic pipeline for curating symptom lexicons from social media data. The final lexicon's high accuracy, validated by medical professionals, underscores the potential of this methodology to reliably interpret and categorize vast amounts of unstructured social media data into actionable medical insights across diverse linguistic and regional landscapes.


Subjects
COVID-19 , Deep Learning , Social Media , Unified Medical Language System , Humans , Public Health , Information Storage and Retrieval/methods
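The normalization and mapping modules from the pipeline above can be sketched as follows. The synonym table and the UMLS-style concept IDs are invented for illustration; they are not real UMLS mappings, and the actual paper maps entities iteratively with a learned model rather than a lookup table.

```python
# Illustrative sketch: clean colloquial symptom mentions, then map the
# normalized form to a canonical concept ID via a (hypothetical) table.
import re

synonyms = {
    "can't smell": "C0003126",   # anosmia (hypothetical mapping)
    "loss of smell": "C0003126",
    "feeling down": "C0011570",  # depression (hypothetical mapping)
    "sore throat": "C0242429",   # hypothetical mapping
}

def normalize(mention):
    """Lowercase, collapse whitespace, strip surrounding punctuation."""
    m = mention.lower()
    m = re.sub(r"\s+", " ", m).strip(" .,!?")
    return m

def map_to_concept(mention):
    """Return the canonical concept ID, or None for unmapped mentions."""
    return synonyms.get(normalize(mention))

tweet_mentions = ["Can't smell!", "  loss of SMELL ", "feeling down..."]
concepts = [map_to_concept(m) for m in tweet_mentions]
```

Aggregating distinct surface forms onto shared concept IDs is what shrinks hundreds of thousands of raw expressions down to a compact dictionary, as the abstract's entity counts illustrate.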
20.
Med Ref Serv Q ; 43(2): 130-151, 2024.
Article in English | MEDLINE | ID: mdl-38722608

ABSTRACT

While LibGuides are widely used in libraries to curate resources for users, there are a number of common problems, including maintenance, design and layout, and curating relevant and concise content. One health sciences library sought to improve its LibGuides, consulting usage statistics, user feedback, and recommendations from the literature to inform decision making. The team recommended a number of changes to make the LibGuides more usable, including creating robust maintenance and content guidelines, scheduling regular updates, and various changes to the format of the guides themselves to make them more user-friendly.


Subjects
Libraries, Medical , Organizational Case Studies , Libraries, Medical/organization & administration , Humans , Information Storage and Retrieval/methods