Results 1 - 20 of 598
1.
Stud Health Technol Inform ; 316: 1151-1155, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176584

ABSTRACT

In clinical research, the analysis of patient cohorts is a widely employed method for investigating relevant healthcare questions. The ability to automatically extract large-scale patient cohorts from hospital systems is vital to unlock the potential of real-world clinical data and answer pivotal medical questions through retrospective research studies. However, existing medical data is often dispersed across various systems and databases, hindering systematic access and interoperability. Even when the data are readily accessible, clinical researchers need to sift through Electronic Medical Records, confirm ethical approval, verify the status of patient consent, check the availability of imaging data, and filter the data based on disease-specific image biomarkers. We present Cohort Builder, a software pipeline designed to facilitate the creation of patient cohorts with predefined baseline characteristics from real-world ophthalmic imaging data and electronic medical records. The applicability of our approach extends beyond ophthalmology to other medical domains with similar requirements such as neurology, cardiology and orthopedics.
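
As an editorial illustration, the following minimal sketch shows the kind of rule-based cohort filtering the abstract describes, assuming a flat pandas table; all column names (consent_status, ethics_approved, oct_available, biomarker_drusen) are hypothetical and not taken from the paper.

```python
# Minimal sketch of cohort filtering in the spirit of Cohort Builder.
# All column names below are hypothetical, not from the paper.
import pandas as pd

records = pd.DataFrame([
    {"patient_id": 1, "consent_status": "granted", "ethics_approved": True,
     "oct_available": True, "biomarker_drusen": True},
    {"patient_id": 2, "consent_status": "revoked", "ethics_approved": True,
     "oct_available": True, "biomarker_drusen": False},
])

# Apply the baseline criteria named in the abstract: consent verified,
# ethics approval confirmed, imaging available, disease-specific biomarker present.
cohort = records[
    (records["consent_status"] == "granted")
    & records["ethics_approved"]
    & records["oct_available"]
    & records["biomarker_drusen"]
]
print(cohort["patient_id"].tolist())  # -> [1]
```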


Subjects
Electronic Health Records, Software, Humans, Diagnostic Imaging, Cohort Studies, Eye Diseases/diagnostic imaging
2.
Stud Health Technol Inform ; 316: 1255-1259, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176609

ABSTRACT

This paper presents a chatbot that simplifies accessing and understanding the open-access records of adverse events related to medical devices in the MAUDE database. The chatbot is powered by generative AI technology, enabling count and search queries. The chatbot uses the openFDA API and GPT-4 model to interpret users' natural language queries, generate appropriate API calls, and summarize adverse event reports. The chatbot also provides a downloadable link to the original reports. The model's performance in generating accurate API calls was assessed and improved by training it with few-shot examples of query-URL pairs. Additionally, the quality of content-based summaries was evaluated by human expert ratings. This initiative is a significant step towards making patient safety data accessible, replicable, and easily manageable by a broader range of researchers.
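
To make the query-generation step concrete, here is a hedged sketch of the kind of openFDA call such a chatbot might emit for a count query; the endpoint and the search/count parameters follow the public openFDA device/event API, while the device name and field choices are illustrative only.

```python
# Sketch of an openFDA call a chatbot might generate from a natural language
# query such as "How many insulin pump malfunctions were reported?".
import requests

resp = requests.get(
    "https://api.fda.gov/device/event.json",
    params={
        "search": 'device.generic_name:"insulin pump"',  # illustrative device
        "count": "event_type.exact",  # count reports grouped by event type
    },
    timeout=30,
)
resp.raise_for_status()
for bucket in resp.json().get("results", []):
    print(bucket["term"], bucket["count"])
```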


Subjects
Natural Language Processing, Humans, Artificial Intelligence, Databases, Factual, Patient Safety, Electronic Health Records
3.
Stud Health Technol Inform ; 316: 761-765, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176905

ABSTRACT

Effective medication management poses significant challenges, particularly when navigating multiple medications with intricate dosages and schedules. This paper presents a prototype mobile application to streamline information retrieval from dense medication leaflets. By utilizing automated information extraction based on large language models, the application seamlessly retrieves pertinent details from the Austrian medicinal product index upon scanning the medication package. This extracted information is organized and displayed within the app, ensuring clarity and accessibility for users. In addition to this core functionality, the application offers a suite of features tailored to facilitate effective medication management. By integrating comprehensive medication information with practical medication management tools, the application empowers users to navigate complex medication regimes with confidence and ease.
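
As a rough stand-in for the paper's LLM-based extraction, this sketch pulls a dosage statement from invented leaflet text with an off-the-shelf extractive question-answering pipeline; the model choice and the leaflet snippet are assumptions, not the app's actual stack.

```python
# Extracting a dosage detail from (invented) leaflet text with an
# extractive QA pipeline, as a simple stand-in for LLM-based extraction.
from transformers import pipeline

qa = pipeline("question-answering", model="deepset/roberta-base-squad2")

leaflet = (
    "Dosage: Adults take one 400 mg tablet every 8 hours after meals. "
    "Do not exceed three tablets in 24 hours."
)
answer = qa(question="What is the recommended dose for adults?", context=leaflet)
print(answer["answer"])  # e.g. "one 400 mg tablet every 8 hours"
```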


Subjects
Mobile Applications, Artificial Intelligence, Humans, Austria, Pamphlets, Drug Labeling, Natural Language Processing
4.
Stud Health Technol Inform ; 316: 827-831, 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39176920

ABSTRACT

Finding relevant information in the biomedical literature increasingly depends on efficient information retrieval (IR) algorithms. Cross-Encoders, Sentence-BERT, and ColBERT are algorithms based on pre-trained language models that use nuanced but computable vector representations of search queries and documents for IR applications. Here we investigate how well these vectorization algorithms estimate relevance labels of biomedical documents for search queries using the OHSUMED dataset. For our evaluation, we compared computed scores to the provided labels using boxplots and Spearman's rank correlations. According to these metrics, we found that Sentence-BERT moderately outperformed the alternative vectorization algorithms and that additional fine-tuning based on a subset of OHSUMED labels yielded little additional benefit. Future research might aim to develop a larger dedicated dataset in order to optimize such methods more systematically, and to evaluate the corresponding functions in IR tools with end-users.
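
A minimal sketch of the evaluation loop described above: score query-document pairs with a Sentence-BERT bi-encoder and correlate the scores with graded relevance labels via Spearman's rank correlation. The model name and the toy labels are placeholders rather than the paper's exact setup.

```python
# Score query-document pairs with a Sentence-BERT bi-encoder and compare
# the scores against graded relevance labels (OHSUMED-style).
from scipy.stats import spearmanr
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder checkpoint

query = "treatment of acute myocardial infarction"
docs = [
    "Thrombolytic therapy outcomes in acute myocardial infarction.",
    "Dietary fiber intake and colon health.",
    "Beta-blockers after myocardial infarction: a review.",
]
labels = [2, 0, 1]  # toy graded relevance labels

q_emb = model.encode(query, convert_to_tensor=True)
d_emb = model.encode(docs, convert_to_tensor=True)
scores = util.cos_sim(q_emb, d_emb)[0].tolist()  # cosine similarities

rho, _ = spearmanr(scores, labels)
print(f"Spearman rho: {rho:.2f}")
```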


Subjects
Algorithms, Information Storage and Retrieval, Natural Language Processing, Information Storage and Retrieval/methods, Humans
5.
Campbell Syst Rev ; 20(3): e1432, 2024 Sep.
Article in English | MEDLINE | ID: mdl-39176233

ABSTRACT

The search methods used in systematic reviews provide the foundation for establishing the body of literature from which conclusions are drawn and recommendations made. Searches should aim to be comprehensive and reporting of search methods should be transparent and reproducible. Campbell Collaboration systematic reviews strive to adhere to the best methodological guidance available for this type of searching. The current work aims to provide an assessment of the conduct and reporting of searches in Campbell Collaboration systematic reviews. Our objectives were to examine how searches are currently conducted in Campbell systematic reviews, how search strategies, search methods and search reporting adhere to the Methodological Expectations of Campbell Collaboration Intervention Reviews (MECCIR) and PRISMA standards, and identify emerging or novel methods used in searching in Campbell systematic reviews. We also investigated the role of information specialists in Campbell systematic reviews.

We handsearched the Campbell Systematic Reviews journal tables of contents from January 2017 to March 2024. We included all systematic reviews published since 2017. We excluded other types of evidence synthesis (e.g., evidence and gap maps), updates to systematic reviews when search methods were not changed from the original pre-2017 review, and systematic reviews that did not conduct their own original searches. We developed a data extraction form in part based on the conduct and reporting items in MECCIR and PRISMA. In addition, we extracted information about the general quality of searches based on the use of Boolean operators, keywords, database syntax and subject headings. Data extraction included information about reporting of sources searched, some aspects of search quality, the use and reporting of supplementary search methods, reporting of the search strategy, the involvement of information specialists, date of the most recent search, and citation of the Campbell search methods guidance. Items were rated as fully, partially or not conducted or reported. We cross-walked our data extraction items to the 2019 MECCIR standards and 2020 PRISMA guidelines and provide descriptive analyses of the conduct and reporting of searches in Campbell systematic reviews, indicating level of adherence to standards where applicable.

We included 111 Campbell systematic reviews across all coordinating groups published since 2017 up to the search date. Almost all (98%) included reviews searched at least two relevant databases and all reported the databases searched. All reviews searched grey literature and most (82%) provided a full list of grey literature sources. Detailed information about databases such as platform and date range coverage was lacking in 16% and 77% of the reviews, respectively. In terms of search strategies, most used Boolean operators, search syntax and phrase searching correctly, but subject headings in databases with controlled vocabulary were used in only about half of the reviews. Most reviews reported at least one full database search strategy (90%), with 63% providing full search strategies for all databases. Most reviews conducted some supplementary searching, most commonly searching the references of included studies, whereas handsearching of journals and forward citation searching were less commonly reported (51% and 62%, respectively). Twenty-nine percent of reviews involved an information specialist co-author and about 45% did not mention the involvement of any information specialist. When information specialists were co-authors, there was a concomitant increase in adherence to many reporting and conduct standards and guidelines, including reporting website URLs, reporting methods for forward citation searching, using database syntax correctly and using subject headings. No longitudinal trends in adherence to conducting and reporting standards were found and the Campbell search methods guidance published in 2017 was cited in only twelve reviews. We also found a median time lag of 20 months between the most recent search and the publication date.

In general, the included Campbell systematic reviews searched a wide range of bibliographic databases and grey literature, and conducted at least some supplementary searching such as searching references of included studies or contacting experts. Reporting of mandatory standards was variable with some frequently unreported (e.g., website URLs and database date ranges) and others well reported in most reviews. For example, database search strategies were reported in detail in most reviews. For grey literature, source names were well reported but search strategies were less so. The findings will be used to identify opportunities for advancing current practices in Campbell reviews through updated guidance, peer review processes and author training and support.

6.
Bioinformatics ; 2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39171832

ABSTRACT

MOTIVATION: Integrating information from data sources representing different study designs has the potential to strengthen evidence in population health research. However, this concept of evidence "triangulation" presents a number of challenges for systematically identifying and integrating relevant information. These include the harmonization of heterogeneous evidence with common semantic concepts and properties, as well as the prioritization of the retrieved evidence for triangulation with the question of interest. RESULTS: We present ASQ (Annotated Semantic Queries), a natural language query interface to the integrated biomedical entities and epidemiological evidence in EpiGraphDB, which enables users to extract "claims" from a piece of unstructured text and then investigate the evidence that could either support or contradict the claims, or offer additional information to the query. This approach has the potential to support the rapid review of preprints, grant applications, conference abstracts and articles submitted for peer review. ASQ implements strategies to harmonize biomedical entities in different taxonomies and evidence from different sources, to facilitate evidence triangulation and interpretation. AVAILABILITY AND IMPLEMENTATION: ASQ is openly available at https://asq.epigraphdb.org and its source code is available at https://github.com/mrcieu/epigraphdb-asq under the GPL-3.0 license. SUPPLEMENTARY INFORMATION: Further information can be found in the Supplementary Materials as well as on the ASQ platform via https://asq.epigraphdb.org/docs.

7.
Res Synth Methods ; 2024 Aug 12.
Article in English | MEDLINE | ID: mdl-39135430

ABSTRACT

A thorough literature search is a key feature of scoping reviews. We investigated the search practices used by social science researchers as reported in their scoping reviews. We collected scoping reviews published between 2015 and 2021 from Social Science Citation Index. In the 2484 included studies, we observed a 58% average annual increase in published reviews, primarily from clinical and applied social science disciplines. Bibliographic databases comprised most of the information sources in the primary search strategy (n = 9565, 75%), although reporting practices varied. Most scoping reviews (n = 1805, 73%) included at least one supplementary search strategy. A minority of studies (n = 713, 29%) acknowledged an LIS professional and few listed one as a co-author (n = 194, 8%). We conclude that to improve reporting and strengthen the impact of the scoping review method in the social sciences, researchers should consider (1) adhering to PRISMA-S reporting guidelines, (2) employing more supplementary search strategies, and (3) collaborating with LIS professionals.

9.
MethodsX ; 13: 102780, 2024 Dec.
Article in English | MEDLINE | ID: mdl-39007030

ABSTRACT

In today's world of managing multimedia content, dealing with the sheer volume of CCTV footage poses challenges related to storage, accessibility and efficient navigation. To tackle these issues, we propose a comprehensive video summarization technique that merges machine-learning methods with user engagement. Our methodology consists of two phases, each bringing improvements to video summarization. In Phase I we introduce a method for summarizing videos based on keyframe detection and behavioral analysis, utilizing YOLOv5 for object detection, Deep SORT for object tracking, and a Single Shot Detector (SSD) to create video summaries. In Phase II we present a user-interest-based video summarization system driven by machine learning. By incorporating user preferences into the summarization process we enhance these techniques with personalized content curation. Leveraging tools such as NLTK, OpenCV, TensorFlow, and the EfficientDet model enables our system to generate customized video summaries tailored to individual preferences. This approach not only enhances user interactions but also efficiently handles the overwhelming amount of video data on digital platforms. By combining these two methodologies we advance the application of machine learning techniques while offering a solution to the complex challenges of managing multimedia data.
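
A hedged Phase I sketch: flag keyframes where the number of detected objects changes, using YOLOv5 via torch.hub and OpenCV for frame access. The change-count heuristic and the input file name are illustrative simplifications of the paper's pipeline.

```python
# Keyframe detection sketch: keep frames where detected-object counts change.
import cv2
import torch

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # downloads weights

cap = cv2.VideoCapture("cctv_clip.mp4")  # hypothetical input file
keyframes, prev_count, frame_idx = [], None, 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:  # sample roughly once per second at 30 fps
        rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        detections = model(rgb)
        count = len(detections.xyxy[0])  # number of detected objects
        if prev_count is None or count != prev_count:
            keyframes.append(frame_idx)  # activity changed: keep this frame
        prev_count = count
    frame_idx += 1
cap.release()
print(keyframes)
```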

10.
Comput Methods Programs Biomed ; 255: 108326, 2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39029416

ABSTRACT

BACKGROUND AND OBJECTIVE: Researchers commonly use automated solutions such as Natural Language Processing (NLP) systems to extract clinical information from large volumes of unstructured data. However, clinical text's poor semantic structure and domain-specific vocabulary can make it challenging to develop a one-size-fits-all solution. Large Language Models (LLMs), such as OpenAI's Generative Pre-Trained Transformer 3 (GPT-3), offer a promising solution for capturing and standardizing unstructured clinical information. This study evaluated the performance of InstructGPT, a family of models derived from LLM GPT-3, to extract relevant patient information from medical case reports and discussed the advantages and disadvantages of LLMs versus dedicated NLP methods. METHODS: In this paper, 208 articles related to case reports of foreign body injuries in children were identified by searching PubMed, Scopus, and Web of Science. A reviewer manually extracted information on sex, age, the object that caused the injury, and the injured body part for each patient to build a gold standard to compare the performance of InstructGPT. RESULTS: InstructGPT achieved high accuracy in classifying the sex, age, object and body part involved in the injury, with 94%, 82%, 94% and 89%, respectively. When excluding articles for which InstructGPT could not retrieve any information, the accuracy for determining the child's sex and age improved to 97%, and the accuracy for identifying the injured body part improved to 93%. InstructGPT was also able to extract information from non-English language articles. CONCLUSIONS: The study highlights that LLMs have the potential to eliminate the necessity for task-specific training (zero-shot extraction), allowing the retrieval of clinical information from unstructured natural language text, particularly from published scientific literature like case reports, by directly utilizing the PDF file of the article without any pre-processing and without requiring any technical expertise in NLP or Machine Learning. The diverse nature of the corpus, which includes articles written in languages other than English, some of which contain a wide range of clinical details while others lack information, adds to the strength of the study.
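
For illustration, a zero-shot extraction call in the spirit of the study; the original evaluated InstructGPT (GPT-3) models, so the current OpenAI client and the model name below are stand-in assumptions, as is the one-line case text.

```python
# Zero-shot clinical information extraction sketch with the OpenAI client.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

case_text = (
    "A 3-year-old boy presented after swallowing a button battery, "
    "which was lodged in the esophagus."
)
prompt = (
    "From the case report below, extract: sex, age, object causing the "
    "injury, injured body part. Answer as 'field: value' lines.\n\n" + case_text
)
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; the paper evaluated InstructGPT models
    messages=[{"role": "user", "content": prompt}],
)
print(resp.choices[0].message.content)
```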

11.
J Med Internet Res ; 26: e58764, 2024 Jul 31.
Article in English | MEDLINE | ID: mdl-39083765

ABSTRACT

Evidence-based medicine (EBM) emerged from McMaster University in the 1980s-1990s; it emphasizes the integration of the best research evidence with clinical expertise and patient values. The Health Information Research Unit (HiRU) was created at McMaster University in 1985 to support EBM. Early on, digital health informatics took the form of teaching clinicians how to search MEDLINE with modems and phone lines. Searching and retrieval of published articles were transformed as electronic platforms provided greater access to clinically relevant studies, systematic reviews, and clinical practice guidelines, with PubMed playing a pivotal role. In the early 2000s, the HiRU introduced Clinical Queries, validated search filters derived from the curated, gold-standard, human-appraised Hedges dataset, to enhance the precision of searches, allowing clinicians to hone their queries based on study design, population, and outcomes. Currently, almost 1 million articles are added to PubMed annually. To filter this volume of heterogeneous publications for clinically important articles, the HiRU team and other researchers have been applying classical machine learning, deep learning, and, increasingly, large language models (LLMs). These approaches are built upon the foundation of gold-standard annotated datasets and humans in the loop for active machine learning. In this viewpoint, we explore the evolution of health informatics in supporting evidence search and retrieval processes over the past 25+ years within the HiRU, including the evolving roles of LLMs and responsible artificial intelligence, as we continue to facilitate the dissemination of knowledge, enabling clinicians to integrate the best available evidence into their clinical practice.
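
To show how a Clinical Queries-style filter is applied in practice, this sketch combines PubMed's published narrow therapy (Haynes) filter with an arbitrary clinical topic through the NCBI E-utilities API; the topic string is an example, not from the article.

```python
# Apply a Clinical Queries-style therapy filter via NCBI E-utilities esearch.
import requests

topic = "atrial fibrillation anticoagulation"  # example topic
therapy_narrow = (
    "(randomized controlled trial[Publication Type] OR (randomized[Title/Abstract] "
    "AND controlled[Title/Abstract] AND trial[Title/Abstract]))"
)
resp = requests.get(
    "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi",
    params={"db": "pubmed", "term": f"({topic}) AND {therapy_narrow}",
            "retmode": "json", "retmax": 10},
    timeout=30,
)
resp.raise_for_status()
ids = resp.json()["esearchresult"]["idlist"]
print(ids)  # PMIDs of therapy-focused citations passing the filter
```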


Subjects
Evidence-Based Medicine, Medical Informatics, Medical Informatics/methods, Medical Informatics/trends, Humans, History, 20th Century, History, 21st Century, Machine Learning
12.
Data Brief ; 55: 110672, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39071970

ABSTRACT

Diverse traditional machine learning and deep learning models have been designed for multimodal music information retrieval (MIR) applications such as multimodal music sentiment analysis, genre classification, recommender systems, and emotion recognition, making such models indispensable for MIR tasks. However, solving these tasks in a data-driven manner depends on the availability of high-quality benchmark datasets. Hence, the need for datasets tailored to multimodal music information retrieval applications is paramount. While a handful of multimodal datasets exist for distinct music information retrieval applications, they are not available in low-resourced languages, such as the Sotho-Tswana languages. In response to this gap, we introduce a novel multimodal music information retrieval dataset for various music information retrieval applications. This dataset centres on Sotho-Tswana musical videos, encompassing textual, visual, and audio modalities specific to Sotho-Tswana musical content. The musical videos were downloaded from YouTube, and Python programs were written to process the videos and extract relevant spectral-based acoustic features using different Python libraries. Annotation of the dataset was done manually by native speakers of Sotho-Tswana languages, who understand the culture and traditions of the Sotho-Tswana people. It is distinctive because, to our knowledge, no such dataset has been established until now.
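
A small sketch of the spectral feature extraction step, assuming librosa as the Python library; the file name is hypothetical and the dataset's own scripts and feature list may differ.

```python
# Extract spectral-based acoustic features from an audio track with librosa.
import librosa
import numpy as np

y, sr = librosa.load("sotho_tswana_song.wav", sr=22050)  # hypothetical file

mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)        # timbre
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)  # brightness
rolloff = librosa.feature.spectral_rolloff(y=y, sr=sr)    # energy roll-off

# Summarize frame-level features into one vector per track.
features = np.concatenate([
    mfcc.mean(axis=1), centroid.mean(axis=1), rolloff.mean(axis=1)
])
print(features.shape)  # (15,)
```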

13.
Heliyon ; 10(13): e33645, 2024 Jul 15.
Article in English | MEDLINE | ID: mdl-39040344

ABSTRACT

Aim: This review aims to explore earthquake-based transport strategies in seismic areas, providing state-of-the-art insights into the components necessary to guide urban planners and policymakers in their decision-making processes. Outputs: The review surveys the methodologies and approaches employed for reinforcement planning and emergency demand management to analyze and evaluate the impact of seismic events on transportation systems and, in turn, to develop strategies for the preparedness, mitigation, response, and recovery phases. The selection of the appropriate approach depends on factors such as the specific transport system, urbanization level and type, built environment, and critical components involved. Originality and value: Besides providing a distinctive illustration of the integration of transportation and seismic literature as a valuable consolidated resource, this article introduces a novel methodology named ALARM for conducting state-of-the-art reviews on any topic, incorporating AI through large language models (LLMs) built upon transformer deep neural networks, along with indexing data structures (in this study, mainly the OpenAI DaVinci-003 model and a vector-storing index). Hence, it is of paramount significance as the first instance of implementing LLMs within academic review standards. This paves the way for the potential integration of AI and human collaboration to become a standard practice under enhanced criteria for comprehending and analyzing specific information.
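
A hedged sketch of the vector-index retrieval step behind such a methodology: embed text chunks, store the vectors, and fetch the nearest chunk for a review question before passing it to an LLM. TF-IDF stands in here for the LLM embeddings the article uses, and the chunks are invented.

```python
# Vector-storing index sketch: embed chunks, retrieve the nearest one.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

chunks = [
    "Bridge retrofitting reduced post-earthquake road closure times.",
    "Evacuation demand peaks within hours of a major seismic event.",
    "Transit redundancy improves network resilience during aftershocks.",
]
vectorizer = TfidfVectorizer()          # stand-in for LLM embeddings
index = vectorizer.fit_transform(chunks)  # the stored vector index

question = "How does retrofitting affect transport recovery?"
q_vec = vectorizer.transform([question])
sims = cosine_similarity(q_vec, index)[0]
print(chunks[sims.argmax()])  # chunk to pass to the LLM as context
```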

14.
JMIR Med Inform ; 12: e50209, 2024 Jun 19.
Article in English | MEDLINE | ID: mdl-38896468

ABSTRACT

BACKGROUND: Diagnostic errors pose significant health risks and contribute to patient mortality. With the growing accessibility of electronic health records, machine learning models offer a promising avenue for enhancing diagnosis quality. Current research has primarily focused on a limited set of diseases with ample training data, neglecting diagnostic scenarios with limited data availability. OBJECTIVE: This study aims to develop an information retrieval (IR)-based framework that accommodates data sparsity to facilitate broader diagnostic decision support. METHODS: We introduced an IR-based diagnostic decision support framework called CliniqIR. It uses clinical text records, the Unified Medical Language System Metathesaurus, and 33 million PubMed abstracts to classify a broad spectrum of diagnoses independent of training data availability. CliniqIR is designed to be compatible with any IR framework. Therefore, we implemented it using both dense and sparse retrieval approaches. We compared CliniqIR's performance to that of pretrained clinical transformer models such as Clinical Bidirectional Encoder Representations from Transformers (ClinicalBERT) in supervised and zero-shot settings. Subsequently, we combined the strength of supervised fine-tuned ClinicalBERT and CliniqIR to build an ensemble framework that delivers state-of-the-art diagnostic predictions. RESULTS: On a complex diagnosis data set (DC3) without any training data, CliniqIR models returned the correct diagnosis within their top 3 predictions. On the Medical Information Mart for Intensive Care III data set, CliniqIR models surpassed ClinicalBERT in predicting diagnoses with <5 training samples by an average difference in mean reciprocal rank of 0.10. In a zero-shot setting where models received no disease-specific training, CliniqIR still outperformed the pretrained transformer models with a greater mean reciprocal rank of at least 0.10. Furthermore, in most conditions, our ensemble framework surpassed the performance of its individual components, demonstrating its enhanced ability to make precise diagnostic predictions. CONCLUSIONS: Our experiments highlight the importance of IR in leveraging unstructured knowledge resources to identify infrequently encountered diagnoses. In addition, our ensemble framework benefits from combining the complementary strengths of the supervised and retrieval-based models to diagnose a broad spectrum of diseases.
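
A minimal sketch of the sparse-retrieval arm of a CliniqIR-style system, assuming the rank_bm25 package: rank a toy abstract collection against a clinical note with BM25. The corpus and note are invented.

```python
# Sparse retrieval sketch: BM25 ranking of abstracts against a clinical note.
from rank_bm25 import BM25Okapi

abstracts = [
    "Kawasaki disease presents with fever, rash, and coronary aneurysms.",
    "Type 2 diabetes management with metformin and lifestyle change.",
    "Sarcoidosis: granulomatous inflammation with hilar lymphadenopathy.",
]
tokenized = [a.lower().split() for a in abstracts]
bm25 = BM25Okapi(tokenized)

note = "child with prolonged fever rash and coronary artery dilation"
scores = bm25.get_scores(note.lower().split())
ranked = sorted(zip(scores, abstracts), reverse=True)
print(ranked[0][1])  # top candidate abstract to map to a diagnosis
```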

15.
Sci Rep ; 14(1): 12731, 2024 Jun 03.
Article in English | MEDLINE | ID: mdl-38830946

ABSTRACT

Conversational Agents (CAs) have made their way to providing interactive assistance to users. However, the current dialogue modelling techniques for CAs are predominantly based on hard-coded rules and rigid interaction flows, which negatively affects their flexibility and scalability. Large Language Models (LLMs) can be used as an alternative, but unfortunately they do not always provide good levels of privacy protection for end-users since most of them are running on cloud services. To address these problems, we leverage the potential of transfer learning and study how to best fine-tune lightweight pre-trained LLMs to predict the intent of user queries. Importantly, our LLMs allow for on-device deployment, making them suitable for personalised, ubiquitous, and privacy-preserving scenarios. Our experiments suggest that RoBERTa and XLNet offer the best trade-off considering these constraints. We also show that, after fine-tuning, these models perform on par with ChatGPT. Finally, we discuss the implications of this research for relevant stakeholders, including researchers and practitioners. Taken together, this paper provides insights into LLM suitability for on-device CAs and highlights the middle ground between LLM performance and memory footprint while also considering privacy implications.
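
For concreteness, intent prediction with a fine-tuned RoBERTa checkpoint might look like the sketch below; the checkpoint name "my-intent-roberta" and the intent labels are placeholders for artifacts the fine-tuning step would produce.

```python
# Intent classification sketch with a (hypothetical) fine-tuned RoBERTa.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("my-intent-roberta")  # placeholder
model = AutoModelForSequenceClassification.from_pretrained("my-intent-roberta")
labels = ["set_alarm", "play_music", "weather_query"]  # hypothetical intents

inputs = tokenizer("wake me up at seven tomorrow", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(labels[int(logits.argmax(dim=-1))])  # predicted intent
```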

16.
JMIR Med Inform ; 12: e49613, 2024 Jun 21.
Article in English | MEDLINE | ID: mdl-38904996

ABSTRACT

BACKGROUND: Dermoscopy is a growing field that uses microscopy to allow dermatologists and primary care physicians to identify skin lesions. For a given skin lesion, a wide variety of differential diagnoses exist, which may be challenging for inexperienced users to name and understand. OBJECTIVE: In this study, we describe the creation of the dermoscopy differential diagnosis explorer (D3X), an ontology linking dermoscopic patterns to differential diagnoses. METHODS: Existing ontologies that were incorporated into D3X include the elements of visuals ontology and dermoscopy elements of visuals ontology, which connect visual features to dermoscopic patterns. A list of differential diagnoses for each pattern was generated from the literature and in consultation with domain experts. Open-source images were incorporated from DermNet, Dermoscopedia, and open-access research papers. RESULTS: D3X was encoded in the OWL 2 web ontology language and includes 3041 logical axioms, 1519 classes, 103 object properties, and 20 data properties. We compared D3X with publicly available ontologies in the dermatology domain using a semiotic theory-driven metric to measure the innate qualities of D3X with others. The results indicate that D3X is adequately comparable with other ontologies of the dermatology domain. CONCLUSIONS: The D3X ontology is a resource that can link and integrate dermoscopic differential diagnoses and supplementary information with existing ontology-based resources. Future directions include developing a web application based on D3X for dermoscopy education and clinical practice.
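
A brief sketch of loading and querying an OWL 2 ontology such as D3X with the owlready2 library; the file name and the searched IRI are hypothetical, as the paper does not publish this snippet.

```python
# Load an OWL 2 ontology and inspect its classes with owlready2.
from owlready2 import get_ontology

onto = get_ontology("file://d3x.owl").load()  # hypothetical local file

# Count classes and look up one pattern class by a (hypothetical) IRI suffix.
print(len(list(onto.classes())))  # the paper reports 1519 classes
pattern = onto.search_one(iri="*DermoscopicPattern")
if pattern is not None:
    print(list(pattern.subclasses()))  # asserted sub-patterns, if any
```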

17.
JMIR AI ; 3: e42630, 2024 May 02.
Article in English | MEDLINE | ID: mdl-38875551

ABSTRACT

BACKGROUND: Widespread misinformation in web resources can lead to serious implications for individuals seeking health advice. Despite that, information retrieval models are often focused only on the query-document relevance dimension to rank results. OBJECTIVE: We investigate a multidimensional information quality retrieval model based on deep learning to enhance the effectiveness of online health care information search results. METHODS: In this study, we simulated online health information search scenarios with a topic set of 32 different health-related inquiries and a corpus containing 1 billion web documents from the April 2019 snapshot of Common Crawl. Using state-of-the-art pretrained language models, we assessed the quality of the retrieved documents according to their usefulness, supportiveness, and credibility dimensions for a given search query on 6030 human-annotated, query-document pairs. We evaluated this approach using transfer learning and more specific domain adaptation techniques. RESULTS: In the transfer learning setting, the usefulness model provided the largest distinction between help- and harm-compatible documents, with a difference of +5.6%, leading to a majority of helpful documents in the top 10 retrieved. The supportiveness model achieved the best harm compatibility (+2.4%), while the combination of usefulness, supportiveness, and credibility models achieved the largest distinction between help- and harm-compatibility on helpful topics (+16.9%). In the domain adaptation setting, the linear combination of different models showed robust performance, with help-harm compatibility above +4.4% for all dimensions and going as high as +6.8%. CONCLUSIONS: These results suggest that integrating automatic ranking models created for specific information quality dimensions can increase the effectiveness of health-related information retrieval. Thus, our approach could be used to enhance searches made by individuals seeking online health information.
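
A toy sketch of the linear combination used in the domain-adaptation setting: merge per-dimension quality scores with a relevance score into a single ranking score. The weights and score values are arbitrary illustrations, not the tuned values from the study.

```python
# Combine relevance with usefulness/supportiveness/credibility scores.
import numpy as np

# Rows: candidate documents; columns: usefulness, supportiveness, credibility.
quality = np.array([
    [0.91, 0.70, 0.85],
    [0.40, 0.88, 0.30],
    [0.75, 0.60, 0.95],
])
relevance = np.array([0.80, 0.85, 0.70])  # query-document relevance scores

weights = np.array([0.4, 0.3, 0.3])  # illustrative, not the tuned values
final = 0.5 * relevance + 0.5 * quality @ weights
print(np.argsort(-final))  # document indices, best first
```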

18.
J Med Libr Assoc ; 112(1): 13-21, 2024 Jan 16.
Article in English | MEDLINE | ID: mdl-38911524

ABSTRACT

Objective: To evaluate the ability of DynaMedex, an evidence-based drug and disease Point of Care Information (POCI) resource, in answering clinical queries using keyword searches. Methods: Real-world disease-related questions compiled from clinicians at an academic medical center, DynaMedex search query data, and medical board review resources were categorized into five clinical categories (complications & prognosis, diagnosis & clinical presentation, epidemiology, prevention & screening/monitoring, and treatment) and six specialties (cardiology, endocrinology, hematology-oncology, infectious disease, internal medicine, and neurology). A total of 265 disease-related questions were evaluated by pharmacist reviewers based on if an answer was found (yes, no), whether the answer was relevant (yes, no), difficulty in finding the answer (easy, not easy), cited best evidence available (yes, no), clinical practice guidelines included (yes, no), and level of detail provided (detailed, limited details). Results: An answer was found for 259/265 questions (98%). Both reviewers found an answer for 241 questions (91%), neither found the answer for 6 questions (2%), and only one reviewer found an answer for 18 questions (7%). Both reviewers found a relevant answer 97% of the time when an answer was found. Of all relevant answers found, 68% were easy to find, 97% cited best quality of evidence available, 72% included clinical guidelines, and 95% were detailed. Recommendations for areas of resource improvement were identified. Conclusions: The resource enabled reviewers to answer most questions easily with the best quality of evidence available, providing detailed answers and clinical guidelines, with a high level of replication of results across users.


Subjects
Point-of-Care Systems, Humans, Evidence-Based Medicine
19.
Sensors (Basel) ; 24(11)2024 May 22.
Article in English | MEDLINE | ID: mdl-38894095

ABSTRACT

The revolution of the Internet of Things (IoT) and the Web of Things (WoT) has brought new opportunities and challenges for the information retrieval (IR) field. The exponentially growing number of interconnected physical objects and real-time data acquisition require new approaches and architectures for IR systems. Research and prototypes can be crucial in designing and developing new systems and refining architectures for IR in the WoT. This paper proposes a unified and holistic approach for IR in the WoT, called IR.WoT. The proposed system contemplates the critical indexing, scoring, and presentation stages applied to selected smart-city use cases and scenarios. Overall, this paper describes the research, architecture, and vision for advancing the field of IR in the WoT and addresses some of the remaining challenges and opportunities in this exciting area. The article also describes the design considerations, cloud implementation, and experimentation based on a simulated collection of synthetic XML documents with technical efficiency measures. The experimentation results show promising outcomes, although further studies are required to improve IR.WoT's effectiveness, considering the dynamic characteristics of the WoT and, more importantly, the heterogeneity and divergence of WoT modeling proposals in the IR domain.
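
To ground the indexing stage, here is a sketch that parses a pair of invented synthetic XML "thing" descriptions and builds an inverted index from terms to document ids; the XML shape is an assumption, not the paper's schema.

```python
# Build an inverted index over synthetic XML documents describing WoT things.
import xml.etree.ElementTree as ET
from collections import defaultdict

docs = {
    "d1": "<thing><name>air quality sensor</name><loc>downtown</loc></thing>",
    "d2": "<thing><name>smart parking meter</name><loc>airport</loc></thing>",
}

inverted = defaultdict(set)
for doc_id, xml_text in docs.items():
    root = ET.fromstring(xml_text)
    text = " ".join(el.text for el in root.iter() if el.text)
    for term in text.lower().split():
        inverted[term].add(doc_id)  # term -> set of document ids

print(sorted(inverted["sensor"]))  # -> ['d1']
```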

20.
Am J Prev Cardiol ; 18: 100678, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38756692

ABSTRACT

Objectives: To investigate the potential value and feasibility of creating a system-wide listing registry of patients with at-risk and established Atherosclerotic Cardiovascular Disease (ASCVD) within a large healthcare system, using automated data extraction methods to systematically identify the burden, determinants, and spectrum of at-risk patients to inform population health management. Additionally, the Houston Methodist Cardiovascular Disease Learning Health System (HM CVD-LHS) registry intends to create high-quality, data-driven analytical insights to assess, track, and promote cardiovascular research and care. Methods: We conducted a retrospective, multi-center cohort analysis of adult patients who were seen in the outpatient settings of a large healthcare system between June 2016 and December 2022 to create an EMR-based registry. A common framework was developed to automatically extract clinical data from the EMR and then integrate it with the social determinants of health information retrieved from external sources. Microsoft's SQL Server Management Studio was used to create multiple Extract-Transform-Load scripts and stored procedures for collecting, cleaning, storing, monitoring, reviewing, auto-updating, validating, and reporting the data based on the registry goals. Results: A real-time, programmatically deidentified, auto-updated EMR-based HM CVD-LHS registry was developed with ∼450 variables stored in multiple tables, each containing information related to patients' demographics, encounters, diagnoses, vitals, labs, medication use, and comorbidities. Of 1,171,768 adult individuals in the registry, 113,022 (9.6%) ASCVD patients were identified between June 2016 and December 2022 (mean age 69.2 ± 12.2 years, 55% men, 15% Black individuals). Further, multi-level groupings of patients with laboratory test results and medication use were analyzed to evaluate the outcomes of interest. Conclusions: The HM CVD-LHS registry database was developed successfully, providing a listing registry of patients with established ASCVD and those at risk. This approach supports efforts to move away from manual patient chart abstraction by showing that a common registry framework, designed together with its data collection and reporting tools, can rapidly extract useful structured clinical data from EMRs for creating patient or specialty population registries.
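
A hedged sketch of one extract step in such an ETL pipeline, using pyodbc against SQL Server; the connection string, table, and column names are all hypothetical.

```python
# Extract step sketch: pull candidate ASCVD patients from an EMR table.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 17 for SQL Server};SERVER=emr-host;"
    "DATABASE=registry;Trusted_Connection=yes;"  # hypothetical connection
)
cursor = conn.cursor()
cursor.execute(
    """
    SELECT patient_id, MIN(encounter_date) AS first_ascvd_encounter
    FROM dbo.diagnoses                 -- hypothetical table
    WHERE icd10_code LIKE 'I25%'       -- chronic ischemic heart disease
    GROUP BY patient_id
    """
)
for patient_id, first_seen in cursor.fetchall():
    print(patient_id, first_seen)
conn.close()
```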
