Results 1 - 5 of 5
1.
Crit Rev Clin Lab Sci ; : 1-15, 2024 Jul 23.
Article in English | MEDLINE | ID: mdl-39041650

ABSTRACT

Immunoglobulin G (IgG) and immunoglobulin M (IgM) testing are commonly used to determine infection status. Typically, the detection of IgM indicates an acute or recent infection, while the presence of IgG alone suggests a chronic or past infection. However, relying solely on IgG and IgM antibody positivity may not be sufficient to differentiate acute from chronic infections. This limitation arises from several factors. The prolonged presence of IgM can complicate diagnostic interpretations, and false positive IgM results often arise from antibody cross-reactivity with various antigens. Additionally, IgM may remain undetectable in prematurely collected samples or in individuals who are immunocompromised, further complicating accurate diagnosis. As a result, additional diagnostic tools are required to confirm infection status. Avidity is a measure of the strength of the binding between an antigen and antibody. Avidity-based assays have been developed for various infectious agents, including Toxoplasma, cytomegalovirus (CMV), SARS-CoV-2, and avian influenza, and are promising tools in clinical diagnostics. By measuring the strength of antibody binding, they offer critical insights into the maturity of the immune response. These assays are instrumental in distinguishing between acute and chronic or past infections, monitoring disease progression, and guiding treatment decisions. The development of automated platforms has optimized the testing process by enhancing efficiency and minimizing the risk of manual errors. Additionally, the recent advent of real-time biosensor immunoassays, including label-free immunoassays (LFIAs), has further amplified the capabilities of these assays. These advances have expanded the clinical applications of avidity-based assays, making them useful tools for the diagnosis and management of various infectious diseases.
This review is structured around several key aspects of IgG avidity in clinical diagnosis, including: (i) a detailed exposition of the IgG affinity maturation process; (ii) a thorough discussion of the IgG avidity assays, including the recently emerged biosensor-based approaches; and (iii) an examination of the applications of IgG avidity in clinical diagnosis. This review is intended to contribute toward the development of enhanced diagnostic tools through critical assessment of the present landscape of avidity-based testing, which allows us to identify the existing knowledge gaps and highlight areas for future investigation.
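As a concrete illustration of how ELISA-based avidity assays are typically quantified, an avidity index is computed as the ratio of the signal after a chaotropic (e.g., urea) wash to the signal without it. The sketch below assumes this common scheme; the cutoff values and interpretations are illustrative placeholders, not figures from this review, since real assays define kit-specific thresholds.

```python
def avidity_index(od_with_urea: float, od_without_urea: float) -> float:
    """Avidity index (%) = 100 * (OD after chaotropic wash / OD without wash)."""
    if od_without_urea <= 0:
        raise ValueError("reference OD must be positive")
    return 100.0 * od_with_urea / od_without_urea

def interpret(ai: float, low: float = 40.0, high: float = 60.0) -> str:
    # Illustrative cutoffs only; actual thresholds are assay-specific.
    if ai < low:
        return "low avidity (suggests recent infection)"
    if ai > high:
        return "high avidity (suggests past infection)"
    return "equivocal"

# Example: OD 0.9 after urea wash vs. 1.2 untreated -> avidity index 75%
print(interpret(avidity_index(0.9, 1.2)))
```

A low index reflects weakly bound, immature antibodies early in infection; as affinity maturation proceeds, more antibody survives the chaotropic wash and the index rises.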

2.
Clin Chem ; 70(3): 465-467, 2024 03 02.
Article in English | MEDLINE | ID: mdl-38431277
3.
J Bone Miner Res ; 39(2): 106-115, 2024 Mar 22.
Article in English | MEDLINE | ID: mdl-38477743

ABSTRACT

Artificial intelligence (AI) chatbots utilizing large language models (LLMs) have recently garnered significant interest due to their ability to generate humanlike responses to user inquiries in an interactive dialog format. While these models are increasingly used by patients, scientific and medical providers, and trainees to address biomedical questions, their performance may vary from field to field. The opportunities and risks these chatbots pose to the widespread understanding of skeletal health and science are unknown. Here we assess the performance of 3 high-profile LLM chatbots, Chat Generative Pre-Trained Transformer (ChatGPT) 4.0, BingAI, and Bard, on 30 questions in 3 categories: basic and translational skeletal biology, clinical practitioner management of skeletal disorders, and patient queries, to assess the accuracy and quality of the responses. Thirty questions in each of these categories were posed, and responses were independently graded for their degree of accuracy by four reviewers. While each of the chatbots was often able to provide relevant information about skeletal disorders, the quality and relevance of these responses varied widely, and ChatGPT 4.0 had the highest overall median score in each of the categories. Each of these chatbots displayed distinct limitations that included inconsistent, incomplete, or irrelevant responses, inappropriate utilization of lay sources in a professional context, a failure to take patient demographics or clinical context into account when providing recommendations, and an inability to consistently identify areas of uncertainty in the relevant literature. Careful consideration of both the opportunities and risks of current AI chatbots is needed to formulate guidelines for best practices for their use as a source of information about skeletal health and biology.


Artificial intelligence chatbots are increasingly used as a source of information in health care and research settings due to their accessibility and ability to summarize complex topics using conversational language. However, it is still unclear whether they can provide accurate information for questions related to the medicine and biology of the skeleton. Here, we tested the performance of three prominent chatbots (ChatGPT, Bard, and BingAI) by tasking them with a series of prompts based on well-established skeletal biology concepts, realistic physician-patient scenarios, and potential patient questions. Despite their similarities in function, differences in the accuracy of responses were observed across the three chatbot services. In some contexts the chatbots performed well, while in others strong limitations were observed, including inconsistent consideration of clinical context and patient demographics, occasionally incorrect or out-of-date information, and citation of inappropriate sources. With careful consideration of their current weaknesses, artificial intelligence chatbots offer the potential to transform education on skeletal health and science.
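The grading procedure described above, in which four reviewers independently score each response and a median is reported per chatbot and category, can be sketched as follows. The grades, the 1-4 scale, and the category names are illustrative assumptions, not data from the study.

```python
from statistics import median

# Hypothetical 1-4 accuracy grades from four independent reviewers,
# keyed by (chatbot, question category). Values are illustrative only.
grades = {
    ("ChatGPT 4.0", "basic biology"): [4, 4, 3, 4],
    ("Bard", "basic biology"): [3, 2, 3, 3],
    ("BingAI", "basic biology"): [2, 3, 3, 2],
}

# Report the median grade per chatbot and category, as in the study design.
for (bot, category), scores in grades.items():
    print(f"{bot} / {category}: median = {median(scores)}")
```

Using the median rather than the mean keeps a single outlier reviewer from dominating the per-category score.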


Subject(s)
Artificial Intelligence , Bones , Humans , Bones/physiology , Bone Diseases/therapy
4.
J Orthop Res ; 42(6): 1276-1282, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38245845

ABSTRACT

Large language model (LLM) chatbots possess a remarkable capacity to synthesize complex information into concise, digestible summaries across a wide range of orthopedic subject matter. As LLM chatbots become widely available, they will serve as a powerful, accessible resource that patients, clinicians, and researchers may reference to obtain information about orthopedic science and clinical management. Here, we examined the performance of three well-known and easily accessible chatbots (ChatGPT, Bard, and BingAI) in responding to inquiries relating to clinical management and orthopedic concepts. Although all three chatbots were found to be capable of generating relevant responses, ChatGPT outperformed Bard and BingAI in each category due to its ability to provide accurate and complete responses to orthopedic queries. Despite their promising applications in clinical management, shortcomings observed included incomplete responses, lack of context, and outdated information. Nonetheless, the ability of these LLM chatbots to address such inquiries has largely yet to be evaluated, and doing so will be critical for understanding the risks and opportunities of LLM chatbots in orthopedics.


Subject(s)
Orthopedics , Humans , Artificial Intelligence
5.
Res Sq ; 2023 Dec 04.
Article in English | MEDLINE | ID: mdl-38106170

ABSTRACT

Objective: While artificial intelligence (AI), particularly large language models (LLMs), offers significant potential for medicine, it raises critical concerns due to the possibility of generating factually incorrect information, leading to potential long-term risks and ethical issues. This review aims to provide a comprehensive overview of the faithfulness problem in existing research on AI in healthcare and medicine, with a focus on the analysis of the causes of unfaithful results, evaluation metrics, and mitigation methods. Materials and Methods: Using PRISMA methodology, we sourced 5,061 records from five databases (PubMed, Scopus, IEEE Xplore, ACM Digital Library, Google Scholar) published between January 2018 and March 2023. We removed duplicates and screened records based on exclusion criteria. Results: With the 40 remaining articles, we conducted a systematic review of recent developments aimed at optimizing and evaluating factuality across a variety of generative medical AI approaches. These include knowledge-grounded LLMs, text-to-text generation, multimodality-to-text generation, and automatic medical fact-checking tasks. Discussion: Current research investigating the factuality problem in medical AI is in its early stages. There are significant challenges related to data resources, backbone models, mitigation methods, and evaluation metrics. Promising opportunities exist for novel faithful medical AI research involving the adaptation of LLMs and prompt engineering. Conclusion: This comprehensive review highlights the need for further research to address the issues of reliability and factuality in medical AI, serving as both a reference and inspiration for future research into the safe, ethical use of AI in medicine and healthcare.
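The screening step described in Materials and Methods (deduplicating records pooled from several databases, then applying exclusion criteria such as the publication window) can be sketched as below. The record fields, identifiers, and the single year-based criterion are illustrative assumptions; the actual review applied a fuller set of exclusion criteria.

```python
# Minimal PRISMA-style screening pass: deduplicate records pulled from
# multiple databases, then drop any record failing an inclusion window.
records = [
    {"id": "pmid:1", "title": "Faithful medical LLMs", "year": 2022},
    {"id": "pmid:1", "title": "Faithful medical LLMs", "year": 2022},  # duplicate
    {"id": "pmid:2", "title": "Radiology report generation", "year": 2017},
    {"id": "pmid:3", "title": "Medical fact-checking", "year": 2021},
]

def screen(records, year_min=2018, year_max=2023):
    seen, kept = set(), []
    for r in records:
        if r["id"] in seen:          # remove duplicates
            continue
        seen.add(r["id"])
        if not (year_min <= r["year"] <= year_max):  # exclusion criterion
            continue
        kept.append(r)
    return kept

print([r["id"] for r in screen(records)])  # → ['pmid:1', 'pmid:3']
```

In practice, deduplication across databases also has to match on DOI or normalized title, since the same article carries different identifiers in different sources.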
