Results 1 - 5 of 5
1.
J Med Internet Res ; 26: e50130, 2024 Jul 22.
Article in English | MEDLINE | ID: mdl-39038285

ABSTRACT

BACKGROUND: Artificial intelligence (AI) holds immense potential for enhancing clinical and administrative health care tasks. However, slow adoption and implementation challenges highlight the need to consider how humans can effectively collaborate with AI within broader sociotechnical systems in health care.

OBJECTIVE: Using the example of intensive care units (ICUs), we compare data scientists' and clinicians' assessments of the optimal utilization of human and AI capabilities by determining suitable levels of human-AI teaming for safely and meaningfully augmenting or automating 6 core tasks. The goal is to provide actionable recommendations for policy makers and health care practitioners regarding AI design and implementation.

METHODS: In this multimethod study, we combine a systematic task analysis across 6 ICUs with an international Delphi survey involving 19 health data scientists from industry and academia and 61 ICU clinicians (25 physicians and 36 nurses) to define and assess optimal levels of human-AI teaming (level 1=no performance benefits; level 2=AI augments human performance; level 3=humans augment AI performance; level 4=AI performs without human input). Both stakeholder groups also considered ethical and social implications.

RESULTS: Both stakeholder groups chose level 2 and level 3 human-AI teaming for 4 of the 6 core tasks in the ICU. For one task (monitoring), level 4 was the preferred design choice. For the task of patient interactions, both data scientists and clinicians agreed that AI should not be used regardless of technological feasibility, given the importance of the physician-patient and nurse-patient relationship and the associated ethical concerns. Human-AI design choices rely on interpretability, predictability, and control over AI systems. If these conditions are not met and the AI performs below human-level reliability, a reduction to level 1, or shifting accountability away from human end users, is advised. If the AI performs at or beyond human-level reliability and these conditions are not met, shifting to level 4 automation should be considered to ensure safe and efficient human-AI teaming.

CONCLUSIONS: By considering the sociotechnical system and determining appropriate levels of human-AI teaming, our study showcases the potential for improving the safety and effectiveness of AI use in ICUs and broader health care settings. Regulatory measures should prioritize interpretability, predictability, and control if clinicians hold full accountability. Ethical and social implications must be carefully evaluated to ensure effective collaboration between humans and AI, particularly considering the most recent advancements in generative AI.
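As a reading aid, the decision rule described in the RESULTS section can be restated as a short Python sketch. This is an editorial illustration, not the authors' code; the function name, argument names, and the treatment of interpretability, predictability, and control as booleans are assumptions:

```python
def recommended_teaming_level(interpretable: bool, predictable: bool,
                              controllable: bool,
                              ai_at_or_above_human_reliability: bool) -> str:
    """Illustrative restatement of the abstract's decision rule (not the study's code)."""
    conditions_met = interpretable and predictable and controllable
    if conditions_met:
        # Interpretability, predictability, and control support teaming designs.
        return "level 2 or 3: human-AI teaming"
    if not ai_at_or_above_human_reliability:
        # Conditions unmet and AI below human-level reliability.
        return "level 1: no AI, or shift accountability away from human end users"
    # Conditions unmet but AI at or beyond human-level reliability.
    return "level 4: consider full automation"
```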


Subject(s)
Artificial Intelligence, Critical Care, Humans, Critical Care/methods, Intensive Care Units, Automation, Delphi Technique, Data Science/methods, Male, Female
2.
Diagnostics (Basel) ; 14(13)2024 Jul 05.
Article in English | MEDLINE | ID: mdl-39001331

ABSTRACT

Artificial Intelligence (AI)-based image analysis has immense potential to support diagnostic histopathology, including cancer diagnostics. However, developing supervised AI methods requires large-scale annotated datasets. A potentially powerful solution is to augment training data with synthetic data. Latent diffusion models, which can generate high-quality, diverse synthetic images, are promising. However, the most common implementations rely on detailed textual descriptions, which are not generally available in this domain. This work proposes a method that constructs structured textual prompts from automatically extracted image features. We experiment with the PCam dataset, composed of tissue patches only loosely annotated as healthy or cancerous. We show that including image-derived features in the prompt, as opposed to only healthy and cancerous labels, improves the Fréchet Inception Distance (FID) by 88.6. We also show that pathologists find it challenging to detect synthetic images, with a median sensitivity/specificity of 0.55/0.55. Finally, we show that synthetic data effectively train AI models.
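For context, the Fréchet Inception Distance cited above compares Gaussian fits of real and synthetic image embeddings (lower is better). A minimal sketch of the standard computation follows, assuming the feature means and covariances have already been extracted; this is not code from the study:

```python
import numpy as np
from scipy.linalg import sqrtm

def frechet_inception_distance(mu_real, cov_real, mu_fake, cov_fake):
    """FID = ||mu_r - mu_f||^2 + Tr(C_r + C_f - 2 (C_r C_f)^(1/2))."""
    diff = mu_real - mu_fake
    covmean = sqrtm(cov_real @ cov_fake)
    if np.iscomplexobj(covmean):
        # Discard tiny imaginary parts introduced by the matrix square root.
        covmean = covmean.real
    return float(diff @ diff + np.trace(cov_real + cov_fake - 2.0 * covmean))
```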

3.
Nutr Res ; 128: 105-114, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39102765

ABSTRACT

Artificial intelligence chatbots based on large language models have recently emerged as an alternative to traditional online searches and are also entering the nutrition space. In this study, we investigated whether the artificial intelligence chatbots ChatGPT and Bard (now Gemini) can create meal plans that meet the dietary reference intake (DRI) for different dietary patterns. We further hypothesized that nutritional adequacy could be improved by modifying the prompts used. Meal plans were generated by 3 accounts for different dietary patterns (omnivorous, vegetarian, and vegan) using 2 distinct prompts, resulting in 108 meal plans in total. The nutrient content of the plans was then analyzed and compared with the DRIs. On average, the meal plans contained less energy and fewer carbohydrates than recommended but mostly exceeded the DRI for protein. Vitamin D and fluoride fell below the DRI in all plans, whereas only the vegan plans contained insufficient vitamin B12. ChatGPT suggested vitamin B12 supplements in 5 of 18 instances, whereas Bard never recommended supplements. There were no significant differences between the prompts or the tools. Although the meal plans generated by ChatGPT and Bard met most DRIs, there were exceptions, particularly for vegan diets. These tools may be useful for individuals seeking general dietary inspiration, but they should not be relied on to create nutritionally adequate meal plans, especially for individuals with restrictive dietary needs.
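The adequacy check the authors describe amounts to comparing each plan's nutrient totals against DRI targets. A minimal sketch follows; the nutrient keys and target values are illustrative placeholders (actual DRIs vary by age, sex, and life stage), not figures from the study:

```python
# Illustrative DRI targets only; real values depend on age, sex, and life stage.
DRI_TARGETS = {
    "energy_kcal": 2000,
    "protein_g": 50,
    "vitamin_d_ug": 15,
    "vitamin_b12_ug": 2.4,
    "fluoride_mg": 3.0,
}

def meets_dri(plan_nutrients: dict) -> dict:
    """Flag each nutrient in a generated meal plan as meeting its DRI target or not."""
    return {nutrient: plan_nutrients.get(nutrient, 0.0) >= target
            for nutrient, target in DRI_TARGETS.items()}
```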


Subject(s)
Artificial Intelligence, Dietary Patterns, Energy Intake, Meals, Humans, Dietary Carbohydrates/administration & dosage, Dietary Supplements, Nutrients, Nutritive Value, Recommended Dietary Allowances
4.
JMIR Med Inform ; 12: e56426, 2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39115930

ABSTRACT

BACKGROUND: Chronic hepatitis B (CHB) imposes substantial economic and social burdens globally. The management of CHB involves intricate monitoring and adherence challenges, particularly in regions like China, where a high prevalence of CHB intersects with health care resource limitations. This study explores the potential of ChatGPT-3.5, an emerging artificial intelligence (AI) assistant, to address these complexities. Given its notable capabilities in medical education and practice, we examine the role of ChatGPT-3.5 in managing CHB, particularly in regions with distinct health care landscapes.

OBJECTIVE: This study aimed to uncover the potential and limitations of ChatGPT-3.5 in delivering personalized medical consultation assistance for patients with CHB across diverse linguistic contexts.

METHODS: Questions sourced from published guidelines, online CHB communities, and search engines in English and Chinese were refined, translated, and compiled into 96 inquiries. These questions were then presented to both ChatGPT-3.5 and ChatGPT-4.0 in independent dialogues. The responses were evaluated by senior physicians for informativeness, emotional management, consistency across repeated inquiries, and cautionary statements regarding medical advice. Additionally, a true-or-false questionnaire was used to compare the information accuracy of ChatGPT-3.5 and ChatGPT-4.0 on closed questions.

RESULTS: Over half of the responses (228/370, 61.6%) from ChatGPT-3.5 were considered comprehensive. ChatGPT-4.0 exhibited a higher percentage at 74.5% (172/222; P<.001). Performance was notably better in English, particularly in informativeness and consistency across repeated queries. However, deficiencies were identified in emotional management guidance, offered in only 3.2% (6/186) of ChatGPT-3.5 responses and 8.1% (15/154) of ChatGPT-4.0 responses (P=.04). ChatGPT-3.5 included a disclaimer in 10.8% (24/222) of responses, and ChatGPT-4.0 in 13.1% (29/222) (P=.46). On true-or-false questions, ChatGPT-4.0 achieved an accuracy rate of 93.3% (168/180), significantly surpassing ChatGPT-3.5's 65.0% (117/180; P<.001).

CONCLUSIONS: ChatGPT demonstrated basic capabilities as a medical consultation assistant for CHB management. The working language was a potential factor influencing ChatGPT-3.5's performance, particularly in its use of terminology and colloquial language, which may affect its applicability within specific target populations. As an updated model, ChatGPT-4.0 exhibited improved information processing and overcame the effect of language on information accuracy, suggesting that the implications of model advancement should be considered when selecting large language models as medical consultation assistants. Given that both models performed inadequately in emotional management guidance, this study highlights the importance of specific language training and emotional management strategies when deploying ChatGPT for medical purposes. The tendency of these models to include disclaimers should also be investigated further to understand its effect on patients' experiences in practical applications.
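The between-model comparisons of proportions reported above are consistent with a standard chi-square test on a 2x2 contingency table. A minimal sketch using the disclaimer counts quoted in the abstract follows; the choice of test is an assumption, since the paper's exact statistical procedure is not stated here:

```python
from scipy.stats import chi2_contingency

# Disclaimer counts from the abstract: 24/222 (ChatGPT-3.5) vs 29/222 (ChatGPT-4.0).
table = [[24, 222 - 24],
         [29, 222 - 29]]
chi2, p, dof, expected = chi2_contingency(table, correction=False)
print(f"chi2 = {chi2:.2f}, p = {p:.2f}")  # p is about .46, matching the reported P=.46
```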

5.
JMIR Med Inform ; 11: e53785, 2023 Dec 21.
Article in English | MEDLINE | ID: mdl-38127431

ABSTRACT

Health care is on the cusp of a significant technological leap, courtesy of advances in artificial intelligence (AI) language models; ensuring the ethical design, deployment, and use of these technologies is imperative to truly realize their potential for improving health care delivery and promoting human well-being and safety. These models have demonstrated remarkable prowess in generating humanlike text, as evidenced by a growing body of research and real-world applications. This capability paves the way for enhanced patient engagement, clinical decision support, and a plethora of other applications once considered beyond reach. However, the journey from potential to real-world application is laden with challenges, from ensuring reliability and transparency to navigating a complex regulatory landscape. Comprehensive evaluation and rigorous validation are still needed to ensure that these models are reliable, transparent, and ethically sound. This editorial introduces the new section, titled "AI Language Models in Health Care." The section seeks to create a platform for academics, practitioners, and innovators to share their insights, research findings, and real-world applications of AI language models in health care. The aim is to foster a community that is not only excited about the possibilities but also critically engaged with the ethical, practical, and regulatory challenges that lie ahead.
