Results 1 - 20 of 50
2.
Crit Care ; 28(1): 263, 2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39103945

ABSTRACT

BACKGROUND: Automated analysis of lung computed tomography (CT) scans may help characterize subphenotypes of acute respiratory illness. We integrated lung CT features measured via deep learning with clinical and laboratory data in spontaneously breathing subjects to enhance the identification of COVID-19 subphenotypes. METHODS: This is a multicenter observational cohort study of spontaneously breathing patients with COVID-19 respiratory failure who underwent early lung CT within 7 days of admission. We explored lung CT images using deep learning approaches for quantitative and qualitative analyses; performed latent class analysis (LCA) using clinical, laboratory, and lung CT variables; and assessed regional differences between subphenotypes following 3D spatial trajectories. RESULTS: Complete datasets were available for 559 patients. LCA identified two subphenotypes (subphenotypes 1 and 2). Compared with subphenotype 2 (n = 403), patients in subphenotype 1 (n = 156) were older, had higher inflammatory biomarkers, and were more hypoxemic. Lungs in subphenotype 1 had a higher density gravitational gradient, with a greater proportion of consolidated lung than subphenotype 2. In contrast, subphenotype 2 had a higher density submantellar-hilar gradient, with a greater proportion of ground-glass opacities than subphenotype 1. Subphenotype 1 showed a higher prevalence of comorbidities associated with endothelial dysfunction and higher 90-day mortality than subphenotype 2, even after adjustment for clinically meaningful variables. CONCLUSIONS: Integrating lung CT data into an LCA allowed us to identify two subphenotypes of COVID-19 with different clinical trajectories. These exploratory findings suggest a role for automated imaging characterization guided by machine learning in subphenotyping patients with respiratory failure. TRIAL REGISTRATION: ClinicalTrials.gov Identifier: NCT04395482. Registration date: 19/05/2020.


Subject(s)
COVID-19, Lung, Phenotype, Respiratory Insufficiency, Tomography, X-Ray Computed, Humans, COVID-19/diagnostic imaging, COVID-19/physiopathology, Tomography, X-Ray Computed/methods, Female, Male, Middle Aged, Lung/diagnostic imaging, Lung/physiopathology, Aged, Respiratory Insufficiency/diagnostic imaging, Respiratory Insufficiency/etiology, Respiratory Insufficiency/physiopathology, Cohort Studies, Adult
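The latent class analysis mentioned in the abstract above can be illustrated with a minimal sketch. This is not the study's pipeline: the data here are synthetic binary indicators (a stand-in for dichotomized clinical, laboratory, and CT variables), and `fit_lca` is a hypothetical helper that implements plain expectation-maximization for a two-class latent class model.

```python
import random

def fit_lca(X, n_classes=2, n_iter=200, seed=0):
    """Fit a latent class model to binary indicator data X via EM.

    X is a list of subjects, each a list of 0/1 indicators. Returns class
    prevalences pi, per-class item probabilities p, and per-subject class
    responsibilities (posterior class memberships).
    """
    rng = random.Random(seed)
    n, d = len(X), len(X[0])
    pi = [1.0 / n_classes] * n_classes                     # class prevalences
    p = [[rng.uniform(0.25, 0.75) for _ in range(d)]
         for _ in range(n_classes)]                        # item probabilities
    resp = []
    for _ in range(n_iter):
        # E-step: posterior probability of each latent class per subject
        resp = []
        for x in X:
            lik = []
            for c in range(n_classes):
                l = pi[c]
                for j, xj in enumerate(x):
                    l *= p[c][j] if xj else 1.0 - p[c][j]
                lik.append(l)
            s = sum(lik) or 1e-300
            resp.append([l / s for l in lik])
        # M-step: re-estimate prevalences and item probabilities
        for c in range(n_classes):
            rc = sum(r[c] for r in resp) or 1e-300
            pi[c] = rc / n
            for j in range(d):
                num = sum(r[c] * x[j] for r, x in zip(resp, X))
                p[c][j] = min(max(num / rc, 1e-6), 1.0 - 1e-6)
    return pi, p, resp

# Synthetic cohort: two latent classes with distinct indicator profiles
gen = random.Random(1)
truth, X = [], []
for i in range(400):
    c = i % 2
    profile = [0.9, 0.9, 0.1, 0.1] if c == 0 else [0.1, 0.1, 0.9, 0.9]
    truth.append(c)
    X.append([1 if gen.random() < q else 0 for q in profile])

pi, p, resp = fit_lca(X)
labels = [max(range(2), key=lambda c: r[c]) for r in resp]
```

In practice LCA is usually fit with dedicated packages and the number of classes is chosen by information criteria (e.g. BIC); the two-class choice here simply mirrors the abstract's result.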
3.
J Clin Monit Comput ; 38(4): 931-939, 2024 Aug.
Article in English | MEDLINE | ID: mdl-38573370

ABSTRACT

The integration of Clinical Decision Support Systems (CDSS) based on artificial intelligence (AI) into healthcare is a groundbreaking evolution with enormous potential, but its development and ethical implementation present unique challenges, particularly in critical care, where physicians often deal with life-threatening conditions requiring rapid action and with patients unable to participate in the decision-making process. Moreover, the development of AI-based CDSS is complex and must address different sources of bias, including data acquisition, health disparities, domain shifts during clinical use, and cognitive biases in decision-making. In this scenario, algor-ethics is mandatory: it emphasizes the integration of 'Human-in-the-Loop' and 'Algorithmic Stewardship' principles and the benefits of advanced data engineering. The establishment of Clinical AI Departments (CAID) is necessary to lead AI innovation in healthcare, ensuring ethical integrity and human-centered development in this rapidly evolving field.


Subject(s)
Algorithms, Artificial Intelligence, Critical Care, Decision Support Systems, Clinical, Humans, Artificial Intelligence/ethics, Critical Care/ethics, Decision Support Systems, Clinical/ethics, Clinical Decision-Making/ethics
5.
J Med Syst ; 48(1): 22, 2024 Feb 17.
Article in English | MEDLINE | ID: mdl-38366043

ABSTRACT

Within the domain of Natural Language Processing (NLP), Large Language Models (LLMs) are sophisticated models engineered to comprehend, generate, and manipulate human-like text on an extensive scale. They are transformer-based deep learning architectures, obtained through the scaling of model size, pretraining corpora, and computational resources. The potential healthcare applications of these models primarily involve chatbots and interaction systems for clinical documentation management and medical literature summarization (biomedical NLP). The challenge in this field lies in research into applications for diagnostic and clinical decision support, as well as patient triage. LLMs can therefore be used for multiple tasks within patient care, research, and education. Throughout 2023, there has been an escalation in the release of LLMs, some of which are applicable in the healthcare domain. This remarkable output is largely the effect of customizing pre-trained models for applications such as chatbots, virtual assistants, or any system requiring human-like conversational engagement. As healthcare professionals, we recognize the imperative to stay at the forefront of knowledge. However, keeping abreast of the rapid evolution of this technology is practically unattainable and, above all, understanding its potential applications and limitations remains a subject of ongoing debate. Consequently, this article aims to provide a succinct overview of recently released LLMs, emphasizing their potential use in the field of medicine. Perspectives for a more extensive range of safe and effective applications are also discussed. The upcoming evolutionary leap involves the transition from an AI-powered model primarily designed for answering medical questions to a more versatile and practical tool for healthcare providers, such as generalist biomedical AI systems for multimodal, calibrated decision-making. On the other hand, the development of more accurate virtual clinical partners could enhance patient engagement, offering personalized support and improving chronic disease management.


Subject(s)
Communication, Language, Humans, Documentation, Educational Status, Electric Power Supplies
7.
Curr Med Res Opin ; 40(3): 353-358, 2024 03.
Article in English | MEDLINE | ID: mdl-38265047

ABSTRACT

OBJECTIVE: Large language models (LLMs) such as ChatGPT-4 have raised critical questions regarding their distinguishability from human-generated content. In this research, we evaluated the effectiveness of online detection tools in identifying ChatGPT-4 vs human-written text. METHODS: Two texts produced by ChatGPT-4 using differing prompts and one text created by a human author were assessed using the following online detection tools: GPTZero, ZeroGPT, Writer ACD, and Originality. RESULTS: The findings revealed notable variance in the detection capabilities of the tools. GPTZero and ZeroGPT gave inconsistent assessments of the AI origin of the texts. Writer ACD predominantly identified texts as human-written, whereas Originality consistently recognized the AI-generated content in both samples from ChatGPT-4, highlighting Originality's enhanced sensitivity to patterns characteristic of AI-generated text. CONCLUSION: The study demonstrates that, while automatic detection tools may discern texts generated by ChatGPT-4, significant variability exists in their accuracy. There is an urgent need for advanced detection tools to ensure the authenticity and integrity of content, especially in scientific and academic research, and our findings underscore the need for more refined detection methodologies that avoid misclassifying human-written content as AI-generated and vice versa.


Subject(s)
Artificial Intelligence, Writing, Humans
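The abstract above treats detectors such as GPTZero and ZeroGPT as black boxes. As a toy illustration of one signal such tools reportedly combine with model-based perplexity, the sketch below scores "burstiness" (variation in sentence length, which tends to be higher in human writing). This is not any tool's actual algorithm; `burstiness` is a name invented here, and a single statistic like this is far too weak for real detection.

```python
import statistics

def burstiness(text):
    """Coefficient of variation of sentence lengths, in words.

    A crude proxy for 'burstiness': human prose often mixes short and long
    sentences, while model output can be more uniform. Returns 0.0 when
    there are fewer than two sentences.
    """
    for mark in "!?":
        text = text.replace(mark, ".")
    sentences = [s for s in text.split(".") if s.strip()]
    lengths = [len(s.split()) for s in sentences]
    if len(lengths) < 2:
        return 0.0
    mean = statistics.mean(lengths)
    return statistics.pstdev(lengths) / mean if mean else 0.0

# Uniform sentence lengths vs a mix of very short and long sentences
uniform = "The model is good. The test is fair. The data is new. The result is clear."
varied = ("It failed. After weeks of debugging across three hospital "
          "datasets, nothing matched. Why? Nobody knew.")
```

Real detectors score token-level probabilities under a reference language model; this stdlib-only sketch only mimics the surface statistic to make the idea concrete.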