1.
Neuropathol Appl Neurobiol; 50(4): e12997, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39010256

ABSTRACT

AIMS: Recent advances in artificial intelligence, particularly large language models like GPT-4 Vision (GPT-4V), a derivative feature of ChatGPT, have expanded the potential for medical image interpretation. This study evaluates the accuracy of GPT-4V in image classification tasks on histopathological images and compares its performance with a traditional convolutional neural network (CNN). METHODS: We utilised 1520 images, including haematoxylin and eosin-stained and tau-immunostained sections, from patients with various neurodegenerative diseases, such as Alzheimer's disease (AD), progressive supranuclear palsy (PSP) and corticobasal degeneration (CBD). We assessed GPT-4V's performance using multi-step prompts to determine how textual context influences image interpretation. We also employed few-shot learning to improve GPT-4V's diagnostic performance in classifying three specific tau lesions (astrocytic plaques, neuritic plaques and tufted astrocytes) and compared the outcomes with the CNN model YOLOv8. RESULTS: GPT-4V accurately recognised staining techniques and tissue origin but struggled with specific lesion identification. The interpretation of images was notably influenced by the provided textual context, which sometimes led to diagnostic inaccuracies. For instance, when GPT-4V was presented with images described as coming from the motor cortex, its diagnosis shifted inappropriately from AD to CBD or PSP. However, few-shot learning markedly improved GPT-4V's diagnostic capabilities, enhancing accuracy from 40% in zero-shot learning to 90% with 20-shot learning, matching the performance of YOLOv8, which required 100-shot learning to achieve the same accuracy. CONCLUSIONS: Although GPT-4V faces challenges in independently interpreting histopathological images, few-shot learning significantly improves its performance. This approach is especially promising for neuropathology, where acquiring extensive labelled datasets is often challenging.
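The abstract does not reproduce the few-shot protocol itself. A minimal sketch of how labelled example images might be supplied in-context to GPT-4V via the OpenAI Python SDK is shown below; the model identifier, file paths and prompt wording are illustrative assumptions, not the authors' actual setup.

import base64
from openai import OpenAI

LABELS = ["astrocytic plaque", "neuritic plaque", "tufted astrocyte"]

def b64(path):
    # Encode a local image file for inline submission
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode()

def few_shot_classify(client, shots, query_image):
    # shots: list of (image_path, label) pairs used as in-context examples
    content = [{"type": "text",
                "text": "Classify each tau-immunostained lesion as one of: "
                        + ", ".join(LABELS) + "."}]
    for path, label in shots:
        content.append({"type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64(path)}"}})
        content.append({"type": "text", "text": f"Label: {label}"})
    content.append({"type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{b64(query_image)}"}})
    content.append({"type": "text", "text": "Label:"})
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed model identifier
        messages=[{"role": "user", "content": content}],
        max_tokens=10,
    )
    return resp.choices[0].message.content.strip()

# Usage (hypothetical files):
# few_shot_classify(OpenAI(), [("ex1.jpg", "neuritic plaque")], "query.jpg")

Going from zero-shot to 20-shot in this framing simply means growing the shots list; no weights are updated, so the approach needs far fewer labelled images than retraining a CNN such as YOLOv8.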


Subjects
Neural Networks, Computer , Neurodegenerative Diseases , Humans , Neurodegenerative Diseases/pathology , Image Interpretation, Computer-Assisted/methods , Alzheimer Disease/pathology
2.
Am J Clin Pathol; 162(3): 220-226, 2024 Sep 03.
Article in English | MEDLINE | ID: mdl-38567909

ABSTRACT

OBJECTIVES: ChatGPT (OpenAI, San Francisco, CA) has shown impressive results across various medical examinations, but its performance in kidney pathology is not yet established. This study evaluated the proficiency of GPT-4 Vision (GPT-4V), an updated version of the platform with the ability to analyze images, on kidney pathology questions and compared its responses with those of nephrology trainees. METHODS: Thirty-nine questions (19 text-based and 20 with various kidney biopsy images), designed specifically for the training of nephrology fellows, were employed. RESULTS: GPT-4V displayed comparable accuracy rates in the first and second runs (67% and 72%, respectively; P = .50). However, the aggregated accuracy and, in particular, the consistent accuracy of GPT-4V were lower than the accuracy of trainees (72% and 67% vs 79%). Both GPT-4V and trainees displayed comparable accuracy on image-based and text-only questions (55% vs 79% and 81% vs 78%; P = .11 and P = .67, respectively). The consistent accuracy of GPT-4V on image-based, directly asked questions was 29%, much lower than its 88% consistency on text-only, directly asked questions (P = .02). In contrast, trainees maintained similar accuracy on directly asked image-based and text-based questions (80% vs 77%, P = .65). Although GPT-4V's aggregated accuracy for correctly interpreting images was 69%, its consistent accuracy across both runs was only 39%. The accuracy of GPT-4V in answering questions with correct image interpretation was significantly higher than for questions with incorrect image interpretation (100% vs 0% and 100% vs 33% for the first and second runs; P = .001 and P = .02, respectively). CONCLUSIONS: The performance of GPT-4V in handling kidney pathology questions, especially those including images, is limited. There is a notable need to enhance GPT-4V's proficiency in interpreting images.
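The abstract does not define "aggregated" and "consistent" accuracy. One reading that matches the reported figures (run accuracies of 67% and 72%, aggregated 72%, consistent 67%) is that aggregated counts questions answered correctly in at least one of the two runs, while consistent counts those answered correctly in both. A small sketch under that assumption:

def two_run_accuracies(run1, run2, truth):
    # run1, run2: model answers from two independent runs over the same questions
    # truth: the gold answers
    n = len(truth)
    c1 = [a == t for a, t in zip(run1, truth)]
    c2 = [a == t for a, t in zip(run2, truth)]
    aggregated = sum(a or b for a, b in zip(c1, c2)) / n   # correct in at least one run
    consistent = sum(a and b for a, b in zip(c1, c2)) / n  # correct in both runs
    return aggregated, consistent

The gap between the two metrics (69% aggregated vs 39% consistent on image interpretation) is what flags the model's run-to-run instability on images.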


Subjects
Kidney , Humans , Kidney/pathology , Clinical Competence , Educational Measurement/methods , Nephrology/education , Kidney Diseases/pathology , Kidney Diseases/diagnosis
3.
JMIR AI; 3: e58342, 2024 May 31.
Article in English | MEDLINE | ID: mdl-38875669

ABSTRACT

BACKGROUND: The integration of artificial intelligence (AI), particularly deep learning models, has transformed the landscape of medical technology, especially in the field of diagnosis using imaging and physiological data. In otolaryngology, AI has shown promise in image classification for middle ear diseases. However, existing models often lack patient-specific data and clinical context, limiting their universal applicability. The emergence of GPT-4 Vision (GPT-4V) has enabled a multimodal diagnostic approach, integrating language processing with image analysis. OBJECTIVE: In this study, we investigated the effectiveness of GPT-4V in diagnosing middle ear diseases by integrating patient-specific data with otoscopic images of the tympanic membrane. METHODS: The study was divided into two phases: (1) establishing a model with appropriate prompts and (2) validating the ability of the optimal prompt model to classify images. In total, 305 otoscopic images of 4 middle ear diseases (acute otitis media, middle ear cholesteatoma, chronic otitis media, and otitis media with effusion) were obtained from patients who visited Shinshu University or Jichi Medical University between April 2010 and December 2023. The optimized GPT-4V settings were established using prompts and patients' data, and the model created with the optimal prompt was used to verify the diagnostic accuracy of GPT-4V on 190 images. To compare the diagnostic accuracy of GPT-4V with that of physicians, 30 clinicians completed a web-based questionnaire consisting of the same 190 images. RESULTS: The multimodal AI approach achieved an accuracy of 82.1%, superior to that of certified pediatricians (70.6%) but behind that of otolaryngologists (more than 95%). The model's disease-specific accuracy rates were 89.2% for acute otitis media, 76.5% for chronic otitis media, 79.3% for middle ear cholesteatoma, and 85.7% for otitis media with effusion, which highlights the need for disease-specific optimization. Comparisons with physicians revealed promising results, suggesting the potential of GPT-4V to augment clinical decision-making. CONCLUSIONS: Despite its advantages, challenges such as data privacy and ethical considerations must be addressed. Overall, this study underscores the potential of multimodal AI for enhancing diagnostic accuracy and improving patient care in otolaryngology. Further research is warranted to optimize and validate this approach in diverse clinical settings.
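The optimised prompt itself is not reproduced in the abstract. A minimal sketch of the kind of multimodal request it describes, pairing patient-specific text with a tympanic-membrane image, might look like the following; the field names, wording, and model identifier are illustrative assumptions, not the authors' validated prompt.

import base64
from openai import OpenAI

def diagnose_middle_ear(client, image_path, age, sex, symptoms):
    # Pair patient-specific context with the otoscopic image in one request
    with open(image_path, "rb") as f:
        img = base64.b64encode(f.read()).decode()
    prompt = (
        f"Patient: {age}-year-old {sex}. Symptoms: {symptoms}. "
        "From the otoscopic image of the tympanic membrane, answer with "
        "exactly one diagnosis: acute otitis media, middle ear cholesteatoma, "
        "chronic otitis media, or otitis media with effusion."
    )
    resp = client.chat.completions.create(
        model="gpt-4-vision-preview",  # assumed model identifier
        messages=[{"role": "user", "content": [
            {"type": "text", "text": prompt},
            {"type": "image_url",
             "image_url": {"url": f"data:image/jpeg;base64,{img}"}},
        ]}],
        max_tokens=20,
    )
    return resp.choices[0].message.content.strip()

The design point is that the text portion carries the clinical context that pure image classifiers lack, which is what the study credits for the accuracy gain over image-only models.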
