1.
Ophthalmol Ther ; 2024 Aug 11.
Article in English | MEDLINE | ID: mdl-39127983

ABSTRACT

INTRODUCTION: The aim of this work is to develop a deep learning (DL) system for rapidly and accurately screening for intraocular tumor (IOT), retinal detachment (RD), vitreous hemorrhage (VH), and posterior scleral staphyloma (PSS) using ocular B-scan ultrasound images. METHODS: Ultrasound images from five clinically confirmed categories, including vitreous hemorrhage, retinal detachment, intraocular tumor, posterior scleral staphyloma, and normal eyes, were used to develop and evaluate a fine-grained classification system (the Dual-Path Lesion Attention Network, DPLA-Net). Images were derived from five centers scanned by different sonographers and divided into training, validation, and test sets in a ratio of 7:1:2. Two senior ophthalmologists and four junior ophthalmologists were recruited to evaluate the system's performance. RESULTS: This multi-center cross-sectional study was conducted in six hospitals in China. A total of 6054 ultrasound images were collected; 4758 images were used for the training and validation of the system, and 1296 images were used as a testing set. DPLA-Net achieved a mean accuracy of 0.943 in the testing set, and the area under the curve was 0.988 for IOT, 0.997 for RD, 0.994 for PSS, 0.988 for VH, and 0.993 for normal. With the help of DPLA-Net, the accuracy of the four junior ophthalmologists improved from 0.696 (95% confidence interval [CI] 0.684-0.707) to 0.919 (95% CI 0.912-0.926, p < 0.001), and the time used for classifying each image was reduced from 16.84 ± 2.34 s to 10.09 ± 1.79 s. CONCLUSIONS: The proposed DPLA-Net showed high accuracy for screening and classifying multiple ophthalmic diseases using B-scan ultrasound images across multiple centers. Moreover, the system can improve the efficiency of classification by ophthalmologists.
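The 7:1:2 train/validation/test partition described above can be sketched as a simple seeded random split. This is a hypothetical illustration only: the paper partitions images by center and sonographer, and the exact counts it reports (4758 training/validation, 1296 testing) reflect that grouping rather than a naive shuffle. The function name `split_7_1_2` and the placeholder file names are assumptions, not from the paper.

```python
import random

def split_7_1_2(samples, seed=42):
    """Shuffle (image_path, label) pairs and split them 7:1:2
    into train, validation, and test lists."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * 0.7)
    n_val = int(n * 0.1)
    train = shuffled[:n_train]
    val = shuffled[n_train:n_train + n_val]
    test = shuffled[n_train + n_val:]  # remaining ~20%
    return train, val, test

# Placeholder dataset: 6054 images across the five categories.
samples = [(f"img_{i}.png", i % 5) for i in range(6054)]
train, val, test = split_7_1_2(samples)
```

In practice a per-center or per-patient split (as the multi-center design implies) avoids leaking near-duplicate scans of the same eye across partitions.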

2.
Br J Ophthalmol ; 2023 Oct 18.
Article in English | MEDLINE | ID: mdl-37852741

ABSTRACT

BACKGROUND: Ultrasound imaging is suitable for detecting and diagnosing ophthalmic abnormalities. However, a shortage of experienced sonographers and ophthalmologists remains a problem. This study aims to develop a multibranch transformer network (MBT-Net) for the automated classification of multiple ophthalmic diseases using B-mode ultrasound images. METHODS: Ultrasound images with six clinically confirmed categories, including normal, retinal detachment, vitreous haemorrhage, intraocular tumour, posterior scleral staphyloma and other abnormalities, were used to develop and evaluate the MBT-Net. Images were derived from five different ultrasonic devices operated by different sonographers and divided into a training set, a validation set, an internal testing set and a temporal external testing set. Two senior ophthalmologists and two junior ophthalmologists were recruited to compare the model's performance. RESULTS: A total of 10 184 ultrasound images were collected. The MBT-Net achieved an accuracy of 87.80% (95% CI 86.26% to 89.18%) in the internal testing set, which was significantly higher than that of the junior ophthalmologists (95% CI 67.37% to 79.16%; both p<0.05) and lower than that of the senior ophthalmologists (95% CI 89.45% to 92.61%; both p<0.05). The micro-average area under the curve of the six-category classification was 0.98. With reference to comprehensive clinical diagnosis, the measurements of agreement were almost perfect in the MBT-Net (kappa=0.85, p<0.05). There was no significant difference in the accuracy of the MBT-Net across the five ultrasonic devices (p=0.27). The MBT-Net achieved an accuracy of 82.21% (95% CI 78.45% to 85.44%) in the temporal external testing set. CONCLUSIONS: The MBT-Net showed high accuracy for screening and diagnosing multiple ophthalmic diseases using only ultrasound images across multiple operators and devices.
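The micro-average AUC reported above (0.98 across six categories) is computed by flattening the one-hot labels and predicted scores of all classes into a single binary ranking problem. A minimal dependency-free sketch, assuming one-hot ground truth and per-class probability rows (the function names are illustrative, not from the paper):

```python
def binary_auc(y_true, y_score):
    """AUC for binary labels via pairwise comparison:
    the fraction of (positive, negative) pairs ranked correctly,
    counting ties as half."""
    pos = [s for s, t in zip(y_score, y_true) if t == 1]
    neg = [s for s, t in zip(y_score, y_true) if t == 0]
    wins = sum(1.0 if p > q else 0.5 if p == q else 0.0
               for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

def micro_average_auc(y_true_onehot, y_prob):
    """Micro-average AUC: pool every (class, sample) cell into one
    binary problem, then compute a single AUC over the pooled cells."""
    flat_true = [t for row in y_true_onehot for t in row]
    flat_score = [s for row in y_prob for s in row]
    return binary_auc(flat_true, flat_score)
```

Micro-averaging weights each image-class cell equally, so frequent classes dominate the pooled score; a macro-average (mean of per-class AUCs) would weight the six categories equally instead.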

3.
Cureus ; 15(6): e40895, 2023 Jun.
Article in English | MEDLINE | ID: mdl-37492832

ABSTRACT

Objective The primary aim of this research was to address the limitations observed in the medical knowledge of prevalent large language models (LLMs) such as ChatGPT by creating a specialized language model with enhanced accuracy in medical advice. Methods We achieved this by adapting and refining the large language model meta-AI (LLaMA) using a large dataset of 100,000 patient-doctor dialogues sourced from a widely used online medical consultation platform. These conversations were cleaned and anonymized to respect privacy concerns. In addition to the model refinement, we incorporated a self-directed information retrieval mechanism, allowing the model to access and utilize real-time information from online sources like Wikipedia and data from curated offline medical databases. Results The fine-tuning of the model with real-world patient-doctor interactions significantly improved the model's ability to understand patient needs and provide informed advice. By equipping the model with self-directed information retrieval from reliable online and offline sources, we observed substantial improvements in the accuracy of its responses. Conclusion Our proposed ChatDoctor represents a significant advancement in medical LLMs, demonstrating a marked improvement in understanding patient inquiries and providing accurate advice. Given the high stakes and low error tolerance in the medical field, such enhancements in providing accurate and reliable information are not only beneficial but essential.
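The retrieval-then-answer pattern described above (look up relevant passages, then condition the model on them) can be sketched with a deliberately naive keyword-overlap retriever. This is a hedged illustration of the general mechanism only: the paper's actual retrieval over Wikipedia and offline medical databases is more sophisticated, and `retrieve`, `build_prompt`, and the sample documents are all hypothetical.

```python
def retrieve(query, documents, k=2):
    """Rank documents by count of shared lowercase terms with the
    query and return the top k. A stand-in for real retrieval."""
    q_terms = set(query.lower().split())
    scored = sorted(documents,
                    key=lambda d: len(q_terms & set(d.lower().split())),
                    reverse=True)
    return scored[:k]

def build_prompt(question, documents, k=2):
    """Prepend retrieved context to the patient question before
    passing it to the fine-tuned model."""
    context = "\n".join(retrieve(question, documents, k))
    return f"Context:\n{context}\n\nPatient question: {question}\nAnswer:"

# Toy offline knowledge base (illustrative content only).
MEDICAL_DOCS = [
    "retinal detachment often causes sudden flashes and floaters",
    "vitreous hemorrhage is bleeding into the vitreous cavity",
    "the common cold is a viral upper respiratory infection",
]
```

Grounding the generation in retrieved text is what lets the model cite up-to-date facts instead of relying solely on knowledge frozen at fine-tuning time.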

4.
IEEE J Biomed Health Inform ; 27(7): 3525-3536, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37126620

ABSTRACT

Precise and rapid categorization of images in the B-scan ultrasound modality is vital for diagnosing ocular diseases. Nevertheless, distinguishing various diseases in ultrasound still challenges experienced ophthalmologists. Thus, a novel contrastive disentangled network (CDNet) is developed in this work, aiming to tackle the fine-grained image categorization (FGIC) challenges of ocular abnormalities in ultrasound images, including intraocular tumor (IOT), retinal detachment (RD), posterior scleral staphyloma (PSS), and vitreous hemorrhage (VH). The three essential components of CDNet are the weakly-supervised lesion localization module (WSLL), the contrastive multi-zoom (CMZ) strategy, and the hyperspherical contrastive disentangled loss (HCD-Loss). These components facilitate feature disentanglement for fine-grained recognition in both the input and output aspects. The proposed CDNet is validated on our ZJU Ocular Ultrasound Dataset (ZJUOUSD), consisting of 5213 samples. Furthermore, the generalization ability of CDNet is validated on two public and widely-used chest X-ray FGIC benchmarks. Quantitative and qualitative results demonstrate the efficacy of our proposed CDNet, which achieves state-of-the-art performance in the FGIC task.
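The "hyperspherical" part of HCD-Loss refers to contrasting embeddings after projecting them onto the unit sphere, so that similarity is measured by angle rather than magnitude. The sketch below shows a generic InfoNCE-style contrastive loss on L2-normalized vectors; it is not the paper's HCD-Loss (which additionally disentangles features), and the function names and temperature value are assumptions.

```python
import math

def normalize(v):
    """Project a vector onto the unit hypersphere (L2 normalization)."""
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def contrastive_loss(anchor, positive, negatives, temperature=0.1):
    """InfoNCE-style loss: pull the anchor toward its positive and push
    it away from negatives, using cosine similarity on the sphere.
    Computed with a max-shift for numerical stability."""
    a = normalize(anchor)
    sims = [sum(x * y for x, y in zip(a, normalize(v))) / temperature
            for v in [positive] + negatives]
    m = max(sims)
    denom = sum(math.exp(s - m) for s in sims)
    # -log softmax probability of the positive pair; always >= 0.
    return -(sims[0] - m) + math.log(denom)
```

A lower temperature sharpens the softmax, penalizing hard negatives more strongly; values around 0.07-0.5 are common in the contrastive-learning literature.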


Subjects
Face , Ophthalmologists , Humans , Benchmarking , Neuroimaging , Thorax