Evaluation of Spectrum-Aided Visual Enhancer (SAVE) in Esophageal Cancer Detection Using YOLO Frameworks.
Chou, Chu-Kuang; Karmakar, Riya; Tsao, Yu-Ming; Jie, Lim Wei; Mukundan, Arvind; Huang, Chien-Wei; Chen, Tsung-Hsien; Ko, Chau-Yuan; Wang, Hsiang-Chen.
Affiliation
  • Chou CK; Division of Gastroenterology and Hepatology, Department of Internal Medicine, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chia-Yi 60002, Taiwan.
  • Karmakar R; Obesity Center, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chia-Yi 60002, Taiwan.
  • Tsao YM; Department of Medical Quality, Ditmanson Medical Foundation Chia-Yi Christian Hospital, Chia-Yi 60002, Taiwan.
  • Jie LW; Department of Mechanical Engineering, National Chung Cheng University, Chia-Yi 62102, Taiwan.
  • Mukundan A; Department of Mechanical Engineering, National Chung Cheng University, Chia-Yi 62102, Taiwan.
  • Huang CW; Department of Computer Science, Multimedia University (Cyberjaya), Persiaran Multimedia, Cyberjaya 63100, Malaysia.
  • Chen TH; Department of Mechanical Engineering, National Chung Cheng University, Chia-Yi 62102, Taiwan.
  • Ko CY; Department of Gastroenterology, Kaohsiung Armed Forces General Hospital, 2, Zhongzheng 1st. Rd., Lingya District, Kaohsiung City 80284, Taiwan.
  • Wang HC; Department of Nursing, Tajen University, 20, Weixin Rd., Yanpu Township 90741, Pingtung County, Taiwan.
Diagnostics (Basel); 14(11), 2024 May 29.
Article in English | MEDLINE | ID: mdl-38893655
ABSTRACT
The early detection of esophageal cancer (EC) remains a substantial challenge, which contributes to its status as a leading cause of cancer-related deaths. This study used You Only Look Once (YOLO) frameworks, specifically YOLOv5 and YOLOv8, to predict and detect early-stage EC using a dataset sourced from the Division of Gastroenterology and Hepatology, Ditmanson Medical Foundation Chia-Yi Christian Hospital. The dataset comprised 2741 white-light images (WLI) and 2741 hyperspectral narrowband images (HSI-NBI), divided into 60% training, 20% validation, and 20% test sets to facilitate robust detection. The HSI-NBI images were produced from the WLI using a conversion method called the spectrum-aided visual enhancer (SAVE), an algorithm that transforms a WLI into an NBI without requiring a spectrometer or spectral head. The main goal was to identify dysplasia and squamous cell carcinoma (SCC). Model performance was evaluated using five essential metrics: precision, recall, F1-score, mAP, and the confusion matrix. The experimental results demonstrated that the HSI model learned SCC characteristics better than the model trained on the original RGB images. Within the YOLO framework, YOLOv5 outperformed YOLOv8, indicating that YOLOv5's design had superior feature-learning capability. The YOLOv5 model used in conjunction with HSI-NBI performed best, achieving a precision of 85.1% (CI95 83.2-87.0%, p < 0.01) in diagnosing SCC and an F1-score of 52.5% (CI95 50.1-54.9%, p < 0.01) in detecting dysplasia. These figures were substantially better than those of YOLOv8, which achieved a precision of 81.7% (CI95 79.6-83.8%, p < 0.01) and an F1-score of 49.4% (CI95 47.0-51.8%, p < 0.05). Across multiple scenarios, the YOLOv5 model with HSI outperformed the other models, and the difference was statistically significant, indicating a genuine improvement in detection capability.
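As a rough illustration of the training and evaluation pipeline described in the abstract, the sketch below fine-tunes a YOLO detector with the Ultralytics Python API and reads back precision, recall, and mAP for the held-out test split. This is not the authors' code: the data file name esophageal_hsi_nbi.yaml, the pretrained weights, and the hyperparameters are placeholders, and the paper does not state which training interface was used.

```python
# Minimal sketch (assumptions noted): fine-tune a YOLO detector on a dataset
# already exported in YOLO format with a 60/20/20 train/val/test split, then
# evaluate on the test split. "esophageal_hsi_nbi.yaml" is a hypothetical data
# file listing the split folders and the two classes (dysplasia, SCC).
from ultralytics import YOLO

# The study compared YOLOv5 and YOLOv8; a YOLOv5 checkpoint such as
# "yolov5su.pt" could be substituted here for the YOLOv5 arm.
model = YOLO("yolov8n.pt")

# Train on the 60% training split, validating on the 20% validation split
# defined in the data file (epochs/imgsz are illustrative, not the paper's).
model.train(data="esophageal_hsi_nbi.yaml", epochs=100, imgsz=640)

# Evaluate on the held-out 20% test split.
metrics = model.val(data="esophageal_hsi_nbi.yaml", split="test")

# Mean precision, mean recall, and mAP@0.5; the reported F1-score follows
# from precision and recall as 2*P*R / (P + R).
p, r = metrics.box.mp, metrics.box.mr
map50 = metrics.box.map50
f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
print(f"precision={p:.3f} recall={r:.3f} mAP50={map50:.3f} F1={f1:.3f}")
```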
Full text: 1 Database: MEDLINE Language: English Year of publication: 2024 Document type: Article