Integrated image and location analysis for wound classification: a deep learning approach.
Patel, Yash; Shah, Tirth; Dhar, Mrinal Kanti; Zhang, Taiyu; Niezgoda, Jeffrey; Gopalakrishnan, Sandeep; Yu, Zeyun.
Affiliations
  • Patel Y; Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
  • Shah T; Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
  • Dhar MK; Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
  • Zhang T; Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
  • Niezgoda J; Advancing the Zenith of Healthcare (AZH) Wound and Vascular Center, Milwaukee, WI, USA.
  • Gopalakrishnan S; College of Nursing, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
  • Yu Z; Department of Computer Science, University of Wisconsin-Milwaukee, Milwaukee, WI, USA. yuz@uwm.edu.
Sci Rep; 14(1): 7043, 2024 Mar 25.
Article in English | MEDLINE | ID: mdl-38528003
ABSTRACT
The global burden of acute and chronic wounds presents a compelling case for improving wound classification methods, a vital step in diagnosis and in determining optimal treatment. Recognizing this need, we introduce a multi-modal network based on a deep convolutional neural network that classifies wounds into four categories: diabetic, pressure, surgical, and venous ulcers. Our multi-modal network uses wound images together with their corresponding body locations for more precise classification. A unique aspect of our methodology is a body map system that facilitates accurate wound-location tagging, improving on traditional wound image classification techniques. A distinctive feature of our approach is the integration of backbone models such as VGG16, ResNet152, and EfficientNet within a novel architecture that includes spatial and channel-wise Squeeze-and-Excitation modules, Axial Attention, and an Adaptive Gated Multi-Layer Perceptron, providing a robust foundation for classification. The network was trained and evaluated on two distinct datasets comprising wound images and corresponding location information. Notably, it outperformed traditional methods, reaching accuracies of 74.79-100% for Region of Interest (ROI) classification without location, 73.98-100% for ROI classification with location, and 78.10-100% for whole-image classification, a significant improvement over previously reported performance metrics. Our results indicate the potential of the multi-modal network as an effective decision-support tool for wound image classification, paving the way for its application in various clinical contexts.
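The abstract gives no implementation details, so the following PyTorch sketch only illustrates the general idea of a two-branch multi-modal classifier: a convolutional image branch (VGG16 shown here, one of the backbones the paper names) fused by concatenation with a dense embedding of the body location. All specifics, including the layer sizes, the fusion layout, the simplified channel-wise Squeeze-and-Excitation block, and the placeholder NUM_LOCATIONS, are assumptions for illustration, not the authors' architecture.

```python
# Hedged sketch of a multi-modal wound classifier (image + body location).
# Not the authors' code: fusion layout, sizes, and NUM_LOCATIONS are assumed.
import torch
import torch.nn as nn
from torchvision.models import vgg16  # torchvision >= 0.13 API

NUM_LOCATIONS = 30   # assumption: number of body-map regions; paper's count may differ
NUM_CLASSES = 4      # diabetic, pressure, surgical, venous ulcers

class ChannelSE(nn.Module):
    """Simplified channel-wise Squeeze-and-Excitation: reweights feature channels."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (B, C, H, W); squeeze via global average pooling, excite per channel
        w = self.fc(x.mean(dim=(2, 3)))
        return x * w[:, :, None, None]

class MultiModalWoundNet(nn.Module):
    def __init__(self):
        super().__init__()
        backbone = vgg16(weights=None)      # a ResNet152/EfficientNet could be swapped in
        self.features = backbone.features   # conv feature extractor -> (B, 512, H', W')
        self.se = ChannelSE(512)
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.loc_embed = nn.Sequential(     # location branch: one-hot region -> dense vector
            nn.Linear(NUM_LOCATIONS, 64),
            nn.ReLU(inplace=True),
        )
        self.head = nn.Sequential(          # classifier over the fused representation
            nn.Linear(512 + 64, 256),
            nn.ReLU(inplace=True),
            nn.Dropout(0.5),
            nn.Linear(256, NUM_CLASSES),
        )

    def forward(self, image, location_onehot):
        f = self.pool(self.se(self.features(image))).flatten(1)  # (B, 512)
        l = self.loc_embed(location_onehot)                      # (B, 64)
        return self.head(torch.cat([f, l], dim=1))               # (B, 4) logits

model = MultiModalWoundNet()
logits = model(torch.randn(2, 3, 224, 224), torch.eye(NUM_LOCATIONS)[:2])
print(logits.shape)  # torch.Size([2, 4])
```

Concatenating the pooled image features with the location embedding is the simplest fusion strategy; the paper's Axial Attention and Adaptive Gated MLP components presumably refine this interaction, but their exact wiring is not described in the abstract.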
Full text: 1 Collections: 01-international Database: MEDLINE Main subject: Neoplasms, Squamous Cell / Deep Learning / Accidental Injuries Limits: Humans Language: En Journal: Sci Rep Publication year: 2024 Document type: Article Affiliation country: United States