Results 1 - 20 of 28
1.
Curr Med Imaging ; 20(1): e15734056313837, 2024.
Article in English | MEDLINE | ID: mdl-39039669

ABSTRACT

INTRODUCTION: This study introduces SkinLiTE, a lightweight supervised contrastive learning model tailored to enhance the detection and typification of skin lesions in dermoscopic images. The core of SkinLiTE lies in its unique integration of supervised and contrastive learning approaches, which leverages labeled data to learn generalizable representations. This approach is particularly adept at handling the complexities and class imbalances inherent in skin lesion datasets. METHODS: The methodology encompasses a two-phase learning process. In the first phase, SkinLiTE utilizes an encoder network and a projection head to transform and project dermoscopic images into a feature space where a contrastive loss is applied, minimizing intra-class variations while maximizing inter-class differences. The second phase freezes the encoder's weights, leveraging the learned representations for classification through a series of dense and dropout layers. The model was evaluated using three datasets from Skin Cancer ISIC 2019-2020, covering a wide range of skin conditions. RESULTS: SkinLiTE demonstrated superior performance across various metrics, including accuracy, AUC, and F1 scores, particularly when compared with traditional supervised learning models. Notably, SkinLiTE achieved an accuracy of 0.9087 using AugMix augmentation for binary classification of skin lesions. It also showed results comparable with state-of-the-art approaches from the ISIC challenge without relying on external data, underscoring its efficacy and efficiency. The results highlight the potential of SkinLiTE as a significant step forward in the field of dermatological AI, offering a robust, efficient, and accurate tool for skin lesion detection and classification.
Its lightweight architecture and ability to handle imbalanced datasets make it particularly suited for integration into Internet of Medical Things environments, paving the way for enhanced remote patient monitoring and diagnostic capabilities. CONCLUSION: This research contributes to the evolving landscape of AI in healthcare, demonstrating the impact of innovative learning methodologies in medical image analysis.
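The two-phase scheme described above hinges on a supervised contrastive loss that pulls same-label embeddings together and pushes different-label embeddings apart. The paper's code is not shown here; the following is a minimal NumPy sketch of such a loss (function name, temperature, and toy batch are illustrative, not SkinLiTE's implementation):

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Supervised contrastive loss (SupCon-style): for each anchor,
    pull embeddings that share its label together and push the rest apart."""
    # project embeddings onto the unit sphere
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature      # pairwise similarity logits
    n = len(labels)
    not_self = ~np.eye(n, dtype=bool)              # exclude self-pairs
    positives = (labels[:, None] == labels[None, :]) & not_self
    # log-softmax over all non-self pairs for each anchor
    log_prob = sim - np.log((np.exp(sim) * not_self).sum(axis=1, keepdims=True))
    # negated average log-probability of the positive pairs
    mean_log_prob_pos = (positives * log_prob).sum(axis=1) / positives.sum(axis=1)
    return -mean_log_prob_pos.mean()

rng = np.random.default_rng(0)
feats = rng.normal(size=(8, 16))                   # toy batch of 8 embeddings
labels = np.array([0, 0, 1, 1, 0, 1, 0, 1])        # toy binary lesion labels
loss = supcon_loss(feats, labels)
```

In a real two-phase setup, this loss would train the encoder plus projection head; the frozen encoder's features would then feed the dense classification layers.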


Subject(s)
Dermoscopy , Skin Neoplasms , Supervised Machine Learning , Humans , Dermoscopy/methods , Skin Neoplasms/diagnostic imaging , Image Interpretation, Computer-Assisted/methods , Skin/diagnostic imaging
2.
Diagnostics (Basel) ; 14(13)2024 Jun 24.
Article in English | MEDLINE | ID: mdl-39001229

ABSTRACT

Skin lesion classification is vital for the early detection and diagnosis of skin diseases, facilitating timely intervention and treatment. However, existing classification methods face challenges in managing complex information and long-range dependencies in dermoscopic images. Therefore, this research aims to enhance the feature representation by incorporating local, global, and hierarchical features to improve the performance of skin lesion classification. We introduce a novel dual-track deep learning (DL) model in this research for skin lesion classification. The first track utilizes a modified DenseNet-169 architecture that incorporates a Coordinate Attention Module (CoAM). The second track employs a customized convolutional neural network (CNN) comprising a Feature Pyramid Network (FPN) and Global Context Network (GCN) to capture multiscale features and global contextual information. The local features from the first track and the global features from the second track are used for precise localization and modeling of the long-range dependencies. By leveraging these architectural advancements within the DenseNet framework, the proposed neural network achieved better performance compared to previous approaches. The network was trained and validated using the HAM10000 dataset, achieving a classification accuracy of 93.2%.

3.
Med Biol Eng Comput ; 2024 Jun 04.
Article in English | MEDLINE | ID: mdl-38833025

ABSTRACT

Melanoma is an uncommon and dangerous type of skin cancer. Dermoscopic imaging aids skilled dermatologists in detection, yet the nuances between melanoma and non-melanoma conditions complicate diagnosis. Early identification of melanoma is vital for successful treatment, but manual diagnosis is time-consuming and requires a trained dermatologist. To overcome this issue, this article proposes an Optimized Attention-Induced Multihead Convolutional Neural Network with EfficientNetV2-fostered melanoma classification using dermoscopic images (AIMCNN-ENetV2-MC). The input images are drawn from a dermoscopic image dataset. An Adaptive Distorted Gaussian Matched Filter (ADGMF) is used to remove noise and enhance the quality of the dermoscopic images. These pre-processed images are fed to the AIMCNN. The AIMCNN-ENetV2 classifies acral melanoma and benign nevus, and a Boosted Chimp Optimization Algorithm (BCOA) optimizes the AIMCNN-ENetV2 classifier for accurate classification. The proposed AIMCNN-ENetV2-MC is implemented in Python. The proposed approach attains an outstanding overall accuracy of 98.75% and a lower computation time of 98 s compared with the existing models.

4.
Comput Biol Med ; 176: 108594, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38761501

ABSTRACT

Skin cancer is one of the common types of cancer. It spreads quickly and is not easy to detect in the early stages, posing a major threat to human health. In recent years, deep learning methods have attracted widespread attention for skin cancer detection in dermoscopic images. However, training a practical classifier becomes highly challenging due to inter-class similarity and intra-class variation in skin lesion images. To address these problems, we propose a multi-scale fusion structure that combines shallow and deep features for more accurate classification. Simultaneously, we implement three approaches to the problem of class imbalance: class weighting, label smoothing, and resampling. In addition, the HAM10000_RE dataset strips out hair features to demonstrate the role of hair features in the classification process. We demonstrate that the region of interest is the most critical classification feature for the HAM10000_SE dataset, which segments lesion regions. We evaluated the effectiveness of our model using the HAM10000 and ISIC2019 datasets. The results showed that this method performed well in dermoscopic classification tasks, with an ACC of 94.0% and AUC of 99.3% on the HAM10000 dataset, and an ACC of 89.8% on the ISIC2019 dataset. The overall performance of our model is excellent in comparison to state-of-the-art models.
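The three imbalance remedies named above (class weighting, label smoothing, resampling) can each be expressed in a few lines. This is a minimal NumPy sketch of the general techniques, not the paper's implementation; the 9:1 toy label set and the inverse-frequency weighting rule are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
labels = np.array([0] * 90 + [1] * 10)     # 9:1 imbalanced toy label set

# 1) class weighting: inverse-frequency weights to scale the loss per class
counts = np.bincount(labels)
class_weights = counts.sum() / (len(counts) * counts)

# 2) label smoothing: soften one-hot targets toward the uniform distribution
eps = 0.1
one_hot = np.eye(len(counts))[labels]
smoothed = one_hot * (1 - eps) + eps / len(counts)

# 3) resampling: draw training indices with probability inverse to class size
p = class_weights[labels]
resampled = rng.choice(len(labels), size=len(labels), p=p / p.sum())
minority_share = (labels[resampled] == 1).mean()   # roughly balanced now
```

In a deep learning pipeline these would translate to a weighted loss, smoothed targets, and a weighted sampler feeding the training batches.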


Subject(s)
Dermoscopy , Skin Neoplasms , Humans , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Skin Neoplasms/classification , Dermoscopy/methods , Deep Learning , Image Interpretation, Computer-Assisted/methods , Skin/diagnostic imaging , Skin/pathology , Databases, Factual , Algorithms
5.
Diagnostics (Basel) ; 14(7)2024 Apr 02.
Article in English | MEDLINE | ID: mdl-38611666

ABSTRACT

A crucial challenge in critical settings like medical diagnosis is making deep learning models used in decision-making systems interpretable. Efforts in Explainable Artificial Intelligence (XAI) are underway to address this challenge. Yet, many XAI methods are evaluated on broad classifiers and fail to address complex, real-world issues, such as medical diagnosis. In our study, we focus on enhancing user trust and confidence in automated AI decision-making systems, particularly for diagnosing skin lesions, by tailoring an XAI method to explain an AI model's ability to identify various skin lesion types. We generate explanations using synthetic images of skin lesions as examples and counterexamples, offering a method for practitioners to pinpoint the critical features influencing the classification outcome. A validation survey involving domain experts, novices, and laypersons has demonstrated that explanations increase trust and confidence in the automated decision system. Furthermore, our exploration of the model's latent space reveals clear separations among the most common skin lesion classes, a distinction that likely arises from the unique characteristics of each class and could assist in correcting frequent misdiagnoses by human professionals.

6.
Sci Rep ; 14(1): 9127, 2024 04 21.
Article in English | MEDLINE | ID: mdl-38644396

ABSTRACT

Vitiligo is a hypopigmented skin disease characterized by the loss of melanin. The progressive nature and widespread incidence of vitiligo necessitate timely and accurate detection. A single diagnostic test often falls short of providing definitive confirmation of the condition, necessitating assessment by dermatologists who specialize in vitiligo. However, the current scarcity of such specialized medical professionals presents a significant challenge. To mitigate this issue and enhance diagnostic accuracy, it is essential to build deep learning models that can support and expedite the detection process. This study endeavors to establish a deep learning framework to enhance the diagnostic accuracy of vitiligo. To this end, a comparative analysis of five models, including the ResNet series (ResNet34, ResNet50, and ResNet101) and the Swin Transformer series (Swin Transformer Base and Swin Transformer Large), was conducted under uniform conditions to identify the model with superior classification capabilities. Moreover, the study sought to augment the interpretability of these models by selecting one that not only provides accurate diagnostic outcomes but also offers visual cues highlighting the regions pertinent to vitiligo. The empirical findings reveal that the Swin Transformer Large model achieved the best classification performance, with an AUC, accuracy, sensitivity, and specificity of 0.94, 93.82%, 94.02%, and 93.5%, respectively. In terms of interpretability, the highlighted regions in the class activation map correspond to the lesion regions of the vitiligo images, which shows that it effectively indicates the specific category regions associated with the decision-making of dermatological diagnosis.
Additionally, the visualization of feature maps generated in the middle layer of the deep learning model provides insights into the internal mechanisms of the model, which is valuable for improving the interpretability of the model, tuning performance, and enhancing clinical applicability. The outcomes of this study underscore the significant potential of deep learning models to revolutionize medical diagnosis by improving diagnostic accuracy and operational efficiency. The research highlights the necessity for ongoing exploration in this domain to fully leverage the capabilities of deep learning technologies in medical diagnostics.
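Accuracy, sensitivity, and specificity as reported above are all derived from the binary confusion matrix. A minimal sketch of that computation follows (function name and toy labels are illustrative and unrelated to the study's data):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate) and specificity
    (true-negative rate) computed from the binary confusion matrix."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# toy example: 8 cases, 1 = lesion present
m = binary_metrics([1, 1, 1, 0, 0, 0, 0, 1], [1, 1, 0, 0, 0, 1, 0, 1])
# m == {"accuracy": 0.75, "sensitivity": 0.75, "specificity": 0.75}
```

For multi-class problems such as vitiligo vs. other hypopigmented disorders, these would typically be computed per class in a one-vs-rest fashion.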


Subject(s)
Deep Learning , Vitiligo , Vitiligo/diagnosis , Humans
7.
Sci Rep ; 14(1): 9336, 2024 04 23.
Article in English | MEDLINE | ID: mdl-38653997

ABSTRACT

Skin cancer is the most prevalent kind of cancer in people. It is estimated that more than 1 million people worldwide get skin cancer every year. The effectiveness of the disease's therapy is significantly impacted by early identification of this illness. Preprocessing is the initial detection stage, enhancing the quality of skin images by removing undesired background noise and objects. This study aims to compile the preprocessing techniques for skin cancer imaging that are currently accessible. Researchers looking into automated skin cancer diagnosis might use this article as an excellent place to start. The fully convolutional encoder-decoder network and Sparrow search algorithm (FCEDN-SpaSA) are proposed in this study for the segmentation of dermoscopic images. The individual wolf method and the ensemble ghosting technique are integrated to generate a neighbour-based search strategy in SpaSA, stressing the correct balance between exploration and exploitation. The classification procedure is accomplished by using an adaptive CNN technique to discriminate between normal skin and malignant skin lesions suggestive of disease. Our method provides classification accuracies comparable to commonly used incremental learning techniques while using less energy, storage space, memory access, and training time (only network updates with new training samples, no network sharing). In a simulation, the segmentation performance of the proposed technique on the ISBI 2017, ISIC 2018, and PH2 datasets reached accuracies of 95.28%, 95.89%, 92.70%, and 98.78%, respectively; classification performance was assessed on the same datasets, reaching an accuracy of 91.67%. The efficiency of the suggested strategy is demonstrated through comparisons with cutting-edge methodologies.


Subject(s)
Algorithms , Dermoscopy , Neural Networks, Computer , Skin Neoplasms , Humans , Skin Neoplasms/diagnosis , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/classification , Skin Neoplasms/pathology , Dermoscopy/methods , Image Processing, Computer-Assisted/methods , Image Interpretation, Computer-Assisted/methods , Skin/pathology , Skin/diagnostic imaging
8.
Microsc Res Tech ; 87(6): 1271-1285, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38353334

ABSTRACT

Skin is the exposed part of the human body that must constantly be protected from UV rays, heat, light, dust, and other hazardous radiation. One of the most dangerous illnesses that affect people is skin cancer. A type of skin cancer called melanoma starts in the melanocytes, which regulate the colour in human skin. Reducing the fatality rate from skin cancer requires early detection and diagnosis of conditions like melanoma. In this article, a Self-attention based cycle-consistent generative adversarial network optimized with the Archerfish Hunting Optimization Algorithm for Melanoma Classification (SACCGAN-AHOA-MC-DI) from dermoscopic images is proposed. First, the input skin dermoscopic images are gathered from the ISIC 2019 dataset. Then, the input images are pre-processed using adjusted quick shift phase preserving dynamic range compression (AQSP-DRC) to remove noise and increase the quality of the dermoscopic images. These pre-processed images are fed to piecewise fuzzy C-means clustering (PF-CMC) for ROI segmentation. The segmented ROI is supplied to the Hexadecimal Local Adaptive Binary Pattern (HLABP) to extract radiomic features, such as grayscale statistic features (standard deviation, mean, kurtosis, and skewness) together with Haralick texture features (contrast, energy, entropy, homogeneity, and inverse difference moments). The extracted features are fed to the self-attention based cycle-consistent generative adversarial network (SACCGAN), which classifies the skin cancers as melanocytic nevus, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma, vascular lesion, squamous cell carcinoma, and melanoma. On its own, SACCGAN does not include an optimization scheme to set the ideal parameters and assure accurate classification of skin cancer. Hence, the Archerfish Hunting Optimization Algorithm (AHOA) is used to optimize the SACCGAN classifier so that it categorizes skin cancer accurately.
The proposed method attains 23.01%, 14.96%, and 45.31% higher accuracy and 32.16%, 11.32%, and 24.56% lower computational time compared with existing methods, namely a melanoma prediction method for unbalanced data utilizing an optimized SqueezeNet with bald eagle search optimization (CNN-BES-MC-DI), a hyper-parameter optimized CNN based on the grey wolf optimization algorithm (CNN-GWOA-MC-DI), and DEANN-incited skin cancer detection based on fuzzy c-means clustering (DEANN-MC-DI). RESEARCH HIGHLIGHTS: A self-attention based cycle-consistent GAN method (SACCGAN-AHOA-MC-DI) for melanoma classification from dermoscopic images is proposed and implemented in Python. Adjusted quick shift phase preserving dynamic range compression (AQSP-DRC) removes noise and increases the quality of skin dermoscopic images.


Subject(s)
Keratosis, Actinic , Melanoma , Skin Neoplasms , Humans , Melanoma/diagnosis , Skin Neoplasms/diagnosis , Melanocytes/pathology , Algorithms , Diagnosis, Computer-Assisted/methods
9.
Stud Health Technol Inform ; 310: 199-203, 2024 Jan 25.
Article in English | MEDLINE | ID: mdl-38269793

ABSTRACT

Dermatology is one of the medical fields outside the radiology service that uses image acquisition and analysis in its daily medical practice, mostly through digital dermoscopy imaging modality. The acquisition, transfer, and storage of dermatology images has become an important issue to resolve. We aimed to describe our experience in integrating dermoscopic images into PACS using DICOM as a guide for the health informatics and dermatology community. During 2022 we integrated the video dermoscopy equipment through a strategic plan with an 8-step procedure. We used the DICOM standard with Modality Worklist and Storage commitment. Three systems were involved (video dermoscopy software, the EHR, and PACS). We identified critical steps and faced many challenges, such as the lack of a final model of DICOM standard for dermatology images.


Subject(s)
Medical Informatics , Software
10.
Bioengineering (Basel) ; 10(11)2023 Nov 16.
Article in English | MEDLINE | ID: mdl-38002446

ABSTRACT

In recent decades, the incidence of melanoma has grown rapidly. Hence, early diagnosis is crucial to improving clinical outcomes. Here, we propose and compare a classical image analysis-based machine learning method with a deep learning one to automatically classify benign vs. malignant dermoscopic skin lesion images. The same dataset of 25,122 publicly available dermoscopic images was used to train both models, while a disjointed test set of 200 images was used for the evaluation phase. The training dataset was randomly divided into 10 datasets of 19,932 images to obtain an equal distribution between the two classes. By testing both models on the disjoint set, the deep learning-based method returned an accuracy of 85.4 ± 3.2% and a specificity of 75.5 ± 7.6%, while the machine learning one showed an accuracy and specificity of 73.8 ± 1.1% and 44.5 ± 4.7%, respectively. Although both approaches performed well in the validation phase, the convolutional neural network outperformed the ensemble boosted tree classifier on the disjoint test set, showing better generalization ability. The integration of new melanoma detection algorithms with digital dermoscopic devices could enable a faster screening of the population, improve patient management, and achieve better survival rates.

11.
Proc Inst Mech Eng H ; 237(10): 1228-1239, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37840254

ABSTRACT

Skin cancer is a chronic illness that is first assessed visually and further diagnosed with dermoscopic examination. It is crucial to precisely localize and classify lesions from dermoscopic images to diagnose and treat skin cancers as soon as possible. This work presents a melanoma identification and classification method that significantly improves accuracy and precision. It proposes a Hybrid Genetic and Particle Swarm Optimization (HG-PSO) method and a You Only Look Once version 7 (YOLOv7)-based convolutional network for skin cancer classification. The infected region is first located using optimized YOLOv7 object detection. Color thresholding is then applied to segment it, and the result is passed to the proposed convolutional network for classification. This work is tested on the Human Against Machine with 10,000 training images (HAM10000), International Skin Imaging Collaboration (ISIC)-2019, and Hospital Pedro Hispano (PH2) datasets, and the findings are compared to state-of-the-art methods for classifying skin cancer. The proposed method achieves 98.86% accuracy, 99.00% average precision, 98.85% average recall, and a 98.85% average F1-score on the HAM10000 dataset. It achieves 97.10% accuracy on the ISIC-2019 dataset, with an average precision of 97.37%, an average recall of 97.13%, and an average F1-score of 97.13%. It achieves 97.7% accuracy on the PH2 dataset, with an average precision of 99.00%, an average recall of 96.00%, and an average F1-score of 97.00%. The test time taken by this method on the HAM10000, ISIC-2019, and PH2 datasets is 2, 3, and 2 s, respectively, which may help give faster responses in telemedicine.


Subject(s)
Melanoma , Skin Neoplasms , Humans , Dermoscopy/methods , Skin Neoplasms/diagnostic imaging , Skin Neoplasms/pathology , Melanoma/diagnostic imaging , Melanoma/pathology , Skin/diagnostic imaging
12.
Cancers (Basel) ; 15(20)2023 Oct 17.
Article in English | MEDLINE | ID: mdl-37894383

ABSTRACT

Internet of Things (IoT)-assisted skin cancer recognition integrates several connected devices and sensors for supporting the primary analysis and monitoring of skin conditions. A preliminary analysis of skin cancer images is extremely difficult because of factors such as distinct sizes and shapes of lesions, differences in color illumination, and light reflections on the skin surface. In recent times, IoT-based skin cancer recognition utilizing deep learning (DL) has been used for enhancing the early analysis and monitoring of skin cancer. This article presents an optimal deep learning-based skin cancer detection and classification (ODL-SCDC) methodology in the IoT environment. The goal of the ODL-SCDC technique is to exploit metaheuristic-based hyperparameter selection approaches with a DL model for skin cancer classification. The ODL-SCDC methodology involves an arithmetic optimization algorithm (AOA) with the EfficientNet model for feature extraction. For skin cancer detection, a stacked denoising autoencoder (SDAE) classification model has been used. Lastly, the dragonfly algorithm (DFA) is utilized for the optimal hyperparameter selection of the SDAE algorithm. The simulation validation of the ODL-SCDC methodology was performed on a benchmark ISIC skin lesion database. The extensive outcomes showed better performance of the ODL-SCDC methodology compared with other models, with a maximum sensitivity of 97.74%, specificity of 99.71%, and accuracy of 99.55%. The proposed model can assist medical professionals, specifically dermatologists and potentially other healthcare practitioners, in the skin cancer diagnosis process.

13.
JMIR Dermatol ; 6: e42129, 2023 Aug 24.
Article in English | MEDLINE | ID: mdl-37616039

ABSTRACT

BACKGROUND: Previous research studies have demonstrated that medical content image retrieval can play an important role by assisting dermatologists in skin lesion diagnosis. However, current state-of-the-art approaches have not been adopted in routine consultation, partly due to the lack of interpretability limiting trust by clinical users. OBJECTIVE: This study developed a new image retrieval architecture for polarized or dermoscopic imaging guided by interpretable saliency maps. This approach provides better feature extraction, leading to better quantitative retrieval performance as well as providing interpretability for an eventual real-world implementation. METHODS: Content-based image retrieval (CBIR) algorithms rely on the comparison of image features embedded by a convolutional neural network (CNN) against a labeled data set. Saliency maps are interpretable computer vision methods that highlight the most relevant regions for the prediction made by a neural network. By introducing a fine-tuning stage that includes saliency maps to guide feature extraction, the accuracy of image retrieval is optimized. We refer to this approach as saliency-enhanced CBIR (SE-CBIR). A reader study was designed at the University Hospital Zurich Dermatology Clinic to evaluate SE-CBIR's retrieval accuracy as well as the impact of the participant's confidence on the diagnosis. RESULTS: SE-CBIR improved the retrieval accuracy by 7% (77% vs 84%) when doing single-lesion retrieval against traditional CBIR. The reader study showed an overall increase in classification accuracy of 22% (62% vs 84%) when the participant is provided with SE-CBIR retrieved images. In addition, the overall confidence in the lesion's diagnosis increased by 24%. Finally, the use of SE-CBIR as a support tool helped the participants reduce the number of nonmelanoma lesions previously diagnosed as melanoma (overdiagnosis) by 53%.
CONCLUSIONS: SE-CBIR presents better retrieval accuracy compared to traditional CBIR CNN-based approaches. Furthermore, we have shown how these support tools can help dermatologists and residents improve diagnosis accuracy and confidence. Additionally, by introducing interpretable methods, we should expect increased acceptance and use of these tools in routine consultation.
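At its core, CBIR ranks labeled gallery images by the similarity of their CNN embeddings to the query image's embedding. The following is a minimal cosine-similarity retrieval sketch, not the SE-CBIR implementation; the random vectors stand in for CNN features and all names are illustrative:

```python
import numpy as np

def retrieve(query_emb, gallery_embs, k=3):
    """Return the indices and cosine similarities of the k gallery
    embeddings most similar to the query, best match first."""
    q = query_emb / np.linalg.norm(query_emb)
    g = gallery_embs / np.linalg.norm(gallery_embs, axis=1, keepdims=True)
    sims = g @ q
    order = np.argsort(-sims)[:k]
    return order, sims[order]

rng = np.random.default_rng(3)
gallery = rng.normal(size=(50, 128))               # stand-in CNN embeddings
query = gallery[17] + 0.01 * rng.normal(size=128)  # near-duplicate of item 17
idx, sims = retrieve(query, gallery)
```

A saliency-guided variant like SE-CBIR changes how the embeddings are learned (the fine-tuning stage), not this ranking step itself.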

14.
J Imaging ; 9(7)2023 Jul 21.
Article in English | MEDLINE | ID: mdl-37504825

ABSTRACT

The automatic detection of dermoscopic features is a task that provides the specialists with an image with indications about the different patterns present in it. This information can help them fully understand the image and improve their decisions. However, the automatic analysis of dermoscopic features can be a difficult task because of their small size. Some work has been performed in this area, but the results can be improved. The objective of this work is to improve the precision of the automatic detection of dermoscopic features. To achieve this goal, an algorithm named yolo-dermoscopic-features is proposed. The algorithm consists of four steps: (i) generate annotations in the JSON format for supervised learning of the model; (ii) propose a model based on the latest version of Yolo; (iii) pre-train the model for the segmentation of skin lesions; (iv) train five models for the five dermoscopic features. The experiments are performed on the ISIC 2018 task2 dataset. After training, the model is evaluated and compared to the performance of two methods. The proposed method allows us to reach average performances of 0.9758, 0.954, 0.9724, 0.938, and 0.9692 for the Dice similarity coefficient, Jaccard similarity coefficient, precision, recall, and average precision, respectively. Furthermore, compared to other methods, the proposed method reaches a better Jaccard similarity coefficient of 0.954 and, thus, presents the best similarity with the annotations made by specialists. This method can also be used to automatically annotate images and, therefore, can be a solution to the lack of feature annotations in the dataset.

15.
Front Oncol ; 13: 1151257, 2023.
Article in English | MEDLINE | ID: mdl-37346069

ABSTRACT

Skin cancer is a serious disease that affects people all over the world. Melanoma is an aggressive form of skin cancer, and early detection can significantly reduce human mortality. In the United States, approximately 97,610 new cases of melanoma will be diagnosed in 2023. However, challenges such as lesion irregularities, low-contrast lesions, intraclass color similarity, redundant features, and imbalanced datasets make improving recognition accuracy using computerized techniques extremely difficult. This work presents a new framework for skin lesion recognition using data augmentation, deep learning, and explainable artificial intelligence. In the proposed framework, data augmentation is performed at the initial step to increase the dataset size, and then two pretrained deep learning models are employed. Both models have been fine-tuned and trained using deep transfer learning. Both models (Xception and ShuffleNet) utilize the global average pooling layer for deep feature extraction. The analysis of this step shows that some important information is missing; therefore, we performed feature fusion. Because the fusion process increased the computational time, we developed an improved Butterfly Optimization Algorithm, which selects only the best features for classification using machine learning classifiers. In addition, a GradCAM-based visualization is performed to analyze the important regions in the image. Two publicly available datasets, ISIC2018 and HAM10000, have been utilized, obtaining improved accuracies of 99.3% and 91.5%, respectively. Comparing the proposed framework with state-of-the-art methods reveals improved accuracy and lower computational time.
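GradCAM-style visualizations like the one mentioned above weight each feature-map channel by its importance for the target class, sum the weighted maps, apply a ReLU, and normalize the result into a heatmap. A minimal NumPy sketch of the plain class-activation-map idea follows, with random stand-in tensors; this is an illustration of the technique, not the framework's code:

```python
import numpy as np

def class_activation_map(feature_maps, class_weights):
    """CAM: weight each channel's spatial map by the classifier weight
    for the target class, sum over channels, ReLU, normalize to [0, 1]."""
    cam = np.tensordot(class_weights, feature_maps, axes=(0, 0))  # (H, W)
    cam = np.maximum(cam, 0)          # keep only positive evidence
    if cam.max() > 0:
        cam = cam / cam.max()         # scale for heatmap overlay
    return cam

rng = np.random.default_rng(2)
fmaps = rng.normal(size=(4, 7, 7))    # 4 channels on a 7x7 spatial grid
w = rng.normal(size=4)                # stand-in weights for the target class
cam = class_activation_map(fmaps, w)
```

GradCAM proper derives the channel weights from gradients of the class score rather than from classifier weights, but the weighting-and-normalizing step is the same.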

16.
Diagnostics (Basel) ; 13(11)2023 May 24.
Article in English | MEDLINE | ID: mdl-37296686

ABSTRACT

Red, blue, white, pink, or black spots with irregular borders and small lesions on the skin are signs of skin cancer, which is categorized into two types: benign and malignant. Skin cancer can lead to death in advanced stages; however, early detection can increase the chances of survival of skin cancer patients. Several approaches have been developed by researchers to identify skin cancer at an early stage; however, they may fail to detect the tiniest tumours. Therefore, we propose a robust method for the diagnosis of skin cancer, namely SCDet, based on a convolutional neural network (CNN) with 32 layers for the detection of skin lesions. The images, having a size of 227 × 227, are fed to the image input layer, and then a pair of convolution layers is used to extract the hidden patterns of the skin lesions for training. After that, batch normalization and ReLU layers are used. The performance of the proposed SCDet is computed using the evaluation metrics: precision 99.2%; recall 100%; sensitivity 100%; specificity 99.20%; and accuracy 99.6%. Moreover, the proposed technique is compared with pre-trained models, i.e., VGG16, AlexNet, and SqueezeNet, and it is observed that SCDet provides higher accuracy than these pre-trained models and identifies the tiniest skin tumours with maximum precision. Furthermore, the proposed model is faster than the pre-trained models, as the depth of its architecture is not as high as that of pre-trained models such as ResNet50. Additionally, the proposed model consumes fewer resources during training; therefore, it is better in terms of computational cost than the pre-trained models for the detection of skin lesions.

17.
Cancers (Basel) ; 15(7)2023 Apr 04.
Article in English | MEDLINE | ID: mdl-37046806

ABSTRACT

Artificial Intelligence (AI) techniques have changed the general perceptions about medical diagnostics, especially after the introduction and development of Convolutional Neural Networks (CNN) and advanced Deep Learning (DL) and Machine Learning (ML) approaches. In general, dermatologists visually inspect the images and assess the morphological variables such as borders, colors, and shapes to diagnose the disease. In this background, AI techniques make use of algorithms and computer systems to mimic the cognitive functions of the human brain and assist clinicians and researchers. In recent years, AI has been applied extensively in the domain of dermatology, especially for the detection and classification of skin cancer and other general skin diseases. In this research article, the authors propose an Optimal Multi-Attention Fusion Convolutional Neural Network-based Skin Cancer Diagnosis (MAFCNN-SCD) technique for the detection of skin cancer in dermoscopic images. The primary aim of the proposed MAFCNN-SCD technique is to classify skin cancer on dermoscopic images. In the presented MAFCNN-SCD technique, the data pre-processing is performed at the initial stage. Next, the MAFNet method is applied as a feature extractor with Henry Gas Solubility Optimization (HGSO) algorithm as a hyperparameter optimizer. Finally, the Deep Belief Network (DBN) method is exploited for the detection and classification of skin cancer. A sequence of simulations was conducted to establish the superior performance of the proposed MAFCNN-SCD approach. The comprehensive comparative analysis outcomes confirmed the supreme performance of the proposed MAFCNN-SCD technique over other methodologies.

18.
Cancers (Basel) ; 15(7)2023 Apr 06.
Article in English | MEDLINE | ID: mdl-37046840

ABSTRACT

Skin cancer is one of the most lethal human illnesses. In the present state of the health care system, skin cancer identification is a time-consuming procedure, and if it is not diagnosed early it can be life-threatening. To attain a high prospect of complete recovery, early detection of skin cancer is crucial. In the last several years, the application of deep learning (DL) algorithms for the detection of skin cancer has grown in popularity. Based on a DL model, this work builds a multi-classification technique for diagnosing skin cancers such as melanoma (MEL), basal cell carcinoma (BCC), squamous cell carcinoma (SCC), and melanocytic nevi (MN). In this paper, we propose a novel model, a deep learning-based skin cancer classification network (DSCC_Net) based on a convolutional neural network (CNN), and evaluate it on three publicly available benchmark datasets (ISIC 2020, HAM10000, and DermIS). For skin cancer diagnosis, the classification performance of the proposed DSCC_Net model is compared with six baseline deep networks: ResNet-152, VGG-16, VGG-19, Inception-V3, EfficientNet-B0, and MobileNet. In addition, we used SMOTE Tomek to handle the minority-class imbalance present in these datasets. The proposed DSCC_Net obtained a 99.43% AUC, along with 94.17% accuracy, a recall of 93.76%, a precision of 94.28%, and an F1-score of 93.93% in categorizing the four distinct types of skin cancer. The accuracies of ResNet-152, VGG-19, MobileNet, VGG-16, EfficientNet-B0, and Inception-V3 are 89.32%, 91.68%, 92.51%, 91.12%, 89.46%, and 91.82%, respectively. The results show that our proposed DSCC_Net model performs better than the baseline models, thus offering significant support to dermatologists and health experts in diagnosing skin cancer.
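SMOTE Tomek combines SMOTE oversampling of the minority class with Tomek-link cleaning. The SMOTE half synthesizes new minority samples by interpolating between a sample and one of its nearest neighbours. A minimal sketch of that idea (not the paper's implementation, omitting the Tomek-link step, and assuming feature vectors rather than raw images):

```python
import numpy as np

def smote_oversample(X_min, n_new, k=5, seed=0):
    """SMOTE-style synthesis: new points are linear interpolations
    between a minority sample and one of its k nearest neighbours."""
    rng = np.random.default_rng(seed)
    X_min = np.asarray(X_min, dtype=float)
    # Pairwise distances within the minority class (self excluded).
    d = np.linalg.norm(X_min[:, None] - X_min[None, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nn = np.argsort(d, axis=1)[:, :k]  # k nearest-neighbour indices per sample
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nn[i, rng.integers(min(k, len(X_min) - 1))]
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.vstack(synthetic)

# Four minority-class points at the corners of the unit square.
X_min = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
X_new = smote_oversample(X_min, n_new=6, k=2)
```

Because each synthetic point lies on a segment between two existing minority samples, the new samples stay inside the minority class's region of feature space rather than being arbitrary noise.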

19.
Comput Methods Programs Biomed ; 231: 107408, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36805279

ABSTRACT

BACKGROUND AND OBJECTIVE: Deep learning (DL) models have long been used for medical imaging, but they did not reach their full potential in the past because of insufficient computing power and the scarcity of training data. In recent years, DL networks have grown substantially thanks to improved technology and an abundance of data. However, previous studies indicate that even a well-trained DL algorithm may struggle to generalize across data from multiple sources because of domain shifts. Additionally, the ineffectiveness of basic data fusion methods, the complexity of the segmentation target, and the low interpretability of current DL models limit their use in clinical decisions. To meet these challenges, we present a new two-phase cross-domain transfer learning system for effective skin lesion segmentation from dermoscopic images. METHODS: Our system is based on two significant technical innovations. We examine a two-phase cross-domain transfer learning approach, including model-level and data-level transfer learning, by fine-tuning the system on two datasets, MoleMap and ImageNet. We then present nSknRSUNet, a high-performing DL network for skin lesion segmentation that uses broad receptive fields and spatial edge attention feature fusion. To quantify these two innovations, we examine the trained model's generalization capabilities on skin lesion segmentation, cross-examining the model on two skin lesion image datasets, MoleMap and HAM10000, obtained from varied clinical contexts. RESULTS: With data-level transfer learning on the HAM10000 dataset, the proposed model obtained a DSC of 94.63% and an accuracy of 99.12%. In cross-examination with data-level transfer learning on the MoleMap dataset, the proposed model obtained a DSC of 93.63% and an accuracy of 97.01%. CONCLUSION: Numerous experiments reveal that our system delivers excellent performance and improves upon state-of-the-art methods on both qualitative and quantitative measures.
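The DSC figures reported in this abstract are Dice similarity coefficients, the standard overlap measure between a predicted segmentation mask and the ground-truth lesion mask. A minimal NumPy sketch, assuming binary masks (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) between two binary masks:
    2|A ∩ B| / (|A| + |B|), with eps guarding the empty-mask case."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Two 4x4 lesion masks of 4 foreground pixels each, overlapping in 2.
pred = np.zeros((4, 4), dtype=int)
pred[1:3, 1:3] = 1
target = np.zeros((4, 4), dtype=int)
target[1:3, 0:2] = 1
dsc = dice_coefficient(pred, target)  # 2*2 / (4+4) = 0.5
```

Unlike plain pixel accuracy, DSC ignores the (usually dominant) background, which is why the abstract's accuracy figures run higher than its DSC figures.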


Subject(s)
Skin Diseases , Skin , Humans , Machine Learning , Skin Diseases/diagnostic imaging
20.
Multimed Tools Appl ; 82(10): 15763-15778, 2023.
Article in English | MEDLINE | ID: mdl-36250184

ABSTRACT

A powerful medical decision support system for classifying skin lesions from dermoscopic images is an important tool for the prognosis of skin cancer. In recent years, Deep Convolutional Neural Networks (DCNNs) have made significant advances in detecting skin cancer types from dermoscopic images, despite the fine-grained variability in their appearance. The main objective of this research work is to develop a DCNN-based model that automatically classifies skin cancer types into melanoma and non-melanoma with high accuracy. The datasets used in this work were obtained from the popular ISIC-2019 and ISIC-2020 challenges, which have different image resolutions and class imbalance problems. To address these two problems and achieve high classification performance, we used the EfficientNet architecture with transfer learning, which learns more complex and fine-grained patterns from lesion images by automatically scaling the depth, width, and resolution of the network. We augmented our dataset to overcome the class imbalance problem and also used metadata information to improve the classification results. To further improve the efficiency of EfficientNet, we used the Ranger optimizer, which considerably reduces the hyperparameter tuning required to achieve state-of-the-art results. We conducted several experiments using different transfer-learning models, and our results showed that EfficientNet variants outperformed the other architectures in the skin lesion classification tasks. The performance of the proposed system was evaluated using the Area Under the ROC Curve (AUC-ROC), obtaining a score of 0.9681 by optimal fine-tuning of EfficientNet-B6 with the Ranger optimizer.
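The AUC-ROC score used to evaluate this system can be computed without tracing the curve, via its rank (Mann-Whitney) interpretation: the probability that a randomly chosen positive receives a higher score than a randomly chosen negative. A small NumPy sketch (illustrative only, not the paper's evaluation code):

```python
import numpy as np

def auc_roc(y_true, scores):
    """AUC-ROC via the Mann-Whitney formulation: the fraction of
    (positive, negative) pairs ranked correctly, counting ties as 0.5."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

# 3 melanoma (1) and 3 non-melanoma (0) cases with model scores:
# 8 of the 9 positive/negative pairs are ranked correctly.
y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]
auc = auc_roc(y, s)  # 8/9 ≈ 0.889
```

Because AUC depends only on the ranking of scores, it is insensitive to the classification threshold, which makes it a common choice for the class-imbalanced ISIC challenges.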
