ABSTRACT
In the field of medicine, decision support systems play a crucial role by harnessing cutting-edge technology and data analysis to assist doctors in disease diagnosis and treatment. Leukemia is a malignancy that emerges from the uncontrolled growth of immature white blood cells within the human body. An accurate and prompt diagnosis of leukemia is desired due to its swift progression to distant parts of the body. Acute lymphoblastic leukemia (ALL) is an aggressive type of leukemia that affects both children and adults. Computer vision-based identification of leukemia is challenging due to structural irregularities and morphological similarities of blood entities. Deep neural networks have shown promise in extracting valuable information from image datasets, but they have high computational costs due to their extensive feature sets. This work presents an efficient pipeline for binary and subtype classification of acute lymphoblastic leukemia. The proposed method first unveils a novel neighborhood pixel transformation method using differential evolution to improve the clarity and discriminability of blood cell images for better analysis. Next, a hybrid feature extraction approach is presented leveraging transfer learning from selected deep neural network models, InceptionV3 and DenseNet201, to extract comprehensive feature sets. To optimize feature selection, a customized binary Grey Wolf Algorithm is utilized, achieving an impressive 80% reduction in feature size while preserving key discriminative information. These optimized features subsequently empower multiple classifiers, potentially capturing diverse perspectives and amplifying classification accuracy. The proposed pipeline is validated on publicly available standard datasets of ALL images. For binary classification, the best average accuracy of 98.1% is achieved with 98.1% sensitivity and 98% precision. 
For ALL subtype classification, the best accuracy of 98.14% was attained with 78.5% sensitivity and 98% precision. The proposed feature selection method shows better convergence behavior compared to classical population-based meta-heuristics. The suggested solution also demonstrates comparable or better performance in comparison to several existing techniques.
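The customized binary Grey Wolf Algorithm is not detailed in the abstract; the sketch below is a minimal, generic binary GWO for feature selection, using a sigmoid transfer function and a toy fitness that rewards informative features while penalizing mask size. All names, weights, and parameters here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def binary_gwo(fitness, n_features, n_wolves=10, n_iter=30, seed=0):
    """Minimal binary Grey Wolf Optimizer for feature selection.

    fitness: callable taking a boolean mask, returning a score to MAXIMIZE.
    Returns the best mask found and its score.
    """
    rng = np.random.default_rng(seed)
    # Initialize wolves as random binary feature masks.
    pos = rng.random((n_wolves, n_features)) < 0.5
    scores = np.array([fitness(p) for p in pos])

    for t in range(n_iter):
        a = 2 - 2 * t / n_iter                      # linearly decreases 2 -> 0
        order = np.argsort(scores)[::-1]            # best wolves first
        alpha, beta, delta = pos[order[:3]].astype(float)
        for i in range(n_wolves):
            x = pos[i].astype(float)
            x_new = np.zeros(n_features)
            # Canonical GWO update guided by the three best wolves.
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(n_features), rng.random(n_features)
                A, C = 2 * a * r1 - a, 2 * r2
                x_new += leader - A * np.abs(C * leader - x)
            x_new /= 3.0
            # Sigmoid transfer maps continuous positions to selection probabilities.
            prob = 1.0 / (1.0 + np.exp(-10 * (x_new - 0.5)))
            pos[i] = rng.random(n_features) < prob
            scores[i] = fitness(pos[i])
    best = int(np.argmax(scores))
    return pos[best], scores[best]

# Toy fitness: reward informative features, penalize mask size
# (mirroring the accuracy-vs-feature-count trade-off in the paper).
true_weights = np.array([3.0, 2.5, 2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
def fitness(mask):
    if not mask.any():
        return -1.0
    return true_weights[mask].sum() - 0.5 * mask.sum()

mask, score = binary_gwo(fitness, n_features=10)
```

With a wrapper fitness built on a real classifier's validation accuracy minus a size penalty, the same loop yields the kind of feature-size reduction reported above.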
ABSTRACT
A significant issue in computer-aided diagnosis (CAD) for medical applications is brain tumor classification. With machine learning algorithms, radiologists could reliably detect tumors without invasive procedures. However, a few important challenges arise, such as (i) the selection of the most suitable deep learning architecture for classification and (ii) the need for a domain expert who can assess the output of deep learning models. These difficulties motivate us to propose an efficient and accurate system, based on deep learning and evolutionary optimization, for the classification of four types of brain tumor modalities (T1 tumor, T1CE tumor, T2 tumor, and FLAIR tumor) on a large-scale MRI database. Thus, a CNN architecture is modified based on domain knowledge and connected with an evolutionary optimization algorithm to select hyperparameters. In parallel, a stacked encoder-decoder network is designed with ten convolutional layers. The features of both models are extracted and optimized using an improved version of Grey Wolf optimization with the update criteria of the Jaya algorithm. The improved version speeds up the learning process and improves the accuracy. Finally, the selected features are fused using a novel parallel pooling approach and classified using machine learning and neural networks. Two datasets, BraTS2020 and BraTS2021, were employed for the experimental tasks, obtaining an improved average accuracy of 98% and a maximum single-classifier accuracy of 99%. Comparisons with several classifiers, techniques, and neural networks confirm that the proposed method achieves improved performance.
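The improved Grey Wolf optimizer borrows the update criterion of the Jaya algorithm. As a hedged illustration of that borrowed rule only (not the authors' hybrid), the sketch below applies the standard Jaya step with greedy acceptance to a toy sphere-minimization problem:

```python
import numpy as np

def jaya_step(pop, fit, rng):
    """One Jaya update: move toward the best solution, away from the worst."""
    best = pop[np.argmax(fit)]
    worst = pop[np.argmin(fit)]
    r1, r2 = rng.random(pop.shape), rng.random(pop.shape)
    return pop + r1 * (best - np.abs(pop)) - r2 * (worst - np.abs(pop))

def sphere_fitness(pop):
    # Maximizing this is equivalent to minimizing the squared norm.
    return -np.sum(pop ** 2, axis=1)

rng = np.random.default_rng(1)
pop = rng.uniform(-5, 5, size=(20, 4))
fit = sphere_fitness(pop)
initial_best = float(fit.max())
for _ in range(200):
    cand = jaya_step(pop, fit, rng)
    cand_fit = sphere_fitness(cand)
    better = cand_fit > fit               # greedy acceptance, as in standard Jaya
    pop[better], fit[better] = cand[better], cand_fit[better]
best_value = float(fit.max())
```

In the paper's setting, this update would replace or augment the position update inside the Grey Wolf loop; the sphere objective here is purely a stand-in for a feature-quality fitness.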
Subject(s)
Brain Neoplasms , Deep Learning , Delayed Emergence from Anesthesia , Humans , Neural Networks, Computer , Brain/diagnostic imaging , Brain Neoplasms/diagnostic imaging
ABSTRACT
Melanoma is widely recognized as one of the most lethal forms of skin cancer, with its incidence showing an upward trend in recent years. Nonetheless, the timely detection of this malignancy substantially enhances the likelihood of patients' long-term survival. Several computer-based methods have recently been proposed in the pursuit of diagnosing skin lesions at their early stages. Despite achieving some success, there remains a margin of error that the machine learning community considers an unresolved research challenge. The primary objective of this study was to maximize the input feature information by combining multiple deep models in the first phase, and then to avoid noisy and redundant information by downsampling the feature set with a novel evolutionary feature selection technique in the second phase. By maintaining the integrity of the original feature space, the proposed idea generates highly discriminant feature information. Recent deep models, including Darknet53, DenseNet201, InceptionV3, and InceptionResNetV2, were employed for feature extraction, and transfer learning was leveraged to enhance the performance of our approach. In the subsequent phase, the feature information extracted from the chosen pre-trained models was combined, with the aim of preserving maximum information, prior to feature selection using a novel entropy-controlled gray wolf optimization (ECGWO) algorithm. The integration of fusion and selection techniques was employed, initially to build a feature vector with a high level of information and, subsequently, to eliminate redundant and irrelevant feature information. The effectiveness of our concept is supported by an assessment conducted on three benchmark dermoscopic datasets: PH2, ISIC-MSK, and ISIC-UDA.
In order to validate the proposed methodology, a comprehensive evaluation was conducted, including a rigorous comparison to established techniques in the field.
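The entropy-controlled part of ECGWO is not specified in the abstract; one common reading is to score features by their Shannon entropy and discard low-information columns. A minimal numpy sketch under that assumption:

```python
import numpy as np

def entropy_scores(X, bins=16):
    """Shannon entropy of each feature column, histogram-estimated
    over a common range so near-constant features score low."""
    lo, hi = float(X.min()), float(X.max())
    scores = []
    for col in X.T:
        hist, _ = np.histogram(col, bins=bins, range=(lo, hi))
        p = hist / hist.sum()
        p = p[p > 0]
        scores.append(float(-(p * np.log2(p)).sum()))
    return np.array(scores)

rng = np.random.default_rng(0)
informative = rng.normal(size=(500, 3))                  # spread-out features
redundant = 0.7 + rng.normal(scale=1e-3, size=(500, 2))  # nearly constant
X = np.hstack([informative, redundant])
scores = entropy_scores(X)
keep = scores >= np.median(scores)    # entropy-controlled keep/discard decision
```

In the full method, such a score would gate or bias the wolf updates rather than act as a standalone filter; the threshold and binning here are assumptions.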
ABSTRACT
The demand for the accurate and timely identification of melanoma as a major skin cancer type is increasing daily. Due to the advent of modern tools and computer vision techniques, it has become easier to perform analysis. Skin cancer classification and segmentation techniques require clear lesions segregated from the background for efficient results. Many studies address the matter only partially, and there remains plenty of room for new research in this field. Recently, many algorithms have been presented to preprocess skin lesions, aiding the segmentation algorithms in generating efficient outcomes. Nature-inspired algorithms and metaheuristics help to estimate the optimal parameter set in the search space. This research article proposes a hybrid metaheuristic preprocessor, BA-ABC, to improve the quality of images by enhancing their contrast and preserving the brightness. The statistical transformation function, which helps to improve the contrast, is based on a parameter set estimated through the proposed hybrid metaheuristic model for every image in the dataset. For experimentation purposes, we have utilised three publicly available datasets: ISIC-2016, 2017 and 2018. The efficacy of the presented model is validated through some state-of-the-art segmentation algorithms. The visual outcomes of the boundary estimation algorithms and the performance metrics validate that the proposed model performs well, improving the dice coefficient to 94.6%.
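The exact statistical transformation function is not given in the abstract; the sketch below uses a generic parametric sigmoid stretch whose `gain` and `cutoff` parameters are the kind of per-image decision variables a BA-ABC metaheuristic would tune (both names are assumptions):

```python
import numpy as np

def sigmoid_stretch(img, gain, cutoff):
    """Parametric contrast stretch; gain/cutoff would be tuned per image
    by the metaheuristic to maximize a contrast/brightness objective."""
    x = img.astype(float) / 255.0
    out = 1.0 / (1.0 + np.exp(-gain * (x - cutoff)))
    # Rescale to the full intensity range.
    out = (out - out.min()) / (out.max() - out.min() + 1e-12)
    return (out * 255).astype(np.uint8)

rng = np.random.default_rng(0)
low_contrast = rng.integers(100, 156, size=(64, 64)).astype(np.uint8)
enhanced = sigmoid_stretch(low_contrast, gain=10.0, cutoff=0.5)
```

The metaheuristic's fitness would reward contrast gain (e.g., intensity standard deviation) while penalizing brightness drift from the original image.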
ABSTRACT
White blood cells (WBCs) constitute an essential part of the human immune system. The correct identification of WBC subtypes is critical in the diagnosis of leukemia, a kind of blood cancer defined by the aberrant proliferation of malignant leukocytes in the bone marrow. The traditional approach of classifying WBCs, which involves the visual analysis of blood smear images, is labor-intensive and error-prone. Modern approaches based on deep convolutional neural networks provide significant results for this type of image categorization, but have high processing and implementation costs owing to very large feature sets. This paper presents an improved hybrid approach for efficient WBC subtype classification. First, optimum deep features are extracted from enhanced and segmented WBC images using transfer learning on pre-trained deep neural networks, i.e., DenseNet201 and Darknet53. The serially fused feature vector is then filtered using an entropy-controlled marine predator algorithm (ECMPA). This nature-inspired meta-heuristic optimization algorithm selects the most dominant features while discarding the weak ones. The reduced feature vector is classified with multiple baseline classifiers with various kernel settings. The proposed methodology is validated on a public dataset of 5000 synthetic images that correspond to five different subtypes of WBCs. The system achieves an overall average accuracy of 99.9% with more than 95% reduction in the size of the feature vector. The feature selection algorithm also demonstrates better convergence performance as compared to classical meta-heuristic algorithms. The proposed method also demonstrates a comparable performance with several existing works on WBC classification.
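As a shape-level sketch of the serial fusion and filtering steps above (random arrays stand in for real DenseNet201/Darknet53 features, and a simple variance score stands in for the entropy-controlled marine predator algorithm):

```python
import numpy as np

# Stand-ins for deep features from two backbones (dimensions are illustrative).
rng = np.random.default_rng(0)
feat_densenet = rng.normal(size=(100, 1920))   # e.g. DenseNet201 pooled features
feat_darknet = rng.normal(size=(100, 1024))    # e.g. Darknet53 pooled features

# Serial fusion: concatenate along the feature axis.
fused = np.concatenate([feat_densenet, feat_darknet], axis=1)

# A simple dominance score (variance here, standing in for ECMPA) ranks
# features, and the weakest are discarded.
score = fused.var(axis=0)
k = int(0.05 * fused.shape[1])                 # keep ~5% (>95% reduction)
keep_idx = np.argsort(score)[::-1][:k]
reduced = fused[:, keep_idx]
```

The reduced matrix would then feed the baseline classifiers; the 5% keep ratio mirrors the reported >95% reduction but is otherwise an assumption.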
ABSTRACT
Rapid advancements and the escalating necessity of autonomous algorithms in medical imaging require efficient models to accomplish tasks such as segmentation and classification. However, there exists a significant dependency on the image quality of datasets when using these models. Appreciable improvements to enhance datasets for efficient image analysis have been noted in the past. In addition, deep learning and machine learning are vastly employed in this field. However, even after the advent of these advanced techniques, significant space exists for new research. Recent research works indicate the vast applicability of preprocessing techniques in segmentation tasks. Contrast stretching is one of the preprocessing techniques used to enhance a region of interest. We propose a novel hybrid meta-heuristic preprocessor (DE-ABC), which optimises the decision variables used in the contrast-enhancement transformation function. We validated the efficiency of the preprocessor against some state-of-the-art segmentation algorithms. Publicly available skin-lesion datasets such as PH2, ISIC-2016, ISIC-2017, and ISIC-2018 were employed. We used the Jaccard index and the dice coefficient as performance metrics; at the maximum, the proposed model improved the dice coefficient from 93.56% to 94.09%. Cross-comparisons of segmentation results with the original datasets versus the contrast-stretched datasets validate that DE-ABC enhances the efficiency of segmentation algorithms.
ABSTRACT
The high precedence of epidemiological examination of skin lesions necessitates well-performing, efficient classification and segmentation models. In the past two decades, various algorithms, especially machine/deep learning-based methods, have replicated classical visual examination to accomplish these tasks. These automated models demand evident lesions, with little background and noise affecting the region of interest. However, even after the proposal of these advanced techniques, gaps remain in achieving the desired efficacy. Recently, many preprocessors have been proposed to enhance the contrast of lesions, which further aids skin lesion segmentation and classification tasks. Metaheuristics are methods used to support search space optimisation problems. We propose a novel Hybrid Metaheuristic Differential Evolution-Bat Algorithm (DE-BA), which estimates the parameters used in the brightness-preserving contrast stretching transformation function. For extensive experimentation, we tested our proposed algorithm on various publicly available databases, such as ISIC 2016, 2017, 2018 and PH2, and validated the proposed model against some state-of-the-art existing segmentation models. The tabular and visual comparison of the results concludes that DE-BA as a preprocessor positively enhances the segmentation results.
Subject(s)
Melanoma , Skin Diseases , Skin Neoplasms , Humans , Dermoscopy/methods , Melanoma/diagnosis , Skin Neoplasms/diagnosis , Heuristics , Algorithms , Skin Diseases/diagnostic imaging , Image Processing, Computer-Assisted/methods
ABSTRACT
In condition-based maintenance, different signal processing techniques are used to sense faults through the vibration and acoustic emission signals received from the machinery. These signal processing approaches mostly utilise time, frequency, and time-frequency domain analysis. The features obtained are later integrated with different machine learning techniques to classify the faults into different categories. In this work, different statistical features of vibration signals in the time and frequency domains are studied for the detection and localisation of faults in roller bearings. These are later classified into healthy, outer race fault, inner race fault, and ball fault classes. The statistical features, including skewness, kurtosis, average and root mean square values of time domain vibration signals, are considered. These features are extracted from the second derivative of the time domain vibration signals and the power spectral density of the vibration signals. The vibration signal is also converted to the frequency domain and the same features are extracted. All three feature sets are concatenated, creating the time, frequency and spectral power domain feature vectors. These feature vectors are finally fed into the K-nearest neighbour (KNN), support vector machine and kernel linear discriminant analysis (KLDA) classifiers for the detection and classification of bearing faults. With the proposed method, a feature reduction of more than 95% is achieved, which reduces not only the computational burden but also the classification time. Simulation results show that the signals are classified to achieve an average accuracy of 99.13% using KLDA and 96.64% using KNN classifiers. The results are also compared with the empirical mode decomposition (EMD) features and Fourier transform features without extracting any statistical information, which are two of the most widely used approaches in the literature.
To gain a certain level of confidence in the classification results, a detailed statistical analysis is also provided.
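The statistical feature extraction described above can be sketched directly; the sampling rate and test signal below are illustrative, not from the paper's rig data:

```python
import numpy as np

def stat_features(x):
    """Skewness, kurtosis, mean, and RMS of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    skew = ((x - mu) ** 3).mean() / (sd ** 3 + 1e-12)
    kurt = ((x - mu) ** 4).mean() / (sd ** 4 + 1e-12)
    rms = np.sqrt((x ** 2).mean())
    return np.array([skew, kurt, mu, rms])

fs = 12000                                    # sampling rate (illustrative)
t = np.arange(0, 1.0, 1.0 / fs)
vibration = (np.sin(2 * np.pi * 157 * t)
             + 0.1 * np.random.default_rng(0).normal(size=t.size))

time_feats = stat_features(vibration)
deriv_feats = stat_features(np.diff(vibration, n=2))   # second derivative
spectrum = np.abs(np.fft.rfft(vibration))              # frequency domain
freq_feats = stat_features(spectrum)
feature_vector = np.concatenate([time_feats, deriv_feats, freq_feats])
```

The concatenated 12-dimensional vector corresponds to the time, derivative, and spectral feature sets that are fed to the KNN/SVM/KLDA classifiers.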
Subject(s)
Signal Processing, Computer-Assisted , Vibration , Computer Simulation , Machine Learning , Support Vector Machine
ABSTRACT
Manual diagnosis of skin cancer is time-consuming and expensive; therefore, it is essential to develop automated diagnostic methods with the ability to classify multiclass skin lesions with greater accuracy. We propose a fully automated approach for multiclass skin lesion segmentation and classification using the most discriminant deep features. First, the input images are enhanced using local color-controlled histogram intensity values (LCcHIV). Next, saliency is estimated using a novel deep saliency segmentation method, which uses a custom convolutional neural network (CNN) of ten layers. The generated heat map is converted into a binary image using a thresholding function. Next, the segmented color lesion images are used for feature extraction by a deep pre-trained CNN model. To avoid the curse of dimensionality, we implement an improved moth flame optimization (IMFO) algorithm to select the most discriminant features. The resultant features are fused using multiset maximum correlation analysis (MMCA) and classified using the kernel extreme learning machine (KELM) classifier. The segmentation performance of the proposed methodology is analyzed on the ISBI 2016, ISBI 2017, ISIC 2018, and PH2 datasets, achieving accuracies of 95.38%, 95.79%, 92.69%, and 98.70%, respectively. The classification performance is evaluated on the HAM10000 dataset, achieving an accuracy of 90.67%. To prove the effectiveness of the proposed methods, we present a comparison with state-of-the-art techniques.
ABSTRACT
[This corrects the article DOI: 10.1007/s10044-020-00950-0.].
ABSTRACT
Teledermatology is one of the most illustrious applications of telemedicine and e-health. In this field, telecommunication technologies are utilized to transfer medical information to experts. Due to the skin's visual nature, teledermatology is an effective tool for the diagnosis of skin lesions, especially in rural areas. Furthermore, it can also be useful to limit gratuitous clinical referrals and triage dermatology cases. The objective of this research is to classify skin lesion image samples received from different servers. The proposed framework comprises two modules: skin lesion localization/segmentation and classification. In the localization module, we propose a hybrid strategy that fuses the binary images generated from the designed 16-layered convolutional neural network model and an improved high dimension contrast transform (HDCT) based saliency segmentation. To utilize the maximum information extracted from the binary images, a maximal mutual information method is proposed, which returns the segmented RGB lesion image. In the classification module, a pre-trained DenseNet201 model is re-trained on the segmented lesion images using transfer learning. Afterward, the features extracted from the two fully connected layers are down-sampled using the t-distributed stochastic neighbor embedding (t-SNE) method. These resultant features are finally fused using a multi-canonical correlation analysis (MCCA) approach and are passed to a multi-class ELM classifier. Four datasets (i.e., ISBI2016, ISIC2017, PH2, and ISBI2018) are employed for the evaluation of the segmentation task, while HAM10000, the most challenging dataset, is used for the classification task. The experimental results, in comparison with the state-of-the-art methods, affirm the strength of our proposed framework.
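The maximal mutual information method is not fully specified in the abstract; the sketch below assumes one natural reading, comparing the two binary maps by mutual information and fusing where they agree (the mask shapes and the logical-and fusion are assumptions):

```python
import numpy as np

def mutual_information(a, b, bins=2):
    """Mutual information between two binary masks (histogram estimate, nats)."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    nz = pxy > 0
    return float((pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz])).sum())

rng = np.random.default_rng(0)
mask_cnn = np.zeros((64, 64), bool)          # stand-in for the CNN binary map
mask_cnn[20:50, 20:50] = True
mask_saliency = np.zeros((64, 64), bool)     # stand-in for the HDCT saliency map
mask_saliency[22:52, 18:48] = True
mask_noise = rng.random((64, 64)) < 0.5      # an unrelated map, for contrast

mi_agree = mutual_information(mask_cnn, mask_saliency)
mi_noise = mutual_information(mask_cnn, mask_noise)

# Fuse where the two proposed maps agree and cut out the RGB lesion region.
chosen = np.logical_and(mask_cnn, mask_saliency)
rgb = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
lesion_rgb = rgb * chosen[..., None]         # segmented RGB lesion image
```

A high mutual information between the two maps indicates they localize the same region, which justifies fusing them before cropping the RGB lesion.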
Subject(s)
Skin Diseases , Skin Neoplasms , Canonical Correlation Analysis , Dermoscopy , Humans , Image Processing, Computer-Assisted , Neural Networks, Computer , Skin Diseases/diagnostic imaging
ABSTRACT
In this work, we propose a deep learning framework for the classification of COVID-19 pneumonia infection from normal chest CT scans. In this regard, a 15-layered convolutional neural network architecture is developed which extracts deep features from the selected image samples, collected from Radiopaedia. Deep features are collected from two different layers, the global average pooling and fully connected layers, which are later combined using the max-layer detail (MLD) approach. Subsequently, a Correntropy technique is embedded in the main design to select the most discriminant features from the pool of features. A one-class kernel extreme learning machine classifier is utilized for the final classification, achieving an average accuracy of 95.1%, and sensitivity, specificity, and precision rates of 95.1%, 95%, and 94%, respectively. To further verify our claims, a detailed statistical analysis based on the standard error of the mean (SEM) is also provided, which proves the effectiveness of our proposed prediction design.
ABSTRACT
Since the emergence of COVID-19, thousands of people have undergone chest X-ray and computed tomography scans for its screening on an everyday basis. This has increased the workload on radiologists, and a number of cases are backlogged. This is not only the case for COVID-19, but also for other abnormalities needing radiological diagnosis. In this work, we present an automated technique for the rapid diagnosis of COVID-19 on computed tomography images. The proposed technique consists of four primary steps: (1) data collection and normalization, (2) extraction of the relevant features, (3) selection of the most optimal features, and (4) feature classification. In the data collection step, we collect data for several patients from a public domain website and perform preprocessing, which includes image resizing. In the successive step, we apply the discrete wavelet transform and extended segmentation-based fractal texture analysis methods for extracting the relevant features. This is followed by the application of an entropy-controlled genetic algorithm for the selection of the best features from each feature type, which are combined using a serial approach. In the final phase, the best features are subjected to various classifiers for the diagnosis. The proposed framework, when augmented with the Naive Bayes classifier, yields the best accuracy of 92.6%. The simulation results are supported by a detailed statistical analysis as a proof of concept.
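As an illustration of the wavelet feature extraction step, here is a one-level 2-D Haar transform implemented from scratch (the paper's wavelet choice and per-band statistics are not specified; the mean/std/mean-absolute statistics are assumptions):

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2-D Haar wavelet transform (LL, LH, HL, HH sub-bands)."""
    img = img.astype(float)
    a = (img[0::2, :] + img[1::2, :]) / 2    # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2    # row-wise detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2
    LH = (a[:, 0::2] - a[:, 1::2]) / 2
    HL = (d[:, 0::2] + d[:, 1::2]) / 2
    HH = (d[:, 0::2] - d[:, 1::2]) / 2
    return LL, LH, HL, HH

def band_features(band):
    # Simple per-band statistics as illustrative wavelet features.
    return np.array([band.mean(), band.std(), np.abs(band).mean()])

rng = np.random.default_rng(0)
ct_slice = rng.integers(0, 256, size=(128, 128)).astype(np.uint8)
bands = haar_dwt2(ct_slice)
wavelet_features = np.concatenate([band_features(b) for b in bands])  # 12-D
```

These band statistics would then be concatenated with the texture features before the genetic-algorithm selection step.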
ABSTRACT
Hypertension is an antecedent to cardiac disorders. According to the World Health Organization (WHO), the number of people affected by hypertension will reach around 1.56 billion by 2025. Early detection of hypertension is imperative to prevent the complications caused by cardiac abnormalities. Hypertension usually presents no apparent detectable symptoms; hence, the control rate is significantly low. Computer-aided diagnosis based on machine learning and signal analysis has recently been applied to identify biomarkers for the accurate prediction of hypertension. This research proposes a new expert hypertension detection system (EHDS) based on pulse plethysmograph (PuPG) signals for the categorization of normal and hypertensive subjects. The PuPG signal data set, containing rich information on cardiac activity, was acquired from healthy and hypertensive subjects. The raw PuPG signals were preprocessed through empirical mode decomposition (EMD), which decomposes a signal into its constituent components. A combination of multi-domain features was extracted from the preprocessed PuPG signal. The features exhibiting highly discriminative characteristics were selected and reduced through a proposed hybrid feature selection and reduction (HFSR) scheme. The selected features were subjected to various classification methods in a comparative fashion, in which the best performance of 99.4% accuracy, 99.6% sensitivity, and 99.2% specificity was achieved through weighted k-nearest neighbor (KNN-W). The performance of the proposed EHDS was thoroughly assessed by tenfold cross-validation. The proposed EHDS achieved better detection performance in comparison to other electrocardiogram (ECG) and photoplethysmograph (PPG) based methods.
Subject(s)
Hypertension , Adult , Aged , Algorithms , Diagnosis, Computer-Assisted , Electrocardiography , Female , Heart Rate , Humans , Hypertension/diagnosis , Machine Learning , Male , Middle Aged
ABSTRACT
Congenital heart disease (CHD) is a heart disorder associated with devastating indications that result in increased mortality, increased morbidity, increased healthcare expenditure, and decreased quality of life. Ventricular septal defects (VSDs) and atrial septal defects (ASDs) are the most common types of CHD. CHDs can be controlled before reaching a serious phase with an early diagnosis. The phonocardiogram (PCG), or heart sound auscultation, is a simple and non-invasive technique that may reveal obvious variations in different CHDs. Diagnosis based on heart sounds is difficult and requires a high level of medical training and skill due to human hearing limitations and the non-stationary nature of PCGs. An automated computer-aided system may boost the diagnostic objectivity and consistency of PCG signals in the detection of CHDs. The objective of this research was to assess the effects of various pattern recognition modalities for the design of an automated system that effectively differentiates normal, ASD, and VSD categories using short-term PCG time series. The proposed model in this study adopts three-stage processing: pre-processing, feature extraction, and classification. Empirical mode decomposition (EMD) was used to denoise the raw PCG signals acquired from subjects. One-dimensional local ternary patterns (1D-LTPs) and Mel-frequency cepstral coefficients (MFCCs) were extracted from the denoised PCG signal for a precise representation of data from different classes. In the final stage, the fused feature vector of 1D-LTPs and MFCCs was fed to a support vector machine (SVM) classifier using 10-fold cross-validation. The PCG signals were acquired from subjects admitted to local hospitals and classified through various experiments. The proposed methodology achieves a mean accuracy of 95.24% in classifying ASD, VSD, and normal subjects.
The proposed model can be put into practice and serve as a second opinion for cardiologists by providing more objective and faster interpretations of PCG signals.
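A 1D local ternary pattern can be sketched over a minimal two-neighbor window; the threshold, window size, and histogram encoding below are illustrative assumptions rather than the paper's exact configuration:

```python
import numpy as np

def one_d_ltp(signal, threshold=0.1):
    """1-D local ternary pattern over a 3-sample neighborhood.

    Each neighbor is coded +1/0/-1 relative to the center within +/- threshold;
    the upper and lower patterns are histogrammed as texture descriptors.
    """
    x = np.asarray(signal, dtype=float)
    center = x[1:-1]
    neighbors = np.stack([x[:-2], x[2:]])          # left and right neighbor
    diff = neighbors - center
    ternary = np.where(diff > threshold, 1, np.where(diff < -threshold, -1, 0))
    upper = ((ternary == 1) * np.array([[1], [2]])).sum(axis=0)   # 2-bit code
    lower = ((ternary == -1) * np.array([[1], [2]])).sum(axis=0)
    hist_u = np.bincount(upper, minlength=4)
    hist_l = np.bincount(lower, minlength=4)
    return np.concatenate([hist_u, hist_l]).astype(float)

t = np.linspace(0, 1, 2000)
pcg_like = np.sin(2 * np.pi * 40 * t) * np.exp(-3 * t)   # toy heart-sound burst
features = one_d_ltp(pcg_like)
```

In the full pipeline, this 1D-LTP histogram would be concatenated with MFCCs before the SVM stage.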
Subject(s)
Heart Defects, Congenital , Heart Sounds , Signal Processing, Computer-Assisted , Algorithms , Heart Defects, Congenital/diagnosis , Humans , Phonocardiography , Quality of Life , Support Vector Machine
ABSTRACT
Doctors utilize various clinical technologies, such as MRI, endoscopy, and CT scans, to identify a patient's abnormalities during examination. Among these technologies, wireless capsule endoscopy (WCE) is an advanced procedure used for examining digestive tract malformations. During this complete process, more than 57,000 frames are captured, and doctors need to examine a complete video frame by frame, which is a tedious task even for an experienced gastroenterologist. In this article, a novel computerized automated method is proposed for the classification of abdominal infections of the gastrointestinal tract from WCE images. The three core steps of the suggested system are segmentation, deep feature extraction and fusion, and robust feature selection. The ulcer abnormalities in WCE videos are initially extracted through a proposed color features based low-level and high-level saliency (CFbLHS) estimation method. Later, a DenseNet CNN model is utilized, and features are computed through transfer learning (TL) prior to feature optimization using Kapur's entropy. A parallel fusion methodology is adopted for the selection of the maximum feature value (PMFV). For feature selection, the Tsallis entropy is calculated and then sorted in descending order. Finally, the top 50% highest-ranked features are selected for classification using a multilayered feedforward neural network classifier. Simulation is performed on the collected WCE dataset, achieving a maximum accuracy of 99.5% in 21.15 s.
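The Tsallis entropy ranking and top-50% selection described above can be sketched as follows (the entropic index `q`, the binning, and the per-column scoring are assumptions; the abstract does not give them):

```python
import numpy as np

def tsallis_entropy(p, q=2.0):
    """Tsallis entropy of a discrete distribution p (q is the entropic index)."""
    p = np.asarray(p, dtype=float)
    p = p / p.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

def rank_features(X, q=2.0, keep_ratio=0.5, bins=16):
    """Score each feature column by Tsallis entropy; keep the top fraction."""
    lo, hi = float(X.min()), float(X.max())
    scores = []
    for col in X.T:
        hist, _ = np.histogram(col, bins=bins, range=(lo, hi))
        scores.append(tsallis_entropy(hist + 1e-12, q))
    scores = np.array(scores)
    order = np.argsort(scores)[::-1]               # descending, as in the paper
    keep = order[: int(keep_ratio * X.shape[1])]
    return X[:, keep], scores

rng = np.random.default_rng(0)
# Four spread-out columns and four constant (uninformative) columns.
X = np.hstack([rng.normal(size=(200, 4)), np.full((200, 4), 0.3)])
X_sel, scores = rank_features(X)
```

Constant columns concentrate in one histogram bin and score near zero, so the top-50% cut retains the spread-out features.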
Subject(s)
Capsule Endoscopy/methods , Hemorrhage/diagnosis , Image Processing, Computer-Assisted/methods , Neural Networks, Computer , Stomach Diseases/diagnosis , Hemorrhage/diagnostic imaging , Humans , Stomach Diseases/diagnostic imaging , Stomach Ulcer/diagnostic imaging
ABSTRACT
Drone base stations (DBSs) have received significant research interest in recent years. They provide a flexible and cost-effective solution to improve the coverage, connectivity, quality of service (QoS), and energy efficiency of large-area Internet of Things (IoT) networks. However, as DBSs are costly and power-limited devices, they require an efficient scheme for their deployment in practical networks. This work proposes a realistic mathematical model for the joint optimization problem of DBS placement and IoT user assignment in a massive IoT network scenario. The optimization goal is to maximize the connectivity of IoT users by utilizing the minimum number of DBSs, while satisfying practical network constraints. Such an optimization problem is NP-hard, and the optimal solution has a complexity exponential in the number of DBSs and IoT users in the network. Furthermore, this work also proposes a linearization scheme and a low-complexity heuristic to solve the problem in polynomial time. The simulations are performed for a number of network scenarios and demonstrate that the proposed heuristic is numerically accurate and performs close to the optimal solution.
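The proposed low-complexity heuristic is not described in detail in the abstract; a common polynomial-time approach to this kind of placement problem is a greedy set-cover style heuristic, sketched here under assumed 2-D coordinates and a fixed coverage radius:

```python
import numpy as np

def greedy_dbs_placement(users, candidates, radius):
    """Greedy heuristic: repeatedly place the candidate DBS covering the most
    uncovered IoT users until all users are served (set-cover style)."""
    uncovered = np.ones(len(users), dtype=bool)
    chosen = []
    # Pairwise distances: users x candidate sites.
    dist = np.linalg.norm(users[:, None, :] - candidates[None, :, :], axis=2)
    covers = dist <= radius
    while uncovered.any():
        gains = (covers & uncovered[:, None]).sum(axis=0)
        best = int(np.argmax(gains))
        if gains[best] == 0:             # some user is unreachable; stop
            break
        chosen.append(best)
        uncovered &= ~covers[:, best]
    return chosen, uncovered

rng = np.random.default_rng(0)
users = rng.uniform(0, 100, size=(50, 2))        # IoT user positions (meters)
candidates = np.stack(np.meshgrid(np.arange(10, 100, 20),
                                  np.arange(10, 100, 20)),
                      -1).reshape(-1, 2).astype(float)
chosen, uncovered = greedy_dbs_placement(users, candidates, radius=25.0)
```

The greedy rule runs in polynomial time and, for covering problems of this form, is known to stay within a logarithmic factor of the optimal number of stations; the paper's actual heuristic and constraints may differ.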
ABSTRACT
The emergence of cloud infrastructure has the potential to provide significant benefits in a variety of areas in the medical imaging field. The driving force behind the extensive use of cloud infrastructure for medical image processing is the exponential increase in the size of computed tomography (CT) and magnetic resonance imaging (MRI) data. The size of a single CT/MRI image has increased manifold since the inception of these imaging techniques. This demands the introduction of effective and efficient frameworks for extracting the relevant and most suitable information (features) from these sizeable images. As early detection of lung cancer can significantly increase a patient's chances of survival, an effective and efficient nodule detection system can play a vital role. In this article, we propose a novel classification framework for lung nodule classification with low false positive rates (FPRs), high accuracy and sensitivity, and low computational cost, which uses a small set of features while preserving edge and texture information. The proposed framework comprises multiple phases that include image contrast enhancement, segmentation, and feature extraction, followed by the employment of these features for the training and testing of a selected classifier. Image preprocessing and feature selection are the primary steps, playing a vital role in achieving improved classification accuracy. We have empirically tested the efficacy of our technique on the well-known Lung Image Database Consortium (LIDC) dataset. The results prove that the technique is highly effective for reducing FPRs, with an impressive sensitivity rate of 97.45%.
Subject(s)
Image Processing, Computer-Assisted/methods , Lung Neoplasms/diagnostic imaging , Support Vector Machine , Tomography, X-Ray Computed , False Positive Reactions , Humans , Lung/pathology , Lung Neoplasms/classification , Sensitivity and Specificity
ABSTRACT
Skin cancer is among the most deadly types of cancer and has grown extensively worldwide over the last decade. For accurate detection and classification of melanoma, several measures should be considered, including contrast stretching, irregularity measurement, and the selection of the most optimal features. A poor contrast of the lesion affects the segmentation accuracy and also increases the classification error. To overcome this problem, an efficient model for accurate border detection and classification is presented. The proposed model improves the segmentation accuracy in its preprocessing phase by enhancing the contrast of the lesion area compared to the background. The enhanced 2D blue channel is selected for the construction of a saliency map, at the end of which a threshold function produces the binary image. In addition, particle swarm optimization (PSO) based segmentation is utilized for accurate border detection and refinement. Selected features, including shape, texture, local, and global features, are extracted and later selected based on a genetic algorithm, with the advantage of identifying the fittest chromosome. Finally, the optimized features are fed into the support vector machine (SVM) for classification. Comprehensive experiments have been carried out on three datasets, namely PH2, ISBI2016, and ISIC (i.e., ISIC MSK-1, ISIC MSK-2, and ISIC UDA). Improved accuracies of 97.9%, 99.1%, 98.4%, and 93.8%, respectively, were obtained. The SVM outperforms other classifiers on the selected datasets in terms of sensitivity, precision rate, accuracy, and FNR. Furthermore, the selection method successfully removed the redundant features.
Subject(s)
Dermoscopy/methods , Image Processing, Computer-Assisted/methods , Melanoma/diagnosis , Melanoma/pathology , Skin Neoplasms/diagnosis , Skin Neoplasms/pathology , Algorithms , Humans , Sensitivity and Specificity
ABSTRACT
Brain tumor identification using magnetic resonance images (MRI) is an important research domain in the field of medical imaging. The use of computerized techniques helps doctors in the diagnosis and treatment of brain cancer. In this article, an automated system is developed for tumor extraction and classification from MRI. It is based on marker-based watershed segmentation and feature selection. Five primary steps are involved in the proposed system: tumor contrast enhancement, tumor extraction, multimodal feature extraction, feature selection, and classification. A gamma contrast stretching approach is implemented to improve the contrast of a tumor. Then, segmentation is performed using a marker-based watershed algorithm. Shape, texture, and point features are extracted in the next step, and only the top-ranked 70% of features are selected through a chi-square max conditional priority feature approach. In the later step, the selected features are fused using a serial-based concatenation method before classification using a support vector machine. All the experiments are performed on three data sets: Harvard, BRATS 2013, and a privately collected MR image data set. Simulation results clearly reveal that the proposed system outperforms existing methods with greater precision and accuracy.
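The chi-square max conditional priority ranking is specific to the paper; the sketch below shows plain chi-square feature scoring with a top-70% cut as a stand-in (the binning and the toy data are assumptions):

```python
import numpy as np

def chi2_score(feature, labels, bins=8):
    """Chi-square statistic between a (binned) feature and class labels."""
    edges = np.histogram_bin_edges(feature, bins=bins)[1:-1]
    binned = np.digitize(feature, edges)
    classes = np.unique(labels)
    obs = np.array([[np.sum((binned == b) & (labels == c)) for c in classes]
                    for b in np.unique(binned)], dtype=float)
    # Expected counts under independence of bin and class.
    expected = obs.sum(1, keepdims=True) * obs.sum(0, keepdims=True) / obs.sum()
    return float(((obs - expected) ** 2 / (expected + 1e-12)).sum())

rng = np.random.default_rng(0)
labels = rng.integers(0, 2, size=400)
informative = labels + 0.3 * rng.normal(size=400)   # tracks the class
noise = rng.normal(size=400)                        # independent of the class
scores = np.array([chi2_score(f, labels) for f in (informative, noise)])
# Keep the top 70% highest-ranked features, as in the proposed system.
keep = np.argsort(scores)[::-1][: max(1, int(0.7 * len(scores)))]
```

The class-tracking feature receives a far larger chi-square statistic than the independent one, so it survives the 70% cut.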