Mostrar: 20 | 50 | 100
Resultados 1 - 20 de 33
1.
Front Oncol ; 14: 1300997, 2024.
Article in English | MEDLINE | ID: mdl-38894870

ABSTRACT

Breast cancer (BC) is the leading cause of female cancer mortality and a major threat to women's health. Deep learning methods have recently been used extensively in many medical domains, especially in detection and classification applications. Studying histological images for the automatic diagnosis of BC is important for patients and their prognosis. Owing to the complexity and variety of histology images, manual examination is difficult and error-prone and therefore requires experienced pathologists. In this study, the publicly accessible BreakHis and invasive ductal carcinoma (IDC) datasets are used to analyze histopathological images of BC. The collected images are first pre-processed with super-resolution generative adversarial networks (SRGANs), which create high-resolution images from low-quality ones, so that the prediction stage receives useful inputs; the SRGAN combines components of conventional generative adversarial network (GAN) loss functions with efficient sub-pixel networks. The high-quality images are then passed to a data augmentation stage, where new data points are created through small adjustments such as rotation, random cropping, mirroring, and color shifting. Next, patch-based feature extraction using Inception V3 and ResNet-50 (PFE-INC-RES) is employed to extract features from the augmented images. The extracted features are then processed with a transductive long short-term memory (TLSTM) network, which improves classification accuracy by reducing the number of false positives. The suggested PFE-INC-RES is evaluated against existing methods on the BreakHis dataset with respect to accuracy (99.84%), specificity (99.71%), sensitivity (99.78%), and F1-score (99.80%), and it also performs better on the IDC dataset, with an F1-score of 99.08%, accuracy of 99.79%, specificity of 98.97%, and sensitivity of 99.17%.
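A minimal sketch of the patch-based two-backbone feature extraction described above, not the authors' implementation: the patch size, stride, mean-pooling fusion, and untrained torchvision backbones are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Two ImageNet-style backbones with their classifier heads removed so they emit 2048-d features.
inception = models.inception_v3(weights=None)   # pass pre-trained weights here in practice
inception.fc = nn.Identity()
resnet = models.resnet50(weights=None)
resnet.fc = nn.Identity()
inception.eval(); resnet.eval()

def extract_patch_features(image, patch_size=299, stride=299):
    """Split a (3, H, W) histology image into patches and return a fused descriptor."""
    patches = image.unfold(1, patch_size, stride).unfold(2, patch_size, stride)
    patches = patches.reshape(3, -1, patch_size, patch_size).permute(1, 0, 2, 3)
    with torch.no_grad():
        f_inc = inception(patches)              # (num_patches, 2048) Inception V3 features
        f_res = resnet(patches)                 # (num_patches, 2048) ResNet-50 features
    return torch.cat([f_inc, f_res], dim=1).mean(dim=0)   # pooled 4096-d image descriptor

features = extract_patch_features(torch.rand(3, 598, 598))
print(features.shape)                            # torch.Size([4096])
```

In the described pipeline, such a descriptor would then be passed to the TLSTM classifier rather than used directly.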

3.
Front Physiol ; 15: 1366910, 2024.
Article in English | MEDLINE | ID: mdl-38812881

ABSTRACT

Introduction: Eye movement is one of the cues used in human-machine interface technologies for predicting the intention of users. A developing application of eye movement event detection is the creation of assistive technologies for paralyzed patients. However, developing an effective classifier is one of the main issues in eye movement event detection. Methods: In this paper, a bidirectional long short-term memory (BILSTM) network is proposed along with hyperparameter tuning to achieve effective eye movement event classification. The Lévy flight and interactive crossover-based reptile search algorithm (LICRSA) is used to optimize the hyperparameters of the BILSTM. Overfitting is avoided by using fuzzy data augmentation (FDA), and a deep neural network, VGG-19, is used to extract features from the eye movements. The optimization of hyperparameters using LICRSA therefore enhances the classification of eye movement events by the BILSTM. Results and Discussion: The proposed BILSTM-LICRSA is evaluated using accuracy, precision, sensitivity, F1-score, the area under the receiver operating characteristic curve (AUROC), and the area under the precision-recall curve (AUPRC) on four datasets: Lund2013, a collected dataset, GazeBaseR, and UTMultiView. gazeNet, human manual classification (HMC), and a multi-source information-embedded approach (MSIEA) are used for comparison with BILSTM-LICRSA. The F1-score of BILSTM-LICRSA on the GazeBaseR dataset is 98.99%, which is higher than that of the MSIEA.
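A compact illustration of the core BILSTM classifier only (without the LICRSA tuning, FDA, or VGG-19 stages); the feature dimension, hidden size, and three event classes are assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMEventClassifier(nn.Module):
    """Bidirectional LSTM over per-frame gaze features, emitting an event label per time step."""
    def __init__(self, feat_dim=512, hidden=128, num_layers=2, num_classes=3, dropout=0.3):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, num_layers=num_layers,
                            batch_first=True, bidirectional=True, dropout=dropout)
        self.head = nn.Linear(2 * hidden, num_classes)   # e.g., fixation / saccade / pursuit

    def forward(self, x):                 # x: (batch, time, feat_dim)
        out, _ = self.lstm(x)             # (batch, time, 2 * hidden)
        return self.head(out)             # per-time-step event logits

model = BiLSTMEventClassifier()
logits = model(torch.randn(4, 100, 512)) # 4 gaze sequences of 100 frames each
print(logits.shape)                       # torch.Size([4, 100, 3])
```

A hyperparameter tuner such as LICRSA would search over values like `hidden`, `num_layers`, and `dropout` rather than fixing them as here.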

4.
Front Neurosci ; 18: 1362567, 2024.
Article in English | MEDLINE | ID: mdl-38680450

ABSTRACT

Handwritten character recognition is one of the classical problems in the field of image classification. Supervised learning techniques using deep learning models are highly effective when applied to handwritten character recognition, but they require a large dataset of labeled samples to achieve good accuracy. Recent supervised learning techniques for Kannada handwritten character recognition achieve state-of-the-art accuracy and perform well over a large range of input variations. In this work, a framework is proposed for the Kannada language that incorporates techniques from semi-supervised learning. The framework uses features extracted from a convolutional neural network backbone, regularization to improve the trained features, and label propagation to classify previously unseen characters. An episodic learning setup is used to validate the framework: twenty-four classes are used for pre-training, 12 classes for testing, and 11 classes for validation. Fine-tuning is tested using one example per unseen class and five examples per unseen class. The components of the network are implemented in Python using the PyTorch library. The obtained accuracy of 99.13% makes this framework competitive with the currently available supervised learning counterparts, despite the large reduction in the number of labeled samples available for the novel classes.
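A small sketch of the label-propagation step on backbone features, using scikit-learn's implementation with synthetic embeddings; the RBF kernel, gamma, and feature dimension are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

# Stand-in CNN embeddings: labeled samples carry their class index,
# unseen characters are marked with -1, as scikit-learn expects.
rng = np.random.default_rng(0)
features = rng.normal(size=(200, 256))
labels = rng.integers(0, 24, size=200)     # 24 pre-training classes
labels[150:] = -1                          # last 50 samples left unlabeled

lp = LabelPropagation(kernel="rbf", gamma=0.5, max_iter=1000)
lp.fit(features, labels)
predicted = lp.transduction_[150:]         # propagated labels for the unlabeled samples
print(predicted[:10])
```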

5.
Heliyon ; 10(7): e29033, 2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38601591

ABSTRACT

As is well known, multicriteria decision-making (MCDM) approaches can aid decision-makers in identifying the optimal alternative based on predetermined criteria. However, applying this approach in complex settings such as 5th generation (5G) industry assessment is a big challenge, because the criteria are difficult to quantify and the trade-offs between them are hard to establish; assessment of the 5G industry also involves strong uncertainty. This study is therefore the first to evaluate the 5G industry using a new neutrosophic simple multi-attribute rating technique (N-SMART). Since a neutrosophic set considers truth, indeterminacy, and falsity degrees, it is a more accurate instrument for evaluating uncertainty. The 5G assessment problem demonstrates the validity and strong performance of the proposed method through: (1) its ability to deal with uncertainty; (2) its simplicity; and (3) its enhanced capacity to discriminate between alternatives. Considering the 5G service provided in the Egyptian New Administrative Capital as a case study, the results show that Ericsson 5G is the best choice and Nokia 5G is the worst.
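The exact N-SMART formulation is not given here, so the following is only a hedged sketch of how single-valued neutrosophic ratings (truth T, indeterminacy I, falsity F) could feed a SMART-style weighted ranking; the score function (2 + T - I - F) / 3 and the weights are illustrative assumptions.

```python
# Illustrative single-valued neutrosophic score; not the paper's exact definition.
def neutrosophic_score(t, i, f):
    return (2 + t - i - f) / 3

def rank_alternatives(ratings, weights):
    """ratings: {alternative: [(T, I, F) per criterion]}; weights: per-criterion weights summing to 1."""
    totals = {}
    for alt, triples in ratings.items():
        totals[alt] = sum(w * neutrosophic_score(*tif) for w, tif in zip(weights, triples))
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

# Hypothetical ratings on two criteria (e.g., coverage and reliability).
ratings = {
    "Ericsson 5G": [(0.9, 0.1, 0.1), (0.8, 0.2, 0.1)],
    "Nokia 5G":    [(0.6, 0.3, 0.4), (0.5, 0.4, 0.3)],
}
print(rank_alternatives(ratings, weights=[0.6, 0.4]))
```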

6.
BMC Med Imaging ; 24(1): 63, 2024 Mar 18.
Article in English | MEDLINE | ID: mdl-38500083

ABSTRACT

Significant advancements in machine learning algorithms have the potential to aid in the early detection and prevention of cancer, a devastating disease. However, traditional research methods face obstacles, and the amount of cancer-related information is rapidly expanding. The authors developed a support system using three distinct deep-learning models, ResNet-50, EfficientNet-B3, and ResNet-101, together with transfer learning, to predict lung cancer, thereby contributing to better health outcomes and reducing the mortality rate associated with this condition. A dataset of 1,000 DICOM lung cancer images from the LIDC-IDRI repository is used, with each image classified into one of four categories. Although deep learning is still maturing in its ability to analyze and understand cancer data, this research marks a significant step forward in the fight against cancer. The Fusion Model, like all other models, achieved 100% precision in classifying squamous cells. The Fusion Model and ResNet-50 achieved a precision of 90%, closely followed by EfficientNet-B3 and ResNet-101 with slightly lower precision. To prevent overfitting and improve data collection and planning, the authors implemented a data augmentation strategy, which also helped address the issue of imprecise accuracy, ultimately contributing to advancements in health and a reduction in the mortality rate associated with lung cancer.
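A brief sketch of transfer learning with the three named backbones and a late-fusion average of their predictions; the weight enums, input size, and softmax averaging are assumptions about how such a fusion could be wired, not the paper's exact design.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 4  # the four image categories mentioned above

def make_backbone(name):
    """Load an ImageNet-pre-trained backbone and replace its head for 4-class lung classification."""
    if name == "resnet50":
        m = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    elif name == "resnet101":
        m = models.resnet101(weights=models.ResNet101_Weights.DEFAULT)
        m.fc = nn.Linear(m.fc.in_features, NUM_CLASSES)
    else:  # "efficientnet_b3"
        m = models.efficientnet_b3(weights=models.EfficientNet_B3_Weights.DEFAULT)
        m.classifier[1] = nn.Linear(m.classifier[1].in_features, NUM_CLASSES)
    return m

backbones = [make_backbone(n) for n in ("resnet50", "efficientnet_b3", "resnet101")]

def fused_prediction(x):
    """Late fusion: average the softmax outputs of the three fine-tuned models."""
    with torch.no_grad():
        probs = [torch.softmax(m.eval()(x), dim=1) for m in backbones]
    return torch.stack(probs).mean(dim=0)

print(fused_prediction(torch.randn(2, 3, 300, 300)).shape)   # torch.Size([2, 4])
```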


Subjects
Deep Learning, Lung Neoplasms, Humans, Lung Neoplasms/diagnostic imaging, Algorithms, Machine Learning, Research Design
7.
Heliyon ; 10(5): e27509, 2024 Mar 15.
Article in English | MEDLINE | ID: mdl-38468955

ABSTRACT

Several deep-learning assisted disease assessment schemes (DAS) have been proposed to enhance the accurate detection of COVID-19, a critical medical emergency, through the analysis of clinical data. Lung imaging, particularly from CT scans, plays a pivotal role in identifying and assessing the severity of COVID-19 infections. Existing automated methods leveraging deep learning contribute significantly to reducing the diagnostic burden associated with this process. This research aims to develop a simple DAS for COVID-19 detection using pre-trained lightweight deep learning methods (LDMs) applied to lung CT slices. The use of LDMs contributes to a less complex yet highly accurate detection system. The key stages of the developed DAS include image collection and initial processing using Shannon's thresholding, deep-feature mining supported by the LDMs, feature optimization using the Brownian Butterfly Algorithm (BBA), and binary classification with three-fold cross-validation. The performance evaluation of the proposed scheme involves assessing individual, fused, and ensemble features. The investigation reveals that the developed DAS achieves a detection accuracy of 93.80% with individual features, 96% with fused features, and an impressive 99.10% with ensemble features. These outcomes affirm the effectiveness of the proposed scheme in significantly enhancing COVID-19 detection accuracy on the chosen lung CT database.
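A hedged sketch of the final stage only: fusing two deep-feature sets and running three-fold cross-validated binary classification. The synthetic features and the SVM classifier are stand-ins for the paper's mined LDM features and chosen classifier.

```python
import numpy as np
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
feats_a = rng.normal(size=(300, 128))     # placeholder features from one lightweight backbone
feats_b = rng.normal(size=(300, 128))     # placeholder features from a second backbone
labels = rng.integers(0, 2, size=300)     # binary labels: COVID-19 vs. normal

fused = np.concatenate([feats_a, feats_b], axis=1)   # simple fusion by concatenation
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, fused, labels,
                         cv=StratifiedKFold(n_splits=3, shuffle=True, random_state=0))
print("3-fold accuracy:", scores.mean())
```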

8.
Front Cardiovasc Med ; 11: 1365481, 2024.
Article in English | MEDLINE | ID: mdl-38525188

ABSTRACT

The 2017 World Health Organization Fact Sheet highlights that coronary artery disease (CAD) is the leading cause of death globally, responsible for approximately 30% of all deaths. In this context, machine learning (ML) technology is crucial in identifying coronary artery disease, thereby saving lives. ML algorithms can analyze complex patterns and correlations within medical data, enabling early detection and accurate diagnosis of CAD. By leveraging ML technology, healthcare professionals can make informed decisions and implement timely interventions, ultimately leading to improved outcomes and potentially reducing the mortality rate associated with coronary artery disease. Machine learning algorithms make diagnosis non-invasive, quick, accurate, and economical; as a result, they can be employed to supplement existing approaches or as a forerunner to them. This study shows how to use a CNN classifier and an RNN based on the LSTM classifier in deep learning to attain targeted "risk" CAD categorization utilizing an evolving set of 450 cytokine biomarkers that could serve as solid predictive variables for treatment. Both classifiers are based on these cytokine prediction characteristics. An area under the receiver operating characteristic curve (AUROC) score of 0.98 was achieved at a 95% confidence interval (CI); the RNN-LSTM classifier using the 450 cytokine biomarkers achieved an AUROC score of 0.99 (95% CI), and the CNN model containing cytokines received the second-best AUROC score (0.92). The RNN-LSTM classifier considerably beats the CNN classifier regarding AUROC scores, as evidenced by a p-value smaller than 7.48 obtained via an independent t-test. As large-scale initiatives to achieve early, rapid, reliable, inexpensive, and accessible individual identification of CAD risk gain traction, robust machine learning algorithms can now augment older methods such as angiography. Incorporating 65 new sensitive cytokine biomarkers can increase early detection even more. Investigating the novel involvement of cytokines in CAD could lead to better risk detection, disease mechanism discovery, and new therapy options.

9.
BMC Med Imaging ; 24(1): 32, 2024 Feb 05.
Article in English | MEDLINE | ID: mdl-38317098

ABSTRACT

Chest radiographs are examined in typical clinical settings by competent physicians for tuberculosis diagnosis. However, this procedure is time consuming and subjective. Due to the growing usage of machine learning techniques in the applied sciences, researchers have begun applying comparable concepts to medical diagnostics, such as tuberculosis screening. In an era of extremely deep neural networks comprising hundreds of convolution layers for feature extraction, we create a shallow CNN for screening the TB condition from chest X-rays so that the model is able to offer an appropriate interpretation for the right diagnosis. The suggested model consists of four convolution-maxpooling layers with various hyperparameters that were optimized for best performance using a Bayesian optimization technique. The model achieved a peak classification accuracy, F1-score, sensitivity, and specificity of 0.95. In addition, the receiver operating characteristic (ROC) curve for the proposed shallow CNN showed a peak area under the curve of 0.976. Moreover, we employed class activation maps (CAM) and Local Interpretable Model-agnostic Explanations (LIME) explainer systems to assess the transparency and explainability of the model in comparison to a state-of-the-art pre-trained neural network such as DenseNet.
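An illustrative four-block shallow CNN of the kind described; the channel widths, input resolution, and pooling are placeholder values that Bayesian optimization would tune in the reported setup.

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.ReLU(inplace=True),
        nn.MaxPool2d(2),
    )

shallow_cnn = nn.Sequential(
    conv_block(1, 16),            # grayscale chest X-ray input
    conv_block(16, 32),
    conv_block(32, 64),
    conv_block(64, 64),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(64, 2),             # TB vs. normal
)

logits = shallow_cnn(torch.randn(8, 1, 224, 224))
print(logits.shape)               # torch.Size([8, 2])
```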


Subjects
Machine Learning, Tuberculosis, Humans, Bayes Theorem, Radiography, Mass Screening, Tuberculosis/diagnostic imaging
10.
Sci Rep ; 13(1): 22470, 2023 12 18.
Article in English | MEDLINE | ID: mdl-38110422

ABSTRACT

A drop in physical activity and a deterioration in the capacity to undertake daily life activities are both connected with ageing and have negative effects on physical and mental health. An Elderly and Visually Impaired Human Activity Monitoring (EVHAM) system that keeps track of a person's routine and steps in when behaviour changes or a crisis occurs could greatly help an elderly or visually impaired person, and such individuals may gain greater independence with its help. As the backbone of human-centric applications such as actively assisted living and in-home monitoring for the elderly and visually impaired, an EVHAM system is essential. Big data-driven product design is flourishing in this age of 5G and the IoT, and recent advancements in processing power and software architectures have also contributed to the emergence and development of artificial intelligence (AI). In this context, the digital twin has emerged as a state-of-the-art technology that bridges the gap between the real and virtual worlds by evaluating data from several sensors using AI algorithms. Although Wi-Fi-based human activity identification techniques have reported promising findings so far, their effectiveness is vulnerable to environmental variations. Using environment-independent fingerprints generated from the Wi-Fi channel state information (CSI), we introduce Wi-Sense, a human activity identification system that employs a deep hybrid convolutional neural network (DHCNN). The proposed system begins by collecting the CSI with a regular Wi-Fi network interface controller. Wi-Sense uses the CSI-ratio technique to lessen the effect of noise and phase offset, and t-Distributed Stochastic Neighbor Embedding (t-SNE) is used to eliminate unnecessary data; in this process the data dimension is decreased and environmental effects are removed. The resulting spectrogram of the processed data exposes the activity's micro-Doppler fingerprints as a function of both time and location. These spectrograms are used to train the DHCNN. Based on our findings, EVHAM can accurately identify these actions 99% of the time.
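A small sketch of the t-SNE dimensionality-reduction step on CSI amplitudes, using synthetic data; the subcarrier count and perplexity are assumptions.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(2)
csi_amplitude = rng.normal(size=(500, 90))   # 500 frames x 90 subcarrier amplitudes (placeholder)

embedded = TSNE(n_components=2, perplexity=30, init="pca",
                random_state=0).fit_transform(csi_amplitude)
print(embedded.shape)                         # (500, 2), later turned into spectrograms for the DHCNN
```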


Subjects
Artificial Intelligence, Neural Networks (Computer), Aged, Humans, Algorithms, Aging, Big Data
11.
PLoS One ; 18(10): e0293064, 2023.
Article in English | MEDLINE | ID: mdl-37824566

ABSTRACT

[This corrects the article DOI: 10.1371/journal.pone.0250959.].

12.
Sci Rep ; 13(1): 16827, 2023 Oct 06.
Article in English | MEDLINE | ID: mdl-37803133

ABSTRACT

Spectrum sensing determines whether the spectrum is occupied or empty. The main objective of a cognitive radio network (CRN) is to increase the probability of detection (Pd) and reduce the probability of error (Pe) for a given energy consumption; to reduce energy consumption, the probability of detection should be increased. In cooperative spectrum sensing (CSS), all secondary users (SUs) transmit their data to a fusion center (FC) for the final decision on the status of the primary user (PU). Clustering is used to overcome this problem and improve performance. In the clustering technique, all SUs are grouped into clusters on the basis of their similarity; each SU transfers its data to a cluster head (CH), and the CH transfers the combined data to the FC. This paper proposes detection-performance optimization of a CRN with a machine learning-based metaheuristic algorithm using the clustering CSS technique. It presents a hybrid of a support vector machine (SVM) and the Red Deer Algorithm (RDA), named Hybrid SVM-RDA, to identify spectrum gaps. The algorithm proposed in this work reduces the computational complexity, an issue reported with various conventional clustering techniques. The proposed algorithm increases the probability of detection (up to 99%) and decreases the probability of error (up to 1%) at different parameters.
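A toy sketch of the SVM half of the hybrid scheme, deciding occupied versus empty slots from simple energy statistics; in the proposed method the RDA metaheuristic would tune parameters such as C and gamma, which are fixed here, and the synthetic labels are only for demonstration.

```python
import numpy as np
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(3)
energy_features = rng.normal(size=(1000, 8))               # per-slot energy summaries reported by SUs
occupied = (energy_features.mean(axis=1) > 0).astype(int)  # synthetic "PU present" labels

X_tr, X_te, y_tr, y_te = train_test_split(energy_features, occupied,
                                          test_size=0.3, random_state=0)
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_tr, y_tr)
print("detection accuracy:", accuracy_score(y_te, svm.predict(X_te)))
```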

13.
Sci Rep ; 13(1): 9725, 2023 06 15.
Article in English | MEDLINE | ID: mdl-37322046

ABSTRACT

Pancreatic cancer is associated with high mortality rates because of insufficient diagnostic techniques; it is often diagnosed at an advanced stage, when effective treatment is no longer possible. Automated systems that can detect cancer early are therefore crucial to improving diagnosis and treatment outcomes. Several algorithms are already in use in the medical field, and valid, interpretable data are essential for effective diagnosis and therapy, leaving much room for cutting-edge computer systems. The main objective of this research is to predict pancreatic cancer early using deep learning and metaheuristic techniques. The proposed system analyzes medical imaging data, mainly CT scans, and identifies vital features and cancerous growths in the pancreas using a Convolutional Neural Network (CNN) and a YOLO model-based CNN (YCNN). Vital features are predicted from the CT scans, and the proportion of cancerous tissue in the pancreas is estimated using threshold parameters as markers. The paper evaluates the effectiveness of the novel YCNN approach compared with other modern methods in predicting pancreatic cancer. Both biomarker and CT image datasets are used for testing. In a thorough review of comparative findings, the YCNN method achieved 100% accuracy compared with other modern techniques.


Subjects
Deep Learning, Pancreatic Neoplasms, Humans, Neural Networks (Computer), Algorithms, Tomography, X-Ray Computed/methods, Pancreatic Neoplasms/diagnostic imaging
14.
PeerJ Comput Sci ; 9: e1259, 2023.
Article in English | MEDLINE | ID: mdl-37346697

ABSTRACT

In smart cities, the rapid increase in automobiles has caused congestion, pollution, and disruptions in the transportation of commodities. Each year there are more fatalities and cases of permanent impairment due to everyday road accidents. An IoT-based Traffic Management System is used to control traffic congestion, provide secure data transmission, and detect accidents. To identify, gather, and send data, autonomous cars and intelligent gadgets are equipped with an IoT-based ITM system carrying a group of sensors, and machine learning is used to improve the transport system. In this work, an Adaptive Traffic Management (ATM) system with an accident alert sound system (AALS) is used for managing traffic congestion and detecting accidents, while Secure Early Traffic-Related EveNt Detection (SEE-TREND) is used for secure traffic data transmission. The design uses several scenarios to address every potential problem in the transportation system. The suggested ATM model continuously adjusts the timing of traffic signals based on the traffic volume and the anticipated movements from neighboring junctions. By progressively allowing cars to pass green lights, it considerably reduces travel time and relieves congestion by creating a seamless transition. The trial results show that the suggested ATM system performed noticeably better than the traditional traffic-management method and will be a leader in transportation planning for smart-city-based transportation systems. The suggested ATM-ALTREND solution provides secure traffic data transmission that decreases traffic jams and vehicle wait times, lowers accident rates, and enhances the entire travel experience.
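A toy illustration of the adaptive-signal idea: green time at a junction is divided in proportion to current queues plus anticipated inflow from neighboring junctions. The weighting factor, cycle length, and minimum green time are illustrative assumptions, not values from the paper.

```python
def allocate_green_time(queues, inflow, cycle_s=120, min_green_s=10, inflow_weight=0.5):
    """Split a signal cycle across approaches in proportion to estimated demand."""
    demand = {a: queues[a] + inflow_weight * inflow.get(a, 0) for a in queues}
    total = sum(demand.values()) or 1.0
    return {a: max(min_green_s, cycle_s * d / total) for a, d in demand.items()}

queues = {"north": 12, "south": 4, "east": 20, "west": 8}   # vehicles waiting per approach
inflow = {"east": 10, "north": 3}                           # expected arrivals from neighbors
print(allocate_green_time(queues, inflow))
```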

15.
Comput Intell Neurosci ; 2023: 4387053, 2023.
Article in English | MEDLINE | ID: mdl-37163175

ABSTRACT

The integration of a decision maker's preferences into evolutionary multi-objective optimization (EMO) has been a common research topic over the last decade. Several preference-based evolutionary approaches have been proposed in the literature; the reference-point-based non-dominated sorting genetic algorithm (R-NSGA-II) is one of the best known. This method mainly aims to find a set of Pareto-optimal solutions in the region of interest (ROI) rather than obtaining the entire Pareto-optimal set. The approach uses Euclidean distance as the metric for the distance between each candidate solution and the reference point. However, this metric may not produce the desired solutions because the final minimal Euclidean distance value is unknown, making it difficult to determine whether the true Pareto-optimal solution has been reached by the end of the optimization run. In this study, the R-NSGA-II method is modified to use the recently proposed simplified Karush-Kuhn-Tucker proximity measure (S-KKTPM) instead of the Euclidean distance metric; an S-KKTPM-based distance measure can predict the convergence behavior of a point toward the Pareto-optimal front without prior knowledge of the optimum solution. Experimental results show that the proposed algorithm is highly competitive with several state-of-the-art preference-based EMO methods. Extensive experiments were conducted with 2 to 10 objectives on various standard problems. The results show the effectiveness of the algorithm in obtaining the preferred solutions in the ROI and its ability to control the size of each preferred region separately at the same time.


Subjects
Algorithms, Biological Evolution
16.
Heliyon ; 9(4): e15378, 2023 Apr.
Article in English | MEDLINE | ID: mdl-37101631

ABSTRACT

With the whirlwind evolution of technology, the quantity of stored data within datasets is rapidly expanding. As a result, extracting crucial and relevant information from said datasets is a gruelling task. Feature selection is a critical preprocessing task for machine learning to reduce the excess data in a set. This research presents a novel quasi-reflection learning arithmetic optimization algorithm - firefly search, an enhanced version of the original arithmetic optimization algorithm. Quasi-reflection learning mechanism was implemented for enhancement of population diversity, while firefly algorithm metaheuristics were used to improve the exploitation abilities of the original arithmetic optimization algorithm. The aim of this wrapper-based method is to tackle a specific classification problem by selecting an optimal feature subset. The proposed algorithm is tested and compared with various well-known methods on ten unconstrained benchmark functions, then on twenty-one standard datasets gathered from the University of California, Irvine Repository and Arizona State University. Additionally, the proposed approach is applied to the Corona disease dataset. The experimental results verify the improvements of the presented method and their statistical significance.
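A minimal sketch of the quasi-reflection step itself, assuming the standard formulation in which each coordinate is resampled uniformly between the search-space midpoint and the current solution; the population size and bounds are placeholders.

```python
import numpy as np

rng = np.random.default_rng(4)

def quasi_reflect(population, lb, ub):
    """Quasi-reflection-based learning: draw each coordinate between the domain midpoint
    and the current position to diversify the population."""
    mid = (lb + ub) / 2.0
    low = np.minimum(mid, population)
    high = np.maximum(mid, population)
    return rng.uniform(low, high)

pop = rng.uniform(-5, 5, size=(6, 3))        # 6 candidate solutions in 3 dimensions
print(quasi_reflect(pop, lb=-5.0, ub=5.0))
```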

17.
Microprocess Microsyst ; 98: 104778, 2023 Apr.
Article in English | MEDLINE | ID: mdl-36785847

ABSTRACT

Feature selection is one of the most important challenges in machine learning and data science. This process is usually performed in the data preprocessing phase, where the data are transformed into a proper format for further operations by the machine learning algorithm. Many real-world datasets are highly dimensional, with many irrelevant or even redundant features. These kinds of features do not improve classification accuracy and can even degrade the performance of a classifier. The goal of feature selection is to find an optimal (or sub-optimal) subset of features that contains relevant information about the dataset, from which machine learning algorithms can derive useful conclusions. In this manuscript, a novel version of the firefly algorithm (FA) is proposed and adapted for the feature selection challenge. The proposed method significantly improves the performance of the basic FA and also outperforms other state-of-the-art metaheuristics on both benchmark bound-constrained and practical feature selection tasks. The method was first validated on standard unconstrained benchmarks and later applied to feature selection using 21 standard University of California, Irvine (UCI) datasets. Moreover, the presented approach was also tested on a relatively novel COVID-19 dataset for predicting patients' health and on one microcontroller microarray dataset. The results obtained in all practical simulations attest to the robustness and efficiency of the proposed algorithm in terms of convergence, solution quality, and classification accuracy. More precisely, the proposed approach obtained the best classification accuracy on 13 out of 21 datasets, significantly outperforming the other competitor methods.

18.
Sci Rep ; 13(1): 1004, 2023 01 18.
Article in English | MEDLINE | ID: mdl-36653424

ABSTRACT

Industrial Internet of Things (IIoT)-based systems have become an important part of industry consortium systems because of their rapid growth and wide-ranging application. Various physical objects interconnected in the IIoT network communicate with each other and simplify decision-making by observing and analyzing the surrounding environment. While making such intelligent decisions, devices need to transfer and communicate data with each other. However, as the devices involved in IIoT networks grow and the methods of connection diversify, traditional security frameworks face many shortcomings, including vulnerability to attack, data lags, difficulties in sharing data, and a lack of proper authentication. Blockchain technology has the potential to enable the safe distribution of the big data generated by the IIoT. Prevailing data-sharing methods in blockchain concentrate only on the data interchanged among parties, not on the efficiency of sharing and storing. Hence, an element-based K-harmonic means clustering algorithm (CA) is proposed for the effective sharing of data among the entities, along with an algorithm named underweight data block (UDB) for overcoming the obstacle of storage space. The performance metrics considered for the evaluation of the proposed framework are the sum of squared errors (SSE), time complexity with respect to different m values, and storage complexity with CPU utilization. The experiments were performed in the MATLAB 2018a simulation environment. The proposed model provides better sharing and storing based on blockchain technology, which is appropriate for the IIoT.


Subjects
Blockchain, Industries, Algorithms, Benchmarking, Big Data, Computer Security
19.
Front Bioeng Biotechnol ; 11: 1286966, 2023.
Article in English | MEDLINE | ID: mdl-38169636

ABSTRACT

Diabetic retinopathy (DR) is a major type of eye defect caused by abnormalities in the blood vessels within the retinal tissue. Early detection by an automatic approach using modern methodologies helps prevent consequences such as vision loss. This research therefore developed an effective segmentation approach known as Level-set Based Adaptive-active Contour Segmentation (LBACS), which segments the images by improving the boundary conditions and detecting the edges using the Level Set Method with Improved Boundary Indicator Function (LSMIBIF) and the Adaptive-Active Contour Model (AACM). For evaluating the DR system, the information is collected from the publicly available datasets named the Indian Diabetic Retinopathy Image Dataset (IDRiD) and Diabetic Retinopathy Database 1 (DIARETDB 1). The collected images are pre-processed using a Gaussian filter, edge-detection sharpening, contrast enhancement, and luminosity enhancement to eliminate the noise/interference and data imbalance present in the available datasets. The noise-free data are then segmented using the level-set-based active contour technique. The segmented images are passed to the feature extraction stage, where the Gray Level Co-occurrence Matrix (GLCM) and local ternary and local binary patterns are employed to extract features from the segmented image. Finally, the extracted features are given as input to the classification stage, where a Long Short-Term Memory (LSTM) network is utilized to categorize the various classes of DR. The result analysis clearly shows that the proposed LBACS-LSTM achieved better results in overall metrics. The accuracy of the proposed LBACS-LSTM on the IDRiD and DIARETDB 1 datasets is 99.43% and 97.39%, respectively, which is higher than that of existing approaches such as the three-dimensional semantic model, the Delimiting Segmentation Approach Using Knowledge Learning (DSA-KL), K-Nearest Neighbor (KNN), a computer-aided method, and the Chronological Tunicate Swarm Algorithm with Stacked Auto Encoder (CTSA-SAE).
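A short sketch of the GLCM texture-feature step using scikit-image; the offsets, angles, and the four properties shown are common defaults rather than the paper's exact configuration, and the random patch stands in for a segmented region.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(5)
patch = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in for a segmented fundus region

glcm = graycomatrix(patch, distances=[1], angles=[0, np.pi / 2],
                    levels=256, symmetric=True, normed=True)
features = [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(features)   # these would be concatenated with local binary/ternary patterns and fed to the LSTM
```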

20.
PLoS One ; 17(10): e0275727, 2022.
Article in English | MEDLINE | ID: mdl-36215218

ABSTRACT

The fast-growing quantity of information hinders the process of machine learning, making it computationally costly and yielding substandard results. Feature selection is a pre-processing method for obtaining the optimal subset of features in a data set. Optimization algorithms struggle to decrease the dimensionality while retaining accuracy in high-dimensional data sets. This article proposes a novel chaotic opposition fruit fly optimization algorithm, an improved variation of the original fruit fly algorithm, advanced and adapted for binary optimization problems. The proposed algorithm is tested on ten unconstrained benchmark functions and evaluated on twenty-one standard datasets taken from the University of California, Irvine repository and Arizona State University. Further, the presented algorithm is assessed on a coronavirus disease dataset as well. The proposed method is then compared with several well-known feature selection algorithms on the same datasets. The results show that the presented algorithm predominantly outperforms other algorithms in selecting the most relevant features by decreasing the number of utilized features and improving classification accuracy.
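A minimal sketch of the two named ingredients, a logistic chaotic map and opposition-based learning, applied to a continuous population before binarization; the map parameter and bounds are illustrative assumptions.

```python
import numpy as np

def logistic_map(x, r=4.0):
    return r * x * (1.0 - x)                  # chaotic sequence in (0, 1)

def opposition(population, lb, ub):
    return lb + ub - population                # opposite candidates within the same bounds

rng = np.random.default_rng(6)
pop = rng.uniform(0.0, 1.0, size=(5, 4))       # 5 candidates, 4 feature-selection weights
chaos = logistic_map(pop)                      # chaotic perturbation of the population
print(opposition(chaos, lb=0.0, ub=1.0))       # opposition-based counterparts
```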


Subjects
COVID-19, Algorithms, Animals, Arizona, Drosophila, Machine Learning