ABSTRACT
Intelligent transportation systems (ITSs) have become an indispensable component of modern global technological development, as they play a major role in the accurate statistical estimation of vehicles or individuals commuting to a particular transportation facility at a given time. This provides the ideal basis for designing and engineering adequate infrastructural capacity for transportation analyses. However, traffic prediction remains a daunting task due to the non-Euclidean and complex distribution of road networks and the topological constraints of urbanized road networks. To address this challenge, this paper presents a traffic forecasting model that combines a graph convolutional network, a gated recurrent unit, and a multi-head attention mechanism to simultaneously and effectively capture the spatio-temporal dependence and the dynamic variation in the topological sequence of traffic data. By achieving 91.8% accuracy on the Los Angeles highway traffic (Los-loop) test data for 15-min traffic prediction and an R2 score of 85% on the Shenzhen City (SZ-taxi) test dataset for 15- and 30-min predictions, the proposed model demonstrates that it can learn the global spatial variation and the dynamic temporal sequence of traffic data over time, resulting in state-of-the-art traffic forecasting on the SZ-taxi and Los-loop datasets.
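A minimal PyTorch sketch of the kind of architecture this abstract describes (graph convolution for spatial structure, a GRU for temporal dynamics, and multi-head attention over the sequence). The layer sizes, the normalized adjacency matrix `a_hat`, and the random data are illustrative assumptions, not the authors' exact model.

```python
# Hedged sketch of a GCN + GRU + multi-head attention traffic forecaster.
import torch
import torch.nn as nn


class GraphConv(nn.Module):
    """Single graph convolution: A_hat @ X @ W over all road-network nodes."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, x, a_hat):
        # x: (batch, nodes, in_dim), a_hat: (nodes, nodes) normalized adjacency
        return torch.relu(self.linear(torch.einsum("ij,bjf->bif", a_hat, x)))


class TrafficForecaster(nn.Module):
    def __init__(self, num_nodes, hidden=64, heads=4, horizon=1):
        super().__init__()
        self.gcn = GraphConv(1, hidden)
        self.gru = nn.GRU(hidden, hidden, batch_first=True)
        self.attn = nn.MultiheadAttention(hidden, heads, batch_first=True)
        self.head = nn.Linear(hidden, horizon)

    def forward(self, x, a_hat):
        # x: (batch, time, nodes) traffic speeds, one feature per node
        b, t, n = x.shape
        spatial = [self.gcn(x[:, step].unsqueeze(-1), a_hat) for step in range(t)]
        spatial = torch.stack(spatial, dim=1)               # (b, t, n, hidden)
        seq = spatial.permute(0, 2, 1, 3).reshape(b * n, t, -1)
        temporal, _ = self.gru(seq)                          # (b*n, t, hidden)
        attended, _ = self.attn(temporal, temporal, temporal)
        out = self.head(attended[:, -1])                     # (b*n, horizon)
        return out.reshape(b, n, -1)


# Toy usage: 10 sensors, 12 past time steps, predict the next step.
a_hat = torch.eye(10)                                        # placeholder adjacency
model = TrafficForecaster(num_nodes=10)
pred = model(torch.rand(8, 12, 10), a_hat)                   # -> (8, 10, 1)
```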
ABSTRACT
The novel human coronavirus disease COVID-19 has become the fifth documented pandemic since the 1918 flu pandemic. COVID-19 was first reported in Wuhan, China, and subsequently spread worldwide. Almost all countries of the world are facing this natural challenge. We present forecasting models to estimate and predict the COVID-19 outbreak in Asia Pacific countries, particularly Pakistan, Afghanistan, India, and Bangladesh. We utilize the latest deep learning techniques, such as Long Short-Term Memory networks (LSTM), Recurrent Neural Networks (RNN), and Gated Recurrent Units (GRU), to quantify the intensity of the pandemic for the near future. We consider the time variable and data non-linearity when employing neural networks. Each model's salient features have been evaluated to foresee the number of COVID-19 cases in the next 10 days. The forecasting performance of the employed deep learning models on data up to July 01, 2020, is more than 90% accurate, which shows the reliability of the proposed study. We hope that the present comparative analysis will provide an accurate picture of the pandemic's spread to government officials so that they can take appropriate mitigation measures.
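A hedged, minimal sketch of univariate LSTM-based case forecasting in Keras, rolled forward ten days as in the abstract; the window length, network width, and the synthetic case counts are assumptions, and GRU or SimpleRNN layers can be swapped in the same way.

```python
# Minimal sketch: sliding-window LSTM forecaster rolled forward 10 days.
import numpy as np
import tensorflow as tf

cases = np.cumsum(np.random.poisson(50, size=120)).astype("float32")  # fake cumulative counts
window = 7

# Build (samples, window, 1) sliding windows and next-day targets.
X = np.stack([cases[i:i + window] for i in range(len(cases) - window)])[..., None]
y = cases[window:]

model = tf.keras.Sequential([
    tf.keras.layers.LSTM(32, input_shape=(window, 1)),   # GRU / SimpleRNN are drop-in swaps
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, verbose=0)

# Roll the model forward 10 days, emulating the abstract's 10-day horizon.
history = list(cases[-window:])
for _ in range(10):
    nxt = model.predict(np.array(history[-window:])[None, :, None], verbose=0)[0, 0]
    history.append(nxt)
print("10-day forecast:", history[-10:])
```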
ABSTRACT
Machine learning and computer vision have been at the frontier of the war against the COVID-19 pandemic. Radiology has vastly improved the diagnosis of diseases, especially lung diseases, through the early assessment of key disease factors. Chest X-rays have thus become one of the most commonly used radiological tests to detect and diagnose many lung diseases. However, detecting lung disease from X-rays is a significantly challenging task that depends on the availability of skilled radiologists. There has been a recent increase in attention to the design of Convolutional Neural Network (CNN) models for lung disease classification. CNNs require a considerable amount of training data, and they do not handle translation and rotation of the input well. The recently proposed Capsule Networks (CapsNets) are a new learning architecture that aims to overcome these shortcomings of CNNs. CapsNets are robust to rotation and complex translation, and they require much less training data, which suits the processing of medical image datasets, including radiological chest X-ray images. In this research, the adoption and integration of CapsNets into the problem of chest X-ray classification is explored. The aim is to design a deep CapsNet-based model that increases the accuracy of the classification problem involved. Convolution blocks take input images and generate convolution layers that are fed into a capsule block. Twelve capsule layers are used, and the output of each capsule block serves as input to the next convolution block; the process is repeated for all blocks. The experimental results show that the proposed architecture yields better results than existing CNN techniques by achieving a better average area under the curve (AUC). Furthermore, the proposed DNet achieves the best performance on the ChestX-ray14 dataset compared with traditional CNNs, and it is validated that DNet performs better with greater total depth.
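For illustration only, a small capsule-style building block (a convolution block followed by primary capsules with the squash non-linearity) in PyTorch. It does not reproduce the 12-capsule-layer DNet of the abstract; all sizes and the dummy input are assumptions.

```python
# Sketch of a convolution block feeding primary capsules with squash().
import torch
import torch.nn as nn


def squash(s, dim=-1, eps=1e-8):
    """Scale capsule vectors so their length lies in (0, 1) while preserving direction."""
    norm_sq = (s ** 2).sum(dim=dim, keepdim=True)
    return (norm_sq / (1.0 + norm_sq)) * s / torch.sqrt(norm_sq + eps)


class ConvPrimaryCaps(nn.Module):
    """Convolution block followed by primary capsules of dimension caps_dim."""
    def __init__(self, in_ch=1, conv_ch=64, caps_dim=8, caps_ch=16):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, conv_ch, kernel_size=9, stride=1), nn.ReLU(),
        )
        self.caps_conv = nn.Conv2d(conv_ch, caps_ch * caps_dim, kernel_size=9, stride=2)
        self.caps_dim = caps_dim

    def forward(self, x):
        feat = self.conv(x)
        u = self.caps_conv(feat)                 # (b, caps_ch*caps_dim, h, w)
        b = u.size(0)
        u = u.view(b, -1, self.caps_dim)         # one row per primary capsule
        return squash(u)


caps = ConvPrimaryCaps()
out = caps(torch.rand(2, 1, 64, 64))             # stand-in grayscale X-ray patches
print(out.shape)                                  # (2, num_capsules, 8)
```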
ABSTRACT
The world is suffering from the new Covid-19 pandemic, which is affecting human lives. Collecting records of Covid-19 patients is necessary to tackle this situation, and decision support systems (DSS) are used to gather these records. Researchers access patient data through DSS and predict the severity and effects of Covid-19; however, unauthorized users can also access the data for malicious purposes. For this reason, protecting Covid-19 patient data is a challenging task. In this paper, we propose a new technique for protecting Covid-19 patients' data. The proposed model is two-fold. First, Blowfish encryption is used to encrypt the identity attributes. Second, pseudonymization is used to mask identity and quasi-identifier attributes; the encrypted, masked, sensitive, and non-sensitive attributes are then linked together. In this way, the data becomes more secure against unauthorized access.
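A hedged sketch of the two-fold idea: Blowfish encryption of identity attributes and pseudonymization/masking of quasi-identifiers, with the remaining attributes kept linked. The library choice (PyCryptodome), key handling, and field names are assumptions, not the paper's implementation.

```python
# Sketch: Blowfish-encrypted identity attributes plus pseudonymized record linkage.
import uuid
from Crypto.Cipher import Blowfish
from Crypto.Util.Padding import pad, unpad

KEY = b"example-demo-key"          # placeholder; in practice securely generated and stored

def encrypt_identity(value: str) -> bytes:
    cipher = Blowfish.new(KEY, Blowfish.MODE_CBC)
    return cipher.iv + cipher.encrypt(pad(value.encode(), Blowfish.block_size))

def decrypt_identity(blob: bytes) -> str:
    iv, ct = blob[:Blowfish.block_size], blob[Blowfish.block_size:]
    cipher = Blowfish.new(KEY, Blowfish.MODE_CBC, iv)
    return unpad(cipher.decrypt(ct), Blowfish.block_size).decode()

def pseudonymize(record: dict) -> dict:
    """Encrypt identity attributes, mask quasi-identifiers, keep the rest linked."""
    return {
        "pseudo_id": uuid.uuid4().hex,                   # replaces the direct identifier
        "name_enc": encrypt_identity(record["name"]),    # encrypted identity attribute
        "age_band": f"{(record['age'] // 10) * 10}s",    # masked quasi-attribute
        "covid_severity": record["covid_severity"],      # sensitive attribute kept for analysis
    }

patient = {"name": "Jane Doe", "age": 47, "covid_severity": "moderate"}
print(pseudonymize(patient))
```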
ABSTRACT
Plant diseases can cause a considerable reduction in the quality and quantity of agricultural products. Guava, well known as the apple of the tropics, is one significant fruit cultivated in tropical regions. It is attacked by 177 pathogens, including 167 fungal and others such as bacterial, algal, and nematode pathogens. In addition, postharvest diseases may cause crucial production loss. Due to minor variations among the symptoms of various guava diseases, expert opinion is required for disease analysis, and improper diagnosis may cause economic losses to farmers through the improper use of pesticides. Automatic detection of diseases as soon as they emerge on the plants' leaves and fruit is therefore required to maintain high crop yields. In this paper, an artificial intelligence (AI) driven framework is presented to detect and classify the most common guava plant diseases. The proposed framework employs ΔE color-difference image segmentation to segregate the areas infected by disease. Furthermore, color (RGB, HSV) histogram and textural (LBP) features are used to extract rich, informative feature vectors. The combination of color and textural features is used for identification and attains similar outcomes compared to the individual channels, while disease recognition is performed by employing advanced machine-learning classifiers (Fine KNN, Complex Tree, Boosted Tree, Bagged Tree, Cubic SVM). The proposed framework is evaluated on a high-resolution (18 MP) image dataset of guava leaves and fruit. The best recognition results were obtained by the Bagged Tree classifier on a set of RGB, HSV, and LBP features (99% accuracy in recognizing four guava fruit diseases (Canker, Mummification, Dot, and Rust) against healthy fruit). The proposed framework may help farmers avoid possible production loss by taking early precautions.
Subject(s)
Psidium, Artificial Intelligence, Fruit, Machine Learning, Plant Diseases
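A rough sketch of the feature pipeline described in the abstract above: a ΔE color-difference mask for infected regions plus RGB/HSV histograms and an LBP histogram as the feature vector. The reference "healthy" color, the ΔE threshold, and the histogram bin counts are assumptions.

```python
# Sketch: ΔE-based segmentation mask plus color-histogram and LBP features.
import numpy as np
from skimage import color, feature

def delta_e_mask(rgb_img, healthy_rgb=(0.30, 0.55, 0.25), thresh=25.0):
    """Mark pixels whose CIE ΔE distance from an assumed healthy-leaf color is large."""
    lab = color.rgb2lab(rgb_img)
    ref = color.rgb2lab(np.array(healthy_rgb, dtype=float).reshape(1, 1, 3))
    return color.deltaE_cie76(lab, ref) > thresh

def extract_features(rgb_img, bins=16):
    hsv = color.rgb2hsv(rgb_img)
    gray = color.rgb2gray(rgb_img)
    lbp = feature.local_binary_pattern(gray, P=8, R=1, method="uniform")
    hists = []
    for img in (rgb_img, hsv):                       # RGB and HSV channel histograms
        for ch in range(3):
            h, _ = np.histogram(img[..., ch], bins=bins, range=(0, 1), density=True)
            hists.append(h)
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate(hists + [lbp_hist])

img = np.random.rand(128, 128, 3)                    # stand-in for a guava leaf image
mask = delta_e_mask(img)
vec = extract_features(img)
print(mask.shape, vec.shape)                          # (128, 128) (106,)
```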
ABSTRACT
Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of the deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples cause DL models to misclassify instances that humans would consider benign, and their effects carry over to practical applications in real physical scenarios under adversarial threats. Thus, adversarial attacks and defenses, including machine learning and its reliability, have drawn growing interest and have been a hot topic of research in recent years. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack through adversarial training and a feature fusion strategy, which preserves classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the Diabetic Retinopathy recognition problem, which is considered a state-of-the-art endeavor. Results obtained on the retinal fundus images, which are prone to adversarial attacks, are 99% accurate and show that the proposed defensive model is robust.
Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Algorithms, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Humans, Neural Networks, Computer, Reproducibility of Results
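A minimal sketch of speckle-noise adversarial training for the setting in the abstract above, assuming a generic PyTorch classifier; the noise intensity, the loss weighting, and the omission of the paper's feature-fusion stage are simplifying assumptions.

```python
# Sketch: speckle-noise perturbation and a mixed clean/adversarial training step.
import torch
import torch.nn as nn
import torch.nn.functional as F

def speckle_attack(images, intensity=0.15):
    """Multiplicative (speckle) noise: x_adv = x + x * n, with n ~ N(0, intensity^2)."""
    noise = torch.randn_like(images) * intensity
    return torch.clamp(images + images * noise, 0.0, 1.0)

def adversarial_training_step(model, optimizer, images, labels, adv_weight=0.5):
    """One step mixing the clean loss and the speckle-perturbed loss."""
    model.train()
    optimizer.zero_grad()
    loss_clean = F.cross_entropy(model(images), labels)
    loss_adv = F.cross_entropy(model(speckle_attack(images)), labels)
    loss = (1 - adv_weight) * loss_clean + adv_weight * loss_adv
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage with a tiny stand-in network and random "fundus" batches.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for _ in range(3):
    x = torch.rand(8, 3, 64, 64)
    y = torch.randint(0, 2, (8,))
    print(adversarial_training_step(model, opt, x, y))
```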
ABSTRACT
The Industrial Internet of Things (IIoT) has led to the growth and expansion of various new opportunities in the new industrial transformation. There have been notable challenges regarding data security and privacy when collecting real-time and automatic data from industrial applications. This paper proposes Federated Transfer Learning for Authentication and Privacy Preservation using a Novel Supportive Twin Delayed DDPG (S-TD3) algorithm for IIoT. In FT-Block (federated transfer learning blockchain), several blockchains are applied to preserve privacy and security for all types of industrial applications. Additionally, by introducing an authentication mechanism based on transfer learning, blockchains can enhance the preservation and security standards for industrial applications. Specifically, the Novel Supportive Twin Delayed DDPG trains the user model to authenticate specific regions. As one of the most open and scalable platforms for exchanging information, it supports the positive transfer of different kinds of data between devices in larger and local industrial operations. Current authentication mechanisms in IIoT suffer mainly from reliance on a single authentication factor and from poor adaptation to the steadily increasing number of users and their differing requirements. The results clearly show that the proposed solutions are useful.
Subject(s)
Internet of Things, Algorithms, Computer Security, Machine Learning, Privacy
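Only to illustrate the Twin Delayed DDPG (TD3) family that S-TD3 builds on, a compact sketch of the clipped double-Q target computation; the networks, noise settings, and toy shapes are assumptions, and the federated, blockchain, and authentication components of the paper are not reproduced here.

```python
# Sketch of the twin-critic (clipped double-Q) target used by TD3-style methods.
import torch
import torch.nn as nn

def td3_target(critic1_t, critic2_t, actor_t, next_state, reward, done,
               gamma=0.99, noise_std=0.2, noise_clip=0.5):
    """Target: r + gamma * min(Q1', Q2')(s', a' + clipped noise), zeroed at terminal states."""
    with torch.no_grad():
        next_action = actor_t(next_state)
        noise = (torch.randn_like(next_action) * noise_std).clamp(-noise_clip, noise_clip)
        next_action = (next_action + noise).clamp(-1.0, 1.0)
        sa = torch.cat([next_state, next_action], dim=-1)
        q_next = torch.min(critic1_t(sa), critic2_t(sa))
        return reward + gamma * (1.0 - done) * q_next

# Toy shapes: state dim 4, action dim 2; linear stand-ins for target networks.
actor_t = nn.Sequential(nn.Linear(4, 2), nn.Tanh())
critic1_t = nn.Linear(6, 1)
critic2_t = nn.Linear(6, 1)
s2 = torch.rand(16, 4)
r = torch.rand(16, 1)
d = torch.zeros(16, 1)
print(td3_target(critic1_t, critic2_t, actor_t, s2, r, d).shape)   # (16, 1)
```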
ABSTRACT
In the recent era, various diseases have severely affected the lifestyle of individuals, especially adults. Among these, bone diseases, including Knee Osteoarthritis (KOA), have a great impact on quality of life. KOA is a knee joint problem mainly caused by the degeneration of the articular cartilage between the femur and tibia, producing severe joint pain, effusion, constrained joint movement, and gait anomalies. To address these issues, this study presents a novel method for detecting KOA at early stages using deep learning-based feature extraction and classification. Firstly, the input X-ray images are preprocessed, and the Region of Interest (ROI) is extracted through segmentation. Secondly, features are extracted from the preprocessed X-ray images containing the knee joint space width using hybrid feature descriptors, namely a Convolutional Neural Network (CNN) combined with Local Binary Patterns (LBP) and a CNN combined with the Histogram of Oriented Gradients (HOG). Low-level features are computed by HOG, while texture features are computed with the LBP descriptor. Lastly, multi-class classifiers, that is, Support Vector Machine (SVM), Random Forest (RF), and K-Nearest Neighbour (KNN), are used to classify KOA according to the Kellgren-Lawrence (KL) system, which consists of Grade I, Grade II, Grade III, and Grade IV. Experimental evaluation is performed on various combinations of the proposed framework. The experimental results show that the HOG feature descriptor provides approximately 97% accuracy for the early detection and classification of KOA across all four KL grades.
Subject(s)
Osteoarthritis, Knee, Humans, Knee Joint/diagnostic imaging, Neural Networks, Computer, Osteoarthritis, Knee/diagnostic imaging, Quality of Life, Support Vector Machine
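A hedged sketch of the hand-crafted feature path from the abstract above: HOG plus an LBP histogram computed from a knee-joint ROI, fed to SVM, RF, and KNN classifiers for the four KL grades. Descriptor settings, ROI size, and the synthetic labels are assumptions.

```python
# Sketch: HOG + LBP features from a knee ROI, classified by SVM / RF / KNN.
import numpy as np
from skimage.feature import hog, local_binary_pattern
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

def knee_features(gray_roi):
    hog_vec = hog(gray_roi, orientations=9, pixels_per_cell=(16, 16),
                  cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_roi, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([hog_vec, lbp_hist])

# Random stand-ins for segmented knee-joint ROIs with KL grades I-IV (coded 0-3).
X = np.array([knee_features(np.random.rand(128, 128)) for _ in range(40)])
y = np.random.randint(0, 4, size=40)

for clf in (SVC(), RandomForestClassifier(), KNeighborsClassifier()):
    clf.fit(X, y)
    print(type(clf).__name__, clf.score(X, y))
```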
ABSTRACT
In previous studies, replicated and multiple types of speech data have been used for Parkinson's disease (PD) detection. However, two main problems in these studies are low PD detection accuracy and inappropriate validation methodologies leading to unreliable results. This study discusses the effects of the inappropriate validation methodologies used in previous studies and highlights appropriate alternative validation methods that ensure generalization. To enhance PD detection accuracy, we propose a two-stage diagnostic system that refines the extracted set of features through a [Formula: see text] regularized linear support vector machine and classifies the refined subset of features through a deep neural network. To rigorously evaluate the effectiveness of the proposed diagnostic system, experiments are performed on two different voice recording-based benchmark datasets. For both datasets, the proposed diagnostic system achieves 100% accuracy under leave-one-subject-out (LOSO) cross-validation (CV) and 97.5% accuracy under k-fold CV. The results show that the proposed system outperforms the existing methods regarding PD detection accuracy and suggest that it is essential to improving non-invasive diagnostic decision support in PD.
Subject(s)
Parkinson Disease, Voice, Humans, Algorithms, Parkinson Disease/diagnosis, Support Vector Machine, Neural Networks, Computer
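A sketch of the two-stage idea from the abstract above: sparse linear-SVM feature refinement followed by a small neural network, evaluated with leave-one-subject-out cross-validation. The regularization norm did not render in the abstract ("[Formula: see text]"), so the l1 penalty here, as well as the network size and the synthetic data, are assumptions.

```python
# Sketch: sparse-SVM feature selection -> neural classifier, scored with LOSO CV.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(120, 40))            # 120 voice recordings x 40 acoustic features (synthetic)
y = rng.integers(0, 2, size=120)          # PD vs. healthy labels (synthetic)
subjects = np.repeat(np.arange(30), 4)    # 4 recordings per subject

pipe = make_pipeline(
    StandardScaler(),
    SelectFromModel(LinearSVC(penalty="l1", dual=False, C=1.0, max_iter=5000),
                    threshold=-np.inf, max_features=10),   # keep the 10 strongest features
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000),
)
scores = cross_val_score(pipe, X, y, cv=LeaveOneGroupOut(), groups=subjects)
print("LOSO accuracy:", scores.mean())
```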
ABSTRACT
Fake face identity is a serious, potentially fatal issue that affects every industry, from banking and finance to the military and mission-critical applications. The proposed system offers artificial intelligence (AI)-supported fake face detection. The models were trained on an extensive dataset of real and fake face images, incorporating steps such as sampling, preprocessing, pooling, normalization, vectorization, batch processing, model training and testing, and classification via output activation. The proposed work performs a comparative analysis of three fusion models, which can be integrated with Generative Adversarial Networks (GAN) based on the performance evaluation. Model-3, which combines DenseNet-201+ResNet-102+Xception, offers the highest accuracy of 0.9797, and Model-2, which combines DenseNet-201+ResNet-50+Inception V3, offers the lowest loss value of 0.1146; both are suitable for GAN integration. Additionally, Model-1 performs admirably, with an accuracy of 0.9542 and a loss value of 0.1416. A second dataset was also tested, where the proposed Model-3 provided a maximum accuracy of 86.42% with a minimum loss of 0.4054.
Subject(s)
Artificial Intelligence, Industry
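A rough Keras sketch of a backbone-fusion model in the spirit of Model-3 above (DenseNet-201 + ResNet + Xception feature concatenation). Keras ships no "ResNet-102", so ResNet101 is used purely as an illustrative stand-in; the head design and input size are also assumptions.

```python
# Sketch: three pretrained backbones fused by feature concatenation for real/fake faces.
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import DenseNet201, ResNet101, Xception

inp = layers.Input(shape=(224, 224, 3))
branches = []
for Backbone in (DenseNet201, ResNet101, Xception):
    base = Backbone(include_top=False, weights="imagenet", pooling="avg")
    branches.append(base(inp))                       # each backbone used as one layer

fused = layers.Concatenate()(branches)               # concatenated pooled features
fused = layers.Dense(256, activation="relu")(fused)
out = layers.Dense(1, activation="sigmoid")(fused)   # real vs. fake face

model = Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```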
ABSTRACT
The emergency department of hospitals receives a massive number of patients with wrist fractures. For the clinical diagnosis of a suspected fracture, X-ray imaging is the major screening tool. A wrist fracture is a significant global health concern for children, adolescents, and the elderly, and a missed diagnosis on medical imaging can have significant consequences for patients, resulting in delayed treatment and poor functional recovery. Therefore, an intelligent method is needed in the medical department to precisely diagnose wrist fractures via an automated diagnostic tool that serves as a second opinion for doctors. In this research, a fused deep learning model combining a convolutional neural network (CNN) and long short-term memory (LSTM) is proposed to detect wrist fractures from X-ray images. It gives doctors a second option for diagnosing wrist fractures using computer vision, to reduce the number of missed fractures. The dataset, acquired from Mendeley, comprises 192 wrist X-ray images. In this framework, image pre-processing is applied, then data augmentation is used to solve the class imbalance problem by generating rotated oversamples of minority-class images during training; the pre-processed and augmented normalized images are fed into a 28-layer dilated CNN (DCNN) to extract deep, valuable features. The deep features are then fed to the proposed LSTM network to distinguish wrist fractures from normal cases. The experimental results of the DCNN-LSTM, with and without augmentation, are compared with other deep learning models. The proposed work is also compared to existing algorithms in terms of accuracy, sensitivity, specificity, precision, the F1-score, and kappa. The results show that the DCNN-LSTM fusion achieves higher accuracy and has high potential for medical applications as a second opinion.
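A scaled-down sketch of the CNN-to-LSTM idea in the abstract: dilated convolution blocks extract features whose rows are treated as a sequence for an LSTM head, with rotation-based augmentation for the minority class. The depth (far fewer than 28 layers), sizes, and settings are assumptions.

```python
# Sketch: dilated CNN feature extractor feeding an LSTM classification head.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_dcnn_lstm(input_shape=(128, 128, 1), num_classes=2):
    inp = layers.Input(shape=input_shape)
    x = inp
    for filters, dilation in [(32, 1), (64, 2), (128, 2)]:     # dilated conv blocks
        x = layers.Conv2D(filters, 3, padding="same", dilation_rate=dilation,
                          activation="relu")(x)
        x = layers.MaxPooling2D()(x)
    # Treat each row of the final feature map as one timestep for the LSTM.
    h, w, c = x.shape[1], x.shape[2], x.shape[3]
    x = layers.Reshape((h, w * c))(x)
    x = layers.LSTM(64)(x)
    out = layers.Dense(num_classes, activation="softmax")(x)
    return models.Model(inp, out)

model = build_dcnn_lstm()
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.summary()

# Rotation-based oversampling of the minority class during training (illustrative).
augment = tf.keras.Sequential([layers.RandomRotation(0.1)])
# augmented_batch = augment(minority_class_batch)
```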
ABSTRACT
[This retracts the article DOI: 10.1007/s00500-021-06075-8.].
ABSTRACT
Lung and colon cancers are among the leading causes of human mortality and morbidity. The early diagnostic work-up of these diseases includes radiography, ultrasound, magnetic resonance imaging, and computed tomography. Certain blood tumor markers for carcinoma of the lung and colon also aid in the diagnosis. Despite laboratory tests and diagnostic imaging, histopathology remains the gold standard, providing cell-level images of the tissue under examination. Reading these images requires a large amount of a histopathologist's time, and conventional diagnostic methods also involve high-end equipment. This leads to a limited number of patients receiving a final diagnosis and early treatment, in addition to the chance of inter-observer errors. In recent years, deep learning has shown promising results in the medical field, helping with early diagnosis and treatment according to the severity of the disease. With the help of EfficientNetV2 models that have been five-fold cross-validated and tested, we propose an automated method for detecting lung (lung adenocarcinoma, lung benign, and lung squamous cell carcinoma) and colon (colon adenocarcinoma and colon benign) cancer subtypes from LC25000 histopathology images. EfficientNetV2 is a state-of-the-art deep learning architecture based on the principles of compound scaling and progressive learning; its large, medium, and small models were employed. An accuracy of 99.97%, AUC of 99.99%, F1-score of 99.97%, balanced accuracy of 99.97%, and Matthews correlation coefficient of 99.96% were obtained on the test set using the EfficientNetV2-L model for the 5-class classification of lung and colon cancers, outperforming the existing methods. Using Grad-CAM, we created visual saliency maps to precisely locate the vital regions in the test-set histopathology images where the models pay more attention during cancer subtype predictions. These visual saliency maps may potentially assist pathologists in designing better treatment strategies. Therefore, it is possible to use the proposed pipeline in clinical settings for fully automated lung and colon cancer detection from histopathology images with explainability.
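A hedged transfer-learning sketch for the 5-class lung/colon task using the Keras EfficientNetV2-L backbone; the input size, classification head, and hyperparameters are assumptions, and LC25000 loading, fine-tuning, and Grad-CAM are omitted.

```python
# Sketch: EfficientNetV2-L backbone with a 5-class softmax head.
import tensorflow as tf
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import EfficientNetV2L

base = EfficientNetV2L(include_top=False, weights="imagenet", pooling="avg",
                       input_shape=(480, 480, 3))
base.trainable = False                        # warm-up phase; unfreeze later for fine-tuning

inp = layers.Input(shape=(480, 480, 3))
x = base(inp, training=False)
x = layers.Dropout(0.3)(x)
out = layers.Dense(5, activation="softmax")(x)  # 3 lung + 2 colon classes

model = Model(inp, out)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC()])
model.summary()
```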
ABSTRACT
Diabetic retinopathy (DR) is one of the major complications caused by diabetes and is usually identified from retinal fundus images. Screening for DR from digital fundus images can be time-consuming and error-prone for ophthalmologists. For efficient DR screening, good fundus image quality is essential, as it reduces diagnostic errors. Hence, in this work, an automated method for quality estimation (QE) of digital fundus images using an ensemble of recent state-of-the-art EfficientNetV2 deep neural network models is proposed. The ensemble method was cross-validated and tested on one of the largest openly available datasets, the Deep Diabetic Retinopathy Image Dataset (DeepDRiD). We obtained a test accuracy of 75% for QE, outperforming the existing methods on DeepDRiD. Hence, the proposed ensemble method may be a potential tool for automated QE of fundus images and could be handy for ophthalmologists.
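A minimal sketch of test-time ensembling by averaging softmax outputs of independently trained models, as one plausible reading of the ensemble described above; the model file names and class count are placeholders.

```python
# Sketch: probability-averaging ensemble across several trained QE models.
import numpy as np
import tensorflow as tf

def ensemble_predict(model_paths, images):
    """Average the softmax outputs of independently trained models, then take argmax."""
    probs = [tf.keras.models.load_model(p).predict(images, verbose=0) for p in model_paths]
    return np.mean(probs, axis=0).argmax(axis=1)

# Hypothetical usage: three cross-validation folds ensembled at test time.
# labels = ensemble_predict(["qe_fold1.keras", "qe_fold2.keras", "qe_fold3.keras"], test_images)
```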
ABSTRACT
Thalassemia represents one of the most common genetic disorders worldwide, characterized by defects in hemoglobin synthesis. Affected individuals suffer from malfunction of one or more of the four globin genes, leading to chronic hemolytic anemia, an imbalance in the hemoglobin chain ratio, iron overload, and ineffective erythropoiesis. Despite the challenges posed by this condition, recent years have witnessed significant advancements in diagnosis, therapy, and transfusion support, significantly improving the prognosis for thalassemia patients. This research empirically evaluates the efficacy of models constructed using classification methods and explores the effectiveness of relevant features derived using various machine-learning techniques. Five feature selection approaches, namely Chi-Square (χ2), Exploratory Factor Score (EFS), tree-based Recursive Feature Elimination (RFE), gradient-based RFE, and Linear Regression Coefficient, were employed to determine the optimal feature set. Nine classifiers, namely K-Nearest Neighbors (KNN), Decision Trees (DT), Gradient Boosting Classifier (GBC), Linear Regression (LR), AdaBoost, Extreme Gradient Boosting (XGB), Random Forest (RF), Light Gradient Boosting Machine (LGBM), and Support Vector Machine (SVM), were utilized to evaluate the performance. The χ2 method, aligned with the LR classifier, registered 91.56% precision, 91.04% recall, and a 92.65% F-score. Moreover, the results underscore that combining over-sampling with the Synthetic Minority Over-sampling Technique (SMOTE), RFE, and 10-fold cross-validation markedly elevates the detection accuracy for αT patients. Notably, the Gradient Boosting Classifier (GBC) achieves 93.46% accuracy, 93.89% recall, and a 92.72% F1 score.
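An illustrative scikit-learn/imbalanced-learn pipeline for one configuration mentioned above: chi-square feature selection, SMOTE over-sampling applied inside 10-fold cross-validation, and a gradient-boosting classifier; the synthetic data and feature count are assumptions.

```python
# Sketch: chi2 selection + SMOTE + gradient boosting under 10-fold CV.
import numpy as np
from imblearn.over_sampling import SMOTE
from imblearn.pipeline import Pipeline
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(1)
X = rng.random((300, 20))                         # 20 hematological features (synthetic)
y = (rng.random(300) > 0.8).astype(int)           # imbalanced: ~20% carriers (synthetic)

pipe = Pipeline([
    ("scale", MinMaxScaler()),                    # chi2 requires non-negative inputs
    ("select", SelectKBest(chi2, k=10)),
    ("smote", SMOTE(random_state=0)),             # resampling happens only on training folds
    ("clf", GradientBoostingClassifier()),
])
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_score(pipe, X, y, cv=cv, scoring="f1")
print("10-fold F1:", scores.mean())
```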
ABSTRACT
Esophagitis, cancerous growths, bleeding, and ulcers are typical symptoms of gastrointestinal disorders, which account for a significant portion of human mortality. For both patients and doctors, traditional diagnostic methods can be exhausting. The major aim of this research is to propose a hybrid method that can accurately diagnose abnormalities of the gastrointestinal tract and promote early treatment, helping to reduce deaths. The major phases of the proposed method are dataset augmentation, preprocessing, feature engineering (feature extraction, fusion, and optimization), and classification. Image enhancement is performed using hybrid contrast stretching algorithms. Deep learning features are extracted through transfer learning from the ResNet18 model and the proposed XcepNet23 model, and the obtained deep features are ensembled with texture features. The ensemble feature vector is optimized using the Binary Dragonfly Algorithm (BDA), the Moth-Flame Optimization (MFO) algorithm, and the Particle Swarm Optimization (PSO) algorithm. In this research, two datasets (the Hybrid dataset and the Kvasir-V1 dataset), consisting of five and eight classes respectively, are utilized. Compared to the most recent methods, the accuracy achieved by the proposed method on both datasets was superior: the Q_SVM achieved promising accuracies of 100% on the Hybrid dataset and 99.24% on the Kvasir-V1 dataset.
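A sketch of the feature-engineering stage only: deep features from a pretrained ResNet18 fused with an LBP texture histogram. The proposed XcepNet23 and the BDA/MFO/PSO optimizers are not reproduced here, and the input size and stand-in image are assumptions.

```python
# Sketch: ResNet18 deep features fused with an LBP texture histogram per frame.
import numpy as np
import torch
from torchvision import models, transforms
from skimage.feature import local_binary_pattern

resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
resnet.fc = torch.nn.Identity()                   # expose the 512-d penultimate features
resnet.eval()

prep = transforms.Compose([transforms.ToTensor(),
                           transforms.Resize((224, 224), antialias=True)])

def fused_features(rgb_uint8):
    """Concatenate ResNet18 deep features with an LBP texture histogram."""
    with torch.no_grad():
        deep = resnet(prep(rgb_uint8).unsqueeze(0)).squeeze(0).numpy()
    gray = rgb_uint8.mean(axis=2)
    lbp = local_binary_pattern(gray, P=8, R=1, method="uniform")
    texture, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([deep, texture])        # 512 + 10 = 522-d fused vector

img = (np.random.rand(256, 256, 3) * 255).astype(np.uint8)   # stand-in endoscopy frame
print(fused_features(img).shape)
```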
ABSTRACT
While the world works quietly to repair the damage caused by COVID-19's widespread transmission, the monkeypox virus threatens to become a global pandemic. Several nations report new monkeypox cases daily, despite the virus being less deadly and contagious than COVID-19. Monkeypox disease may be detected using artificial intelligence techniques, and this paper suggests two strategies for improving monkeypox image classification precision. The suggested approaches are based on feature extraction and classification, using reinforcement learning and parameter optimization for multi-layer neural networks: the Q-learning algorithm determines the rate at which an action occurs in a particular state, while Malneural networks are binary hybrid algorithms that improve the parameters of neural networks. The algorithms are evaluated using an openly available dataset. Interpretation criteria were utilized to analyze the proposed optimization-based feature selection for monkeypox classification, and a series of numerical tests were conducted to evaluate the efficiency, significance, and robustness of the suggested algorithms. The methods achieved 95% precision, 95% recall, and a 96% F1 score for monkeypox disease, a higher accuracy value than traditional learning methods. The overall macro average was around 0.95, and the overall weighted average was around 0.96. Compared to the benchmark algorithms DDQN, Policy Gradient, and Actor-Critic, the Malneural network had the highest accuracy (around 0.985). The proposed methods were found to be more effective than traditional methods. Clinicians can use this proposal to treat monkeypox patients, and administrative agencies can use it to observe the origin and current status of the disease.
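A generic tabular Q-learning update, included only to illustrate the Q-learning component named above; the paper's state/action encoding is not specified, so the toy state space (e.g. keep/drop a feature) is an assumption.

```python
# Sketch: the standard tabular Q-learning update rule on a toy state/action space.
import numpy as np

def q_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    return Q

Q = np.zeros((5, 2))                    # 5 toy states, 2 actions (e.g. keep/drop a feature)
Q = q_update(Q, s=0, a=1, r=1.0, s_next=1)
print(Q)
```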
ABSTRACT
The skin is the human body's largest organ, and skin cancer is considered among the most dangerous kinds of cancer. Various pathological variations in the human body can cause abnormal cell growth due to genetic disorders, and these changes in human skin cells are very dangerous. Skin cancer slowly spreads to other parts of the body and, because of its high mortality rate, early diagnosis is essential. Visual checkups and manual examination of skin lesions make the determination of skin cancer very tricky. Considering these concerns, numerous early recognition approaches have been proposed for skin cancer. With the fast progression of computer-aided diagnosis systems, a variety of deep learning, machine learning, and computer vision approaches have been merged for the analysis of medical samples and uncommon skin lesion samples. This research provides an extensive literature review of the methodologies, techniques, and approaches applied to the examination of skin lesions to date. This survey covers preprocessing, segmentation, feature extraction, selection, and classification approaches for skin cancer recognition. The results of these approaches are very impressive, but some challenges still occur in the analysis of skin lesions because of complex and rare features. Hence, the main objective is to examine the existing techniques utilized in the detection of skin cancer and to identify the obstacles, helping researchers contribute to future research.
ABSTRACT
Dementia is a cognitive disorder that mainly targets older adults and currently has no cure or prevention available. Scientists have found that dementia symptoms might emerge as early as ten years before the onset of the actual disease. As a result, machine learning (ML) scientists have developed various techniques for the early prediction of dementia using dementia symptoms. However, these methods have fundamental limitations, such as low accuracy and bias in the ML models. To resolve the issue of bias in the proposed ML model, we deployed the adaptive synthetic sampling (ADASYN) technique, and to improve accuracy, we propose novel feature extraction techniques, namely a feature extraction battery (FEB) and an optimized support vector machine (SVM) using a radial basis function (RBF) kernel for the classification of the disease. The hyperparameters of the SVM are calibrated using a grid search approach. It is evident from the experimental results that the newly proposed model (FEB-SVM) improves the dementia prediction accuracy of the conventional SVM by 6%. The proposed model (FEB-SVM) obtained 98.28% accuracy on training data and a testing accuracy of 93.92%. Along with accuracy, the proposed model obtained a precision of 91.80%, recall of 86.59%, F1-score of 89.12%, and Matthews correlation coefficient (MCC) of 0.4987. Moreover, the newly proposed model (FEB-SVM) outperforms the 12 state-of-the-art ML models that researchers have recently presented for dementia prediction.
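A sketch of the balancing and tuning steps described above: ADASYN over-sampling followed by a grid-searched RBF-kernel SVM. The feature extraction battery (FEB) itself is not reproduced, and the synthetic features and parameter grid are assumptions.

```python
# Sketch: ADASYN resampling + grid-searched RBF-kernel SVM.
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.model_selection import GridSearchCV
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 12))                    # 12 dementia-related features (synthetic)
y = (rng.random(400) > 0.75).astype(int)          # imbalanced dementia labels (synthetic)

X_scaled = StandardScaler().fit_transform(X)      # scale features before the SVM
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X_scaled, y)

grid = GridSearchCV(SVC(kernel="rbf"),
                    param_grid={"C": [0.1, 1, 10], "gamma": ["scale", 0.01, 0.1]},
                    cv=5)
grid.fit(X_bal, y_bal)
print(grid.best_params_, grid.best_score_)
```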
ABSTRACT
Intracranial hemorrhage (ICH) can lead to death or disability, which requires immediate action from radiologists. Due to the heavy workload, less experienced staff, and the subtlety of some hemorrhages, a more intelligent and automated system is necessary to detect ICH. Many artificial-intelligence-based methods have been proposed in the literature; however, they are less accurate for ICH detection and subtype classification. Therefore, in this paper, we present a new methodology to improve the detection and subtype classification of ICH based on two parallel paths and a boosting technique. The first path employs the ResNet101-V2 architecture to extract potential features from windowed slices, whereas Inception-V4 captures significant spatial information in the second path. Afterwards, the detection and subtype classification of ICH is performed by a light gradient boosting machine (LGBM) using the outputs of ResNet101-V2 and Inception-V4. The combined solution, known as Res-Inc-LGBM, is trained and tested on brain computed tomography (CT) scans from the CQ500 and Radiological Society of North America (RSNA) datasets. The experimental results show that the proposed solution efficiently obtains 97.7% accuracy, 96.5% sensitivity, and a 97.4% F1 score on the RSNA dataset. Moreover, the proposed Res-Inc-LGBM outperforms the standard benchmarks for the detection and subtype classification of ICH regarding accuracy, sensitivity, and F1 score. The results prove the significance of the proposed solution for real-time application.
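A rough sketch of the two-path feature extraction feeding a LightGBM classifier. Keras has no stock Inception-V4, so InceptionResNetV2 is used here only as an illustrative stand-in for the second path; CT windowing, data loading, and training details are omitted.

```python
# Sketch: two CNN feature paths concatenated and classified by LightGBM.
import numpy as np
import lightgbm as lgb
from tensorflow.keras.applications import ResNet101V2, InceptionResNetV2

res_path = ResNet101V2(include_top=False, weights="imagenet", pooling="avg",
                       input_shape=(224, 224, 3))
inc_path = InceptionResNetV2(include_top=False, weights="imagenet", pooling="avg",
                             input_shape=(224, 224, 3))   # stand-in for Inception-V4

def extract(slices):
    """Concatenate pooled features from both CNN paths for each windowed slice."""
    return np.hstack([res_path.predict(slices, verbose=0),
                      inc_path.predict(slices, verbose=0)])

# Synthetic stand-ins for windowed CT slices and ICH subtype labels (6 classes).
X_feat = extract(np.random.rand(48, 224, 224, 3).astype("float32"))
y = np.random.randint(0, 6, size=48)

booster = lgb.LGBMClassifier(n_estimators=200)
booster.fit(X_feat, y)
print(booster.predict(X_feat[:5]))
```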