ABSTRACT
BACKGROUND: Segmentation of skin lesions remains essential in histological diagnosis and skin cancer surveillance. Recent advances in deep learning have paved the way for substantial improvements in medical imaging. The Hybrid Residual Network (ResUNet) model, supplemented with Ant Colony Optimization (ACO), represents the synergy of these advances, aimed at improving the efficiency and effectiveness of skin lesion diagnosis. OBJECTIVE: This paper evaluates the effectiveness of the Hybrid ResUNet model for skin lesion classification and assesses the impact of ACO-based optimization in bridging the gap between computational efficiency and clinical utility. METHODS: The study used a deep learning design on a complex dataset that included a variety of skin lesions. The method involves training a Hybrid ResUNet model with standard parameters and fine-tuning it using ACO for hyperparameter optimization. Performance was evaluated using standard metrics such as accuracy, Dice coefficient, and Jaccard index, compared with existing models such as the residual network (ResNet) and U-Net. RESULTS: The proposed Hybrid ResUNet model exhibited excellent classification accuracy, reflected in a noticeable improvement across all evaluated metrics. Its ability to delineate complex lesions was particularly strong, improving diagnostic accuracy. Our experimental results demonstrate that the proposed Hybrid ResUNet model outperforms existing state-of-the-art methods, achieving an accuracy of 95.8%, a Dice coefficient of 93.1%, and a Jaccard index of 87.5%. CONCLUSION: Combining ResUNet with ACO in the proposed Hybrid ResUNet model significantly improves the classification of skin lesions. This integration goes beyond traditional paradigms and demonstrates a viable strategy for deploying AI-powered tools in clinical settings.
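The Dice coefficient and Jaccard index reported in this abstract are standard overlap metrics for segmentation masks. A minimal, self-contained sketch of both (the toy masks below are illustrative, not the paper's data):

```python
def dice_coefficient(pred, target):
    """Dice = 2|A∩B| / (|A| + |B|) for binary masks given as sets of pixel coords."""
    if not pred and not target:
        return 1.0
    return 2.0 * len(pred & target) / (len(pred) + len(target))

def jaccard_index(pred, target):
    """Jaccard = |A∩B| / |A∪B|; related to Dice by J = D / (2 - D)."""
    union = pred | target
    if not union:
        return 1.0
    return len(pred & target) / len(union)

# Toy example: predicted lesion mask overlaps ground truth on 2 of 4 pixels.
pred = {(0, 0), (0, 1), (1, 0)}
target = {(0, 1), (1, 0), (1, 1)}
print(dice_coefficient(pred, target))  # 2*2/(3+3) ≈ 0.667
print(jaccard_index(pred, target))     # 2/4 = 0.5
```

Note that the two metrics are monotonically related, which is why papers often report both: Dice weights the intersection more heavily, so it is always at least as large as Jaccard.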
FUTURE WORK: Future investigations will focus on extending the model's capabilities by incorporating multi-modal imaging data, experimenting with alternative optimization algorithms, and evaluating real-world clinical applicability. There is also promising scope for enhancing computational performance and exploring the model's interpretability to support broader clinical adoption.
Subjects
Deep Learning, Skin Neoplasms, Humans, Skin Neoplasms/diagnostic imaging, Image Interpretation, Computer-Assisted/methods, Algorithms, Skin Diseases/diagnostic imaging
ABSTRACT
Radiofrequency ablation of the medial branch is commonly used to treat chronic low back pain involving the facet joints, which account for 12% to 37% of all cases of chronic low back pain. An adverse effect of this procedure is denervation of the multifidus muscle, which may lead to its atrophy, potentially affecting the spine and contributing to disc degeneration. This study aims to quantify the changes in joint angles and loading caused by multifidus denervation after radiofrequency ablation. An AnyBody model of the torso was used to evaluate the intervertebral joints in flexion, lateral bending, and torsion. Force-dependent kinematics was used to calculate joint angles and forces. These dependent variables were investigated for intact multifidus and for unilateral and bilateral ablations at the L3-L4, L4-L5, and L5-S1 joints. The results showed pronounced changes in joint angles, especially for bilateral ablations in flexion, compared with the other cases. The same trend of changes from intact to unilaterally and then bilaterally ablated multifidus was observed in the joint angles during lateral bending. Joint forces, meanwhile, were not adversely affected. These results suggest that multifidus denervation after radiofrequency ablation alters spinal mechanics. Such changes may be associated with abnormal tissue deformations and stresses that could alter tissue mechanobiology and homeostasis, thereby potentially affecting the health of the spine.
Subjects
Low Back Pain, Radiofrequency Ablation, Zygapophyseal Joint, Humans, Low Back Pain/etiology, Low Back Pain/surgery, Biomechanical Phenomena/physiology, Paraspinal Muscles, Zygapophyseal Joint/surgery, Zygapophyseal Joint/innervation, Zygapophyseal Joint/physiology, Radiofrequency Ablation/adverse effects, Denervation/adverse effects, Denervation/methods, Lumbar Vertebrae/surgery, Lumbar Vertebrae/physiology
ABSTRACT
Voice-controlled devices are in demand due to their hands-free operation. However, using voice-controlled devices in sensitive scenarios such as smartphone applications and financial transactions requires protection against fraudulent attacks referred to as "speech spoofing". The algorithms used in spoof attacks are practically unknown; hence, further analysis and development of spoof-detection models are required to improve spoof classification. A study of the spoofed-speech spectrum suggests that high-frequency features discriminate well between genuine and spoofed speech. Typically, linear or triangular filter banks are used to obtain high-frequency features; however, a Gaussian filter can extract more global information than a triangular filter. In addition, MFCC features are preferable to other speech features because of their lower covariance. Therefore, in this study, a Gaussian filter is proposed for the extraction of inverted MFCC (iMFCC) features, providing high-frequency features. Complementary features are integrated with iMFCC to strengthen the features that aid in the discrimination of spoofed speech. Deep learning has proven efficient in classification applications, but the selection of its hyperparameters and architecture is crucial and directly affects performance; therefore, a Bayesian algorithm is used to optimize the BiLSTM network. Thus, in this study, we build a high-frequency-based optimized BiLSTM network to classify spoofed-speech signals, and we present an extensive investigation using the ASVspoof 2017 dataset. The optimized BiLSTM model trains successfully in the fewest epochs and achieved a validation accuracy of 99.58%. The proposed algorithm achieved an EER of 6.58% on the evaluation dataset, a relative improvement of 78% over a baseline spoof-identification system.
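The equal error rate (EER) used above is the operating point where the false-acceptance rate (spoofed speech accepted as genuine) equals the false-rejection rate (genuine speech rejected). A minimal threshold-sweep sketch, with hand-made score lists that are purely illustrative (not ASVspoof outputs):

```python
def equal_error_rate(genuine_scores, spoof_scores):
    """Sweep a decision threshold over all observed scores and return the
    EER estimate at the point where FAR and FRR are closest.
    Convention: higher score means 'more likely genuine'."""
    best = (float("inf"), None)  # (|FAR - FRR|, EER estimate)
    for t in sorted(set(genuine_scores) | set(spoof_scores)):
        far = sum(s >= t for s in spoof_scores) / len(spoof_scores)
        frr = sum(s < t for s in genuine_scores) / len(genuine_scores)
        if abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

genuine = [0.9, 0.8, 0.75, 0.6, 0.55]   # illustrative classifier scores
spoof = [0.65, 0.5, 0.4, 0.3, 0.2]
print(equal_error_rate(genuine, spoof))  # → 0.2 (FAR = FRR = 0.2 at t = 0.6)
```

Perfectly separated score distributions yield an EER of 0; the 6.58% EER reported above means the two distributions overlap only slightly.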
Subjects
Mobile Applications, Speech, Neural Networks, Computer, Bayes Theorem, Algorithms
ABSTRACT
As one of the most prevalent and deadly malignancies, brain tumors have a dismal survival rate at their most hazardous stages. Segmenting and classifying malignant brain tumors using mostly traditional medical image processing methods is a challenging and time-consuming task. Indeed, medical research reveals that manual categorization can result in inaccurate prediction and diagnosis, largely because malignant and normal tissues can appear deceptively similar. The brain, lung, liver, breast, and prostate are all studied using imaging modalities such as computed tomography (CT), magnetic resonance imaging (MRI), and ultrasound. This research makes significant use of CT and X-ray imaging to identify malignant brain tumors. The purpose of this article is to examine the use of convolutional neural networks (CNNs) in image-based diagnosis of brain cancers, which expedites treatment and improves its reliability. Given the abundance of research on this issue, the presented model focuses on increasing accuracy through a transfer learning method. The experiment was conducted using Python and Google Colab. Deep features were extracted using VGG19 and MobileNetV2, two pretrained deep CNN models. Classification accuracy is used to evaluate the work's performance. This research achieved an accuracy of 97% with MobileNetV2 and 91% with VGG19. This allows malignancies to be found before they have negative effects on the body, such as paralysis.
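The transfer-learning step described above — keeping a pretrained backbone frozen and training only a new classification head on its deep features — can be sketched in miniature with NumPy. The "backbone" below is just a fixed random projection standing in for VGG19/MobileNetV2 features, and the data are synthetic; this is a schematic of the training pattern, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a frozen pretrained backbone: a fixed projection + ReLU.
W_backbone = rng.normal(size=(8, 4))           # frozen, never updated
def extract_features(x):
    return np.maximum(x @ W_backbone, 0.0)

# Synthetic two-class "images": class 1 samples have larger values than class 0.
X = np.vstack([rng.normal(0.0, 1.0, (50, 8)), rng.normal(2.0, 1.0, (50, 8))])
y = np.array([0] * 50 + [1] * 50)

# Train only the new head (logistic regression) on the frozen features.
F = extract_features(X)
w, b = np.zeros(F.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))     # sigmoid prediction
    w -= 0.1 * (F.T @ (p - y) / len(y))        # cross-entropy gradient step
    b -= 0.1 * np.mean(p - y)

acc = np.mean((1.0 / (1.0 + np.exp(-(F @ w + b))) > 0.5) == y)
print(f"head-only training accuracy: {acc:.2f}")
```

The key point is that only `w` and `b` are updated; in the real setting the frozen part is a network pretrained on ImageNet, and the head is the only part fitted to the medical dataset.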
Subjects
Brain Neoplasms, Neural Networks, Computer, Brain/diagnostic imaging, Brain Neoplasms/diagnostic imaging, Humans, Machine Learning, Male, Reproducibility of Results
ABSTRACT
Breast cancer is one of the most commonly diagnosed female disorders globally. Numerous studies have been conducted to predict survival markers, although the majority of these analyses used simple statistical techniques. Instead, this research employed machine learning approaches to develop models for identifying and visualizing relevant prognostic indicators of breast cancer survival. A comprehensive hospital-based breast cancer dataset was collected from the National Cancer Institute's SEER Program (November 2017 update), which offers population-based cancer statistics. The dataset included female patients diagnosed between 2006 and 2010 with infiltrating duct and lobular carcinoma breast cancer (SEER primary site recode NOS histology codes 8522/3). The dataset included nine predictor variables and one outcome variable linked to the patients' survival status (alive or dead). To identify important prognostic markers associated with breast cancer survival, prediction models were constructed using K-nearest neighbor (K-NN), decision tree (DT), gradient boosting (GB), random forest (RF), AdaBoost, logistic regression (LR), a voting classifier, and support vector machine (SVM). All methods yielded close results in terms of model accuracy and calibration measures, with the lowest accuracy achieved by logistic regression (80.57%) and the highest by the random forest (94.64%). Notably, the multiple machine learning algorithms utilized in this research achieved high accuracy, suggesting that these approaches might serve as alternative prognostic tools in breast cancer survival studies, especially in the Asian region.
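One of the ensemble methods listed above, the voting classifier, simply takes a majority vote over the predictions of several fitted models. A minimal hard-voting sketch; the three "models" and the feature values are made up for illustration and are not the study's fitted classifiers:

```python
from collections import Counter

def hard_vote(classifiers, sample):
    """Majority vote over fitted classifiers (each a callable sample -> label)."""
    votes = Counter(clf(sample) for clf in classifiers)
    return votes.most_common(1)[0][0]

# Three toy "fitted models" for survival status (1 = alive, 0 = dead),
# each thresholding a different hypothetical feature.
model_a = lambda x: 1 if x[0] < 50 else 0    # e.g. an age-based rule
model_b = lambda x: 1 if x[1] < 2 else 0     # e.g. a stage-based rule
model_c = lambda x: 1 if x[2] > 0.5 else 0   # e.g. a marker-based rule

patient = [45, 3, 0.8]                        # made-up feature values
print(hard_vote([model_a, model_b, model_c], patient))  # votes 1, 0, 1 → 1
```

Soft voting (averaging predicted probabilities instead of counting labels) is the other common variant; which one the study used is not stated in the abstract.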
Subjects
Breast Neoplasms, Breast Neoplasms/diagnosis, Female, Humans, Logistic Models, Machine Learning, Prognosis, Support Vector Machine
ABSTRACT
It can be challenging for doctors to identify eye disorders early enough using fundus images. Diagnosing ocular illnesses by hand is time-consuming, error-prone, and complicated. Therefore, an automated ocular disease detection system with computer-aided tools is needed to detect various eye disorders from fundus images. Such a system is now possible thanks to deep learning algorithms that have improved image classification capabilities. A deep-learning-based approach to targeted ocular disease detection is presented in this study. We used state-of-the-art image classification algorithms, such as VGG-19, to classify the ODIR dataset, which contains 5000 fundus images across eight classes representing different ocular diseases. However, the dataset is highly unbalanced across these classes. To resolve this issue, this work converts the multiclass classification problem into binary classification problems and takes the same number of images for both classes in each. The binary classifiers were then trained with VGG-19. The VGG-19 model achieved an accuracy of 98.13% for normal (N) versus pathological myopia (M), 94.03% for normal (N) versus cataract (C), and 90.94% for normal (N) versus glaucoma (G). All of the other models also improved in accuracy when the data were balanced.
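The balancing step described above — taking the same number of images from each class in a binary task — amounts to undersampling the majority class. A minimal sketch; the file names and class sizes below are made up, not drawn from the ODIR dataset:

```python
import random

def balance_binary(class_a, class_b, seed=42):
    """Undersample the larger class so both classes contribute the same
    number of samples to the binary classification task."""
    rng = random.Random(seed)
    n = min(len(class_a), len(class_b))
    return rng.sample(class_a, n), rng.sample(class_b, n)

# Illustrative file lists (hypothetical names and counts).
normal_imgs = [f"normal_{i}.jpg" for i in range(3000)]
cataract_imgs = [f"cataract_{i}.jpg" for i in range(280)]

n_bal, c_bal = balance_binary(normal_imgs, cataract_imgs)
print(len(n_bal), len(c_bal))  # 280 280
```

Undersampling discards majority-class data; the alternative (oversampling or augmenting the minority class) keeps all images but risks overfitting to the duplicated samples — the abstract's approach corresponds to the former.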
Subjects
Deep Learning, Algorithms, Computers
ABSTRACT
Nowadays, the implementation of Artificial Intelligence (AI) in medical diagnosis has attracted major attention in both the academic literature and the industrial sector. AI includes deep learning (DL) models, which have achieved spectacular performance in healthcare applications. According to the World Health Organization (WHO), around 25.6 million people died from cardiovascular diseases (CVD) in 2020. Thus, this paper aims to shed light on cardiology, widely considered one of the most important fields in medicine. The paper develops an efficient DL model for automatic diagnosis of 12-lead electrocardiogram (ECG) signals with 27 classes, comprising 26 types of CVD and normal sinus rhythm. The proposed model is based on a Residual Neural Network (ResNet-50). Experimental work was conducted using combined public databases from the USA, China, and Germany as a proof of concept. Simulation results show that the proposed model achieves an accuracy of 97.63% and a precision of 89.67%. The achieved results are validated against values reported in the recent literature.
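The defining unit of the ResNet-50 architecture mentioned above is the residual block, in which the input is added back to the transformed signal through an identity "skip" connection. A 1-D NumPy toy version showing only that structural idea (real ResNet-50 blocks use convolutions and batch normalization, omitted here):

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    """Identity-skip residual block: output = activation(F(x) + x).
    The skip path lets gradients bypass the transform, which is what
    makes very deep networks like ResNet-50 trainable."""
    out = relu(x @ w1)        # first transform + activation
    out = out @ w2            # second transform (pre-activation)
    return relu(out + x)      # add the unchanged input, then activate

rng = np.random.default_rng(1)
x = rng.normal(size=(4,))
w1 = rng.normal(size=(4, 4))
w2 = rng.normal(size=(4, 4))
y = residual_block(x, w1, w2)

# With zero weights the block collapses to relu(x): the identity path survives.
print(np.allclose(residual_block(x, np.zeros((4, 4)), np.zeros((4, 4))), relu(x)))
```

That degenerate case is the point of the design: a residual block can always fall back to (approximately) the identity, so stacking 50 layers cannot make the network worse than a shallower one.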
Subjects
Artificial Intelligence, Cardiovascular Diseases, Algorithms, Automation, Electrocardiography, Humans, Neural Networks, Computer
ABSTRACT
Obstetricians often use cardiotocography (CTG) to assess fetal health throughout pregnancy, because it provides data on the fetal heartbeat and uterine contractions that help identify whether the fetus is pathologic. Traditionally, obstetricians have analyzed CTG data manually, which is time-consuming and unreliable. Creating a fetal health classification model is therefore valuable, as it may save not only time but also medical resources in the diagnostic process. Owing to its rapid advancement, machine learning (ML) is now used extensively in fields such as biology and medicine to address a variety of problems. This research covers the findings and analyses of multiple machine learning models for fetal health classification. The method was developed using the open-access cardiotocography dataset; although the dataset is modest, it contains some noteworthy values. The data were examined and used in a variety of ML models: random forest (RF), logistic regression, decision tree (DT), support vector classifier, voting classifier, and K-nearest neighbor. Comparing the results shows that the random forest model performs best, achieving 97.51% accuracy, better than previously reported methods.
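The decision tree and random forest models above are built from one elementary operation: finding the single feature/threshold split that best separates the labels. A minimal "decision stump" sketch of that step; the CTG-like rows and the pathologic labels are invented for illustration:

```python
def best_stump(samples, labels):
    """Fit a one-split decision stump: for every feature and threshold,
    assign a label to each side and keep the split with the fewest
    training errors - the elementary step a decision tree repeats."""
    best = None  # (errors, feature, threshold, left_label, right_label)
    for f in range(len(samples[0])):
        for t in sorted({s[f] for s in samples}):
            left = [l for s, l in zip(samples, labels) if s[f] < t]
            right = [l for s, l in zip(samples, labels) if s[f] >= t]
            for ll in (0, 1):
                for rl in (0, 1):
                    errs = left.count(1 - ll) + right.count(1 - rl)
                    if best is None or errs < best[0]:
                        best = (errs, f, t, ll, rl)
    return best

# Toy CTG-like rows: [baseline heart rate, accelerations]; label 1 = pathologic.
X = [[120, 3], [130, 4], [180, 0], [175, 1]]
y = [0, 0, 1, 1]
errors, feature, threshold, left_lab, right_lab = best_stump(X, y)
print(errors, feature, threshold)  # 0 0 175: a clean split on heart rate
```

A decision tree applies this search recursively to each side of the split; a random forest trains many such trees on bootstrapped samples and random feature subsets and averages their votes.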
ABSTRACT
Breast cancer is one of the most prevalent and leading causes of cancer in women. It has become a frequent health problem, and its prevalence has recently increased. The simplest way to deal with breast cancer is to recognize it early. Early detection of breast cancer is facilitated by computer-aided detection and diagnosis (CAD) technologies, which can help people live longer. The major goal of this work is to take advantage of recent developments in CAD systems and related methodologies. In 2011, the United States reported that one out of every eight women was diagnosed with cancer. Breast cancer originates from aberrant cell division in the breast, which leads to either benign or malignant tumor formation. Early detection of breast cancer is therefore critical, and with effective treatment many lives can be saved. This research covers the findings and analyses of multiple machine learning models for identifying breast cancer. The method was developed using the Wisconsin Breast Cancer Diagnostic (WBCD) dataset. Despite its small size, the dataset provides some interesting data. The information was analyzed and used in a number of machine learning models: random forest, logistic regression, decision tree, and K-nearest neighbor were used for prediction. Comparing the results shows that logistic regression offers the best performance, achieving 98% accuracy, better than previously reported methods.
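Among the baselines above, K-nearest neighbor is the simplest to state: a sample takes the majority label of its k closest training rows. A self-contained sketch; the two-feature rows below are made-up stand-ins for WBCD features (e.g. mean radius, mean texture), not real dataset values:

```python
import math
from collections import Counter

def knn_predict(train_X, train_y, x, k=3):
    """Classify x by majority vote among its k nearest training rows
    (Euclidean distance)."""
    dists = sorted(
        (math.dist(row, x), label) for row, label in zip(train_X, train_y)
    )
    top = [label for _, label in dists[:k]]
    return Counter(top).most_common(1)[0][0]

# Hypothetical 2-feature rows; label 1 = malignant, 0 = benign.
train_X = [[10, 12], [11, 14], [12, 13], [20, 25], [21, 24], [19, 26]]
train_y = [0, 0, 0, 1, 1, 1]
print(knn_predict(train_X, train_y, [11, 13]))  # → 0 (near the benign cluster)
print(knn_predict(train_X, train_y, [20, 24]))  # → 1 (near the malignant cluster)
```

In practice features are standardized first, since Euclidean distance is dominated by whichever feature has the largest numeric range.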
Subjects
Breast Neoplasms, Breast, Breast Neoplasms/diagnosis, Cluster Analysis, Female, Humans, Logistic Models, Machine Learning
ABSTRACT
Ultra-low power consumption is a key performance indicator in 6G-IoT ecosystems. Sensor nodes in this ecosystem are also capable of running lightweight artificial intelligence (AI) models. In this work, we achieve high performance in a gas sensor system using a Convolutional Neural Network (CNN) with a smaller number of gas sensor elements. We identified redundant elements in a gas sensor array and removed them to reduce power consumption without significant degradation of the node's performance. The inevitable variation in performance caused by removing redundant sensor elements is compensated using specialized data pre-processing (zero-padded virtual sensors and spatial augmentation) and the CNN. The experiment demonstrates classification and quantification of four hazardous gases: acetone, carbon tetrachloride, ethyl methyl ketone, and xylene. The performance of the unoptimized gas sensor array is taken as a baseline against which the optimized array is compared. Our proposed approach reduces power consumption from 10 W to 5 W; classification performance is sustained at 100%, while quantification performance is compensated to a mean squared error (MSE) of 1.12 × 10^-2. Thus, our power-efficient optimization paves the way to computation on the edge, even in the resource-constrained 6G-IoT paradigm.
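The "zero-padded virtual sensors" idea above can be read as re-inserting zeros at the positions of the removed elements so the reduced array keeps the input shape the CNN expects. A schematic sketch of that padding plus the MSE metric used for quantification; array size and removed indices are illustrative, not the paper's configuration:

```python
import numpy as np

def zero_pad_virtual(readings, removed_idx, full_size):
    """Re-insert zeros where redundant sensor elements were removed,
    preserving the original array shape for the CNN input."""
    full = np.zeros(full_size)
    kept = [i for i in range(full_size) if i not in removed_idx]
    full[kept] = readings
    return full

def mse(pred, target):
    """Mean squared error, the quantification metric quoted above."""
    return float(np.mean((np.asarray(pred) - np.asarray(target)) ** 2))

# Hypothetical 8-element array with elements 2 and 5 deemed redundant.
reduced = np.array([0.4, 0.7, 0.1, 0.9, 0.3, 0.2])
padded = zero_pad_virtual(reduced, {2, 5}, 8)
print(padded)                        # zeros sit at the removed positions

print(mse([1.0, 2.0], [1.1, 1.9]))  # ((-0.1)^2 + (0.1)^2) / 2 = 0.01
```

Keeping the spatial layout intact matters because a CNN's filters encode positional relationships between neighboring sensor elements; simply shrinking the input would shift those relationships.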
Subjects
Artificial Intelligence, Ecosystem, Gases, Neural Networks, Computer
ABSTRACT
The coupling of drones and the IoT is a major topic in academia and industry, since it contributes significantly to making human life safer and smarter. Drones are a robust approach for mobile remote sensing operations such as search-and-rescue missions, because their speed and efficiency can seriously affect victims' chances of survival. This paper aims to modify the Hata-Davidson empirical propagation model based on RF drone measurements to conduct searches for missing persons in complex environments with rugged terrain after man-made or natural disasters. A drone was equipped with a FLIR Lepton thermal camera, a microcontroller, GPS, and weather-station sensors. The proposed modified model uses a least-squares tuning algorithm to fit the data measured from the drone communication system. This enhances the RF connectivity between the drone and the local authority and increases the coverage footprint, enabling wider search-and-rescue operations in a timely fashion using strip search patterns. The development of the proposed model considered both software simulation and hardware implementation. Since empirical propagation models are the most adjustable models, this study concludes with a comparison of the modified Hata-Davidson algorithm against other well-known modified empirical models, validated using root mean square error (RMSE). The experimental results show that the modified Hata-Davidson model outperforms the other empirical models, which in turn helps to locate missing persons using thermal imaging and a GPS sensor.
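The least-squares tuning and RMSE validation described above can be illustrated with a generic log-distance path-loss model, PL = a + b·log10(d), standing in for the (more elaborate) Hata-Davidson formulation. The distances and path-loss values below are invented for illustration, not the paper's drone measurements:

```python
import numpy as np

# Hypothetical drone RF measurements: distance (m) and measured path loss (dB).
d = np.array([50.0, 100.0, 200.0, 400.0, 800.0])
pl_measured = np.array([74.8, 81.2, 86.9, 93.1, 99.2])

# Least-squares tuning of the model coefficients against the measurements.
# np.polyfit returns [slope, intercept] for degree 1.
b, a = np.polyfit(np.log10(d), pl_measured, 1)
pl_model = a + b * np.log10(d)

# RMSE between the tuned model and the measurements, as used for validation.
rmse = float(np.sqrt(np.mean((pl_model - pl_measured) ** 2)))
print(f"fitted: PL = {a:.1f} + {b:.1f} log10(d), RMSE = {rmse:.2f} dB")
```

The same fit/RMSE loop, applied to each candidate empirical model, is what allows the kind of head-to-head comparison the abstract reports; the model with the lowest RMSE against field measurements wins.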
Subjects
Natural Disasters, Unmanned Aerial Devices, Algorithms, Humans, Remote Sensing Technology, Software
ABSTRACT
The number of Internet of Things (IoT) devices connected via the Internet is growing rapidly. The heterogeneity and complexity of the IoT, in terms of dynamism and uncertainty, complicate this landscape dramatically and introduce vulnerabilities. Intelligent management of the IoT is required to maintain connectivity, improve Quality of Service (QoS), and reduce energy consumption in real time within dynamic environments. Machine Learning (ML) plays a pivotal role in QoS enhancement, connectivity, and the provisioning of smart applications. Therefore, this survey focuses on the use of ML to enhance IoT applications. We provide an in-depth overview of the variety of IoT applications that can be enhanced with ML, such as smart cities, smart homes, and smart healthcare, and for each application we describe the advantages of using ML. Finally, we shed light on ML challenges for future IoT research and review the current literature based on existing works.
Subjects
Internet of Things, Cities, Delivery of Health Care, Machine Learning
ABSTRACT
Healthcare is one of the most promising domains for the application of Internet of Things- (IoT-) based technologies, where patients can use wearable or implanted medical sensors to measure medical parameters anywhere and anytime. The information collected by IoT devices can then be sent to healthcare professionals, giving physicians real-time access to patients' data. However, besides the sensors' limited battery lifetime and computational power, the sensed data exhibit spatio-temporal correlation, and avoiding the transmission of such redundant data can significantly reduce energy consumption and extend battery lifetime. This paper therefore proposes a routing protocol to enhance energy efficiency, which in turn prolongs sensor lifetime. The proposed work is an Energy Efficient Routing Protocol using a Dual Prediction Model (EERP-DPM) for healthcare IoT: the dual-prediction mechanism suppresses data transmission between the sensor nodes and the medical server whenever predictions match the readings, while data are always transmitted if they are critical, i.e., beyond the upper or lower limits of defined thresholds. The proposed system was developed and tested using MATLAB software and a hardware platform called "MySignals HW V2". Both simulation and experimental results confirm that the proposed EERP-DPM protocol is highly successful compared with other existing routing protocols, not only in terms of energy consumption and network lifetime but also in terms of reliability, throughput, and end-to-end delay.
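The dual-prediction idea above can be sketched concretely: node and server run the same predictor on the same history, so the node only transmits when its reading diverges from the shared prediction or crosses a critical threshold; otherwise the server reuses its own identical prediction and the radio stays off. A schematic EERP-DPM-style rule with a naive last-value predictor and invented threshold values (the paper's actual predictor and thresholds are not specified in the abstract):

```python
def simulate_dual_prediction(readings, threshold, critical=(30.0, 45.0)):
    """Return the subset of readings the node would actually transmit.
    A reading is sent when it diverges from the shared prediction by more
    than `threshold`, or when it leaves the critical band (lo, hi)."""
    lo, hi = critical
    prediction = readings[0]
    sent = [readings[0]]            # first reading always sent to sync models
    for r in readings[1:]:
        if abs(r - prediction) > threshold or not (lo <= r <= hi):
            sent.append(r)          # transmit and resynchronize both models
            prediction = r
        # else: server's identical predictor supplies the value; no transmission
    return sent

# Body-temperature-like stream (values illustrative).
stream = [36.5, 36.6, 36.5, 38.9, 39.0, 36.6]
sent = simulate_dual_prediction(stream, threshold=1.0)
print(sent)                        # only the first sample and the jumps go out
print(len(sent), "of", len(stream), "samples transmitted")
```

The energy saving comes directly from the suppressed transmissions (here half the stream), while the critical-band test preserves the clinical guarantee that dangerous readings are never withheld.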