ABSTRACT
With recent advances in precision medicine and healthcare computing, there is enormous demand for machine learning algorithms in genomics that can speed up the analysis of genetic disorders. Technological advances in genomics and imaging provide clinicians with enormous amounts of data, yet prediction remains largely subjective, which can lead to problematic medical treatment. Machine learning is employed in several domains of the healthcare sector, encompassing clinical research, early disease identification, and medicinal innovation. The main objective of this study is to identify patients who, based on several medical criteria, are more susceptible to a genetic disorder. A genetic disease prediction algorithm was employed that leverages the patient's health history to evaluate the probability of a genetic disorder being diagnosed. We developed a computationally efficient machine learning approach to predict the overall lifespan of patients with a genomic disorder and to classify and predict patients with a genetic disease. A support vector machine (SVM), random forest (RF), and extra trees classifier (ETC) are stacked using a two-layer meta-estimator to build the proposed model. The first layer comprises all the baseline models used to predict outcomes from the dataset. The second layer comprises a component known as a meta-classifier. Experimental results indicate that the model achieved an accuracy of 90.45% and a recall of 90.19%. The area under the curve (AUC) is 98.1% for mitochondrial diseases, 97.5% for multifactorial diseases, and 98.8% for single-gene inheritance. The proposed approach offers an unbiased, accurate, and comprehensive way to predict patient prognosis, and it outperforms human professionals using the current clinical standard for genetic disease classification in terms of identification accuracy. The stacked model can meaningfully advance biomedical research by improving the prediction of genetic diseases.
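As a rough illustration of the two-layer stacking described above, the sketch below wires an SVM, a random forest, and an extra trees classifier into scikit-learn's StackingClassifier with a meta-classifier on top. The synthetic dataset, the logistic-regression meta-classifier, and all hyperparameters are assumptions for illustration only, not the paper's exact configuration.

```python
# Minimal sketch of a two-layer stacked ensemble (SVM + RF + ETC with a meta-classifier).
from sklearn.ensemble import RandomForestClassifier, ExtraTreesClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.datasets import make_classification

# Synthetic stand-in for the patient-history feature matrix (X) and disorder labels (y).
X, y = make_classification(n_samples=2000, n_features=30, n_informative=12,
                           n_classes=3, n_clusters_per_class=1, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Layer 1: the baseline estimators whose predictions feed the meta-classifier.
base_estimators = [
    ("svm", SVC(probability=True, random_state=42)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=42)),
    ("etc", ExtraTreesClassifier(n_estimators=200, random_state=42)),
]

# Layer 2: the meta-classifier learns from the base models' class probabilities.
stack = StackingClassifier(estimators=base_estimators,
                           final_estimator=LogisticRegression(max_iter=1000),
                           stack_method="predict_proba", cv=5)
stack.fit(X_train, y_train)
print("held-out accuracy:", stack.score(X_test, y_test))
```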
Subjects
Health Care Sector, Machine Learning, Humans, Algorithms, Genetic Databases, Genomics
ABSTRACT
Diabetic retinopathy (DR) is a medical disorder that affects people with diabetes, and many people are visually impaired because of it. The primary cause of DR is high blood sugar, which damages the blood vessels in the retina. Recent advances in deep learning and computer vision, together with their automated applications, can recognize the presence of DR in retinal and vessel images. The authors propose an attention-based hybrid model to recognize diabetic retinopathy at an early stage and prevent harmful consequences. The proposed methodology uses the DenseNet121 architecture for convolutional feature learning, and the resulting feature vector is then enhanced with channel and spatial attention modules. The architecture supports both binary and multiclass classification to recognize the infection and the spread of the disease: binary classification labels DR images as either positive or negative, while multiclass classification grades the infection on a scale of 0-4. Simulations of the proposed methodology achieved 98.57% and 99.01% accuracy for multiclass and binary classification, respectively. The study also explored the impact of data augmentation in making the proposed model robust and generalizable. The attention-based deep learning model achieved remarkable accuracy in detecting diabetic infection from retinal cellular images.
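The sketch below outlines one plausible Keras realization of the described pipeline: a DenseNet121 backbone whose output feature map passes through channel attention and then spatial attention before classification. The input size, reduction ratio, CBAM-style attention design, and the five-way severity head are assumptions, not the authors' exact architecture.

```python
# Hedged sketch: DenseNet121 features refined by channel and spatial attention.
import tensorflow as tf
from tensorflow.keras import layers, models

def channel_attention(x, ratio=8):
    ch = x.shape[-1]
    avg = layers.GlobalAveragePooling2D()(x)
    mx = layers.GlobalMaxPooling2D()(x)
    mlp1, mlp2 = layers.Dense(ch // ratio, activation="relu"), layers.Dense(ch)
    att = layers.Activation("sigmoid")(layers.Add()([mlp2(mlp1(avg)), mlp2(mlp1(mx))]))
    return layers.Multiply()([x, layers.Reshape((1, 1, ch))(att)])

def spatial_attention(x):
    avg = layers.Lambda(lambda t: tf.reduce_mean(t, axis=-1, keepdims=True))(x)
    mx = layers.Lambda(lambda t: tf.reduce_max(t, axis=-1, keepdims=True))(x)
    att = layers.Conv2D(1, 7, padding="same", activation="sigmoid")(
        layers.Concatenate()([avg, mx]))
    return layers.Multiply()([x, att])

inputs = layers.Input(shape=(224, 224, 3))
backbone = tf.keras.applications.DenseNet121(include_top=False, weights="imagenet",
                                             input_tensor=inputs)
x = spatial_attention(channel_attention(backbone.output))
x = layers.GlobalAveragePooling2D()(x)
outputs = layers.Dense(5, activation="softmax")(x)   # severity grades 0-4; use 1 sigmoid unit for the binary task
model = models.Model(inputs, outputs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
```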
Subjects
Deep Learning, Diabetes Mellitus, Diabetic Retinopathy, Hyperglycemia, Humans, Diabetic Retinopathy/diagnostic imaging, Automation, Neurons
ABSTRACT
The rapid development of the Internet of Everything (IoE) and cloud services plays a vital role in the growth of smart applications. It provides scalability through collaboration with cloud servers and copes with the large amounts of data collected by network systems. Edge computing supports efficient utilization of communication bandwidth and meets latency requirements to facilitate smart embedded systems. However, it faces significant research issues regarding data aggregation among heterogeneous network services and objects. Moreover, while distributed systems are more precise for data access and storage, machine-to-machine communication needs to be secured against unpredictable events. This research therefore proposes a secured data management scheme with a distributed load balancing protocol based on particle swarm optimization, which aims to decrease the response time for cloud users and effectively maintain the integrity of network communication. It combines distributed computing with shifting high-cost computations closer to the requesting node to reduce latency and transmission overhead. The proposed work also protects communicating machines from malicious devices by evaluating trust in a controlled manner. Simulation results revealed significant improvements over other solutions, on average, of 20% in energy consumption, 17% in success rate, 14% in end-to-end delay, and 19% in network cost across various performance metrics.
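A minimal sketch of how particle swarm optimization could drive the load-balancing decision described above is given below: each particle encodes a fractional split of incoming requests across nodes, and the fitness penalizes the worst per-node response time. The cost model, node capacities, and PSO coefficients are invented for illustration and do not reproduce the proposed protocol.

```python
# Illustrative PSO minimizing a worst-case response-time proxy for a load split.
import numpy as np

rng = np.random.default_rng(0)
n_nodes, n_particles, iters = 5, 30, 200
capacity = rng.uniform(50, 100, n_nodes)           # requests/sec each node can serve (assumed)
total_load = 300.0                                 # total incoming requests/sec (assumed)

def response_time(shares):
    shares = np.abs(shares) / np.abs(shares).sum() # normalize to a valid load split
    util = np.minimum(shares * total_load / capacity, 0.999)
    return np.max(1.0 / (capacity * (1.0 - util))) # worst-case M/M/1-style delay

pos = rng.uniform(0.1, 1.0, (n_particles, n_nodes))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([response_time(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos += vel
    cost = np.array([response_time(p) for p in pos])
    improved = cost < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best load split:", np.abs(gbest) / np.abs(gbest).sum())
```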
ABSTRACT
In recent decades, networked smart devices and cutting-edge technology have been exploited in many applications to improve agriculture. The deployment of smart sensors and intelligent farming techniques supports real-time information gathering for the agriculture sector and decreases the burden on farmers. Many solutions have been presented to automate the agriculture system using IoT networks; however, the identification of redundant data traffic remains one of the most significant research problems. Additionally, farmers do not obtain the information they need in time, such as data on water pressure and soil conditions. These shortcomings consequently reduce production rates and increase costs for farmers. Controlling all agricultural operations in a managed manner should also be considered when developing intelligent solutions. Therefore, this study proposes a framework that combines fog computing with smart farming and effectively controls network traffic. First, the proposed framework efficiently monitors redundant information and avoids the inefficient use of communication bandwidth. It also controls the number of re-transmissions in the case of malicious actions and efficiently utilizes the network's resources. Second, a trustworthy chain is built between agricultural sensors by utilizing the fog nodes to address security issues and increase reliability by preventing malicious communication. Through extensive simulation-based experiments, the proposed framework showed improved performance in energy efficiency, security, and network connectivity in comparison to related work.
Subjects
Agriculture, Wireless Technology, Agriculture/methods, Physical Phenomena, Reproducibility of Results
ABSTRACT
Wireless systems are widely used in many industrial applications for the monitoring and processing of network data. With the assistance of wireless sensor networks (WSNs) and the Internet of Things (IoT), smart grids are being explored in many distributed communication systems. They collect data from the surrounding environment and transmit it with the support of a multi-hop system. However, there is still a significant research gap in energy management for IoT devices and smart sensors. Researchers have proposed many solutions for efficient routing schemes in smart grid applications, but reducing energy holes and offering intelligent data-forwarding decisions remain major problems. Managing network traffic on grid nodes while balancing the communication overhead on the routing paths is also a demanding challenge. In this work, we propose a secure edge-based energy management protocol for a smart grid environment with the support of multi-route management. It strengthens the ability to predict the data forwarding process and improves the management of IoT devices by utilizing correlation analysis. Moreover, the proposed protocol increases the system's reliability and achieves its security goals by employing lightweight authentication with sink coordination. To demonstrate the superiority of the proposed protocol over selected existing work, extensive experiments were performed across various network parameters.
ABSTRACT
The development of smart applications has benefited greatly from the expansion of wireless technologies. With the support of artificial intelligence technology, a range of tasks can be performed and end devices can communicate with one another. The Internet of Things (IoT) increases the efficiency of communication networks thanks to its low costs and simple management. However, many systems still lack an intelligent strategy for green computing, and establishing reliable connectivity in Green-IoT (G-IoT) networks is another key research challenge. With the integration of edge computing, this study presents a Sustainable Data-driven Secured optimization model (SDS-GIoT) that uses dynamic programming to provide enhanced learning capabilities. First, the proposed approach examines multi-variable functions and delivers graph-based link predictions to locate the optimal nodes for edge networks. It also identifies a multistage sub-path to continue data transfer if a route becomes unavailable under certain communication circumstances. Second, while enforcing security, edge computing provides offloading services that lower the processing power needed by resource-constrained nodes. Finally, the SDS-GIoT model is verified with various experiments, and the performance results demonstrate its significance for a sustainable environment compared with existing solutions.
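The multistage sub-path idea lends itself to a standard dynamic-programming recurrence, sketched below: working backward from the sink, each node stores its cheapest continuation, so an alternative sub-path can be read off when a preferred link fails. The stage layout and link costs are illustrative assumptions.

```python
# Backward dynamic programming over a small multistage graph.
# stages[i] lists the nodes in stage i; cost[(u, v)] is the link cost (e.g., energy or delay).
stages = [["src"], ["a", "b"], ["c", "d"], ["sink"]]
cost = {("src", "a"): 2, ("src", "b"): 1, ("a", "c"): 2, ("a", "d"): 3,
        ("b", "c"): 4, ("b", "d"): 1, ("c", "sink"): 2, ("d", "sink"): 3}

best, nxt = {"sink": 0}, {}
for i in range(len(stages) - 2, -1, -1):            # relax stage by stage, sink to source
    for u in stages[i]:
        choices = [(cost[(u, v)] + best[v], v) for v in stages[i + 1] if (u, v) in cost]
        best[u], nxt[u] = min(choices)              # cheapest continuation from u

node, path = "src", ["src"]
while node != "sink":
    node = nxt[node]
    path.append(node)
print("optimal path:", path, "cost:", best["src"])
```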
Subjects
Internet of Things, Artificial Intelligence, Wireless Technology
ABSTRACT
Wireless networks and the Internet of Things (IoT) have shown rapid growth in the development and management of smart environments. These technologies are applied in numerous research fields, such as security surveillance, the Internet of Vehicles, and medical systems. Sensor technologies and IoT devices cooperate to collect unpredictable factors from the observed field. However, the constrained resources of distributed battery-powered sensors decrease the energy efficiency of the IoT network and increase the delay in delivering network data to users' devices. Although many solutions have been proposed to overcome energy deficiency in smart applications, node mobility causes frequent data discontinuity in much of the communication, compromising data trust. Therefore, this work introduces a D2D multi-criteria learning algorithm for IoT networks using secured sensors, which aims to improve data exchange without imposing additional costs or data diverting for mobile sensors. It also reduces compromising threats in the presence of anonymous devices and increases the trustworthiness of the IoT-enabled communication system with the support of machine learning. The proposed work was tested and analyzed using broad simulation-based experiments and demonstrated significantly improved performance for realistic network configurations: packet delivery ratio by 17%, packet disturbances by 31%, data delay by 22%, energy consumption by 24%, and computational complexity by 37%.
Subjects
Internet of Things, Algorithms, Communication, Computer Simulation, Machine Learning
ABSTRACT
The anterior cruciate ligament (ACL) is one of the main stabilizers of the knee. ACL injury increases the risk of osteoarthritis, and ACL rupture is common in the young athletic population. Accurate segmentation at an early stage can improve the analysis and classification of ACL tears. This study automatically segmented ACL tears from magnetic resonance imaging using deep learning. A knee mask was generated on the original magnetic resonance (MR) images to apply a semantic segmentation technique with the convolutional neural network architecture U-Net. The proposed segmentation method achieved accuracy, intersection over union (IoU), dice similarity coefficient (DSC), precision, recall and F1-score of 98.4%, 99.0%, 99.4%, 99.6%, 99.6% and 99.6% on 11,451 training images, and 97.7%, 93.8%, 96.8%, 96.5%, 97.3% and 96.9%, respectively, on 3,817 validation images. The dice loss on the training and test datasets remained at 0.005 and 0.031, respectively. The experimental results show that ACL segmentation on JPEG MRI images with U-Nets achieves accuracy that outperforms human segmentation. The strategy has promising potential applications in medical image analytics for the segmentation of knee ACL tears in MR images.
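For reference, the Dice similarity coefficient and Dice loss reported above can be computed for binary masks as in the short sketch below; the smoothing constant is an assumption used to avoid division by zero on empty masks.

```python
# Dice similarity coefficient (DSC) and Dice loss for binary segmentation masks.
import numpy as np

def dice_coefficient(y_true, y_pred, smooth=1e-6):
    y_true = y_true.astype(np.float32).ravel()
    y_pred = y_pred.astype(np.float32).ravel()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def dice_loss(y_true, y_pred):
    return 1.0 - dice_coefficient(y_true, y_pred)

# Example: a predicted ACL mask versus its ground-truth annotation (toy 8x8 masks).
gt = np.zeros((8, 8), dtype=np.uint8); gt[2:6, 2:6] = 1
pred = np.zeros((8, 8), dtype=np.uint8); pred[3:6, 2:6] = 1
print("DSC:", round(float(dice_coefficient(gt, pred)), 3),
      "Dice loss:", round(float(dice_loss(gt, pred)), 3))
```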
Subjects
Anterior Cruciate Ligament Injuries, Anterior Cruciate Ligament, Anterior Cruciate Ligament Injuries/diagnostic imaging, Humans, Knee, Knee Joint, Magnetic Resonance Imaging/methods
ABSTRACT
In recent years, network edges have been explored by many applications to lower communication and management costs. They are also integrated with the Internet of Things (IoT) to achieve scalable network designs and heterogeneous services for multimedia applications. Many proposed solutions play a vital role in the development of robust protocols and in reducing response time for critical networks. However, most of them cannot support the forwarding of heavy multimedia traffic under dynamic conditions with constrained bandwidth. Moreover, they increase the rate of data loss in uncertain environments and compromise network performance by increasing delivery delay. Therefore, this paper presents an optimization model with mobile edges for multimedia sensors using the artificial intelligence of things (AIoT), which aims to maintain real-time data collection with low resource consumption. It also copes with the unpredictability of network communication through the integration of software-defined networking (SDN) and mobile edges. First, the AIoT is used to form the multi-hop network and guarantee primary services for the constrained network with stable resource management. Second, the SDN associates directly with mobile edges to support load balancing for multimedia sensors and to centralize management. Finally, multimedia traffic reaches the applications unchanged and without negotiation, using shared subkeys. The experimental results demonstrated its effectiveness against existing solutions, with average improvements of 35% in delivery rate, 29% in processing delay, 41% in network overhead, 39% in packet drop ratio, and 34% in packet retransmission.
Subjects
Computer Communication Networks, Multimedia, Algorithms, Artificial Intelligence, Software
ABSTRACT
In this work, we propose a deep learning framework for classifying COVID-19 pneumonia infection against normal chest CT scans. A 15-layer convolutional neural network architecture is developed that extracts deep features from the selected image samples, collected from Radiopaedia. Deep features are collected from two different layers, the global average pooling and fully connected layers, which are later combined using the max-layer detail (MLD) approach. Subsequently, a Correntropy technique is embedded in the main design to select the most discriminant features from the pool of features. A one-class kernel extreme learning machine classifier is utilized for the final classification, achieving an average accuracy of 95.1%, with sensitivity, specificity and precision rates of 95.1%, 95% and 94%, respectively. To further verify our claims, a detailed statistical analysis based on the standard error of the mean (SEM) is also provided, which demonstrates the effectiveness of the proposed prediction design.
ABSTRACT
The world currently faces the challenge of the novel coronavirus disease 2019 (COVID-19), and infected cases are increasing exponentially. COVID-19, caused by the SARS-CoV-2 virus, was declared a pandemic by the WHO in March 2020. As of 10 March 2021, more than 150 million people had been infected and 3 million had died. Researchers strive to understand the virus and recommend effective actions. An unprecedented increase in pathogens is occurring, and a major effort is being made to tackle the epidemic. This article presents deep learning-based COVID-19 detection using CT and X-ray images, together with data analytics on its spread worldwide. The article builds on a recent analysis of COVID-19 data and prospective research to systematize current resources and help researchers and practitioners use deep learning methodologies to build solutions for the COVID-19 pandemic.
ABSTRACT
The novel coronavirus disease, COVID-19, has spread quickly among humans worldwide, and the situation remains hazardous to the health system. The presence of the virus in the human body is identified through sputum or blood samples, and computed tomography (CT) or X-ray imaging has become a significant tool for quick diagnosis. Thus, it is essential to develop an online, real-time computer-aided diagnosis (CAD) approach to support physicians and avoid further spread of the disease. In this research, a convolutional neural network (CNN)-based residual neural network (ResNet50) is employed to detect COVID-19 from chest X-ray images, achieving 98% accuracy. The proposed CAD system receives the X-ray images from remote hospitals and healthcare centers and performs the diagnostic process. Furthermore, the system uses an advanced load balancer and resilience features to achieve fault tolerance with zero delays and to detect more infected cases during this pandemic.
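A hedged transfer-learning sketch in the spirit of the described CAD pipeline is shown below: an ImageNet pre-trained ResNet50 backbone with a small binary head for COVID-19 versus normal X-rays. The image size, frozen backbone, training settings, and the xray_data folder layout are assumptions, not the deployed system.

```python
# Frozen ResNet50 backbone with a binary head for chest X-ray screening (illustrative).
import tensorflow as tf
from tensorflow.keras import layers, models

base = tf.keras.applications.ResNet50(include_top=False, weights="imagenet",
                                       input_shape=(224, 224, 3), pooling="avg")
base.trainable = False                               # keep the convolutional backbone frozen

inputs = layers.Input(shape=(224, 224, 3))
x = layers.Lambda(tf.keras.applications.resnet50.preprocess_input)(inputs)
x = base(x, training=False)
x = layers.Dropout(0.3)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # positive = COVID-19 finding
model = models.Model(inputs, outputs)
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])

# Smoke test with random tensors standing in for X-rays scaled to [0, 255].
probs = model(tf.random.uniform((2, 224, 224, 3), maxval=255.0))
print(probs.numpy().ravel())

# Hypothetical training call, assuming a folder layout like xray_data/{covid,normal}/*.png:
# train_ds = tf.keras.utils.image_dataset_from_directory("xray_data", image_size=(224, 224),
#                                                        batch_size=32, label_mode="binary")
# model.fit(train_ds, epochs=10)
```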
ABSTRACT
An entity's existence in an image can be depicted by the activity instantiation vector from a group of neurons called a capsule. Recently, multi-layered capsules, called CapsNet, have proven to be state-of-the-art for image classification tasks. This research utilizes this algorithm to detect pneumonia from chest X-ray (CXR) images, where an entity in the CXR image can help determine whether the patient is suffering from pneumonia. A simple capsule model (Simple CapsNet) provides results comparable to the best deep learning models used earlier. Subsequently, a combination of convolutions and capsules is used to obtain two models that outperform all previously proposed models. These models, Integration of convolutions with capsules (ICC) and Ensemble of convolutions with capsules (ECC), detect pneumonia with test accuracies of 95.33% and 95.90%, respectively. The latter model is studied in detail to obtain a variant called EnCC, where n = 3, 4, 8, 16; here, the E4CC model works optimally and gives a test accuracy of 96.36%. All models were trained, validated, and tested on 5,857 images from Mendeley.
Subjects
Algorithms, Pneumonia/diagnostic imaging, Pneumonia/diagnosis, Thorax/diagnostic imaging, Deep Learning, Humans, Reproducibility of Results, X-Rays
ABSTRACT
Nowadays, the Internet of Things enabled Underwater Wireless Sensor Network (IoT-UWSN) suffers from serious performance restrictions, i.e., high end-to-end (E2E) delay, low energy efficiency, and low data reliability. Achieving efficient, reliable, collision- and interference-free communication has become a challenging task for researchers, since minimizing energy consumption (EC) and E2E delay increases the performance of the IoT-UWSN. Therefore, in the current work, two proactive routing protocols are presented: Bellman-Ford Shortest Path-based Routing (BF-SPR-Three) and Energy-efficient Path-based Void hole and Interference-free Routing (EP-VIR-Three). We then formalize the aforementioned problems to accomplish reliable data transmission in the Underwater Wireless Sensor Network (UWSN). The main objectives of this paper include minimal EC, interference-free transmission, void hole avoidance and a high packet delivery ratio (PDR). Furthermore, the algorithms for the proposed routing protocols are presented. Feasible regions using linear programming are also computed for optimal EC and to enhance the network lifespan. A comparative analysis with state-of-the-art proactive routing protocols is also performed. Finally, extensive simulations were carried out to validate the performance of the proposed routing protocols, and the results show that they significantly outperform their counterparts.
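BF-SPR-Three builds on Bellman-Ford shortest paths; a compact reference implementation is sketched below, where edge weights would stand for per-hop cost such as energy or delay. The example topology is an illustrative assumption.

```python
# Classic Bellman-Ford shortest-path relaxation over a small node/edge list.
def bellman_ford(nodes, edges, source):
    dist = {v: float("inf") for v in nodes}
    prev = {v: None for v in nodes}
    dist[source] = 0.0
    for _ in range(len(nodes) - 1):                 # relax every edge |V| - 1 times
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v], prev[v] = dist[u] + w, u
    for u, v, w in edges:                           # negative cycles are not expected here
        if dist[u] + w < dist[v]:
            raise ValueError("negative-weight cycle detected")
    return dist, prev

nodes = ["sink", "n1", "n2", "n3"]
edges = [("n3", "n2", 2.0), ("n3", "n1", 4.0), ("n2", "n1", 1.0),
         ("n2", "sink", 3.0), ("n1", "sink", 1.0)]
dist, prev = bellman_ford(nodes, edges, "n3")
print("cost n3 -> sink:", dist["sink"])             # 2 + 1 + 1 = 4
```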
ABSTRACT
Cancer has been one of the leading causes of death over the last two decades. It is diagnosed as either malignant or benign, depending on the severity of the infection and its current stage. Conventional methods require a detailed physical inspection by an expert dermatologist, which is time-consuming and imprecise. Therefore, several computer vision methods have been introduced lately that are cost-effective and reasonably accurate. In this work, we propose a new automated approach for skin lesion detection and recognition using a deep convolutional neural network (DCNN). The proposed cascaded design incorporates three fundamental steps: (a) contrast enhancement through fast local Laplacian filtering (FlLpF) along with HSV color transformation; (b) lesion boundary extraction using a color CNN approach followed by an XOR operation; (c) in-depth feature extraction by applying transfer learning using the Inception V3 model prior to feature fusion using a hamming distance (HD) approach. An entropy-controlled feature selection method is also introduced to select the most discriminant features. The proposed method is tested on the PH2 and ISIC 2017 datasets, whereas the recognition phase is validated on the PH2, ISBI 2016, and ISBI 2017 datasets. The results show that the proposed method outperforms several existing methods, attaining accuracies of 98.4% on the PH2 dataset, 95.1% on the ISBI 2016 dataset and 94.8% on the ISBI 2017 dataset.
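One plausible reading of the entropy-controlled feature selection step is sketched below: each fused deep-feature column is discretized, its Shannon entropy is computed, and only the highest-entropy columns are kept. The bin count, the kept fraction, and the random stand-in features are assumptions rather than the paper's exact criterion.

```python
# Entropy-ranked column selection over a fused deep-feature matrix (illustrative).
import numpy as np

def entropy_select(features, keep_ratio=0.5, bins=16):
    """features: (n_samples, n_features) array of fused deep features."""
    scores = []
    for col in features.T:
        hist, _ = np.histogram(col, bins=bins)
        p = hist / hist.sum()
        p = p[p > 0]
        scores.append(-np.sum(p * np.log2(p)))      # Shannon entropy of the column
    k = max(1, int(keep_ratio * features.shape[1]))
    keep = np.argsort(scores)[::-1][:k]             # indices of the most informative columns
    return features[:, keep], keep

fused = np.random.default_rng(0).normal(size=(100, 512))   # stand-in for fused deep features
selected, idx = entropy_select(fused)
print(selected.shape)                                       # (100, 256)
```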
Subjects
Dermoscopy/methods, Computer-Assisted Image Processing/methods, Melanoma/diagnosis, Neural Networks (Computer), Skin Neoplasms/diagnosis, Algorithms, Color, Humans, Sensitivity and Specificity, Skin Neoplasms/pathology
ABSTRACT
Alzheimer's disease (AD) is an incurable neurodegenerative disorder accounting for 70%-80% of dementia cases worldwide. Although research on AD has increased in recent years, the complexity associated with brain structure and function makes early diagnosis of this disease a challenging task. Resting-state functional magnetic resonance imaging (rs-fMRI) is a neuroimaging technology that has been widely used to study the pathogenesis of neurodegenerative diseases. In the literature, computer-aided diagnosis of AD is limited to binary classification or the diagnosis of the AD and MCI stages, and its applicability to diagnosing multiple progressive stages of AD is relatively under-studied. This study explores the effectiveness of rs-fMRI for multi-class classification of AD and its associated stages: cognitively normal (CN), significant memory concern (SMC), early mild cognitive impairment (EMCI), mild cognitive impairment (MCI), late mild cognitive impairment (LMCI), and AD. A longitudinal cohort of resting-state fMRI from 138 subjects (25 CN, 25 SMC, 25 EMCI, 25 LMCI, 13 MCI, and 25 AD) from the Alzheimer's Disease Neuroimaging Initiative (ADNI) is studied. To provide better insight into deep learning approaches and their application to AD classification, we investigate the ResNet-18 architecture in detail. We consider training the network from scratch using single-channel input, as well as transfer learning with and without fine-tuning using an extended network architecture. We experimented with residual neural networks for the AD classification task and compared the results with prior research in this domain. The performance of the models is evaluated using precision, recall, F1-measure, AUC and ROC curves. We found that our networks were able to classify the subjects effectively, and the fine-tuned model achieved improved results for all AD stages with accuracies of 100%, 96.85%, 97.38%, 97.43%, 97.40% and 98.01% for CN, SMC, EMCI, LMCI, MCI, and AD, respectively. In terms of overall performance, we achieved state-of-the-art results with average accuracies of 97.92% and 97.88% for the off-the-shelf and fine-tuned models, respectively. The analysis indicates that classification and prediction of neurodegenerative brain disorders such as AD using functional magnetic resonance imaging and advanced deep learning methods is promising for clinical decision making and has the potential to assist in the early diagnosis of AD and its associated stages.
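A brief PyTorch sketch of the fine-tuning setup is given below: an ImageNet pre-trained ResNet-18 whose final layer is replaced for the six stages. Replicating single-channel slices to three channels, the optimizer settings, and the random smoke-test tensors are assumptions for illustration.

```python
# Fine-tuning an ImageNet-pretrained ResNet-18 for six AD-related classes (illustrative).
import torch
import torch.nn as nn
from torchvision import models

num_classes = 6                                     # CN, SMC, EMCI, MCI, LMCI, AD
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def train_step(batch_images, batch_labels):
    """batch_images: (N, 1, 224, 224) single-channel slices; batch_labels: (N,) class indices."""
    model.train()
    x = batch_images.repeat(1, 3, 1, 1)             # replicate grayscale to the three RGB channels
    optimizer.zero_grad()
    loss = criterion(model(x), batch_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

# Smoke test with random tensors standing in for preprocessed rs-fMRI slices.
loss = train_step(torch.randn(4, 1, 224, 224), torch.randint(0, num_classes, (4,)))
print(f"batch loss: {loss:.3f}")
```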
Subjects
Alzheimer Disease/classification, Alzheimer Disease/diagnostic imaging, Computer-Assisted Diagnosis/methods, Neural Pathways/diagnostic imaging, Alzheimer Disease/pathology, Brain/diagnostic imaging, Brain/pathology, Deep Learning, Humans, Computer-Assisted Image Interpretation, Magnetic Resonance Imaging/methods, Neural Networks (Computer), Neural Pathways/pathology, Rest
ABSTRACT
Leukemia is a life-threatening disease. To date, leukemia diagnosis has been carried out manually by hematologists, which is time-consuming and error-prone, and the crucial problem is the precise segmentation of leukocyte nuclei. This paper presents a novel technique that addresses this basic and challenging segmentation step by fitting a Gaussian mixture model through expectation maximization. The proposed technique is tested on a set of 365 images, and the segmentation results are validated both qualitatively and quantitatively against current state-of-the-art methods on the basis of ground-truth data (images manually marked by medical experts); the qualitative comparison is carried out through visual inspection on four different grounds. Quantitatively, the proposed technique achieved an overall segmentation accuracy, sensitivity and precision of 92.8%, 93.5% and 98.16%, respectively, with an overall F-measure of 95.75%.
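A short sketch of the EM-fitted Gaussian mixture idea is given below: pixel colours are clustered into a few Gaussian components and the darkest component is taken as the nucleus mask. The component count, the darkest-mean heuristic, and the synthetic image are assumptions, not the paper's full pipeline.

```python
# Pixel-colour clustering with a Gaussian mixture model (EM runs inside fit()).
import numpy as np
from sklearn.mixture import GaussianMixture

def segment_nuclei(image_rgb, n_components=3, seed=0):
    """image_rgb: (H, W, 3) uint8 blood-smear image; returns a boolean nucleus mask."""
    h, w, _ = image_rgb.shape
    pixels = image_rgb.reshape(-1, 3).astype(np.float64)
    gmm = GaussianMixture(n_components=n_components, covariance_type="full",
                          random_state=seed).fit(pixels)
    labels = gmm.predict(pixels)
    nucleus_label = int(np.argmin(gmm.means_.sum(axis=1)))     # darkest (most stained) component
    return (labels == nucleus_label).reshape(h, w)

# Synthetic stand-in: a bright, noisy background with a dark square "nucleus".
rng = np.random.default_rng(1)
img = np.full((64, 64, 3), 220.0) + rng.normal(0, 5, (64, 64, 3))
img[20:44, 20:44] = np.array([90.0, 60.0, 140.0]) + rng.normal(0, 5, (24, 24, 3))
img = np.clip(img, 0, 255).astype(np.uint8)
mask = segment_nuclei(img)
print("nucleus pixels:", int(mask.sum()))
```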
Subjects
Cell Nucleus/genetics, Leukocytes/physiology, Laboratory Automation, Humans, Leukemia/genetics
ABSTRACT
BACKGROUND: In digital mammography, accurate breast profile segmentation of a mammogram is considered a challenging task. The presence of the pectoral muscle may mislead the diagnosis of cancer because of its high similarity to the breast body. Further challenges caused by the appearance of the pectoral muscle in the mammogram data include inaccurate estimation of the density level and assessment of cancer cells. The discrete differentiation operator has proven capable of eliminating the pectoral muscle before the analysis processing. METHODS: We propose a novel approach for removing the pectoral muscle from the mediolateral-oblique view of a mammogram using a discrete differentiation operator, which detects edge boundaries and approximates the gradient of the intensity function. Further refinement is achieved using a convex hull technique. This method is implemented on the dataset provided by MIAS and on 20 contrast-enhanced digital mammographic images. RESULTS: To assess the performance of the proposed method, visual inspection by a radiologist as well as calculations based on well-known metrics were used. For the performance metrics, the pixels in the pectoral muscle region of the input scans serve as ground truth. CONCLUSIONS: Our approach tolerates a wide variety of pectoral muscle geometries with a lower risk of bias in the breast profile than existing techniques.
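The sketch below gives one hedged OpenCV reading of the METHODS step: a Sobel (discrete differentiation) operator approximates the intensity gradient, the strong edge responses in the upper corner are thresholded, and a convex hull refines the region that is then suppressed. The thresholds, the upper-left-corner assumption, and the added chest-wall corner point are illustrative simplifications.

```python
# Sobel gradient + convex-hull refinement to suppress a pectoral-muscle-like wedge.
import cv2
import numpy as np

def remove_pectoral(mammo_gray):
    """mammo_gray: 2D uint8 MLO mammogram with the pectoral muscle in the upper-left corner."""
    gx = cv2.Sobel(mammo_gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(mammo_gray, cv2.CV_64F, 0, 1, ksize=3)
    grad = cv2.convertScaleAbs(cv2.magnitude(gx, gy))          # gradient magnitude
    _, edges = cv2.threshold(grad, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    h, w = mammo_gray.shape
    roi = np.zeros_like(edges)
    roi[: h // 2, : w // 2] = edges[: h // 2, : w // 2]        # restrict to the upper corner
    points = cv2.findNonZero(roi)
    if points is None:
        return mammo_gray
    corner = np.array([[[0, 0]]], dtype=np.int32)              # chest-wall corner closes the hull
    hull = cv2.convexHull(np.vstack([points, corner]))
    mask = np.zeros_like(mammo_gray)
    cv2.fillConvexPoly(mask, hull, 255)
    return cv2.bitwise_and(mammo_gray, cv2.bitwise_not(mask))  # suppress the muscle region

# Synthetic stand-in: mid-grey breast area with a brighter triangular "pectoral" wedge.
img = np.full((256, 256), 120, dtype=np.uint8)
cv2.fillConvexPoly(img, np.array([[0, 0], [110, 0], [0, 140]], dtype=np.int32), 200)
cleaned = remove_pectoral(img)
print("bright pixels before/after:", int((img > 180).sum()), int((cleaned > 180).sum()))
```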
Subjects
Breast/diagnostic imaging, Mammography/methods, Pectoral Muscles/diagnostic imaging, Computer-Assisted Radiographic Image Interpretation/methods, Algorithms, Female, Humans
ABSTRACT
This paper presents a novel feature mining approach for documents that cannot be mined via optical character recognition (OCR). By identifying the close relationship between the text and the graphical components, the proposed technique extracts the Start, End, and Exact values for each bar. Furthermore, word 2-gram and Euclidean distance methods are used to accurately detect and determine plagiarism in bar charts.
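A small sketch of the word 2-gram plus Euclidean-distance comparison is shown below: each chart's extracted text becomes a bag of word bigrams, and a small Euclidean distance between the count vectors flags a likely copied chart. The example strings and the implied threshold are assumptions.

```python
# Word-bigram count vectors compared with Euclidean distance (illustrative).
import numpy as np
from itertools import tee

def word_bigrams(text):
    words = text.lower().split()
    a, b = tee(words)
    next(b, None)
    return list(zip(a, b))

def bigram_distance(text_a, text_b):
    grams_a, grams_b = word_bigrams(text_a), word_bigrams(text_b)
    vocab = sorted(set(grams_a) | set(grams_b))
    vec_a = np.array([grams_a.count(g) for g in vocab], dtype=float)
    vec_b = np.array([grams_b.count(g) for g in vocab], dtype=float)
    return float(np.linalg.norm(vec_a - vec_b))     # Euclidean distance between bigram counts

original = "annual revenue by region 2020 north south east west"
suspect = "annual revenue by region 2020 north south east west"
print("distance:", bigram_distance(original, suspect))   # 0.0 suggests a likely copied chart
```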
Subjects
Algorithms, Artificial Intelligence, Data Mining/methods, Automated Pattern Recognition/methods, Plagiarism, Computer Graphics, Humans, Semantics
ABSTRACT
Brain tumors present a significant medical challenge, demanding accurate and timely diagnosis for effective treatment planning. These tumors disrupt normal brain functions in various ways, giving rise to a broad spectrum of physical, cognitive, and emotional challenges, and the daily increase in mortality rates attributed to brain tumors underscores the urgency of this issue. In recent years, advanced medical imaging techniques, particularly magnetic resonance imaging (MRI), have emerged as indispensable tools for diagnosing brain tumors. Brain MRI scans provide high-resolution, non-invasive visualization of brain structures, facilitating the precise detection of abnormalities such as tumors. This study proposes an effective neural network approach for the timely diagnosis of brain tumors. Our experiments utilized a multi-class MRI image dataset comprising 21,672 images of glioma, meningioma, and pituitary tumors. We introduce a novel neural network-based feature engineering approach that combines a 2D convolutional neural network (2DCNN) and VGG16. The resulting 2DCNN-VGG16 network (CVG-Net) extracts spatial features from MRI images using the 2DCNN and VGG16 without human intervention. The newly created hybrid feature set is then fed into machine learning models to diagnose brain tumors. We balanced the multi-class MRI image features using the Synthetic Minority Over-sampling Technique (SMOTE). Extensive experiments demonstrate that, using the proposed CVG-Net, the k-neighbors classifier outperformed state-of-the-art studies with a k-fold accuracy score of 0.96. We also applied hyperparameter tuning to further enhance performance for multi-class brain tumor diagnosis. The proposed approach has the potential to revolutionize early brain tumor diagnosis, providing medical professionals with a cost-effective and timely diagnostic mechanism.
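The downstream part of the described pipeline can be sketched as below: deep features from a frozen VGG16, class balancing with SMOTE, and a k-neighbors classifier on the balanced features. The image size, k value, random stand-in data, and the omission of the 2DCNN branch and feature fusion are assumptions for illustration.

```python
# VGG16 feature extraction, SMOTE balancing, and a k-neighbors classifier (illustrative).
import numpy as np
import tensorflow as tf
from imblearn.over_sampling import SMOTE
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

def vgg16_features(images):
    """images: (N, 224, 224, 3) float array of MRI slices rendered as RGB."""
    base = tf.keras.applications.VGG16(include_top=False, weights="imagenet",
                                       pooling="avg", input_shape=(224, 224, 3))
    prep = tf.keras.applications.vgg16.preprocess_input(images.copy())
    return base.predict(prep, verbose=0)            # (N, 512) deep feature vectors

# Random stand-ins for the three tumor classes (glioma, meningioma, pituitary).
rng = np.random.default_rng(0)
images = rng.uniform(0, 255, size=(60, 224, 224, 3)).astype("float32")
labels = rng.integers(0, 3, size=60)

features = vgg16_features(images)
X_bal, y_bal = SMOTE(random_state=0).fit_resample(features, labels)   # balance the classes
X_tr, X_te, y_tr, y_te = train_test_split(X_bal, y_bal, stratify=y_bal, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print("held-out accuracy:", knn.score(X_te, y_te))
```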