Results 1 - 20 of 23
1.
Sensors (Basel) ; 23(3)2023 Jan 26.
Article in English | MEDLINE | ID: mdl-36772430

ABSTRACT

Early, valid prediction of heart problems would minimize life threats and save lives, while a lack of prediction and false diagnoses can be fatal. Relying on a single dataset alone to build a machine learning model for the identification of heart problems is not practical, because each country and hospital has its own data schema, structure, and quality. On this basis, a generic framework has been built for heart problem diagnosis. It is a hybrid framework that employs multiple machine learning and deep learning techniques and votes for the best outcome based on a novel voting technique intended to remove bias from the model. The framework contains two consecutive layers. The first layer runs multiple machine learning models simultaneously over a given dataset. The second layer consolidates the outputs of the first layer and performs a second classification based on the novel voting technique. Prior to classification, the framework selects the top features using a proposed feature selection framework: it filters the columns using multiple feature selection methods and keeps the top features they have in common. Results from the proposed framework, with 95.6% accuracy, show its superiority over single machine learning models, the classical stacking technique, and the traditional voting technique. The main contribution of this work is to demonstrate how the prediction probabilities of multiple models can be exploited to create another layer for the final output; this step neutralizes any model bias. Another experimental contribution is proving that the complete pipeline can be retrained and used for other datasets collected using different measurements and with different distributions.
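
To make the two-layer idea concrete, here is a minimal, illustrative sketch (not the authors' implementation): two filter methods vote on a common feature subset, first-layer models emit class probabilities, and a second-layer classifier is trained on those stacked probabilities. The synthetic data, model choices, and k=12 cutoff are assumptions for illustration.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for a heart-disease table: 600 rows, 20 candidate features.
X, y = make_classification(n_samples=600, n_features=20, random_state=0)

def top_features(score_func, k=12):
    # Rank all features with one filter method and keep the k best indices.
    scores = SelectKBest(score_func, k=k).fit(X, y).scores_
    return set(np.argsort(scores)[-k:])

# Keep only the features that both filter methods agree on.
common = sorted(top_features(f_classif) & top_features(mutual_info_classif))
X_sel = X[:, common]
X_tr, X_te, y_tr, y_te = train_test_split(X_sel, y, random_state=0)

# Layer 1: heterogeneous base models producing class probabilities.
base = [RandomForestClassifier(random_state=0),
        SVC(probability=True, random_state=0),
        LogisticRegression(max_iter=1000)]
probs_tr = np.hstack([m.fit(X_tr, y_tr).predict_proba(X_tr) for m in base])
probs_te = np.hstack([m.predict_proba(X_te) for m in base])

# Layer 2: a meta-classifier consolidates the stacked probabilities.
# (A real pipeline would use out-of-fold probabilities to avoid leakage.)
meta = LogisticRegression(max_iter=1000).fit(probs_tr, y_tr)
print("second-layer accuracy:", meta.score(probs_te, y_te))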


Subjects
Machine Learning, Probability
2.
Sensors (Basel) ; 22(17)2022 Aug 29.
Article in English | MEDLINE | ID: mdl-36080971

ABSTRACT

The correlations between smartphone sensors, algorithms, and relevant techniques are major components facilitating indoor localization and tracking in the absence of communication and localization standards. A major research gap can be noted in explaining the connections between these components and clarifying the impacts and issues of models meant for indoor localization and tracking. In this paper, we comprehensively study the smartphone sensors, algorithms, and techniques that can support indoor localization and tracking without any additional hardware or specific infrastructure. Reviews and comparisons detail the strengths and limitations of each component, after which we propose a handheld-device-based indoor localization with zero infrastructure (HDIZI) approach to connect the abovementioned components in a balanced manner. The sensors are the input source, while the algorithms are used as engines in an optimal manner, in order to produce a robust localization and tracking model without requiring any further infrastructure. The proposed framework makes indoor and outdoor navigation more user-friendly, and it is cost-effective for researchers working with embedded sensors in handheld devices, enabling technologies for Industry 4.0 and beyond. As initial work, we conducted experiments using data collected from two different sites with five smartphones. The data were sampled at 10 Hz for a duration of five seconds at fixed locations; furthermore, data were also collected while moving, allowing for analysis based on user stepping behavior and speed across multiple paths. We leveraged the capabilities of smartphones, through efficient implementation and the optimal integration of algorithms, to overcome their inherent limitations. Hence, the proposed HDIZI is expected to outperform approaches proposed in previous studies, helping researchers deal with sensors for indoor navigation, in terms of either positioning or tracking, in various fields such as healthcare, transportation, environmental monitoring, and disaster situations.
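
As a rough illustration of the data-handling step described above (10 Hz sampling, five-second windows), the following sketch slices a hypothetical accelerometer stream into fixed-length windows and computes simple summary features; the signal, sensor choice, and feature set are assumptions, not the HDIZI pipeline.

import numpy as np

RATE_HZ, WINDOW_S = 10, 5
samples_per_window = RATE_HZ * WINDOW_S  # 50 samples per five-second window

# Hypothetical accelerometer-magnitude stream from one phone at one location.
rng = np.random.default_rng(0)
stream = 9.81 + 0.05 * rng.standard_normal(5 * samples_per_window)

windows = stream[: len(stream) // samples_per_window * samples_per_window]
windows = windows.reshape(-1, samples_per_window)

# One simple feature vector per window: mean, std, min, max.
features = np.column_stack([windows.mean(1), windows.std(1),
                            windows.min(1), windows.max(1)])
print(features.shape)  # (5, 4): five windows, four summary features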


Subjects
Algorithms, Smartphone, Computers, Transportation
3.
Sensors (Basel) ; 20(8)2020 Apr 22.
Article in English | MEDLINE | ID: mdl-32331260

ABSTRACT

The IEEE 802.15.6 standard has the potential to provide cost-effective and unobtrusive medical services to individuals with chronic health conditions. It is a low-power standard developed for wireless body area networks and enables wireless communication inside or near a human body. The standard utilizes a Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) protocol to improve network performance under different channel access priorities. However, the CSMA/CA proposed in the IEEE 802.15.6 standard suffers from poor throughput and link reliability when some of the nodes deployed on a human body are hidden from each other. We employ the RTS/CTS scheme to solve hidden node problems in IEEE 802.15.6 networks over a lossy channel. To improve the performance of the RTS/CTS scheme, we adjust the transmission power levels of the nodes according to transmission failures. We estimate the throughput and energy consumption of the proposed model by varying several parameters, such as the contention window size, bit error ratio, and number of nodes in different priority classes. The performance results are obtained through analytical approximations and simulations. We observe that the proposed model significantly improves the performance of the IEEE 802.15.6 CSMA/CA by resolving hidden node problems.
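
The power-adjustment idea can be illustrated with a toy simulation, sketched below under assumed power levels and per-level success probabilities; it is not the paper's analytical model of IEEE 802.15.6 CSMA/CA.

import random

POWER_LEVELS_DBM = [-10, -5, 0]             # assumed discrete transmit power levels
SUCCESS_PROB = {-10: 0.4, -5: 0.7, 0: 0.9}  # assumed per-level link quality

def send_with_power_control(n_frames, seed=1):
    random.seed(seed)
    level, successes, energy = 0, 0, 0.0
    for _ in range(n_frames):
        p_dbm = POWER_LEVELS_DBM[level]
        energy += 10 ** (p_dbm / 10)        # relative energy spent on this attempt
        if random.random() < SUCCESS_PROB[p_dbm]:
            successes += 1
            level = 0                       # success: fall back to the lowest level
        else:
            level = min(level + 1, len(POWER_LEVELS_DBM) - 1)  # failure: step power up
    return successes / n_frames, energy

print(send_with_power_control(1000))        # (delivery ratio, relative energy)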


Subjects
Computer Communication Networks, Wireless Technology, Delivery of Health Care
4.
Sensors (Basel) ; 20(20)2020 Oct 15.
Article in English | MEDLINE | ID: mdl-33076436

ABSTRACT

In this paper, we propose a pen device capable of detecting specific features from dynamic handwriting tests to aid automatic Parkinson's disease identification. The method applies machine learning to compare the raw signals from the different sensors coupled to the pen and extracts relevant information, such as tremor and hand acceleration, to support clinical diagnosis. Additionally, the datasets of raw signals acquired here from healthy subjects and Parkinson's disease patients are made available to further contribute to research on this topic.
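
A minimal sketch of the kind of feature extraction such a device might feed into a classifier is shown below; the sampling rate, synthetic signal, and 4-6 Hz tremor band are assumptions for illustration, not the device's specification.

import numpy as np

FS = 100  # assumed sampling frequency in Hz

rng = np.random.default_rng(0)
t = np.arange(0, 10, 1 / FS)
# Hypothetical pen signal: slow writing motion plus a 5 Hz tremor component.
accel = (0.2 * np.sin(2 * np.pi * 0.5 * t)
         + 0.05 * np.sin(2 * np.pi * 5 * t)
         + 0.01 * rng.standard_normal(t.size))

spectrum = np.abs(np.fft.rfft(accel - accel.mean()))
freqs = np.fft.rfftfreq(accel.size, 1 / FS)
band = (freqs >= 4) & (freqs <= 6)  # assumed 4-6 Hz tremor band

features = {
    "mean_abs_accel": float(np.mean(np.abs(accel))),
    "tremor_band_power": float(np.sum(spectrum[band] ** 2)),
    "dominant_freq_hz": float(freqs[np.argmax(spectrum)]),
}
print(features)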


Subjects
Handwriting, Physiologic Monitoring/instrumentation, Parkinson Disease, Acceleration, Humans, Machine Learning, Parkinson Disease/diagnosis, Tremor
5.
Sensors (Basel) ; 20(23)2020 Nov 24.
Article in English | MEDLINE | ID: mdl-33255308

ABSTRACT

Several pathologies have a direct impact on society, causing public health problems. Pulmonary diseases such as chronic obstructive pulmonary disease (COPD) are already the third leading cause of death in the world, while tuberculosis ranks ninth, with 1.7 million deaths and over 10.4 million new cases. The detection of lung regions in images is a classic medical challenge. Studies show that computational methods contribute significantly to the medical diagnosis of lung pathologies by computerized tomography (CT), as well as through Internet of Things (IoT) methods in the health-of-things context. The present work proposes a new IoT-based model for the classification and segmentation of pulmonary CT images, applying transfer learning in deep learning methods combined with Parzen's probability density. The proposed model uses an Application Programming Interface (API) based on the Internet of Medical Things to classify lung images. The approach was very effective, with over 98% classification accuracy on pulmonary images. The model then proceeds to the lung segmentation stage, using the Mask R-CNN network to create a pulmonary map and fine-tuning to find the pulmonary borders on the CT image. The proposed method performed better than other works in the literature, reaching high segmentation metrics, such as an accuracy of 98.34%. Besides reaching a segmentation time of 5.43 s and surpassing other transfer learning models, our methodology stands out because it is fully automatic. The proposed approach simplifies the segmentation process using transfer learning and introduces a faster and more effective method for better-performing lung segmentation, making our model fully automatic and robust.
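
For readers unfamiliar with the transfer-learning step, the sketch below reuses a pretrained backbone and retrains only its final layer on a stand-in batch; the backbone choice, class count, and random tensors are placeholders, not the paper's model or data.

import torch
import torch.nn as nn
from torchvision import models

# Download ImageNet weights and freeze the feature extractor.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)  # new head for 2 placeholder classes

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on random tensors standing in for CT slices.
images, labels = torch.randn(4, 3, 224, 224), torch.tensor([0, 1, 0, 1])
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
print("loss after one step:", float(loss))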


Subjects
Deep Learning, Internet of Things, Tomography, X-Ray Computed, Image Processing, Computer-Assisted, Lung/diagnostic imaging
6.
J Med Syst ; 42(6): 99, 2018 Apr 16.
Article in English | MEDLINE | ID: mdl-29663090

ABSTRACT

In recent years, human activity recognition from body sensor or wearable sensor data has attracted considerable research attention from academia and the health industry. This research can be useful for various e-health applications, such as monitoring elderly and physically impaired people in smart homes to improve their rehabilitation processes. However, it is not easy to accurately and automatically recognize physical human activity through wearable sensors due to the complexity and variety of body activities. In this paper, we address human activity recognition as a classification problem using wearable body sensor data. In particular, we propose to utilize a Deep Belief Network (DBN) model for successful human activity recognition. First, we extract important initial features from the raw body sensor data. Then, kernel principal component analysis (KPCA) and linear discriminant analysis (LDA) are performed to further process the features and make them more robust and useful for fast activity recognition. Finally, the DBN is trained on these features. Various experiments were performed on a real-world wearable sensor dataset to verify the effectiveness of the deep learning algorithm. The results show that the proposed DBN outperforms other algorithms and achieves satisfactory activity recognition performance.
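
The feature pipeline described above can be sketched as follows; since scikit-learn ships no Deep Belief Network, an MLP classifier stands in for the DBN, and a synthetic dataset stands in for wearable sensor windows.

from sklearn.datasets import make_classification
from sklearn.decomposition import KernelPCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in: 800 windows, 30 raw features, 4 activity classes.
X, y = make_classification(n_samples=800, n_features=30, n_classes=4,
                           n_informative=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

pipe = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=15, kernel="rbf"),
    LinearDiscriminantAnalysis(n_components=3),   # at most n_classes - 1 components
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500, random_state=0),
)
pipe.fit(X_tr, y_tr)
print("activity recognition accuracy:", pipe.score(X_te, y_te))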


Subjects
Machine Learning, Ambulatory Monitoring/methods, Movement/physiology, Remote Sensing Technology/methods, Algorithms, Exercise Test, Humans, Neural Networks, Computer, Reproducibility of Results
7.
Sensors (Basel) ; 17(5)2017 Apr 26.
Article in English | MEDLINE | ID: mdl-28445441

ABSTRACT

Understanding the various health-oriented vital sign data generated from body sensor networks (BSNs) and discovering the associations between the generated parameters is an important task that may assist and promote critical decision making in healthcare. For example, in a smart home scenario where occupants' health status is continuously monitored remotely, it is essential to provide the required assistance when an unusual or critical situation is detected in their vital sign data. In this paper, we present an efficient approach for mining periodic patterns obtained from BSN data. In addition, we employ a correlation test on the generated patterns and introduce productive-associated periodic-frequent patterns as the set of correlated periodic-frequent items. The combination of these measures has the advantage of empowering healthcare providers and patients to raise the quality of diagnosis as well as improve treatment and smart care, especially for elderly people in smart homes. We develop an efficient algorithm named PPFP-growth (Productive Periodic-Frequent Pattern-growth) to discover all productive-associated periodic-frequent patterns using these measures. PPFP-growth is efficient, and the productiveness measure removes uncorrelated periodic items. An experimental evaluation on synthetic and real datasets shows the efficiency of the proposed PPFP-growth algorithm, which can filter a huge number of periodic patterns to reveal only the correlated ones.
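
A highly simplified sketch of the periodic-frequent idea (not PPFP-growth itself, and without the productiveness test) is given below: an item is kept only if it is frequent and the gaps between its occurrences never exceed a maximum period.

def periodic_frequent_items(transactions, min_support, max_period):
    # transactions: list of (timestamp, set_of_items), timestamps increasing
    occurrences = {}
    for ts, items in transactions:
        for item in items:
            occurrences.setdefault(item, []).append(ts)
    last_ts = transactions[-1][0]
    result = {}
    for item, stamps in occurrences.items():
        if len(stamps) < min_support:
            continue                       # not frequent enough
        gaps = [b - a for a, b in zip([0] + stamps, stamps + [last_ts])]
        if max(gaps) <= max_period:        # never disappears for too long
            result[item] = {"support": len(stamps), "max_period": max(gaps)}
    return result

# Hypothetical vital-sign events per time slot.
vitals = [(1, {"hr_high"}), (3, {"hr_high", "spo2_low"}), (5, {"hr_high"}),
          (7, {"hr_high", "spo2_low"}), (9, {"hr_high"})]
print(periodic_frequent_items(vitals, min_support=3, max_period=3))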


Subjects
Home Care Services, Algorithms, Data Mining, Delivery of Health Care, Physiologic Monitoring
8.
Sensors (Basel) ; 17(7)2017 Jul 10.
Article in English | MEDLINE | ID: mdl-28698501

ABSTRACT

Body area networks (BANs) are configured with a large number of ultra-low-power wearable devices, which constantly monitor physiological signals of the human body and thus enable intelligent monitoring. However, the collection and transfer of human body signals consume energy, and, considering the comfort requirements of wearable devices, both the size and the capacity of a wearable device's battery are limited. Thus, minimizing the energy consumption of wearable devices and optimizing BAN energy efficiency remains a challenging problem. In this paper, we therefore propose an energy harvesting-based BAN for smart health and discuss an optimal resource allocation scheme to improve BAN energy efficiency. Specifically, we first formulate the energy efficiency optimization problem of dividing time between wireless energy transfer and wireless information transfer, considering energy harvesting in the BAN and the time limits of human body signal transfer. Second, we convert the optimization problem into a convex optimization problem under a linear constraint and propose a closed-form solution. Finally, simulation results show that when the amount of data acquired by the wearable devices is small, the circuit and signal-acquisition energy accounts for a large share of consumption, whereas when the amount of data is large, the energy consumed by signal transfer becomes decisive.
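
As a back-of-the-envelope illustration of the time-division trade-off, the sketch below sweeps the fraction of a frame spent harvesting energy versus transmitting information and picks the split with the best bits per joule; all constants are assumptions, and this is not the paper's closed-form solution.

import numpy as np

HARVEST_RATE_W = 2e-3     # assumed power harvested during the energy-transfer phase
CIRCUIT_POWER_W = 0.5e-3  # assumed circuit + signal-acquisition power, always on
BANDWIDTH_HZ = 1e6
SNR_PER_WATT = 5e3        # assumed lumped channel gain / noise figure
FRAME_S = 1.0

tau = np.linspace(0.01, 0.99, 99)            # fraction of the frame spent harvesting
# Energy harvested during tau is spent as transmit power over the (1 - tau) phase.
tx_power = HARVEST_RATE_W * tau / (1 - tau)
bits = (1 - tau) * FRAME_S * BANDWIDTH_HZ * np.log2(1 + SNR_PER_WATT * tx_power)
energy = CIRCUIT_POWER_W * FRAME_S + HARVEST_RATE_W * tau * FRAME_S
efficiency = bits / energy                   # bits delivered per joule consumed

best = np.argmax(efficiency)
print(f"best harvesting fraction ~ {tau[best]:.2f}, {efficiency[best]:.3e} bits/J")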

9.
Sensors (Basel) ; 17(12)2017 Dec 07.
Article in English | MEDLINE | ID: mdl-29215591

ABSTRACT

Ensuring self-coexistence among IEEE 802.22 networks is a challenging problem owing to the opportunistic access of incumbent-free radio resources by users in co-located networks. In this study, we propose a fully distributed, non-cooperative approach to ensure self-coexistence in the downlink channels of IEEE 802.22 networks. We formulate the self-coexistence problem as a mixed-integer non-linear optimization problem for maximizing the network data rate, which is NP-hard. This work explores a sub-optimal solution by dividing the optimization problem into downlink channel allocation and power assignment sub-problems. Considering fairness, quality of service, and minimum interference for customer premises equipment, we also develop a greedy algorithm for channel allocation and a non-cooperative game-theoretic framework for near-optimal power allocation. The base stations of the networks are treated as players in a game, in which they try to increase spectrum utilization by controlling power and reaching a Nash equilibrium point. We further develop a utility function for the game to increase the data rate by minimizing the transmission power and, subsequently, the interference from neighboring networks. A theoretical proof of the uniqueness and existence of the Nash equilibrium is presented. Simulation studies show performance improvements in terms of data rate, with a degree of fairness, compared to a cooperative branch-and-bound-based algorithm and a non-cooperative greedy approach.
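
The game-theoretic idea can be illustrated with a toy best-response loop, sketched below with an assumed utility (rate minus a linear power price) and random channel gains; it is not the paper's utility function or algorithm.

import numpy as np

rng = np.random.default_rng(0)
N = 4                                          # co-located IEEE 802.22 base stations
gain = rng.uniform(0.5, 1.5, size=(N, N))      # assumed channel gains, gain[i, j]: j -> i
noise, price = 0.1, 2.0
levels = np.linspace(0.1, 1.0, 10)             # discrete transmit power levels

power = np.full(N, levels[0])
for _ in range(50):                            # best-response iterations
    changed = False
    for i in range(N):
        interference = sum(gain[i, j] * power[j] for j in range(N) if j != i) + noise
        utility = np.log2(1 + gain[i, i] * levels / interference) - price * levels
        best = levels[np.argmax(utility)]
        if best != power[i]:
            power[i], changed = best, True
    if not changed:                            # no one wants to deviate: Nash-like point
        break
print("equilibrium powers:", power)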

10.
J Med Syst ; 39(12): 192, 2015 Dec.
Article in English | MEDLINE | ID: mdl-26490150

ABSTRACT

With advances in wearable computing and various wireless technologies, there is an increasing trend to outsource body signals from wireless body area networks (WBANs) to the outside world, including cyberspace and healthcare big data clouds. Since the environmental and physiological data collected by multimodal sensors have different importance, provisioning quality of service (QoS) for the sensory data in a WBAN is a critical issue. This paper proposes a multi-level QoS design at the WBAN medium access control layer in terms of user level, data level, and time level. In the proposed QoS provisioning scheme, different users have different priorities, the various sensory data collected by different sensor nodes have different importance, and the data priority for the same sensor node varies over time. The experimental results show that the proposed multi-level QoS provisioning solution in WBANs yields better performance in meeting the QoS requirements of personalized healthcare applications while achieving energy savings.
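
One way to picture the combined user/data/time levels is a single priority score with made-up weights, as in the sketch below; the weights and packet examples are illustrative assumptions, not the paper's scheme.

import heapq

def priority(user_level, data_level, age_s, w=(100, 10, 1)):
    # Higher score = served first; packet age models the time-varying component.
    return user_level * w[0] + data_level * w[1] + age_s * w[2]

queue = []
packets = [
    {"desc": "ECG spike, ICU patient", "user": 3, "data": 3, "age": 0.2},
    {"desc": "step count, healthy user", "user": 1, "data": 1, "age": 5.0},
    {"desc": "SpO2, elderly at home", "user": 2, "data": 2, "age": 1.0},
]
for p in packets:
    heapq.heappush(queue, (-priority(p["user"], p["data"], p["age"]), p["desc"]))

while queue:
    print(heapq.heappop(queue)[1])  # served in descending priority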


Subjects
Computer Communication Networks/instrumentation, Remote Sensing Technology/instrumentation, Telemedicine/instrumentation, Wireless Technology/instrumentation, Awareness, Computer Simulation, Humans
11.
Sensors (Basel) ; 14(12): 24381-407, 2014 Dec 18.
Article in English | MEDLINE | ID: mdl-25529205

ABSTRACT

The problem of moving target tracking in directional sensor networks (DSNs) introduces new research challenges, including the optimal selection of the sensing and communication sectors of the directional sensor nodes, determination of the precise location of the target, and an energy-efficient data collection mechanism. Existing solutions allow individual sensor nodes to detect the target's location through collaboration among neighboring nodes, where most of the sensors are activated and communicate with the sink. Therefore, they incur considerable overhead, energy loss, and reduced target-tracking accuracy. In this paper, we propose a clustering algorithm in which distributed cluster heads coordinate their member nodes to optimize the active sensing and communication directions of the nodes, precisely determine the target location by aggregating sensing data reported by multiple nodes, and transfer the resulting location information to the sink. Thus, the proposed target tracking mechanism minimizes sensing redundancy and maximizes the number of sleeping nodes in the network. We also investigate a dynamic approach of activating sleeping nodes on demand so that moving-target tracking accuracy can be enhanced while maximizing the network lifetime. We carried out extensive simulations in ns-3, and the results show that the proposed mechanism achieves higher performance compared to state-of-the-art works.
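
To give a flavor of what a cluster head might do with its members' reports, the sketch below fuses noisy per-node measurements into one target estimate with a weighted centroid; the weighting rule and scenario are assumptions, not the proposed mechanism.

import numpy as np

rng = np.random.default_rng(0)
true_target = np.array([12.0, 7.0])

# Hypothetical member nodes: positions and noisy measured distances to the target.
nodes = rng.uniform(0, 20, size=(6, 2))
dists = np.linalg.norm(nodes - true_target, axis=1) + rng.normal(0, 0.3, 6)

# Closer nodes are trusted more: weight inversely proportional to measured distance.
weights = 1.0 / dists
estimate = (weights[:, None] * nodes).sum(0) / weights.sum()
print("fused estimate:", estimate,
      "error:", np.linalg.norm(estimate - true_target))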

12.
Sci Rep ; 14(1): 9584, 2024 Apr 26.
Article in English | MEDLINE | ID: mdl-38671012

ABSTRACT

The rapid advancement of modern communication technologies necessitates the development of generalized multi-access frameworks and the continuous implementation of rate splitting, augmented with semantic awareness. This trend, coupled with the mounting pressure on wireless services, underscores the need for intelligent approaches to radio signal propagation. In response to these challenges, intelligent reflecting surfaces (IRS) have garnered significant attention for their ability to control data transmission systems in a goal-oriented and dynamic manner, largely thanks to equitable resource allocation and the dynamic enhancement of network performance. However, integrating the rate-splitting multi-access (RSMA) architecture with semantic considerations imposes stringent requirements on IRS platforms to ensure seamless connectivity and broad coverage for a diverse user base without interference. Semantic communications hinge on a knowledge base, a centralized repository of integrated information related to the transmitted data, which becomes critically important in multi-antenna scenarios. This article proposes a novel set of design strategies for RSMA-IRS systems, enabled by reconfigurable intelligent surfaces synergizing with semantic communication principles. An experimental analysis demonstrates the effectiveness of these design guidelines in the context of Beyond 5G/6G communication systems. The RSMA-IRS model, infused with semantic communication, offers a promising solution for future wireless networks. Performance evaluations of the proposed approach reveal that, despite an increase in the number of users, the delay in the RSMA-IRS framework incorporating semantics is 2.94% less than that of an RSMA-IRS system without semantic integration.

13.
Heliyon ; 9(2): e13636, 2023 Feb.
Article in English | MEDLINE | ID: mdl-36852018

ABSTRACT

Convolutional neural networks (CNNs) have demonstrated exceptional results in the analysis of time-series data when used for Human Activity Recognition (HAR). The manual design of such neural architectures is an error-prone and time-consuming process. The search for optimal CNN architectures is considered a revolution in the design of neural networks: by means of Neural Architecture Search (NAS), network architectures can be designed and optimized automatically. Thus, the optimal CNN architecture representation can be found automatically, overcoming the limitations of human experience and thinking modes. Evolutionary algorithms, which are derived from evolutionary mechanisms such as natural selection and genetics, have been widely employed to develop and optimize NAS because they can handle a black-box optimization process for designing appropriate solution representations and search paradigms without explicit mathematical formulations or gradient information. The genetic algorithm (GA) is widely used to find optimal or near-optimal solutions for difficult problems. Considering these characteristics, an efficient human activity recognition architecture (AUTO-HAR) is presented in this study. Using an evolutionary GA to select the optimal CNN architecture, the study proposes a novel encoding schema structure and a novel search space with a much broader range of operations to effectively search for the best architectures for HAR tasks. In addition, the proposed search space provides a reasonable degree of depth because it does not limit the maximum length of the devised task architecture. To test the effectiveness of the proposed framework for HAR tasks, three datasets were utilized: UCI-HAR, Opportunity, and DAPHNET. The results show that the proposed method can efficiently recognize human activity, with average accuracies of 98.5% (±1.1), 98.3%, and 99.14% (±0.8) for UCI-HAR, Opportunity, and DAPHNET, respectively.
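
A heavily simplified genetic-algorithm loop over variable-length architecture genomes is sketched below; the encoding, fitness placeholder, and operators are illustrative assumptions, not the AUTO-HAR encoding schema or search space.

import random

random.seed(0)
CHOICES = [16, 32, 64, 128]             # assumed per-layer filter counts

def random_genome():
    # A genome is a variable-length list of layer widths (depth is not capped hard).
    return [random.choice(CHOICES) for _ in range(random.randint(2, 6))]

def fitness(genome):
    # Placeholder for "train the decoded CNN and return validation accuracy":
    # here, mildly reward capacity and penalize depth so the loop has a signal.
    return sum(genome) / 1000 - 0.05 * len(genome)

def crossover(a, b):
    cut = random.randint(1, min(len(a), len(b)) - 1)
    return a[:cut] + b[cut:]

def mutate(genome, rate=0.2):
    return [random.choice(CHOICES) if random.random() < rate else g for g in genome]

population = [random_genome() for _ in range(20)]
for _ in range(10):                     # generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]           # truncation selection
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(10)]
    population = parents + children

print("best architecture found:", max(population, key=fitness))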

15.
Healthcare (Basel) ; 11(3)2023 Jan 31.
Article in English | MEDLINE | ID: mdl-36766986

ABSTRACT

The coronavirus epidemic has spread to virtually every country on the globe, inflicting enormous health, financial, and emotional devastation, as well as the collapse of healthcare systems in some countries. Any automated COVID detection system that allows for fast detection of COVID-19 infection might be highly beneficial to healthcare services and people around the world. Molecular or antigen testing, along with radiological X-ray imaging, is now used in clinics to diagnose COVID-19. Nonetheless, due to the spike in coronavirus cases and hospital doctors' overwhelming workload, developing an AI-based automatic COVID detection system with high accuracy has become imperative. On X-ray images, distinguishing COVID-19, non-COVID viral pneumonia, and other lung opacities can be challenging. This research utilized artificial intelligence (AI) to deliver highly accurate automated COVID-19 detection from chest X-ray images. Further, the study was extended to differentiate COVID-19 from normal, lung opacity, and non-COVID viral pneumonia images. We employed three distinct pre-trained models, Xception, VGG19, and ResNet50, on a benchmark dataset of 21,165 X-ray images. Initially, we formulated COVID-19 detection as a binary classification problem to classify COVID-19 versus normal X-ray images and obtained 97.5%, 97.5%, and 93.3% accuracy for Xception, VGG19, and ResNet50, respectively. Later, we focused on developing an efficient model for multi-class classification and obtained an accuracy of 75% for ResNet50, 92% for VGG19, and 93% for Xception. Although Xception's and VGG19's accuracies were identical in the binary task, Xception proved to be more efficient, with higher precision, recall, and F1-scores. Finally, we employed explainable AI on each of the utilized models, which adds interpretability to our study. Furthermore, we conducted a comprehensive comparison of the models' explanations, and the study revealed that Xception is more precise in indicating the actual features responsible for a model's predictions. This addition of explainable AI will greatly benefit medical professionals, as they will be able to visualize how a model makes its predictions and will not have to trust the developed machine learning models blindly.

16.
Ultrasonics ; 132: 107017, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37148701

ABSTRACT

Ultrasound imaging is a valuable tool for assessing fetal development during pregnancy. However, interpreting ultrasound images manually can be time-consuming and subject to variability. Automated image categorization using machine learning algorithms can streamline the interpretation process by identifying the stages of fetal development present in ultrasound images. In particular, deep learning architectures have shown promise in medical image analysis, enabling accurate automated diagnosis. The objective of this research is to identify fetal planes from ultrasound images with higher precision. To achieve this, we trained several convolutional neural network (CNN) architectures on a dataset of 12,400 images. Our study focuses on the impact of enhancing image quality, by adopting histogram equalization and fuzzy-logic-based contrast enhancement, on fetal plane detection using the Evidential Dempster-Shafer-based CNN architecture, PReLU-Net, SqueezeNet, and Swin Transformer. The results of each classifier were noteworthy, with PReLU-Net achieving an accuracy of 91.03%, SqueezeNet reaching 91.03% accuracy, Swin Transformer reaching an accuracy of 88.90%, and the Evidential classifier achieving an accuracy of 83.54%. We evaluated the results in terms of both training and testing accuracies. Additionally, we used LIME and Grad-CAM to examine the decision-making process of the classifiers, providing explainability for their outputs. Our findings demonstrate the potential of automated image categorization in large-scale retrospective assessments of fetal development using ultrasound imaging.
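
The histogram-equalization preprocessing step (only this step; the fuzzy-logic variant and the CNN classifiers are out of scope here) can be sketched in a few lines of NumPy, applied to a hypothetical low-contrast frame.

import numpy as np

def histogram_equalize(img_u8):
    # Spread the grey-level histogram of an 8-bit image across [0, 255].
    hist = np.bincount(img_u8.ravel(), minlength=256)
    cdf = hist.cumsum()
    cdf = (cdf - cdf.min()) / (cdf.max() - cdf.min())   # normalize to [0, 1]
    lut = np.round(cdf * 255).astype(np.uint8)          # lookup table per grey level
    return lut[img_u8]

# Hypothetical low-contrast ultrasound-like frame.
rng = np.random.default_rng(0)
frame = rng.integers(90, 140, size=(128, 128), dtype=np.uint8)
enhanced = histogram_equalize(frame)
print(frame.min(), frame.max(), "->", enhanced.min(), enhanced.max())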


Subjects
Algorithms, Neural Networks, Computer, Pregnancy, Female, Humans, Retrospective Studies, Machine Learning, Ultrasonography
17.
Sensors (Basel) ; 12(11): 15599-627, 2012 Nov 12.
Article in English | MEDLINE | ID: mdl-23202224

ABSTRACT

The emergence of heterogeneous applications with diverse requirements for resource-constrained Wireless Body Area Networks (WBANs) poses significant challenges for provisioning Quality of Service (QoS) with multiple constraints (delay and reliability) while preserving energy efficiency. To address such challenges, this paper proposes McMAC, a MAC protocol with multi-constrained QoS provisioning for diverse traffic classes in WBANs. McMAC classifies traffic based on its multi-constrained QoS demands and introduces a novel superframe structure based on the "transmit-whenever-appropriate" principle, which allows diverse periods for diverse traffic classes according to their respective QoS requirements. Furthermore, a novel emergency packet handling mechanism is proposed to ensure packet delivery with the least possible delay and the highest reliability. McMAC is also modeled analytically, and extensive simulations were performed to evaluate its performance. The results reveal that McMAC achieves the desired delay and reliability guarantees according to the requirements of a particular traffic class while achieving energy efficiency.


Subjects
Computer Communication Networks, Wireless Technology, Humans, Models, Theoretical, Reproducibility of Results
18.
Sensors (Basel) ; 12(2): 2175-207, 2012.
Article in English | MEDLINE | ID: mdl-22438759

ABSTRACT

Wireless Sensor Networks (WSNs) are gaining tremendous importance thanks to their broad range of commercial applications, such as smart home automation, healthcare, and industrial automation. In these applications, multi-vendor and heterogeneous sensor nodes are deployed. Due to strict administrative control over specific WSN domains, communication barriers, conflicting goals, and the economic interests of different WSN sensor node vendors, it is difficult to introduce a large-scale federated WSN. By allowing heterogeneous sensor nodes in WSNs to coexist on a shared physical sensor substrate, virtualization in sensor networks may provide flexibility and cost-effective solutions, promote diversity, ensure security, and increase manageability. This paper surveys the novel approach of using large-scale federated WSN resources in a sensor virtualization environment. Our focus is to introduce design goals, challenges, and opportunities for research in the field of sensor network virtualization, as well as to illustrate the current status of research in this field. This paper also presents a wide array of state-of-the-art projects related to sensor network virtualization.


Subjects
Computer Communication Networks/instrumentation, Models, Theoretical, Remote Sensing Technology/instrumentation, Telemetry/instrumentation, Transducers, User-Computer Interface, Computer Simulation, Equipment Design, Equipment Failure Analysis
19.
Neural Comput Appl ; : 1-14, 2022 Nov 17.
Article in English | MEDLINE | ID: mdl-36415284

ABSTRACT

The COVID-19 pandemic has devastated the entire globe since its first appearance at the end of 2019. Although vaccines are now in production, the number of infections remains high, thus increasing the demand for specialized personnel who can analyze clinical exams and point out the final diagnosis. Computed tomography and X-ray images are the primary sources for computer-aided COVID-19 diagnosis, but we still lack better interpretability of such automated decision-making mechanisms. This manuscript presents an insightful comparison of three approaches based on explainable artificial intelligence (XAI) to shed light on interpretability in the context of COVID-19 diagnosis using deep networks: Composite Layer-wise Propagation, Single Taylor Decomposition, and Deep Taylor Decomposition. Two deep networks were used as backbones to assess the explanation skills of the XAI approaches mentioned above: VGG11 and VGG16. We hope that this work can serve as a basis for further research on XAI and COVID-19 diagnosis, since each approach has its own positive and negative points.

20.
J Supercomput ; 78(7): 10250-10274, 2022.
Article in English | MEDLINE | ID: mdl-35079199

ABSTRACT

This paper designs and develops a computational intelligence-based framework using a convolutional neural network (CNN) and a genetic algorithm (GA) to detect COVID-19 cases. The framework utilizes multi-access edge computing technology so that end users can access available resources as well as the CNN on the cloud. Early detection of COVID-19 can improve treatment and mitigate transmission. During peaks of infection, hospitals worldwide have suffered from heavy patient loads, bed shortages, inadequate testing kits, and short-staffing problems. Due to the time-consuming nature of the standard RT-PCR test, the lack of expert radiologists, and evaluation issues relating to poor-quality images, patients with severe conditions are sometimes unable to receive timely treatment. It is thus recommended to incorporate computational intelligence methodologies, which provide highly accurate detection in a matter of minutes, alongside traditional testing as an emergency measure. CNNs have achieved extraordinary performance in numerous computational intelligence tasks. However, finding a systematic, automatic, and optimal set of hyperparameters for building an efficient CNN for complex tasks remains challenging. Moreover, due to the advancement of technology, data are collected at sparse locations, and accumulating data from such diverse, sparse locations poses a challenge. In this article, we propose a computational intelligence-based framework that utilizes the recent 5G mobile technology of multi-access edge computing along with a new CNN model for automatic COVID-19 detection using raw chest X-ray images. This framework means that anyone with a 5G device (e.g., a 5G mobile phone) should be able to use the CNN-based automatic COVID-19 detection tool. As part of the proposed automated model, a novel CNN structure is introduced with the genetic algorithm (GA) for hyperparameter tuning; such a combination of GA and CNN is new in the application of COVID-19 detection/classification. The experimental results show that the developed framework can classify COVID-19 X-ray images with 98.48% accuracy, which is higher than the performance achieved in other studies.
