Results 1 - 20 of 27
1.
Sensors (Basel) ; 22(16)2022 Aug 09.
Article in English | MEDLINE | ID: mdl-36015699

ABSTRACT

Over the last decade, the usage of Internet of Things (IoT) enabled applications, such as healthcare, intelligent vehicles, and smart homes, has increased progressively. These IoT applications generate delay-sensitive data and require quick resources for execution. Recently, software-defined networks (SDN) have offered an edge computing paradigm (e.g., fog computing) to run these applications with minimal end-to-end delay. Offloading and scheduling are promising edge computing schemes for running delay-sensitive IoT applications while satisfying their requirements. However, in dynamic environments, existing offloading and scheduling techniques are not ideal and degrade the performance of such applications. This article formulates the joint offloading and scheduling problem as a combinatorial integer linear program (CILP) and proposes a joint task offloading and scheduling (JTOS) framework based on it. JTOS consists of task offloading, sequencing, scheduling, searching, and failure-handling components. The study's goal is to minimize the hybrid delay of all applications. The performance evaluation shows that JTOS outperforms all existing baseline methods in hybrid delay for all applications in the dynamic environment, reducing processing delay by 39% and communication delay by 35% for IoT applications compared to existing schemes.
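The joint offloading-and-scheduling idea in this abstract can be sketched as a tiny combinatorial search: for each task, choose fog or cloud placement so that total processing-plus-communication ("hybrid") delay is minimal. All task sizes, node speeds, and link delays below are hypothetical toy values, not parameters from the JTOS paper; a brute-force search stands in for the CILP solver.

```python
from itertools import product

# Toy inputs (hypothetical): workload units per task, node speeds, link delays.
tasks = [4.0, 2.0, 6.0]                   # workload units per task
proc_speed = {"fog": 2.0, "cloud": 8.0}   # units processed per ms
comm_delay = {"fog": 1.0, "cloud": 10.0}  # fixed link delay in ms

def hybrid_delay(assignment):
    """Sum of processing + communication delay over all tasks."""
    return sum(w / proc_speed[n] + comm_delay[n]
               for w, n in zip(tasks, assignment))

def jtos_bruteforce():
    """Exhaustively search all fog/cloud assignments (a CILP surrogate)."""
    best = min(product(["fog", "cloud"], repeat=len(tasks)),
               key=hybrid_delay)
    return best, hybrid_delay(best)

assignment, delay = jtos_bruteforce()
```

With these toy numbers the cloud's faster processing never amortizes its link delay, so every task stays on the fog node; a real formulation would add deadlines and per-node capacity constraints.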


Subject(s)
Cloud Computing, Internet of Things, Delivery of Health Care
2.
Sensors (Basel) ; 22(6)2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35336263

ABSTRACT

The electroencephalogram (EEG) has introduced massive potential for user identification. Several studies have shown that EEG provides unique features and is robust against spoofing attacks. EEG is a graphic recording of the brain's electrical activity that electrodes placed at different locations on the scalp can capture. However, selecting which electrodes to use is a challenging task. This is formulated as an electrode selection problem and tackled by optimization methods. In this work, a new approach to selecting the most representative electrodes is introduced. The proposed algorithm, called FPAβ-hc, is a hybrid of the Flower Pollination Algorithm and the β-Hill Climbing optimizer. The performance of the FPAβ-hc algorithm is evaluated using a standard EEG motor imagery dataset. The experimental results show that FPAβ-hc can use fewer than half of the electrodes while achieving more accurate results than seven other methods.
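The local-search half of such a hybrid can be sketched as β-hill climbing over a binary electrode mask: flip one bit per iteration, occasionally reset a random bit (the β operator), and keep the candidate when fitness does not decrease. The per-channel utility weights and the fitness function here are toy stand-ins, not anything from the FPAβ-hc paper; real use would score a mask by training a classifier on the selected channels.

```python
import random

WEIGHTS = [0.9, 0.1, 0.8, 0.05, 0.7, 0.2]   # hypothetical channel utility

def fitness(mask):
    # Reward useful channels; penalize each selected electrode slightly.
    return sum(w for w, m in zip(WEIGHTS, mask) if m) - 0.3 * sum(mask)

def beta_hill_climb(mask, iters=200, beta=0.1, seed=0):
    rng = random.Random(seed)
    mask = list(mask)
    for _ in range(iters):
        cand = list(mask)
        i = rng.randrange(len(cand))
        cand[i] ^= 1                      # neighbourhood move: flip one bit
        if rng.random() < beta:           # beta operator: random reset of a bit
            j = rng.randrange(len(cand))
            cand[j] = rng.randint(0, 1)
        if fitness(cand) >= fitness(mask):
            mask = cand                   # accept non-worsening candidates
    return mask

best = beta_hill_climb([1] * 6)           # start from "all electrodes on"
```

Because only non-worsening moves are accepted, the returned mask is never worse than the all-electrodes start, mirroring the abstract's claim of matching accuracy with fewer channels.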


Subject(s)
Imagination, Pollination, Algorithms, Electroencephalography/methods, Flowers
3.
Expert Syst ; 39(3): e12759, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34511689

ABSTRACT

COVID-19 is the disease evoked by a new breed of coronavirus called the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). Recently, COVID-19 has become a pandemic, infecting more than 152 million people in over 216 countries and territories. The exponential increase in the number of infections has rendered traditional diagnosis techniques inefficient. Therefore, many researchers have developed intelligent techniques, such as deep learning (DL) and machine learning (ML), which can assist the healthcare sector in providing quick and precise COVID-19 diagnosis. This paper provides a comprehensive review of the most recent DL and ML techniques for COVID-19 diagnosis, covering studies published from December 2019 until April 2021. In general, this paper includes more than 200 studies carefully selected from several publishers, such as IEEE, Springer and Elsevier. We classify the research tracks into two categories, DL and ML, and present COVID-19 public datasets established and extracted from different countries. The measures used to evaluate diagnosis methods are comparatively analysed and a proper discussion is provided. In conclusion, for COVID-19 diagnosis and outbreak prediction, SVM is the most widely used machine learning mechanism and CNN is the most widely used deep learning mechanism, while accuracy, sensitivity, and specificity are the most widely used measurements in previous studies. Finally, this review will guide the research community in the upcoming development of ML and DL for COVID-19 and inspire future work.

4.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200216

ABSTRACT

Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples lead DL models to misclassify instances that humans consider benign. Practical applications in real physical scenarios with adversarial threats exhibit these characteristics. Thus, adversarial attacks and defenses, including for machine learning and its reliability, have drawn growing interest and have been a hot research topic in recent years. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, comprising adversarial training and a feature fusion strategy that preserves classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the diabetic retinopathy recognition problem, which is considered a state-of-the-art endeavor. Results obtained on the retinal fundus images, which are prone to adversarial attacks, are 99% accurate and show that the proposed defensive model is robust.
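The two ingredients named in the abstract can be illustrated in a few lines: speckle noise is multiplicative (x · (1 + n)), and adversarial training, in its simplest form, mixes perturbed copies into the training batch while keeping the clean labels. The shapes and noise strength below are toy assumptions, not values from the paper.

```python
import numpy as np

def speckle_attack(images, strength=0.2, seed=0):
    """Multiplicative (speckle) noise: x * (1 + n), clipped to [0, 1]."""
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, strength, size=images.shape)
    return np.clip(images * (1.0 + noise), 0.0, 1.0)

def adversarial_augment(images, labels, strength=0.2):
    """Return the batch plus speckle-perturbed copies, labels preserved."""
    attacked = speckle_attack(images, strength)
    return (np.concatenate([images, attacked]),
            np.concatenate([labels, labels]))

batch = np.full((4, 8, 8), 0.5)       # four toy 8x8 "fundus" images
labels = np.array([0, 1, 0, 1])
aug_x, aug_y = adversarial_augment(batch, labels)
```

Training a classifier on `aug_x`/`aug_y` instead of the clean batch is the adversarial-training step; the paper's feature-fusion defense would sit on top of this.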


Subject(s)
Diabetes Mellitus, Diabetic Retinopathy, Algorithms, Artificial Intelligence, Diabetic Retinopathy/diagnosis, Humans, Neural Networks, Computer, Reproducibility of Results
5.
Sensors (Basel) ; 21(12)2021 Jun 14.
Article in English | MEDLINE | ID: mdl-34198608

ABSTRACT

The Internet of Medical Things (IoMT) is increasingly being used for healthcare purposes. IoMT enables many sensors to collect patient data from various locations and send it to a distributed hospital for further study, and it provides patients with a variety of paid programmes to help them keep track of their health problems. However, current system services are expensive, and offloaded data in the healthcare network are insecure. This research develops a new, cost-effective and stable IoMT framework based on a blockchain-enabled fog cloud. The study aims to reduce the cost of healthcare application services as they are processed in the system. The study devises an IoMT system based on different algorithmic techniques within a Blockchain-Enabled Smart-Contract Cost-Efficient Scheduling Algorithm Framework (BECSAF). Smart-contract blockchain schemes ensure data consistency and validation with symmetric cryptography, while heterogeneous earliest-finish-time-based scheduling handles the execution of workflow tasks, scheduled on different nodes, under their deadlines. Simulation results show that the proposed algorithm schemes outperform all existing baseline approaches in terms of application execution.
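The earliest-finish-time scheduling idea the abstract alludes to can be sketched as follows: each workflow task goes to the node that would finish it soonest given current node availability, and tasks missing their deadlines are flagged. Node speeds, task loads, and deadlines are hypothetical toy values.

```python
nodes = {"fog1": 1.0, "fog2": 2.0}   # seconds per workload unit (toy)
tasks = [("t1", 3.0, 10.0),          # (task id, load, deadline in seconds)
         ("t2", 2.0, 6.0),
         ("t3", 4.0, 12.0)]

def eft_schedule(tasks, nodes):
    """Greedy earliest-finish-time assignment with deadline checking."""
    ready = {n: 0.0 for n in nodes}   # time each node becomes free
    plan, missed = {}, []
    for tid, load, deadline in tasks:
        node = min(nodes, key=lambda n: ready[n] + load * nodes[n])
        finish = ready[node] + load * nodes[node]
        ready[node] = finish
        plan[tid] = (node, finish)
        if finish > deadline:
            missed.append(tid)
    return plan, missed

plan, missed = eft_schedule(tasks, nodes)
```

Here t1 occupies the fast node, so t2 finishes sooner on the slower but idle fog2; all deadlines are met. A full HEFT implementation would also rank tasks by upward path length before assignment.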


Subject(s)
Blockchain, Internet of Things, Algorithms, Delivery of Health Care, Humans
6.
Sensors (Basel) ; 21(20)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34696135

ABSTRACT

In the last decade, developments in healthcare technologies have increased progressively in practice. Healthcare applications such as ECG monitoring, heartbeat analysis, and blood pressure control connect with external servers in a manner called cloud computing. The emerging cloud paradigm offers different models, such as fog computing and edge computing, to enhance the performance of healthcare applications with minimal end-to-end delay in the network. However, many research challenges remain in fog-cloud enabled networks for healthcare applications. Therefore, in this paper, a Critical Healthcare Task Management (CHTM) model is proposed and implemented using an ECG dataset. We design a resource scheduling model among fog nodes at the fog level, and a multi-agent system is proposed to provide complete management of the network from the edge to the cloud. The proposed model overcomes the limitations of existing work by providing interoperability, resource sharing, scheduling, and dynamic task allocation to manage critical tasks effectively. The simulation results show that our model, in comparison with the cloud, significantly reduces network usage by 79%, response time by 90%, network delay by 65%, energy consumption by 81%, and instance cost by 80%.


Subject(s)
Cloud Computing, Electrocardiography, Computer Simulation, Delivery of Health Care, Models, Theoretical
7.
Sensors (Basel) ; 20(7)2020 Mar 27.
Article in English | MEDLINE | ID: mdl-32230843

ABSTRACT

In healthcare applications, numerous sensors and devices produce massive amounts of data that are the focus of critical tasks, and their management at the edge of the network can be handled by fog computing. However, fog nodes suffer from a lack of resources that can limit the time needed for final outcomes/analytics, so a fog node can perform only a small number of tasks. A difficult decision concerns which tasks a fog node should perform locally: each node should select tasks carefully based on current contextual information, such as task priority, resource load, and resource availability. In this paper we suggest a multi-agent fog computing model for healthcare critical task management. The main role of the multi-agent system is to map between three decision tables to optimize the scheduling of critical tasks by matching tasks with their priority, the load in the network, and network resource availability. The first step is to decide whether a critical task can be processed locally; otherwise, the second step involves the careful selection of the most suitable neighbouring fog node to allocate it to. If no fog node throughout the network is capable of processing the task, it is sent to the cloud, which incurs the highest latency. We test the proposed scheme thoroughly, demonstrating its applicability and optimality at the edge of the network using the iFogSim simulator and UTeM clinic data.
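The three-step placement decision described above (local, then best neighbour, then cloud) reduces to a short fallback function. The capacity numbers and the "most spare capacity" tie-break are illustrative assumptions, not the paper's actual decision tables.

```python
def place_task(task_load, local_free, neighbours):
    """Three-tier placement: local fog node, neighbour fog node, or cloud.

    neighbours: dict mapping node name -> free capacity (toy units).
    """
    if task_load <= local_free:
        return "local"                   # step 1: process on this fog node
    fits = {n: cap for n, cap in neighbours.items() if cap >= task_load}
    if fits:
        return max(fits, key=fits.get)   # step 2: neighbour with most slack
    return "cloud"                       # step 3: last resort, highest latency

assert place_task(2.0, 5.0, {"fogB": 1.0}) == "local"
assert place_task(4.0, 1.0, {"fogB": 6.0, "fogC": 9.0}) == "fogC"
assert place_task(9.0, 1.0, {"fogB": 2.0}) == "cloud"
```

A fuller model would weigh task priority and network load alongside raw capacity, as the abstract's three decision tables do.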


Subject(s)
Biosensing Techniques, Computer Simulation, Delivery of Health Care/trends, Algorithms, Cloud Computing, Humans
8.
Comput Biol Med ; 169: 107845, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38118307

ABSTRACT

Utilizing digital healthcare services for patients who use wheelchairs is a vital and effective means to enhance their healthcare. Digital healthcare integrates various healthcare facilities, including local laboratories and centralized hospitals, to provide healthcare services for individuals in wheelchairs. In digital healthcare, the Internet of Medical Things (IoMT) allows local wheelchairs to connect with remote digital healthcare services and collects sensor data from wheelchairs for health monitoring and processing. It has been observed that wheelchair patients older than thirty often suffer from high blood pressure, heart disease, elevated blood glucose, and other conditions due to reduced activity caused by their disabilities. However, existing wheelchair IoMT applications are straightforward and do not consider the healthcare of wheelchair patients with these diseases. This paper presents a novel digital healthcare framework for patients with disabilities based on deep federated learning schemes. In the proposed framework, we offer federated learning deep convolutional neural network schemes (FL-DCNNS) consisting of different sub-schemes. The offloading scheme collects readings such as blood pressure, heartbeat, blood glucose, and oxygen from biosensors integrated into wheelchairs and from smartwatches, which serve as wearable devices for disabled patients in our framework. We present federated learning-enabled laboratories that train on local data and share the updated weights, with data security, to the centralized node for decision and prediction, and a decision forest at the centralized healthcare node that decides on aggregation under different constraints: cost, energy, time, and accuracy. We implemented a deep CNN scheme in each laboratory to train and validate the model locally on the node while accounting for available resources. Simulation results show that FL-DCNNS obtained optimal results on the sensor data, minimizing energy by 25%, time by 19%, and cost by 28%, and achieving 99% disease-prediction accuracy compared to existing digital healthcare schemes for wheelchair patients.
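The federated step implied by this framework, local training in each laboratory with only weight updates shared, can be sketched with the classic federated-averaging rule: the central node averages the laboratories' weight vectors, weighted by local sample counts. The numbers below are toy values.

```python
def fed_avg(updates):
    """updates: list of (weights, n_samples); returns sample-weighted mean.

    Only weight vectors leave each laboratory; raw patient data stays local.
    """
    total = sum(n for _, n in updates)
    dim = len(updates[0][0])
    return [sum(w[i] * n for w, n in updates) / total for i in range(dim)]

# Two hypothetical laboratories with different amounts of local data.
lab_updates = [([1.0, 2.0], 100), ([3.0, 4.0], 300)]
global_w = fed_avg(lab_updates)   # lab 2's update dominates 3:1
```

In the paper's design, this aggregation is additionally gated by the decision forest's cost/energy/time/accuracy constraints; the plain average here is the unconstrained core.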


Subject(s)
Disabled Persons, Health Facilities, Humans, Hospitals, Laboratories, Glucose
9.
Comput Biol Med ; 154: 106617, 2023 03.
Article in English | MEDLINE | ID: mdl-36753981

ABSTRACT

These days, the rate of cancer among patients has been growing day by day, and many cancer cases have recently been reported in different clinical hospitals. Many machine learning algorithms have been suggested in the literature to predict cancer diseases with the same class types based on training and test data, but there remains much room for further research. In this paper, the study looks into different types of cancer by analyzing, classifying, and processing a multi-omics dataset in a fog cloud network. Based on on-policy SARSA and multi-omics workload learning, enabled by reinforcement learning, the study devises new hybrid cancer detection schemes. The system consists of different layers, such as clinical data collection via laboratory and tool processes (biopsy, colonoscopy, and mammography) at the distributed omics-based clinics in the network. The study considers different cancer classes, such as carcinomas, sarcomas, leukemias, and lymphomas, with their types, and processes them using the distributed multi-omics clinics. To solve the problem, the study presents the omics cancer workload state-action-reward-state-action ("SARSA") learning scheme (OCWLS), an on-policy learning scheme defined over parameters such as states, actions, timestamps, reward, accuracy, and processing time constraints. The goal is to process multiple cancer classes and perform workload feature matching while reducing processing time in the distributed clinical hospitals. Simulation results show that OCWLS outperforms other machine learning methods in terms of processing time, extracting features from multiple classes of cancer, and matching in the system.
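The on-policy SARSA rule at the core of such a scheme is a one-line update: Q(s,a) ← Q(s,a) + α·(r + γ·Q(s',a') − Q(s,a)). The tiny workload-routing example below (states named after cancer classes, actions after clinics) is a hypothetical illustration, not the paper's OCWLS state space.

```python
from collections import defaultdict

def sarsa_update(Q, s, a, r, s2, a2, alpha=0.5, gamma=0.9):
    """One on-policy SARSA step: uses the action a2 actually chosen next."""
    Q[(s, a)] += alpha * (r + gamma * Q[(s2, a2)] - Q[(s, a)])
    return Q

Q = defaultdict(float)
# One observed transition: routing a "carcinoma" workload to clinic A
# earned reward 1.0, and the policy next chose clinic B for a "sarcoma".
sarsa_update(Q, "carcinoma", "clinicA", 1.0, "sarcoma", "clinicB")
```

Unlike Q-learning, the bootstrap term uses the next action the policy actually took, which is what makes the method on-policy, as the abstract emphasizes.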


Subject(s)
Multiomics, Neoplasms, Humans, Reward, Algorithms, Reinforcement, Psychology, Neoplasms/diagnosis
10.
Diagnostics (Basel) ; 13(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36832152

ABSTRACT

This research aims to review and evaluate the most relevant scientific studies on deep learning (DL) models in the omics field. It also aims to fully realize the potential of DL techniques in omics data analysis by demonstrating this potential and identifying the key challenges that must be addressed. Several elements are essential for comprehending the surveyed studies, such as the clinical applications and datasets reported in the literature, and the published literature highlights the difficulties encountered by other researchers. In addition to identifying related work such as guidelines, comparative studies, and review papers, a systematic approach was used to search all relevant publications on omics and DL using different keyword variants. The search covered 2018 to 2022 and was conducted on four indexes: IEEE Xplore, Web of Science, ScienceDirect, and PubMed. These indexes were chosen because they offer sufficient coverage of and linkage to numerous papers in the biological field. A total of 65 articles were added to the final list, with inclusion and exclusion criteria specified. Of the 65 publications, 42 are clinical applications of DL to omics data; 16 are review publications based on single- and multi-omics data in the proposed taxonomy; and only a small number (7/65) focus on comparative analysis and guidelines. The use of DL in studying omics data presents several obstacles related to DL itself, preprocessing procedures, datasets, model validation, and testbed applications, and numerous relevant investigations have been performed to address these issues. Unlike other review papers, our study distinctly reflects different observations on the areas where omics meets DL models. We believe the results of this study can serve as a useful guideline for practitioners seeking a comprehensive view of the role of DL in omics data analysis.

11.
Comput Biol Med ; 166: 107539, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37804778

ABSTRACT

The incidence of Autism Spectrum Disorder (ASD) among children, attributed to genetic and environmental factors, has been increasing daily. ASD is a non-curable neurodevelopmental disorder that affects children's communication, behavior, social interaction, and learning skills. While machine learning has been employed for ASD detection in children, existing ASD frameworks offer limited services to monitor and improve the health of ASD patients. This paper presents an efficient ASD framework with comprehensive services that enhances the results of existing ASD frameworks. Our proposed approach is the federated learning-enabled CNN-LSTM (FCNN-LSTM) scheme, designed for ASD detection in children using multimodal datasets. The ASD framework is built in a distributed computing environment where different ASD laboratories are connected to a central hospital. The FCNN-LSTM scheme enables local laboratories to train and validate different datasets, including the Ages and Stages Questionnaires (ASQ), Facial Communication and Symbolic Behavior Scales (CSBS), Parents Evaluate Developmental Status (PEDS), Modified Checklist for Autism in Toddlers (M-CHAT), and Screening Tool for Autism in Toddlers and Children (STAT) datasets, at different computing laboratories. To ensure the security of patient data, we have implemented a security mechanism based on the Advanced Encryption Standard (AES) within the federated learning environment, which allows all laboratories to offload and download data securely. We integrate all trained models at the aggregation nodes and make the final decision for ASD patients based on a decision process tree. Additionally, we have designed various Internet of Things (IoT) applications to improve outcomes for ASD patients and achieve more optimal learning results. Simulation results demonstrate that our proposed framework achieves an ASD detection accuracy of approximately 99% compared to all existing ASD frameworks.

12.
IEEE J Biomed Health Inform ; 27(2): 673-683, 2023 02.
Article in English | MEDLINE | ID: mdl-35635827

ABSTRACT

The Internet of Things (IoT) is a network of technologies that supports a wide variety of healthcare workflow applications, enabling users to obtain real-time healthcare services. Many patients and doctors use different healthcare services to monitor healthcare and save records on hospital servers. Healthcare sensors are widely linked to the outside world for different disease classifications and queries. These applications are extraordinarily dynamic and use mobile devices that roam across several locales. However, healthcare apps confront two significant challenges: data privacy and the cost of application execution services. This work presents the mobility-aware security dynamic service composition (MSDSC) algorithmic framework for healthcare workflows, based on serverless computing and restricted Boltzmann machine mechanisms. The study proposes a stochastic deep neural network that trains probabilistic models at each phase of the process, including service composition, task sequencing, security, and scheduling. The experimental setup and findings reveal that the developed methods outperform traditional methods by 25% in terms of security and by 35% in application cost.


Subject(s)
Delivery of Health Care, Internet of Things, Humans, Privacy, Internet
13.
Soft comput ; 27(5): 2657-2672, 2023.
Article in English | MEDLINE | ID: mdl-33250662

ABSTRACT

The outbreaks of the coronavirus (COVID-19) epidemic have increased the pressure on healthcare and medical systems worldwide. The timely diagnosis of infected patients is a critical step in limiting the spread of the epidemic, and chest radiography imaging has proven to be an effective screening technique for diagnosing COVID-19. To reduce the pressure on radiologists and help control the epidemic, a fast and accurate hybrid deep learning framework for diagnosing the COVID-19 virus in chest X-ray images, termed the COVID-CheXNet system, is developed. First, the contrast of the X-ray image is enhanced and the noise level reduced using contrast-limited adaptive histogram equalization and a Butterworth bandpass filter, respectively. This is followed by fusing the results obtained from two different pre-trained deep learning models, a ResNet34 and a high-resolution network model, trained on a large-scale dataset. This parallel architecture provides radiologists with a high degree of confidence in discriminating between healthy and COVID-19-infected people. The proposed COVID-CheXNet system correctly and accurately diagnoses COVID-19 patients with a detection accuracy of 99.99%, sensitivity of 99.98%, specificity of 100%, precision of 100%, F1-score of 99.99%, MSE of 0.011%, and RMSE of 0.012%, using a weighted sum rule at the score level. The efficiency and usefulness of the proposed COVID-CheXNet system are established, along with the possibility of using it in real clinical centers for fast diagnosis and as a treatment supplement, with less than 2 s per image to obtain the prediction result.
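The score-level weighted-sum fusion mentioned above is the simplest of the fusion rules: each model produces a score per image, and the decision is made on their weighted combination. The weights, scores, and threshold below are hypothetical, not the values tuned for COVID-CheXNet.

```python
def fuse_scores(score_a, score_b, w_a=0.6, w_b=0.4, threshold=0.5):
    """Score-level weighted-sum fusion of two model outputs in [0, 1]."""
    fused = w_a * score_a + w_b * score_b
    return fused, ("covid" if fused >= threshold else "healthy")

# Model A (e.g., the ResNet34 branch) and model B (the HRNet branch)
# each score one image; the fused score drives the final label.
fused, label = fuse_scores(0.9, 0.7)
```

The appeal of this rule is that either branch can veto a weak positive from the other, which is why the abstract frames the parallel architecture as confidence-building for radiologists.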

14.
Bioengineering (Basel) ; 10(2)2023 Jan 22.
Article in English | MEDLINE | ID: mdl-36829641

ABSTRACT

Susceptibility analysis is an intelligent technique that not only assists decision makers in assessing the suspected severity of any sort of brain tumour in a patient but also helps them diagnose and cure these tumours. This technique has proven especially useful in developing countries where health and funding resources are limited. By employing set-based operations of a mathematical model, namely the fuzzy parameterised complex intuitionistic fuzzy hypersoft set (FPCIFHSS), this study seeks to develop a robust multi-attribute decision support mechanism for appraising patients' susceptibility to brain tumours. The FPCIFHSS is regarded as more reliable and general for handling information-based uncertainties because its complex components and fuzzy parameterisation are designed to deal with the periodic nature of the data and with dubious parameters (sub-parameters), respectively. In the proposed FPCIFHSS-susceptibility model, suitable types of brain tumours are approximated with respect to the most relevant symptoms (parameters) based on the expert opinions of decision makers, expressed as complex intuitionistic fuzzy numbers (CIFNs). After determining the fuzzy parameterised values of multi-argument-based tuples and converting the CIFNs into fuzzy values, the scores for such types of tumours are computed using a core matrix which relates them to the fuzzy parameterised multi-argument-based tuples. Sub-intervals within [0, 1] denote patients' susceptibility degrees to these types of brain tumours, and a patient's susceptibility is examined by observing in which sub-interval the score value falls.

15.
J Adv Res ; 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37839503

ABSTRACT

INTRODUCTION: The Industrial Internet of Water Things (IIoWT) has recently emerged as a leading architecture for efficient water distribution in smart cities. Its primary purpose is to ensure high-quality drinking water for various institutions and households. However, the existing IIoWT architecture faces many challenges; one of the paramount challenges is achieving data standardization and data fusion across the multiple monitoring institutions responsible for assessing water quality and quantity. OBJECTIVE: This paper introduces the Industrial Internet of Water Things System for Data Standardization based on Blockchain and Digital Twin Technology. The main objective of this study is to design a new IIoWT architecture that meets the requirements of data standardization, interoperability, and data security among different water institutions. METHODS: We devise a digital twin-enabled cross-platform environment using the Message Queuing Telemetry Transport (MQTT) protocol to achieve seamless interoperability in heterogeneous computing. In water management, we encounter different types of data from various sensors; therefore, we propose a CNN-LSTM and blockchain data transactional (BCDT) scheme for processing valid data across different nodes. RESULTS: Through simulation results, we demonstrate that the proposed IIoWT architecture significantly reduces processing time while improving the accuracy of data standardization within the water distribution management system. CONCLUSION: Overall, this paper presents a comprehensive approach to tackling the challenges of data standardization and security in the IIoWT architecture.

16.
PeerJ Comput Sci ; 9: e1423, 2023.
Article in English | MEDLINE | ID: mdl-37409080

ABSTRACT

Due to the vast variety of aspects that must be considered, many of which conflict with one another, choosing a home can be difficult for those without much experience. Because such decisions are difficult, individuals must spend more time on them, which can result in poor choices. To overcome residence selection issues, a computational approach is necessary: inexperienced people can use decision support systems to help them make decisions of expert quality. This article explains the empirical procedure used to construct a decision support system for selecting a residence. The main goal of this study is to build a weighted-product-based decision support system for residential preference. The house short-listing estimation is based on several key requirements derived from interaction between the researchers and experts. The results of the information processing show that the normalized product strategy can rank the available alternatives to help individuals choose the best option. The interval-valued fuzzy hypersoft set (IVFHS-set) is a broader variant of the fuzzy soft set that resolves its constraints through the use of a multi-argument approximation operator. This operator maps sub-parametric tuples into the power set of the universe and emphasizes the partition of every attribute into a disjoint attribute-valued set. These characteristics make it a new mathematical tool for handling problems involving uncertainty, rendering the decision-making process more effective and efficient. Furthermore, the traditional TOPSIS technique, a multi-criteria decision-making strategy, is discussed concisely. A new decision-making strategy, "OOPCS", is constructed by modifying TOPSIS for fuzzy hypersoft sets in interval settings. The proposed strategy is applied to a real-world multi-criteria decision-making scenario for ranking the alternatives, to check and demonstrate its efficiency and effectiveness.
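The weighted-product mechanism at the heart of the described system scores each alternative as the product of its normalized criterion values raised to the criterion weights, then ranks by score. The houses, criterion values, and weights below are toy assumptions for illustration only.

```python
# Hypothetical alternatives with three normalized criterion values each
# (e.g., price fit, location, condition), and weights summing to 1.
houses = {"A": [0.8, 0.5, 0.9], "B": [0.6, 0.9, 0.7]}
weights = [0.5, 0.3, 0.2]

def weighted_product(values, weights):
    """Weighted product model: prod(v_i ** w_i) over all criteria."""
    score = 1.0
    for v, w in zip(values, weights):
        score *= v ** w
    return score

ranking = sorted(houses,
                 key=lambda h: weighted_product(houses[h], weights),
                 reverse=True)
```

Unlike a weighted sum, the product form punishes any alternative that scores near zero on even one criterion, which suits conflicting requirements; the paper's OOPCS strategy layers interval-valued fuzzy hypersoft machinery on top of this kind of ranking.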

17.
Heliyon ; 9(11): e21639, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38027596

ABSTRACT

Over the past decade, there has been a significant increase in customer usage of public transport applications in smart cities. These applications rely on various services, such as communication and computation, provided by additional nodes within the smart city environment. However, these services are delivered by a diverse range of widely distributed and heterogeneous cloud computing-based servers, making cybersecurity a crucial challenge among them. Numerous machine-learning approaches have been proposed in the literature to address the cybersecurity challenges of heterogeneous transport applications in smart cities, but the centralized security and scheduling strategies suggested so far have yet to produce optimal results for transport applications. This work presents a secure decentralized infrastructure for transporting data in fog cloud networks, introducing Multi-Objective Reinforcement Federated Learning Blockchain (MORFLB) for transport infrastructure. MORFLB aims to minimize processing and transfer delays while maximizing long-term rewards by identifying known and unknown attacks on remote sensing data in vehicle applications. MORFLB incorporates multi-agent policies, proof-of-work hashing validation, and decentralized deep neural network training to achieve minimal processing and transfer delays. It comprises vehicle applications and decentralized fog and cloud nodes based on blockchain reinforcement federated learning, which improves rewards through trial and error. The study formulates a combinatorial problem that minimizes and maximizes various factors for vehicle applications. The experimental results demonstrate that MORFLB effectively reduces processing and transfer delays while maximizing rewards compared to existing studies, providing a promising solution to the cybersecurity challenges of intelligent transport applications in smart cities. In conclusion, this paper presents MORFLB, a combination of different schemes that ensures the execution of transport data under its constraints and achieves optimal results with the suggested decentralized blockchain-based infrastructure.

18.
Comput Intell Neurosci ; 2022: 1307944, 2022.
Article in English | MEDLINE | ID: mdl-35996653

ABSTRACT

Due to the COVID-19 pandemic, computerized COVID-19 diagnosis studies are proliferating. The diversity of COVID-19 models raises the questions of which diagnostic model should be selected and which performance criteria decision-makers of healthcare organizations should consider. A selection scheme is therefore necessary to address these issues. This study proposes an integrated method for selecting the optimal deep learning model, based on a novel crow swarm optimization algorithm, for COVID-19 diagnosis. The crow swarm optimization is employed to find an optimal set of coefficients using a designed fitness function for evaluating the performance of the deep learning models. The crow swarm optimization is modified to obtain a good coefficient distribution by considering the best average fitness. We utilized two datasets: the first includes 746 computed tomography images, 349 of confirmed COVID-19 cases and 397 of healthy individuals; the second is composed of unenhanced computed tomography images of the lung for 632 positive cases of COVID-19. Fifteen trained and pretrained deep learning models, assessed with nine evaluation metrics, are used to evaluate the developed methodology. Among the pretrained CNN and deep models on the first dataset, ResNet50 has an accuracy of 91.46% and an F1-score of 90.49%. For the first dataset, the ResNet50 algorithm is selected as the optimal deep learning model for COVID-19 identification, with a closeness overall fitness value of 5715.988. For the second dataset, the VGG16 algorithm is selected as the optimal model, with a closeness overall fitness value of 5758.791. Overall, InceptionV3 had the lowest performance on both datasets.
The proposed evaluation methodology is a helpful tool to assist healthcare managers in selecting and evaluating optimal COVID-19 diagnosis models based on deep learning.
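The selection scheme described above scores each candidate model with a fitness function over several performance criteria, with the coefficients tuned by crow swarm optimization. A minimal sketch of the scoring-and-ranking step is shown below; the metric values and the fixed weights are invented for illustration, whereas the paper derives the coefficients with its optimizer rather than fixing them by hand.

```python
# Hypothetical illustration of ranking candidate diagnostic models by a
# weighted fitness over several evaluation metrics. In the paper, the
# weights are found by a crow swarm optimization algorithm; here they are
# fixed by hand purely for demonstration.

def fitness(metrics, weights):
    """Weighted sum of metric values (all metrics normalized to [0, 1])."""
    return sum(weights[name] * value for name, value in metrics.items())

# Invented metric values for three of the pretrained models mentioned above.
candidates = {
    "ResNet50":    {"accuracy": 0.9146, "f1": 0.9049, "sensitivity": 0.90},
    "VGG16":       {"accuracy": 0.89,   "f1": 0.88,   "sensitivity": 0.91},
    "InceptionV3": {"accuracy": 0.85,   "f1": 0.84,   "sensitivity": 0.83},
}
weights = {"accuracy": 0.4, "f1": 0.4, "sensitivity": 0.2}

ranked = sorted(candidates, key=lambda m: fitness(candidates[m], weights),
                reverse=True)
print(ranked[0])  # the model with the highest weighted fitness
```

With these example numbers the ranking favors ResNet50, mirroring the first-dataset result reported in the abstract; a different coefficient distribution could favor a different model, which is exactly what the optimizer searches over.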


Subject(s)
COVID-19 , Crows , Deep Learning , Algorithms , Animals , COVID-19/diagnosis , COVID-19 Testing , Humans , Pandemics
19.
J Healthc Eng ; 2022: 5329014, 2022.
Article in English | MEDLINE | ID: mdl-35368962

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a novel disease that affects healthcare on a global scale and cannot be ignored because of its high fatality rate. Computed tomography (CT) images are presently being employed to assist doctors in detecting COVID-19 in its early stages. In several scenarios, a combination of epidemiological criteria (contact during the incubation period), clinical symptoms, laboratory tests (nucleic acid amplification tests), and clinical imaging-based tests is used to diagnose COVID-19. This approach can miss patients and cause complications. Deep learning is one of the techniques that has proven prominent and reliable in several diagnostic domains involving medical imaging. This study utilizes a convolutional neural network (CNN), a stacked autoencoder, and a deep neural network to develop a COVID-19 diagnostic system. In this system, the classification stage is adapted before the three techniques are applied to CT images to distinguish normal from COVID-19 cases. A large-scale and challenging CT image dataset was used to train the employed deep learning models and to report their final performance. Experimental outcomes show that the highest accuracy was achieved by the CNN model, with an accuracy of 88.30%, a sensitivity of 87.65%, and a specificity of 87.97%. Furthermore, the proposed system outperformed existing state-of-the-art models in detecting the COVID-19 virus from CT images.
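The abstract reports its results as accuracy, sensitivity, and specificity. A minimal sketch of how these three metrics are computed from binary confusion-matrix counts follows; the counts below are made up for illustration and are not the study's data.

```python
# Sketch of the evaluation metrics quoted in the abstract (accuracy,
# sensitivity, specificity), computed from binary confusion-matrix counts
# for a COVID-19 vs. normal classifier. The counts are hypothetical.

def confusion_metrics(tp, tn, fp, fn):
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    sensitivity = tp / (tp + fn)   # true positive rate: COVID-19 cases found
    specificity = tn / (tn + fp)   # true negative rate: normals cleared
    return accuracy, sensitivity, specificity

# Hypothetical test-set counts.
acc, sens, spec = confusion_metrics(tp=71, tn=72, fp=10, fn=10)
print(f"accuracy={acc:.2%} sensitivity={sens:.2%} specificity={spec:.2%}")
```

Sensitivity matters most when missed COVID-19 cases are costly, while specificity guards against falsely flagging healthy patients; reporting all three, as the study does, exposes that trade-off.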


Subject(s)
COVID-19 , Deep Learning , COVID-19/diagnostic imaging , Humans , Neural Networks, Computer , Tomography, X-Ray Computed/methods
20.
J Pers Med ; 12(2)2022 Feb 18.
Article in English | MEDLINE | ID: mdl-35207796

ABSTRACT

Currently, most mask extraction techniques are based on convolutional neural networks (CNNs). However, numerous problems remain for mask extraction techniques to solve, so the most advanced artificial intelligence (AI) techniques are necessary. The use of cooperative agents in mask extraction increases the efficiency of automatic image segmentation. Hence, we introduce a new mask extraction method based on multi-agent deep reinforcement learning (DRL) to minimize long-term manual mask extraction and to enhance medical image segmentation frameworks. A DRL-based method is introduced to deal with mask extraction issues. This method utilizes a modified version of the Deep Q-Network to enable the mask detector to select masks from the studied image. Based on COVID-19 computed tomography (CT) images, we used the DRL mask extraction-based technique to extract visual features of COVID-19-infected areas and provide an accurate clinical diagnosis while optimizing the pathogenic diagnostic test and saving time. We collected CT images of different cases (normal chest CT, pneumonia, typical viral cases, and COVID-19 cases). Experimental validation achieved an accuracy of 97.12%, a Dice of 80.81%, a sensitivity of 79.97%, a specificity of 99.48%, a precision of 85.21%, an F1-score of 83.01%, a structural metric of 84.38%, and a mean absolute error of 0.86%. Additionally, the visual segmentation results clearly reflected the ground truth. The results provide a proof of principle for using DRL to extract CT masks for an effective diagnosis of COVID-19.
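Among the segmentation metrics this abstract reports, the Dice coefficient is the standard overlap score between an extracted mask and the ground truth. A minimal sketch follows; the masks are tiny hand-made 0/1 pixel sequences for illustration (real CT masks would be 2-D arrays), and this is not the study's own evaluation code.

```python
# Hypothetical helper: the Dice coefficient used to score extracted masks
# against ground-truth segmentations. Masks here are flat sequences of
# 0/1 pixel labels; real CT masks would be 2-D arrays.

def dice(pred, truth):
    """Dice = 2*|A ∩ B| / (|A| + |B|) for binary masks; 1.0 if both empty."""
    inter = sum(p & t for p, t in zip(pred, truth))
    size = sum(pred) + sum(truth)
    return 2 * inter / size if size else 1.0

pred  = [1, 1, 0, 1, 0, 0, 1, 0]   # mask proposed by the detector
truth = [1, 0, 0, 1, 1, 0, 1, 0]   # ground-truth annotation
print(dice(pred, truth))  # 0.75: 3 overlapping pixels over (4 + 4) labeled
```

In a DRL setup like the one described, an overlap score of this kind is a natural ingredient for the reward signal guiding the mask-selecting agents, alongside the pixel-wise metrics (sensitivity, specificity, precision) the abstract also reports.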
