Results 1 - 20 of 27
1.
Comput Biol Med ; 169: 107845, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38118307

ABSTRACT

Utilizing digital healthcare services for patients who use wheelchairs is a vital and effective means of enhancing their healthcare. Digital healthcare integrates various healthcare facilities, including local laboratories and centralized hospitals, to provide services for individuals in wheelchairs. In digital healthcare, the Internet of Medical Things (IoMT) allows local wheelchairs to connect with remote digital healthcare services and to generate sensor data from the wheelchairs for health monitoring and processing. It has been observed that wheelchair patients older than thirty frequently suffer from high blood pressure, heart disease, elevated blood glucose, and other conditions caused by reduced activity associated with their disabilities. However, existing wheelchair IoMT applications are simplistic and do not account for the diseases that wheelchair patients develop during their disabilities. This paper presents a novel digital healthcare framework for patients with disabilities based on deep federated learning schemes. In the proposed framework, we offer federated learning deep convolutional neural network schemes (FL-DCNNS) composed of several sub-schemes. The offloading scheme collects readings from bio-sensors integrated into the wheelchair, such as smartwatches, covering blood pressure, heartbeat, blood glucose, and oxygen; in our framework the smartwatches operate as wearable devices for patients with disabilities. Federated learning-enabled laboratories train on local data and securely share the updated weights with the centralized node for decision making and prediction. A decision forest at the centralized healthcare node decides on aggregation under different constraints: cost, energy, time, and accuracy. Each laboratory implements a deep CNN scheme to train and validate the model locally on its node while accounting for available resources. Simulation results show that, compared with existing digital healthcare schemes for wheelchair patients, FL-DCNNS obtained optimal results on the sensor data, reducing energy by 25%, time by 19%, and cost by 28%, and improving disease-prediction accuracy by 99%.
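The weight-sharing step described in this abstract (local laboratories train on their own data and only the updated weights travel to the centralized node for aggregation) can be illustrated with a minimal federated-averaging sketch. The function name, array shapes, and sample-count weighting below are assumptions for illustration, not the authors' FL-DCNNS implementation.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Aggregate locally trained weight arrays into a global model.

    client_weights: list of lists of numpy arrays (one list per laboratory).
    client_sizes: number of local training samples per laboratory,
                  used to weight each contribution.
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    global_weights = []
    for layer in range(n_layers):
        # Weighted sum of each laboratory's update for this layer.
        agg = sum(w[layer] * (n / total)
                  for w, n in zip(client_weights, client_sizes))
        global_weights.append(agg)
    return global_weights

# Toy example: three laboratories, two weight tensors each.
labs = [[np.random.randn(4, 4), np.random.randn(4)] for _ in range(3)]
sizes = [120, 80, 200]  # local dataset sizes (assumed)
global_model = federated_average(labs, sizes)
print([w.shape for w in global_model])
```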


Subjects
Persons with Disabilities, Health Facilities, Humans, Hospitals, Laboratories, Glucose
2.
Heliyon ; 9(11): e21639, 2023 Nov.
Article in English | MEDLINE | ID: mdl-38027596

ABSTRACT

For the past decade, there has been a significant increase in customer usage of public transport applications in smart cities. These applications rely on various services, such as communication and computation, provided by additional nodes within the smart city environment. However, these services are delivered by a diverse range of widely distributed and heterogeneous cloud computing servers, making cybersecurity a crucial challenge among them. Numerous machine learning approaches have been proposed in the literature to address the cybersecurity challenges of heterogeneous transport applications in smart cities. However, the centralized security and scheduling strategies suggested so far have yet to produce optimal results for transport applications. This work presents a secure decentralized infrastructure for transport data in fog cloud networks and introduces the Multi-Objectives Reinforcement Federated Learning Blockchain (MORFLB) for transport infrastructure. MORFLB aims to minimize processing and transfer delays while maximizing long-term rewards by identifying known and unknown attacks on remote sensing data in vehicle applications. MORFLB incorporates multi-agent policies, proof-of-work hashing validation, and decentralized deep neural network training to achieve minimal processing and transfer delays. It comprises vehicle applications and decentralized fog and cloud nodes based on blockchain reinforcement federated learning, which improves rewards through trial and error. The study formulates a combinatorial problem that minimizes and maximizes the relevant factors for vehicle applications. The experimental results demonstrate that MORFLB effectively reduces processing and transfer delays while maximizing rewards compared with existing studies, providing a promising solution to the cybersecurity challenges of intelligent transport applications in smart cities. In conclusion, this paper presents MORFLB, a combination of schemes that executes transport data under its constraints and achieves optimal results on the suggested decentralized blockchain-based infrastructure.
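The proof-of-work hashing validation that MORFLB relies on can be sketched with a toy block validator; the block fields, difficulty setting, and helper names below are illustrative assumptions, not the paper's blockchain design.

```python
import hashlib
import json

def proof_of_work(block, difficulty=4):
    """Find a nonce whose SHA-256 block hash starts with `difficulty` zeros."""
    prefix = "0" * difficulty
    nonce = 0
    while True:
        payload = json.dumps({"block": block, "nonce": nonce}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        if digest.startswith(prefix):
            return nonce, digest
        nonce += 1

# Toy transport-data block (field names are illustrative only).
block = {"vehicle_id": "v42", "sensor": "gps", "reading": [31.5, 74.3]}
nonce, digest = proof_of_work(block, difficulty=3)
print(nonce, digest)
```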

3.
Comput Biol Med ; 166: 107539, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37804778

ABSTRACT

The incidence of Autism Spectrum Disorder (ASD) among children, attributed to genetic and environmental factors, has been rising steadily. ASD is a non-curable neurodevelopmental disorder that affects children's communication, behavior, social interaction, and learning skills. While machine learning has been employed for ASD detection in children, existing ASD frameworks offer limited services to monitor and improve the health of ASD patients. This paper presents an efficient ASD framework with comprehensive services that enhances the results of existing ASD frameworks. Our proposed approach is the Federated Learning-enabled CNN-LSTM (FCNN-LSTM) scheme, designed for ASD detection in children using multimodal datasets. The ASD framework is built in a distributed computing environment where different ASD laboratories are connected to the central hospital. The FCNN-LSTM scheme enables local laboratories to train and validate different datasets, including the Ages and Stages Questionnaires (ASQ), Facial Communication and Symbolic Behavior Scales (CSBS), Parents' Evaluation of Developmental Status (PEDS), Modified Checklist for Autism in Toddlers (M-CHAT), and Screening Tool for Autism in Toddlers and Children (STAT) datasets, across different computing laboratories. To ensure the security of patient data, we have implemented a security mechanism based on the Advanced Encryption Standard (AES) within the federated learning environment, which allows all laboratories to offload and download data securely. We integrate all trained datasets at the aggregation nodes and make the final decision for ASD patients based on a decision process tree. Additionally, we have designed various Internet of Things (IoT) applications to improve the efficiency of care for ASD patients and achieve better learning results. Simulation results demonstrate that our proposed framework achieves an ASD detection accuracy of approximately 99% compared with existing ASD frameworks.
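A minimal sketch of the AES-based protection of offloaded model updates mentioned above is shown below, assuming the Python `cryptography` package and pickle serialization of weight arrays; key distribution and the exact payload format are assumptions, not the paper's mechanism.

```python
import os
import pickle
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_weights(weights, key):
    """Serialize and AES-GCM encrypt model weights before offloading."""
    nonce = os.urandom(12)                       # 96-bit nonce per message
    plaintext = pickle.dumps(weights)
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce, ciphertext

def decrypt_weights(nonce, ciphertext, key):
    """Decrypt and deserialize weights at the aggregating hospital node."""
    plaintext = AESGCM(key).decrypt(nonce, ciphertext, None)
    return pickle.loads(plaintext)

key = AESGCM.generate_key(bit_length=256)        # shared securely out of band
local_update = [np.random.randn(8, 8), np.random.randn(8)]
nonce, blob = encrypt_weights(local_update, key)
restored = decrypt_weights(nonce, blob, key)
print(np.allclose(local_update[0], restored[0]))
```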

4.
J Adv Res ; 2023 Oct 13.
Article in English | MEDLINE | ID: mdl-37839503

ABSTRACT

INTRODUCTION: The Industrial Internet of Water Things (IIoWT) has recently emerged as a leading architecture for efficient water distribution in smart cities. Its primary purpose is to ensure high-quality drinking water for institutions and households. However, the existing IIoWT architecture faces many challenges; one of the paramount challenges is achieving data standardization and data fusion across the multiple monitoring institutions responsible for assessing water quality and quantity. OBJECTIVE: This paper introduces the Industrial Internet of Water Things System for Data Standardization based on Blockchain and Digital Twin Technology. The main objective of this study is to design a new IIoWT architecture in which data standardization, interoperability, and data security among different water institutions are met. METHODS: We devise a digital twin-enabled cross-platform environment using the Message Queuing Telemetry Transport (MQTT) protocol to achieve seamless interoperability in heterogeneous computing. Because water management involves different types of data from various sensors, we propose a CNN-LSTM and blockchain data transactional (BCDT) scheme for processing valid data across different nodes. RESULTS: Through simulation results, we demonstrate that the proposed IIoWT architecture significantly reduces processing time while improving the accuracy of data standardization within the water distribution management system. CONCLUSION: Overall, this paper presents a comprehensive approach to tackling the challenges of data standardization and security in the IIoWT architecture.
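The data-standardization and blockchain data transactional (BCDT) ideas can be sketched as mapping heterogeneous sensor readings onto one shared schema and chaining them with hashes; the schema fields and helper names below are illustrative assumptions rather than the proposed IIoWT implementation.

```python
import hashlib
import json
import time

def standardize(reading, institution):
    """Map a heterogeneous sensor reading onto one shared schema."""
    return {
        "institution": institution,
        "parameter": reading.get("param") or reading.get("name"),
        "value": float(reading.get("val") or reading.get("value")),
        "unit": reading.get("unit", "mg/L"),
        "timestamp": reading.get("ts", time.time()),
    }

def append_block(chain, record):
    """Append a standardized record as a hash-linked transaction."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

chain = []
raw = [{"param": "turbidity", "val": "1.2", "unit": "NTU"},
       {"name": "chlorine", "value": 0.4}]
for r in raw:
    append_block(chain, standardize(r, institution="lab-A"))
print(chain[-1]["hash"])
```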

5.
PeerJ Comput Sci ; 9: e1423, 2023.
Article in English | MEDLINE | ID: mdl-37409080

ABSTRACT

Due to the vast variety of considerations that must be weighed, many of which are in opposition to one another, choosing a home can be difficult for those without much experience. Because these decisions are difficult, individuals spend more time making them and often end up making poor choices. A computational approach is therefore necessary to overcome residence selection issues: decision support systems can help inexperienced people make decisions of expert quality. The current article explains the empirical procedure used to construct a decision-support system for selecting a residence. The main goal of this study is to build a weighted product mechanism-based decision-support system for residential preference. The house short-listing estimation is based on several key requirements derived from interaction between the researchers and experts. The results of the information processing show that the normalized product strategy can rank the available alternatives to help individuals choose the best option. The interval-valued fuzzy hypersoft set (IVFHS-set) is a broader variant of the fuzzy soft set that resolves the constraints of the fuzzy soft set through the use of a multi-argument approximation operator. This operator maps sub-parametric tuples into a power set of the universe and emphasizes the segmentation of every attribute into a disjoint attribute-valued set. These characteristics make it a new mathematical tool for handling problems involving uncertainties, which renders the decision-making process more effective and efficient. Furthermore, the traditional TOPSIS technique, a multi-criteria decision-making strategy, is discussed concisely. A new decision-making strategy, "OOPCS", is constructed by modifying TOPSIS for fuzzy hypersoft sets in interval settings. The proposed strategy is applied to a real-world multi-criteria decision-making scenario to rank the alternatives and demonstrate its efficiency and effectiveness.
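A compact sketch of the weighted product mechanism used for ranking residences is given below; the criteria, weights, and normalization choices are illustrative assumptions, not the study's actual requirement set.

```python
import numpy as np

def weighted_product_rank(matrix, weights, benefit):
    """Rank alternatives with the weighted product method (WPM).

    matrix: alternatives x criteria scores.
    weights: criterion weights summing to 1.
    benefit: True for benefit criteria, False for cost criteria.
    """
    m = matrix.astype(float)
    for j, is_benefit in enumerate(benefit):
        if is_benefit:
            m[:, j] = m[:, j] / m[:, j].max()      # larger is better
        else:
            m[:, j] = m[:, j].min() / m[:, j]      # invert cost criteria
    scores = np.prod(m ** np.asarray(weights), axis=1)
    return scores, np.argsort(-scores)

# Toy house-selection data: price (cost), area (benefit), commute (cost).
houses = np.array([[250_000, 120, 12.0],
                   [180_000,  95,  7.5],
                   [210_000, 110, 15.0]])
scores, order = weighted_product_rank(houses, [0.5, 0.3, 0.2],
                                      benefit=[False, True, False])
print(scores.round(3), order)
```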

6.
Diagnostics (Basel) ; 13(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36832152

ABSTRACT

This research aims to review and evaluate the most relevant scientific studies on deep learning (DL) models in the omics field. It also aims to fully realize the potential of DL techniques in omics data analysis by demonstrating this potential and identifying the key challenges that must be addressed. Several elements are essential for comprehending the surveyed studies, such as the clinical applications and datasets reported in the literature, and the published literature highlights the difficulties encountered by other researchers. In addition to identifying related studies, such as guidelines, comparative studies, and review papers, a systematic approach was used to search all relevant publications on omics and DL using different keyword variants. From 2018 to 2022, the search was conducted on four literature databases: IEEE Xplore, Web of Science, ScienceDirect, and PubMed. These indexes were chosen because they offer sufficient coverage and links to numerous papers in the biological field. A total of 65 articles were added to the final list, and the inclusion and exclusion criteria were specified. Of the 65 publications, 42 are clinical applications of DL to omics data. Furthermore, 16 of the 65 articles are review publications based on single- and multi-omics data from the proposed taxonomy. Finally, only a small number of articles (7/65) focus on comparative analysis and guidelines. The use of DL in studying omics data presents several obstacles related to DL itself, preprocessing procedures, datasets, model validation, and testbed applications, and numerous relevant investigations have been performed to address these issues. Unlike other review papers, our study distinctly reflects different observations on the areas where DL models are applied to omics. We believe that the results of this study can serve as a useful guideline for practitioners who seek a comprehensive view of the role of DL in omics data analysis.

7.
Comput Biol Med ; 154: 106617, 2023 03.
Article in English | MEDLINE | ID: mdl-36753981

ABSTRACT

In recent years, the incidence of cancer among patients has been growing steadily, and many cancer cases have been reported in clinical hospitals. Many machine learning algorithms have been suggested in the literature to predict cancer diseases with the same class types based on training and test data; however, there is considerable room for further research. In this paper, the study looks into different types of cancer by analyzing, classifying, and processing a multi-omics dataset in a fog cloud network. Based on SARSA on-policy reinforcement learning and multi-omics workload learning, the study devises new hybrid cancer detection schemes. The system consists of different layers, such as clinical data collection via laboratories and tool processes (biopsy, colonoscopy, and mammography) at the distributed omics-based clinics in the network. The study considers different cancer classes, such as carcinomas, sarcomas, leukemias, and lymphomas, together with their subtypes, and processes them using the distributed multi-omics clinics in the network. To solve the problem, the study presents the omics cancer workload learning state-action-reward-state-action ("SARSA") scheme (OCWLS), an on-policy learning scheme built on parameters such as states, actions, timestamps, rewards, accuracy, and processing-time constraints. The goal is to process multiple cancer classes and perform workload feature matching while reducing processing time in geographically distributed clinical hospitals. Simulation results show that OCWLS outperforms other machine learning methods in terms of processing time, feature extraction from multiple cancer classes, and matching in the system.
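The SARSA on-policy update at the heart of the OCWLS scheme follows the standard rule Q(s,a) <- Q(s,a) + alpha*(r + gamma*Q(s',a') - Q(s,a)); the sketch below illustrates it on a toy workload-assignment loop, with states, actions, and rewards that are assumptions rather than the paper's formulation.

```python
import numpy as np

def sarsa_update(Q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    """One on-policy SARSA update of the state-action value table."""
    td_target = r + gamma * Q[s_next, a_next]
    Q[s, a] += alpha * (td_target - Q[s, a])
    return Q

def epsilon_greedy(Q, s, epsilon=0.1):
    """Pick an action for the current state under an epsilon-greedy policy."""
    if np.random.rand() < epsilon:
        return np.random.randint(Q.shape[1])
    return int(np.argmax(Q[s]))

# Toy workload-assignment problem: 5 states (workload types), 3 actions (clinics).
Q = np.zeros((5, 3))
s, a = 0, epsilon_greedy(Q, 0)
for _ in range(100):
    s_next = np.random.randint(5)          # next workload arrives (simulated)
    reward = -np.random.rand()             # negative processing time as reward
    a_next = epsilon_greedy(Q, s_next)
    Q = sarsa_update(Q, s, a, reward, s_next, a_next)
    s, a = s_next, a_next
print(Q.round(3))
```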


Subjects
Multiomics, Neoplasms, Humans, Reward, Algorithms, Reinforcement, Psychology, Neoplasms/diagnosis
8.
Bioengineering (Basel) ; 10(2)2023 Jan 22.
Article in English | MEDLINE | ID: mdl-36829641

ABSTRACT

Susceptibility analysis is an intelligent technique that not only assists decision makers in assessing the suspected severity of any sort of brain tumour in a patient but also helps them diagnose and cure these tumours. This technique has been proven more useful in those developing countries where the available health-based and funding-based resources are limited. By employing set-based operations of an arithmetical model, namely fuzzy parameterised complex intuitionistic fuzzy hypersoft set (FPCIFHSS), this study seeks to develop a robust multi-attribute decision support mechanism for appraising patients' susceptibility to brain tumours. The FPCIFHSS is regarded as more reliable and generalised for handling information-based uncertainties because its complex components and fuzzy parameterisation are designed to deal with the periodic nature of the data and dubious parameters (sub-parameters), respectively. In the proposed FPCIFHSS-susceptibility model, some suitable types of brain tumours are approximated with respect to the most relevant symptoms (parameters) based on the expert opinions of decision makers in terms of complex intuitionistic fuzzy numbers (CIFNs). After determining the fuzzy parameterised values of multi-argument-based tuples and converting the CIFNs into fuzzy values, the scores for such types of tumours are computed based on a core matrix which relates them with fuzzy parameterised multi-argument-based tuples. The sub-intervals within [0, 1] denote the susceptibility degrees of patients corresponding to these types of brain tumours. The susceptibility of patients is examined by observing the membership of score values in the sub-intervals.

9.
Soft comput ; 27(5): 2657-2672, 2023.
Article in English | MEDLINE | ID: mdl-33250662

ABSTRACT

The outbreaks of the Coronavirus (COVID-19) epidemic have increased the pressure on healthcare and medical systems worldwide. Timely diagnosis of infected patients is a critical step in limiting the spread of COVID-19, and chest radiography imaging has proven to be an effective screening technique for diagnosing it. To reduce the pressure on radiologists and help control the epidemic, a fast and accurate hybrid deep learning framework for diagnosing COVID-19 in chest X-ray images, termed the COVID-CheXNet system, is developed. First, the contrast of the X-ray image is enhanced and the noise level reduced using contrast-limited adaptive histogram equalization and a Butterworth bandpass filter, respectively. The results obtained from two different pre-trained deep learning models, a ResNet34 and a high-resolution network model trained on a large-scale dataset, are then fused. This parallel architecture gives radiologists a high degree of confidence in discriminating between healthy and COVID-19-infected people. The proposed COVID-CheXNet system correctly and accurately diagnoses COVID-19 patients with a detection accuracy of 99.99%, sensitivity of 99.98%, specificity of 100%, precision of 100%, F1-score of 99.99%, MSE of 0.011%, and RMSE of 0.012% using the weighted sum rule at the score level. The efficiency and usefulness of the proposed COVID-CheXNet system are established, along with the possibility of using it in real clinical centers for fast diagnosis and as a treatment supplement, with less than 2 s per image to obtain the prediction result.
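The score-level weighted sum rule used to fuse the two pre-trained backbones can be illustrated in a few lines of numpy; the class probabilities and fusion weights below are made-up values for illustration, not the reported COVID-CheXNet configuration.

```python
import numpy as np

def weighted_sum_fusion(scores_a, scores_b, w_a=0.5, w_b=0.5):
    """Fuse per-class probabilities from two models at the score level."""
    fused = w_a * np.asarray(scores_a) + w_b * np.asarray(scores_b)
    return fused, int(np.argmax(fused))

# Toy probabilities for [healthy, COVID-19] from two backbone models.
resnet_scores = [0.30, 0.70]
hrnet_scores = [0.20, 0.80]
fused, label = weighted_sum_fusion(resnet_scores, hrnet_scores, 0.6, 0.4)
print(fused, "predicted class:", label)
```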

10.
IEEE J Biomed Health Inform ; 27(2): 673-683, 2023 02.
Article in English | MEDLINE | ID: mdl-35635827

ABSTRACT

The Internet of Things (IoT) is a network of technologies that supports a wide variety of healthcare workflow applications, helping users obtain real-time healthcare services. Many patients and doctors use different healthcare services to monitor health status and save records on hospital servers, and healthcare sensors are widely linked to the outside world for different disease classification and query tasks. These applications are extraordinarily dynamic and use mobile devices that roam across several locales. However, healthcare applications confront two significant challenges: data privacy and the cost of application execution services. This work presents the mobility-aware security dynamic service composition (MSDSC) algorithmic framework for healthcare workflows, based on serverless and restricted Boltzmann machine mechanisms. The study employs a stochastic deep neural network to train probabilistic models at each phase of the process, including service composition, task sequencing, security, and scheduling. The experimental setup and findings revealed that the developed methods outperform traditional methods by 25% in terms of security and 35% in terms of application cost.


Subjects
Delivery of Health Care, Internet of Things, Humans, Privacy, Internet
11.
Bioengineering (Basel) ; 9(9)2022 Sep 08.
Article in English | MEDLINE | ID: mdl-36135003

ABSTRACT

Effective prioritization plays a critical role in precision medicine. Healthcare decisions are complex, involving trade-offs among numerous, frequently contradictory priorities. Considering the numerous difficulties associated with COVID-19, approaches that can triage COVID-19 patients may help prioritize treatment and provide precise medicine for those at risk of serious disease. Prioritizing a patient with COVID-19 depends on a variety of examination criteria, but because of the large number of these biomarkers, it may be hard for medical practitioners and emergency systems to decide which cases should be given priority for treatment. The aim of this paper is to propose a Multidimensional Examination Framework (MEF) for the prioritization of severe COVID-19 patients on the basis of combined multi-criteria decision-making (MCDM) methods. In contrast to the existing literature, the MEF does not consider only a single dimension of the examination factors; instead, the proposed framework includes multidimensional examination criteria such as demographics, laboratory findings, vital signs, symptoms, and chronic conditions. A real dataset of 78 patients with different examination criteria was used as the basis for constructing the Multidimensional Evaluation Matrix (MEM). The proposed framework employs the CRITIC (CRiteria Importance Through Intercriteria Correlation) method to identify objective weights and importance for the multidimensional examination criteria, and the VIKOR (VIekriterijumsko KOmpromisno Rangiranje) method to prioritize severe COVID-19 patients. The results based on the CRITIC method showed that the most important examination criterion for prioritization is COVID-19 patients with heart disease, followed by cough and nasal congestion symptoms. Moreover, the VIKOR method showed that Patients 8, 3, 9, 59, and 1 are the most urgent cases requiring the highest priority among the 78 patients. Finally, the proposed framework can be used by medical organizations to prioritize the most critical COVID-19 patients across multidimensional examination criteria and promptly give them appropriate care for more precise medicine.
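The CRITIC weighting step can be sketched directly from its definition (contrast intensity multiplied by conflict with the other criteria); the toy patient matrix below is an assumption for illustration and not the study's 78-patient dataset.

```python
import numpy as np

def critic_weights(matrix):
    """Compute objective criteria weights with the CRITIC method.

    matrix: patients x criteria decision matrix (higher = more severe here).
    """
    x = matrix.astype(float)
    # Min-max normalization per criterion.
    x = (x - x.min(axis=0)) / (x.max(axis=0) - x.min(axis=0) + 1e-12)
    std = x.std(axis=0, ddof=1)                  # contrast intensity
    corr = np.corrcoef(x, rowvar=False)          # inter-criteria correlation
    conflict = (1.0 - corr).sum(axis=0)          # conflict with other criteria
    info = std * conflict                        # information carried
    return info / info.sum()

# Toy matrix: 4 patients x 3 examination criteria (temperature, SpO2, cough flag).
patients = np.array([[38.5, 92, 1],
                     [37.0, 99, 0],
                     [39.2, 88, 1],
                     [36.8, 97, 0]])
print(critic_weights(patients).round(3))
```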

12.
Sensors (Basel) ; 22(16)2022 Aug 09.
Article in English | MEDLINE | ID: mdl-36015699

ABSTRACT

Over the last decade, the usage of Internet of Things (IoT) enabled applications, such as healthcare, intelligent vehicles, and smart homes, has increased progressively. These IoT applications generate delay-sensitive data and require quick resources for execution. Recently, software-defined networks (SDN) have offered an edge computing paradigm (e.g., fog computing) to run these applications with minimal end-to-end delay. Offloading and scheduling are promising edge computing schemes for running delay-sensitive IoT applications while satisfying their requirements. However, in dynamic environments, existing offloading and scheduling techniques are not ideal and decrease the performance of such applications. This article formulates the joint offloading and scheduling problem as a combinatorial integer linear program (CILP) and proposes a joint task offloading and scheduling (JTOS) framework based on this formulation. JTOS consists of task offloading, sequencing, scheduling, searching, and failure-handling components. The study's goal is to minimize the hybrid delay of all applications. The performance evaluation shows that JTOS outperforms all existing baseline methods in hybrid delay for all applications in the dynamic environment, reducing the processing delay by 39% and the communication delay by 35% for IoT applications compared with existing schemes.
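As a rough illustration of the joint offloading and scheduling decision, the sketch below greedily assigns tasks to the node with the lowest combined communication and processing delay; the node parameters, delay model, and task fields are assumptions and are much simpler than the paper's CILP formulation.

```python
def offload(tasks, nodes):
    """Greedily assign each task to the node with the smallest total delay.

    Total delay = waiting time at the node
                + communication delay (size / bandwidth)
                + processing delay (cycles / node speed).
    """
    schedule = {}
    finish = {n["name"]: 0.0 for n in nodes}          # when each node is free
    for t in sorted(tasks, key=lambda t: t["deadline"]):
        best = min(
            nodes,
            key=lambda n: finish[n["name"]]
                          + t["size"] / n["bandwidth"]
                          + t["cycles"] / n["speed"])
        delay = t["size"] / best["bandwidth"] + t["cycles"] / best["speed"]
        finish[best["name"]] += delay
        schedule[t["id"]] = (best["name"], round(finish[best["name"]], 3))
    return schedule

tasks = [{"id": "ecg", "size": 2.0, "cycles": 50, "deadline": 1.0},
         {"id": "nav", "size": 8.0, "cycles": 20, "deadline": 2.0}]
nodes = [{"name": "fog", "bandwidth": 100.0, "speed": 200.0},
         {"name": "cloud", "bandwidth": 10.0, "speed": 1000.0}]
print(offload(tasks, nodes))
```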


Subjects
Cloud Computing, Internet of Things, Delivery of Health Care
13.
Comput Intell Neurosci ; 2022: 1307944, 2022.
Article in English | MEDLINE | ID: mdl-35996653

ABSTRACT

Due to the COVID-19 pandemic, computerized COVID-19 diagnosis studies are proliferating. The diversity of COVID-19 models raises the questions of which COVID-19 diagnostic model should be selected and which performance criteria decision-makers of healthcare organizations should consider; a selection scheme is therefore necessary to address these issues. This study proposes an integrated method for selecting the optimal deep learning model for COVID-19 diagnosis based on a novel crow swarm optimization algorithm. Crow swarm optimization is employed to find an optimal set of coefficients using a designed fitness function for evaluating the performance of the deep learning models, and it is modified to obtain a good coefficient distribution by considering the best average fitness. We utilized two datasets: the first includes 746 computed tomography images, 349 of confirmed COVID-19 cases and 397 of healthy individuals; the second is composed of unimproved computed tomography images of the lung for 632 positive COVID-19 cases. Fifteen trained and pretrained deep learning models and nine evaluation metrics are used to evaluate the developed methodology. Among the pretrained CNN and deep models on the first dataset, ResNet50 has an accuracy of 91.46% and an F1-score of 90.49%. For the first dataset, the ResNet50 algorithm is selected as the optimal deep learning model for COVID-19 identification, with a closeness overall fitness value of 5715.988 for the COVID-19 computed tomography lung image case considering differential advancement. In contrast, the VGG16 algorithm is selected as the optimal deep learning model for COVID-19 identification on the second dataset, with a closeness overall fitness value of 5758.791. Overall, InceptionV3 had the lowest performance on both datasets. The proposed evaluation methodology is a helpful tool to assist healthcare managers in selecting and evaluating optimal COVID-19 diagnosis models based on deep learning.
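The coefficient-weighted fitness idea behind the crow-swarm model selection can be sketched as a dot product between a model's evaluation metrics and a coefficient vector; the metric values and coefficients below are placeholders, not the reported results or the actual fitness function.

```python
import numpy as np

def model_fitness(metrics, coefficients):
    """Score a candidate model as a weighted sum of its evaluation metrics.

    metrics and coefficients are aligned vectors; higher fitness = better model.
    Error-type metrics should be negated before calling.
    """
    return float(np.dot(metrics, coefficients))

# Toy metric vectors [accuracy, sensitivity, specificity, F1] per model.
candidates = {
    "ResNet50":    [0.9146, 0.91, 0.92, 0.9049],
    "VGG16":       [0.9030, 0.89, 0.91, 0.8950],
    "InceptionV3": [0.8700, 0.85, 0.88, 0.8600],
}
coeffs = np.array([0.4, 0.2, 0.2, 0.2])   # would be tuned by the swarm optimizer
best = max(candidates, key=lambda m: model_fitness(candidates[m], coeffs))
print(best)
```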


Subjects
COVID-19, Crows, Deep Learning, Algorithms, Animals, COVID-19/diagnosis, COVID-19 Testing, Humans, Pandemics
14.
Multimed Tools Appl ; : 1-16, 2022 Jul 28.
Article in English | MEDLINE | ID: mdl-35915808

ABSTRACT

Waste generation in smart cities is a critical issue, and the interim steps towards its management have not been effective. At present, the challenge of meeting recycling requirements, due to the practical difficulty of waste sorting, decelerates the smart city circular economy (CE) vision. In this paper, a digital model is proposed that automatically sorts the generated waste and classifies the type of waste according to recycling requirements, based on an artificial neural network (ANN) and feature fusion techniques. In the proposed model, various features extracted using image processing are combined to develop a sophisticated classifier. Different models are built on the different features, and each model produces a single decision; the final class is then determined using machine learning. The model is validated by extracting relevant information from a dataset containing 2400 images of possible waste types recycled across three categories. Based on the analysis, the proposed model achieved an accuracy of 91.7%, proving its ability to automatically sort and classify waste according to recycling requirements. Overall, this analysis suggests that a digitally enabled CE vision could improve waste sorting services and recycling decisions across the value chain in smart cities.

15.
J Healthc Eng ; 2022: 5329014, 2022.
Article in English | MEDLINE | ID: mdl-35368962

ABSTRACT

Coronavirus disease 2019 (COVID-19) is a novel disease that affects healthcare on a global scale and cannot be ignored because of its high fatality rate. Computed tomography (CT) images are presently being employed to assist doctors in detecting COVID-19 in its early stages. In several scenarios, a combination of epidemiological criteria (contact during the incubation period), the existence of clinical symptoms, laboratory tests (nucleic acid amplification tests), and clinical imaging-based tests is used to diagnose COVID-19; this approach can miss patients and cause further complications. Deep learning is one of the techniques that has proven prominent and reliable in several diagnostic domains involving medical imaging. This study utilizes a convolutional neural network (CNN), a stacked autoencoder, and a deep neural network to develop a COVID-19 diagnostic system in which the classification stage undergoes some modification before the three techniques are applied to CT images to distinguish normal from COVID-19 cases. A large-scale and challenging CT image dataset was used to train the employed deep learning models and report their final performance. Experimental outcomes show that the highest accuracy was achieved by the CNN model, with an accuracy of 88.30%, a sensitivity of 87.65%, and a specificity of 87.97%. Furthermore, the proposed system outperforms the current state-of-the-art models in detecting COVID-19 from CT images.


Subjects
COVID-19, Deep Learning, COVID-19/diagnostic imaging, Humans, Neural Networks, Computer, Tomography, X-Ray Computed/methods
16.
Sensors (Basel) ; 22(6)2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35336263

ABSTRACT

The electroencephalogram (EEG) has introduced a massive potential for user identification. Several studies have shown that EEG provides unique features and typical resilience to spoofing attacks. EEG provides a graphic recording of the brain's electrical activity that electrodes can capture on the scalp at different places; however, selecting which electrodes should be used is a challenging task. This selection is formulated as an electrode selection task and tackled by optimization methods. In this work, a new approach to select the most representative electrodes is introduced. The proposed algorithm is a hybrid of the Flower Pollination Algorithm and the β-Hill Climbing optimizer, called FPAβ-hc. The performance of the FPAβ-hc algorithm is evaluated using a standard EEG motor imagery dataset. The experimental results show that FPAβ-hc can utilize fewer than half of the electrodes while achieving more accurate results than seven other methods.
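A common way to score candidate electrode subsets in this kind of hybrid optimizer is a fitness that trades classification error against the number of selected channels; the sketch below assumes such a formulation with a toy accuracy surrogate and is not the FPAβ-hc objective from the paper.

```python
import numpy as np

def electrode_fitness(mask, accuracy_fn, alpha=0.99):
    """Fitness of a binary electrode-selection mask (lower is better).

    Combines classification error on the selected channels with the
    fraction of electrodes used, so fewer channels and higher accuracy win.
    """
    n_selected = int(mask.sum())
    if n_selected == 0:
        return 1.0                                   # worst possible fitness
    error = 1.0 - accuracy_fn(mask)
    return alpha * error + (1 - alpha) * n_selected / mask.size

# Toy accuracy surrogate: pretend channels 3, 7 and 12 carry the signal.
def toy_accuracy(mask):
    informative = np.zeros(22, dtype=bool)
    informative[[3, 7, 12]] = True
    hits = np.logical_and(mask.astype(bool), informative).sum()
    return 0.5 + 0.15 * hits                          # 0.5 .. 0.95

mask = np.zeros(22)
mask[[3, 7]] = 1
print(round(electrode_fitness(mask, toy_accuracy), 4))
```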


Subjects
Imagination, Pollination, Algorithms, Electroencephalography/methods, Flowers
17.
J Pers Med ; 12(2)2022 Feb 18.
Article in English | MEDLINE | ID: mdl-35207796

ABSTRACT

Currently, most mask extraction techniques are based on convolutional neural networks (CNNs). However, numerous problems remain for mask extraction techniques to solve, so the most advanced methods for deploying artificial intelligence (AI) techniques are necessary. The use of cooperative agents in mask extraction increases the efficiency of automatic image segmentation. Hence, we introduce a new mask extraction method based on multi-agent deep reinforcement learning (DRL) to minimize long-term manual mask extraction and to enhance medical image segmentation frameworks. The method utilizes a modified version of the Deep Q-Network to enable the mask detector to select masks from the studied image. Based on COVID-19 computed tomography (CT) images, we used DRL mask extraction-based techniques to extract visual features of COVID-19 infected areas and provide an accurate clinical diagnosis while optimizing the pathogenic diagnostic test and saving time. We collected CT images of different cases (normal chest CT, pneumonia, typical viral cases, and cases of COVID-19). Experimental validation achieved a precision of 97.12%, with a Dice of 80.81%, a sensitivity of 79.97%, a specificity of 99.48%, a precision of 85.21%, an F1 score of 83.01%, a structural metric of 84.38%, and a mean absolute error of 0.86%. Additionally, the results of the visual segmentation clearly reflected the ground truth. The results provide a proof of principle for using DRL to extract CT masks for an effective diagnosis of COVID-19.
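The Dice and related mask metrics reported above have standard definitions that can be computed directly from binary masks, as the sketch below shows on toy arrays; the example masks are assumptions for illustration.

```python
import numpy as np

def segmentation_metrics(pred, truth):
    """Dice, sensitivity, specificity and precision for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    tn = np.logical_and(~pred, ~truth).sum()
    eps = 1e-12
    return {
        "dice": 2 * tp / (2 * tp + fp + fn + eps),
        "sensitivity": tp / (tp + fn + eps),
        "specificity": tn / (tn + fp + eps),
        "precision": tp / (tp + fp + eps),
    }

# Toy 8x8 masks standing in for an infected-region extraction result.
truth = np.zeros((8, 8))
truth[2:6, 2:6] = 1
pred = np.zeros((8, 8))
pred[3:6, 2:7] = 1
print({k: round(v, 3) for k, v in segmentation_metrics(pred, truth).items()})
```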

18.
Expert Syst ; 39(3): e12759, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34511689

ABSTRACT

COVID-19 is the disease caused by a new strain of coronavirus called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). COVID-19 has become a pandemic, infecting more than 152 million people in over 216 countries and territories. The exponential increase in the number of infections has rendered traditional diagnosis techniques inefficient. Therefore, many researchers have developed intelligent techniques, such as deep learning (DL) and machine learning (ML), that can assist the healthcare sector in providing quick and precise COVID-19 diagnosis. This paper provides a comprehensive review of the most recent DL and ML techniques for COVID-19 diagnosis, covering studies published from December 2019 until April 2021. In total, this paper includes more than 200 studies that have been carefully selected from several publishers, such as IEEE, Springer and Elsevier. We classify the research tracks into two categories, DL and ML, and present the COVID-19 public datasets established and extracted from different countries. The measures used to evaluate diagnosis methods are comparatively analysed and a proper discussion is provided. In conclusion, for COVID-19 diagnosis and outbreak prediction, SVM is the most widely used machine learning mechanism and CNN is the most widely used deep learning mechanism, while accuracy, sensitivity, and specificity are the most widely used measurements in previous studies. Finally, this review will guide the research community on the upcoming development of ML and DL for COVID-19 and inspire future work.

19.
Sensors (Basel) ; 21(20)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34696135

ABSTRACT

In the last decade, developments in healthcare technologies have been increasing progressively in practice. Healthcare applications such as ECG monitoring, heartbeat analysis, and blood pressure control connect with external servers through cloud computing. The emerging cloud paradigm offers different models, such as fog computing and edge computing, to enhance the performance of healthcare applications with minimal end-to-end delay in the network. However, many research challenges remain in fog-cloud enabled networks for healthcare applications. Therefore, in this paper, a Critical Healthcare Task Management (CHTM) model is proposed and implemented using an ECG dataset. We design a resource scheduling model among fog nodes at the fog level, and a multi-agent system is proposed to provide complete management of the network from the edge to the cloud. The proposed model overcomes the limitations of existing approaches by providing interoperability, resource sharing, scheduling, and dynamic task allocation to manage critical tasks effectively. The simulation results show that, in comparison with the cloud, our model reduces the network usage by 79%, the response time by 90%, the network delay by 65%, the energy consumption by 81%, and the instance cost by 80%.


Subjects
Cloud Computing, Electrocardiography, Computer Simulation, Delivery of Health Care, Models, Theoretical
20.
Comput Biol Med ; 137: 104799, 2021 10.
Article in English | MEDLINE | ID: mdl-34478922

ABSTRACT

Stroke is the second leading cause of death worldwide and one of the most common causes of disability. Several approaches have been proposed to manage stroke patient rehabilitation, such as robotic devices and virtual reality systems, and researchers have found that brain-computer interface (BCI) approaches can provide better results. The most challenging tasks in BCI applications therefore involve identifying the best technique(s) to reveal the neuron stimulus information from patients' brains and extracting the most effective features from these signals. Accordingly, the main novelty of this paper is twofold: to propose a new feature fusion method for motor imagery (MI)-based BCI and to develop an automatic MI framework to detect the changes pre- and post-rehabilitation. This study investigated an electroencephalography (EEG) dataset from post-stroke patients with upper extremity hemiparesis. All patients performed 25 MI-based BCI sessions with follow-up assessment visits to examine the functional changes before and after EEG neurorehabilitation. In the first stage, conventional filters and an automatic independent component analysis with wavelet transform (AICA-WT) denoising technique were used. Next, attributes from the time, entropy and frequency domains were computed, and the effective features were combined into time-entropy-frequency (TEF) attributes. Consequently, the AICA-WT denoising and the TEF fusion set were utilised to develop an AICA-WT-TEF framework. Then, support vector machine (SVM), k-nearest neighbours (kNN) and random forest (RF) classification techniques were tested for MI-based BCI rehabilitation. The proposed AICA-WT-TEF framework with the RF classifier achieves the best results compared with the other classifiers, and the proposed framework and feature fusion set achieve significant performance in terms of accuracy compared with the state-of-the-art. Therefore, the proposed methods could be crucial for improving the process of automatic MI rehabilitation and are recommended for implementation in real-time applications.
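The time-entropy-frequency (TEF) fusion can be sketched as concatenating time-domain statistics, a Shannon entropy estimate, and spectral attributes for each channel; the specific attributes, bin counts, and the synthetic signal below are assumptions rather than the authors' exact feature set.

```python
import numpy as np

def tef_features(signal, fs=256):
    """Fuse time, entropy and frequency attributes of one EEG channel."""
    # Time-domain attributes.
    time_feats = [signal.mean(), signal.std(), np.ptp(signal)]
    # Shannon entropy of the amplitude histogram.
    hist, _ = np.histogram(signal, bins=32, density=True)
    p = hist[hist > 0]
    p = p / p.sum()
    entropy = float(-(p * np.log2(p)).sum())
    # Frequency-domain attributes from the power spectrum.
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    mean_freq = float((freqs * spectrum).sum() / spectrum.sum())
    peak_freq = float(freqs[np.argmax(spectrum)])
    return np.array(time_feats + [entropy, mean_freq, peak_freq])

# Toy 2-second EEG-like signal: 10 Hz alpha rhythm plus noise.
fs = 256
t = np.arange(0, 2, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t) + 0.3 * np.random.randn(t.size)
print(tef_features(eeg, fs).round(3))
```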


Subjects
Brain-Computer Interfaces, Stroke Rehabilitation, Stroke, Algorithms, Electroencephalography, Humans, Imagery, Psychotherapy, Imagination