Results 1 - 20 of 58
1.
Ann Plast Surg ; 93(1): 130-138, 2024 Jul 01.
Article in English | MEDLINE | ID: mdl-38885169

ABSTRACT

BACKGROUND: Vascularized lymph node transfer (VLNT) involves the microvascular transplantation of functional lymph nodes from a donor site into a limb affected by lymphedema to restore the normal flow of lymphatic fluid. Despite the increasing clinical experience with VLNT, there remains insufficient data to support its routine use in clinical practice. Here, we aim to evaluate the effectiveness and safety of VLNTs for upper limb lymphedema and compare clinical outcomes when using different donor sites. METHODS: We carried out a systematic search of the literature through PubMed and Scopus databases for studies on VLNT for upper limb lymphedema. Primary and secondary outcomes included circumference reduction rate (CRR) and infection reduction rate by postoperative cellulitis episodes for the efficacy and safety of VLNT. Pooled analysis was performed using the inverse variance weighting meta-analysis of single means using the meta package in R software. Subgroup analyses were performed for donor and recipient sites, age groups, follow-ups, and symptom durations. Quality assessment was performed using the Newcastle-Ottawa Scale for nonrandomized studies. RESULTS: A total of 1089 studies were retrieved from the literature, and 15 studies with 448 upper limb lymphedema patients who underwent VLNT were included after eligibility assessment. The mean CRR was 34.6 (18.8) and the mean postoperative cellulitis episodes per year was 0.71 (0.7). The pooled analysis of CRR was 28.4% (95% confidence interval, 19.7-41.1) and postoperative cellulitis episodes showed a mean of 0.59 (95% confidence interval, 0.36-0.95) using the random-effect model. Subgroup analyses showed significant group differences in recipient site for CRR and postoperative cellulitis episodes with the wrist comprising the highest weights, and patients younger than 50 years showing a lower postoperative infection. CONCLUSIONS: Vascularized lymph node transfer using gastroepiploic flaps at the wrists has shown a significant difference in reductions of limb circumference and cellulitis episodes in upper limb lymphedema patients when compared with other donor sites. However, further prospective studies are needed to consolidate this finding.
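
As a worked illustration of the pooling step described above, the following sketch computes an inverse-variance weighted mean of single means, analogous to what the R meta package does in this analysis; the per-study numbers are made up for illustration and are not values from the included studies.

```python
import math

# Hypothetical per-study summaries: (mean CRR, standard deviation, sample size)
studies = [(30.0, 15.0, 40), (25.0, 20.0, 25), (35.0, 18.0, 30)]

# Inverse-variance weighting of single means (fixed-effect form):
# each study is weighted by 1 / SE^2, where SE = SD / sqrt(n).
weights, means = [], []
for mean, sd, n in studies:
    se = sd / math.sqrt(n)
    weights.append(1.0 / se ** 2)
    means.append(mean)

pooled = sum(w * m for w, m in zip(weights, means)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

# 95% confidence interval for the pooled mean
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
print(f"pooled mean = {pooled:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```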


Subjects
Lymph Nodes, Lymphedema, Upper Extremity, Humans, Lymphedema/surgery, Upper Extremity/surgery, Lymph Nodes/transplantation, Lymph Nodes/blood supply, Transplant Donor Site, Treatment Outcome
2.
Sensors (Basel) ; 23(2)2023 Jan 04.
Article in English | MEDLINE | ID: mdl-36679370

ABSTRACT

Recently, transfer learning approaches have emerged to reduce the need for large numbers of labelled medical images. However, these approaches still have limitations due to the mismatch between the source and target domains. Therefore, this study proposes a novel approach, called Dual Transfer Learning (DTL), based on the convergence of patterns between the source and target domains. The proposed approach is applied to four pre-trained models (VGG16, Xception, ResNet50, MobileNetV2) using two datasets, ISIC2020 skin cancer images and ICIAR2018 breast cancer images, by fine-tuning the last layers on a sufficient number of unlabelled images of the same disease and on a small number of labelled images of the target task, together with data augmentation techniques to balance classes and increase the number of samples. The results show that the proposed approach improved the performance of all models: without data augmentation, the VGG16, Xception, ResNet50, and MobileNetV2 models improved by 0.28%, 10.96%, 15.73%, and 10.4%, respectively, while with data augmentation they improved by 19.66%, 34.76%, 31.76%, and 33.03%, respectively. The Xception model achieved the highest performance when classifying skin cancer images in the ISIC2020 dataset, with 96.83% accuracy, 96.919% precision, 96.826% recall, 96.825% F1-score, 99.07% sensitivity, and 94.58% specificity. When classifying the breast cancer images of the ICIAR2018 dataset, the Xception model obtained 99% accuracy, 99.003% precision, 98.995% recall, 99% F1-score, 98.55% sensitivity, and 99.14% specificity. These results show that the proposed approach improves model performance when fine-tuning is performed on unlabelled images of the same disease.
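
To make the fine-tuning step concrete, here is a minimal Keras sketch of the general recipe the abstract describes: freeze a pre-trained backbone (Xception here) and retrain only newly added top layers. The input size, layer sizes, and class count are illustrative assumptions, not the paper's actual configuration.

```python
import tensorflow as tf

# Pre-trained backbone without its classification head (ImageNet weights).
base = tf.keras.applications.Xception(
    weights="imagenet", include_top=False, input_shape=(299, 299, 3))
base.trainable = False  # freeze everything except the layers added below

# New top layers fine-tuned on the (small) labelled target dataset.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dropout(0.3),
    tf.keras.layers.Dense(2, activation="softmax"),  # e.g. benign vs malignant
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=10)  # with augmented data
```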


Subjects
Learning, Skin Neoplasms, Humans, Machine Learning
3.
Sensors (Basel) ; 22(14)2022 Jul 16.
Article in English | MEDLINE | ID: mdl-35891007

ABSTRACT

In healthcare, rapid emergency response systems require real-time actions where speed and efficiency are critical, and these can suffer from the latency introduced by the cloud. Fog computing is therefore used in real-time healthcare applications, but limitations in response time, latency, and energy consumption remain, so a proper fog computing architecture and effective task-scheduling algorithms are needed to minimize them. In this study, an Energy-Efficient Internet of Medical Things to Fog Interoperability of Task Scheduling (EEIoMT) framework is proposed. This framework schedules tasks efficiently by ensuring that critical tasks are executed within their deadlines in the shortest possible time while balancing energy consumption when processing other tasks. In our architecture, electrocardiogram (ECG) sensors monitor heart health at home in a smart city. The ECG sensors continuously stream the sensed data to an ESP32 microcontroller over Bluetooth Low Energy (BLE) for analysis, and the ESP32 is linked to the fog scheduler via Wi-Fi to transmit the analysis results (tasks). The appropriate fog node is selected to execute each task by giving every node a weight, formulated from the expected energy consumption and latency of executing the task, and choosing the node with the lowest weight. Simulations were performed in iFogSim2. The simulation outcomes show that the suggested framework outperforms the CHTM, LBS, and FNPA models in reducing energy usage, latency, and network utilization.
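
A minimal sketch of the node-selection idea described above: each fog node gets a weight built from its expected energy cost and latency for the task, and the scheduler picks the lowest-weight node. The weighting coefficients and node figures are illustrative assumptions, not values from the EEIoMT framework.

```python
from dataclasses import dataclass

@dataclass
class FogNode:
    name: str
    expected_energy_j: float    # estimated energy to execute the task (joules)
    expected_latency_ms: float  # estimated end-to-end latency (milliseconds)

def node_weight(node: FogNode, alpha: float = 0.5, beta: float = 0.5) -> float:
    # Weighted sum of the two criteria; lower is better.
    return alpha * node.expected_energy_j + beta * node.expected_latency_ms

def select_node(nodes: list[FogNode]) -> FogNode:
    return min(nodes, key=node_weight)

nodes = [FogNode("fog-1", 2.1, 40.0), FogNode("fog-2", 1.4, 55.0), FogNode("fog-3", 3.0, 25.0)]
print(select_node(nodes).name)  # node with the lowest combined weight
```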


Subjects
Algorithms, Cloud Computing, Computer Simulation, Electrocardiography, Internet
4.
Sensors (Basel) ; 22(16)2022 Aug 09.
Article in English | MEDLINE | ID: mdl-36015699

ABSTRACT

Over the last decade, the use of Internet of Things (IoT)-enabled applications, such as healthcare, intelligent vehicles, and smart homes, has increased progressively. These IoT applications generate delay-sensitive data and require quick resources for execution. Recently, software-defined networks (SDN) have offered an edge computing paradigm (e.g., fog computing) to run these applications with minimal end-to-end delay. Offloading and scheduling are promising edge computing schemes for running delay-sensitive IoT applications while satisfying their requirements. However, in dynamic environments, existing offloading and scheduling techniques are not ideal and degrade the performance of such applications. This article formulates the joint offloading and scheduling problem as a combinatorial integer linear program (CILP) and proposes a joint task offloading and scheduling (JTOS) framework based on it. JTOS consists of task offloading, sequencing, scheduling, searching, and failure-handling components. The study's goal is to minimize the hybrid delay of all applications. The performance evaluation shows that JTOS outperforms all existing baseline methods in hybrid delay for all applications in the dynamic environment, reducing processing delay by 39% and communication delay by 35% for IoT applications compared to existing schemes.
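
As an illustration of what a combinatorial formulation of joint offloading and scheduling looks like at toy scale, the sketch below enumerates binary task-to-node assignments and keeps the one with the smallest total (processing + communication) delay. It is an assumption-laden toy, not the paper's CILP model or the JTOS heuristics.

```python
from itertools import product

tasks = ["ecg", "alert", "log"]
nodes = ["local", "fog", "cloud"]

# Hypothetical per-node delays (ms): processing cost and communication cost.
proc = {"local": 30, "fog": 12, "cloud": 5}
comm = {"local": 0, "fog": 8, "cloud": 40}

def hybrid_delay(assignment):
    # Sum of processing and communication delay over all tasks.
    return sum(proc[n] + comm[n] for n in assignment)

# Exhaustive search over the combinatorial assignment space (3^3 options here).
best = min(product(nodes, repeat=len(tasks)), key=hybrid_delay)
print(dict(zip(tasks, best)), hybrid_delay(best))
```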


Assuntos
Computação em Nuvem , Internet das Coisas , Atenção à Saúde
5.
Sensors (Basel) ; 22(18)2022 Sep 16.
Article in English | MEDLINE | ID: mdl-36146358

ABSTRACT

Wireless Sensor Networks (WSNs) enhance the ability to sense and control the physical environment in various applications. The functionality of WSNs depends on aspects such as node localization, node deployment strategies, node lifetime, and routing techniques. Coverage is an essential part of WSNs, wherein the targeted area is covered by at least one node. Computational Geometry (CG)-based techniques significantly improve the coverage and connectivity of WSNs. This paper is a step towards employing some of these popular techniques in WSNs productively, and it surveys the existing research conducted using Computational Geometry-based methods in WSNs. In addressing coverage and connectivity issues in WSNs, the Voronoi Diagram, Delaunay Triangulation, Voronoi Tessellation, and the Convex Hull have played a prominent role. Finally, the paper concludes by discussing various research challenges and proposed solutions using Computational Geometry-based techniques.
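
A small sketch of one common Computational Geometry use mentioned above: detecting coverage holes with a Voronoi diagram. A Voronoi vertex that lies farther from its nearest sensor than the sensing radius indicates an uncovered spot. The sensor positions and radius are arbitrary illustrative values.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
sensors = rng.uniform(0, 100, size=(25, 2))  # random node positions in a 100x100 field
sensing_radius = 15.0

vor = Voronoi(sensors)

# Each Voronoi vertex is locally the point farthest from the surrounding sensors,
# so it is a natural candidate location for a coverage hole.
holes = []
for v in vor.vertices:
    nearest = np.min(np.linalg.norm(sensors - v, axis=1))
    if nearest > sensing_radius:
        holes.append((v, nearest))

print(f"{len(holes)} candidate coverage holes among {len(vor.vertices)} Voronoi vertices")
```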

6.
Sensors (Basel) ; 22(6)2022 Mar 08.
Article in English | MEDLINE | ID: mdl-35336263

ABSTRACT

The electroencephalogram (EEG) has shown great potential for user identification. Several studies have shown that EEG provides unique features and is also robust against spoofing attacks. EEG provides a graphic recording of the brain's electrical activity that electrodes placed at different locations on the scalp can capture. However, selecting which electrodes should be used is a challenging task. This is formulated as an electrode selection task that is tackled by optimization methods. In this work, a new approach to select the most representative electrodes is introduced. The proposed algorithm is a hybrid of the Flower Pollination Algorithm and the β-Hill Climbing optimizer, called FPAβ-hc. The performance of the FPAβ-hc algorithm is evaluated using a standard EEG motor imagery dataset. The experimental results show that FPAβ-hc can use fewer than half of the electrodes while achieving more accurate results than seven other methods.
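
A minimal sketch of electrode selection as a binary optimization: a β-hill-climbing-style loop flips bits of an electrode mask, randomizes bits with probability β, and keeps improvements. The fitness function here is a placeholder; in the paper it would be classification accuracy on the motor imagery data, and the flower-pollination component is omitted.

```python
import random

N_ELECTRODES = 64
INFORMATIVE = set(random.sample(range(N_ELECTRODES), 20))  # stand-in for truly useful channels

def fitness(mask):
    # Placeholder objective: reward covering informative channels, penalize subset size.
    hits = sum(1 for i, bit in enumerate(mask) if bit and i in INFORMATIVE)
    return hits - 0.2 * sum(mask)

def beta_hill_climbing(iterations=500, beta=0.05):
    best = [random.randint(0, 1) for _ in range(N_ELECTRODES)]
    best_fit = fitness(best)
    for _ in range(iterations):
        cand = best[:]
        cand[random.randrange(N_ELECTRODES)] ^= 1  # neighbourhood move (single bit flip)
        cand = [b if random.random() > beta else random.randint(0, 1) for b in cand]  # beta operator
        f = fitness(cand)
        if f > best_fit:
            best, best_fit = cand, f
    return best

selected = beta_hill_climbing()
print(f"{sum(selected)} of {N_ELECTRODES} electrodes selected")
```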


Assuntos
Imaginação , Polinização , Algoritmos , Eletroencefalografia/métodos , Flores
7.
Expert Syst ; : e13010, 2022 May 01.
Article in English | MEDLINE | ID: mdl-35942177

ABSTRACT

Coronavirus disease 2019 (COVID-19) has attracted significant attention from researchers in various disciplines since the end of 2019. Although the global epidemic situation is stabilizing due to vaccination, new COVID-19 cases are constantly being discovered around the world. As a result, lung computed tomography (CT) examination, an aggregated identification technique, has been used to improve diagnosis; it helps reveal diagnoses missed due to the ambiguity of nucleic acid polymerase chain reaction testing. Therefore, this study investigated how quickly and accurately hybrid deep learning (DL) methods can identify individuals infected with COVID-19 on the basis of their lung CT images. In addition, this study proposed a system that creates a reliable COVID-19 prediction network using several stages, starting with segmentation of the lung CT scan image and ending with disease prediction. The first stage is a proposed lung segmentation technique that relies on a no-threshold histogram-based image segmentation method. Afterward, the GrabCut method is used for post-segmentation to enhance segmentation outcomes and avoid over- and under-segmentation problems. Then, three pre-trained standard DL models, the Visual Geometry Group Network, a convolutional deep belief network, and a high-resolution network, are used to extract the most effective features from the segmented images for identifying COVID-19. These three pre-trained models are combined as a new mechanism to increase the system's overall prediction capability. A publicly available dataset, COVID-19 CT, was used to test the performance of the proposed model, which achieved a 95% accuracy rate. In comparison, the proposed model outperformed several state-of-the-art studies. Because of its effectiveness in accurately screening COVID-19 CT images, the developed model can potentially serve as an additional diagnostic tool for clinical professionals.
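
The GrabCut refinement step can be sketched with OpenCV as below. The rectangle initialization and file name are illustrative stand-ins; the paper seeds GrabCut from its histogram-based lung segmentation rather than from a fixed rectangle.

```python
import cv2
import numpy as np

img = cv2.imread("lung_ct_slice.png")          # hypothetical CT slice file
mask = np.zeros(img.shape[:2], np.uint8)

# GrabCut needs two scratch models for background/foreground colour statistics.
bgd_model = np.zeros((1, 65), np.float64)
fgd_model = np.zeros((1, 65), np.float64)

# Rough region believed to contain the lungs (x, y, width, height).
rect = (50, 50, img.shape[1] - 100, img.shape[0] - 100)
cv2.grabCut(img, mask, rect, bgd_model, fgd_model, 5, cv2.GC_INIT_WITH_RECT)

# Pixels marked as definite or probable foreground form the refined lung mask.
refined = np.where((mask == cv2.GC_FGD) | (mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
segmented = cv2.bitwise_and(img, img, mask=refined)
cv2.imwrite("lung_refined.png", segmented)
```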

8.
Expert Syst ; 39(3): e12759, 2022 Mar.
Article in English | MEDLINE | ID: mdl-34511689

ABSTRACT

COVID-19 is the disease caused by a new breed of coronavirus called severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2). COVID-19 has become a pandemic, infecting more than 152 million people in over 216 countries and territories. The exponential increase in the number of infections has rendered traditional diagnosis techniques inefficient. Therefore, many researchers have developed intelligent techniques, such as deep learning (DL) and machine learning (ML), which can assist the healthcare sector in providing quick and precise COVID-19 diagnosis. This paper provides a comprehensive review of the most recent DL and ML techniques for COVID-19 diagnosis, covering studies published from December 2019 until April 2021. In total, this paper includes more than 200 studies that have been carefully selected from several publishers, such as IEEE, Springer, and Elsevier. We classify the research into two tracks, DL and ML, and present COVID-19 public datasets established and extracted from different countries. The measures used to evaluate diagnosis methods are comparatively analysed and properly discussed. In conclusion, for COVID-19 diagnosis and outbreak prediction, SVM is the most widely used machine learning mechanism and CNN is the most widely used deep learning mechanism, while accuracy, sensitivity, and specificity are the most widely used measurements in previous studies. Finally, this review will guide the research community on the upcoming development of ML and DL for COVID-19 and inspire their work for future development.

9.
Sensors (Basel) ; 21(20)2021 Oct 19.
Article in English | MEDLINE | ID: mdl-34696135

ABSTRACT

In the last decade, developments in healthcare technologies have been adopted progressively in practice. Healthcare applications such as ECG monitoring, heartbeat analysis, and blood pressure control connect with external servers in a manner called cloud computing. The emerging cloud paradigm offers different models, such as fog computing and edge computing, to enhance the performance of healthcare applications with minimal end-to-end delay in the network. However, many research challenges remain in fog-cloud enabled networks for healthcare applications. Therefore, in this paper, a Critical Healthcare Task Management (CHTM) model is proposed and implemented using an ECG dataset. We design a resource scheduling model among fog nodes at the fog level. A multi-agent system is proposed to provide complete management of the network from the edge to the cloud. The proposed model overcomes the limitations of existing approaches by providing interoperability, resource sharing, scheduling, and dynamic task allocation to manage critical tasks effectively. The simulation results show that our model, in comparison with the cloud, significantly reduces network usage by 79%, response time by 90%, network delay by 65%, energy consumption by 81%, and instance cost by 80%.
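
A toy sketch of prioritizing critical healthcare tasks at a fog node, in the spirit of the CHTM model: tasks sit in a priority queue keyed by (priority, deadline), so critical, tight-deadline tasks are dispatched first. The task fields and values are hypothetical.

```python
import heapq

# (priority, deadline_ms, task name) -- lower numbers are dispatched first.
tasks = [
    (0, 200, "ecg-arrhythmia-alert"),   # critical
    (2, 5000, "daily-summary-upload"),
    (1, 800, "blood-pressure-trend"),
    (0, 150, "fall-detection-alert"),   # critical, tightest deadline
]

queue = []
for t in tasks:
    heapq.heappush(queue, t)

while queue:
    priority, deadline, name = heapq.heappop(queue)
    print(f"dispatch {name} (priority={priority}, deadline={deadline} ms)")
```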


Assuntos
Computação em Nuvem , Eletrocardiografia , Simulação por Computador , Atenção à Saúde , Modelos Teóricos
10.
Sensors (Basel) ; 21(11)2021 Jun 07.
Article in English | MEDLINE | ID: mdl-34200216

ABSTRACT

Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples cause DL models to misclassify instances that humans would consider benign, and these attacks manifest in practical applications in real physical scenarios under adversarial threat. Thus, adversarial attacks and defenses, including for machine learning and its reliability, have drawn growing interest and in recent years have been a hot research topic. We introduce a framework that provides a defensive model against the adversarial speckle-noise attack, together with adversarial training and a feature fusion strategy, which preserves classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the diabetic retinopathy recognition problem, which is considered a state-of-the-art endeavor. Results obtained on the retinal fundus images, which are prone to adversarial attacks, are 99% accurate and show that the proposed defensive model is robust.
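
A sketch of a multiplicative speckle-noise perturbation and the mixing step behind adversarial training. The noise model (x * (1 + σ·n)) and the batch-mixing convention are common assumptions, not necessarily the exact formulation used in the paper.

```python
import numpy as np

def speckle_attack(images: np.ndarray, sigma: float = 0.1) -> np.ndarray:
    # Multiplicative Gaussian (speckle) noise: x_adv = x * (1 + sigma * n), n ~ N(0, 1).
    noise = np.random.randn(*images.shape)
    return np.clip(images * (1.0 + sigma * noise), 0.0, 1.0)

# Adversarial training then simply mixes clean and perturbed images (with their
# original labels) into each training batch so the classifier keeps the correct
# labelling under the attack.
clean = np.random.rand(4, 224, 224, 3)   # stand-in batch of fundus images in [0, 1]
batch = np.concatenate([clean, speckle_attack(clean)])
print(batch.shape)  # (8, 224, 224, 3)
```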


Assuntos
Diabetes Mellitus , Retinopatia Diabética , Algoritmos , Inteligência Artificial , Retinopatia Diabética/diagnóstico , Humanos , Redes Neurais de Computação , Reprodutibilidade dos Testes
11.
Sensors (Basel) ; 21(12)2021 Jun 14.
Article in English | MEDLINE | ID: mdl-34198608

ABSTRACT

The Internet of Medical Things (IoMT) is increasingly being used for healthcare purposes. IoMT enables many sensors to collect patient data from various locations and send it to a distributed hospital for further study, and it provides patients with a variety of paid programmes to help them keep track of their health problems. However, current system services are expensive, and offloaded data in the healthcare network are insecure. This research develops a new, cost-effective and stable IoMT framework based on a blockchain-enabled fog cloud, aiming to reduce the cost of healthcare application services as they are processed in the system. The study devises an IoMT system based on different algorithmic techniques within a Blockchain-Enabled Smart-Contract Cost-Efficient Scheduling Algorithm Framework (BECSAF). Smart-contract blockchain schemes ensure data consistency and validation with symmetric cryptography, while heterogeneous earliest-finish-time based scheduling handles the execution of the different workflow tasks scheduled on different nodes under their deadlines. Simulation results show that the proposed algorithm schemes outperform all existing baseline approaches in terms of application execution.
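
A minimal illustration of the blockchain bookkeeping idea: each scheduling record is appended as a block whose hash covers the previous block, so tampering with an offloaded record breaks the chain. This is a generic hash-chain toy, not the BECSAF smart-contract logic.

```python
import hashlib, json, time

chain = []

def add_block(record: dict) -> dict:
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"record": record, "prev_hash": prev_hash, "timestamp": time.time()}
    body["hash"] = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    chain.append(body)
    return body

def verify_chain() -> bool:
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        body = {k: block[k] for k in ("record", "prev_hash", "timestamp")}
        recomputed = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["prev_hash"] != expected_prev or block["hash"] != recomputed:
            return False
    return True

add_block({"task": "ecg-batch-17", "node": "fog-2", "deadline_ms": 500})
add_block({"task": "bp-stream-3", "node": "fog-1", "deadline_ms": 300})
print(verify_chain())  # True until any stored record is altered
```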


Assuntos
Blockchain , Internet das Coisas , Algoritmos , Atenção à Saúde , Humanos
12.
Sensors (Basel) ; 20(17)2020 Sep 01.
Article in English | MEDLINE | ID: mdl-32883006

ABSTRACT

The discrimination of the non-focal class (NFC) and focal class (FC) is vital for localizing the epileptogenic zone (EZ) during neurosurgery. In the conventional diagnosis method, the neurologist has to visually examine hours-long electroencephalogram (EEG) signals, which is time consuming and prone to error. Hence, in the present work, automated discrimination of FC EEG signals from NFC EEG signals is developed using the Fast Walsh-Hadamard Transform (FWHT), entropies, and an artificial neural network (ANN). The FWHT analyzes the EEG signals in the frequency domain and decomposes them into Hadamard coefficients. Five different nonlinear features, namely approximate entropy (ApEn), log-energy entropy (LogEn), fuzzy entropy (FuzzyEn), sample entropy (SampEn), and permutation entropy (PermEn), are extracted from the decomposed Hadamard coefficients. The extracted features capture the nonlinearity in the NFC and FC EEG signals. These entropy features are supplied to the ANN classifier with 10-fold cross-validation to classify the NFC and FC classes. Two publicly available datasets, the University of Bonn and the Bern-Barcelona datasets, are used to evaluate the proposed approach. A maximum sensitivity of 99.70%, accuracy of 99.50%, and specificity of 99.30% are achieved with the 3750 pairs of NFC and FC signals from the Bern-Barcelona dataset, while an accuracy of 92.80%, sensitivity of 91%, and specificity of 94.60% are achieved on the University of Bonn dataset. Compared to existing techniques, the proposed approach attained the maximum classification performance on both datasets.
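
A small sketch of the feature-extraction pipeline described above: an in-place fast Walsh-Hadamard transform of an EEG segment (length must be a power of two), followed by one of the entropy features, log-energy entropy. The entropy definition follows the common sum-of-log-squared-coefficients convention, which may differ in detail from the paper's.

```python
import numpy as np

def fwht(x: np.ndarray) -> np.ndarray:
    # Iterative fast Walsh-Hadamard transform (length must be a power of two).
    a = x.astype(float).copy()
    h = 1
    while h < len(a):
        for i in range(0, len(a), 2 * h):
            for j in range(i, i + h):
                a[j], a[j + h] = a[j] + a[j + h], a[j] - a[j + h]
        h *= 2
    return a

def log_energy_entropy(coeffs: np.ndarray, eps: float = 1e-12) -> float:
    # LogEn = sum(log(c_i^2)); eps avoids log(0) for zero coefficients.
    return float(np.sum(np.log(coeffs ** 2 + eps)))

segment = np.random.randn(1024)          # stand-in for a 1024-sample EEG window
coeffs = fwht(segment)
print(log_energy_entropy(coeffs))
```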


Assuntos
Eletroencefalografia , Processamento de Sinais Assistido por Computador , Entropia , Redes Neurais de Computação , Procedimentos Neurocirúrgicos
13.
Sensors (Basel) ; 20(7)2020 Mar 27.
Article in English | MEDLINE | ID: mdl-32230843

ABSTRACT

In healthcare applications, numerous sensors and devices produce massive amounts of data that are the focus of critical tasks, and their management at the edge of the network can be done by a fog computing implementation. However, fog nodes suffer from a lack of resources that can limit the time needed to produce final outcomes/analytics, so each fog node can perform only a small number of tasks. A difficult decision concerns which tasks will be performed locally by fog nodes: each node should select such tasks carefully based on the current contextual information, for example, task priority, resource load, and resource availability. In this paper we suggest a multi-agent fog computing model for healthcare critical task management. The main role of the multi-agent system is mapping between three decision tables to optimize the scheduling of critical tasks by matching tasks with their priority, the load in the network, and network resource availability. The first step is to decide whether a critical task can be processed locally; otherwise, the second step involves selecting the most suitable neighbouring fog node to which to allocate it. If no fog node throughout the network is capable of processing the task, it is then sent to the cloud, facing the highest latency. We test the proposed scheme thoroughly, demonstrating its applicability and optimality at the edge of the network using the iFogSim simulator and UTeM clinic data.
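
The three-way placement decision described above can be sketched as a simple rule: execute locally if the node has capacity, otherwise hand off to the most capable neighbour, otherwise fall back to the cloud. The thresholds and node attributes are illustrative assumptions, not the paper's decision tables.

```python
def place_task(task_load, local, neighbours):
    """Return where to run the task: 'local', a neighbour fog node name, or 'cloud'.

    task_load  -- resource units the task needs
    local      -- dict with the 'free' capacity of this fog node
    neighbours -- list of dicts with 'name' and 'free' capacity
    """
    if local["free"] >= task_load:
        return "local"                        # step 1: process on this fog node
    capable = [n for n in neighbours if n["free"] >= task_load]
    if capable:
        # step 2: most suitable neighbour = the one with the most spare capacity
        return max(capable, key=lambda n: n["free"])["name"]
    # step 3: no fog node can take it; accept the cloud's higher latency
    return "cloud"

print(place_task(4, {"free": 2},
                 [{"name": "fog-2", "free": 3}, {"name": "fog-3", "free": 6}]))
```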


Assuntos
Técnicas Biossensoriais , Simulação por Computador , Atenção à Saúde/tendências , Algoritmos , Computação em Nuvem , Humanos
15.
J Med Syst ; 42(4): 58, 2018 Feb 17.
Article in English | MEDLINE | ID: mdl-29455440

ABSTRACT

Segmentation of blood leucocytes in medical images is viewed as a difficult process due to the variability of blood cells in shape and size and the difficulty of determining the location of the leucocytes. Physical analysis of blood tests to recognize leukocytes is tedious, time-consuming, and liable to error because of the various morphological components of the cells. Segmentation of medical imagery is considered difficult because of the complexity of the images and the non-availability of leucocyte models that fully capture the probable shapes of each structure while accounting for cell overlapping, the expansive variety of blood cells in shape and size, the various elements influencing the outer appearance of blood leucocytes, and low contrast in static microscope images resulting from noise. We suggest a strategy for segmenting blood leucocytes in static microscope images that combines three prevailing techniques from the computer vision literature: image enhancement, support vector machine based segmentation, and filtering out non-ROI (region of interest) regions on the basis of local binary patterns and texture features. Each of these strategies is adapted to the blood leucocyte segmentation problem, so the resulting technique is considerably more robust than its individual components. Eventually, we assess the framework by comparing its output with manual segmentation. The findings of this study show a new approach that automatically segments blood leucocytes and identifies them from static microscope images. Initially, the method uses a trainable segmentation procedure and a trained support vector machine classifier to accurately identify the position of the ROI. After that, non-ROI regions are filtered out based on histogram analysis to discard the non-ROI and choose the right object. Finally, the blood leucocyte type is identified using texture features. The performance of the proposed approach was tested by comparing the system against manual examination by a gynaecologist using diverse scales. A total of 100 microscope images were used for the comparison, and the results showed that the proposed solution is a viable alternative to manual segmentation for accurately determining the ROI. We evaluated blood leucocyte identification using the ROI texture (LBP features); the identification accuracy of the technique is about 95.3%, with 100% sensitivity and 91.66% specificity.
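
A compact sketch of the texture step: compute a uniform LBP histogram per image region and feed it to an SVM, in the spirit of the ROI filtering and leucocyte-type identification described above. The parameters (P, R, bin count) are typical choices, not necessarily those of the paper, and the training data here is random placeholder content.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC

P, R = 8, 1  # 8 neighbours on a circle of radius 1

def lbp_histogram(gray_region: np.ndarray) -> np.ndarray:
    lbp = local_binary_pattern(gray_region, P, R, method="uniform")
    hist, _ = np.histogram(lbp, bins=P + 2, range=(0, P + 2), density=True)
    return hist

# Hypothetical training data: grayscale regions and their leucocyte-type labels.
regions = [np.random.randint(0, 256, (64, 64)) for _ in range(20)]
labels = [i % 2 for i in range(20)]  # e.g. 0 = lymphocyte, 1 = neutrophil

X = np.array([lbp_histogram(r) for r in regions])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict(X[:3]))
```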


Assuntos
Interpretação de Imagem Assistida por Computador/métodos , Processamento de Imagem Assistida por Computador/métodos , Leucócitos/citologia , Reconhecimento Automatizado de Padrão/métodos , Máquina de Vetores de Suporte , Inteligência Artificial , Humanos , Microscopia , Reprodutibilidade dos Testes
16.
Comput Biol Med ; 169: 107845, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38118307

ABSTRACT

Utilizing digital healthcare services for patients who use wheelchairs is a vital and effective means of enhancing their healthcare. Digital healthcare integrates various healthcare facilities, including local laboratories and centralized hospitals, to provide healthcare services for individuals in wheelchairs. In digital healthcare, the Internet of Medical Things (IoMT) allows local wheelchairs to connect with remote digital healthcare services and stream wheelchair sensor data for health monitoring and processing. It has been observed that wheelchair patients older than thirty suffer from high blood pressure, heart disease, elevated blood glucose, and other conditions due to reduced activity caused by their disabilities. However, existing wheelchair IoMT applications are straightforward and do not consider the healthcare of wheelchair patients with these diseases during their disabilities. This paper presents a novel digital healthcare framework for patients with disabilities based on deep federated learning schemes. In the proposed framework, we offer federated learning deep convolutional neural network schemes (FL-DCNNS) that consist of different sub-schemes. The offloading scheme collects data from wheelchair-integrated biosensors and smartwatches, such as blood pressure, heartbeat, blood glucose, and oxygen; the smartwatches work with wearable devices for disabled patients in our framework. We present federated learning-enabled laboratories for data training that share the updated weights, with data security, with the centralized node for decision and prediction, and a decision forest at the centralized healthcare node to decide on aggregation under different constraints: cost, energy, time, and accuracy. We implemented a deep CNN scheme in each laboratory to train and validate the model locally on the node while taking resources into consideration. Simulation results show that FL-DCNNS obtained optimal results on the sensor data, minimized energy by 25%, time by 19%, and cost by 28%, and improved the accuracy of disease prediction by 99% compared with existing digital healthcare schemes for wheelchair patients.
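
The aggregation at the centralized node can be sketched as federated averaging: each laboratory sends its updated weights plus its sample count, and the server computes a sample-weighted average. This generic FedAvg step stands in for the paper's decision-forest-driven aggregation, which also weighs cost, energy, and time.

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Sample-weighted average of model weights.

    client_weights -- list of weight lists (one list of numpy arrays per laboratory)
    client_sizes   -- number of local training samples at each laboratory
    """
    total = float(sum(client_sizes))
    n_layers = len(client_weights[0])
    return [
        sum(w[layer] * (size / total) for w, size in zip(client_weights, client_sizes))
        for layer in range(n_layers)
    ]

# Two hypothetical laboratories with a tiny two-layer model.
lab_a = [np.ones((3, 3)), np.zeros(3)]
lab_b = [np.full((3, 3), 3.0), np.ones(3)]
global_weights = federated_average([lab_a, lab_b], client_sizes=[100, 300])
print(global_weights[0][0, 0])  # 0.25*1 + 0.75*3 = 2.5
```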


Assuntos
Pessoas com Deficiência , Instalações de Saúde , Humanos , Hospitais , Laboratórios , Glucose
17.
Comput Biol Med ; 178: 108694, 2024 Jun 08.
Article in English | MEDLINE | ID: mdl-38870728

ABSTRACT

Telemedicine is an emerging development in the healthcare domain, where Internet of Things (IoT) fiber-optic technology assists telemedicine applications to improve overall digital healthcare performance for society. Telemedicine applications include bowel disease monitoring based on fiber-optic laser endoscopy, fiber-optic lighting for gastrointestinal disease, remote doctor-patient communication, and remote surgery. However, many existing systems are not effective, and their approaches based on deep reinforcement learning have not obtained optimal results. This paper presents a fiber-optic IoT healthcare system based on deep reinforcement learning with combinatorial constraint scheduling for hybrid telemedicine applications. In the proposed system, we propose the adaptive security deep Q-learning network (ASDQN) algorithm to execute all telemedicine applications under their given quality-of-service constraints (deadline, latency, security, and resources). For the problem solution, we have exploited different fiber-optic endoscopy datasets with image, video, and numeric data for telemedicine applications. The objective is to minimize the overall latency of telemedicine applications (e.g., local, communication, and edge nodes) and maximize the overall reward during offloading and scheduling on different nodes. The simulation results show that ASDQN outperforms the existing state-action-reward-state-action (SARSA) and deep Q-learning network (DQN) policies for all telemedicine applications with their QoS and objectives during execution and scheduling on different nodes.
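
The core update behind a DQN-style scheduler can be sketched as below: the state is a coarse view of node load, the action is the node chosen for a telemedicine task, and the target uses the maximum over next-state actions (off-policy). This toy uses a lookup table instead of a neural network and an invented reward; it is not the ASDQN algorithm itself.

```python
import random
from collections import defaultdict

ACTIONS = ["local", "edge-1", "edge-2"]        # candidate execution nodes
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = defaultdict(lambda: {a: 0.0 for a in ACTIONS})

def choose(state):
    if random.random() < EPSILON:                       # explore
        return random.choice(ACTIONS)
    return max(Q[state], key=Q[state].get)              # exploit

def update(state, action, reward, next_state):
    # Off-policy (Q-learning / DQN-style) target: bootstrap on the best next action.
    target = reward + GAMMA * max(Q[next_state].values())
    Q[state][action] += ALPHA * (target - Q[state][action])

# One illustrative transition: low-load state, chosen node meets its latency deadline.
update(state="low-load", action=choose("low-load"), reward=1.0, next_state="low-load")
print(dict(Q["low-load"]))
```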

18.
Diagnostics (Basel) ; 13(4)2023 Feb 10.
Article in English | MEDLINE | ID: mdl-36832152

ABSTRACT

This research aims to review and evaluate the most relevant scientific studies on deep learning (DL) models in the omics field. It also aims to fully realize the potential of DL techniques in omics data analysis by demonstrating this potential and identifying the key challenges that must be addressed. Several elements are essential for comprehending these studies when surveying the existing literature, for example, the clinical applications and datasets they use, and the published literature highlights the difficulties encountered by other researchers. In addition to looking for other studies, such as guidelines, comparative studies, and review papers, a systematic approach was used to search all relevant publications on omics and DL using different keyword variants. The search procedure was conducted from 2018 to 2022 on four literature indexes: IEEE Xplore, Web of Science, ScienceDirect, and PubMed. These indexes were chosen because they offer sufficient coverage of, and links to, numerous papers in the biological field. A total of 65 articles were added to the final list, and the inclusion and exclusion criteria were specified. Of the 65 publications, 42 are clinical applications of DL to omics data. Furthermore, 16 of the 65 articles are review publications based on single- and multi-omics data from the proposed taxonomy. Finally, only a small number of articles (7/65) focused on comparative analysis and guidelines. The use of DL for studying omics data presented several obstacles related to DL itself, preprocessing procedures, datasets, model validation, and testbed applications, and numerous relevant investigations were performed to address these issues. Unlike other review papers, our study distinctly reflects different observations on omics with DL model areas. We believe that the results of this study can be a useful guideline for practitioners who seek a comprehensive view of the role of DL in omics data analysis.

19.
Comput Biol Med ; 154: 106617, 2023 03.
Article in English | MEDLINE | ID: mdl-36753981

ABSTRACT

These days, the proportion of cancer among patients has been growing day by day, and many cancer cases have recently been reported in different clinical hospitals. Many machine learning algorithms have been suggested in the literature to predict cancer diseases with the same class types based on training and test data, yet there is considerable room for further research. In this paper, the study looks into different types of cancer by analyzing, classifying, and processing a multi-omics dataset in a fog cloud network. Based on SARSA on-policy and multi-omics workload learning, made possible by reinforcement learning, the study devises new hybrid cancer detection schemes. The system consists of different layers, such as clinical data collection via laboratories and tool processes (biopsy, colonoscopy, and mammography) at the distributed omics-based clinics in the network. The study considers different cancer classes, such as carcinomas, sarcomas, leukemias, and lymphomas, with their types, and processes them using the multi-omics distributed clinics in the network. To solve the problem, the study presents the omics cancer workload state-action-reward-state-action ("SARSA") learning scheme (OCWLS), which is built on an on-policy learning scheme over parameters such as states, actions, timestamps, reward, accuracy, and processing-time constraints. The goal is to process multiple cancer classes and match workload features while reducing processing time in the distributed clinical hospitals. Simulation results show that OCWLS is better than other machine learning methods in terms of processing time, extracting features from multiple classes of cancer, and matching in the system.
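
For contrast with the DQN-style sketch shown above, the OCWLS-style on-policy update can be illustrated with the standard SARSA rule, which bootstraps on the action the policy actually takes next rather than the greedy maximum. The states, actions, and reward here are invented placeholders for the clinic-workload setting.

```python
ALPHA, GAMMA = 0.1, 0.9

# Q[(state, action)] for a toy workload-matching problem.
Q = {}

def q(state, action):
    return Q.get((state, action), 0.0)

def sarsa_update(state, action, reward, next_state, next_action):
    # On-policy: the bootstrap term uses the action the policy actually chose next.
    target = reward + GAMMA * q(next_state, next_action)
    Q[(state, action)] = q(state, action) + ALPHA * (target - q(state, action))

# One illustrative step: routing a carcinoma-omics workload to clinic B paid off,
# and the policy then chose clinic A for the next workload.
sarsa_update(("carcinoma", "high-load"), "clinic-B", reward=1.0,
             next_state=("sarcoma", "low-load"), next_action="clinic-A")
print(Q)
```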


Assuntos
Multiômica , Neoplasias , Humanos , Recompensa , Algoritmos , Reforço Psicológico , Neoplasias/diagnóstico
20.
Comput Biol Med ; 166: 107539, 2023 Oct 04.
Article in English | MEDLINE | ID: mdl-37804778

ABSTRACT

The incidence of Autism Spectrum Disorder (ASD) among children, attributed to genetics and environmental factors, has been increasing daily. ASD is a non-curable neurodevelopmental disorder that affects children's communication, behavior, social interaction, and learning skills. While machine learning has been employed for ASD detection in children, existing ASD frameworks offer limited services to monitor and improve the health of ASD patients. This paper presents a complex and efficient ASD framework with comprehensive services to enhance the results of existing ASD frameworks. Our proposed approach is the Federated Learning-enabled CNN-LSTM (FCNN-LSTM) scheme, designed for ASD detection in children using multimodal datasets. The ASD framework is built in a distributed computing environment where different ASD laboratories are connected to the central hospital. The FCNN-LSTM scheme enables local laboratories to train and validate different datasets, including the Ages and Stages Questionnaires (ASQ), Facial Communication and Symbolic Behavior Scales (CSBS) Dataset, Parents Evaluate Developmental Status (PEDS), Modified Checklist for Autism in Toddlers (M-CHAT), and Screening Tool for Autism in Toddlers and Children (STAT) datasets, on different computing laboratories. To ensure the security of patient data, we have implemented a security mechanism based on the Advanced Encryption Standard (AES) within the federated learning environment. This mechanism allows all laboratories to offload and download data securely. We integrate all trained datasets at the aggregation nodes and make the final decision for ASD patients based on the decision process tree. Additionally, we have designed various Internet of Things (IoT) applications to improve the efficiency of ASD patients and achieve more optimal learning results. Simulation results demonstrate that our proposed framework achieves an ASD detection accuracy of approximately 99% compared to all existing ASD frameworks.
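
The data-protection step can be sketched with authenticated AES (AES-GCM) from the Python cryptography package: each laboratory encrypts its update before offloading, and the aggregator decrypts it. Key distribution, the actual payload format, and the federated plumbing are outside this sketch and are assumptions here.

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # shared, pre-distributed key (assumption)
aesgcm = AESGCM(key)

def encrypt_update(model_bytes: bytes) -> bytes:
    nonce = os.urandom(12)                  # unique nonce per message
    return nonce + aesgcm.encrypt(nonce, model_bytes, None)

def decrypt_update(payload: bytes) -> bytes:
    nonce, ciphertext = payload[:12], payload[12:]
    return aesgcm.decrypt(nonce, ciphertext, None)

update = b"serialized CNN-LSTM weights from laboratory 3"
assert decrypt_update(encrypt_update(update)) == update
```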
