Results 1 - 9 of 9
1.
Neural Netw ; 165: 860-867, 2023 Aug.
Article in English | MEDLINE | ID: mdl-37437364

ABSTRACT

As the noisy intermediate-scale quantum (NISQ) era begins, the quantum neural network (QNN) is a promising solution to many problems that classical neural networks cannot solve. In particular, the quantum convolutional neural network (QCNN) is now receiving a lot of attention because it can process higher-dimensional inputs than a plain QNN. However, due to the nature of quantum computing, it is difficult to scale up a QCNN to extract a sufficient number of features because of barren plateaus; this is especially challenging in classification with high-dimensional input data. Motivated by this, a novel stereoscopic 3D scalable QCNN (sQCNN-3D) is proposed for point cloud data processing in classification applications. Furthermore, reverse fidelity training (RF-Train) is applied on top of sQCNN-3D to diversify the features extracted with a limited number of qubits, exploiting the fidelity between quantum states. Our data-intensive performance evaluation verifies that the proposed algorithm achieves the desired performance (a toy sketch of the fidelity-based diversification idea follows this entry).


Subject(s)
Computing Methodologies , Quantum Theory , Neural Networks, Computer , Algorithms , Cloud Computing
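
Entry 1's RF-Train exploits quantum-state fidelity to diversify features, but the abstract does not give the objective. Below is a minimal NumPy sketch, assuming a regularizer that penalizes the average pairwise fidelity among extracted feature states; the function names and the loss form are assumptions, not the paper's method.

```python
import numpy as np

def fidelity(psi: np.ndarray, phi: np.ndarray) -> float:
    """Fidelity |<psi|phi>|^2 between two pure states (unit-norm complex vectors)."""
    return float(np.abs(np.vdot(psi, phi)) ** 2)

def reverse_fidelity_loss(states: list) -> float:
    """Hypothetical RF-Train-style regularizer: mean pairwise fidelity among
    feature states. Minimizing it pushes the states apart, i.e., diversifies
    the features a small-qubit QCNN can extract."""
    n = len(states)
    total, pairs = 0.0, 0
    for i in range(n):
        for j in range(i + 1, n):
            total += fidelity(states[i], states[j])
            pairs += 1
    return total / max(pairs, 1)

# Toy usage: two nearly identical 2-qubit feature states score close to 1,
# so the regularizer penalizes them heavily.
psi = np.array([1, 0, 0, 0], dtype=complex)
phi = np.array([0.99, 0.14, 0, 0], dtype=complex)
phi /= np.linalg.norm(phi)
print(reverse_fidelity_loss([psi, phi]))
```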
2.
Comput Biol Med ; 156: 106739, 2023 04.
Article in English | MEDLINE | ID: mdl-36889025

ABSTRACT

In this work, we present a deep reinforcement learning-based approach as a baseline system for autonomous propofol infusion control. Specifically, we design an environment that simulates the possible conditions of a target patient from input demographic data, and a reinforcement learning-based system that predicts the proper level of propofol infusion to maintain stable anesthesia, even under dynamic conditions that can affect the decision-making process, such as manual control of remifentanil by anesthesiologists and varying patient conditions under anesthesia. Through an extensive set of evaluations using patient data from 3000 subjects, we show that the proposed method stabilizes the anesthesia state by managing the bispectral index (BIS) and effect-site concentration for patients with varying conditions (a toy environment sketch follows this entry).


Subject(s)
Anesthesia , Propofol , Humans , Anesthetics, Intravenous , Feasibility Studies , Piperidines , Anesthesia, Intravenous/methods , Electroencephalography
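
Entry 2 trains an RL agent against a simulated patient environment. The sketch below is a deliberately toy version of such an environment, assuming a first-order effect-site model and a linear BIS response; the class name, dynamics, and constants are illustrative, not the paper's pharmacokinetic/pharmacodynamic model.

```python
import random

class ToyAnesthesiaEnv:
    """Hypothetical, highly simplified stand-in for the paper's patient simulator.
    State: (BIS, effect-site concentration Ce). Action: propofol infusion rate."""
    BIS_TARGET = 50.0

    def __init__(self):
        self.bis, self.ce = 95.0, 0.0  # awake baseline

    def step(self, infusion_rate: float):
        self.ce += 0.1 * (infusion_rate - self.ce)             # toy effect-site kinetics
        self.bis = 95.0 - 9.0 * self.ce + random.gauss(0, 1)   # toy dose-response
        reward = -abs(self.bis - self.BIS_TARGET)              # stay near target BIS
        return (self.bis, self.ce), reward

env = ToyAnesthesiaEnv()
for t in range(100):
    # Crude bang-bang policy standing in for the learned RL policy.
    action = 5.0 if env.bis > ToyAnesthesiaEnv.BIS_TARGET else 0.0
    (bis, ce), r = env.step(action)
print(f"final BIS ~ {bis:.1f}")
```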
3.
Article in English | MEDLINE | ID: mdl-35853065

ABSTRACT

This article provides a hierarchical reinforcement learning (RL)-based solution for the automated drug infusion field. The learning policy is divided into two tasks: 1) learning a trajectory generative model and 2) learning a planning policy model. The proposed deep infusion assistant policy gradient (DIAPG) model draws inspiration from adversarial autoencoders (AAEs) and learns latent representations of hypnotic-depth trajectories. Given trajectories drawn from the generative model, the planning policy infers a dose of propofol for stable sedation of a patient under total intravenous anesthesia (TIVA) with propofol and remifentanil. Extensive evaluation shows that the DIAPG model can effectively stabilize the bispectral index (BIS) and effect-site concentration given a potentially time-varying target sequence. The proposed DIAPG improves performance by 530% and 15% over drug infusion by a human expert and by a standard reinforcement learning algorithm, respectively (a sketch of the two-level decomposition follows this entry).
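
Entry 3 splits the policy into a trajectory generative model and a planning policy. The sketch below mimics that two-level decomposition, with a hand-built trajectory sampler standing in for the AAE decoder and a proportional rule standing in for the learned planner; all names, dynamics, and constants are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_trajectory(z_dim: int = 8, horizon: int = 60) -> np.ndarray:
    """Stand-in for the AAE decoder: map a latent code z ~ N(0, I) to a smooth
    hypnotic-depth (BIS) target trajectory. DIAPG learns this map instead."""
    z = rng.standard_normal(z_dim)
    t = np.linspace(0, 1, horizon)
    basis = np.stack([np.cos(np.pi * k * t) for k in range(z_dim)])  # smooth basis
    return 50.0 + 5.0 * (z @ basis) / z_dim   # trajectories around BIS 50

def planning_policy(bis_now: float, bis_target: float) -> float:
    """Toy proportional planner: infuse more propofol when measured BIS is
    above target. The learned planning policy replaces this heuristic."""
    return max(0.0, 0.2 * (bis_now - bis_target))

traj = sample_trajectory()
dose = planning_policy(bis_now=60.0, bis_target=traj[0])
print(f"first target {traj[0]:.1f}, suggested dose {dose:.2f}")
```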

4.
Sci Rep ; 12(1): 1534, 2022 01 27.
Article in English | MEDLINE | ID: mdl-35087165

ABSTRACT

It seems as though progressively more people are in the race to upload content, data, and information online, and hospitals have not neglected this trend either. Hospitals are now at the forefront of multi-site medical data sharing, providing ground-breaking advancements in the way health records are shared and patients are diagnosed. Sharing of medical data is essential in modern medical research. Yet, as with all data-sharing technology, the challenge is to balance improved treatment with protecting patients' personal information. This paper provides a novel split learning algorithm, coined "multi-site split learning", which enables secure transfer of medical data between multiple hospitals without fear of exposing the personal data contained in patient records. It also explores the effects of varying the number of end systems and the ratio of data imbalance on deep learning performance. A guideline for an optimal configuration of split learning that ensures privacy of patient data while maintaining performance is given empirically. We demonstrate the benefits of our multi-site split learning algorithm, especially its privacy-preserving properties, using CT scans of COVID-19 patients, X-ray bone scans, and cholesterol-level medical data (a minimal split-training sketch follows this entry).


Subject(s)
Algorithms , Bone and Bones/diagnostic imaging , COVID-19/diagnostic imaging , Cholesterol/blood , Hospitals , Privacy , Feasibility Studies , Female , Humans , Male , Tomography, X-Ray Computed , X-Rays
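
Entry 4's split learning cuts a network between hospital and server so that raw records stay on site. Below is a minimal single-process PyTorch sketch of where the cut lies; in a real multi-site deployment the "smashed" activations and their gradients would cross the network between separate machines, and the layer sizes and data here are arbitrary placeholders.

```python
import torch
import torch.nn as nn

# Hypothetical split: each hospital keeps the first layers (and its raw data)
# locally; only the intermediate "smashed" activation crosses the wire to the
# shared server-side model, so patient records never leave the site.
client_model = nn.Sequential(nn.Linear(32, 16), nn.ReLU())   # runs at hospital
server_model = nn.Sequential(nn.Linear(16, 2))               # runs at server
opt = torch.optim.SGD(
    list(client_model.parameters()) + list(server_model.parameters()), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

features = torch.randn(8, 32)            # one hospital's local batch (synthetic)
labels = torch.randint(0, 2, (8,))

smashed = client_model(features)         # sent over the wire, not raw data
logits = server_model(smashed)           # server-side forward pass
loss = loss_fn(logits, labels)
opt.zero_grad()
loss.backward()                          # gradients flow back through the cut
opt.step()
```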
5.
Sensors (Basel) ; 21(4)2021 Feb 20.
Article in English | MEDLINE | ID: mdl-33672454

ABSTRACT

Green tide, a serious water pollution problem, is caused by complex relationships among various factors, such as flow rate, several water quality indicators, and weather. Because existing methods are not suited to identifying these relationships and making accurate predictions, a new system and algorithm are required to predict the green tide phenomenon and to minimize the related damage before a green tide occurs. For this purpose, we consider a new network model using smart-sensor-based federated learning, which can use distributed observation data with geographically separated local models. Moreover, we design an optimal scheduler that exploits real-time big-data arrivals to make the overall network system efficient. The proposed scheduling algorithm is effective in terms of (1) data usage and (2) the performance of green tide occurrence prediction models. The advantages of the proposed algorithm are verified via data-intensive experiments with real water-quality big data (a minimal federated-averaging sketch follows this entry).
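
Entry 5's federated setup trains geographically separated local models and aggregates them centrally. Below is a minimal sketch of plain federated averaging on a linear model with synthetic data; the paper's scheduler, which decides which real-time arrivals each site uses, is not modeled here.

```python
import numpy as np

def local_update(weights, X, y, lr=0.01, steps=5):
    """One sensor site's local training on its own observations (linear model,
    squared loss). Raw water-quality data never leaves the site."""
    w = weights.copy()
    for _ in range(steps):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_avg(site_weights, site_sizes):
    """Server step: weight each site's model by its data volume (plain FedAvg)."""
    total = sum(site_sizes)
    return sum(n / total * w for w, n in zip(site_weights, site_sizes))

rng = np.random.default_rng(1)
true_w = np.array([1.0, -2.0, 0.5, 0.0])     # synthetic ground truth
global_w = np.zeros(4)
for _round in range(10):                     # communication rounds
    updates, sizes = [], []
    for _site in range(3):                   # three separated sensor sites
        X = rng.standard_normal((20, 4))
        y = X @ true_w + 0.1 * rng.standard_normal(20)
        updates.append(local_update(global_w, X, y))
        sizes.append(len(y))
    global_w = fed_avg(updates, sizes)
print(global_w)                              # approaches true_w
```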

6.
Sensors (Basel) ; 18(10)2018 Oct 09.
Article in English | MEDLINE | ID: mdl-30304788

ABSTRACT

The increasing use of Internet of Things (IoT) devices in specific areas results in interference among them, and the quality of communications can be severely degraded. To deal with this interference issue, the IEEE 802.11ax standard has been established for hyper-dense wireless networking systems. 802.11ax adopts a new candidate technology called the multiple network allocation vector (NAV) to mitigate the interference problem. In this paper, we point out a potential problem with the multiple network allocation vector that can delay communication among IoT devices in hyper-dense wireless networks. Furthermore, this paper introduces an adaptive beam alignment algorithm for interference resolution and analyzes the potential delays of communications among IoT devices under interference conditions. Finally, we simulate our proposed algorithm in densely deployed environments and show that the interference can be mitigated and that IEEE 802.11ax-based IoT devices can utilize the air interface more fairly than with the conventional IEEE 802.11 distributed coordination function (a greedy beam-selection sketch follows this entry).
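
Entry 6's adaptive beam alignment steers devices away from interference. The sketch below is a hypothetical greedy variant: sweep the sectors, accept early if the measured interference falls below a threshold, otherwise keep the quietest sector. The sensing function and threshold are stand-ins for actual beam-training measurements.

```python
import random

def measure_interference(sector: int) -> float:
    """Stand-in for per-sector interference sensing; a real device would sweep
    training beams and measure received interference power."""
    random.seed(sector)          # deterministic toy channel per sector
    return random.uniform(0, 1)

def adaptive_beam_alignment(n_sectors: int = 8, threshold: float = 0.3) -> int:
    """Hypothetical greedy variant of the alignment idea: steer toward the
    sector with the least observed interference; accept early if below threshold."""
    best_sector, best_i = 0, float("inf")
    for s in range(n_sectors):
        i = measure_interference(s)
        if i < threshold:
            return s                          # good enough, stop the sweep early
        if i < best_i:
            best_sector, best_i = s, i
    return best_sector

print(adaptive_beam_alignment())
```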

7.
PLoS One ; 12(8): e0182527, 2017.
Article in English | MEDLINE | ID: mdl-28796804

ABSTRACT

As an important part of the IoTization trend, wireless sensing technologies are involved in many fields of human life. In cellular network evolution, Long Term Evolution-Advanced (LTE-A) networks with machine-type communication (MTC) features (named LTE-M) provide a promising infrastructure for the proliferation of Internet of Things (IoT) sensing platforms. However, LTE-M may not be optimally exploited to directly support such low-data-rate devices in terms of energy efficiency, since it depends on core LTE technologies originally designed for high-data-rate services. Focusing on this circumstance, we propose a novel adaptive modulation and coding selection (AMCS) algorithm to address the energy consumption problem in the LTE-M-based IoT sensing platform. The proposed algorithm determines the optimal pair of MCS level and number of physical resource blocks (#PRBs) at which the transport block size is sufficient to packetize the sensing data with the minimum transmit power. In addition, a quantity-oriented resource planning (QORP) technique that uses these optimal MCS levels as the main criteria for spectrum allocation is proposed to better adapt to sensing-node requirements. The simulation results reveal that the proposed approach reduces the energy consumption of IoT sensing nodes and the number of required PRBs by up to 23.09% and 25.98%, respectively (a toy feasibility search follows this entry).


Subject(s)
Wireless Technology , Algorithms , Computer Simulation , Health Resources , Internet
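
Entry 7's AMCS searches for the (MCS, #PRB) pair whose transport block just fits the sensing payload at minimum transmit power. The sketch below shows that feasibility-then-minimize structure with illustrative tables; real transport block sizes and power figures come from the 3GPP specifications, not these toy formulas.

```python
# Hypothetical AMCS-style search: find the (MCS, #PRB) pair whose transport
# block is large enough for the sensing payload at the lowest transmit power.
TBS_BITS = {  # (mcs, n_prb) -> bits per transport block (toy values, not 3GPP)
    (mcs, prb): 24 * (mcs + 1) * prb
    for mcs in range(10) for prb in range(1, 7)
}

def tx_power_mw(mcs: int, n_prb: int) -> float:
    """Toy power model: higher-order MCS needs more SNR; more PRBs, more power."""
    return n_prb * (1.0 + 0.5 * mcs)

def select_amcs(payload_bits: int):
    """Enumerate feasible configurations, then take the minimum-power one."""
    feasible = [(tx_power_mw(m, p), m, p)
                for (m, p), tbs in TBS_BITS.items() if tbs >= payload_bits]
    power, mcs, prb = min(feasible)
    return mcs, prb, power

print(select_amcs(payload_bits=300))   # -> minimum-power (MCS, #PRB) pair
```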
8.
PLoS One ; 11(12): e0167447, 2016.
Article in English | MEDLINE | ID: mdl-27997929

ABSTRACT

This paper addresses computation procedures for estimating the impact of interference on 60 GHz IEEE 802.11ad uplink access, in order to construct a visual big-data database from randomly deployed surveillance-camera sensing devices. The large-scale visual information acquired from the surveillance cameras will be used to organize the big-data database; that is, this estimation is essential for constructing a centralized cloud-enabled surveillance database. The study captures the interference impact on target cloud access points from multiple interference components generated by 60 GHz wireless transmissions from nearby surveillance cameras to their associated cloud access points. Under this uplink interference scenario, the interference impact on the main wireless transmission, from a target surveillance camera to its associated target cloud access point, is measured and estimated across a number of settings, taking 60 GHz radiation characteristics and antenna radiation pattern models into account (an aggregate-interference sketch follows this entry).


Subject(s)
Cloud Computing , Electronic Data Processing/methods , Video Recording/methods
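
Entry 8 estimates aggregate uplink interference at a 60 GHz access point. The sketch below sums Friis-style received powers from co-channel camera uplinks into an SINR estimate; the gains, distances, and noise floor are illustrative, and the paper's detailed antenna radiation pattern models are reduced to scalar gains.

```python
import math
import random

def rx_power_mw(tx_mw: float, dist_m: float, gain: float) -> float:
    """Toy Friis-style received power at 60 GHz; constants are illustrative."""
    wavelength = 3e8 / 60e9
    return tx_mw * gain * (wavelength / (4 * math.pi * dist_m)) ** 2

def sinr_db(signal_mw: float, interferer_mw: list, noise_mw: float = 1e-9) -> float:
    """SINR at the target cloud access point given co-channel uplinks from
    nearby cameras (the aggregate-interference quantity being estimated)."""
    return 10 * math.log10(signal_mw / (sum(interferer_mw) + noise_mw))

random.seed(0)
signal = rx_power_mw(10.0, dist_m=5.0, gain=100.0)   # main camera uplink, boresight
interference = [rx_power_mw(10.0, random.uniform(10, 50), gain=1.0)  # sidelobes
                for _ in range(20)]                  # 20 nearby camera uplinks
print(f"estimated SINR: {sinr_db(signal, interference):.1f} dB")
```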
9.
PLoS One ; 11(8): e0160375, 2016.
Article in English | MEDLINE | ID: mdl-27494411

ABSTRACT

The convergent communication network will play an important role as a single platform that unifies heterogeneous networks and integrates emerging technologies with existing legacy networks. Although many feasible solutions have been proposed, they have not become convergent frameworks, since they mainly focus on converting functions between various protocols and interfaces in edge networks and on handling functions for multiple services in core networks, e.g., the Multi-Protocol Label Switching (MPLS) technique. Software-defined networking (SDN), on the other hand, is expected to be the ideal basis for the convergent network, since it can provide a controllable, dynamic, and cost-effective network. However, behind its many advantages SDN has an inherent structural vulnerability: the centralized control plane. As the brain of the network, the controller manages the whole network, which makes it attractive to attackers. In this context, we propose a novel solution called the adaptive suspicious prevention (ASP) mechanism to protect the controller from the denial-of-service (DoS) attacks that could incapacitate an SDN. The ASP is integrated with the OpenFlow protocol to detect and prevent DoS attacks effectively. Our comprehensive experimental results show that ASP enhances the resilience of an SDN network against DoS attacks by up to 38% (a rate-based filtering sketch follows this entry).


Subject(s)
Computer Communication Networks , Computer Security , Algorithms , Software , Workflow
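
Entry 9's ASP mechanism screens traffic before it reaches the SDN controller. The sketch below illustrates only the rate-based intuition, blocking sources whose packet-in rate far exceeds their peers'; the class, threshold rule, and interface are assumptions, since the real mechanism is integrated with the OpenFlow protocol rather than implemented as a standalone filter.

```python
from collections import defaultdict

class AdaptiveSuspiciousPrevention:
    """Hypothetical ASP-style filter in front of the controller: track per-source
    packet-in counts and block sources whose count exceeds an adaptive threshold
    (a multiple of the mean over the other sources)."""
    def __init__(self, factor: float = 3.0):
        self.counts = defaultdict(int)
        self.factor = factor
        self.blocked = set()

    def on_packet_in(self, src: str) -> bool:
        """Return True if the packet-in should be forwarded to the controller."""
        if src in self.blocked:
            return False
        self.counts[src] += 1
        others = [c for s, c in self.counts.items() if s != src]
        if others and self.counts[src] > self.factor * (sum(others) / len(others)):
            self.blocked.add(src)          # rate far above peers: suspicious
            return False
        return True

asp = AdaptiveSuspiciousPrevention()
for pkt_src in ["h1", "h2", "atk"] * 5 + ["atk"] * 50:   # "atk" floods packet-ins
    asp.on_packet_in(pkt_src)
print(asp.blocked)                                        # {'atk'}
```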