Results 1 - 11 of 11
1.
Sensors (Basel) ; 24(10)2024 May 15.
Article in English | MEDLINE | ID: mdl-38793994

ABSTRACT

Personal identification is an important aspect of managing electronic health records (EHRs), ensuring secure access to patient information, and maintaining patient privacy. Traditionally, biometrics, signatures, username/password credentials, photo identification, and similar methods are employed for user authentication. However, these methods can be prone to security breaches, identity theft, and user inconvenience. The security of personal information is of paramount importance, particularly in the context of EHRs. To address this, our study leverages ResNet1D, a deep learning architecture, to analyze surface electromyography (sEMG) signals for robust identification purposes. The proposed ResNet1D-based personal identification approach using sEMG signals can offer an alternative and potentially more secure method for personal identification in EHR systems. We collected a multi-session sEMG signal database from individuals, focusing on hand gestures. The ResNet1D model was trained on this database to learn discriminative features for both gesture and personal identification tasks. For personal identification, the model validated an individual's identity by comparing captured features with the individual's stored templates in the healthcare EHR system, allowing secure access to sensitive medical information. Data were obtained over two channels while each of the 200 subjects performed 12 motions. There were three sessions, and each motion was repeated 10 times, with intervals of a day or longer between sessions. Experiments were conducted on subsets of 5, 10, 15, and 20 subjects randomly sampled from the 200 subjects in the database, and the ResNet1D model achieved identification accuracy rates of 97%, 96%, 87%, and 82%, respectively. The proposed model can be integrated with healthcare EHR systems to enable secure and reliable personal identification and the safeguarding of patient information.


Subject(s)
Electromyography , Electronic Health Records , Humans , Electromyography/methods , Male , Adult , Female , Computer Security , Deep Learning , Signal Processing, Computer-Assisted , Young Adult
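The abstract above does not give implementation details; as a rough illustration of the kind of model it describes, the following is a minimal sketch of a ResNet1D-style classifier for two-channel sEMG windows in PyTorch. The block structure, window length, layer widths, and class names (ResidualBlock1D, ResNet1D) are assumptions made for this sketch, not the authors' architecture.

```python
# Minimal sketch of a ResNet1D-style classifier for two-channel sEMG windows.
# Block structure, window length, and layer widths are illustrative assumptions.
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    def __init__(self, in_ch, out_ch, stride=1):
        super().__init__()
        self.conv1 = nn.Conv1d(in_ch, out_ch, kernel_size=7, stride=stride, padding=3)
        self.bn1 = nn.BatchNorm1d(out_ch)
        self.conv2 = nn.Conv1d(out_ch, out_ch, kernel_size=7, padding=3)
        self.bn2 = nn.BatchNorm1d(out_ch)
        self.relu = nn.ReLU(inplace=True)
        # 1x1 projection when the identity path changes shape
        self.shortcut = (nn.Sequential()
                         if stride == 1 and in_ch == out_ch
                         else nn.Sequential(nn.Conv1d(in_ch, out_ch, 1, stride=stride),
                                            nn.BatchNorm1d(out_ch)))

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + self.shortcut(x))

class ResNet1D(nn.Module):
    def __init__(self, num_subjects, in_channels=2):
        super().__init__()
        self.features = nn.Sequential(
            ResidualBlock1D(in_channels, 32),
            ResidualBlock1D(32, 64, stride=2),
            ResidualBlock1D(64, 128, stride=2),
            nn.AdaptiveAvgPool1d(1),      # global pooling over time
        )
        self.classifier = nn.Linear(128, num_subjects)

    def forward(self, x):                 # x: (batch, 2, window_length)
        feats = self.features(x).squeeze(-1)
        return self.classifier(feats)     # subject logits

# Toy forward pass: a batch of 8 windows, 2 channels, 1000 samples each.
model = ResNet1D(num_subjects=20)
logits = model(torch.randn(8, 2, 1000))
print(logits.shape)  # torch.Size([8, 20])
```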
2.
Sensors (Basel) ; 23(5)2023 Mar 03.
Article in English | MEDLINE | ID: mdl-36904977

ABSTRACT

As a critical enabler of beyond-fifth-generation (B5G) technology, millimeter-wave (mmWave) beamforming has been studied for many years. Multiple-input multiple-output (MIMO) systems, the baseline for beamforming operation, rely heavily on multiple antennas to stream data in mmWave wireless communication systems. High-speed mmWave applications face challenges such as blockage and latency overhead. In addition, the efficiency of mobile systems is severely impacted by the high training overhead required to discover the best beamforming vectors in large-antenna-array mmWave systems. To mitigate these challenges, in this paper we propose a novel deep reinforcement learning (DRL) based coordinated beamforming scheme in which multiple base stations (BSs) jointly serve one mobile station (MS). The solution uses the proposed DRL model to predict suboptimal beamforming vectors at the BSs from the candidate beamforming codebook. This enables a complete system that supports highly mobile mmWave applications with dependable coverage, minimal training overhead, and low latency. Numerical results demonstrate that the proposed algorithm remarkably increases the achievable sum-rate capacity for the highly mobile mmWave massive MIMO scenario while ensuring low training and latency overhead.
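As a loose illustration of DRL-style beam selection from a codebook, the sketch below trains a small Q-network with a one-step TD target on a toy environment. The state and reward definitions, network size, and codebook size are invented stand-ins; the paper's coordinated multi-BS formulation is not reproduced here.

```python
# Toy sketch of learning to pick a beam index from a codebook with a small
# Q-network and a one-step TD target. State/reward are illustrative stand-ins.
import torch
import torch.nn as nn
import numpy as np

N_BEAMS, STATE_DIM = 16, 8            # codebook size and observation size (assumed)
qnet = nn.Sequential(nn.Linear(STATE_DIM, 64), nn.ReLU(),
                     nn.Linear(64, N_BEAMS))
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.9, 0.1

def env_step(state, beam):
    """Toy environment: reward is higher when the chosen beam index matches
    a direction encoded in the state; returns (reward, next_state)."""
    best = int(abs(state).argmax() * N_BEAMS / STATE_DIM)
    reward = 1.0 - abs(beam - best) / N_BEAMS
    return reward, np.random.randn(STATE_DIM).astype(np.float32)

state = np.random.randn(STATE_DIM).astype(np.float32)
for step in range(2000):
    s = torch.from_numpy(state)
    with torch.no_grad():                              # epsilon-greedy beam choice
        greedy = int(qnet(s).argmax())
    beam = np.random.randint(N_BEAMS) if np.random.rand() < eps else greedy
    reward, next_state = env_step(state, beam)
    with torch.no_grad():
        target = reward + gamma * qnet(torch.from_numpy(next_state)).max()
    loss = (qnet(s)[beam] - target) ** 2               # one-step TD error
    opt.zero_grad(); loss.backward(); opt.step()
    state = next_state
```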

3.
Sensors (Basel) ; 23(8)2023 Apr 17.
Article in English | MEDLINE | ID: mdl-37112398

ABSTRACT

Perceptual encryption (PE) hides the identifiable information of an image in such a way that its intrinsic characteristics remain intact. This recognizable perceptual quality can be used to enable computation in the encryption domain. A class of PE algorithms based on block-level processing has recently gained popularity for its ability to generate JPEG-compressible cipher images. These methods, however, trade off security efficiency against compression savings through the chosen block size. Several methods (such as processing each color component independently, changing the image representation, and sub-block-level processing) have been proposed to manage this tradeoff effectively. The current study adapts these assorted practices into a uniform framework to provide a fair comparison of their results. Specifically, their compression quality is investigated under various design parameters, such as the choice of colorspace, image representation, chroma subsampling, quantization tables, and block size. Our analyses show that, at best, the PE methods decrease JPEG compression performance by 6% and 3% with and without chroma subsampling, respectively. Additionally, their encryption quality is quantified in terms of several statistical analyses. The simulation results show that block-based PE methods exhibit several favorable properties for encryption-then-compression schemes. Nonetheless, to avoid pitfalls, their principal design should be carefully considered in the context of the target applications, for which we outline possible future research directions.
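For readers unfamiliar with block-level perceptual encryption, the numpy sketch below shows the core operation the surveyed methods build on: key-driven block shuffling combined with per-block rotations. The block size, the transform set, and the use of a PRNG seed as the key are illustrative assumptions; real PE schemes add further steps such as negative-positive transformation and color-channel shuffling.

```python
# Minimal sketch of block-level perceptual encryption: split the image into
# fixed-size blocks, then apply a key-driven block permutation and per-block
# rotations. Real PE schemes include additional transforms; this only
# illustrates the idea.
import numpy as np

def encrypt_blocks(img, block=16, seed=0):
    """Key-driven block scrambling; the PRNG seed plays the role of the secret key."""
    h, w, c = img.shape
    assert h % block == 0 and w % block == 0, "image must be block-aligned"
    rng = np.random.default_rng(seed)
    # cut the image into (num_blocks, block, block, c)
    blocks = (img.reshape(h // block, block, w // block, block, c)
                 .swapaxes(1, 2)
                 .reshape(-1, block, block, c))
    blocks = blocks[rng.permutation(len(blocks))]        # key-driven block shuffle
    rots = rng.integers(0, 4, size=len(blocks))          # key-driven per-block rotation
    blocks = np.stack([np.rot90(b, int(k)) for b, k in zip(blocks, rots)])
    # reassemble the scrambled blocks into a cipher image
    return (blocks.reshape(h // block, w // block, block, block, c)
                  .swapaxes(1, 2)
                  .reshape(h, w, c))

cipher = encrypt_blocks(np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8))
print(cipher.shape)  # (64, 64, 3)
```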

4.
Sensors (Basel) ; 22(10)2022 May 19.
Article in English | MEDLINE | ID: mdl-35632265

ABSTRACT

With the increasing number of connected devices and the need to serve more users with high transfer rates and enormous bandwidth, millimeter-wave (mmWave) technology has become one of the most promising research areas in both industry and academia. Owing to the advancements in 5G communication, traditional physical (PHY) layer-based solutions are becoming obsolete. Resource allocation, interference management, anti-blockage, and deafness are crucial problems that must be resolved when designing modern mmWave communication network architectures. Consequently, comparatively new approaches, such as medium access control (MAC) protocol-based solutions, can help meet these requirements. The MAC layer accesses channels and prepares data frames for transmission to all connected devices, which is even more significant in very high frequency bands, i.e., in the mmWave spectrum. Moreover, different MAC protocols have their own limitations and characteristics. In this survey, to deal with the above challenges and address the limitations of the MAC layers of mmWave communication systems, we investigated the existing state-of-the-art MAC protocols, related surveys, and solutions available for mmWave frequencies. Moreover, we performed a categorized qualitative comparison of the state-of-the-art protocols and finally examined probable approaches to alleviate the critical challenges in future research.

5.
Sensors (Basel) ; 20(13)2020 Jul 05.
Article in English | MEDLINE | ID: mdl-32635619

ABSTRACT

Deep neural networks (DNNs) have achieved significant advancements in speech processing, and numerous types of DNN architectures have been proposed in the field of sound localization. When a DNN model is deployed for sound localization, a fixed input size is required, generally determined by the number of microphones, the fast Fourier transform size, and the frame size. If the number or configuration of the microphones changes, the DNN model must be retrained because the size of the input features changes. In this paper, we propose a configuration-invariant sound localization technique using the azimuth-frequency representation and convolutional neural networks (CNNs). The proposed CNN model receives the azimuth-frequency representation instead of time-frequency features as its input. The proposed model was evaluated with microphone configurations different from the one on which it was originally trained. For evaluation, a single sound source was simulated using the image method. The evaluations confirmed that the localization performance was superior to that of the conventional steered response power phase transform (SRP-PHAT) and multiple signal classification (MUSIC) methods.
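A rough sketch of how an azimuth-frequency representation can be computed from one multichannel STFT frame is shown below: the channels are phase-aligned toward each candidate azimuth and the aligned power is recorded per frequency bin, so the map size does not depend on the number of microphones. The linear-array geometry, azimuth grid, and PHAT-style normalization are assumptions for this sketch, not necessarily the paper's exact front end.

```python
# Sketch of an azimuth-frequency map from one multichannel STFT frame: steer
# to each candidate azimuth and record the aligned power per frequency bin.
import numpy as np

def azimuth_frequency_map(stft_frame, mic_x, fs=16000, n_fft=512, c=343.0,
                          azimuths=np.deg2rad(np.arange(0, 181, 5))):
    """stft_frame: (n_mics, n_bins) complex STFT of one frame;
    mic_x: (n_mics,) microphone positions along a line, in meters."""
    n_mics, n_bins = stft_frame.shape
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / fs)[:n_bins]        # (n_bins,)
    # normalize magnitudes (PHAT-style) so only phase differences matter
    phat = stft_frame / (np.abs(stft_frame) + 1e-8)
    amap = np.zeros((len(azimuths), n_bins))
    for i, az in enumerate(azimuths):
        delays = mic_x * np.cos(az) / c                         # (n_mics,)
        steer = np.exp(2j * np.pi * freqs[None, :] * delays[:, None])
        aligned = (phat * steer).sum(axis=0)                    # coherent sum
        amap[i] = np.abs(aligned) ** 2 / n_mics
    return amap                                                 # (n_azimuths, n_bins)

# Toy usage with a 4-microphone linear array and a random frame.
mic_x = np.array([0.0, 0.05, 0.10, 0.15])
frame = np.random.randn(4, 257) + 1j * np.random.randn(4, 257)
print(azimuth_frequency_map(frame, mic_x).shape)  # (37, 257)
```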

6.
Sensors (Basel) ; 20(9)2020 May 09.
Article in English | MEDLINE | ID: mdl-32397540

ABSTRACT

Cloud radio access network (C-RAN) is a promising mobile wireless sensor network architecture for addressing the challenges of ever-increasing mobile data traffic and network costs. C-RAN is a practical solution for the strictly energy-constrained wireless sensor nodes often found in Internet of Things (IoT) applications. Although this architecture can provide energy efficiency and reduce costs, utilizing resources efficiently in C-RAN is challenging given the dynamic real-time environment. Several research works have proposed different methodologies for effective resource management in C-RAN. This study presents a comprehensive survey of the state-of-the-art resource management techniques recently proposed for this architecture. The techniques are categorized into computational resource management (CRM) and radio resource management (RRM) techniques, and both categories are further classified and analyzed based on the strategies used in the studies. Remote radio head (RRH) clustering schemes used in CRM techniques are discussed extensively. The investigated performance metrics and their validation techniques are critically analyzed. Moreover, other important challenges and open research issues for efficient resource management in C-RAN are highlighted to provide future research directions.

7.
Sensors (Basel) ; 20(16)2020 Aug 16.
Article in English | MEDLINE | ID: mdl-32824357

ABSTRACT

Internet of Things (IoT) devices bring us rich sensor data, such as images capturing the environment. One prominent approach to understanding and utilizing such data is image classification, which can be effectively solved by deep learning (DL). Combined with cross-entropy loss, softmax has been widely used for classification problems, despite its limitations. Many efforts have been made to enhance the performance of softmax decision-making models, but they require complex computations and/or re-training the model, which is computationally prohibitive on low-power IoT devices. In this paper, we propose a lightweight framework to enhance the performance of softmax decision-making models for DL. The proposed framework operates with a pre-trained DL model using softmax, without requiring any modification to the model. First, it computes the level of uncertainty of the model's prediction, which is used to detect misclassified samples. Then, it makes a probabilistic control decision to enhance the decision performance of the given model. We validated the proposed framework in an experiment on IoT car control. The proposed model reduced the control decision errors by up to 96.77% compared to the given DL model, which suggests the feasibility of building DL-based IoT applications with high accuracy and low complexity.
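A minimal post-hoc sketch of the idea described above is given below: the predictive entropy of a pretrained model's softmax output is used as an uncertainty measure, and a safe fallback action is taken when the entropy exceeds a threshold. The threshold value and the fallback rule are assumptions; the paper's probabilistic control decision may be defined differently.

```python
# Post-hoc uncertainty sketch: entropy of the softmax output decides whether
# to trust the model's prediction or fall back to a safe action.
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def decide(logits, threshold=0.5, fallback_action="stop"):
    probs = softmax(logits)
    # predictive entropy, normalized to [0, 1] by log(num_classes)
    entropy = -(probs * np.log(probs + 1e-12)).sum() / np.log(len(probs))
    if entropy > threshold:
        return fallback_action, entropy      # too uncertain: take the safe action
    return int(probs.argmax()), entropy      # confident: trust the model

print(decide(np.array([4.0, 0.2, 0.1])))     # confident -> predicted class 0
print(decide(np.array([0.9, 1.0, 0.8])))     # ambiguous -> fallback ('stop')
```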

8.
Sensors (Basel) ; 19(20)2019 Oct 12.
Article in English | MEDLINE | ID: mdl-31614801

ABSTRACT

In networking systems such as cloud radio access networks (C-RAN), where users receive connectivity and data service from short-range, lightweight base stations (BSs), user mobility has a significant impact on the association between users and BSs. Although communicating with the closest BS may yield the most desirable channel conditions, such a strategy can leave certain BSs over-populated and the remaining BSs under-utilized. In addition, mobile users may encounter frequent handovers, which impose a non-negligible burden on both BSs and users. To reduce the handover overhead while balancing the traffic load between BSs, we propose an optimal user association strategy for a large-scale mobile Internet of Things (IoT) network operating on C-RAN. We begin by formulating an optimal user association scheme that focuses only on load balancing. We then revise the formulation so that the number of handovers is minimized while keeping the BSs well-balanced in terms of traffic load. To evaluate the performance of the proposed scheme, we implemented a discrete-time network simulator. The evaluation results show that the proposed optimal user association strategy can significantly reduce the number of handovers while outperforming conventional association schemes in terms of load balancing.
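The paper formulates user association as an optimization problem; as a simple illustration of the trade-off it encodes, the greedy heuristic below assigns each user to a base station with acceptable link quality and the lightest current load, while a handover penalty discourages switching away from the current BS. The rate threshold and penalty weight are arbitrary illustrative values, and this heuristic is not the paper's method.

```python
# Greedy sketch of load-balanced user association with a handover penalty.
import numpy as np

def associate(rates, current_bs, min_rate=1.0, handover_cost=0.5):
    """rates: (n_users, n_bs) achievable rates; current_bs: (n_users,) BS indices."""
    n_users, n_bs = rates.shape
    load = np.zeros(n_bs)
    assignment = np.empty(n_users, dtype=int)
    for u in np.argsort(-rates.max(axis=1)):        # strongest users first
        feasible = np.where(rates[u] >= min_rate)[0]
        if len(feasible) == 0:
            feasible = np.arange(n_bs)              # no BS meets the rate: least loaded wins
        # prefer lightly loaded BSs; switching away from the current BS costs extra
        cost = load[feasible] + handover_cost * (feasible != current_bs[u])
        assignment[u] = feasible[int(cost.argmin())]
        load[assignment[u]] += 1.0
    return assignment, load

rates = np.random.uniform(0.5, 5.0, size=(12, 3))
current = np.random.randint(0, 3, size=12)
assignment, load = associate(rates, current)
print(assignment, load)                              # per-user BS and per-BS load
```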

9.
ACS Photonics ; 11(4): 1362-1375, 2024 Apr 17.
Article in English | MEDLINE | ID: mdl-38645999

ABSTRACT

Over the past 15 years, the output power of silicon submillimeter-wave electronics has increased by a factor greater than 1000, reaching -3.9 dBm at 440 GHz for a single unit in CMOS and -10.7 dBm at 1.01 THz for a 42-element array in SiGe BiCMOS. The minimum detectable power of a 1 kHz bandwidth signal at 420 GHz has improved by a factor of 100 million. These and the improvements expected from ongoing activities should be sufficient to support high-resolution imaging with a range of up to several hundred meters, gas sensing up to ∼1 THz, and communication over ∼1000 m. Silicon IC technologies enable the integration of complex systems into a small form factor and a reduction of manufacturing cost. When broad deployment of submillimeter-wave systems for everyday-life applications becomes necessary, the silicon IC infrastructure will be the most capable of supporting the high-volume manufacturing need.

10.
Sci Rep ; 13(1): 19461, 2023 11 09.
Article in English | MEDLINE | ID: mdl-37945682

ABSTRACT

Corals are sessile invertebrates living underwater in colorful structures known as reefs. Unfortunately, corals' sensitivity to temperature causes color bleaching, which affects the crucial organisms they host and, consequently, marine pharmacognosy. To address this problem, many researchers are developing cures and treatment procedures to restore bleached corals. Before applying a cure, however, researchers need to precisely localize the bleached corals in the Great Barrier Reef. Various visual classification frameworks have been developed to localize bleached corals, but their performance degrades with variations in illumination, orientation, scale, and viewing angle. In this paper, we develop a highly noise-robust and invariant localization method for bleached corals using a bag of hybrid visual features (RL-BoHVF), employing the AlexNet DNN together with handcrafted ColorTexture raw features. The bag-of-features method reduces the overall feature dimension while achieving a classification accuracy of 96.20% on a balanced dataset collected from the Great Barrier Reef of Australia. Furthermore, the localization performance of the proposed model was evaluated on 342 images comprising both train and test segments. The model achieved superior performance compared to other standalone and hybrid DNN and handcrafted models reported in the literature.


Subject(s)
Anthozoa , Animals , Temperature , Australia , Coral Reefs
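As background for the bag-of-features step mentioned in the abstract, the sketch below shows a generic bag-of-visual-features pipeline with scikit-learn: local descriptors are clustered into a codebook, each image is encoded as a codeword histogram, and a classifier is trained on the histograms. The random descriptors stand in for the AlexNet and ColorTexture features combined in the paper, and the vocabulary size and classifier choice are assumptions.

```python
# Generic bag-of-visual-features pipeline: k-means codebook, histogram
# encoding per image, classifier on the histograms. Descriptors are random
# stand-ins for the hybrid deep + handcrafted features used in the paper.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, descs_per_image, desc_dim, n_words = 40, 50, 64, 32

# Stand-in local descriptors and binary labels (bleached vs. healthy).
descriptors = [rng.normal(size=(descs_per_image, desc_dim)) for _ in range(n_images)]
labels = rng.integers(0, 2, size=n_images)

# 1) Learn the visual codebook from all training descriptors.
codebook = KMeans(n_clusters=n_words, n_init=10, random_state=0)
codebook.fit(np.vstack(descriptors))

# 2) Encode each image as a normalized histogram over codewords.
def encode(desc):
    words = codebook.predict(desc)
    hist = np.bincount(words, minlength=n_words).astype(float)
    return hist / hist.sum()

X = np.stack([encode(d) for d in descriptors])

# 3) Train and evaluate a classifier on the bag-of-features vectors.
clf = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", clf.score(X, labels))
```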
11.
PLoS One ; 12(2): e0172318, 2017.
Article in English | MEDLINE | ID: mdl-28241083

ABSTRACT

In heterogeneous networks (HetNets), the large-scale deployment of small base stations (BSs) together with traditional macro BSs is an economical and efficient solution that is employed to address the exponential growth in mobile data traffic. In dense HetNets, network switching, i.e., handovers, plays a critical role in connecting a mobile terminal (MT) to the best of all accessible networks. In the existing literature, a handover decision is made using various handover metrics such as the signal-to-noise ratio, data rate, and movement speed. However, there are few studies on handovers that focus on energy efficiency in HetNets. In this paper, we propose a handover strategy that helps to minimize energy consumption at BSs in HetNets without compromising the quality of service (QoS) of each MT. The proposed handover strategy aims to capture the effect of the stochastic behavior of handover parameters and the expected energy consumption due to handover execution when making a handover decision. To identify the validity of the proposed handover strategy, we formulate a handover problem as a constrained Markov decision process (CMDP), by which the effects of the stochastic behaviors of handover parameters and consequential handover energy consumption can be accurately reflected when making a handover decision. In the CMDP, the aim is to minimize the energy consumption to service an MT over the lifetime of its connection, and the constraint is to guarantee the QoS requirements of the MT given in terms of the transmission delay and call-dropping probability. We find an optimal policy for the CMDP using a combination of the Lagrangian method and value iteration. Simulation results verify the validity of the proposed handover strategy.


Subject(s)
Computer Communication Networks , Wireless Technology , Algorithms , Computer Simulation , Markov Chains , Statistical Models , Probability , Signal-To-Noise Ratio , Stochastic Processes
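The abstract names a Lagrangian method combined with value iteration; the sketch below illustrates that general recipe on a tiny random MDP: for a fixed multiplier, value iteration minimizes the combined cost (energy plus lambda times the constraint cost), and the multiplier is then updated by a subgradient step on the constraint violation. All model quantities here are illustrative and unrelated to the paper's handover model.

```python
# Lagrangian + value iteration on a toy constrained MDP: minimize energy cost
# subject to an average QoS-violation budget.
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions, gamma = 6, 3, 0.9

P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # transition probs
energy = rng.uniform(0.0, 1.0, size=(n_states, n_actions))        # energy cost
qos_violation = rng.uniform(0.0, 0.2, size=(n_states, n_actions)) # constraint cost
qos_budget = 0.05

def value_iteration(cost, tol=1e-8):
    """Standard value iteration minimizing the given per-step cost."""
    V = np.zeros(n_states)
    while True:
        Q = cost + gamma * P @ V          # (n_states, n_actions)
        V_new = Q.min(axis=1)
        if np.max(np.abs(V_new - V)) < tol:
            return Q.argmin(axis=1), V_new
        V = V_new

lam = 0.0
for _ in range(50):                        # subgradient update on the multiplier
    policy, _ = value_iteration(energy + lam * qos_violation)
    avg_violation = qos_violation[np.arange(n_states), policy].mean()
    lam = max(0.0, lam + 2.0 * (avg_violation - qos_budget))

print("lambda:", round(lam, 3), "policy:", policy)
```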