Results 1 - 14 of 14
1.
Sensors (Basel) ; 23(15)2023 Aug 01.
Article in English | MEDLINE | ID: mdl-37571633

ABSTRACT

The Internet of Things is rapidly growing, driving demand for low-power, long-range wireless communication technologies. Long Range Wide Area Network (LoRaWAN) is one such technology that has gained significant attention in recent years due to its ability to provide long-range communication with low power consumption. One of the main issues in LoRaWAN is the efficient utilization of radio resources (e.g., spreading factor and transmission power) by the end devices. To solve the resource allocation issue, machine learning (ML) methods have been used to improve LoRaWAN network performance. The primary aim of this survey paper is to study and examine the resource management issues in LoRaWAN that have been addressed through state-of-the-art ML methods. Further, this survey presents the publicly available LoRaWAN frameworks that could be utilized for dataset collection, discusses the features required for efficient resource management together with suggested ML methods, and highlights the existing publicly available datasets. The survey also explores and evaluates Network Simulator-3-based ML frameworks that can be leveraged for efficient resource management. Finally, future recommendations regarding the applicability of ML to resource management in LoRaWAN are presented, providing a comprehensive guide for researchers and practitioners interested in applying ML to improve LoRaWAN network performance.

2.
Sensors (Basel) ; 23(11)2023 May 24.
Article in English | MEDLINE | ID: mdl-37299760

ABSTRACT

Terahertz (THz) is a promising technology for future wireless communication networks, particularly for 6G and beyond. The ultra-wide THz band, ranging from 0.1 to 10 THz, can potentially address the limited capacity and scarcity of spectrum in current wireless systems such as 4G-LTE and 5G. Furthermore, it is expected to support advanced wireless applications requiring high data transmission and quality services, i.e., terabit-per-second backhaul systems, ultra-high-definition streaming, virtual/augmented reality, and high-bandwidth wireless communications. In recent years, artificial intelligence (AI) has been used mainly for resource management, spectrum allocation, modulation and bandwidth classification, interference mitigation, beamforming, and medium access control layer protocols to improve THz performance. This survey paper examines the use of AI in state-of-the-art THz communications, discussing the challenges, potentials, and shortcomings. Additionally, this survey discusses the available platforms, including commercial, testbeds, and publicly available simulators for THz communications. Finally, this survey provides future strategies for improving the existing THz simulators and using AI methods, including deep learning, federated learning, and reinforcement learning, to improve THz communications.


Subjects
Augmented Reality, Virtual Reality, Artificial Intelligence, Technology
3.
Sensors (Basel) ; 22(23)2022 Dec 05.
Article in English | MEDLINE | ID: mdl-36502211

ABSTRACT

IEEE 802.11ah, known as Wi-Fi HaLow, is envisioned for long-range and low-power communication. It is sub-1 GHz technology designed for massive Internet of Things (IoT) and machine-to-machine devices. It aims to overcome the IoT challenges, such as providing connectivity to massive power-constrained devices distributed over a large geographical area. To accomplish this objective, IEEE 802.11ah introduces several unique physical and medium access control layer (MAC) features. In recent years, the MAC features of IEEE 802.11ah, including restricted access window, authentication (e.g., centralized and distributed) and association, relay and sectorization, target wake-up time, and traffic indication map, have been intensively investigated from various aspects to improve resource allocation and enhance the network performance in terms of device association time, throughput, delay, and energy consumption. This survey paper presents an in-depth assessment and analysis of these MAC features along with current solutions, their potentials, and key challenges, exposing how to use these novel features to meet the rigorous IoT standards.


Subjects
Internet of Things, Caffeine, Communication, Resource Allocation, Technology
4.
Sensors (Basel) ; 22(16)2022 Aug 16.
Article in English | MEDLINE | ID: mdl-36015879

ABSTRACT

Tracking the source of air pollution plumes and monitoring air quality in real time during emergency events is crucial to support decision-makers in devising an appropriate evacuation plan. Internet of Things (IoT)-based air quality tracking and monitoring platforms have used stationary sensors around the environment. However, fixed IoT sensors may not be enough to monitor the air quality of a vast area during emergencies. Therefore, many applications consider utilizing Unmanned Aerial Vehicles (UAVs) to monitor environments affected by air pollution plumes. However, finding an unhealthy location in a vast area requires a long navigation time. For time efficiency, we employ deep reinforcement learning (Deep RL) to assist UAVs in finding air pollution plumes in an equal-sized grid space. The proposed Deep Q-network (DQN)-based UAV Pollution Tracking (DUPT) guides the UAV's navigation directions so that it can find the location of pollution plumes in a vast area within a short time. Specifically, we combine a long short-term memory (LSTM) network with the Q-network to suggest a navigation pattern that minimizes time consumption. The proposed DUPT is evaluated and validated using an air pollution environment generated by a well-known Gaussian distribution and kriging interpolation. The evaluation and comparison results are carefully presented and analyzed. The experimental results show that our proposed DUPT solution can rapidly identify the unhealthy polluted area, requiring only around 28% of the time taken by the existing solution.
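To make the grid-search idea above concrete, here is a minimal tabular Q-learning sketch of a UAV searching an equal-sized grid for a peak-pollution cell. It is a simplified stand-in for the paper's DQN (no neural network or LSTM); the grid size, reward shaping, and hyper-parameters are illustrative assumptions, not values from the paper.

```python
import random

random.seed(7)

N = 4                      # grid is N x N cells
GOAL = (3, 3)              # hypothetical peak-pollution cell the UAV must find
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

# Q-table over (cell, action) pairs, initialised to zero.
Q = {((r, c), a): 0.0 for r in range(N) for c in range(N) for a in range(4)}

def step(state, a):
    dr, dc = ACTIONS[a]
    r = min(max(state[0] + dr, 0), N - 1)     # clamp to grid bounds
    c = min(max(state[1] + dc, 0), N - 1)
    nxt = (r, c)
    return nxt, (10.0 if nxt == GOAL else -1.0)  # -1 per move favours short paths

for _ in range(2000):                         # training episodes
    s = (0, 0)
    while s != GOAL:
        # epsilon-greedy action selection
        a = random.randrange(4) if random.random() < 0.2 else \
            max(range(4), key=lambda x: Q[(s, x)])
        nxt, rwd = step(s, a)
        best_next = max(Q[(nxt, x)] for x in range(4))
        Q[(s, a)] += 0.5 * (rwd + 0.9 * best_next - Q[(s, a)])  # Q-learning update
        s = nxt

# Greedy rollout: the learned policy should reach the peak cell quickly.
s, path = (0, 0), [(0, 0)]
while s != GOAL and len(path) < 20:
    a = max(range(4), key=lambda x: Q[(s, x)])
    s, _ = step(s, a)
    path.append(s)
```

On this toy 4x4 grid the greedy policy reaches the goal in the minimum six moves; the paper's DQN replaces the table with a network so the same idea scales to large areas.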


Subjects
Air Pollution, Time Factors
5.
Sensors (Basel) ; 21(9)2021 May 08.
Article in English | MEDLINE | ID: mdl-34066766

ABSTRACT

The Internet of Things (IoT)-based target tracking system is required for applications such as smart farms, smart factories, and smart cities, where many sensor devices are jointly connected to collect the positions of a moving target. Each sensor device runs continuously on battery power, consuming energy while perceiving target information in a particular environment. To reduce sensor device energy consumption in real-time IoT tracking applications, many traditional methods such as clustering, information-driven, and other approaches have previously been utilized to select the best sensor. However, applying machine learning methods, particularly deep reinforcement learning (Deep RL), to the problem of sensor selection in tracking applications is quite demanding because of the limited sensor node battery lifetime. In this study, we propose a long short-term memory (LSTM) deep Q-network (DQN)-based Deep RL target tracking model to overcome the problem of energy consumption in IoT tracking applications. The proposed method selects the energy-efficient best sensor while tracking the target. The best sensor is defined by a minimum distance function (i.e., derived as the state), which leads to lower energy consumption. The simulation results show favorable features in terms of best sensor selection and energy consumption.
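The minimum-distance selection rule mentioned above can be sketched in a few lines: among candidate sensors, the "best" sensor for the current target position is the one closest to it, on the reasoning that a shorter sensing distance implies lower energy use. The sensor positions below are illustrative assumptions.

```python
import math

def best_sensor(sensors, target):
    """Return (sensor_id, distance) of the sensor closest to the target."""
    def dist(pos):
        return math.hypot(pos[0] - target[0], pos[1] - target[1])
    sid = min(sensors, key=lambda s: dist(sensors[s]))
    return sid, dist(sensors[sid])

# Hypothetical sensor layout (metres) and a target position to track.
sensors = {"s1": (0.0, 0.0), "s2": (5.0, 5.0), "s3": (9.0, 1.0)}
chosen, d = best_sensor(sensors, target=(6.0, 4.0))
```

In the paper this distance acts as the RL state, and the LSTM-DQN learns which sensor to wake over a whole trajectory rather than greedily per step.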

6.
Sensors (Basel) ; 20(24)2020 Dec 17.
Article in English | MEDLINE | ID: mdl-33348701

ABSTRACT

In recent times, social and commercial interest in location-based services (LBS) has been increasing significantly due to the rise of smart devices and technologies. Global navigation satellite systems (GNSS) have long been employed for LBS to navigate and determine accurate and reliable location information in outdoor environments. However, GNSS signals are too weak to penetrate buildings and cannot provide reliable indoor LBS. This shortcoming of GNSS in indoor environments has prompted extensive research and development of indoor positioning systems (IPS). Various technologies and techniques have been studied for IPS development. This paper provides an overview of the available smartphone-based indoor localization solutions that rely on radio frequency technologies. As fingerprinting localization is widely adopted for IPS development owing to its good localization accuracy, we discuss fingerprinting localization in detail. In particular, our analysis focuses on practical IPS that are realized using a smartphone and Wi-Fi/Bluetooth Low Energy (BLE) as a signal source. Furthermore, we elaborate on the challenges of practical IPS and the available solutions, provide a comprehensive performance comparison, and present some future trends in IPS development.

7.
Sensors (Basel) ; 20(11)2020 May 29.
Article in English | MEDLINE | ID: mdl-32485827

ABSTRACT

Securing personal authentication is an important area of security research. In particular, fingerprint and face recognition have been used for personal authentication. However, these systems suffer from certain issues, such as fingerprint forgery or environmental obstacles. To address forgery and spoofing problems, various approaches have been considered, including the electrocardiogram (ECG). For ECG identification, linear discriminant analysis (LDA), support vector machines (SVM), principal component analysis (PCA), deep recurrent neural networks (DRNN), and recurrent neural networks (RNN) have conventionally been used. Certain studies have shown that the RNN model yields the best performance in ECG identification compared with the other models. However, these methods require a lengthy input signal for high accuracy and thus may not be applicable to real-time systems. In this study, we propose bidirectional long short-term memory (LSTM)-based deep recurrent neural networks (DRNN) with late fusion to develop a real-time system for ECG-based biometric identification and classification. We suggest a preprocessing procedure for quick identification and noise reduction, comprising a derivative filter, a moving average filter, and normalization. We experimentally evaluated the proposed method using two public datasets: MIT-BIH Normal Sinus Rhythm (NSRDB) and MIT-BIH Arrhythmia (MITDB). For NSRDB, the proposed LSTM-based DRNN model achieved an overall precision of 100%, recall of 100%, accuracy of 100%, and F1-score of 1. For MITDB, the overall precision was 99.8%, recall was 99.8%, accuracy was 99.8%, and F1-score was 0.99. Our experiments demonstrate that the proposed model achieves higher overall classification accuracy and efficiency compared with the conventional LSTM approach.
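The preprocessing chain named above (derivative filter, moving average filter, normalization) can be sketched as follows. The window length and the exact filter forms are assumptions for illustration, not the paper's exact settings.

```python
def derivative(sig):
    # First-difference derivative filter: emphasises the sharp QRS slopes.
    return [sig[i + 1] - sig[i] for i in range(len(sig) - 1)]

def moving_average(sig, w=3):
    # Simple moving average: suppresses high-frequency noise.
    return [sum(sig[i:i + w]) / w for i in range(len(sig) - w + 1)]

def normalize(sig):
    # Min-max normalization to [0, 1] so amplitudes are comparable across records.
    lo, hi = min(sig), max(sig)
    return [(x - lo) / (hi - lo) for x in sig]

ecg = [0.0, 0.1, 0.9, 0.2, 0.05, 0.0]          # toy beat-like samples
clean = normalize(moving_average(derivative(ecg)))
```

The cleaned, normalized sequence is what would be windowed and fed to the bidirectional LSTM classifier.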

8.
Sensors (Basel) ; 20(22)2020 Nov 12.
Article in English | MEDLINE | ID: mdl-33198298

ABSTRACT

A long-range wide area network (LoRaWAN) is one of the leading communication technologies for Internet of Things (IoT) applications. In order to fulfill the requirements of IoT-enabled applications, LoRaWAN employs an adaptive data rate (ADR) mechanism at both the end device (ED) and the network server (NS). NS-managed ADR aims to offer reliable and battery-efficient resource allocation to EDs by managing the spreading factor (SF) and transmit power (TP). However, such management is severely affected by a lack of agility in adapting to variable channel conditions. Thus, several hours or even days may be required to converge to stable and energy-efficient communication. Therefore, we propose two NS-managed ADRs: a Gaussian filter-based ADR (G-ADR) and an exponential moving average-based ADR (EMA-ADR). Both of the proposed schemes operate as a low-pass filter to resist rapid changes in the signal-to-noise ratio of packets received at the NS. The proposed methods aim to allocate the best SF and TP to both static and mobile EDs by reducing the convergence period in the confirmed mode of LoRaWAN. Based on the simulation results, we show that the G-ADR and EMA-ADR schemes reduce the convergence period in a static scenario by 16% and 68%, and in a mobility scenario by 17% and 81%, respectively, compared to typical ADR. Moreover, we show that the proposed schemes successfully reduce energy consumption and enhance the packet success ratio.
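A minimal sketch of the EMA-ADR idea described above: the per-packet SNR history at the NS is low-pass filtered with an exponential moving average before the usual SF/TP adjustment, so a single outlier packet cannot swing the allocation. The required-SNR limits and 3 dB step follow the commonly used ADR recipe; the smoothing weight and the 10 dB device margin are assumptions.

```python
import math

# Demodulation-floor SNR (dB) per spreading factor, per the common ADR recipe.
REQUIRED_SNR = {7: -7.5, 8: -10.0, 9: -12.5, 10: -15.0, 11: -17.5, 12: -20.0}

def ema(snr_samples, alpha=0.2):
    """Low-pass filter the packet SNR history at the network server."""
    s = snr_samples[0]
    for x in snr_samples[1:]:
        s = alpha * x + (1 - alpha) * s
    return s

def adr_step(snr_smoothed, sf, tp_dbm, margin_db=10.0):
    """Allocate SF first, then TP, from the smoothed SNR margin."""
    nstep = math.floor((snr_smoothed - REQUIRED_SNR[sf] - margin_db) / 3.0)
    while nstep > 0 and sf > 7:          # move to a faster data rate first
        sf, nstep = sf - 1, nstep - 1
    while nstep > 0 and tp_dbm > 2:      # then lower transmit power
        tp_dbm, nstep = tp_dbm - 2, nstep - 1
    return sf, tp_dbm

snr_hist = [0.0, 2.0, 30.0, 1.0, 2.5, 1.5]   # one outlier the EMA damps
sf, tp = adr_step(ema(snr_hist), sf=12, tp_dbm=14)
```

With the EMA, the 30 dB outlier barely moves the estimate, so the SF drops conservatively; typical ADR's max-of-history would have over-reacted to that single packet.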

9.
Sensors (Basel) ; 20(9)2020 May 06.
Article in English | MEDLINE | ID: mdl-32384656

ABSTRACT

A long-range wide area network (LoRaWAN) adapts the ALOHA network concept for channel access, resulting in packet collisions caused by intra- and inter-spreading factor (SF) interference. This leads to a high packet loss ratio. In LoRaWAN, each end device (ED) increments the SF after every two consecutive failed retransmissions, thus forcing the EDs to use a high SF. When numerous EDs switch to the highest SF, the network loses its advantage of orthogonality. Thus, the collision probability of the ED packets increases drastically. In this study, we propose two SF allocation schemes to enhance the packet success ratio by lowering the impact of interference. The first scheme, called the channel-adaptive SF recovery algorithm, increments or decrements the SF based on the retransmission of the ED packets, indicating the channel status in the network. The second approach allocates SF to EDs based on ED sensitivity during the initial deployment. These schemes are validated through extensive simulations by considering the channel interference in both confirmed and unconfirmed modes of LoRaWAN. Through simulation results, we show that the SFs have been adaptively applied to each ED, and the proposed schemes enhance the packet success delivery ratio as compared to the typical SF allocation schemes.
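The channel-adaptive recovery rule described above can be sketched as a small state machine: retransmission outcomes serve as the channel indicator, so the SF is raised after repeated failures and lowered again after repeated successes. The two-in-a-row failure threshold mirrors LoRaWAN's "every two consecutive failed retransmissions" rule; the symmetric success rule is an assumption for illustration.

```python
class SFRecovery:
    """Adapt an end device's spreading factor from delivery outcomes."""

    def __init__(self, sf=7):
        self.sf, self.fails, self.oks = sf, 0, 0

    def on_result(self, delivered):
        if delivered:
            self.oks, self.fails = self.oks + 1, 0
            if self.oks == 2 and self.sf > 7:     # good channel: shorter airtime
                self.sf, self.oks = self.sf - 1, 0
        else:
            self.fails, self.oks = self.fails + 1, 0
            if self.fails == 2 and self.sf < 12:  # bad channel: more robust SF
                self.sf, self.fails = self.sf + 1, 0
        return self.sf

ed = SFRecovery(sf=9)
for ok in [False, False, False, False, True, True, True, True]:
    sf = ed.on_result(ok)
```

Because the SF can also decrement, devices drift back down from high SFs once the channel recovers, preserving the network's orthogonality instead of piling everyone onto SF12.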

10.
Sensors (Basel) ; 19(2)2019 Jan 15.
Article in English | MEDLINE | ID: mdl-30650630

ABSTRACT

This paper proposes a method of estimating the attitude of an underwater vehicle. The proposed method uses two field measurements, namely, a gravitational field and a magnetic field represented in terms of vectors in three-dimensional space. In many existing methods that convert the measured field vectors into Euler angles, the yaw accuracy is affected by the uncertainty of the gravitational measurement and by the uncertainty of the magnetic field measurement. Additionally, previous methods have used the magnetic field measurement under the assumption that the magnetic field has only a horizontal component. The proposed method utilizes all field measurement components as they are, without converting them into Euler angles. The bias in the measured magnetic field vector is estimated and compensated to take full advantage of all measured field vector components. Because the proposed method deals with the measured field independently, uncertainties in the measured vectors affect the attitude estimation separately without adding up. The proposed method was tested by conducting navigation experiments with an unmanned underwater vehicle inside test tanks. The results were compared with those obtained by other methods, wherein the Euler angles converted from the measured field vectors were used as measurements.
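For context, here is a sketch of the conventional conversion the paper argues against: roll and pitch from the measured gravity vector, then yaw from the tilt-compensated magnetic field. This is the standard textbook formulation (shown only to make the baseline concrete, not the paper's method), and the axis and sign conventions are assumptions.

```python
import math

def euler_from_fields(g, m):
    """Conventional Euler angles from gravity vector g and magnetic vector m."""
    gx, gy, gz = g
    roll = math.atan2(gy, gz)
    pitch = math.atan2(-gx, math.hypot(gy, gz))
    # Tilt-compensate the magnetic vector before taking yaw.
    mx, my, mz = m
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-my2, mx2)
    return roll, pitch, yaw

# Level vehicle, magnetic field along +x (north): all angles should be ~0.
roll, pitch, yaw = euler_from_fields((0.0, 0.0, 1.0), (1.0, 0.0, 0.0))
```

In this conversion any uncertainty in the gravity measurement leaks into roll and pitch and from there into yaw, which is exactly the coupling the paper avoids by using the raw field vectors as measurements.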

11.
Sensors (Basel) ; 18(12)2018 Dec 04.
Article in English | MEDLINE | ID: mdl-30518119

ABSTRACT

The fingerprinting localization approach is widely used in indoor positioning applications owing to its high reliability. However, the learning procedure for radio signals in fingerprinting is time-consuming and labor-intensive. In this paper, an affinity propagation clustering (APC)-based fingerprinting localization system with Gaussian process regression (GPR) is presented for a practical positioning system with a reduced offline workload and low online computation cost. The proposed system collects sparse received signal strength (RSS) data from the deployed Bluetooth Low Energy beacons and trains them with the Gaussian process model. As the signal estimation component, GPR predicts not only the mean RSS but also the variance, which indicates the uncertainty of the estimation. The predicted RSS and variance can be employed for probabilistic fingerprinting localization. As the clustering component, the APC minimizes the search space of reference points on the testbed. Consequently, it also helps to reduce the localization estimation error and the computational cost of the positioning system. The proposed method is evaluated through real field deployments. Experimental results show that the proposed method can reduce the offline workload and increase localization accuracy with less computational cost. This method outperforms existing methods owing to RSS prediction using GPR and RSS clustering using APC.
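The probabilistic matching step described above can be sketched as follows: given the mean and variance of the RSS predicted (e.g., by GPR) at each reference point, the observed RSS is scored with a Gaussian log-likelihood and the highest-scoring reference point wins. The reference-point values below are illustrative assumptions.

```python
import math

def gaussian_loglik(x, mean, var):
    # Log of the Gaussian density: penalises both distance from the predicted
    # mean and (via the log term) high predictive uncertainty.
    return -0.5 * math.log(2 * math.pi * var) - (x - mean) ** 2 / (2 * var)

def locate(observed_rss, rp_predictions):
    """rp_predictions: {rp_id: (mean_rss_dbm, variance)} -> best rp_id."""
    return max(rp_predictions,
               key=lambda rp: gaussian_loglik(observed_rss, *rp_predictions[rp]))

# Hypothetical GPR outputs at three reference points, and one observation.
rps = {"rp1": (-60.0, 4.0), "rp2": (-75.0, 9.0), "rp3": (-85.0, 4.0)}
best = locate(-73.0, rps)
```

APC's role in the paper is to shrink `rps` to one cluster's reference points before this search, which is where the online cost saving comes from.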

12.
Sensors (Basel) ; 17(11)2017 Nov 06.
Article in English | MEDLINE | ID: mdl-29113103

ABSTRACT

Adopting deep learning methods for human activity recognition has been effective in extracting discriminative features from raw input sequences acquired from body-worn sensors. Although human movements are encoded in a sequence of successive samples in time, typical machine learning methods perform recognition tasks without exploiting the temporal correlations between input data samples. Convolutional neural networks (CNNs) address this issue by using convolutions across a one-dimensional temporal sequence to capture dependencies among input data. However, the size of convolutional kernels restricts the captured range of dependencies between data samples. As a result, typical models are unadaptable to a wide range of activity-recognition configurations and require fixed-length input windows. In this paper, we propose the use of deep recurrent neural networks (DRNNs) for building recognition models that are capable of capturing long-range dependencies in variable-length input sequences. We present unidirectional, bidirectional, and cascaded architectures based on long short-term memory (LSTM) DRNNs and evaluate their effectiveness on miscellaneous benchmark datasets. Experimental results show that our proposed models outperform methods employing conventional machine learning, such as support vector machines (SVM) and k-nearest neighbors (KNN). Additionally, the proposed models yield better performance than other deep learning techniques, such as deep belief networks (DBNs) and CNNs.
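To make the recurrence concrete, here is one forward step of the LSTM cell underlying the DRNN architectures described above (input, forget, and output gates plus a candidate cell state). Scalar weights are used for readability; real models use learned weight matrices, so the numbers here are illustrative assumptions.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, w):
    i = sigmoid(w["wi"] * x + w["ui"] * h_prev + w["bi"])   # input gate
    f = sigmoid(w["wf"] * x + w["uf"] * h_prev + w["bf"])   # forget gate
    o = sigmoid(w["wo"] * x + w["uo"] * h_prev + w["bo"])   # output gate
    g = math.tanh(w["wg"] * x + w["ug"] * h_prev + w["bg"]) # candidate state
    c = f * c_prev + i * g                                  # new cell state
    h = o * math.tanh(c)                                    # new hidden state
    return h, c

w = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [0.1, -0.2, 0.4]:            # a short body-sensor sample sequence
    h, c = lstm_step(x, h, c, w)
```

Because the cell state `c` carries information across arbitrarily many steps, the dependency range is not bounded by a kernel width, which is the advantage over CNNs claimed above.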


Subjects
Neural Networks (Computer), Algorithms, Human Activities, Humans, Machine Learning, Short-Term Memory
13.
Sensors (Basel) ; 15(3): 6740-62, 2015 Mar 19.
Article in English | MEDLINE | ID: mdl-25808773

ABSTRACT

In indoor environments, the Global Positioning System (GPS) and long-range tracking radar systems are not optimal because of signal propagation limitations. In recent years, ultra-wide band (UWB) technology has become a possible solution for object detection, localization, and tracking in indoor environments because of its high range resolution, compact size, and low cost. This paper presents improved target detection and tracking techniques for moving objects with impulse-radio UWB (IR-UWB) radar in a short-range indoor area. This is achieved through signal-processing steps such as clutter reduction, target detection, target localization, and tracking. In this paper, we introduce a new combination of signal-processing procedures. In the clutter-reduction step, a filtering method based on a Kalman filter (KF) is proposed. Then, in the target detection step, a modification of the conventional CLEAN algorithm, which estimates the impulse response from the observation region, is applied for the advanced elimination of false alarms. The output is then fed into the target localization and tracking step, in which the target location and trajectory are determined and tracked by using an unscented KF in two-dimensional coordinates. In each step, the proposed methods are compared with conventional methods to demonstrate the differences in performance. The experiments are carried out using an actual IR-UWB radar under different scenarios. The results verify that the proposed methods can improve the probability and efficiency of target detection and tracking.
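The KF-based clutter-reduction idea can be sketched as a scalar Kalman filter per range bin that tracks the quasi-static clutter level and subtracts it from each new radar frame, so only the moving target's echo remains. This is a simplified illustration of the general technique, not the paper's exact filter; the process and measurement noise variances `q` and `r` are tuning assumptions.

```python
def kf_clutter(frames, q=1e-4, r=0.5):
    """Per-bin scalar KF clutter estimate; returns (estimate, residual frames)."""
    est = list(frames[0])                 # initial clutter estimate
    p = [1.0] * len(est)                  # per-bin estimate variance
    residuals = []
    for frame in frames[1:]:
        out = []
        for i, z in enumerate(frame):
            p[i] += q                     # predict step (random-walk clutter)
            k = p[i] / (p[i] + r)         # Kalman gain
            est[i] += k * (z - est[i])    # update clutter estimate
            p[i] *= (1 - k)
            out.append(z - est[i])        # clutter-subtracted sample
        residuals.append(out)
    return est, residuals

# Static clutter of 5.0 in every bin; a target echo (+3.0) appears in bin 2.
frames = [[5.0, 5.0, 5.0, 5.0]] * 30 + [[5.0, 5.0, 8.0, 5.0]]
est, residuals = kf_clutter(frames)
```

Because the gain shrinks as the clutter estimate settles, a sudden target echo passes through to the residual almost intact while the static background is cancelled.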

14.
Comput Med Imaging Graph ; 37(7-8): 522-37, 2013.
Article in English | MEDLINE | ID: mdl-24148784

ABSTRACT

The level set approach is a powerful tool for segmenting images. This paper proposes a method for segmenting brain tumors from MR images. A new signed pressure function (SPF) that can efficiently stop the contours at weak or blurred edges is introduced. The local statistics of the different objects present in the MR images are calculated, and these statistics are used to identify the tumor among the other objects. In level set methods, the calculation of the parameters is a challenging task; here, the parameters for different types of images are calculated automatically. A basic thresholding value is updated and adjusted automatically for each MR image and is used to calculate the different parameters in the proposed algorithm. The proposed algorithm was tested on magnetic resonance images of the brain for tumor segmentation, and its performance was evaluated visually and quantitatively. Numerical experiments on brain tumor images highlight the efficiency and robustness of this method.
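For intuition, here is a sketch of the classic signed pressure function that such region-based level set methods build on: with c1 and c2 the mean intensities inside and outside the contour, the SPF is positive in the brighter region and negative in the darker one, which is what pushes the contour toward the boundary and stops it there. The paper's tumor-specific local statistics and automatic thresholding refinements are not reproduced here; this is only the common baseline form.

```python
def spf(intensities, c1, c2):
    """Classic signed pressure function over a 1-D intensity profile."""
    mid = (c1 + c2) / 2.0                             # midpoint of region means
    denom = max(abs(i - mid) for i in intensities) or 1.0
    return [(i - mid) / denom for i in intensities]   # normalised to [-1, 1]

# Toy 1-D profile: dark background (~10), bright object (~90).
row = [10.0, 12.0, 11.0, 88.0, 90.0, 89.0]
values = spf(row, c1=90.0, c2=11.0)                   # inside/outside means
```

The sign change between samples 3 and 4 is exactly where the contour's speed flips sign, so the evolving front settles on that edge even when the gradient there is weak.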


Subjects
Algorithms, Brain Neoplasms/pathology, Statistical Data Interpretation, Computer-Assisted Image Interpretation/methods, Magnetic Resonance Imaging/methods, Automated Pattern Recognition/methods, Artificial Intelligence, Computer Simulation, Humans, Image Enhancement/methods, Biological Models, Statistical Models, Reproducibility of Results, Sensitivity and Specificity