Results 1 - 5 of 5
1.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610369

ABSTRACT

Video surveillance systems are integral to bolstering safety and security across multiple settings. With the advent of deep learning (DL), a specialization within machine learning (ML), these systems have been significantly augmented to facilitate DL-based video surveillance services with notable precision. Nevertheless, DL-based video surveillance services, which necessitate object movement and motion tracking (e.g., to identify unusual object behaviors), can demand a significant portion of computational and memory resources, including GPU computing power for model inference and GPU memory for model loading. To tackle these computational demands, this study introduces a novel video surveillance management system designed to optimize operational efficiency. At its core, the system is built on a two-tiered edge computing architecture (i.e., client and server communicating via socket transmission). In this architecture, the primary edge (i.e., the client side) handles initial processing tasks, such as object detection, and is connected via a Universal Serial Bus (USB) cable to the Closed-Circuit Television (CCTV) camera, directly at the source of the video feed. This immediate processing reduces data-transfer latency by detecting objects in real time. Meanwhile, the secondary edge (i.e., the server side) hosts a dynamic threshold-control module that releases DL-based models to reduce needless GPU usage. This module is a novel addition that dynamically adjusts the threshold time value after which idle DL models are released. By dynamically optimizing this threshold, the system can effectively manage GPU usage, ensuring resources are allocated efficiently.
Moreover, we utilize federated learning (FL) to streamline the training of a Long Short-Term Memory (LSTM) network for predicting imminent object appearances by amalgamating data from diverse camera sources while ensuring data privacy and optimized resource allocation. Furthermore, in contrast to the static threshold values or moving-average techniques used by previous approaches to control the threshold, we employ a Deep Q-Network (DQN) methodology to manage threshold values dynamically. This approach efficiently balances the trade-off between GPU memory conservation and DL-model reloading latency by incorporating LSTM-derived predictions as inputs to determine the optimal timing for releasing the DL model. The results highlight the potential of our approach to significantly improve the efficiency and effective usage of computational resources in video surveillance systems, opening the door to enhanced security in various domains.
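The release-and-reload mechanism the abstract describes can be sketched in a few lines. The following is a minimal pure-Python illustration, not the authors' implementation: `predicted_gap` stands in for the LSTM/DQN-derived estimate of seconds until the next object appearance, and the scaling factor and caps are assumed values.

```python
class ModelLifecycleManager:
    """Releases an idle DL model after a dynamically adjusted threshold.

    Hypothetical sketch of the threshold-control idea: keep the model
    resident while objects are expected soon; release it to free GPU
    memory when a long idle gap is predicted.
    """

    def __init__(self, base_threshold=30.0):
        self.threshold = base_threshold   # seconds of idleness before release
        self.loaded = True                # is the model resident on the GPU?
        self.last_detection = 0.0

    def update_threshold(self, predicted_gap):
        # If objects are predicted to reappear soon, a short threshold risks
        # a costly reload; scale the threshold with the predicted gap
        # (the 1.5 factor and 120 s cap are illustrative assumptions).
        self.threshold = min(predicted_gap * 1.5, 120.0)

    def tick(self, now):
        # Called periodically: release the model once idle past the threshold.
        if self.loaded and now - self.last_detection > self.threshold:
            self.loaded = False           # release, freeing GPU memory
        return self.loaded

    def on_detection(self, now):
        # A detection resets the idle timer and forces a reload if needed.
        self.last_detection = now
        if not self.loaded:
            self.loaded = True            # reload (incurs reload latency)
```

In the paper's design, the DQN would choose the threshold adjustment instead of the fixed scaling rule used here, trading GPU memory savings against reload latency.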

2.
Sensors (Basel) ; 23(4)2023 Feb 05.
Article in English | MEDLINE | ID: mdl-36850388

ABSTRACT

The Internet of Things (IoT) combines different sources of collected data which are processed and analyzed to support smart city applications. Machine learning and deep learning algorithms play a vital role in edge intelligence by minimizing the amount of irrelevant data collected from multiple sources to facilitate these smart city applications. However, the data collected by IoT sensors can often be noisy, redundant, and even empty, which can negatively impact the performance of these algorithms. To address this issue, it is essential to develop effective methods for detecting and eliminating irrelevant data to improve the performance of intelligent IoT applications. One approach to achieving this goal is using data cleaning techniques, which can help identify and remove noisy, redundant, or empty data from the collected sensor data. This paper proposes a deep reinforcement learning (deep RL) framework for IoT sensor data cleaning. The proposed system utilizes a deep Q-network (DQN) agent to classify sensor data into three categories: empty, garbage, and normal. The DQN agent takes as input three received signal strength (RSS) values (the current and the two previous sensor readings) and receives reward feedback based on its predicted actions. Our experiments demonstrate that the proposed system outperforms a common time-series-based fully connected neural network (FCDQN) solution, with an accuracy of around 96% after the exploration mode. The use of deep RL for IoT sensor data cleaning is significant because it has the potential to improve the performance of intelligent IoT applications by eliminating irrelevant and harmful data.
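The agent's loop can be illustrated with a tabular stand-in for the paper's DQN. This is a simplified sketch under stated assumptions: the RSS bucket edges, the +1/-1 reward, and the one-step update replace the neural network and full Q-learning of the original.

```python
import random

ACTIONS = ("empty", "garbage", "normal")  # the three classification actions

def discretize(rss):
    # Bucket a raw RSS value; the -90 dBm edge is an illustrative assumption.
    if rss is None:
        return 0                  # no reading at all
    return 1 if rss < -90 else 2  # weak vs. normal signal

class SensorCleaningAgent:
    """Tabular stand-in for the DQN: the state is the current and two
    previous (discretized) RSS readings, the action is a class label, and
    reward feedback on the predicted action drives the Q-update."""

    def __init__(self, alpha=0.5, epsilon=0.1):
        self.q = {}                       # (state, action) -> value
        self.alpha, self.epsilon = alpha, epsilon

    def act(self, state):
        # Epsilon-greedy action selection, as in DQN exploration mode.
        if random.random() < self.epsilon:
            return random.randrange(len(ACTIONS))
        values = [self.q.get((state, a), 0.0) for a in range(len(ACTIONS))]
        return values.index(max(values))

    def learn(self, state, action, reward):
        # One-step (bandit-style) update toward the observed reward.
        key = (state, action)
        old = self.q.get(key, 0.0)
        self.q[key] = old + self.alpha * (reward - old)
```

A state is formed as `tuple(discretize(r) for r in (current, prev1, prev2))`; repeated positive rewards for the correct label make the greedy policy classify that state correctly.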

3.
Sensors (Basel) ; 23(5)2023 Mar 06.
Article in English | MEDLINE | ID: mdl-36905075

ABSTRACT

Nowadays, deep learning (DL)-based video surveillance services are widely used in smart cities because of their ability to accurately identify and track objects, such as vehicles and pedestrians, in real time. This allows more efficient traffic management and improved public safety. However, DL-based video surveillance services that require object movement and motion tracking (e.g., for detecting abnormal object behaviors) can consume a substantial amount of computing and memory capacity, such as (i) GPU computing resources for model inference and (ii) GPU memory resources for model loading. This paper presents a novel cognitive video surveillance management framework with a long short-term memory (LSTM) model, denoted CogVSM. We consider DL-based video surveillance services in a hierarchical edge computing system. The proposed CogVSM forecasts object appearance patterns and smooths the forecast results needed for an adaptive model release. Here, we aim to reduce standby GPU memory through model release while avoiding unnecessary model reloads upon a sudden object appearance. To achieve these objectives, CogVSM hinges on an LSTM-based deep learning architecture explicitly designed to predict future object appearance patterns by training on previous time-series patterns. Based on the LSTM prediction, the proposed framework dynamically controls the threshold time value using an exponentially weighted moving average (EWMA) technique. Comparative evaluations on both simulated and real-world measurement data on commercial edge devices show that the LSTM-based model in CogVSM achieves high predictive accuracy, i.e., a root-mean-square error of 0.795. In addition, the suggested framework uses up to 32.1% less GPU memory than the baseline and 8.9% less than previous work.
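The EWMA smoothing step mentioned above is standard and easy to show. The sketch below is a generic EWMA, not CogVSM's exact parameterization; the smoothing factor `alpha` is an assumed value.

```python
def ewma_threshold(samples, alpha=0.3):
    """Exponentially weighted moving average as used for threshold control:
    smooths a sequence of appearance forecasts so the model-release
    threshold does not react to single-frame spikes.

    alpha in (0, 1] weights the newest sample; smaller alpha means
    heavier smoothing. The default 0.3 is an illustrative assumption.
    """
    smoothed = samples[0]
    for x in samples[1:]:
        smoothed = alpha * x + (1 - alpha) * smoothed
    return smoothed
```

Feeding the LSTM's recent forecasts through this filter yields a stable value from which the release threshold can be derived, which is what lets CogVSM release models aggressively without thrashing on reloads.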

4.
Sensors (Basel) ; 22(14)2022 Jul 12.
Article in English | MEDLINE | ID: mdl-35890896

ABSTRACT

With the deployment of the fifth generation (5G) mobile network systems and the envisioned heterogeneous ultra-dense networks (UDNs), both small cell (SmC) and distributed antenna system (DAS) technologies are required by mobile network operators (MNOs) and venue owners to support multiple spectrum bands, multiple radio access technologies (RATs), multiple optical central offices (COs), and multiple MNOs. As a result, the neutral host business model, in which a third party manages the network enterprise on behalf of multiple MNOs, has emerged as a potential solution, mainly influenced by the desire to provide a high-quality user experience without significantly increasing the total cost of ownership (TCO). However, designing a sustainable business model for a neutral host is a nontrivial task, especially when considered in the context of 5G and beyond (5GB) UDNs. In this paper, under an integrated optical wireless network infrastructure, we review how SmC and DAS technologies are evolving towards the adoption of the neutral host business model and identify key challenges and requirements for 5GB support. We then explore recent candidate advancements in heterogeneous network integration technologies for the realization of an efficient 5GB neutral host business model design capable of accommodating both SmC and DAS. Furthermore, we propose a novel design architecture that relies on a virtual radio access network (vRAN) to enable real-time dynamic resource allocation and radio over Ethernet (RoE) for flexible and reconfigurable fronthaul. The results from our simulations using MATLAB over two real-life deployment scenarios validate the feasibility of utilizing switched RoE considering end-to-end delay requirements of 5GB under different switching schemes, as long as the queuing delay is kept to a minimum.
Finally, the results show that incorporating RoE and vRAN technologies into the neutral host design results in substantial TCO reduction by about 81% in an indoor scenario and 73% in an outdoor scenario.
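The end-to-end delay feasibility argument rests on a simple per-hop budget. The following is a rough back-of-the-envelope sketch, not the paper's MATLAB model: every parameter default (propagation, processing, frame size) is an illustrative assumption.

```python
def e2e_fronthaul_delay_us(hops, link_rate_gbps, frame_bytes, queue_us_per_hop,
                           prop_us=5.0, proc_us_per_hop=2.0):
    """Rough end-to-end delay budget (microseconds) for a switched
    radio-over-Ethernet fronthaul path.

    Serialization, switch processing, and queuing accrue at every hop;
    propagation delay is taken once for the whole path. All defaults
    are illustrative assumptions, not measured values.
    """
    # Serialization time: frame bits divided by line rate.
    # bits / (Gbit/s) expressed in microseconds = bits / (rate * 1000).
    serialization_us = frame_bytes * 8 / (link_rate_gbps * 1000)
    per_hop = serialization_us + proc_us_per_hop + queue_us_per_hop
    return hops * per_hop + prop_us
```

For example, a 1500-byte frame serializes in 1.2 us on a 10 Gbps link; the model makes visible why the queuing term dominates the budget as hop count grows, matching the abstract's caveat that queuing delay must be kept to a minimum.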


Subjects
Software, Wireless Technology
5.
Sensors (Basel) ; 22(16)2022 Aug 16.
Article in English | MEDLINE | ID: mdl-36015879

ABSTRACT

Tracking the source of air pollution plumes and monitoring the air quality in real time during emergency events is crucial to support decision-makers in making an appropriate evacuation plan. Internet of Things (IoT)-based air quality tracking and monitoring platforms have used stationary sensors around the environment. However, fixed IoT sensors may not be enough to monitor the air quality in a vast area during emergency situations. Therefore, many applications consider utilizing Unmanned Aerial Vehicles (UAVs) to monitor air pollution plumes. However, finding an unhealthy location in a vast area requires a long navigation time. For time efficiency, we employ deep reinforcement learning (deep RL) to assist UAVs in finding air pollution plumes in an equal-sized grid space. The proposed Deep Q-network (DQN)-based UAV Pollution Tracking (DUPT) guides the navigation of the UAV to find the location of the pollution plumes in a vast area within a short duration of time. Specifically, we deploy a long short-term memory (LSTM) network combined with a Q-network to suggest a navigation pattern that minimizes time consumption. The proposed DUPT is evaluated and validated using an air pollution environment generated by a well-known Gaussian distribution and kriging interpolation. The evaluation and comparison results are carefully presented and analyzed. The experiment results show that our proposed DUPT solution can rapidly identify the unhealthy polluted area while requiring only around 28% of the time of the existing solution.
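The grid-search setting can be illustrated with a myopic baseline on a synthetic Gaussian field. This sketch is not DUPT: the plume parameters and the greedy hill-climbing policy are illustrative assumptions standing in for the generated environment and the learned DQN+LSTM policy.

```python
import math

def plume_concentration(x, y, src=(7, 3), sigma=2.0):
    """Synthetic Gaussian plume field (stand-in for the paper's
    Gaussian/kriging-generated environment); src and sigma are assumed."""
    dx, dy = x - src[0], y - src[1]
    return math.exp(-(dx * dx + dy * dy) / (2 * sigma * sigma))

def greedy_plume_search(start, grid=10, max_steps=50):
    """Greedy baseline on an equal-sized grid: at each cell the UAV moves
    to the neighbouring cell with the highest measured concentration and
    stops at a local maximum. DUPT replaces this myopic policy with a
    learned DQN+LSTM policy to cut total navigation time."""
    x, y = start
    for _ in range(max_steps):
        neighbours = [(x + dx, y + dy)
                      for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                      if 0 <= x + dx < grid and 0 <= y + dy < grid]
        best = max(neighbours, key=lambda p: plume_concentration(*p))
        if plume_concentration(*best) <= plume_concentration(x, y):
            break                       # local maximum reached
        x, y = best
    return x, y
```

On a unimodal field the greedy walk finds the source, but on realistic multi-plume or noisy fields it stalls in local maxima, which is the gap the learned navigation policy addresses.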


Subjects
Air Pollution, Time Factors