Results 1 - 2 of 2
1.
Sensors (Basel) ; 24(15)2024 Aug 02.
Article in English | MEDLINE | ID: mdl-39124047

ABSTRACT

Federated Learning (FL) is a decentralized machine learning method in which individual devices train local models on their own data. Rather than submitting raw data, devices in FL periodically share their newly trained model updates with a central server. The key characteristics of FL, on-device training and central aggregation, make it attractive for many communication domains. Moreover, new systems facilitating FL over sixth-generation (6G)-enabled Passive Optical Networks (PONs) present a promising opportunity for integration within this domain. This article focuses on the interaction between FL and PON, exploring approaches for effective bandwidth management, particularly for handling the complexity introduced by FL traffic. The PON standard provides for advanced bandwidth management in which the Dynamic Bandwidth Allocation (DBA) algorithm can assign multiple upstream grants to a single Optical Network Unit (ONU). However, the utilization of multiple grant allocation remains under-studied. In this paper, we address this limitation by introducing a novel DBA approach that efficiently allocates PON bandwidth to FL traffic and demonstrates how multiple grants can exploit the enhanced capacity of PON for carrying FL flows. Simulations conducted in this study show that the proposed solution outperforms state-of-the-art solutions on several network performance metrics, particularly upstream delay. This improvement holds great promise for enabling the real-time, data-intensive services that will be key components of 6G environments. Furthermore, our discussion outlines the potential for the integration of FL and PON as an operational reality capable of supporting 6G networking.
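The FL workflow the abstract describes, devices training locally and the server aggregating updates rather than raw data, can be illustrated with a generic FedAvg-style aggregation step. This is a minimal sketch under assumed names; it is not the paper's DBA algorithm or its specific FL protocol.

```python
import numpy as np

def federated_average(client_updates, client_sizes):
    """Aggregate client parameter vectors, weighted by local dataset size.

    Generic FedAvg-style aggregation sketch; `client_updates` and
    `client_sizes` are illustrative names, not from the paper.
    """
    total = sum(client_sizes)
    weights = [n / total for n in client_sizes]
    # Weighted sum of the parameter vectors reported by each device
    # (e.g., clients attached to different ONUs in a PON).
    return sum(w * np.asarray(u, dtype=float)
               for w, u in zip(weights, client_updates))

# Three devices report locally trained parameter vectors.
updates = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]
sizes = [10, 30, 60]
print(federated_average(updates, sizes))  # -> [4. 5.]
```

Only these small update vectors cross the network, which is why FL traffic takes the form of periodic upstream bursts that a DBA scheduler must accommodate.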

2.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610369

ABSTRACT

Video surveillance systems are integral to bolstering safety and security across multiple settings. With the advent of deep learning (DL), a specialization within machine learning (ML), these systems have been significantly augmented to deliver DL-based video surveillance services with notable precision. Nevertheless, DL-based video surveillance services, which require object detection and motion tracking (e.g., to identify unusual object behaviors), can demand substantial computational and memory resources, including GPU computing power for model inference and GPU memory for model loading. To tackle these computational demands, this study introduces a novel video surveillance management system designed to optimize operational efficiency. At its core, the system is built on a two-tiered edge computing architecture (i.e., client and server communicating via socket transmission). In this architecture, the primary edge (i.e., client side) handles initial processing tasks such as object detection and is connected via a Universal Serial Bus (USB) cable to the Closed-Circuit Television (CCTV) camera, directly at the source of the video feed. This immediate, real-time object detection reduces data-transfer latency. Meanwhile, the secondary edge (i.e., server side) plays a vital role by hosting a dynamic threshold-control module that decides when to release DL models, reducing needless GPU usage. This module is a novel addition that dynamically adjusts the threshold time after which an idle DL model is released. By optimizing this threshold, the system can effectively manage GPU usage, ensuring resources are allocated efficiently. Moreover, we utilize federated learning (FL) to streamline the training of a Long Short-Term Memory (LSTM) network for predicting imminent object appearances by amalgamating data from diverse camera sources while ensuring data privacy and optimized resource allocation. Furthermore, in contrast to the static threshold values or moving-average techniques used in previous approaches to threshold control, we employ a Deep Q-Network (DQN) to manage threshold values dynamically. This approach efficiently balances the trade-off between GPU memory conservation and DL-model reloading latency, incorporating LSTM-derived predictions as inputs to determine the optimal timing for releasing the DL model. The results highlight the potential of our approach to significantly improve the efficiency and effective usage of computational resources in video surveillance systems, opening the door to enhanced security in various domains.
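The threshold-control idea, an agent learning when to release an idle DL model by trading GPU memory held against reload latency, can be sketched with tabular Q-learning. This is a deliberate simplification of the paper's DQN approach: the state, action set, bounds, and reward shape below are all assumptions for illustration, not the authors' design.

```python
import random

# Tabular Q-learning sketch of threshold control: the agent raises or
# lowers the model-release threshold (in seconds). Illustrative only;
# the paper uses a DQN with LSTM-derived predictions as inputs.
ACTIONS = [-1, 0, 1]   # lower / keep / raise the threshold
Q = {}                 # (state, action) -> estimated value

def choose_action(state, epsilon=0.1):
    """Epsilon-greedy action selection over the Q-table."""
    if random.random() < epsilon:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q.get((state, a), 0.0))

def update_q(state, action, reward, next_state, alpha=0.5, gamma=0.9):
    """Standard one-step Q-learning update."""
    best_next = max(Q.get((next_state, a), 0.0) for a in ACTIONS)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

# Toy training loop. Assumed reward: penalize thresholds far from an
# (arbitrary) sweet spot, standing in for the memory-vs-reload trade-off.
threshold = 5
for step in range(200):
    state = threshold
    action = choose_action(state)
    threshold = max(1, min(10, threshold + action))
    reward = -abs(threshold - 4)
    update_q(state, action, reward, threshold)
```

In the paper's setting, the reward would instead reflect measured GPU memory savings and incurred model-reload delays, and a neural network replaces the Q-table so the agent can generalize over richer states.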
