Results 1 - 20 of 682
1.
Sensors (Basel) ; 24(17)2024 Aug 30.
Article in English | MEDLINE | ID: mdl-39275567

ABSTRACT

The platooning of cars and trucks is a pertinent approach for autonomous driving due to its effective utilization of roadways. Reduced fuel consumption is an added merit that contributes to sustainability. Conventional platooning relied on Dedicated Short-Range Communication (DSRC)-based vehicle-to-vehicle communication, with the computations executed by the platoon members using their constrained capabilities. The advent of 5G has enabled Intelligent Transportation Systems (ITS) to adopt Multi-access Edge Computing (MEC) in platooning paradigms by offloading computational tasks to the edge server. In this research, vital parameters of vehicular platooning systems, namely latency-sensitive radio resource management schemes and the Age of Information (AoI), are investigated. In addition, the delivery rates of Cooperative Awareness Messages (CAMs), which ensure the expeditious reception of safety-critical messages at the roadside units (RSUs), are examined. For latency-sensitive applications such as vehicular networks, it is essential to address multiple, correlated objectives; to solve such objectives effectively and simultaneously, the Multi-Agent Deep Deterministic Policy Gradient (MADDPG) framework requires a better and more sophisticated model. In this paper, a novel Cascaded MADDPG framework, CMADDPG, is proposed to train cascaded target critics, aiming to achieve the expected rewards through the collaborative conduct of agents. The estimation-bias phenomenon, which hinders overall system performance, is circumvented by this cascaded algorithm. Experimental analysis demonstrates the potential of the proposed algorithm: the convergence factor stabilizes quickly with minimal distortion, and reliable CAM dissemination is achieved with 99% probability. The average AoI is maintained within the 5-10 ms range, guaranteeing better QoS. The technique proves robust for decentralized resource allocation against channel uncertainties caused by high mobility in the environment. Most importantly, the performance of the proposed algorithm remains unaffected by increasing platoon size and the accompanying channel uncertainties.
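
As a rough illustration of the cascaded-target-critic idea (the abstract does not give the exact update rule), the sketch below builds a Bellman target by passing the bootstrap estimate through a cascade of target critics and keeping the most conservative value, which is one way to damp estimation bias; the function name and the toy linear critics are illustrative assumptions.

```python
# Hypothetical sketch: a Bellman target built from cascaded target critics,
# keeping the most conservative estimate to damp overestimation bias.
import numpy as np

def cascaded_target(reward, next_obs, next_act, critics, gamma=0.99):
    """critics: list of callables q(next_obs, next_act) -> float."""
    estimate = critics[0](next_obs, next_act)
    for critic in critics[1:]:
        # each stage can only lower the bootstrap value, curbing bias
        estimate = min(estimate, critic(next_obs, next_act))
    return reward + gamma * estimate

# toy usage with two random linear critics
rng = np.random.default_rng(0)
w1, w2 = rng.normal(size=4), rng.normal(size=4)
q1 = lambda o, a: float(np.concatenate([o, a]) @ w1)
q2 = lambda o, a: float(np.concatenate([o, a]) @ w2)
obs, act = rng.normal(size=2), rng.normal(size=2)
print(cascaded_target(1.0, obs, act, critics=[q1, q2]))
```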

2.
Sensors (Basel) ; 24(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275469

ABSTRACT

Mobile Edge Computing (MEC) is crucial for reducing latency by bringing computational resources closer to the network edge, thereby enhancing the quality of service (QoS). However, the broad deployment of cloudlets poses challenges for efficient network slicing, particularly when traffic distribution is uneven. These challenges include managing diverse resource requirements across widely distributed cloudlets, minimizing resource conflicts and delays, and maintaining service quality amid fluctuating request rates. Addressing them requires intelligent strategies to predict request types (common or urgent), assess resource needs, and allocate resources efficiently. Emerging technologies like edge computing and 5G with network slicing can handle delay-sensitive IoT requests rapidly, but a robust mechanism for real-time resource and utility optimization remains necessary. To address these challenges, we designed an end-to-end network slicing approach that predicts common and urgent user requests through a T distribution. We formulated the problem as a multi-agent Markov decision process (MDP) and introduced a multi-agent soft actor-critic (MAgSAC) algorithm. This algorithm prevents the wastage of scarce resources by intelligently activating and deactivating virtual network function (VNF) instances, thereby balancing the allocation process. Our approach aims to optimize overall utility, balancing trade-offs between revenue, energy consumption costs, and latency. We evaluated MAgSAC through simulations, comparing it with six benchmark schemes: MAA3C, SACT, DDPG, S2Vec, Random, and Greedy. The results demonstrate that our approach improves utility by 30%, lowers energy consumption costs by 12.4%, and reduces execution time by 21.7% compared to the closest related multi-agent approach, MAA3C.
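
A minimal sketch of the utility trade-off described above (revenue versus energy cost versus latency), which an agent could use when deciding whether to keep a VNF instance active; the function name, weights, and prices are placeholder assumptions, not the paper's formulation.

```python
# Illustrative per-slice utility: revenue minus energy cost minus a latency
# penalty. Weights and prices are placeholder assumptions.
def slice_utility(revenue, energy_kwh, latency_ms,
                  energy_price=0.12, latency_weight=0.05):
    return revenue - energy_price * energy_kwh - latency_weight * latency_ms

# e.g. keep a VNF instance active only while its marginal utility stays positive
print(slice_utility(revenue=3.0, energy_kwh=10.0, latency_ms=12.0))
```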

3.
Neuron ; 112(18): 3017-3028, 2024 Sep 25.
Article in English | MEDLINE | ID: mdl-39326392

ABSTRACT

Innovations in wearable technology and artificial intelligence have enabled consumer devices to process and transmit data about human mental states (cognitive, affective, and conative) through what this paper refers to as "cognitive biometrics." Devices such as brain-computer interfaces, extended reality headsets, and fitness wearables offer significant benefits in health, wellness, and entertainment through the collection and processing of cognitive biometric data. However, they also pose unique risks to mental privacy due to their ability to infer sensitive information about individuals. This paper challenges the current approach of protecting individuals through legal protections for "neural data" and advocates for a more expansive legal and industry framework, as recently reflected in the draft UNESCO Recommendation on the Ethics of Neurotechnology, to holistically address both neural and cognitive biometric data. Incorporating this broader and more inclusive approach into legislation and product design can facilitate responsible innovation while safeguarding individuals' mental privacy.


Subject(s)
Brain-Computer Interfaces, Cognition, Privacy, Humans, Cognition/physiology, Brain-Computer Interfaces/ethics, Wearable Electronic Devices, Biometry/methods, Confidentiality/ethics
4.
Sensors (Basel) ; 24(18)2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39338808

ABSTRACT

Fog networking has become an established architecture addressing various applications with strict latency, jitter, and bandwidth constraints. Fog Nodes (FNs) allow for flexible and effective computation offloading and content distribution. However, the transmission of computational tasks, the processing of these tasks, and finally sending the results back still incur energy costs. We survey the literature on fog computing, focusing on energy consumption. We take a holistic approach and look at the energy consumed by devices located in all network tiers, from the things tier through the fog tier to the cloud tier, including the communication links between the tiers. Furthermore, fog network modeling is analyzed with particular emphasis on application scenarios and the energy consumed for communication and computation. We perform a detailed analysis of model parameterization, which is crucial for the results presented in the surveyed works. We then survey energy-saving methods, putting them into different classification systems and considering the results presented in the surveyed works. Based on our analysis, we present a classification and comparison of the fog algorithmic models, where energy is spent on communication and computation, and where delay is incurred. We also classify the scenarios examined by the surveyed works with respect to the assumed parameters. Moreover, we systematize the methods used to save energy in a fog network. These methods are compared with respect to their scenarios, objectives, constraints, and decision variables. Finally, we discuss future trends in fog networking and how related technologies and economic factors will need to balance their continued growth against energy consumption.
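
For readers unfamiliar with the kind of model parameterization discussed above, the sketch below shows a generic task-level energy model that splits energy into computation at the destination node and communication over the link; the parameter names and values are illustrative assumptions, not figures from any surveyed work.

```python
# Generic offloading energy model: computation energy at the destination plus
# communication energy on the link (transmitter and receiver radios).
def task_energy(cycles, joules_per_cycle, bits, joules_per_bit_tx, joules_per_bit_rx):
    """Energy (J) to send a task of `bits` over one link and process it with
    `cycles` CPU cycles at the destination node."""
    e_comp = cycles * joules_per_cycle
    e_comm = bits * (joules_per_bit_tx + joules_per_bit_rx)
    return e_comp + e_comm

# offloading a 2 Mbit task needing 1e9 cycles from a thing to a fog node
print(task_energy(cycles=1e9, joules_per_cycle=1e-9,
                  bits=2e6, joules_per_bit_tx=5e-8, joules_per_bit_rx=5e-8))
```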

5.
Sensors (Basel) ; 24(18)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39338897

ABSTRACT

With the development of the Internet of Things (IoT) and edge computing, more and more devices, such as sensor nodes and intelligent automated guided vehicles (AGVs), can serve as edge devices providing Location-Based Services (LBS) through the IoT. As the number of applications increases, an abundance of sensitive information is exchanged during communication, shifting the focus of privacy protection toward the communication process and the edge devices themselves. The challenge lies in the fact that most traditional location privacy protection algorithms are not suited to the IoT with edge computing, as they primarily focus on the security of remote servers. To enhance location privacy protection, this paper proposes a novel clustering-based K-anonymity algorithm that incorporates a scheme for flexibly combining real and virtual locations according to application requirements. Simulation results demonstrate that the proposed algorithm significantly improves location privacy protection for the IoT with edge computing. Compared to traditional K-anonymity algorithms, the proposed algorithm further enhances the security of location privacy by expanding the potential region in which the real node may be located, thereby limiting the effectiveness of "narrow-region" attacks.
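
The sketch below illustrates the general K-anonymity idea the abstract builds on: hiding the real node inside a cloaking set of at least k locations, padded with virtual (dummy) locations when too few real neighbours are available. It is a generic illustration under assumed names, not the paper's clustering algorithm.

```python
# Minimal K-anonymity sketch for LBS queries: the real location is hidden in a
# set of at least k points, padded with virtual (dummy) locations if needed.
import random

def cloaking_set(real_loc, nearby_locs, k=5, jitter=0.01):
    """Return at least k locations (real + neighbours + dummies) so the real
    node cannot be singled out inside the reported region."""
    cloak = [real_loc] + list(nearby_locs[: k - 1])
    while len(cloak) < k:                       # not enough real neighbours:
        dx, dy = (random.uniform(-jitter, jitter) for _ in range(2))
        cloak.append((real_loc[0] + dx, real_loc[1] + dy))  # add a virtual location
    random.shuffle(cloak)                       # do not reveal which entry is real
    return cloak

print(cloaking_set((40.4168, -3.7038), [(40.4170, -3.7040)], k=4))
```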

6.
Sci Rep ; 14(1): 21532, 2024 09 15.
Article in English | MEDLINE | ID: mdl-39278954

ABSTRACT

Advances in technology, particularly the Internet of Things (IoT), are making remote medical care observation possible, yet effective and secure retrieval of healthcare information remains complex. IoT systems have restricted resources, which makes effective and secure healthcare information acquisition difficult. The idea of smart healthcare has developed in diverse regions, where small-scale implementations of medical facilities are evaluated. In IoT-aided medical devices, the security of the IoT systems and the related information is essential; edge computing is a significant framework that addresses their processing and computational issues. Edge computing is inexpensive and powerful, offering low-latency information assistance by enhancing the computation and transmission speed of IoT systems in the medical sector. The main intention of this work is to design a secure framework for edge computing in IoT-enabled healthcare systems using heuristic-based authentication and Named Data Networking (NDN). The proposed model has three layers. In the first layer, many IoT devices are connected together and, using cluster head formation, patients transmit their data to the edge cloud layer. The edge cloud layer provides the storage and computing resources for rapidly caching and serving medical data. In the patient layer, a new heuristic-based sanitization algorithm called Revised Position of Cat Swarm Optimization (RPCSO) with NDN hides the sensitive data that should not be leaked to unauthorized users. This authentication procedure is formulated as a multi-objective key generation problem considering constraints such as hiding failure rate, information preservation rate, and degree of modification. The data from the edge cloud layer is then transferred to the user layer, where optimal key generation with NDN-based restoration is adopted, achieving efficient and secure medical data retrieval. The framework is evaluated quantitatively on diverse healthcare datasets from the University of California Irvine (UCI) and Kaggle repositories, and the experimental analysis shows the superior performance of the proposed model in terms of latency and cost compared to existing solutions. The proposed model is compared against existing algorithms such as Cat Swarm Optimization (CSO), the Osprey Optimization Algorithm (OOA), Mexican Axolotl Optimization (MAO), and the Single Candidate Optimizer (SCO). Similarly, cryptographic schemes such as Rivest-Shamir-Adleman (RSA), the Advanced Encryption Standard (AES), Elliptic Curve Cryptography (ECC), and Data Sanitization and Restoration (DSR) are applied and compared with the RPCSO used in the proposed work. The results are compared on the basis of the best, worst, mean, median, and standard deviation. The proposed RPCSO outperforms all other models, with values of 0.018069361, 0.50564046, 0.112643119, 0.018069361, and 0.156968355 for dataset 1 and 0.283597992, 0.467442652, 0.32920734, 0.328581887, and 0.063687386 for dataset 2, respectively.
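
As a hedged illustration of the multi-objective criterion named above, the snippet below scores a candidate sanitization key by weighting hiding failure rate, information preservation rate, and degree of modification; the weighting scheme and function name are assumptions, not the paper's exact objective.

```python
# Sketch of a scalarized multi-objective fitness for data sanitization:
# lower is better (few hidden-rule leaks, high preservation, small changes).
def sanitization_fitness(hiding_failure, info_preservation, modification,
                         w=(0.4, 0.4, 0.2)):
    return (w[0] * hiding_failure
            + w[1] * (1.0 - info_preservation)
            + w[2] * modification)

print(sanitization_fitness(hiding_failure=0.05,
                           info_preservation=0.93,
                           modification=0.10))
```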


Subject(s)
Cloud Computing, Computer Security, Internet of Things, Humans, Heuristics, Algorithms, Delivery of Health Care, Computer Communication Networks
7.
Heliyon ; 10(18): e37490, 2024 Sep 30.
Article in English | MEDLINE | ID: mdl-39309787

ABSTRACT

The current society is becoming increasingly interconnected and hyper-connected. Communication networks are advancing, as are logistics networks and even networks for the transportation and distribution of natural resources. One of the key benefits of the evolution of these networks is bringing consumers closer to the source of a resource or service. However, this is not a straightforward task, particularly since networks near final users are usually shaped by heterogeneous nodes, sometimes in very dense scenarios, which may demand or offer a resource at any given moment. In this paper, we present DEN2NE, a novel algorithm designed for the automatic distribution and reallocation of resources in distributed environments. The algorithm has been implemented with six different criteria in order to adapt it to the specific use case under consideration. The results obtained from DEN2NE are promising, owing to its adaptability and its average execution time, which scales linearly with the topology size.
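
To make the idea of criterion-driven resource distribution concrete, here is a minimal sketch that greedily places each demand on the best-ranked node under a pluggable criterion (here, fewest hops); the node fields, criterion, and greedy rule are assumptions for illustration and are not DEN2NE itself.

```python
# Illustrative criterion-driven placement over a heterogeneous node set.
def assign_resources(demands, nodes, criterion):
    """Place each demand on the node ranked best by `criterion` among the
    nodes that still have enough free capacity."""
    placement = {}
    for name, amount in demands.items():
        candidates = [n for n in nodes if n["free"] >= amount]
        if not candidates:
            placement[name] = None            # demand cannot be served
            continue
        best = min(candidates, key=criterion)
        best["free"] -= amount
        placement[name] = best["id"]
    return placement

nodes = [{"id": "n1", "free": 4, "hops": 1}, {"id": "n2", "free": 10, "hops": 3}]
print(assign_resources({"taskA": 3, "taskB": 6}, nodes,
                       criterion=lambda n: n["hops"]))   # "closest node" criterion
```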

8.
Neural Netw ; 179: 106621, 2024 Nov.
Article in English | MEDLINE | ID: mdl-39153402

ABSTRACT

Vehicular edge computing (VEC), a promising paradigm for the development of emerging intelligent transportation systems, can provide lower service latency for vehicular applications. However, it remains a challenge to fulfill the requirements of applications with stringent latency requirements in a VEC system with limited resources. In addition, existing methods focus on handling the offloading task in a given time slot with statically allocated resources, ignoring the different resource requirements of heterogeneous tasks, which results in resource wastage. To solve the real-time task offloading and heterogeneous resource allocation problem in the VEC system, we propose a decentralized solution based on the attention mechanism and recurrent neural networks (RNN) with a multi-agent distributed deep deterministic policy gradient (AR-MAD4PG). First, to address the partial observability of agents, we construct a shared agent graph and propose a periodic communication mechanism that enables edge nodes to aggregate information from other edge nodes. Second, to help agents better understand the current system state, we design an RNN-based feature extraction network to capture the historical state and resource allocation information of the VEC system. Third, to tackle the challenges of an excessively large joint observation-action space and interference from ineffective information, we adopt a multi-head attention mechanism to compress the dimension of the agents' observation-action space. Finally, we build a simulation model based on actual vehicle trajectories, and the experimental results show that our proposed method outperforms existing approaches.
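
The compression step can be pictured with the single-head, NumPy-only attention sketch below, which pools per-agent observation-action features into one fixed-size summary so the critic input does not grow with the number of agents; the shapes, the single head, and the mean pooling are simplifying assumptions relative to the multi-head mechanism described above.

```python
# Scaled dot-product attention over per-agent observation-action features,
# pooled into a fixed-size summary vector (single head for brevity).
import numpy as np

def attention_summary(joint_obs_act, d_k=16, seed=0):
    """joint_obs_act: (n_agents, feat) matrix. Returns a pooled vector whose
    size does not depend on n_agents."""
    rng = np.random.default_rng(seed)
    feat = joint_obs_act.shape[1]
    Wq, Wk, Wv = (rng.normal(scale=feat ** -0.5, size=(feat, d_k)) for _ in range(3))
    Q, K, V = joint_obs_act @ Wq, joint_obs_act @ Wk, joint_obs_act @ Wv
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return (weights @ V).mean(axis=0)                        # pool over agents

print(attention_summary(np.random.default_rng(1).normal(size=(6, 8))).shape)
```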


Subject(s)
Neural Networks, Computer; Resource Allocation; Reinforcement, Psychology; Internet; Transportation; Algorithms; Computer Simulation; Deep Learning
9.
Sci Rep ; 14(1): 18506, 2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39122773

ABSTRACT

This paper aims to increase the target-tracking capacity of Unmanned Aerial Vehicles (UAVs). First, a control model based on fuzzy logic is created, which adjusts the UAV's flight attitude in response to the target's motion status and changes in the surrounding environment. Then, an edge computing-based target tracking framework is built: by deploying edge devices around the UAV, the computation for target recognition and position prediction is transferred from the central processing unit to the edge nodes. Finally, the latest Vision Transformer model is adopted for target recognition; the image is divided into uniform blocks, and an attention mechanism captures the relationships between blocks to realize real-time image analysis. To predict the position, a particle filter uses historical data and sensor inputs to produce a high-precision estimate of the target position. Experimental results in different scenes show that the average target capture time of the fuzzy-logic-based algorithm is 20% shorter than that of the traditional proportional-integral-derivative (PID) method, falling from 5.2 s to 4.2 s. The average tracking error is reduced by 15%, from 0.8 m with traditional PID to 0.68 m. Meanwhile, under environmental changes and changes in target motion, the algorithm shows better robustness, with a tracking-error fluctuation range only half that of traditional PID. This shows that fuzzy logic control theory can be successfully applied to UAV target tracking, demonstrating the effectiveness of this method in improving tracking performance.
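
The position-prediction step can be illustrated with the compact particle-filter sketch below: particles are propagated with a constant-velocity model, weighted by a noisy position measurement, and resampled. The motion model, noise levels, and function name are assumptions for illustration, not the paper's exact filter.

```python
# One predict-update-resample cycle of a 2D particle filter.
import numpy as np

def particle_filter_step(particles, velocities, measurement,
                         dt=0.1, meas_std=0.5, rng=np.random.default_rng(0)):
    """Returns (resampled particles, weighted position estimate)."""
    particles = particles + velocities * dt                               # predict
    particles = particles + rng.normal(scale=0.05, size=particles.shape)  # process noise
    d2 = ((particles - measurement) ** 2).sum(axis=1)
    weights = np.exp(-0.5 * d2 / meas_std ** 2)
    weights /= weights.sum()                                              # update
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], weights @ particles                            # resample

rng = np.random.default_rng(1)
particles = rng.normal(size=(500, 2))
velocities = np.ones((500, 2))
_, estimate = particle_filter_step(particles, velocities, measurement=np.array([0.2, 0.1]))
print(estimate)
```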

10.
Sensors (Basel) ; 24(15)2024 Jul 25.
Article in English | MEDLINE | ID: mdl-39123877

ABSTRACT

Computer Vision (CV) has become increasingly important for Single-Board Computers (SBCs) due to their widespread deployment in addressing real-world problems. Specifically, in the context of smart cities, there is an emerging trend of developing end-to-end video analytics solutions designed to address urban challenges such as traffic management, disaster response, and waste management. However, deploying CV solutions on SBCs presents several pressing challenges (e.g., limited computation power, inefficient energy management, and real-time processing needs) hindering their use at scale. Graphical Processing Units (GPUs) and software-level developments have recently emerged to address these challenges and enable elevated SBC performance; however, this is still an active area of research. There is a gap in the literature for a comprehensive review of such recent and rapidly evolving advancements on both the software and hardware fronts. The presented review provides a detailed overview of existing GPU-accelerated edge-computing SBCs and software advancements, including algorithm optimization techniques, packages, development frameworks, and hardware-deployment-specific packages. It also provides a subjective comparative analysis based on critical factors to help applied Artificial Intelligence (AI) researchers understand the existing state of the art and select the best-suited combinations for their specific use case. Finally, the paper discusses potential limitations of existing SBCs and highlights future research directions in this domain.

11.
Sensors (Basel) ; 24(15)2024 Jul 27.
Article in English | MEDLINE | ID: mdl-39123922

ABSTRACT

Interest in deploying deep reinforcement learning (DRL) models on low-power edge devices, such as Autonomous Mobile Robots (AMRs) and Internet of Things (IoT) devices, has risen significantly due to the potential for real-time inference, which eliminates the latency and reliability issues of wireless communication, and the privacy benefits of processing data locally. However, deploying such energy-intensive models on power-constrained devices is not always feasible, which has led to the development of model compression techniques that can reduce the size and computational complexity of DRL policies. Policy distillation, the most popular of these methods, can be used to first lower the number of network parameters by transferring the behavior of a large teacher network to a smaller student model before deploying these students at the edge. This works well for deterministic policies that operate over discrete actions. However, many power-constrained real-world tasks, such as those in robotics, are formulated using continuous action spaces, which standard policy distillation does not support. In this work, we improve the policy distillation method to support the compression of DRL models designed to solve these continuous control tasks, with an emphasis on maintaining the stochastic nature of continuous DRL algorithms. Experiments show that our methods can be used effectively to compress such policies by up to 750% while maintaining or even exceeding their teacher's performance by up to 41% on two popular continuous control tasks.
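
One common way to distill a stochastic continuous-action teacher into a student, consistent with the emphasis on preserving stochasticity, is to minimize the KL divergence between their diagonal Gaussian action distributions; the sketch below computes that divergence. This is a generic formulation offered as illustration, not necessarily the paper's exact objective.

```python
# KL( teacher N(mu_t, std_t^2) || student N(mu_s, std_s^2) ) for diagonal
# Gaussian policies, summed over independent action dimensions.
import numpy as np

def gaussian_kl(mu_t, std_t, mu_s, std_s):
    var_t, var_s = std_t ** 2, std_s ** 2
    return float(np.sum(np.log(std_s / std_t)
                        + (var_t + (mu_t - mu_s) ** 2) / (2 * var_s) - 0.5))

print(gaussian_kl(np.array([0.2, -0.1]), np.array([0.3, 0.3]),
                  np.array([0.25, 0.0]), np.array([0.5, 0.4])))
```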

12.
Sensors (Basel) ; 24(15)2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39124122

ABSTRACT

The rapid advancement of technology has greatly expanded the capabilities of unmanned aerial vehicles (UAVs) in wireless communication and edge computing domains. The primary objective of UAVs is the seamless transfer of video data streams to emergency responders. However, live video data streaming is inherently latency dependent, wherein the value of the video frames diminishes with any delay in the stream. This becomes particularly critical during emergencies, where live video streaming provides vital information about the current conditions. Edge computing seeks to address this latency issue in live video streaming by bringing computing resources closer to users. Nonetheless, the mobile nature of UAVs necessitates additional trajectory supervision alongside the management of computation and networking resources. Consequently, efficient system optimization is required to maximize the overall effectiveness of the collaborative system with limited UAV resources. This study explores a scenario where multiple UAVs collaborate with end users and edge servers to establish an emergency response system. The proposed idea takes a comprehensive approach by considering the entire emergency response system from the incident site to video distribution at the user level. It includes an adaptive resource management strategy that leverages deep reinforcement learning to simultaneously address video streaming latency, UAV and user mobility factors, and varied bandwidth resources.

13.
Sensors (Basel) ; 24(15)2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39124124

ABSTRACT

A complete low-power, low-cost, wireless solution for bridge structural health monitoring is presented. The work includes monitoring nodes with a modular, low-power hardware design based on a control and resource management board called CoreBoard, together with a specific sensorization board called SensorBoard. The firmware is designed as a set of parallelised FreeRTOS tasks that manage the hardware resources and implement the Random Decrement Technique to minimize the amount of data transmitted securely over the NB-IoT network. The presented solution is validated through the characterization of its energy consumption, which guarantees an autonomy of more than 10 years with a daily 8-minute monitoring period, and through two deployments: a pilot laboratory structure and the Eduardo Torroja bridge in Posadas (Córdoba, Spain). The results are compared with two different calibrated commercial systems, obtaining an error lower than 1.72% in the modal analysis frequencies. The architecture and the results obtained position the presented design as a new solution in the state of the art and, thanks to its autonomy, low cost, and the graphical device management interface presented, allow its deployment and integration in the current IoT paradigm.
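
For context, the Random Decrement Technique mentioned above can be sketched in a few lines: segments of the vibration record that start at an upward crossing of a trigger level are averaged, yielding a short free-decay signature that can be transmitted instead of the raw record. The trigger level and segment length below are illustrative choices, not the deployed firmware's parameters.

```python
# Random Decrement Technique sketch: average segments starting at upward
# level crossings to obtain a free-decay signature.
import numpy as np

def random_decrement(signal, trigger_level, segment_len):
    starts = [i for i in range(1, len(signal) - segment_len)
              if signal[i] >= trigger_level > signal[i - 1]]
    if not starts:
        return np.zeros(segment_len)
    segments = np.stack([signal[i:i + segment_len] for i in starts])
    return segments.mean(axis=0)            # the random-decrement signature

t = np.linspace(0, 10, 2000)
rng = np.random.default_rng(0)
x = np.exp(-0.2 * t) * np.sin(2 * np.pi * 3 * t) + 0.1 * rng.normal(size=t.size)
print(random_decrement(x, trigger_level=0.3, segment_len=200).shape)
```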

14.
Cogn Neurodyn ; 18(4): 1799-1810, 2024 Aug.
Article in English | MEDLINE | ID: mdl-39104679

ABSTRACT

Facial expression recognition has made significant progress with the advent of convolutional neural networks (CNNs). However, as CNNs improve, models continue to get deeper and larger, placing a greater focus on the high-level features of the image while the low-level features tend to be lost. As a result, the dependencies between low-level features in different areas of the face often cannot be captured. In response to this problem, we propose a novel network based on the CNN model. To extract long-range dependencies among low-level features, multiple attention mechanisms are introduced into the network. First, a patch attention mechanism is designed to obtain the dependencies between low-level features of facial expressions. After fusion, the feature maps are fed into a backbone network incorporating the convolutional block attention module (CBAM) to enhance feature extraction and improve the accuracy of facial expression recognition, achieving competitive results on three datasets: CK+ (98.10%), JAFFE (95.12%), and FER2013 (73.50%). Furthermore, for the PA Net designed in this paper, a hardware-friendly implementation scheme based on memristor crossbars is designed, which is expected to provide a software-hardware co-design scheme for edge computing in personal and wearable electronic products.
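
The patch-attention idea can be pictured with the NumPy sketch below: a low-level feature map is split into patches, pairwise patch affinities are turned into attention weights, and each patch is re-weighted by its attended neighbours so long-range dependencies between facial regions are retained. The patch size, single channel, and affinity measure are simplifying assumptions, not the module proposed in the paper.

```python
# Patch-level self-attention over a single-channel feature map.
import numpy as np

def patch_attention(feature_map, patch=4):
    """feature_map: (H, W) low-level feature map; returns a map of equal size."""
    H, W = feature_map.shape
    patches = (feature_map.reshape(H // patch, patch, W // patch, patch)
                          .swapaxes(1, 2)
                          .reshape(-1, patch * patch))          # (N, patch*patch)
    scores = patches @ patches.T / patches.shape[1]             # patch affinities
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights = weights / weights.sum(axis=1, keepdims=True)      # softmax rows
    attended = weights @ patches                                # mix patches
    out = (attended.reshape(H // patch, W // patch, patch, patch)
                   .swapaxes(1, 2)
                   .reshape(H, W))
    return out

print(patch_attention(np.random.default_rng(0).normal(size=(16, 16))).shape)
```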

15.
Heliyon ; 10(12): e32399, 2024 Jun 30.
Article in English | MEDLINE | ID: mdl-39183823

ABSTRACT

In recent years, edge-cloud computing has attracted increasing attention due to the benefits of combining edge and cloud computing. Task scheduling remains one of the major challenges for improving service quality and resource efficiency in edge-clouds. Although several studies have addressed the scheduling problem, issues remain to be solved for practical applications, e.g., ignoring resource heterogeneity or focusing on only one kind of request. Therefore, in this paper, we aim to provide a heterogeneity-aware task scheduling algorithm to improve the task completion rate and resource utilization for edge-clouds with deadline constraints. Because the scheduling problem is NP-hard, we exploit the genetic algorithm (GA), one of the most representative and widely used meta-heuristic algorithms, considering task completion rate and resource utilization as the major and minor optimization objectives, respectively. In our GA-based scheduling algorithm, a gene indicates the resource on which its corresponding task is processed. To improve the performance of the GA, we propose a skew mutation operator in which genes are associated with resource heterogeneity during population evolution. We conduct extensive experiments to evaluate the performance of our algorithm, and the results verify its superiority in task completion rate compared with thirteen other classical and up-to-date scheduling algorithms.
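
A hedged sketch of the skew-mutation idea follows: when a gene (the resource assigned to a task) mutates, the replacement resource is drawn with probability proportional to its capacity, so heterogeneity biases the search toward more capable resources. The capacity-proportional rule, field names, and mutation rate are assumptions for illustration.

```python
# Capacity-skewed mutation of a scheduling chromosome.
import random

def skew_mutate(chromosome, resources, p_mut=0.1):
    """chromosome[i] = index of the resource processing task i."""
    total = sum(r["capacity"] for r in resources)
    weights = [r["capacity"] / total for r in resources]
    return [random.choices(range(len(resources)), weights=weights)[0]
            if random.random() < p_mut else gene
            for gene in chromosome]

resources = [{"capacity": 2.0}, {"capacity": 8.0}, {"capacity": 4.0}]
print(skew_mutate([0, 1, 2, 0, 1], resources, p_mut=0.5))
```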

16.
Sensors (Basel) ; 24(16)2024 Aug 14.
Article in English | MEDLINE | ID: mdl-39204957

ABSTRACT

Intelligent mobile image sensing powered by deep learning analyzes images captured by cameras from mobile devices, such as smartphones or smartwatches. It supports numerous mobile applications, such as image classification, face recognition, and camera scene detection. Unfortunately, mobile devices often lack the resources necessary for deep learning, leading to increased inference latency and rapid battery consumption. Moreover, the inference accuracy may decline over time due to potential data drift. To address these issues, we introduce a new cost-efficient framework, called Corun, designed to simultaneously handle multiple inference queries and continual model retraining/fine-tuning of a pre-trained model on a single commodity GPU in an edge server, significantly improving the inference throughput while upholding the inference accuracy. The scheduling method of Corun undertakes offline profiling to find the maximum number of concurrent inferences that can be executed along with a retraining job on a single GPU without incurring an out-of-memory error or significantly increasing the latency. Our evaluation verifies the cost-effectiveness of Corun. The inference throughput provided by Corun scales with the number of concurrent inference queries, while the latency of inference queries and the length of a retraining epoch increase at substantially lower rates. By concurrently processing multiple inference and retraining tasks on one GPU instead of using a separate GPU for each task, Corun could reduce the number of GPUs and the cost required to deploy mobile image sensing applications based on deep learning at the edge.
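
The offline-profiling step can be imagined as the loop below, which raises the number of co-located inference queries until a memory budget or a latency objective is violated; `profile_run` is a hypothetical measurement hook and the limits are placeholders, since Corun's actual profiling interface is not described in the abstract.

```python
# Find the largest concurrency level that fits within memory and latency limits.
def max_concurrency(profile_run, mem_limit_gb, latency_slo_ms, upper=64):
    """profile_run(n) -> (peak_mem_gb, p99_latency_ms) for n concurrent
    inferences co-located with one retraining job on the same GPU."""
    best = 0
    for n in range(1, upper + 1):
        mem, latency = profile_run(n)
        if mem > mem_limit_gb or latency > latency_slo_ms:
            break
        best = n
    return best

# toy model: memory and latency grow linearly with concurrency
print(max_concurrency(lambda n: (2 + 0.5 * n, 20 + 3 * n),
                      mem_limit_gb=16, latency_slo_ms=80))
```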

17.
Sensors (Basel) ; 24(16)2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39204838

ABSTRACT

Device-to-device (D2D) is a pivotal technology in the next generation of communication, allowing for direct task offloading between mobile devices (MDs) to improve the efficient utilization of idle resources. This paper proposes a novel algorithm for dynamic task offloading between the active MDs and the idle MDs in a D2D-MEC (mobile edge computing) system by deploying multi-agent deep reinforcement learning (DRL) to minimize the long-term average delay of delay-sensitive tasks under deadline constraints. Our core innovation is a dynamic partitioning scheme for idle and active devices in the D2D-MEC system, accounting for stochastic task arrivals and multi-time-slot task execution, which has been insufficiently explored in the existing literature. We adopt a queue-based system to formulate a dynamic task offloading optimization problem. To address the challenges of large action space and the coupling of actions across time slots, we model the problem as a Markov decision process (MDP) and perform multi-agent DRL through multi-agent proximal policy optimization (MAPPO). We employ a centralized training with decentralized execution (CTDE) framework to enable each MD to make offloading decisions solely based on its local system state. Extensive simulations demonstrate the efficiency and fast convergence of our algorithm. In comparison to the existing sub-optimal results deploying single-agent DRL, our algorithm reduces the average task completion delay by 11.0% and the ratio of dropped tasks by 17.0%. Our proposed algorithm is particularly pertinent to sensor networks, where mobile devices equipped with sensors generate a substantial volume of data that requires timely processing to ensure quality of experience (QoE) and meet the service-level agreements (SLAs) of delay-sensitive applications.
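
To make the queue-based formulation concrete, the sketch below serves the head-of-line task of a device for one time slot, at a local rate or a faster offload rate when an idle device is available, and drops tasks whose deadlines have passed; the rates, task fields, and offloading rule are illustrative assumptions rather than the paper's MAPPO policy.

```python
# One-slot service of a device's task queue with deadline-based dropping.
from collections import deque

def step(queue, now, local_rate, offload_rate, idle_available):
    """Process the head-of-line task for one slot; return 'done', 'dropped', or None."""
    if not queue:
        return None
    task = queue[0]                       # task = dict(size, deadline)
    rate = offload_rate if idle_available else local_rate
    task["size"] -= rate                  # work served this slot
    if task["size"] <= 0:
        queue.popleft()
        return "done"
    if now >= task["deadline"]:
        queue.popleft()                   # deadline violated
        return "dropped"
    return None

q = deque([{"size": 12.0, "deadline": 5}])
for t in range(8):
    print(t, step(q, t, local_rate=2.0, offload_rate=5.0, idle_available=(t >= 2)))
```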

18.
Sensors (Basel) ; 24(16)2024 Aug 09.
Article in English | MEDLINE | ID: mdl-39204852

ABSTRACT

With the rapid development of the Industrial Internet of Things for rotating machinery, the amount of data sampled by mechanical vibration wireless sensor networks (MvWSNs) has increased significantly, straining bandwidth capacity. Concurrently, the safety requirements for rotating machinery have escalated, necessitating enhanced real-time data processing capabilities. Conventional methods, reliant on experiential approaches, have proven inefficient in meeting these evolving challenges. To this end, a fault detection method for rotating machinery based on MobileNet in MvWSNs is proposed to address these issues. The small, lightweight deep learning model helps realize near-real-time sensing and fault detection, easing the communication pressure on MvWSNs. The trained deep learning model is deployed on the MvWSN sensor node, an edge computing platform developed on embedded STM32 microcontrollers (STMicroelectronics International NV, Geneva, Switzerland). Data acquisition, data processing, and data classification are all executed on the computing- and energy-constrained sensor node. The experimental results demonstrate that the proposed fault detection method achieves an accuracy of about 0.99 on the DDS dataset and 0.98 on the MvWSN sensor node. Furthermore, the final transmitted data size is only 0.1% of the original data size. The method is also time-saving: detection is accomplished within 135 ms, whereas the raw data would take about 1000 ms to transmit to the monitoring center when there are four sensor nodes in the network. Thus, the proposed edge computing method shows good application prospects for fault detection and control of rotating machinery with high time sensitivity.
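
One reason a MobileNet-style model fits a computation-constrained sensor node is the parameter savings of depthwise-separable convolutions; the quick arithmetic below compares parameter counts with a standard convolution for one layer shape. This is generic MobileNet arithmetic, not the paper's specific network configuration.

```python
# Parameter count: standard convolution vs. depthwise-separable convolution.
def conv_params(c_in, c_out, k):
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    return c_in * k * k + c_in * c_out     # depthwise + 1x1 pointwise

std = conv_params(64, 128, 3)
dws = depthwise_separable_params(64, 128, 3)
print(std, dws, round(std / dws, 1))        # roughly an 8-9x reduction here
```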

19.
Sensors (Basel) ; 24(16)2024 Aug 15.
Article in English | MEDLINE | ID: mdl-39204979

ABSTRACT

In the era of ubiquitous computing, the challenges imposed by the increasing demand for real-time data processing, security, and energy efficiency call for innovative solutions. The emergence of fog computing has provided a promising paradigm to address these challenges by bringing computational resources closer to data sources. Despite its advantages, the fog computing characteristics pose challenges in heterogeneous environments in terms of resource allocation and management, provisioning, security, and connectivity, among others. This paper introduces COGNIFOG, a novel cognitive fog framework currently under development, which was designed to leverage intelligent, decentralized decision-making processes, machine learning algorithms, and distributed computing principles to enable the autonomous operation, adaptability, and scalability across the IoT-edge-cloud continuum. By integrating cognitive capabilities, COGNIFOG is expected to increase the efficiency and reliability of next-generation computing environments, potentially providing a seamless bridge between the physical and digital worlds. Preliminary experimental results with a limited set of connectivity-related COGNIFOG building blocks show promising improvements in network resource utilization in a real-world-based IoT scenario. Overall, this work paves the way for further developments on the framework, which are aimed at making it more intelligent, resilient, and aligned with the ever-evolving demands of next-generation computing environments.

20.
Sensors (Basel) ; 24(16)2024 Aug 22.
Article in English | MEDLINE | ID: mdl-39205138

ABSTRACT

This paper presents a new edge-computing detection process implemented in an embedded IoT device called the Bee Smart Detection node to detect catastrophic apiary events. Such events include swarming, queen loss, and Colony Collapse Disorder (CCD) conditions. Two deep learning sub-processes are used for this purpose. The first uses a fuzzy multi-layered neural network of variable depth called fuzzy-stranded-NN to detect CCD conditions based on temperature and humidity measurements inside the beehive. The second utilizes a deep learning CNN model to detect swarming and queen loss cases based on sound recordings. The proposed processes have been implemented into autonomous Bee Smart Detection IoT devices that transmit their measurements and detection results to the cloud over Wi-Fi. The BeeSD devices have been tested for ease of use, autonomous operation, deep learning model inference accuracy, and inference execution speed. The author presents the experimental results of the fuzzy-stranded-NN model for detecting critical conditions and of the deep learning CNN models for detecting swarming and queen loss. In the presented experiments, the stranded-NN achieved accuracy of up to 95%, while the ResNet-50 model achieved accuracy of up to 99% for detecting swarming or queen loss events. The ResNet-18 model, the fastest-inference replacement for the ResNet-50 model, achieves accuracy of up to 93%. Finally, a cross-comparison of the deep learning models with machine learning ones shows that the deep learning models provide at least 3-5% better accuracy.
