Results 1 - 20 of 683
1.
Nano Lett ; 24(23): 7091-7099, 2024 Jun 12.
Article in English | MEDLINE | ID: mdl-38804877

ABSTRACT

Multimodal perception can capture more precise and comprehensive information than unimodal approaches. However, current sensory systems typically merge multimodal signals at computing terminals after parallel processing and transmission, which risks losing spatial association information and requires time stamps to maintain temporal coherence for time-series data. Here we demonstrate bioinspired in-sensor multimodal fusion, which effectively enhances comprehensive perception and reduces data transfer between sensory terminals and computation units. By adopting floating-gate phototransistors with reconfigurable photoresponse plasticity, we realize agile spatial and spatiotemporal fusion under nonvolatile and volatile photoresponse modes. To realize an optimal spatial estimation, we integrate spatial information from visual-tactile signals. For dynamic events, we capture and fuse spatiotemporal information from visual-audio signals in real time, realizing a dance-music synchronization recognition task without a time-stamping process. This in-sensor multimodal fusion approach offers a route to simplifying multimodal integration systems, extending the in-sensor computing paradigm.
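
As a back-of-the-envelope illustration of the "optimal spatial estimation" step, the sketch below fuses a visual and a tactile position estimate by inverse-variance weighting, the standard maximum-likelihood model of multisensory cue combination. The function name and the assumption of Gaussian, independent unimodal estimates are ours for illustration; the paper realizes fusion at the device level, not in software.

```python
def fuse_estimates(mu_visual, var_visual, mu_tactile, var_tactile):
    """Maximum-likelihood fusion of two independent unimodal position
    estimates: weight each modality by its inverse variance. The fused
    variance is never larger than either unimodal variance, which is
    the usual sense in which multimodal estimation is 'optimal'."""
    w = (1.0 / var_visual) / (1.0 / var_visual + 1.0 / var_tactile)
    mu_fused = w * mu_visual + (1.0 - w) * mu_tactile
    var_fused = 1.0 / (1.0 / var_visual + 1.0 / var_tactile)
    return mu_fused, var_fused

# Example: a precise visual cue dominates a noisy tactile one.
print(fuse_estimates(mu_visual=0.9, var_visual=0.01,
                     mu_tactile=1.4, var_tactile=0.09))
```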

2.
Methods ; 218: 94-100, 2023 10.
Article in English | MEDLINE | ID: mdl-37507060

ABSTRACT

In recent years, healthcare data from sources such as clinical institutions, patients, and pharmaceutical industries have become increasingly abundant. However, due to the complexity of the healthcare system and data privacy concerns, aggregating and utilizing these data in a centralized manner is challenging. Federated learning (FL) has emerged as a promising solution for distributed training in edge computing scenarios, utilizing on-device user data while reducing server costs. In traditional FL, a central server trains a global model by randomly sampling client data, and the server combines the models collected from different clients into one global model. However, for datasets that are not independent and identically distributed (non-i.i.d.), randomly selecting the users that train the global model is not optimal and can lead to poor training performance. To address this limitation, we propose the Federated Multi-Center Clustering algorithm (FedMCC) to enhance robustness and accuracy for all clients. FedMCC leverages the Model-Agnostic Meta-Learning (MAML) algorithm, focusing on training a robust base model during the initial training phase and better capturing features from different users. Subsequently, clustering methods are used to ensure that features among users within each cluster are similar, approximating an i.i.d. training process in each round and resulting in more effective training of the global model. We validate the effectiveness and generalizability of FedMCC through extensive experiments on public healthcare datasets. The results demonstrate that FedMCC achieves improved performance and accuracy for all clients while maintaining data privacy and security, showcasing its potential for various healthcare applications.
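
A minimal sketch of the clustering stage described above, under our own assumptions about its interface: each client is represented by a flattened update (or feature) vector, clients are grouped with k-means so that each cluster is approximately i.i.d., and models are then averaged per cluster. FedMCC's MAML-based pre-training and its exact cluster count are not reproduced here.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_and_aggregate(client_vectors, n_clusters=3):
    """Group clients whose update/feature vectors are similar, then
    average within each cluster (one 'center' model per cluster).
    client_vectors: list of 1-D numpy arrays, one per client."""
    X = np.stack(client_vectors)
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(X)
    cluster_models = {
        k: X[labels == k].mean(axis=0) for k in range(n_clusters)
    }
    return labels, cluster_models
```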


Subjects
Algorithms, Privacy, Humans, Cluster Analysis
3.
Sensors (Basel) ; 24(18)2024 Sep 19.
Article in English | MEDLINE | ID: mdl-39338808

ABSTRACT

Fog networking has become an established architecture addressing various applications with strict latency, jitter, and bandwidth constraints. Fog Nodes (FNs) allow for flexible and effective computation offloading and content distribution. However, transmitting computational tasks, processing them, and sending the results back all incur energy costs. We survey the literature on fog computing, focusing on energy consumption. We take a holistic approach and look at the energy consumed by devices in all network tiers, from the things tier through the fog tier to the cloud tier, including the communication links between tiers. Furthermore, fog network modeling is analyzed with particular emphasis on application scenarios and the energy consumed for communication and computation. We perform a detailed analysis of model parameterization, which is crucial for the results presented in the surveyed works. We then survey energy-saving methods, organizing them into several classification systems and weighing the results presented in the surveyed works. Based on our analysis, we present a classification and comparison of the fog algorithmic models, showing where energy is spent on communication and computation and where delay is incurred. We also classify the scenarios examined by the surveyed works with respect to their assumed parameters. Moreover, we systematize the methods used to save energy in a fog network, comparing them with respect to their scenarios, objectives, constraints, and decision variables. Finally, we discuss future trends in fog networking and how related technologies and economics will trade off further development against energy consumption.

4.
Sensors (Basel) ; 24(8)2024 Apr 15.
Article in English | MEDLINE | ID: mdl-38676147

ABSTRACT

This paper focuses on the use of smart manufacturing in lathe-cutting machine tools, which can experience thermal deformation during long-term processing, leading to displacement errors in the cutting head and damage to the final product. This study uses time-series thermal compensation to develop a predictive system for thermal displacement in machine tools that is applicable in industry using edge computing technology. Two experiments were carried out to optimize the temperature-prediction models and predict the displacement of five axes at the temperature points. First, an examination is conducted to determine possible variances in the time-series data, based on the data obtained for changes in time, speed, torque, and temperature at various locations of the machine tool. Using the viable machine-learning models identified, the study then examines various cutting settings, temperature points, and machine speeds to forecast future five-axis displacement. Second, to verify the precision of the models created in the initial phase, other time-series models are examined and trained in the subsequent phase, and their effectiveness is compared to that of the models from the first phase. This work included training seven models: WNN, LSTNet, TPA-LSTM, XGBoost, BiLSTM, CNN, and GA-LSTM. The study found that the GA-LSTM model outperforms the next-best LSTM, GRU, and XGBoost models, with an average precision greater than 90%. Based on the analysis of training time and model precision, the study concluded that a system using LSTM, GRU, and XGBoost should be designed and applied for thermal compensation on edge devices such as the Raspberry Pi.
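
The GA-LSTM combination suggests a genetic algorithm searching over LSTM hyperparameters. The toy sketch below shows that search skeleton with a placeholder fitness surface; in the paper's setting, fitness would presumably be the validation accuracy of an LSTM trained with the candidate hyperparameters, and the genome fields used here (units, window length, learning-rate exponent) are our assumptions.

```python
import random

def fitness(genome):
    # Placeholder surface peaking at (64 units, window 20, lr 1e-3);
    # a real run would train an LSTM and return validation accuracy.
    units, window, lr_exp = genome
    return -((units - 64) ** 2 + (window - 20) ** 2 + (lr_exp + 3) ** 2)

def mutate(genome):
    g = list(genome)
    i = random.randrange(len(g))
    g[i] += random.choice([-1, 1]) * (8 if i == 0 else 1)
    return tuple(g)

def ga_search(pop_size=20, generations=30):
    pop = [(random.randrange(16, 129), random.randrange(5, 41),
            random.randrange(-5, -1)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]                 # keep the best half
        pop = elite + [mutate(random.choice(elite))  # refill by mutation
                       for _ in range(pop_size - len(elite))]
    return max(pop, key=fitness)

print(ga_search())  # hyperparameters near (64, 20, -3) on the toy surface
```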

5.
Sensors (Basel) ; 24(8)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38676197

ABSTRACT

Federated learning (FL) in mobile edge computing has emerged as a promising machine-learning paradigm in the Internet of Things, enabling distributed training without exposing private data. It allows multiple mobile devices (MDs) to collaboratively create a global model. FL not only addresses the issue of private data exposure but also alleviates the burden on the centralized server that is common in conventional centralized learning. However, a critical issue in FL is the computing burden that local training imposes on MDs, which often have limited computing capabilities; this makes it hard for MDs to actively contribute to the training process. To tackle this problem, this paper proposes an adaptive dataset management (ADM) scheme that aims to reduce the burden of local training on MDs. Through an empirical study of the influence of dataset size on accuracy improvement over communication rounds, we confirm that dataset size has a diminishing impact on accuracy gain. Based on this finding, we introduce a discount factor that represents this diminishing impact of dataset size on accuracy gain over communication rounds. We then present a theoretical framework for the ADM problem, which determines how much the dataset should be reduced across classes while considering both the proposed discount factor and the Kullback-Leibler divergence (KLD). The ADM problem is a non-convex optimization problem; to solve it, we propose a greedy heuristic algorithm that finds a suboptimal solution with low complexity. Simulation results demonstrate that the proposed scheme effectively alleviates the training burden on MDs while maintaining acceptable training accuracy.
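
A sketch of the greedy heuristic as we read it: repeatedly drop one sample from whichever class keeps the local label distribution closest (in KLD) to a reference distribution, until a size budget is met. The interface and the reference distribution are assumptions; the paper's formulation also folds in the round-dependent discount factor, which is omitted here.

```python
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence between two count/probability vectors."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def greedy_trim(class_counts, target_total, reference_dist):
    """Greedily remove samples class-by-class, each step choosing the
    removal that minimizes KLD to the reference label distribution."""
    counts = np.array(class_counts, float)
    while counts.sum() > target_total:
        best_c, best_d = None, np.inf
        for c in range(len(counts)):
            if counts[c] <= 1:
                continue  # keep at least one sample per class
            trial = counts.copy()
            trial[c] -= 1
            d = kld(trial, reference_dist)
            if d < best_d:
                best_c, best_d = c, d
        if best_c is None:
            break  # cannot shrink further
        counts[best_c] -= 1
    return counts.astype(int)

print(greedy_trim([500, 300, 50], target_total=600, reference_dist=[1, 1, 1]))
```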

6.
Sensors (Basel) ; 24(10)2024 May 08.
Article in English | MEDLINE | ID: mdl-38793843

ABSTRACT

Edge computing provides higher computational power and lower transmission latency by offloading tasks to nearby edge nodes with available computational resources, meeting the requirements of time-sensitive and computationally complex tasks. Resource allocation schemes are essential to this process. To allocate resources effectively, it is necessary to attach metadata to a task indicating what kind of resources are needed and how many computational resources are required. However, these metadata are sensitive and can be exposed to eavesdroppers, which can lead to privacy breaches. In addition, edge nodes are vulnerable to corruption because of their limited cybersecurity defenses: attackers can easily obtain end-device privacy through unprotected metadata or corrupted edge nodes. To address this problem, we propose a metadata-private resource allocation scheme that uses searchable encryption to protect metadata privacy and zero-knowledge proofs to resist semi-malicious edge nodes. We formally prove that the proposed scheme satisfies the required security notions and experimentally demonstrate its effectiveness.

7.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339545

ABSTRACT

Myocardial Infarction (MI), commonly known as heart attack, is a cardiac condition characterized by damage to a portion of the heart, specifically the myocardium, due to disrupted blood flow. Given its recurring and often asymptomatic nature, there is a need for continuous monitoring using wearable devices. This paper proposes a single-microcontroller system designed for the automatic detection of MI based on the Edge Computing paradigm. Two solutions for MI detection are evaluated, based on Machine Learning (ML) and Deep Learning (DL) techniques. The developed algorithms follow two different approaches currently available in the literature and are optimized for deployment on low-resource hardware. A feasibility assessment of their implementation on a single 32-bit microcontroller with an ARM Cortex-M4 core was carried out, and a comparison in terms of accuracy, inference time, and memory usage is detailed. The ML technique involves significant data processing for feature extraction, coupled with a simpler Neural Network (NN). The second method, based on DL, employs spectrogram analysis for feature extraction and a Convolutional Neural Network (CNN), with a longer inference time and higher memory utilization. Both methods run on the same low-power hardware, reaching accuracies of 89.40% and 94.76%, respectively. The final prototype is an energy-efficient system capable of real-time detection of MI without the need to connect to remote servers or the cloud. All processing is performed at the edge, enabling NN inference on the same microcontroller.
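
For the DL branch, the abstract names spectrogram analysis as the feature-extraction step. A minimal sketch of that step is below; the sampling rate and window parameters are illustrative assumptions, not the paper's values, and the downstream CNN is omitted.

```python
import numpy as np
from scipy.signal import spectrogram

def ecg_to_logspectrogram(ecg_window, fs=360, nperseg=128, noverlap=64):
    """Turn a 1-D ECG segment into a log-magnitude time-frequency image
    suitable as CNN input (shape: freq_bins x time_bins)."""
    _, _, Sxx = spectrogram(ecg_window, fs=fs,
                            nperseg=nperseg, noverlap=noverlap)
    return np.log1p(Sxx)
```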


Assuntos
Cardiopatias , Infarto do Miocárdio , Humanos , Infarto do Miocárdio/diagnóstico , Coração , Miocárdio , Algoritmos
8.
Sensors (Basel) ; 24(13)2024 Jul 02.
Article in English | MEDLINE | ID: mdl-39001087

ABSTRACT

The growing importance of edge and fog computing in the modern IT infrastructure is driven by the rise of decentralized applications. However, resource allocation within these frameworks is challenging due to varying device capabilities and dynamic network conditions. Conventional approaches often result in poor resource use and slowed advancements. This study presents a novel strategy for enhancing resource allocation in edge and fog computing by integrating machine learning with the blockchain for reliable trust management. Our proposed framework, called CyberGuard, leverages the blockchain's inherent immutability and decentralization to establish a trustworthy and transparent network for monitoring and verifying edge and fog computing transactions. CyberGuard combines the Trust2Vec model with conventional machine-learning models like SVM, KNN, and random forests, creating a robust mechanism for assessing trust and security risks. Through detailed optimization and case studies, CyberGuard demonstrates significant improvements in resource allocation efficiency and overall system performance in real-world scenarios. Our results highlight CyberGuard's effectiveness, evidenced by a remarkable accuracy, precision, recall, and F1-score of 98.18%, showcasing the transformative potential of our comprehensive approach in edge and fog computing environments.
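
One plausible reading of the conventional-ML stage is a voting ensemble over the SVM, KNN, and random-forest classifiers named in the abstract; the sketch below shows that combination with scikit-learn. The feature representation (e.g., Trust2Vec embeddings of transactions) and all hyperparameters are assumptions.

```python
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

def build_trust_classifier():
    """Soft-voting ensemble over the three model families named in the
    abstract, producing a trusted/untrusted label per transaction."""
    return VotingClassifier(
        estimators=[("svm", SVC(probability=True)),
                    ("knn", KNeighborsClassifier()),
                    ("rf", RandomForestClassifier())],
        voting="soft",
    )

# Usage: clf = build_trust_classifier(); clf.fit(X_train, y_train)
```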

9.
Sensors (Basel) ; 24(13)2024 Jul 04.
Article in English | MEDLINE | ID: mdl-39001116

ABSTRACT

This study investigates the dynamic deployment of unmanned aerial vehicles (UAVs) using edge computing in a forest-fire scenario. We consider the dynamically changing characteristics of forest fires and the corresponding varying resource requirements. Based on this, the paper models a two-timescale UAV dynamic deployment scheme that accounts for dynamic changes in both the number and the positions of UAVs. On the slow timescale, we use a gated recurrent unit (GRU) to predict the number of future users and determine the number of UAVs from the resource requirements; UAVs with low energy are replaced accordingly. On the fast timescale, a deep-reinforcement-learning-based UAV position deployment algorithm adjusts UAV positions in real time to meet the ground devices' computational demands, enabling low-latency processing of computational tasks. Simulation results demonstrate that the proposed scheme achieves better prediction accuracy and that the number and positions of UAVs adapt to changes in resource demand while reducing task execution delays.
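
A minimal sketch of the slow-timescale predictor, assuming PyTorch and a univariate history of user counts; the layer sizes are illustrative. Mapping the predicted count to a UAV count via the per-user resource requirement is shown only as a comment.

```python
import torch
import torch.nn as nn

class UserCountPredictor(nn.Module):
    """GRU regressor: predict the next user count from a history window."""
    def __init__(self, hidden_size=32):
        super().__init__()
        self.gru = nn.GRU(input_size=1, hidden_size=hidden_size,
                          batch_first=True)
        self.head = nn.Linear(hidden_size, 1)

    def forward(self, x):             # x: (batch, seq_len, 1)
        out, _ = self.gru(x)
        return self.head(out[:, -1])  # (batch, 1) predicted user count

# n_uavs could then be ceil(pred_users * demand_per_user / uav_capacity).
```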

10.
Sensors (Basel) ; 24(15)2024 Aug 05.
Article in English | MEDLINE | ID: mdl-39124124

ABSTRACT

A complete low-power, low-cost, and wireless solution for bridge structural health monitoring is presented. The monitoring nodes feature a modular hardware design with low power consumption, built around a control and resource management board called CoreBoard and a dedicated sensing board called SensorBoard. The firmware is designed as a set of parallelized FreeRTOS tasks that manage the hardware resources and implement the Random Decrement Technique to minimize the amount of data transmitted securely over the NB-IoT network. The solution is validated through the characterization of its energy consumption, which guarantees more than 10 years of autonomy with one 8-minute monitoring session per day, and through two deployments: a pilot laboratory structure and the Eduardo Torroja bridge in Posadas (Córdoba, Spain). The results are compared with two calibrated commercial systems, yielding an error lower than 1.72% in modal analysis frequencies. The architecture and the results obtained position the presented design as a new solution in the state of the art and, thanks to its autonomy, low cost, and the graphical device-management interface presented, allow its deployment and integration in the current IoT paradigm.
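
The Random Decrement Technique itself is standard enough to sketch: average fixed-length segments of the vibration record that start at each crossing of a trigger level, so that only the short averaged signature (an estimate of the free-decay response) needs to be transmitted. The level-crossing trigger condition below is one common choice; the node's actual trigger and segment length are not stated in the abstract.

```python
import numpy as np

def random_decrement(signal, trigger_level, seg_len):
    """Average all seg_len-long segments starting at upward crossings of
    trigger_level; the mean approximates the free-decay response and is
    much smaller than the raw record, cutting NB-IoT payload size."""
    x = np.asarray(signal, float)
    starts = np.where((x[:-1] < trigger_level) & (x[1:] >= trigger_level))[0]
    segments = [x[s:s + seg_len] for s in starts if s + seg_len <= len(x)]
    return np.mean(segments, axis=0) if segments else None
```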

11.
Sensors (Basel) ; 24(7)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38610489

ABSTRACT

In the mobile edge computing (MEC) environment, edge caching can provide timely data-response services for intelligent scenarios. However, given the limited storage capacity of edge nodes and the possibility of malicious node behavior, selecting which contents to cache and realizing decentralized, secure data caching remain challenging. This paper proposes a blockchain-based decentralized and proactive caching strategy for MEC environments to address this problem. The novelty lies in combining a blockchain, which provides a secure and reliable service environment, with a proactive caching strategy based on node utility, for which a corresponding optimization problem is formulated. The optimal caching strategy is obtained via linear relaxation and the interior-point method. The strategy addresses the trade-off between cache space and node utility in a content caching system, as well as the trade-off between the blockchain's consensus-process delay and content-caching latency; an offline consensus-authentication method is adopted to reduce the influence of consensus delay on content caching. The key finding is that the proposed algorithm reduces latency and ensures secure data caching in an IoT environment. Simulation experiments show that the proposed algorithm improves the cache hit rate, the average content-response latency, and the average system utility by up to 49.32%, 43.11%, and 34.85%, respectively, compared to a random content-caching algorithm, and by up to 9.67%, 8.11%, and 5.95%, respectively, compared to a greedy content-caching algorithm.
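
The linear-relaxation step reads like a relaxed 0/1 content-placement problem: maximize total node utility subject to the cache-capacity budget, with placement variables relaxed to [0, 1]. A sketch with SciPy is below; the utility/size model and the naive rounding (which a real implementation would follow with a feasibility repair) are our assumptions, not the paper's exact formulation.

```python
import numpy as np
from scipy.optimize import linprog

def relaxed_cache_selection(utility, size, capacity):
    """LP relaxation of content placement: max sum(utility * x)
    s.t. sum(size * x) <= capacity, 0 <= x <= 1; then round."""
    n = len(utility)
    res = linprog(c=-np.asarray(utility, float),   # linprog minimizes
                  A_ub=[list(size)], b_ub=[capacity],
                  bounds=[(0, 1)] * n, method="highs")
    x_frac = res.x
    return (x_frac > 0.5).astype(int)  # naive rounding; may need repair

print(relaxed_cache_selection(utility=[5, 3, 4], size=[2, 1, 3], capacity=4))
```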

12.
Sensors (Basel) ; 24(6)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38544101

ABSTRACT

Recently, the integration of unmanned aerial vehicles (UAVs) with edge computing has emerged as a promising paradigm for providing computational support to Internet of Things (IoT) applications in remote, disaster-stricken, and maritime areas. In UAV-aided edge computing, the offloading decision plays a central role in optimizing overall system performance, and the UAV trajectory directly affects that decision. In general, ground IoT devices offload computation-intensive tasks to UAV-aided edge servers, and the UAVs plan their trajectories based on the task generation rate. Researchers are therefore attempting to optimize the offloading decision jointly with the trajectory, and numerous studies examine the impact of the trajectory on offloading decisions. In this survey, we review existing trajectory-aware offloading techniques, focusing on their design concepts, operational features, and distinguishing characteristics, and compare them in terms of design principles and operational characteristics. Open issues and research challenges are discussed, along with future directions.

13.
Sensors (Basel) ; 24(18)2024 Sep 23.
Article in English | MEDLINE | ID: mdl-39338897

ABSTRACT

With the development of the Internet of Things (IoT) and edge computing, more and more devices, such as sensor nodes and intelligent automated guided vehicles (AGVs), can serve as edge devices that provide Location-Based Services (LBS) through the IoT. As the number of applications increases, an abundance of sensitive information flows through the communication process, pushing the focus of privacy protection towards the communication process and the edge devices themselves. The challenge is that most traditional location privacy protection algorithms are not suited to the IoT with edge computing, as they focus primarily on the security of remote servers. To enhance location privacy protection, this paper proposes a novel K-anonymity algorithm based on clustering, which incorporates a scheme that flexibly combines real and virtual locations according to application requirements. Simulation results demonstrate that the proposed algorithm significantly improves location privacy protection for the IoT with edge computing. Compared to traditional K-anonymity algorithms, it further strengthens location privacy by expanding the potential region in which the real node may be located, thereby limiting the effectiveness of "narrow-region" attacks.
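
A stripped-down sketch of the real-plus-virtual cloaking idea: publish the real location hidden among k-1 virtual ones scattered over an expanded region, so an observer cannot narrow the real node down. The uniform placement below is an assumption for illustration; the paper's algorithm chooses locations via clustering.

```python
import random

def k_anonymity_set(real_loc, k, radius, rng=None):
    """Return a shuffled set of k locations (1 real + k-1 virtual)
    spread over a box of half-width `radius` around the real node."""
    rng = rng or random.Random()
    lat, lon = real_loc
    virtual = [(lat + rng.uniform(-radius, radius),
                lon + rng.uniform(-radius, radius))
               for _ in range(k - 1)]
    cloak = virtual + [real_loc]
    rng.shuffle(cloak)  # the observer cannot tell which entry is real
    return cloak

print(k_anonymity_set((37.57, 126.98), k=5, radius=0.01))
```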

14.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676106

ABSTRACT

In this paper, we consider an integrated sensing, communication, and computation (ISCC) system to alleviate spectrum congestion and the computation burden. Specifically, while serving communication users, a base station (BS) actively senses targets and collaborates seamlessly with the edge server to process the acquired sensing data concurrently for efficient target recognition. A significant challenge in edge computing systems arises from the inherent uncertainty of computation, stemming mainly from the unpredictable complexity of tasks. With this consideration, we address computation uncertainty by formulating a robust communication and computing resource allocation problem in ISCC systems. The primary goal is to minimize total energy consumption while adhering to perception and delay constraints, achieved by optimizing transmit beamforming, the offloading ratio, and computing resource allocation, thereby managing the trade-off between local execution and edge computing. To solve this problem, we model it as a Markov decision process (MDP) and apply the proximal policy optimization (PPO) algorithm, establishing an adaptive learning strategy. The proposed algorithm stands out for its rapid training speed, ensuring compliance with the latency requirements for perception and computation. Simulation results highlight its robustness and effectiveness within ISCC systems compared to baseline approaches.

15.
Sensors (Basel) ; 24(10)2024 May 13.
Article in English | MEDLINE | ID: mdl-38793952

ABSTRACT

The convergence of edge computing systems with Field-Programmable Gate Array (FPGA) technology has shown considerable promise in enhancing real-time applications across various domains. This paper presents an innovative edge computing system design specifically tailored for pavement defect detection within the Advanced Driver-Assistance Systems (ADASs) domain. The system seamlessly integrates the AMD Xilinx AI platform into a customized circuit configuration, capitalizing on its capabilities. Utilizing cameras as input sensors to capture road scenes, the system employs a Deep Learning Processing Unit (DPU) to execute the YOLOv3 model, enabling the identification of three distinct types of pavement defects with high accuracy and efficiency. Following defect detection, the system efficiently transmits detailed information about the type and location of detected defects via the Controller Area Network (CAN) interface. This integration of FPGA-based edge computing not only enhances the speed and accuracy of defect detection, but also facilitates real-time communication between the vehicle's onboard controller and external systems. Moreover, the successful integration of the proposed system transforms ADAS into a sophisticated edge computing device, empowering the vehicle's onboard controller to make informed decisions in real time. These decisions are aimed at enhancing the overall driving experience by improving safety and performance metrics. The synergy between edge computing and FPGA technology not only advances ADAS capabilities, but also paves the way for future innovations in automotive safety and assistance systems.

16.
Sensors (Basel) ; 24(9)2024 Apr 27.
Article in English | MEDLINE | ID: mdl-38732905

ABSTRACT

High-pressure pipelines are critical for transporting hazardous materials over long distances, but they face threats from third-party interference activities. Preventive measures are implemented, but interference accidents can still occur, making high-quality detection strategies vital. This paper proposes an end-to-end Artificial Intelligence of Things (AIoT) solution to detect potential interference threats in real time. The solution involves a smart visual sensor capable of processing images using state-of-the-art computer-vision algorithms and transmitting alerts to pipeline operators in real time. The system's core is an object-detection model (e.g., You Only Look Once version 4 (YOLOv4) and DETR with Improved deNoising anchOr boxes (DINO)) trained on a custom Pipeline Visual Threat Assessment (Pipe-VisTA) dataset. Among the trained models, DINO achieved the best Mean Average Precision (mAP) of 71.2% on the unseen test dataset. However, for deployment on a computationally limited edge computer (the NVIDIA Jetson Nano), the simpler, TensorRT-optimized YOLOv4 model was used, achieving a mAP of 61.8% on the test dataset. The developed AIoT device captures an image with its camera, processes it on the edge using the trained YOLOv4 model to detect potential threats, transmits threat alerts to a Fleet Portal via LoRaWAN, and hosts the alerts on a dashboard via a satellite network. The device was fully field-tested to ensure its functionality prior to deployment for the SEA Gas use case. The AIoT solution has been deployed along the 10 km Murray Bridge section of the SEA Gas pipeline; in total, 48 AIoT devices and three Fleet Portals are installed to ensure line-of-sight communication between the devices and portals.

17.
Sensors (Basel) ; 24(9)2024 May 01.
Article in English | MEDLINE | ID: mdl-38733003

ABSTRACT

In the context of the rapid development of the Internet of Vehicles, virtual reality, autonomous driving, and the industrial Internet, terminal devices in the network are growing explosively. As a result, more and more information is generated at the network edge, dramatically increasing data throughput in the mobile communication network. Mobile edge caching, a key technology of the fifth-generation mobile network, caches popular data on edge servers deployed at the network edge, avoiding the data-transmission delay of the backhaul link and the occurrence of network congestion. As networks grow, however, distributing hot data from cloud servers to edge servers incurs huge energy consumption. To support the green and sustainable development of the communication industry and reduce the energy consumed in distributing data to be cached at edge servers, we make the first attempt to propose and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC). We first model and formulate the problem as a constrained optimization problem and prove its NP-hardness. We then design a greedy algorithm with O(n²) computational complexity to solve the problem approximately. Experimental results show that, compared with having each edge server request data directly from the cloud server, the strategy obtained by the algorithm significantly reduces the energy consumption of data distribution.
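
The greedy idea, as far as the abstract describes it, can be sketched as follows: starting from the cloud, repeatedly serve the not-yet-cached edge server reachable over the cheapest link from any node that already holds the data (a Prim-style scan). The cost-matrix interface is an assumption, and the paper's actual O(n²) algorithm may differ in detail.

```python
def greedy_distribution(cost, source=0):
    """cost[i][j] = energy to ship the cached data from node i to j;
    node `source` (the cloud) starts with the data. Returns total energy."""
    n = len(cost)
    have = {source}
    total = 0.0
    while len(have) < n:
        i, j, c = min(((i, j, cost[i][j])
                       for i in have for j in range(n) if j not in have),
                      key=lambda t: t[2])
        have.add(j)   # node j now caches the data and can forward it
        total += c
    return total

# Example: 0 = cloud, 1-2 = edge servers; relaying via node 1 is cheaper.
print(greedy_distribution([[0, 2, 9], [2, 0, 3], [9, 3, 0]]))  # -> 5.0
```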

18.
Sensors (Basel) ; 24(7)2024 Mar 27.
Article in English | MEDLINE | ID: mdl-38610369

ABSTRACT

Video surveillance systems are integral to bolstering safety and security across multiple settings. With the advent of deep learning (DL), a specialization within machine learning (ML), these systems have been significantly augmented to facilitate DL-based video surveillance services with notable precision. Nevertheless, DL-based video surveillance services, which require object and motion tracking (e.g., to identify unusual object behaviors), can demand a significant portion of computational and memory resources, including GPU compute for model inference and GPU memory for model loading. To tackle these computational demands, this study introduces a novel video surveillance management system designed to optimize operational efficiency. At its core, the system is built on a two-tiered edge computing architecture (client and server communicating through socket transmission). The primary edge (client side) handles initial processing tasks, such as object detection, and is connected via a Universal Serial Bus (USB) cable to the Closed-Circuit Television (CCTV) camera, directly at the source of the video feed; this immediate processing reduces data-transfer latency by detecting objects in real time. Meanwhile, the secondary edge (server side) hosts a dynamically controlled threshold module that releases DL models to reduce needless GPU usage. This module is a novel addition that dynamically adjusts the threshold time after which an idle DL model is released; by optimizing this threshold, the system manages GPU usage effectively and allocates resources efficiently. Moreover, we utilize federated learning (FL) to streamline the training of a Long Short-Term Memory (LSTM) network for predicting imminent object appearances, amalgamating data from diverse camera sources while ensuring data privacy and optimized resource allocation. Furthermore, in contrast to the static threshold values or moving-average techniques used in previous approaches, we employ a Deep Q-Network (DQN) to manage threshold values dynamically. This approach efficiently balances the trade-off between GPU memory conservation and DL-model reloading latency, using LSTM-derived predictions as inputs to determine the optimal timing for releasing the DL model. The results highlight the potential of our approach to significantly improve the efficiency and effective usage of computational resources in video surveillance systems, opening the door to enhanced security in various domains.
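
The DQN's trade-off can be made concrete with a toy reward: holding a model in GPU memory costs a little every step, while a request arriving after the model has been released pays a large reload-latency penalty. The weights below are illustrative assumptions, not the paper's values.

```python
def threshold_reward(model_loaded, request_arrived,
                     mem_cost=1.0, reload_penalty=5.0):
    """Per-step reward for the threshold controller: penalize GPU memory
    held by a loaded model and, more heavily, the reload latency incurred
    when a request finds the model already released."""
    reward = 0.0
    if model_loaded:
        reward -= mem_cost          # GPU memory held this step
    if request_arrived and not model_loaded:
        reward -= reload_penalty    # model must be reloaded; latency hit
    return reward
```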

19.
Sensors (Basel) ; 24(7)2024 Mar 29.
Article in English | MEDLINE | ID: mdl-38610415

ABSTRACT

In Vehicular Edge Computing Network (VECN) scenarios, vehicle mobility causes uncertainty in channel state information, which makes it difficult to guarantee Quality of Service (QoS) during computation offloading and the resource allocation of a Vehicular Edge Computing Server (VECS). To address this problem, we propose a multi-user computation offloading and resource allocation optimization model together with an algorithm based on the Deep Deterministic Policy Gradient (DDPG). First, the problem is modeled as a Mixed Integer Nonlinear Programming (MINLP) problem with the objective of minimizing total system delay. Then, given the large state space and the coexistence of discrete and continuous variables in the action space, a reinforcement learning algorithm based on DDPG is proposed. Finally, the proposed method is compared with three benchmark schemes. Compared with the baselines, the proposed scheme effectively selects the task offloading mode, allocates VECS computing resources reasonably, ensures the QoS of task execution, and exhibits stability and scalability. Simulation results show that the total completion time of the proposed scheme is 24-29% lower than that of existing state-of-the-art techniques.

20.
Sensors (Basel) ; 24(11)2024 May 25.
Article in English | MEDLINE | ID: mdl-38894200

ABSTRACT

Chicken behavior recognition is crucial for several reasons, including promoting animal welfare, enabling the early detection of health issues, optimizing farm management practices, and contributing to more sustainable and ethical poultry farming. In this paper, we introduce a technique for recognizing chicken behavior on edge computing devices based on video-sensing mosaicing. Our method combines video-sensing mosaicing with deep learning to accurately identify specific chicken behaviors from video, attaining 79.61% accuracy with MobileNetV2 on chickens exhibiting three types of behavior. These findings underscore the efficacy and promise of our approach to chicken behavior recognition on edge computing devices, making it adaptable for diverse applications. The ongoing exploration and identification of further behavioral patterns will contribute to a more comprehensive understanding of chicken behavior, enhancing the scope and accuracy of behavior analysis in diverse contexts.
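
A plausible model setup for the classification stage, assuming PyTorch/torchvision: MobileNetV2 with its final layer swapped for the three behavior classes. Whether the authors fine-tuned from pretrained weights, and their input pipeline for the mosaiced frames, are not stated in the abstract.

```python
import torch.nn as nn
from torchvision import models

def build_behavior_classifier(num_behaviors=3):
    """MobileNetV2 backbone with a 3-way head for the behavior classes."""
    net = models.mobilenet_v2(weights=None)  # or start from pretrained weights
    net.classifier[-1] = nn.Linear(net.last_channel, num_behaviors)
    return net
```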


Subjects
Animal Husbandry, Animal Behavior, Chickens, Computational Methodologies, Animal Husbandry/instrumentation, Animal Husbandry/methods, Video Recording, Animals, Deep Learning