Results 1-20 of 109
1.
Sensors (Basel) ; 24(7)2024 Apr 03.
Article in English | MEDLINE | ID: mdl-38610489

ABSTRACT

In the mobile edge computing (MEC) environment, edge caching can provide timely data-response services for intelligent scenarios. However, due to the limited storage capacity of edge nodes and the possibility of malicious node behavior, selecting which contents to cache and realizing decentralized, secure data caching remain challenging. In this paper, a blockchain-based decentralized and proactive caching strategy is proposed for an MEC environment to address this problem. The novelty is that a blockchain is adopted in the MEC environment together with a proactive caching strategy based on node utility, and the corresponding optimization problem is formulated. The blockchain is used to build a secure and reliable service environment. Methodologically, the optimal caching strategy is obtained via linear relaxation and the interior-point method. In a content caching system there is a trade-off between cache space and node utility, which the proposed caching strategy addresses; there is also a trade-off between the blockchain consensus delay and the content caching latency, so an offline consensus authentication method is adopted to reduce the influence of consensus delay on content caching. The key finding is that the proposed algorithm reduces latency and ensures secure data caching in an IoT environment. Finally, simulation experiments show that the proposed algorithm achieves up to 49.32%, 43.11%, and 34.85% improvements in cache hit rate, average content response latency, and average system utility, respectively, compared to the random content caching algorithm, and up to 9.67%, 8.11%, and 5.95% improvements, respectively, compared to the greedy content caching algorithm.
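The linear-relaxation step can be illustrated with a toy sketch. If content placement is modeled as a 0/1 knapsack (one utility and one size per content, a single capacity budget — assumptions of this sketch, not the paper's exact model), its LP relaxation is a fractional knapsack that a greedy pass by utility density solves exactly, so no interior-point solver is needed in this simplified form:

```python
def relaxed_cache_placement(utilities, sizes, capacity):
    """Solve the LP relaxation of 0/1 content placement:
    maximize sum(u_i * x_i) s.t. sum(s_i * x_i) <= capacity, 0 <= x_i <= 1.
    The relaxation is a fractional knapsack, solved greedily by utility density."""
    order = sorted(range(len(utilities)),
                   key=lambda i: utilities[i] / sizes[i], reverse=True)
    x = [0.0] * len(utilities)
    remaining = capacity
    for i in order:
        if remaining <= 0:
            break
        take = min(1.0, remaining / sizes[i])  # cache fully, or fill what is left
        x[i] = take
        remaining -= take * sizes[i]
    return x

# Example: three contents competing for a cache budget of 10
x = relaxed_cache_placement([8.0, 6.0, 4.0], [5.0, 5.0, 4.0], 10.0)
```

In the relaxed optimum at most one content is cached fractionally; rounding that entry down yields a feasible integral placement.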

2.
Sensors (Basel) ; 24(8)2024 Apr 18.
Article in English | MEDLINE | ID: mdl-38676197

ABSTRACT

Federated learning (FL) in mobile edge computing has emerged as a promising machine-learning paradigm in the Internet of Things, enabling distributed training without exposing private data. It allows multiple mobile devices (MDs) to collaboratively create a global model. FL not only addresses the issue of private data exposure but also alleviates the burden on the centralized server that is common in conventional centralized learning. However, a critical issue in FL is the computational burden that local training imposes on MDs, which often have limited computing capabilities; this makes it difficult for MDs to contribute actively to training. To tackle this problem, this paper proposes an adaptive dataset management (ADM) scheme that aims to reduce the burden of local training on MDs. Through an empirical study of the influence of dataset size on accuracy improvement over communication rounds, we confirm that dataset size has a diminishing impact on accuracy gain. Based on this finding, we introduce a discount factor that represents this diminishing impact over communication rounds. A theoretical framework is presented for the ADM problem, which determines how much the dataset should be reduced per class while considering both the proposed discount factor and the Kullback-Leibler divergence (KLD). Since the ADM problem is a non-convex optimization problem, we propose a greedy heuristic algorithm that finds a suboptimal solution with low complexity. Simulation results demonstrate that our proposed scheme effectively alleviates the training burden on MDs while maintaining acceptable training accuracy.
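A minimal sketch of the idea, with assumed names and a hypothetical discount form (`gamma ** t`), since the paper's exact ADM formulation is not reproduced here: the per-round discount shrinks the dataset budget, and a greedy pass drops samples class by class while keeping the class distribution close, in KLD, to a reference distribution:

```python
import math

def kld(p, q):
    """Kullback-Leibler divergence D(p || q) over discrete class distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def greedy_reduce(counts, target_total, reference):
    """Greedily drop samples one at a time, each step choosing the class whose
    removal keeps the class distribution closest (in KLD) to `reference`."""
    counts = list(counts)
    while sum(counts) > target_total:
        best_c, best_d = None, float("inf")
        for c in range(len(counts)):
            if counts[c] == 0:
                continue
            trial = counts[:]
            trial[c] -= 1
            total = sum(trial)
            d = kld([t / total for t in trial], reference)
            if d < best_d:
                best_c, best_d = c, d
        counts[best_c] -= 1
    return counts

# Round-t discount: later rounds gain less accuracy per sample, so the
# dataset budget can shrink as gamma**t decays (an assumed functional form).
gamma, t = 0.9, 5
target = int(100 * gamma ** t)          # shrink a 100-sample budget
reduced = greedy_reduce([50, 30, 20], target, [0.5, 0.3, 0.2])
```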

3.
Sensors (Basel) ; 24(16)2024 Aug 08.
Article in English | MEDLINE | ID: mdl-39204838

ABSTRACT

Device-to-device (D2D) is a pivotal technology in the next generation of communication, allowing for direct task offloading between mobile devices (MDs) to improve the efficient utilization of idle resources. This paper proposes a novel algorithm for dynamic task offloading between the active MDs and the idle MDs in a D2D-MEC (mobile edge computing) system by deploying multi-agent deep reinforcement learning (DRL) to minimize the long-term average delay of delay-sensitive tasks under deadline constraints. Our core innovation is a dynamic partitioning scheme for idle and active devices in the D2D-MEC system, accounting for stochastic task arrivals and multi-time-slot task execution, which has been insufficiently explored in the existing literature. We adopt a queue-based system to formulate a dynamic task offloading optimization problem. To address the challenges of large action space and the coupling of actions across time slots, we model the problem as a Markov decision process (MDP) and perform multi-agent DRL through multi-agent proximal policy optimization (MAPPO). We employ a centralized training with decentralized execution (CTDE) framework to enable each MD to make offloading decisions solely based on its local system state. Extensive simulations demonstrate the efficiency and fast convergence of our algorithm. In comparison to the existing sub-optimal results deploying single-agent DRL, our algorithm reduces the average task completion delay by 11.0% and the ratio of dropped tasks by 17.0%. Our proposed algorithm is particularly pertinent to sensor networks, where mobile devices equipped with sensors generate a substantial volume of data that requires timely processing to ensure quality of experience (QoE) and meet the service-level agreements (SLAs) of delay-sensitive applications.

4.
Sensors (Basel) ; 24(14)2024 Jul 18.
Article in English | MEDLINE | ID: mdl-39066074

ABSTRACT

Edge servers frequently manage their own offline digital twin (DT) services, in addition to caching online digital twin services. However, current research often overlooks the impact of offline caching services on memory and computation resources, which can hinder the efficiency of online service task processing on edge servers. In this study, we concentrated on service caching and task offloading within a collaborative edge computing system by emphasizing the integrated quality of service (QoS) for both online and offline edge services. We considered the resource usage of both online and offline services, along with incoming online requests. To maximize the overall QoS utility, we established an optimization objective that rewards the throughput of online services while penalizing offline services that miss their soft deadlines. We formulated this as a utility maximization problem, which was proven to be NP-hard. To tackle this complexity, we reframed the optimization problem as a Markov decision process (MDP) and introduced a joint optimization algorithm for service caching and task offloading by leveraging the deep Q-network (DQN). Comprehensive experiments revealed that our algorithm enhanced the utility by at least 14.01% compared with the baseline algorithms.

5.
Sensors (Basel) ; 24(9)2024 May 01.
Article in English | MEDLINE | ID: mdl-38733003

ABSTRACT

In the context of the rapid development of the Internet of Vehicles, virtual reality, autonomous driving, and the industrial Internet, terminal devices in the network are growing explosively. As a result, more and more information is generated at the network edge, dramatically increasing data throughput in the mobile communication network. As a key technology of the fifth-generation mobile communication network, mobile edge caching, which caches popular data on edge servers deployed at the network edge, avoids the transmission delay of the backhaul link and the occurrence of network congestion. With the growing scale of the network, however, distributing hot data from cloud servers to edge servers consumes substantial energy. To support the green and sustainable development of the communication industry and reduce the energy consumed when distributing data to be cached on edge servers, we make the first attempt to propose and solve the problem of edge caching data distribution with minimum energy consumption (ECDDMEC) in this paper. First, we model and formulate the problem as a constrained optimization problem and prove its NP-hardness. We then design a greedy algorithm with O(n²) computational complexity to solve the problem approximately. Experimental results show that, compared with a strategy in which each edge server requests data directly from the cloud server, the strategy obtained by the algorithm significantly reduces the energy consumption of data distribution.
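As a hedged illustration of how such a greedy distribution can work (a sketch with assumed cost inputs, not the paper's ECDDMEC formulation): each step serves the edge server whose cheapest available source — the cloud, or a peer that already holds the data — costs the least energy. This is Prim's algorithm with the cloud as a virtual root and runs in O(n²):

```python
def min_energy_distribution(cloud_cost, edge_cost):
    """Greedy sketch of edge-cache data distribution: each step serves the
    server whose cheapest source (cloud, or an already-served edge server)
    costs the least energy. O(n^2), i.e. Prim's algorithm with the cloud
    as a virtual root node."""
    n = len(cloud_cost)
    served = [False] * n
    best = cloud_cost[:]          # cheapest known cost to serve each server
    total = 0.0
    for _ in range(n):
        u = min((i for i in range(n) if not served[i]), key=lambda i: best[i])
        served[u] = True
        total += best[u]
        for v in range(n):        # serving u may open cheaper peer-to-peer paths
            if not served[v]:
                best[v] = min(best[v], edge_cost[u][v])
    return total

# Example: 3 edge servers; fetching from a peer is cheaper than from the cloud
cloud = [10.0, 12.0, 11.0]
peer = [[0, 2.0, 9.0], [2.0, 0, 3.0], [9.0, 3.0, 0]]
energy = min_energy_distribution(cloud, peer)
```

Here one server pays the cloud price and the rest are served over cheap peer links, undercutting the all-cloud strategy.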

6.
Sensors (Basel) ; 24(8)2024 Apr 12.
Article in English | MEDLINE | ID: mdl-38676106

ABSTRACT

In this paper, we consider an integrated sensing, communication, and computation (ISCC) system to alleviate the spectrum congestion and computation burden problem. Specifically, while serving communication users, a base station (BS) actively engages in sensing targets and collaborates seamlessly with the edge server to concurrently process the acquired sensing data for efficient target recognition. A significant challenge in edge computing systems arises from the inherent uncertainty in computations, mainly stemming from the unpredictable complexity of tasks. With this consideration, we address the computation uncertainty by formulating a robust communication and computing resource allocation problem in ISCC systems. The primary goal of the system is to minimize total energy consumption while adhering to perception and delay constraints. This is achieved through the optimization of transmit beamforming, offloading ratio, and computing resource allocation, effectively managing the trade-offs between local execution and edge computing. To overcome this challenge, we employ a Markov decision process (MDP) in conjunction with the proximal policy optimization (PPO) algorithm, establishing an adaptive learning strategy. The proposed algorithm stands out for its rapid training speed, ensuring compliance with latency requirements for perception and computation in applications. Simulation results highlight its robustness and effectiveness within ISCC systems compared to baseline approaches.

7.
Sensors (Basel) ; 24(6)2024 Mar 07.
Article in English | MEDLINE | ID: mdl-38544005

ABSTRACT

With the development of the Internet of Things (IoT) technology, massive amounts of sensor data in applications such as fire monitoring need to be transmitted to edge servers for timely processing. However, there is an energy-hole phenomenon in transmitting data only through terrestrial multi-hop networks. In this study, we focus on the data collection task in an unmanned aerial vehicle (UAV)-assisted mobile edge computing (MEC) network, where a UAV is deployed as the mobile data collector for the ground sensor nodes (SNs) to ensure high information freshness. Meanwhile, the UAV is equipped with an edge server for data caching. We first establish a rigorous mathematical model in which the age of information (AoI) is used as a measure of information freshness, related to both the data collection time and the UAV's flight time. Then a mixed-integer non-convex optimization problem is formulated to minimize the peak AoI of the collected data. To solve the problem efficiently, we propose an iterative two-step algorithm named the AoI-minimized association and trajectory planning (AoI-MATP) algorithm. In each iteration, the optimal SN-collection point (CP) associations and CP locations for the parameter ε are first obtained by the affinity propagation clustering algorithm. The optimal UAV trajectory is found using an improved elite genetic algorithm. Simulation results show that based on the optimized ε, the AoI-MATP algorithm can achieve a balance between data collection time and flight time, reducing the peak AoI of the collected data.
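A toy sketch of the two-step structure (association, then trajectory), using nearest-CP association in place of affinity propagation and a nearest-neighbour tour in place of the elite genetic algorithm — both simplifications, with all names and the peak-age accounting assumed for illustration:

```python
import math

def plan_collection(sns, cps, uav_speed=10.0, collect_time=2.0):
    """Toy AoI-style planner: associate each sensor node (SN) with its
    nearest collection point (CP) -- a stand-in for the paper's affinity
    propagation step -- then order the CPs with a nearest-neighbour tour,
    a stand-in for the elite genetic algorithm. The peak age of the data
    collected first is approximated by the total tour time."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    assoc = [min(range(len(cps)), key=lambda j: dist(sn, cps[j])) for sn in sns]

    order, pos, left, elapsed = [], (0.0, 0.0), set(range(len(cps))), 0.0
    while left:
        j = min(left, key=lambda k: dist(pos, cps[k]))
        elapsed += dist(pos, cps[j]) / uav_speed + collect_time
        pos = cps[j]
        left.remove(j)
        order.append(j)
    return assoc, order, elapsed

sns = [(1, 1), (9, 1), (9, 9)]
cps = [(0, 0), (10, 0), (10, 10)]
assoc, order, peak_aoi = plan_collection(sns, cps)
```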

8.
Sensors (Basel) ; 24(6)2024 Mar 14.
Article in English | MEDLINE | ID: mdl-38544128

ABSTRACT

With the exponential growth of wireless devices and the demand for real-time processing, traditional server architectures face challenges in meeting ever-increasing computational requirements. This paper proposes a collaborative edge computing framework to offload and process tasks efficiently in such environments. By deploying a moving unmanned aerial vehicle (UAV) as a mobile edge computing (MEC) server, the proposed architecture aims to relieve the burden on roadside unit (RSU) servers. Specifically, we propose a two-layer edge intelligence scheme to allocate network computing resources. The first layer intelligently offloads and allocates tasks generated by wireless devices in the vehicular system, and the second layer uses a partially observable stochastic game (POSG), solved by duelling deep Q-learning, to allocate the computing resources of each processing node (PN) to different tasks. Meanwhile, we propose a weighted position optimization algorithm for the UAV's movement in the system to facilitate task offloading and task processing. Simulation results demonstrate the performance improvements achieved by the proposed scheme.

9.
Sensors (Basel) ; 24(9)2024 Apr 28.
Article in English | MEDLINE | ID: mdl-38732918

ABSTRACT

In this paper, we consider a low-latency Mobile Edge Computing (MEC) network in which multiple User Equipment (UE) devices wirelessly report to a decision-making edge server, with transmissions encoded using Finite Blocklength (FBL) codes to achieve low-latency transmission. We introduce the metric of Age upon Decision (AuD), which captures the timeliness of the information available at the moments when decisions are made. For the case in which dynamic task generation and random fading channels are considered, we provide a task AuD minimization design that jointly selects UEs and allocates blocklength. In particular, to solve the task AuD minimization problem, we transform the optimization problem into a Markov Decision Process problem and propose an Error-Probability-Controlled Action-Masked Proximal Policy Optimization (EMPPO) algorithm. Via simulation, we show that the proposed design achieves a lower AuD than baseline methods across various network conditions, especially in scenarios with significant channel Signal-to-Noise Ratio (SNR) differences and low average SNR, which shows the robustness of EMPPO and its potential for real-time applications.

10.
Sensors (Basel) ; 24(7)2024 Mar 24.
Article in English | MEDLINE | ID: mdl-38610282

ABSTRACT

With the ongoing advancement of the electric power Internet of Things (IoT), traditional power inspection methods face challenges such as low efficiency and high risk. Unmanned aerial vehicles (UAVs) have emerged as a more efficient solution for inspecting power facilities due to their high maneuverability, excellent line-of-sight communication capabilities, and strong adaptability. However, UAVs typically grapple with limited computational power and energy resources, which constrains their effectiveness in handling computationally intensive and latency-sensitive inspection tasks. In response, we propose a UAV task offloading strategy based on deep reinforcement learning (DRL), designed for power inspection scenarios consisting of mobile edge computing (MEC) servers and multiple UAVs. Firstly, we propose an innovative UAV-edge server collaborative computing architecture to fully exploit the mobility of UAVs and the high-performance computing capabilities of MEC servers. Secondly, we establish a computational model of energy consumption and task processing latency in the UAV power inspection system, clarifying the trade-offs involved in UAV offloading strategies. Finally, we formalize the task offloading problem as a multi-objective optimization problem, model it as a Markov Decision Process (MDP), and propose a task offloading algorithm based on the Deep Deterministic Policy Gradient (OTDDPG) to obtain the optimal task offloading strategy for UAVs. Simulation results demonstrate that this approach outperforms baseline methods, with significant improvements in task processing latency and energy consumption.

11.
Sensors (Basel) ; 24(6)2024 Mar 13.
Article in English | MEDLINE | ID: mdl-38544101

ABSTRACT

Recently, the integration of unmanned aerial vehicles (UAVs) with edge computing has emerged as a promising paradigm for providing computational support to Internet of Things (IoT) applications in remote, disaster-stricken, and maritime areas. In UAV-aided edge computing, the offloading decision plays a central role in optimizing overall system performance, and the UAV trajectory directly affects that decision. In general, ground IoT devices offload computation-intensive tasks to UAV-aided edge servers, and the UAVs plan their trajectories based on the task generation rate. Therefore, researchers are attempting to optimize the offloading decision jointly with the trajectory, and numerous studies are investigating the impact of the trajectory on offloading decisions. In this survey, we review existing trajectory-aware offloading decision techniques, focusing on design concepts, operational features, and outstanding characteristics, and compare them in terms of design principles and operational characteristics. Open issues and research challenges are discussed, along with future directions.

12.
Sensors (Basel) ; 24(3)2024 Feb 01.
Article in English | MEDLINE | ID: mdl-38339674

ABSTRACT

Wireless Sensor Networks (WSNs) have emerged as an efficient solution for numerous real-time applications, attributable to their compactness, cost-effectiveness, and ease of deployment. The rapid advancement of 5G technology and mobile edge computing (MEC) in recent years has catalyzed the transition towards large-scale deployment of WSN devices. However, the resulting data proliferation and the dynamics of communication environments introduce new challenges for WSN communication: (1) ensuring robust communication in adverse environments and (2) effectively alleviating bandwidth pressure from massive data transmission. In response to the aforementioned challenges, this paper proposes a semantic communication solution. Specifically, considering the limited computational and storage resources of WSN devices, we propose a flexible Attention-based Adaptive Coding (AAC) module. This module integrates window and channel attention mechanisms, dynamically adjusts semantic information in response to the current channel state, and facilitates adaptation of a single model across various Signal-to-Noise Ratio (SNR) environments. Furthermore, to validate the effectiveness of this approach, the paper introduces an end-to-end Joint Source Channel Coding (JSCC) scheme for image semantic communication, employing the AAC module. Experimental results demonstrate that the proposed scheme surpasses existing deep JSCC schemes across datasets of varying resolutions; furthermore, they validate the efficacy of the proposed AAC module, which is capable of dynamically adjusting critical information according to the current channel state. This enables the model to be trained over a range of SNRs and obtain better results.

13.
Sensors (Basel) ; 24(17)2024 Aug 28.
Article in English | MEDLINE | ID: mdl-39275469

ABSTRACT

Mobile Edge Computing (MEC) is crucial for reducing latency by bringing computational resources closer to the network edge, thereby enhancing the quality of service (QoS). However, the broad deployment of cloudlets poses challenges in efficient network slicing, particularly when traffic distribution is uneven. These challenges include managing diverse resource requirements across widely distributed cloudlets, minimizing resource conflicts and delays, and maintaining service quality amid fluctuating request rates. Addressing them requires intelligent strategies to predict request types (common or urgent), assess resource needs, and allocate resources efficiently. Emerging technologies such as edge computing and 5G with network slicing can handle delay-sensitive IoT requests rapidly, but a robust mechanism for real-time resource and utility optimization remains necessary. To address these challenges, we designed an end-to-end network slicing approach that predicts common and urgent user requests through T distribution. We formulated our problem as a multi-agent Markov decision process (MDP) and introduced a multi-agent soft actor-critic (MAgSAC) algorithm. This algorithm prevents the wastage of scarce resources by intelligently activating and deactivating virtual network function (VNF) instances, thereby balancing the allocation process. Our approach aims to optimize overall utility, balancing trade-offs between revenue, energy consumption costs, and latency. We evaluated MAgSAC through simulations, comparing it with six benchmark schemes: MAA3C, SACT, DDPG, S2Vec, Random, and Greedy. The results demonstrate that MAgSAC improves utility by 30%, reduces energy consumption costs by 12.4%, and reduces execution time by 21.7% compared to the closest related multi-agent approach, MAA3C.

14.
Sensors (Basel) ; 23(22)2023 Nov 15.
Article in English | MEDLINE | ID: mdl-38005586

ABSTRACT

Compared to cloud computing, mobile edge computing (MEC) is a promising solution for delay-sensitive applications due to its proximity to end users. Because of its ability to offload resource-intensive tasks to nearby edge servers, MEC allows a diverse range of compute- and storage-intensive applications to operate on resource-constrained devices. The optimal utilization of MEC can lead to enhanced responsiveness and quality of service, but it requires careful design from the perspective of user-base station association, virtualized resource provisioning, and task distribution. Also, given the limited exploration of the federation concept in the existing literature, its impact on the allocation and management of resources is still not widely recognized. In this paper, we study the network and MEC resource scheduling problem, where some edge servers are federated, limiting resource expansion within the same federations. The integration of network and MEC is crucial, emphasizing the necessity of a joint approach. In this work, we present NAFEOS, a two-stage algorithm that effectively integrates association optimization with vertical and horizontal scaling. The Stage-1 problem optimizes the user-base station association and federation assignment so that the edge servers can be utilized in a balanced manner. Stage-2 then dynamically schedules both vertical and horizontal scaling so that fluctuating task-offloading demands from users are fulfilled. The extensive evaluations and comparison results show that the proposed approach can effectively achieve optimal resource utilization.

15.
Sensors (Basel) ; 23(10)2023 May 16.
Article in English | MEDLINE | ID: mdl-37430721

ABSTRACT

An optimal method for resource allocation based on contract theory is proposed to improve energy utilization. In heterogeneous networks (HetNets), distributed heterogeneous network architectures are designed to balance different computing capacities, and MEC server gains are designed based on the amount of allocated computing tasks. An optimal function based on contract theory is developed to optimize the revenue gain of MEC servers while considering constraints such as service caching, computation offloading, and the number of resources allocated. As the objective function is a complex problem, it is solved utilizing equivalent transformations and variations of the reduced constraints. A greedy algorithm is applied to solve the optimal function. A comparative experiment on resource allocation is conducted, and energy utilization parameters are calculated to compare the effectiveness of the proposed algorithm and the main algorithm. The results show that the proposed incentive mechanism has a significant advantage in improving the utility of the MEC server.

16.
Sensors (Basel) ; 23(13)2023 Jun 29.
Article in English | MEDLINE | ID: mdl-37447890

ABSTRACT

Mobile edge computing has been an important computing paradigm for providing delay-sensitive and computation-intensive services to mobile users. In this paper, we study the problem of the joint optimization of task assignment and energy management in a mobile-server-assisted edge computing network, where mobile servers can provide assisted task offloading services on behalf of the fixed servers at the network edge. The design objective is to minimize the system delay. As far as we know, our paper presents the first work that improves the quality of service of the whole system from a long-term aspect by prolonging the operational time of assisted mobile servers. We formulate the system delay minimization problem as a mixed-integer programming (MIP) problem. Due to the NP-hardness of this problem, we propose a dynamic energy criticality avoidance-based delay minimization ant colony algorithm (EACO), which strives for a balance between delay minimization for offloaded tasks and operational time maximization for mobile servers. We present a detailed algorithm design and deduce its computational complexity. We conduct extensive simulations, and the results demonstrate the high performance of the proposed algorithm compared to the benchmark algorithms.


Subjects
Algorithms, Benchmarking, Computers, Hardness, Physical Phenomena
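The ant-colony pattern behind EACO can be sketched on a bare task-to-server assignment problem (delay matrix, ants, pheromone, evaporation); the energy-criticality avoidance that distinguishes EACO is omitted, and all parameter values are illustrative:

```python
import random

def aco_assign(delay, n_ants=20, n_iter=30, rho=0.3, seed=1):
    """Minimal ant-colony sketch for task-to-server assignment. delay[t][s]
    is the delay of task t on server s; pheromone tau biases later ants
    toward assignments that produced low total delay. EACO additionally
    steers ants away from energy-critical mobile servers (omitted here)."""
    random.seed(seed)
    n_tasks, n_servers = len(delay), len(delay[0])
    tau = [[1.0] * n_servers for _ in range(n_tasks)]
    best, best_cost = None, float("inf")
    for _ in range(n_iter):
        for _ in range(n_ants):
            assign = [
                random.choices(
                    range(n_servers),
                    weights=[tau[t][s] / (1.0 + delay[t][s]) for s in range(n_servers)],
                )[0]
                for t in range(n_tasks)
            ]
            cost = sum(delay[t][assign[t]] for t in range(n_tasks))
            if cost < best_cost:
                best, best_cost = assign, cost
        for t in range(n_tasks):      # evaporate, then reinforce the best assignment
            for s in range(n_servers):
                tau[t][s] *= 1.0 - rho
            tau[t][best[t]] += 1.0 / (1.0 + best_cost)
    return best, best_cost

delay = [[4.0, 1.0], [2.0, 5.0], [3.0, 3.0]]
assign, cost = aco_assign(delay)
```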
17.
Sensors (Basel) ; 23(2)2023 Jan 06.
Article in English | MEDLINE | ID: mdl-36679460

ABSTRACT

Mobile edge computing (MEC)-enabled satellite-terrestrial networks (STNs) can provide task computing services for Internet of Things (IoT) devices. However, since some applications' tasks require huge amounts of computing resources, sometimes the computing resources of a local satellite's MEC server are insufficient, but the computing resources of neighboring satellites' MEC servers are redundant. Therefore, we investigated inter-satellite cooperation in MEC-enabled STNs. First, we designed a system model of the MEC-enabled STN architecture, where the local satellite and the neighboring satellites assist IoT devices in computing tasks through inter-satellite cooperation. The local satellite migrates some tasks to the neighboring satellites to utilize their idle resources. Next, the task completion delay minimization problem for all IoT devices is formulated and decomposed. Then, we propose an inter-satellite cooperative joint offloading decision and resource allocation optimization scheme, which consists of a task offloading decision algorithm based on the Grey Wolf Optimizer (GWO) algorithm and a computing resource allocation algorithm based on the Lagrange multiplier method. The optimal solution is obtained by continuous iterations. Finally, simulation results demonstrate that the proposed scheme achieves relatively better performance than other baseline schemes.


Subjects
Algorithms, Internet of Things, Computer Simulation, Resource Allocation
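A compact sketch of the Grey Wolf Optimizer step used for the offloading decision (the objective below is a toy sphere function, not the paper's task-completion-delay model, and all parameter values are illustrative):

```python
import random

def gwo(fitness, dim, bounds, n_wolves=12, n_iter=60, seed=7):
    """Compact Grey Wolf Optimizer sketch: the three best wolves (alpha,
    beta, delta) pull the pack toward promising regions while the control
    parameter a decays linearly from 2 to 0."""
    random.seed(seed)
    lo, hi = bounds
    wolves = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_wolves)]
    best, best_val = None, float("inf")
    for it in range(n_iter):
        wolves.sort(key=fitness)
        if fitness(wolves[0]) < best_val:            # track the best-ever position
            best, best_val = wolves[0][:], fitness(wolves[0])
        leaders = [w[:] for w in wolves[:3]]         # freeze alpha, beta, delta
        a = 2.0 * (1.0 - it / n_iter)
        for w in wolves:
            for d in range(dim):
                x = 0.0
                for leader in leaders:
                    r1, r2 = random.random(), random.random()
                    A, C = 2.0 * a * r1 - a, 2.0 * r2
                    x += leader[d] - A * abs(C * leader[d] - w[d])
                w[d] = min(hi, max(lo, x / 3.0))     # average pull, clipped to bounds
    return best, best_val

sphere = lambda v: sum(x * x for x in v)
best, val = gwo(sphere, dim=3, bounds=(-5.0, 5.0))
```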
18.
Sensors (Basel) ; 23(2)2023 Jan 08.
Article in English | MEDLINE | ID: mdl-36679520

ABSTRACT

A secrecy energy efficiency optimization scheme for a multifunctional unmanned aerial vehicle (UAV) assisted mobile edge computing system is proposed to solve the computing power and security issues in the Internet-of-Things scenario. The UAV can switch roles between a computing UAV and jamming UAV based on the channel conditions. To ensure the security of the content and the system energy efficiency in the process of offloading computing tasks, the UAV trajectory, uplink transmit power, user scheduling, and offload task are jointly optimized, and an updated-rate assisted block coordinate descent (BCD) algorithm is used. Simulation results show that this scheme efficiently improves the secrecy performance and energy efficiency of the system. Compared with the benchmark scheme, the secrecy energy efficiency of the scheme is improved by 38.5%.


Subjects
Conservation of Energy Resources, Unmanned Aerial Vehicles, Algorithms, Benchmarking, Computer Simulation
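The alternate-and-hold pattern of block coordinate descent can be shown on a two-variable convex toy problem (the function below is illustrative, not the paper's secrecy-energy-efficiency objective, whose blocks are trajectory, power, scheduling, and offloading):

```python
def bcd(n_iter=40):
    """Block coordinate descent sketch: alternately minimize
    f(x, y) = (x - 1)**2 + (y + 2)**2 + x*y over one block with the
    other held fixed, using the closed-form stationarity conditions."""
    x, y = 0.0, 0.0
    for _ in range(n_iter):
        x = 1.0 - y / 2.0        # argmin over x with y fixed
        y = -2.0 - x / 2.0       # argmin over y with x fixed
    return x, y

x, y = bcd()
f = (x - 1) ** 2 + (y + 2) ** 2 + x * y
```

Each block update is the closed-form argmin with the other block held fixed; for this strictly convex f the iterates contract to the joint optimum (x, y) = (8/3, -10/3).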
19.
Sensors (Basel) ; 23(9)2023 Apr 23.
Article in English | MEDLINE | ID: mdl-37177424

ABSTRACT

Nowadays, mobile devices are expected to perform a growing number of tasks whose complexity is also increasing significantly. However, despite great technological improvements in the last decade, such devices still have limitations in terms of processing power and battery lifetime. In this context, mobile edge computing (MEC) emerges as a possible solution to address these limitations, providing on-demand services to the customer and bringing several services published in the cloud closer, with reduced cost and fewer security concerns. In parallel, Unmanned Aerial Vehicle (UAV) networking emerged as a paradigm offering flexible services and new ephemeral applications such as safety and disaster management, mobile crowd-sensing, and fast delivery, to name a few. However, to use these services efficiently, discovery and selection strategies must be taken into account, and discovering the services made available by a UAV-MEC network and selecting the best among them in a timely and efficient manner can become a challenging task. To face these issues, game theory methods have been proposed in the literature that suit the case of UAV-MEC services well, modeling the challenge as a Stackelberg game and using existing approaches to find its solution for efficient service discovery and selection. Hence, the goal of this paper is to propose Stackelberg-game-based solutions for service discovery and selection in the context of UAV-based mobile edge computing. Simulation results obtained using the NS-3 simulator highlight the efficiency of the proposed game in terms of price and QoS metrics.
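The leader-follower structure can be sketched with a one-leader, one-follower pricing game under an assumed linear demand model (names, demand form, and parameters are all illustrative, not from the paper): the leader anticipates the follower's best response and picks the price that maximizes its revenue.

```python
def follower_demand(price, a=10.0, b=2.0):
    """Follower best response: buy d(p) = max(0, a - b*p) units of the
    service at price p (an assumed linear demand model)."""
    return max(0.0, a - b * price)

def leader_best_price(a=10.0, b=2.0, step=0.01):
    """Leader anticipates the follower's response and scans prices to
    maximize revenue p * d(p)."""
    best_p, best_rev, p = 0.0, 0.0, 0.0
    while p <= a / b:
        rev = p * follower_demand(p, a, b)
        if rev > best_rev:
            best_p, best_rev = p, rev
        p += step
    return best_p, best_rev

p_star, revenue = leader_best_price()
```

For this toy model the scan recovers the closed-form Stackelberg price p* = a/(2b) = 2.5.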

20.
Sensors (Basel) ; 23(4)2023 Feb 14.
Article in English | MEDLINE | ID: mdl-36850763

ABSTRACT

Deep Learning models have presented promising results when applied to Agriculture 4.0. Among other applications, these models can be used in disease detection and fruit counting. Deep Learning models usually have many layers in the architecture and millions of parameters, which hinders their use on mobile devices, as they require a large amount of processing power for inference. In addition, the lack of high-quality Internet connectivity in the field impedes the usage of cloud computing, pushing the processing towards edge devices. This work describes the proposal of an edge AI application to detect and map diseases in citrus orchards. The proposed system has low computational demand, enabling the use of low-footprint models for both detection and classification tasks. We initially compared AI algorithms to detect fruits on trees, specifically analyzing and comparing YOLO and Faster R-CNN. Then, we studied lean AI models to perform the classification task, testing and comparing the performance of MobileNetV2, EfficientNetV2-B0, and NASNet-Mobile. In the detection task, YOLO and Faster R-CNN had similar AI performance metrics, but YOLO was significantly faster. In the image classification task, MobileNetV2 and EfficientNetV2-B0 obtained an accuracy of 100%, while NASNet-Mobile reached 98%. As for timing performance, MobileNetV2 and EfficientNetV2-B0 were the best candidates, while NASNet-Mobile was significantly worse; furthermore, MobileNetV2 performed 10% better than EfficientNetV2-B0. Finally, we provide a method to evaluate the results of these algorithms towards describing the disease spread, using statistical parametric models and a genetic algorithm to perform the parameters' regression. With these results, we validated the proposed pipeline, enabling the usage of adequate AI models to develop a mobile edge AI solution.


Subjects
Agriculture, Citrus, Algorithms, Benchmarking, Artificial Intelligence