ABSTRACT
This study addresses bearing-only target localization under sensor bias contamination. To enhance the system's observability, we propose a control barrier function (CBF)-based UAV motion-planning method inspired by plant phototropism. Because the rank criterion yields only qualitative observability results, we employ the condition number for a quantitative analysis and identify the key influencing factors. A multi-objective, nonlinear optimization problem for UAV trajectory planning is then formulated and solved with the proposed Nonlinear Constrained Multi-Objective Gray Wolf Optimization Algorithm (NCMOGWOA). Simulations validate the approach, showing a threefold reduction in the condition number and thus a significant improvement in observability. The algorithm outperforms the compared methods in localization accuracy and convergence, achieving the lowest Generational Distance (GD, 7.3442) and Inverted Generational Distance (IGD, 8.4577) values. Additionally, we explore the effects of the CBF attenuation rates and initial flight-path angles.
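To illustrate how the condition number quantifies bearing-only observability, the following is a minimal sketch assuming a planar bearing measurement model; the function names, the stacked-Jacobian observability matrix, and the two illustrative UAV tracks are hypothetical choices for demonstration, not taken from the paper.

```python
import numpy as np

def bearing_jacobian(sensor_pos, target_pos):
    """Jacobian of a 2-D bearing measurement atan2(dy, dx) w.r.t. the target position."""
    dx, dy = target_pos - sensor_pos
    r2 = dx**2 + dy**2
    return np.array([-dy, dx]) / r2  # d(theta)/d(x_t), d(theta)/d(y_t)

def observability_condition_number(sensor_track, target_pos):
    """Stack bearing Jacobians along the UAV track and return the
    condition number of the resulting observability matrix."""
    H = np.vstack([bearing_jacobian(p, target_pos) for p in sensor_track])
    return np.linalg.cond(H)

# Hypothetical example: a radial approach keeps the line of sight nearly fixed,
# so the stacked Jacobians are (near-)parallel and the matrix is ill-conditioned,
# whereas an orbiting track diversifies the bearing geometry.
target = np.array([100.0, 100.0])
radial = np.array([[t, t] for t in np.linspace(0.0, 80.0, 20)])
orbit = np.array([[100 + 60 * np.cos(a), 100 + 60 * np.sin(a)]
                  for a in np.linspace(0.0, np.pi, 20)])
print(observability_condition_number(radial, target))  # very large (rank-deficient geometry)
print(observability_condition_number(orbit, target))   # small -> well-conditioned, good observability
```

A trajectory planner that lowers this condition number is, in this sense, improving the observability of the target state.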
ABSTRACT
Morphing aircraft can modify their geometric configuration to suit different flight conditions and thereby improve performance, for example by increasing the lift-to-drag ratio or reducing fuel consumption. In this article, we focus on wing airfoil morphing and propose a novel morphing control method for an asymmetric deformable airfoil based on deep reinforcement learning. Firstly, we develop an asymmetric airfoil shaped by piecewise Bézier curves and realized with shape memory alloys; resistive heating is adopted to actuate the alloys and produce the airfoil morphing. To account for the hysteresis exhibited in the phase transformation of shape memory alloys, we construct a second-order Markov decision process for the morphing procedure, yielding a reinforcement learning environment in which hysteresis is explicitly considered. Subsequently, we learn the morphing policy with deep reinforcement learning techniques, since accurate information about the system model is unavailable. Lastly, we conduct simulations to demonstrate the benefits of the learning-based implementation and to validate the morphing performance of the proposed method. The simulation results show that the proposed method provides an average 29.8% performance improvement over traditional methods.
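As a rough illustration of what a second-order Markov decision process means here, the sketch below augments the observation with the previous airfoil state so a policy can infer history-dependent (hysteretic) behavior. It is a toy environment under assumed dynamics; the class name, control-point parameterization, and the placeholder lagged response are hypothetical and only mimic, not reproduce, SMA hysteresis.

```python
import numpy as np

class SecondOrderMorphingEnv:
    """Toy second-order MDP: the observation contains the current and the
    previous airfoil states, so hysteresis can be inferred from history."""

    def __init__(self, n_ctrl_points=4):
        self.n = n_ctrl_points
        self.state = np.zeros(self.n)       # current Bézier control-point offsets
        self.prev_state = np.zeros(self.n)  # previous offsets (second-order memory)
        self.target = np.zeros(self.n)

    def reset(self, target_shape):
        self.target = np.asarray(target_shape, dtype=float)
        self.state[:] = 0.0
        self.prev_state[:] = 0.0
        return self._obs()

    def step(self, heating_power):
        # Placeholder response: the new deformation depends on both the current
        # and previous states as well as the heating input (assumed dynamics).
        new_state = (0.6 * self.state + 0.3 * self.prev_state
                     + 0.1 * np.asarray(heating_power, dtype=float))
        self.prev_state, self.state = self.state, new_state
        reward = -np.linalg.norm(self.state - self.target)  # shape-tracking error
        return self._obs(), reward, False, {}

    def _obs(self):
        # Second-order observation: concatenate current and previous states.
        return np.concatenate([self.state, self.prev_state])
```

Any standard deep RL algorithm can then be trained on this augmented state without requiring an explicit hysteresis model.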