ABSTRACT
Global optimization problems have been a research topic of great interest in various engineering applications, for which the neural network algorithm (NNA) is one of the most widely used methods. However, the NNA inevitably falls into poor local optima and suffers slow convergence when tackling complex optimization problems. To overcome these problems, an improved neural network algorithm with quasi-oppositional-based and chaotic sine-cosine learning strategies is proposed, which speeds up convergence and avoids becoming trapped in local optima. Firstly, quasi-oppositional-based learning facilitates the improved algorithm's exploration and exploitation of the search space. Meanwhile, a new logistic chaotic sine-cosine learning strategy, integrating the logistic chaotic map with a sine-cosine strategy, enhances the algorithm's ability to jump out of local optima. Moreover, a dynamic tuning factor based on piecewise linear chaotic mapping is utilized to adjust the exploration space and improve convergence performance. Finally, the validity and applicability of the proposed algorithm are evaluated on the challenging CEC 2017 benchmark functions and three engineering optimization problems. The comparative results of averages, standard deviations, and Wilcoxon rank-sum tests reveal that the presented algorithm achieves excellent global optimality and convergence speed on most functions and engineering problems.
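The two learning strategies named above can be sketched in one dimension as follows. This is a minimal illustration, not the paper's exact formulation: the function names, the fully chaotic r = 4 logistic map, and the 50/50 sine/cosine switch are assumptions made for the sketch.

```python
import math
import random

def quasi_opposite(x, low, high):
    """Quasi-opposite point: a random point between the interval centre
    and the opposite point low + high - x (standard QOBL definition)."""
    centre = (low + high) / 2.0
    opposite = low + high - x
    return random.uniform(min(centre, opposite), max(centre, opposite))

def logistic_sine_cosine_step(x, best, z, low, high):
    """One position update mixing a logistic chaotic value z with a
    sine-cosine move toward the current best solution (illustrative form)."""
    z = 4.0 * z * (1.0 - z)                 # logistic map, chaotic at r = 4
    r2 = 2.0 * math.pi * random.random()    # random angle for sin/cos
    if random.random() < 0.5:
        x_new = x + z * math.sin(r2) * abs(best - x)
    else:
        x_new = x + z * math.cos(r2) * abs(best - x)
    # clamp to the search bounds and carry the chaos state forward
    return min(max(x_new, low), high), z
```

In a full NNA loop, `quasi_opposite` would seed or perturb candidate solutions, while the chaotic value `z` replaces a uniform random step size so successive moves are non-repeating but deterministic given the seed value.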
ABSTRACT
Numerical optimization has been a popular research topic in various engineering applications, where differential evolution (DE) is one of the most extensively applied methods. However, it is difficult to choose appropriate control parameters and to avoid falling into local optima and poor convergence when handling complex numerical optimization problems. To handle these problems, an improved DE (BROMLDE) with the Bernstein operator and refracted oppositional-mutual learning (ROML) is proposed, which reduces parameter selection, converges faster, and avoids becoming trapped in local optima. Firstly, a new ROML strategy integrates mutual learning (ML) and refracted oppositional learning (ROL), switching stochastically between ROL and ML during population initialization and the generation-jumping period to balance exploration and exploitation. Meanwhile, a dynamic adjustment factor is constructed to improve the algorithm's ability to jump out of local optima. Secondly, a Bernstein operator, which requires no parameter setting or intrinsic parameter-tuning phase, is introduced to improve convergence performance. Finally, the performance of BROMLDE is evaluated on 10 bound-constrained benchmark functions from CEC 2019 and CEC 2020, respectively, together with two engineering optimization problems. The comparative experimental results show that BROMLDE achieves higher global optimization capability and faster convergence on most functions and engineering problems.
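The stochastic switching at the heart of the ROML strategy can be sketched as below. This is a hedged one-dimensional illustration: the refraction factor k, the simple peer-attraction form of mutual learning, and the 0.5 switching probability are assumptions for the sketch, not the exact operators used in BROMLDE.

```python
import random

def refracted_opposite(x, low, high, k=2.0):
    """Refraction-based opposite point; k is a refraction scale factor.
    With k = 1 this reduces to the plain opposite point low + high - x."""
    return (low + high) / 2.0 + (low + high) / (2.0 * k) - x / k

def mutual_learning(x_i, x_j):
    """Mutual learning sketch: move x_i toward a randomly chosen peer x_j."""
    return x_i + random.random() * (x_j - x_i)

def roml_candidate(x_i, x_j, low, high, p_switch=0.5):
    """Stochastically switch between ROL and ML, as the ROML strategy does
    during initialization and generation jumping."""
    if random.random() < p_switch:
        cand = refracted_opposite(x_i, low, high)
    else:
        cand = mutual_learning(x_i, x_j)
    return min(max(cand, low), high)   # keep the candidate inside the bounds
```

The opposition branch pushes candidates into the unexplored mirror region of the search space (exploration), while the mutual-learning branch pulls them toward existing peers (exploitation); the random switch balances the two.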
ABSTRACT
This paper proposes an adaptive funnel dynamic surface control method with a disturbance observer for the permanent magnet synchronous motor with time delays. An improved prescribed performance function is combined with a modified funnel variable at the start of the controller design to transform the output-constrained permanent magnet synchronous motor system into an unconstrained one, achieving a faster convergence rate than ordinary barrier Lyapunov functions. Then, the controller is devised by applying the dynamic surface control technique with first-order filters to the unconstrained system. Therein, a disturbance observer and radial basis function neural networks are introduced to estimate unmatched disturbances and multiple unknown nonlinearities, respectively. Several Lyapunov-Krasovskii functionals are constructed to compensate for the time delays, enhancing control performance. The first-order filters overcome the "explosion of complexity" caused by general backstepping methods. Additionally, the boundedness and bounds of all the signals are established through a detailed stability analysis. Ultimately, simulation results and comparison experiments confirm the superiority of the designed controller.
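The prescribed-performance-plus-funnel idea can be sketched numerically. The exponential envelope below is a common textbook choice, and the particular constants and the error transformation are assumptions for illustration; the paper's improved performance function and funnel variable may take a different form.

```python
import math

def performance_bound(t, rho0=2.0, rho_inf=0.1, decay=1.5):
    """Exponentially decaying prescribed performance envelope
    rho(t) = (rho0 - rho_inf) * exp(-decay * t) + rho_inf,
    a common choice for output-constraint shaping."""
    return (rho0 - rho_inf) * math.exp(-decay * t) + rho_inf

def funnel_error(e, t):
    """Funnel-transformed tracking error: grows without bound as |e|
    approaches the shrinking envelope rho(t), so any controller that
    keeps this quantity bounded keeps e inside the constraint."""
    rho = performance_bound(t)
    return e / (rho - abs(e))   # valid only while |e| < rho(t)
```

Because `rho(t)` shrinks from `rho0` to `rho_inf`, bounding the transformed error forces the raw tracking error to converge into the narrow steady-state funnel, which is the sense in which the method outperforms a fixed barrier Lyapunov function.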
ABSTRACT
Server load levels affect the performance of cloud task execution, which is rooted in the impact of server performance on task execution. Traditional cloud task scheduling methods usually consider only server load, without fully accounting for the server's real-time load-performance mapping, and therefore cannot accurately evaluate the server's real-time processing capability. This deficiency directly degrades the efficiency, performance, and user experience of cloud task scheduling. To address these problems, we first construct a performance platform model to monitor servers' real-time load and performance status. We then propose a new deep reinforcement learning task scheduling method based on server real-time performance (SRP-DRL). This method introduces a real-time performance-aware strategy that, in addition to server load, incorporates status information on the real-time impact of task load on server performance. It enhances the perception capability of the deep reinforcement learning (DRL) model in cloud scheduling environments and improves load balancing under latency constraints. Experimental results indicate that SRP-DRL outperforms the Random, Round-Robin, Earliest Idle Time First (EITF), and Best Fit (BEST-FIT) scheduling methods in terms of average task response time, success rate, and average server load variance. In particular, SRP-DRL is highly effective in reducing average server load variance when many tasks arrive within a unit of time, ultimately optimizing the performance of the cloud system.
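The core difference from load-only schedulers is what the agent observes. The sketch below shows a performance-aware state vector and a typical epsilon-greedy action rule; the feature names (`load`, `perf`), their per-server layout, and the exploration rate are illustrative assumptions, not the exact SRP-DRL design.

```python
import random

def build_state(task_len, servers):
    """State vector for the scheduling agent: the incoming task length
    plus, for each server, both its queued load and a real-time
    performance score -- the load-performance mapping, not load alone."""
    state = [task_len]
    for s in servers:
        state += [s["load"], s["perf"]]
    return state

def epsilon_greedy(q_values, eps=0.1):
    """Choose a target server index from estimated Q-values with
    epsilon-greedy exploration, a standard DRL action-selection rule."""
    if random.random() < eps:
        return random.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda i: q_values[i])
```

A Q-network trained on such states can learn that a lightly loaded but slow server may still be a worse choice than a busier fast one, which is exactly the distinction a load-only scheduler such as Round-Robin or BEST-FIT cannot make.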
ABSTRACT
The teaching-learning-based optimization (TLBO) algorithm, which has gained popularity among scholars for addressing practical problems, suffers from several drawbacks, including slow convergence, susceptibility to local optima, and suboptimal performance. To overcome these limitations, this paper presents a novel algorithm, the teaching-learning optimization algorithm based on the cadre-mass relationship with a tutor mechanism (TLOCTO). Building upon the original teaching-learning framework, the algorithm incorporates the characteristics of class cadre assignments and extracurricular tutoring institutions, proposing a new learner strategy, a cadre-mass relationship strategy, and a tutor mechanism. Experimental results on 23 test functions and the CEC-2020 benchmark functions demonstrate that the enhanced algorithm is strongly competitive in convergence speed, solution accuracy, and robustness. Additionally, the superiority of the proposed algorithm over other popular optimizers is confirmed by the Wilcoxon signed-rank test. Furthermore, the algorithm's practical applicability is demonstrated by successfully applying it to three complex engineering design problems.
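For reference, the two phases of classic TLBO that TLOCTO builds upon can be sketched as follows; the cadre-mass and tutor extensions proposed in the paper are not reproduced here, and the minimisation convention is an assumption of this sketch.

```python
import random

def teacher_phase(x, teacher, mean, tf=None):
    """Classic TLBO teacher phase: move a learner toward the teacher
    (best solution) and away from the class mean; the teaching factor
    TF is drawn from {1, 2}."""
    if tf is None:
        tf = random.choice([1, 2])
    return [xi + random.random() * (ti - tf * mi)
            for xi, ti, mi in zip(x, teacher, mean)]

def learner_phase(x_i, x_j, f_i, f_j):
    """Classic TLBO learner phase: learn from a random peer, moving
    toward it if it is fitter (lower f) and away from it otherwise."""
    r = random.random()
    if f_j < f_i:
        return [a + r * (b - a) for a, b in zip(x_i, x_j)]
    return [a + r * (a - b) for a, b in zip(x_i, x_j)]
```

TLOCTO's additions act on top of these updates: class cadres supply intermediate role models between the teacher and ordinary learners, and the tutor mechanism adds an extracurricular learning step, which is how the paper targets TLBO's slow convergence and local-optimum stagnation.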