Results 1 - 2 of 2
1.
Article in English | MEDLINE | ID: mdl-38687660

ABSTRACT

Out-of-distribution (OoD) generalization is critical but challenging in real applications such as unmanned aerial vehicle (UAV) flight control. Previous machine learning-based control has shown promise in complex real-world environments but suffers severe performance degradation in OoD scenarios, posing risks to the stability and safety of UAVs. In this paper, we find that random noise injected during training surprisingly yields theoretically guaranteed performance, via a proposed functional optimization framework. More encouragingly, this framework does not rely on the Lyapunov assumptions commonly used in this field, making it more widely applicable. Within this framework, we derive an upper bound on the control error and prove that the injected random noise can lower OoD control error. Based on this theoretical analysis, we propose OoD-Control to generalize control to unseen environments. Numerical experiments demonstrate the superiority of the proposed algorithm, which surpasses the previous state of the art by 65% in challenging unseen environments. In outdoor real-world experiments, the control error is reduced by approximately 50%. Our code is available at https://github.com/Ulricall/OoD-Control.
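The core recipe the abstract describes, injecting random noise during training, is simple to sketch. Below is a minimal, hypothetical PyTorch illustration of noise-injected controller training, assuming a generic state-feedback controller trained by imitation of target actions; the network architecture, dimensions, and noise_std are illustrative assumptions, not values from the paper or its repository.

```python
import torch
import torch.nn as nn

class Controller(nn.Module):
    """Hypothetical state-feedback controller: maps (state, reference)
    to an actuator command. Dimensions are illustrative only."""
    def __init__(self, state_dim=12, ref_dim=12, act_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + ref_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, act_dim),
        )

    def forward(self, state, ref):
        return self.net(torch.cat([state, ref], dim=-1))

def noisy_training_step(controller, optimizer, state, ref, target_action,
                        noise_std=0.05):
    """One imitation-learning step with random noise injected into the
    observed state. The abstract's claim is that such noise yields a
    provable upper bound on OoD control error; noise_std here is an
    assumed hyperparameter, not a value from the paper."""
    noisy_state = state + noise_std * torch.randn_like(state)
    action = controller(noisy_state, ref)
    loss = nn.functional.mse_loss(action, target_action)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time the controller sees clean (but possibly OoD) states; the noise is applied only during training, acting as a smoothing regularizer on the learned control law.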

2.
IEEE Trans Neural Netw Learn Syst. 2023 Feb;34(2):882-893.
Article in English | MEDLINE | ID: mdl-34406950

ABSTRACT

Despite their empirical success in various domains, deep neural networks have been shown to be vulnerable to maliciously perturbed input data that can dramatically degrade their performance; such perturbations are known as adversarial attacks. Adversarial training, formulated as a form of robust optimization, has been demonstrated to be an effective countermeasure, but it incurs substantial computational overhead compared with standard training. To reduce this cost, we propose Amata, an annealing mechanism for adversarial training acceleration. Amata is provably convergent, well motivated from the lens of optimal control theory, and can be combined with existing acceleration methods to further enhance performance. On standard datasets, Amata achieves similar or better robustness with roughly one third to one half of the computational time of traditional methods. In addition, Amata can be incorporated into other adversarial training acceleration algorithms (e.g., YOPO, Free, Fast, and ATTA), leading to a further reduction in computational time on large-scale problems.
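Amata's actual schedule is derived from an optimal control formulation; as a rough illustration of the annealing idea (few, cheap inner maximization steps early in training, more and stronger ones later), here is a hypothetical PyTorch sketch built on standard L-infinity PGD. The linear schedule, step counts, and epsilon are assumptions for illustration, not the paper's algorithm.

```python
import torch
import torch.nn as nn

def annealed_pgd_steps(epoch, total_epochs, k_min=1, k_max=10):
    """Linearly anneal the number of inner PGD steps over training.
    A linear ramp is an illustrative assumption; Amata derives its
    schedule from optimal control theory."""
    frac = epoch / max(total_epochs - 1, 1)
    return int(round(k_min + frac * (k_max - k_min)))

def pgd_attack(model, x, y, eps=8 / 255, steps=10):
    """Standard L-infinity PGD. The step size alpha is scaled with the
    step count so the perturbation budget stays reachable even when
    the annealed step count is small."""
    loss_fn = nn.CrossEntropyLoss()
    alpha = 2.5 * eps / steps
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, epoch, total_epochs):
    """One epoch of adversarial training using the annealed step count."""
    steps = annealed_pgd_steps(epoch, total_epochs)
    loss_fn = nn.CrossEntropyLoss()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y, steps=steps)
        loss = loss_fn(model(x_adv), y)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```

The saving comes from the early epochs: a one-step inner loop costs roughly a tenth of a ten-step loop per batch, so ramping the step count up over training can plausibly land in the 1/3 to 1/2 total-time range the abstract reports.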
