Results 1 - 2 of 2
1.
Neural Netw ; 174: 106224, 2024 Jun.
Article in English | MEDLINE | ID: mdl-38479186

ABSTRACT

Adversarial training has become the mainstream method for boosting the adversarial robustness of deep models. However, it often suffers from a trade-off dilemma: the use of adversarial examples hurts the standard generalization of models on natural data. To study this phenomenon, we investigate it from the perspective of spatial attention. In brief, standard training typically encourages a model to conduct a comprehensive examination of the input space, whereas adversarial training often causes a model to concentrate overly on sparse spatial regions. This narrowed attention helps avoid adversarial accumulation but easily makes the model ignore abundant discriminative information, resulting in weak generalization. To address this issue, this paper introduces an Attention-Enhanced Learning Framework (AELF) for robustness training. The main idea is to enable the model to inherit the attention pattern of a standard pre-trained model through an embedding-level regularization. Specifically, given a teacher model built on natural examples, the embedding distribution of the teacher model is used as a static constraint to regulate the embedding outputs of the objective model. This design is supported by the observation that the embedding feature of a standard model is usually recognized as a rich semantic integration of the input. For implementation, we present a simplified AELF that achieves the regularization with a single cross-entropy loss via a parameter-initialization and parameter-update strategy, avoiding the extra consistency-comparison operation between embedding vectors. Experimental observations verify the rationality of our argument, and experimental results demonstrate that the method achieves remarkable improvements in generalization while maintaining high robustness.
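The core idea of the abstract — regularizing the robust model's embedding toward that of a standard (teacher) model — can be sketched as a combined loss. This is a minimal illustration, not the paper's actual implementation (which folds the constraint into a single cross-entropy loss via parameter initialization and update strategy); the function names, the MSE form of the regularizer, and the weight `lam` are assumptions for illustration.

```python
import numpy as np

def cross_entropy(logits, label):
    # Numerically stable softmax cross-entropy for a single example.
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[label]

def aelf_style_loss(student_emb, teacher_emb, student_logits, label, lam=1.0):
    """Task loss plus an embedding-level regularizer (hypothetical MSE
    form) pulling the robust model's embedding toward the embedding of
    the standard pre-trained teacher, which stays fixed (static)."""
    task = cross_entropy(student_logits, label)
    reg = np.mean((student_emb - teacher_emb) ** 2)
    return task + lam * reg
```

When the student's embedding matches the teacher's, the regularizer vanishes and only the task loss remains; any drift away from the teacher's attention pattern is penalized in proportion to `lam`.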


Subjects
Generalization, Psychological ; Learning ; Entropy ; Semantics
2.
Comput Biol Med ; 144: 105345, 2022 05.
Article in English | MEDLINE | ID: mdl-35240379

ABSTRACT

With the advancement of machine learning technologies, Deep Neural Networks (DNNs) have been utilized for automated interpretation of Electrocardiogram (ECG) signals to identify potential abnormalities in a patient's heart within a second. Studies have shown that the accuracy of DNNs for ECG signal classification can reach the level of a human-expert cardiologist if a sufficiently large training dataset is available. However, it is known in the field of computer vision that DNNs are not robust to adversarial noise, which may cause DNNs to make wrong class-label predictions. In this work, we confirm that DNNs are not robust to adversarial noise in ECG signal classification applications, and we propose a novel regularization method to improve DNN robustness by minimizing the noise-to-signal ratio. Our method is evaluated on two public datasets, the MIT-BIH dataset and the CPSC2018 dataset, and the results show that our method can significantly enhance DNN robustness against adversarial noise generated by Projected Gradient Descent (PGD) and Smooth Adversarial Perturbation (SAP) attacks, with a minimal reduction of accuracy on clean data. Our method may serve as a baseline for designing new defenses against adversarial attacks for life-critical applications that depend on ECG interpretation. The code of this work is publicly available at github.com/SarielMa/Robust_DNN_for_ECG.
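The PGD attack named in the abstract, and the noise-to-signal ratio its defense minimizes, can be sketched on a toy differentiable classifier. This is a hedged illustration, not the paper's code: logistic regression stands in for the DNN, and the step size `alpha`, radius `eps`, and function names are assumptions.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.1, alpha=0.02, steps=10):
    """Projected Gradient Descent on a logistic-regression classifier:
    repeatedly step in the sign of the loss gradient w.r.t. the input,
    projecting back into an L-infinity ball of radius eps around the
    clean signal x."""
    x_adv = x.copy()
    for _ in range(steps):
        z = w @ x_adv + b
        p = 1.0 / (1.0 + np.exp(-z))           # sigmoid probability
        grad = (p - y) * w                     # d(BCE loss)/d(input)
        x_adv = x_adv + alpha * np.sign(grad)  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto eps-ball
    return x_adv

def noise_to_signal_ratio(x_clean, x_adv):
    # The quantity the paper's regularizer seeks to minimize:
    # perturbation energy relative to the clean signal.
    noise = x_adv - x_clean
    return np.linalg.norm(noise) / np.linalg.norm(x_clean)
```

The projection step guarantees the adversarial ECG trace stays within `eps` of the clean one per sample, which is what makes the perturbation hard to notice while still flipping the prediction.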


Subjects
Electrocardiography ; Neural Networks, Computer ; Humans ; Thorax