Results 1 - 6 of 6
1.
Neural Netw ; 172: 106013, 2024 Apr.
Article in English | MEDLINE | ID: mdl-38354665

ABSTRACT

Many large and complex deep neural networks have been shown to provide higher performance on various computer vision tasks. However, very little is known about the relationship between the complexity of the input data, the type of noise it contains, and the depth needed for correct classification. Existing studies do not adequately address common corruptions, particularly the impact these corruptions have on the individual parts of a deep neural network. We can therefore reasonably assume that classification (or misclassification) happens at particular layers of a network, whose effects accumulate into a final correct or incorrect prediction. In this paper, we introduce the novel concept of corruption depth, which identifies the depth in the network up to which the misclassification persists. We assert that identifying such layers will help in designing better networks by pruning certain layers instead of purifying the entire network, which is computationally heavy. Through extensive experiments, we present a coherent study of how examples are processed through the network. Our approach also illustrates different philosophies of example memorization and provides a one-dimensional view of sample or query difficulty. We believe that understanding corruption depth can open a new dimension of model explainability and model compression, where, instead of just visualizing an attention map, the progress of classification can be traced throughout the network.
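
A rough sketch of the kind of layer-wise analysis the abstract describes, not the paper's method: it compares intermediate activations of a clean image and a corrupted copy in a pretrained torchvision ResNet-18 and reports how many residual stages still agree, as a crude proxy for a corruption depth. The similarity threshold and the choice of stages are illustrative assumptions.

import torch
import torch.nn.functional as F
import torchvision.models as models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

acts = {}
def save(name):
    def hook(module, inputs, output):
        acts[name] = output.flatten(1)   # store flattened activations of this stage
    return hook

stages = ["layer1", "layer2", "layer3", "layer4"]
for name in stages:
    getattr(model, name).register_forward_hook(save(name))

def corruption_depth(clean, corrupted, threshold=0.9):
    with torch.no_grad():
        acts.clear(); model(clean); clean_acts = dict(acts)
        acts.clear(); model(corrupted); noisy_acts = dict(acts)
    depth = 0
    for name in stages:
        sim = F.cosine_similarity(clean_acts[name], noisy_acts[name]).mean()
        if sim < threshold:              # representations diverge from this stage onward
            break
        depth += 1
    return depth                         # 0: diverges immediately; 4: agrees at every stage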


Subject(s)
Data Compression; Neural Networks, Computer; Attention
2.
IEEE Trans Image Process ; 31: 7338-7349, 2022.
Article in English | MEDLINE | ID: mdl-36094979

ABSTRACT

Adversarial attacks have been demonstrated to fool deep classification networks. These attacks have two key characteristics: first, the perturbations are mostly additive noises carefully crafted from the deep neural network itself; second, the noise is added to the whole image, without treating the image as a combination of the multiple components from which it is made. Motivated by these observations, in this research we first study the role of various image components and their impact on the classification of images. These manipulations require neither knowledge of the network nor external noise to function effectively, and hence have the potential to be one of the most practical options for real-world attacks. Based on the significance of particular image components, we also propose a transferable adversarial attack against unseen deep networks. The proposed attack uses the projected gradient descent strategy to add the adversarial perturbation to the manipulated component image. The experiments are conducted on a wide range of networks and four databases, including ImageNet and CIFAR-100, and show that the proposed attack achieves better transferability and hence gives an attacker the upper hand. On the ImageNet database, the success rate of the proposed attack is up to 88.5%, while the current state-of-the-art attack success rate on the database is 53.8%. We further test the resiliency of the attack against one of the most successful defenses, namely adversarial training, to measure its strength. The comparison with several challenging attacks shows that (i) the proposed attack has a higher transferability rate against multiple unseen networks and (ii) its impact is hard to mitigate. We claim that, based on an understanding of image components, the proposed research identifies a new adversarial attack, unseen so far and unsolved by current defense mechanisms.
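
A minimal sketch of the standard projected gradient descent (PGD) step the attack builds on; the component-wise image manipulation described in the paper is not reproduced here. The epsilon, step size, and step count are illustrative assumptions.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, epsilon=8/255, alpha=2/255, steps=10):
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # signed gradient ascent step, then projection back into the epsilon-ball around x
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + epsilon), x - epsilon).clamp(0.0, 1.0)
    return x_adv.detach()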

3.
IEEE Trans Neural Netw Learn Syst ; 33(8): 3277-3289, 2022 Aug.
Article in English | MEDLINE | ID: mdl-33710959

ABSTRACT

Adversarial perturbations have demonstrated the vulnerabilities of deep learning algorithms to adversarial attacks. Existing adversary detection algorithms attempt to detect these singularities; however, they are in general dependent on the loss function, database, or model. To mitigate this limitation, we propose DAMAD, a generalized perturbation detection algorithm that is agnostic to the model architecture, training data set, and loss function used during training. The proposed adversarial perturbation detection algorithm is based on the fusion of autoencoder embeddings and statistical texture features extracted from convolutional neural networks. The performance of DAMAD is evaluated in the challenging scenarios of cross-database, cross-attack, and cross-architecture training and testing, along with the traditional evaluation of testing on the same database with a known attack and model. Comparison with state-of-the-art perturbation detection algorithms showcases the effectiveness of the proposed algorithm on six databases: ImageNet, CIFAR-10, Multi-PIE, MEDS, Point and Shoot Challenge (PaSC), and MNIST. Performance evaluation with nearly a quarter of a million adversarial and original images, together with comparison against recent algorithms, shows the effectiveness of the proposed algorithm.
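
A minimal sketch of the general recipe (fused features fed to a binary detector), not the DAMAD implementation: an encoder embedding of the input image is concatenated with simple per-channel statistics of a CNN feature map and classified as adversarial or clean. All layer sizes here are illustrative assumptions.

import torch
import torch.nn as nn

class PerturbationDetector(nn.Module):
    def __init__(self, image_dim=3 * 32 * 32, feat_channels=512, embed_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(), nn.Linear(image_dim, embed_dim), nn.ReLU())
        self.classifier = nn.Sequential(
            nn.Linear(embed_dim + 2 * feat_channels, 64), nn.ReLU(), nn.Linear(64, 2))

    def forward(self, image, cnn_features):
        z = self.encoder(image)                 # autoencoder-style embedding of the input
        mu = cnn_features.mean(dim=(2, 3))      # per-channel mean as a texture statistic
        sd = cnn_features.std(dim=(2, 3))       # per-channel std as a texture statistic
        return self.classifier(torch.cat([z, mu, sd], dim=1))   # adversarial vs. clean logits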

4.
IEEE Trans Pattern Anal Mach Intell ; 29(4): 561-72, 2007 Apr.
Article in English | MEDLINE | ID: mdl-17299214

ABSTRACT

Biometrics-based authentication systems offer obvious usability advantages over traditional password- and token-based authentication schemes. However, biometrics raises several privacy concerns. A biometric is permanently associated with a user and cannot be changed. Hence, if a biometric identifier is compromised, it is lost forever and possibly for every application where the biometric is used. Moreover, if the same biometric is used in multiple applications, a user can potentially be tracked from one application to the next by cross-matching biometric databases. In this paper, we demonstrate several methods to generate multiple cancelable identifiers from fingerprint images to overcome these problems. In essence, a user can be given as many biometric identifiers as needed by issuing a new transformation "key." The identifiers can be cancelled and replaced when compromised. We empirically compare the performance of several algorithms such as Cartesian, polar, and surface folding transformations of the minutiae positions. It is demonstrated through multiple experiments that we can achieve revocability and prevent cross-matching of biometric databases. It is also shown that the transforms are noninvertible by demonstrating that recovering the original biometric identifier from a transformed version is computationally as hard as random guessing. Based on these empirical results and a theoretical analysis, we conclude that feature-level cancelable biometric construction is practicable in large biometric deployments.
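
A minimal sketch in the spirit of the Cartesian transform, not the paper's exact construction: minutiae coordinates are scrambled by a key-seeded permutation of grid blocks, so issuing a new key yields a new, revocable template. The grid and image sizes are illustrative assumptions.

import numpy as np

def cancelable_transform(minutiae_xy, key, image_size=512, grid=8):
    rng = np.random.default_rng(key)             # the revocable transformation "key"
    cell = image_size // grid
    perm = rng.permutation(grid * grid)          # key-dependent shuffle of grid blocks
    transformed = []
    for x, y in minutiae_xy:
        block = (int(y) // cell) * grid + (int(x) // cell)
        new_block = perm[block]
        dx, dy = x % cell, y % cell              # preserve the offset within the block
        transformed.append(((new_block % grid) * cell + dx,
                            (new_block // grid) * cell + dy))
    return np.asarray(transformed)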


Subject(s)
Artificial Intelligence; Biometry/methods; Dermatoglyphics/classification; Fingers/anatomy & histology; Image Interpretation, Computer-Assisted/methods; Pattern Recognition, Automated/methods; Subtraction Technique; Algorithms; Humans; Reproducibility of Results; Sensitivity and Specificity; Signal Processing, Computer-Assisted
5.
IEEE Trans Image Process ; 23(12): 5654-69, 2014 Dec.
Article in English | MEDLINE | ID: mdl-25314702

ABSTRACT

Face recognition algorithms are generally trained for matching high-resolution images, and they perform well on test data of similar resolution. However, the performance of such systems degrades when a low-resolution face image captured in unconstrained settings, such as video from surveillance cameras, is matched with high-resolution gallery images. The primary challenge here is to extract discriminating features from the limited biometric content of low-resolution images and match them to information-rich high-resolution face images. The problem of cross-resolution face matching is further aggravated when there is limited labeled positive data for training face recognition algorithms. In this paper, the problem of cross-resolution face matching is addressed, where low-resolution images are matched with a high-resolution gallery. A co-transfer learning framework is proposed, which is a cross-pollination of the transfer learning and co-training paradigms, and it is applied to cross-resolution face matching. The transfer learning component transfers the knowledge learnt while matching high-resolution face images during training to the matching of low-resolution probe images with the high-resolution gallery during testing. The co-training component, in turn, facilitates this transfer of knowledge by assigning pseudo-labels to unlabeled probe instances in the target domain. The amalgamation of these two paradigms in the proposed ensemble framework enhances the performance of cross-resolution face recognition. Experiments on multiple face databases show the efficacy of the proposed algorithm in comparison with existing algorithms and a commercial system. In addition, several high-profile real-world cases are used to demonstrate the usefulness of the proposed approach in addressing tough challenges.
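
A minimal sketch of the co-training half of the idea, not the paper's full co-transfer learning framework: two generic classifiers trained on labeled data pseudo-label the unlabeled probes they are confident about and fold them back into the labeled pool. The sklearn models stand in for the actual face matchers, and the confidence threshold is an illustrative assumption.

import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier

def co_train(X_lab, y_lab, X_unlab, rounds=3, conf=0.9):
    clf_a, clf_b = SVC(probability=True), RandomForestClassifier()
    for _ in range(rounds):
        clf_a.fit(X_lab, y_lab)
        clf_b.fit(X_lab, y_lab)
        for clf in (clf_a, clf_b):
            if len(X_unlab) == 0:
                break
            proba = clf.predict_proba(X_unlab)
            sure = proba.max(axis=1) >= conf     # pseudo-label only confident probes
            if sure.any():
                X_lab = np.vstack([X_lab, X_unlab[sure]])
                y_lab = np.concatenate([y_lab, clf.classes_[proba[sure].argmax(axis=1)]])
                X_unlab = X_unlab[~sure]
    return clf_a, clf_b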


Subject(s)
Artificial Intelligence; Biometric Identification/methods; Face/anatomy & histology; Image Processing, Computer-Assisted/methods; Algorithms; Databases, Factual; Humans
6.
IEEE Trans Pattern Anal Mach Intell ; 33(9): 1877-93, 2011 Sep.
Article in English | MEDLINE | ID: mdl-21339529

ABSTRACT

Noncontact biometrics such as face and iris have additional benefits over contact-based biometrics such as fingerprint and hand geometry. However, three important challenges need to be addressed in a noncontact biometrics-based authentication system: the ability to handle unconstrained acquisition, robust and accurate matching, and privacy enhancement without compromising security. In this paper, we propose a unified framework based on random projections and sparse representations that can simultaneously address all three issues in relation to iris biometrics. Our proposed quality measure can handle segmentation errors and a wide variety of possible artifacts during iris acquisition. We demonstrate how the proposed approach can be easily extended to handle alignment variations and recognition from iris videos, resulting in a robust and accurate system. The proposed approach also enhances privacy and security by providing ways to create cancelable iris templates. Results on public data sets show significant benefits of the proposed approach.
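
A minimal sketch of the two building blocks named in the abstract, not the full system: a key-seeded random projection yields a cancelable template, and sparse coding against a gallery dictionary picks the identity whose atoms best reconstruct the probe. The dimensions and sparsity level are illustrative assumptions.

import numpy as np
from sklearn.linear_model import orthogonal_mp

def random_projection(feature, key, out_dim=256):
    rng = np.random.default_rng(key)             # a new key gives a new, revocable template
    P = rng.standard_normal((out_dim, feature.size)) / np.sqrt(out_dim)
    return P @ feature

def identify(probe, gallery, labels, n_nonzero=10):
    # gallery: (out_dim, n_templates) dictionary of projected enrollment templates
    coef = orthogonal_mp(gallery, probe, n_nonzero_coefs=n_nonzero)
    classes = np.unique(labels)
    residuals = [np.linalg.norm(probe - gallery[:, labels == c] @ coef[labels == c])
                 for c in classes]
    return classes[int(np.argmin(residuals))]    # identity with the smallest residual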


Subject(s)
Biometric Identification/methods; Image Processing, Computer-Assisted/methods; Iris/anatomy & histology; Algorithms; Databases, Factual; Humans