Results 1 - 7 of 7
1.
J Pharm Bioallied Sci ; 15(Suppl 2): S1270-S1273, 2023 Jul.
Article in English | MEDLINE | ID: mdl-37694027

ABSTRACT

Aim: To study microbial adhesion on different orthodontic brackets (conventional, ceramic, and self-ligating). Materials and Methods: Three bracket systems, self-ligating, conventional, and ceramic, were used, with 10 patients per system. Of the 30 patients, 20 were treated with conventional or ceramic brackets; in these patients, steel ligature ties were placed in one half of the mouth and elastomeric rings in the other half. Swabs were collected from the central incisors and first premolars on both the right and left sides of the maxillary and mandibular arches. Samples were collected from these teeth three times: before bracket placement, and one and three months after placement. Result: Significant variations were found between pretreatment values and those at one and three months after bracket placement in all three groups. A significant increase in the adhesion of both aerobic and anaerobic bacteria was seen with conventional brackets from pretreatment to one and three months after placement, with anaerobic bacteria forming more colonies than aerobic bacteria. Conclusion: Our study indicates that self-ligating brackets are the most hygienic and should be preferred in patients with poor oral hygiene. We also found that steel ligatures are more suitable than elastomeric ligatures with both conventional and ceramic brackets.

2.
IEEE Trans Pattern Anal Mach Intell ; 43(9): 3154-3166, 2021 Sep.
Article in English | MEDLINE | ID: mdl-32149623

ABSTRACT

Deep neural networks can easily be fooled by an adversary that adds minuscule perturbations to an input image. Existing defense techniques suffer greatly under white-box attack settings, where an adversary has full knowledge of the network and can iterate several times to find strong perturbations. We observe that the main reason for the existence of such vulnerabilities is the close proximity of different class samples in the learned feature space of deep models, which allows the model's decisions to be completely changed by adding an imperceptible perturbation to the inputs. To counter this, we propose to disentangle the intermediate feature representations of deep networks class-wise, specifically forcing the features of each class to lie inside a convex polytope that is maximally separated from the polytopes of other classes. In this manner, the network is forced to learn distinct and distant decision regions for each class. We observe that this simple constraint on the features greatly enhances the robustness of learned models, even against the strongest white-box attacks, without degrading the classification performance on clean images. We report extensive evaluations in both black-box and white-box attack scenarios and show significant gains in comparison to state-of-the-art defenses.
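As a rough illustration of the general idea, the sketch below pulls each sample's intermediate features toward a per-class prototype while keeping the nearest other-class prototype at least a margin away. The prototype tensor, margin value, and loss weighting are assumptions for illustration, not the authors' exact formulation.

```python
import torch
import torch.nn.functional as F

def class_separation_loss(features, labels, prototypes, margin=10.0):
    """Pull each sample's feature vector toward its own class prototype and keep
    the nearest other-class prototype at least `margin` farther away (a simple
    proxy for maximally separated per-class feature regions)."""
    # features: (B, D) intermediate activations; prototypes: (C, D) learnable
    dists = torch.cdist(features, prototypes)                    # (B, C) distances
    pos = dists.gather(1, labels.unsqueeze(1)).squeeze(1)        # own-class distance
    other = dists.masked_fill(
        F.one_hot(labels, prototypes.size(0)).bool(), float("inf"))
    neg = other.min(dim=1).values                                # nearest other class
    return (pos + F.relu(margin + pos - neg)).mean()

# Added to the usual classification loss during training, e.g.:
# loss = F.cross_entropy(logits, y) + 0.1 * class_separation_loss(feats, y, protos)
```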

3.
Article in English | MEDLINE | ID: mdl-31545722

ABSTRACT

Convolutional Neural Networks have achieved significant success across multiple computer vision tasks. However, they are vulnerable to carefully crafted, human-imperceptible adversarial noise patterns, which constrains their deployment in critical, security-sensitive systems. This paper proposes a computationally efficient image enhancement approach that provides a strong defense mechanism to effectively mitigate the effect of such adversarial perturbations. We show that deep image restoration networks learn mapping functions that can bring off-the-manifold adversarial samples back onto the natural image manifold, thus restoring classification towards the correct classes. A distinguishing feature of our approach is that, in addition to providing robustness against attacks, it simultaneously enhances image quality and retains model performance on clean images. Furthermore, the proposed method neither modifies the classifier nor requires a separate mechanism to detect adversarial images. The effectiveness of the scheme has been demonstrated through extensive experiments, where it proved to be a strong defense in gray-box settings. The proposed scheme is simple and has the following advantages: (1) it does not require any model training or parameter optimization, (2) it complements other existing defense mechanisms, (3) it is agnostic to the attacked model and attack type, and (4) it provides superior performance across all popular attack algorithms. Our code is publicly available at https://github.com/aamir-mustafa/super-resolution-adversarial-defense.
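A minimal sketch of this kind of pre-processing defense is shown below: the (possibly adversarial) input is passed through a pretrained image-restoration network before classification. The `restorer` and `classifier` models are generic placeholders, not the pipeline from the linked repository.

```python
import torch

def purify_then_classify(images, restorer, classifier):
    """Map inputs back toward the natural-image manifold with a restoration
    network, then classify the restored images. The classifier itself is left
    unchanged and no adversarial-example detector is needed."""
    with torch.no_grad():
        restored = restorer(images)                  # e.g. denoising / super-resolution
        restored = torch.clamp(restored, 0.0, 1.0)   # keep a valid image range
        logits = classifier(restored)
    return logits.argmax(dim=1)
```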

4.
Neural Netw ; 110: 82-90, 2019 Feb.
Article in English | MEDLINE | ID: mdl-30504041

ABSTRACT

The big breakthrough on the ImageNet challenge in 2012 was partially due to the 'Dropout' technique used to avoid overfitting. Here, we introduce a new approach called 'Spectral Dropout' to improve the generalization ability of deep neural networks. We cast the proposed approach in the form of regular Convolutional Neural Network (CNN) weight layers using a decorrelation transform with fixed basis functions. Our spectral dropout method prevents overfitting by eliminating weak and 'noisy' Fourier-domain coefficients of the neural network activations, leading to remarkably better results than current regularization methods. Furthermore, the proposed approach is very efficient due to the fixed basis functions used for the spectral transformation. In particular, compared to Dropout and Drop-Connect, our method significantly speeds up the network convergence rate during the training process (roughly ×2), with considerably higher neuron pruning rates (an increase of ∼30%). We demonstrate that spectral dropout can also be used in conjunction with other regularization approaches, resulting in additional performance gains.
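The sketch below illustrates the underlying operation on a batch of activations: transform to the frequency domain, zero the weakest coefficients, and transform back. It uses a simple FFT magnitude threshold as an assumed stand-in; the fixed decorrelation basis and exact thresholding rule of the paper are not reproduced.

```python
import torch

def spectral_dropout(x, keep_ratio=0.9):
    """Drop the weakest frequency-domain coefficients of the activations `x`
    (shape (B, C, H, W)) and map the result back to the spatial domain."""
    X = torch.fft.fft2(x)                                  # complex spectrum per channel
    mags = X.abs().flatten(start_dim=1)                    # (B, C*H*W) magnitudes
    keep = int(keep_ratio * mags.size(1))
    # per-sample cutoff: the smallest magnitude still ranking in the top `keep`
    cutoff = mags.kthvalue(mags.size(1) - keep + 1, dim=1).values
    mask = (X.abs() >= cutoff.view(-1, 1, 1, 1)).to(X.dtype)
    return torch.fft.ifft2(X * mask).real
```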


Subject(s)
Deep Learning , Neural Networks, Computer , Deep Learning/trends
5.
IEEE Trans Neural Netw Learn Syst ; 29(8): 3573-3587, 2018 Aug.
Article in English | MEDLINE | ID: mdl-28829320

ABSTRACT

Class imbalance is a common problem in the case of real-world object detection and classification tasks. Data of some classes are abundant, making them an overrepresented majority, and data of other classes are scarce, making them an underrepresented minority. This imbalance makes it challenging for a classifier to appropriately learn the discriminating boundaries of the majority and minority classes. In this paper, we propose a cost-sensitive (CoSen) deep neural network, which can automatically learn robust feature representations for both the majority and minority classes. During training, our learning procedure jointly optimizes the class-dependent costs and the neural network parameters. The proposed approach is applicable to both binary and multiclass problems without any modification. Moreover, as opposed to data-level approaches, we do not alter the original data distribution, which results in a lower computational cost during the training process. We report the results of our experiments on six major image classification data sets and show that the proposed approach significantly outperforms the baseline algorithms. Comparisons with popular data sampling techniques and CoSen classifiers demonstrate the superior performance of our proposed method.
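As a rough illustration of the idea (not the paper's joint optimization of class-dependent costs and network parameters), the sketch below scales each sample's cross-entropy loss by a cost assigned to its class, so mistakes on minority classes are penalized more heavily; the inverse-frequency costs in the comment are an assumed starting point.

```python
import torch
import torch.nn.functional as F

def cost_sensitive_ce(logits, targets, class_costs):
    """Cross-entropy where each sample's loss is scaled by the cost assigned to
    its ground-truth class, emphasizing underrepresented classes."""
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    return (per_sample * class_costs[targets]).mean()

# A simple, frequency-based choice of costs (one of many possibilities):
# class_costs = counts.sum() / (counts.numel() * counts.float())
```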

6.
IEEE Trans Pattern Anal Mach Intell ; 38(3): 431-446, 2016 Mar.
Article in English | MEDLINE | ID: mdl-27046489

ABSTRACT

We present a framework to automatically detect and remove shadows in real-world scenes from a single image. Previous work on shadow detection put a lot of effort into designing shadow-variant and shadow-invariant hand-crafted features. In contrast, our framework automatically learns the most relevant features in a supervised manner using multiple convolutional deep neural networks (ConvNets). The features are learned at the super-pixel level and along the dominant boundaries in the image. The predicted posteriors based on the learned features are fed to a conditional random field model to generate smooth shadow masks. Using the detected shadow masks, we propose a Bayesian formulation to accurately extract the shadow matte and subsequently remove shadows. The Bayesian formulation is based on a novel model that accurately captures the shadow generation process in the umbra and penumbra regions. The model parameters are efficiently estimated using an iterative optimization procedure. Our proposed framework consistently performed better than the state of the art on all major shadow databases collected under a variety of conditions.
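For the removal step only, a minimal sketch is shown below under a simple multiplicative shadow model, I_shadowed = matte * I_shadow-free, which is an assumption standing in for the paper's full Bayesian formulation; the per-pixel matte is taken as already estimated.

```python
import numpy as np

def apply_shadow_matte(image, matte, eps=1e-6):
    """Relight umbra/penumbra pixels by dividing an (H, W, 3) image by a
    per-pixel shadow matte in (0, 1], where 1 means unshadowed."""
    matte = np.clip(matte, eps, 1.0)                          # avoid division by zero
    restored = image.astype(np.float32) / matte[..., None]    # broadcast over channels
    return np.clip(restored, 0.0, 255.0).astype(np.uint8)
```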

7.
IEEE Trans Image Process ; 25(7): 3372-3383, 2016 Jul.
Article in English | MEDLINE | ID: mdl-28113718

ABSTRACT

Indoor scene recognition is a multi-faceted and challenging problem due to the diverse intra-class variations and the confusing inter-class similarities that characterize such scenes. This paper presents a novel approach that exploits rich mid-level convolutional features to categorize indoor scenes. Traditional convolutional features retain the global spatial structure, which is a desirable property for general object recognition. We, however, argue that the structure-preserving property of the convolutional neural network activations is not of substantial help in the presence of large variations in scene layouts, e.g., in indoor scenes. We propose to transform the structured convolutional activations to another highly discriminative feature space. The representation in the transformed space not only incorporates the discriminative aspects of the target data set but also encodes the features in terms of the general object categories that are present in indoor scenes. To this end, we introduce a new large-scale data set of 1300 object categories that are commonly present in indoor scenes. Our proposed approach achieves a significant performance boost over the previous state-of-the-art approaches on five major scene classification data sets.
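One simple way to picture such a transformation is sketched below: each spatial location of a convolutional feature map is compared against a bank of object-category prototypes and max-pooled over space, yielding an order-less descriptor. The prototype bank is a placeholder; the paper's actual encoding and its 1300-category data set are not reproduced here.

```python
import torch
import torch.nn.functional as F

def midlevel_scene_descriptor(conv_maps, object_prototypes):
    """Turn structured conv activations (B, C, H, W) into an order-less (B, K)
    descriptor of similarities to K object-category prototypes of dimension C."""
    B, C, H, W = conv_maps.shape
    locs = F.normalize(conv_maps.permute(0, 2, 3, 1).reshape(B, H * W, C), dim=-1)
    protos = F.normalize(object_prototypes, dim=-1)           # (K, C)
    sims = locs @ protos.t()                                   # (B, H*W, K) cosine sims
    return sims.max(dim=1).values                              # max-pool over locations
```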
