Results 1 - 3 of 3
1.
Entropy (Basel); 24(3), 2022 Feb 28.
Article in English | MEDLINE | ID: mdl-35327860

ABSTRACT

As state-of-the-art deep neural networks are deployed at the core of an increasingly large number of AI-based products and services, the incentive for "copying them" (i.e., their intellectual property, manifested through the knowledge encapsulated in them), whether by adversaries or commercial competitors, is expected to grow considerably over time. The most efficient way to extract or steal knowledge from such networks is to query them with a large dataset of random samples and record their outputs, and then train a student network that aims to eventually mimic these outputs, without making any assumptions about the original networks. The most effective way to protect against such a mimicking attack is to answer queries with the classification result only, omitting the confidence values associated with the softmax layer. In this paper, we present a novel method for generating composite images for attacking a mentor neural network using a student model. Our method assumes no information regarding the mentor's training dataset, architecture, or weights. Furthermore, assuming no information regarding the mentor's softmax output values, our method successfully mimics the given neural network and is capable of stealing large portions (and sometimes all) of its encapsulated knowledge. Our student model achieved 99% relative accuracy with respect to the protected mentor model on the CIFAR-10 test set. In addition, we demonstrate that our student network (which copies the mentor) is impervious to watermarking protection methods and would therefore evade detection as a stolen model by existing dedicated techniques. Our results imply that all current neural networks are vulnerable to mimicking attacks, even if they divulge nothing but the most basic required output, and that a student model that mimics them cannot be easily detected using currently available techniques.
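As a rough illustration of the label-only mimicking setup described in this abstract (not the paper's composite-image method), the following PyTorch sketch trains a student network on nothing but the argmax labels returned by a black-box mentor; the names `mentor`, `student`, and `query_loader` are placeholders introduced here, not artifacts from the paper.

```python
# Minimal sketch of a hard-label "mimicking" (model-extraction) attack.
# Assumption: `mentor` and `student` are nn.Module classifiers and
# `query_loader` yields batches of query images (their labels are unused).
import torch
import torch.nn.functional as F

def mimic(mentor, student, query_loader, epochs=10, lr=1e-3, device="cpu"):
    """Train `student` to reproduce `mentor`'s predicted classes only."""
    mentor.to(device).eval()
    student.to(device).train()
    optimizer = torch.optim.Adam(student.parameters(), lr=lr)
    for _ in range(epochs):
        for queries, _ in query_loader:           # query-set labels are ignored
            queries = queries.to(device)
            with torch.no_grad():
                # Black-box access: only the predicted class is revealed,
                # no softmax confidence values.
                hard_labels = mentor(queries).argmax(dim=1)
            loss = F.cross_entropy(student(queries), hard_labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return student
```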

2.
Entropy (Basel); 21(3), 2019 Feb 26.
Article in English | MEDLINE | ID: mdl-33266936

ABSTRACT

In recent years, large datasets of high-resolution mammalian neural images have become available, which has prompted active research on the analysis of gene expression data. Traditional image processing methods are typically applied to learn functional representations of genes based on their expression in these brain images. In this paper, we describe a novel end-to-end deep learning-based method for generating compact, translation-invariant representations of in situ hybridization (ISH) images. In contrast to traditional image processing methods, our method relies on deep convolutional denoising autoencoders (CDAE) to process raw pixel inputs and generate the desired compact image representations. We provide an in-depth description of our deep learning-based approach and present extensive experimental results, demonstrating that representations extracted by the CDAE can help learn features of functional gene ontology categories and classify them highly accurately. Our method improves the previous state-of-the-art classification rate (Liscovitch et al.) from an average AUC of 0.92 to 0.997, i.e., a 96% reduction in error rate. Furthermore, the representation vectors generated by our method are more compact than those of previous state-of-the-art methods, allowing for a more efficient high-level representation of images. These results are obtained with significantly downsampled images in comparison to the original high-resolution ones, further underscoring the robustness of our proposed method.
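For readers unfamiliar with the CDAE building block this abstract refers to, here is a minimal PyTorch sketch of a convolutional denoising autoencoder; the layer widths, noise level, and training objective below are illustrative assumptions, not the paper's configuration.

```python
# Sketch of a convolutional denoising autoencoder (CDAE): corrupt the input
# with Gaussian noise, encode it to a compact code, and reconstruct the clean image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CDAE(nn.Module):
    def __init__(self, in_channels=1, latent_channels=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, latent_channels, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(latent_channels, 32, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, in_channels, kernel_size=3, stride=2,
                               padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x, noise_std=0.1):
        noisy = x + noise_std * torch.randn_like(x)   # corrupt the input
        code = self.encoder(noisy)                    # compact representation
        return self.decoder(code), code               # reconstruction + code

# One illustrative training step on a dummy batch of grayscale patches:
model = CDAE()
images = torch.rand(8, 1, 64, 64)
recon, code = model(images)
loss = F.mse_loss(recon, images)   # reconstruct the clean image from the noisy one
loss.backward()
```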

3.
Sci Rep; 10(1): 12959, 2020 Jul 31.
Article in English | MEDLINE | ID: mdl-32737327

ABSTRACT

We describe the application of deep learning methodology to the recognition of corals in a shallow reef in the Gulf of Eilat, Red Sea. This project aims to apply deep neural network analysis, based on thousands of underwater images, to the automatic recognition of some common species among the 100 species reported to be found in the Eilat coral reefs. This is a challenging task, since even within the same colony, corals exhibit significant within-species morphological variability, depending on age, depth, current, light, geographic location, and inter-specific competition. Since deep learning procedures are based on photographic images, the task is further challenged by image quality, distance from the object, angle of view, and light conditions. We produced a large dataset of over 5,000 coral images that were classified into 11 species in the present automated deep learning classification scheme. We demonstrate the efficiency and reliability of the method compared to painstaking manual classification. Specifically, we demonstrate that the method is readily adaptable to include additional species, thereby providing an excellent tool for future studies in the region: it would allow real-time monitoring of the detrimental effects of global climate change and anthropogenic impacts on the coral reefs of the Gulf of Eilat and elsewhere, and would help assess the success of various bioremediation efforts.
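A hedged sketch of the generic workflow such a coral classifier could follow (fine-tuning a pretrained CNN on an 11-class image dataset); the backbone choice, augmentations, and all identifiers below are illustrative assumptions, not the authors' actual pipeline.

```python
# Illustrative fine-tuning setup for an 11-class coral species classifier
# (requires torchvision >= 0.13 for the weights API).
import torch.nn as nn
from torchvision import models, transforms

NUM_SPECIES = 11  # number of coral species classes reported in the abstract

def build_coral_classifier():
    # Start from an ImageNet-pretrained backbone and replace the final layer.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, NUM_SPECIES)
    return model

# Typical augmentations for underwater imagery (illustrative choices only),
# meant to mimic varying distance, viewing angle, and light conditions.
train_transforms = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.ToTensor(),
])
```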
