1.
Article in English | MEDLINE | ID: mdl-38743536

ABSTRACT

Deep neural networks (DNNs) provide state-of-the-art accuracy for vision tasks, but they require significant resources for training. They are therefore trained on cloud servers far from the edge devices that acquire the data, which increases communication cost and runtime and raises privacy concerns. In this study, a novel hierarchical training method for DNNs is proposed that uses early exits in an architecture split between edge and cloud workers to reduce communication cost, training runtime, and privacy concerns. The method introduces a new use of early exits: separating the backward pass of the neural network between the edge and the cloud during the training phase. Most available methods, owing to the sequential nature of the training phase, either cannot train the levels of the hierarchy simultaneously or do so at the cost of compromising privacy. In contrast, our method can use both edge and cloud workers simultaneously, does not share the raw input data with the cloud, and does not require communication during the backward pass. Simulations and on-device experiments on several neural network architectures demonstrate the effectiveness of the method. When communication with the cloud takes place over a low-bit-rate channel, the proposed method reduces the training runtime for the VGG-16 and ResNet-18 architectures by 29% and 61% in CIFAR-10 classification and by 25% and 81% in Tiny ImageNet classification, respectively, while the accuracy drop is negligible. This method is advantageous for online learning of high-accuracy DNNs on low-resource, sensor-equipped devices such as mobile phones or robots operating as part of an edge-cloud system, making them more flexible when facing new tasks and classes of data.
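The split training idea described in this abstract can be sketched in a few lines of PyTorch. The sketch below is a minimal illustration under assumed choices (the split point, the module names edge_backbone/exit_head/cloud_backbone, SGD with cross-entropy, and the layer sizes are all hypothetical), not the paper's implementation; what it shows is the core mechanism implied by the abstract: the edge trains its layers from a local early-exit loss, while the cloud trains the remaining layers on detached activations, so no gradient traffic flows back to the edge.

```python
# Sketch: early-exit split training between an edge worker and a cloud worker.
# Module names, sizes, and optimizers are illustrative assumptions.
import torch
import torch.nn as nn

edge_backbone = nn.Sequential(            # layers that stay on the edge device
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(8))
exit_head = nn.Sequential(                # early-exit classifier on the edge
    nn.Flatten(), nn.Linear(32 * 8 * 8, 10))
cloud_backbone = nn.Sequential(           # remaining layers hosted on the cloud
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10))

opt_edge = torch.optim.SGD(
    list(edge_backbone.parameters()) + list(exit_head.parameters()), lr=0.01)
opt_cloud = torch.optim.SGD(cloud_backbone.parameters(), lr=0.01)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    # Edge side: forward to the split point and update locally via the early exit.
    feats = edge_backbone(images)
    edge_loss = criterion(exit_head(feats), labels)
    opt_edge.zero_grad(); edge_loss.backward(); opt_edge.step()

    # Cloud side: receives detached features (not raw images) plus labels.
    # detach() severs the autograd graph, so the cloud's backward pass
    # never needs to send gradients back over the network.
    sent = feats.detach()
    cloud_loss = criterion(cloud_backbone(sent), labels)
    opt_cloud.zero_grad(); cloud_loss.backward(); opt_cloud.step()
    return edge_loss.item(), cloud_loss.item()

# Hypothetical usage on random data, just to exercise one step:
x = torch.randn(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
print(train_step(x, y))
```

Because the two backward passes are independent, the edge and cloud updates can in principle run concurrently, which is the source of the runtime savings the abstract reports.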

2.
IEEE Trans Image Process; 25(5): 2275-87, 2016 May.
Article in English | MEDLINE | ID: mdl-27019493

ABSTRACT

A crucial component of steerable wavelets is the radial profile of the generating function in the frequency domain. In this paper, we present an infinite-dimensional optimization scheme that finds the optimal profile for a given criterion over the space of tight frames. We consider two classes of criteria that measure the localization of the wavelet: the first specifies the spatial localization of the wavelet profile, and the second that of the resulting wavelet coefficients. From these metrics and the proposed algorithm, we construct tight wavelet frames that are optimally localized and provide their analytical expression. In particular, one of the considered criteria allows us to recover the popular Simoncelli wavelet profile. Finally, experiments on local orientation estimation, image reconstruction from detected contours in the wavelet domain, and denoising indicate that optimizing wavelet localization improves the performance of steerable wavelets: our new wavelets outperform the traditional ones.
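For reference, the sketch below evaluates the Simoncelli radial profile mentioned in the abstract and numerically checks the tight-frame (partition-of-unity) condition on its dyadic dilations. The functional form cos(pi/2 * log2(2w/pi)) on (pi/4, pi] is assumed from the standard steerable-pyramid literature, not taken from this paper.

```python
# Sketch: Simoncelli radial frequency profile and a tight-frame sanity check.
# The closed-form profile used here is an assumption from the steerable-pyramid
# literature, included only to illustrate what a "radial profile" is.
import numpy as np

def simoncelli_profile(omega):
    """h(w) = cos(pi/2 * log2(2w/pi)) on the band pi/4 < w <= pi, else 0."""
    omega = np.asarray(omega, dtype=float)
    h = np.zeros_like(omega)
    band = (omega > np.pi / 4) & (omega <= np.pi)
    h[band] = np.cos(0.5 * np.pi * np.log2(2.0 * omega[band] / np.pi))
    return h

# Tight-frame condition: the squared dilated profiles should sum to one,
# so that the frame is a Parseval (partition-of-unity) decomposition.
w = np.linspace(1e-3, np.pi, 2048)
total = sum(simoncelli_profile(2.0 ** j * w) ** 2 for j in range(-2, 13))
print(total.min(), total.max())  # both are ~1 across the sampled range
```

The optimization described in the abstract searches over all radial profiles satisfying this tight-frame constraint, rather than fixing one closed form in advance.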
