1.
Neural Netw; 144: 614-626, 2021 Dec.
Article in English | MEDLINE | ID: mdl-34653719

ABSTRACT

Pruning methods that compress and accelerate deep convolutional neural networks (CNNs) have recently attracted growing attention, with a view to deploying pruned networks on resource-constrained hardware devices. However, most existing methods explore pruning at small granularities, such as individual weights, kernels and filters. Achieving a high compression ratio with little performance loss at these granularities therefore requires iteratively pruning the whole network. To address this issue, we theoretically analyze the relationship between activation and gradient sparsity and channel saliency. Based on our findings, we propose a novel and effective weak sub-network pruning (WSP) method. Specifically, for a well-trained network, we divide the whole compression process into two non-iterative stages. The first stage directly obtains a strong sub-network by pruning the weakest one: we identify the less important channels across all layers and determine the weakest sub-network, in which each selected channel makes a minimal contribution to both the feed-forward and feed-backward processes. A one-shot pruning strategy then removes these channels to form a strong sub-network ready for fine-tuning, which significantly reduces the impact of network depth and width on compression efficiency, especially for deep and wide architectures. The second stage globally fine-tunes the strong sub-network for several epochs to restore its original recognition accuracy. Furthermore, our method operates on fully-connected as well as convolutional layers for simultaneous compression and acceleration. Comprehensive experiments on VGG16 and ResNet-50 over a variety of popular benchmarks, such as ImageNet-1K, CIFAR-10, CUB-200 and PASCAL VOC, demonstrate that WSP achieves superior performance on classification, domain adaptation and object detection tasks with a small model size. Our source code is available at https://github.com/QingbeiGuo/WSP.git.
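
To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of one-shot channel pruning driven by a saliency score computed from both the feed-forward activations and the feed-backward gradients. The |activation x gradient| scoring rule, the global quantile threshold and the helper names (score_channels, one_shot_prune) are illustrative assumptions, not the exact WSP criterion from the paper.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

def score_channels(model, inputs, targets, loss_fn):
    """Per-output-channel saliency of every Conv2d layer for one mini-batch."""
    saved, hooks = {}, []
    for m in model.modules():
        if isinstance(m, nn.Conv2d):
            def hook(mod, inp, out, store=saved):
                out.retain_grad()                      # keep the grad of this non-leaf tensor
                store[mod] = out
            hooks.append(m.register_forward_hook(hook))

    loss_fn(model(inputs), targets).backward()         # one forward/backward pass
    for h in hooks:
        h.remove()

    # channel contribution to both the forward and the backward pass (assumed rule)
    return {mod: (out.detach() * out.grad).abs().mean(dim=(0, 2, 3))
            for mod, out in saved.items()}

def one_shot_prune(scores, ratio=0.5):
    """Remove the globally weakest `ratio` fraction of channels in a single shot."""
    threshold = torch.quantile(torch.cat(list(scores.values())), ratio)
    for mod, s in scores.items():
        keep = (s >= threshold).float()                # 1 = keep channel, 0 = prune it
        prune.custom_from_mask(mod, name="weight",
                               mask=keep.view(-1, 1, 1, 1).expand_as(mod.weight))

After calling, e.g., one_shot_prune(score_channels(net, x, y, nn.CrossEntropyLoss()), ratio=0.3), the remaining strong sub-network would be fine-tuned globally for a few epochs, matching the second stage described above.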


Subject(s)
Data Compression; Neural Networks, Computer; Computers; Software
2.
Neural Netw; 132: 491-505, 2020 Dec.
Article in English | MEDLINE | ID: mdl-33039787

ABSTRACT

Although group convolution operators are increasingly used in deep convolutional neural networks to improve computational efficiency and reduce the number of parameters, most existing methods construct their group convolution architectures by a predefined partitioning of the filters of each convolutional layer into multiple regular, data-independent filter groups of equal size, which prevents a full exploitation of their potential. To tackle this issue, we propose a novel method for designing self-grouping convolutional neural networks, called SG-CNN, in which the filters of each convolutional layer group themselves according to the similarity of their importance vectors. Concretely, for each filter, we first evaluate the importance of its input channels to obtain its importance vector, and then group these vectors by clustering. Using the resulting data-dependent centroids, we prune the less important connections, which implicitly minimizes the accuracy loss of pruning and yields a set of diverse group convolution filters. Subsequently, we develop two fine-tuning schemes, (1) combined local and global fine-tuning and (2) global-only fine-tuning, which experimentally deliver comparable results, to recover the recognition capacity of the pruned network. Comprehensive experiments on the CIFAR-10/100 and ImageNet datasets demonstrate that our self-grouping convolution method adapts to various state-of-the-art CNN architectures, such as ResNet and DenseNet, and delivers superior performance in terms of compression ratio, speedup and recognition accuracy. We also demonstrate the ability of SG-CNN to generalize by transfer learning, including domain adaptation and object detection, with competitive results. Our source code is available at https://github.com/QingbeiGuo/SG-CNN.git.
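
As a rough illustration of the self-grouping step, the sketch below clusters each filter's importance vector over its input channels with k-means and then prunes, for every filter in a group, the connections its centroid rates weakest. The L1-norm importance, the fixed keep ratio and the function name self_group_prune are assumptions made for clarity rather than the paper's exact procedure.

import torch
import torch.nn as nn
import torch.nn.utils.prune as prune
from sklearn.cluster import KMeans

def self_group_prune(conv: nn.Conv2d, n_groups: int = 4, keep_ratio: float = 0.5):
    """Cluster per-filter importance vectors, then prune weak connections per group."""
    w = conv.weight.detach()                           # shape (out, in, kh, kw)
    importance = w.abs().sum(dim=(2, 3))               # (out, in): assumed L1 importance

    # filters group themselves by the similarity of their importance vectors
    km = KMeans(n_clusters=n_groups, n_init=10).fit(importance.cpu().numpy())
    labels = torch.as_tensor(km.labels_)
    centroids = torch.as_tensor(km.cluster_centers_, dtype=w.dtype)

    # per group, keep only the input-channel connections the centroid rates highest
    mask = torch.zeros_like(importance)
    k = max(1, int(keep_ratio * importance.size(1)))
    for g in range(n_groups):
        group_mask = torch.zeros(importance.size(1), dtype=w.dtype)
        group_mask[centroids[g].topk(k).indices] = 1.0
        mask[labels == g] = group_mask                 # broadcast over the group's filters

    prune.custom_from_mask(conv, name="weight",
                           mask=mask[:, :, None, None].expand_as(conv.weight))
    return labels                                      # group assignment of each filter

Applying this to every convolutional layer would yield irregular, data-dependent groups; local and/or global fine-tuning, as described in the abstract, then recovers the pruned network's recognition capacity.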


Subject(s)
Deep Learning; Data Compression/methods; Software