Abstract
Network pruning has been proposed as a remedy for alleviating the over-parameterization problem of deep neural networks. However, its value has recently been challenged, especially from the perspective of neural architecture search (NAS). We challenge the conventional wisdom of pruning after training by proposing a joint search-and-training approach that directly learns a compact network from scratch. By treating pruning as a search strategy, we present two new insights in this paper: 1) it is possible to expand the search space of network pruning by associating each filter with a learnable weight; 2) joint search-and-training can be conducted iteratively to maximize learning efficiency. More specifically, we propose a coarse-to-fine tuning strategy to iteratively sample and update compact sub-networks that approximate the target network. The weights associated with network filters are updated accordingly by joint search-and-training to reflect the learned knowledge in the NAS space. Moreover, we introduce strategies of random perturbation (inspired by Monte Carlo methods) and flexible thresholding (inspired by reinforcement learning) to adjust the weight and size of each layer. Extensive experiments on ResNet and VGGNet demonstrate the superior performance of our proposed method on popular datasets including CIFAR10, CIFAR100 and ImageNet.
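The sketch below illustrates the core idea from the abstract in PyTorch: each convolutional filter is paired with a learnable weight (gate), a compact sub-network is sampled by thresholding those gates, and the gates are updated jointly with the network weights. It is a minimal illustration under our own assumptions, not the authors' released implementation; names such as `GatedConv2d`, `gate_threshold`, and `perturb_gates` are hypothetical.

```python
# Minimal sketch (NOT the paper's official code) of filter-wise learnable
# weights with thresholding for sampling a compact sub-network.
import torch
import torch.nn as nn

class GatedConv2d(nn.Module):
    """Conv layer whose output channels are scaled by learnable filter gates."""
    def __init__(self, in_ch, out_ch, kernel_size, **kwargs):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, **kwargs)
        # One learnable weight per filter, as described in the abstract.
        self.gates = nn.Parameter(torch.ones(out_ch))

    def forward(self, x, gate_threshold=None):
        y = self.conv(x)
        g = self.gates
        if gate_threshold is not None:
            # Sample a compact sub-network: filters whose gates fall below the
            # (possibly layer-wise, adaptively chosen) threshold are masked out.
            g = g * (g.abs() >= gate_threshold).float()
        return y * g.view(1, -1, 1, 1)

def perturb_gates(layer, std=0.01):
    # Monte-Carlo-style random perturbation of the filter gates (a guess at the
    # spirit of the paper's perturbation strategy, not its exact formulation).
    with torch.no_grad():
        layer.gates.add_(torch.randn_like(layer.gates) * std)

# Usage: alternate between (1) perturbing/thresholding gates to sample a
# sub-network and (2) training its weights jointly with the gates.
layer = GatedConv2d(3, 16, 3, padding=1)
x = torch.randn(2, 3, 32, 32)
perturb_gates(layer)
out = layer(x, gate_threshold=0.5)
loss = out.pow(2).mean()
loss.backward()  # gradients flow to both the conv weights and the gates
```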
Paper & Code & Demo
Experimental Results
CIFAR Results:

ImageNet Results:

Citation
@inproceedings{lu2020beyond,
  title={Beyond network pruning: a joint search-and-training approach},
  author={Lu, Xiaotong and Huang, Han and Dong, Weisheng and Li, Xin and Shi, Guangming},
  booktitle={International Joint Conference on Artificial Intelligence},
  year={2020}
}
Contact
Xiaotong Lu, Email: xiaotonglu47@gmail.com
Weisheng Dong, Email: wsdong@mail.xidian.edu.cn
Xin Li, Email: xin.li@mail.wvu.edu
Han Huang, Email: hanhuang8264@gmail.com
Guangming Shi, Email: gmshi@xidian.edu.cn