
In the compression experiments, we conducted comprehensive experiments and analyses on the VGG, ResNet, and DenseNet model families with the CIFAR-10 and CIFAR-100 datasets, to verify the generality of the method proposed in this thesis, and to observe how each model

國立政治大學 National Chengchi University

which in turn saves the resources and time that retraining would cost.

Finally, different task scenarios require recognizing different sets of classes. Without any retraining, can we select a few needed classes from an already trained large model, and then extract the neurons and connections that are meaningful for recognizing those classes to generate a sub-model? Such a sub-model would naturally have fewer parameters and FLOPs, so this can be seen as an indirect form of model compression, and it would let us dynamically adjust the recognized classes according to the situation and needs at hand.
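The class-selective sub-model idea above can be sketched with a toy fully connected network. This is a minimal NumPy sketch, not the thesis's method: the weights are random stand-ins for a trained model, and `selected_classes` and `keep_ratio` are illustrative names. It keeps only the output neurons of the selected classes, then retains the hidden neurons most strongly connected to them.

```python
import numpy as np

# Toy stand-in for a trained classifier: hidden layer (784 -> 128), output (128 -> 10).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(784, 128)), np.zeros(128)
W2, b2 = rng.normal(size=(128, 10)), np.zeros(10)

selected_classes = [3, 5]   # the classes the sub-model must still recognize
keep_ratio = 0.5            # fraction of hidden neurons to retain (illustrative)

# 1) Keep only the output neurons of the selected classes.
W2_sub = W2[:, selected_classes]
b2_sub = b2[selected_classes]

# 2) Score each hidden neuron by its total connection strength to the kept
#    outputs, and retain the strongest keep_ratio fraction of them.
scores = np.abs(W2_sub).sum(axis=1)
keep = np.sort(np.argsort(scores)[-int(len(scores) * keep_ratio):])

W1_sub, b1_sub = W1[:, keep], b1[keep]
W2_sub = W2_sub[keep, :]

def predict(x):
    """Return indices into selected_classes for a batch of inputs."""
    h = np.maximum(x @ W1_sub + b1_sub, 0)          # ReLU
    return np.argmax(h @ W2_sub + b2_sub, axis=1)

x = rng.normal(size=(4, 784))
print(W1_sub.shape, W2_sub.shape)   # (784, 64) (64, 2)
print(predict(x))
```

The extracted sub-model never needs gradient updates: it only slices existing weight matrices, which is why no retraining cost is incurred, at the price of discarding hidden neurons that mattered mainly for the dropped classes.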

[1] Song Han, Huizi Mao, William J. Dally. Deep Compression: Compressing Deep Neural Networks with Pruning, Trained Quantization and Huffman Coding. arXiv:1510.00149v5, Feb 2016.

[2] Hao Li, Asim Kadav, Igor Durdanovic, Hanan Samet, Hans Peter Graf. Pruning Filters for Efficient ConvNets. arXiv:1608.08710v3, Mar 2017.

[3] Forrest N. Iandola, Song Han, Matthew W. Moskewicz, Khalid Ashraf, William J. Dally, Kurt Keutzer. SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size. arXiv:1602.07360v4, Nov 2016.

[4] Andrew G. Howard, Menglong Zhu, Bo Chen, Dmitry Kalenichenko, Weijun Wang, Tobias Weyand, Marco Andreetto, Hartwig Adam. MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications. arXiv:1704.04861v1, Apr 2017.

[5] Alex Krizhevsky, Vinod Nair, Geoffrey Hinton. CIFAR-10 and CIFAR-100 datasets. https://www.cs.toronto.edu/~kriz/cifar.html, last visited on Jan 2018.

[6] Hung-yi Lee (李宏毅). [DSC 2016] Series: Understanding Deep Learning in One Day. https://www.slideshare.net/tw_dsconf/ss-62245351, last visited on Jan 2018.

[7] Yann LeCun, Corinna Cortes, Christopher J.C. Burges. THE MNIST DATABASE of handwritten digits. http://yann.lecun.com/exdb/mnist/, last visited on Oct 2018.

[8] ImageNet. http://www.image-net.org/, last visited on Jan 2018.

[9] ImageNet Large Scale Visual Recognition Competition (ILSVRC). http://www.image-net.org/challenges/LSVRC/, last visited on Jan 2018.

[10] Yuanqing Lin, Fengjun Lv, Shenghuo Zhu, Ming Yang, Timothee Cour, Kai Yu, Liangliang Cao, Thomas Huang. Large-scale image classification: Fast feature extraction and SVM training. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference, pages 1689-1696. IEEE, 2011.

[11] Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton. ImageNet Classification with Deep Convolutional Neural Networks. In Advances in neural information processing systems, pages 1097-1105, 2012.

[12] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, Andrew Rabinovich. Going Deeper with Convolutions. arXiv:1409.4842v1, Sep 2014.

[13] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. Deep Residual Learning for Image Recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 770-778, 2016.

[14] Kaiming He. Learning Deep Features for Visual Recognition. http://deeplearning.csail.mit.edu/cvpr2017_tutorial_kaiminghe.pdf, last visited on Oct 2018.

[15] Embedded Systems Developer Kits, Modules, & SDKs | NVIDIA Jetson. https://www.nvidia.com/en-us/autonomous-machines/embedded-systems-dev-kits-modules/, last visited on Oct 2018.

[16] Raspberry Pi. https://www.raspberrypi.org/, last visited on Oct 2018.

[17] Song Han, Jeff Pool, John Tran, William J. Dally. Learning both Weights and Connections for Efficient Neural Networks. arXiv:1506.02626v3, Oct 2015.

[18] Babajide O. Ayinde, Jacek M. Zurada. Building Efficient ConvNets using Redundant Feature Pruning. arXiv:1802.07653v1, Feb 2018.

[19] Karen Simonyan, Andrew Zisserman. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556v6, Apr 2015.

[20] Gao Huang, Zhuang Liu, Laurens van der Maaten, Kilian Q. Weinberger. Densely Connected Convolutional Networks. arXiv:1608.06993v5, Jan 2018.

[21] Sergey Ioffe, Christian Szegedy. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift. arXiv:1502.03167v3, Mar 2015.

[22] Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber. Highway Networks. arXiv:1505.00387v2, Nov 2015.

[23] TensorFlow. https://www.tensorflow.org/, last visited on Oct 2018.

[24] Keras: Deep Learning for humans. https://github.com/keras-team/keras, last visited on Oct


Appendix 1: Comparison of VGG16 channel counts before and after compression

Layer        CIFAR-10                        CIFAR-100
             Benchmark  Pruned-A  Pruned-B   Benchmark  Pruned-A  Pruned-B
Conv2d 1         64         25        24         64         33        33
Conv2d 2         64         57        50         64         63        63
Conv2d 3        128        128       128        128        123       123
Conv2d 4        128        128       128        128        128       128
Conv2d 5        256        186        90        256        227       120
Conv2d 6        256        242       140        256        255       210
Conv2d 7        256        240       118        256        256       231
Conv2d 8        512        379       173        512        502       436
Conv2d 9        512        228        76        512        512       482
Conv2d 10       512         72        34        512        504       345
Conv2d 11       512         19        51        512         94        94
Conv2d 12       512         30       101        512         63        63
Conv2d 13       512         74       286        512        165       165

Layer        CIFAR-10                          CIFAR-100
             Benchmark  Pruned-A*  Pruned-B*   Benchmark  Pruned-A*  Pruned-B*
Conv2d 1         16         16         16          16         16         16

Layer        CIFAR-10                          CIFAR-100
             Benchmark  Pruned-A*  Pruned-B*   Benchmark  Pruned-A*  Pruned-C*
Conv2d 1         16         16         16          16         16         16

Layer        CIFAR-10                        CIFAR-100
             Benchmark  Pruned-A  Pruned-B   Benchmark  Pruned-A  Pruned-B
Conv2d 1         24         24        24         24         24        24
Conv2d 28        12         11        11         12         12        12
Conv2d 29        12         11        11         12         12        12
Conv2d 30        12         10        10         12         12        12
Conv2d 31        12         11        11         12         12        12
Conv2d 32        12         11        11         12         12        12
Conv2d 33        12         12        12         12         12        12
Conv2d 34        12          9         9         12         12        12
Conv2d 35        12         10        10         12         12        12
Conv2d 36        12         10        10         12         12        12
Conv2d 37        12         12        12         12         12        12
Conv2d 38        12          8         8         12         12        12
Conv2d 39        12         11        11         12         12        12

Layer        CIFAR-10                        CIFAR-100
             Benchmark  Pruned-A  Pruned-B   Benchmark  Pruned-A  Pruned-B
Conv2d 1         24         24        24         24         24        24
Conv2d 28        48         47        47         48         46        46
Conv2d 29        12         12        12         12         12        12
Conv2d 30        48         48        48         48         47        48
Conv2d 31        12         12        12         12         12        12
Conv2d 32        48         48        48         48         48        48
Conv2d 33        12         12        12         12         12        12
Conv2d 34        48         48        48         48         48        48
Conv2d 35        12         12        12         12         12        12
Conv2d 36        48         48        48         48         48        48
Conv2d 37        12         12        12         12         12        12
Conv2d 38        48         48        48         48         48        48
Conv2d 39        12         12        12         12         12        12
