Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs

Radu, V., Kaszyk, K., Wen, Y., Turner, J., Cano, J., Crowley, E. J., Franke, B., Storkey, A. and O'Boyle, M. (2019) Performance Aware Convolutional Neural Network Channel Pruning for Embedded GPUs. In: 2019 IEEE International Symposium on Workload Characterization (IISWC), Orlando, FL, USA, 03-05 Nov 2019, pp. 24-34. ISBN 9781728140452 (doi: 10.1109/IISWC47752.2019.9042000)

203845.pdf - Accepted Version (2MB)
Abstract

Convolutional Neural Networks (CNNs) are becoming a common presence in many applications and services due to their superior recognition accuracy. They are increasingly deployed on mobile devices, often simply by porting large models designed for the server, although several model compression techniques have been considered. One model compression technique intended to reduce computation is channel pruning. Mobile and embedded systems now include GPUs, which are well suited to the parallel computations of neural networks and offer a lower energy cost per operation. Specialized libraries perform these neural network computations through highly optimized routines. As we find in our experiments, these libraries are optimized for the most common network shapes, making uninstructed channel pruning inefficient. We evaluate higher-level libraries that analyze the input characteristics of a convolutional layer and, based on these, produce optimized OpenCL (Arm Compute Library and TVM) or CUDA (cuDNN) code. In practice, however, these characteristics and the subsequent optimization choices can have the opposite effect. We show that reducing the number of convolutional channels, pruning 12% of the initial size, is in some cases detrimental to performance, leading to a 2× slowdown. On the other hand, we also find examples where performance-aware pruning achieves the intended results, with speedups of 3× with cuDNN and above 10× with Arm Compute Library and TVM. Our findings expose the need for hardware-instructed neural network pruning.
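To illustrate the channel pruning the abstract discusses, here is a minimal sketch of pruning a convolutional layer's output channels by L1-norm saliency. This is one common criterion, used for illustration only; the paper's own pruning method and performance-aware channel selection are not reproduced here, and the shapes and ratio below are hypothetical.

```python
import numpy as np

def prune_channels(weights, keep_ratio):
    """Rank a conv layer's output channels by the L1 norm of their
    filters and keep only the top fraction (illustrative criterion).

    weights: array of shape (out_channels, in_channels, kH, kW)
    keep_ratio: fraction of output channels to retain
    Returns the pruned weight tensor and the kept channel indices.
    """
    out_channels = weights.shape[0]
    n_keep = max(1, int(round(out_channels * keep_ratio)))
    # L1 norm of each output channel's filters as a saliency score
    scores = np.abs(weights).reshape(out_channels, -1).sum(axis=1)
    # Indices of the highest-scoring channels, restored to layer order
    keep = np.sort(np.argsort(scores)[-n_keep:])
    return weights[keep], keep

# Example: a layer with 64 output channels pruned to 56,
# i.e. removing ~12% of the channels, the level discussed above.
rng = np.random.default_rng(0)
w = rng.standard_normal((64, 32, 3, 3))
pruned, kept = prune_channels(w, keep_ratio=0.875)
print(pruned.shape)  # (56, 32, 3, 3)
```

Note that, as the paper's results show, shrinking channel counts this way does not automatically translate into faster inference: whether the pruned shape maps onto an optimized routine in the target library (Arm Compute Library, TVM, cuDNN) determines the actual speedup or slowdown.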

Item Type:Conference Proceedings
Keywords:Convolutional neural networks, channel pruning, embedded GPU.
Status:Published
Refereed:Yes
Glasgow Author(s) Enlighten ID:Cano Reyes, Dr Jose
Authors: Radu, V., Kaszyk, K., Wen, Y., Turner, J., Cano, J., Crowley, E. J., Franke, B., Storkey, A., and O’Boyle, M.
College/School:College of Science and Engineering > School of Computing Science
ISBN:9781728140452
Published Online:19 March 2020
Copyright Holders:Copyright © 2019 IEEE
First Published:First published in 2019 IEEE International Symposium on Workload Characterization (IISWC): 24-34
Publisher Policy:Reproduced in accordance with the publisher copyright policy
