Image credit: Flickr

Low-Complexity Approximate Convolutional Neural Networks

Abstract

In this paper, we present an approach for minimizing the computational complexity of trained convolutional neural networks (ConvNets). The idea is to approximate all elements of a given ConvNet, replacing the original convolutional filters and parameters (pooling and bias coefficients, and the activation function) with efficient approximations capable of extreme reductions in computational complexity. Low-complexity convolution filters are obtained through a binary (zero-one) linear programming scheme based on the Frobenius norm over sets of dyadic rationals. The resulting matrices allow for multiplication-free computations requiring only addition and bit-shifting operations. Such low-complexity structures pave the way for low-power, efficient hardware designs. We applied our approach to three use cases of different complexity: 1) a light but efficient ConvNet for face detection (with around 1000 parameters); 2) another one for hand-written digit classification (with more than 180000 parameters); and 3) a significantly larger ConvNet, AlexNet, with ≈1.2 million matrices. We evaluated the overall performance on the respective tasks for different levels of approximation. In all considered applications, very low-complexity approximations were derived while maintaining almost identical classification performance.
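To make the dyadic-rational idea concrete, the sketch below rounds each coefficient of a small filter to the nearest dyadic rational p/2^k and reports the resulting Frobenius-norm error. This is a simplified coefficient-wise projection written for illustration only, not the binary linear programming search used in the paper; the filter values and the function name `dyadic_approx` are hypothetical.

```python
import numpy as np

def dyadic_approx(filt, k):
    """Project each coefficient onto the nearest dyadic rational p / 2**k.

    Illustrative only: the paper selects approximants via binary linear
    programming, whereas this sketch simply rounds coefficient-wise.
    """
    p = np.rint(filt * (1 << k)).astype(int)   # integer numerators
    approx = p / float(1 << k)                 # dyadic-rational filter
    err = np.linalg.norm(filt - approx)        # Frobenius-norm error
    return p, approx, err

# A hypothetical 3x3 filter, to show the precision/error trade-off in k.
f = np.array([[ 0.12, -0.25,  0.12],
              [-0.25,  0.52, -0.25],
              [ 0.12, -0.25,  0.12]])
for k in range(1, 5):
    _, _, err = dyadic_approx(f, k)
    print(f"k={k}: Frobenius error = {err:.4f}")
```

Once every coefficient has the form p/2^k with a small integer numerator p, each multiplication by a filter weight can be replaced by additions and bit-shifts; for example, multiplying x by 3/8 reduces to ((x << 1) + x) >> 3. This is what enables the multiplication-free convolutions described above.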

Publication
In IEEE Transactions on Neural Networks and Learning Systems.