Towards Accurate Network Quantization with Equivalent Smooth Regularizer

Kirill Solodskikh, Vladimir Chikin, Ruslan Aydarkhanov, Dehua Song, Irina Zhelavskaya, Jiansheng Wei

Abstract


"Neural network quantization techniques have been a prevailing way to reduce the inference time and storage cost of full-precision models for mobile devices. However, they still suffer from accuracy degradation due to inappropriate gradients in the optimization phase, especially for low-bit precision network and low-level vision tasks. To alleviate this issue, this paper defines a family of equivalent smooth regularizers for neural network quantization, named as SQR, which represents the equivalent of actual quantization error. Based on the definition, we propose a novel QSin regularizer as an instance to evaluate the performance of SQR, and also build up an algorithm to train the network for integer weight and activation. Extensive experimental results on classification and SR tasks reveal that the proposed method achieves higher accuracy than other prominent quantization approaches. Especially for SR task, our method alleviates the plaid artifacts effectively for quantized networks in terms of visual quality."
