PlugNet: Degradation Aware Scene Text Recognition Supervised by a Pluggable Super-Resolution Unit

Yongqiang Mou, Lei Tan, Hui Yang, Jingying Chen, Leyuan Liu, Rui Yan, Yaohong Huang

Abstract


In this paper, we address the problem of recognizing degraded text images that suffer from heavy blur or low resolution. We propose a novel degradation-aware scene text recognizer with a pluggable super-resolution unit (PlugNet) that tackles low-quality scene text at the feature level. The whole network can be trained end-to-end with the pluggable super-resolution unit (PSU), and the PSU is removed after training, so it introduces no extra computation at inference. The PSU drives the network to learn a more robust feature representation for recognizing low-quality text images. To further improve feature quality, we introduce two feature enhancement strategies: a Feature Squeeze Module (FSM), which reduces the loss of spatial acuity, and a Feature Enhance Module (FEM), which combines feature maps from low to high levels to provide diverse semantics. As a consequence, PlugNet achieves state-of-the-art performance on widely used text recognition benchmarks such as IIIT5K, SVT, SVTP, and ICDAR15.
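To make the "pluggable" idea concrete, the minimal PyTorch sketch below shows a shared backbone supervised by both a recognition head and a removable super-resolution branch, with the SR branch simply skipped at inference. All module names, layer choices, and the `with_sr` flag are illustrative assumptions rather than the paper's actual implementation; the FSM and FEM are omitted.

```python
import torch
import torch.nn as nn

class PlugNetSketch(nn.Module):
    """Sketch of a recognizer with a pluggable super-resolution branch.

    The backbone, recognition head, and SR head here are placeholders,
    not the layers used in the paper.
    """

    def __init__(self, num_classes=37, feat_dim=256):
        super().__init__()
        # Shared feature extractor (placeholder for the paper's CNN backbone).
        self.backbone = nn.Sequential(
            nn.Conv2d(3, feat_dim, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )
        # Recognition branch: predicts a character distribution per column.
        self.rec_head = nn.Linear(feat_dim, num_classes)
        # Pluggable super-resolution unit (PSU): reconstructs a 2x upscaled
        # image from the shared features; used only during training.
        self.psu = nn.Sequential(
            nn.Conv2d(feat_dim, 3 * 4, kernel_size=3, padding=1),
            nn.PixelShuffle(2),  # -> (B, 3, 2H, 2W)
        )

    def forward(self, x, with_sr=False):
        feat = self.backbone(x)                  # (B, C, H, W)
        seq = feat.mean(dim=2).permute(0, 2, 1)  # collapse height -> (B, W, C)
        logits = self.rec_head(seq)              # per-column class scores
        if with_sr:                              # training: extra SR supervision
            return logits, self.psu(feat)
        return logits                            # inference: PSU branch unused


# Training step (sketch): recognition loss plus SR reconstruction loss.
model = PlugNetSketch()
lr_img = torch.randn(2, 3, 32, 100)              # degraded low-resolution input
hr_img = torch.randn(2, 3, 64, 200)              # high-resolution target
logits, sr_out = model(lr_img, with_sr=True)
sr_loss = nn.functional.l1_loss(sr_out, hr_img)  # SR supervision on shared features
# total_loss = recognition_loss(logits, labels) + lambda_sr * sr_loss

# Inference: the PSU is not called, so it adds no extra computation.
logits = model(lr_img)
```

Because the SR supervision acts only through the shared features, discarding the PSU after training leaves the recognition path unchanged, which is what allows the unit to be "unplugged" at no runtime cost.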

Related Material


[pdf]