United Defocus Blur Detection and Deblurring via Adversarial Promoting Learning

Wenda Zhao, Fei Wei, You He, Huchuan Lu

Abstract


"Understanding blur from a single defocused image contains two tasks of defocus detection and deblurring. This paper makes the earliest effort to jointly learn both defocus detection and deblurring without using pixel-level defocus detection annotation and paired defocus deblurring ground truth. We build on the observation that these two tasks are supplementary to each other: Defocus detection can segment the focused area from the defocused image to guide the defocus deblurring; Conversely, to achieve better defocus deblurring, an accurate defocus detection as the guide is essential. Therefore, we implement an adversarial promoting learning framework to jointly handle defocus detection and defocus deblurring. Specifically, a defocus detection generator $G_{ws}$ is implemented to represent the defocused image as a layered composition of two elements: defocused image $I_{df}$ and a focused image $I_f$. Then, $I_{df}$ and $I_f$ are fed into a self-referenced defocus deblurring generator $G_{sr}$ to generate a deblurred image. Two generators of $G_{ws}$ and $G_{sr}$ are optimized alternately in an adversarial manner against a discriminator $D$ with unpaired realistic fully-clear images. Thus, $G_{sr}$ will produce a deblurred image to fool $D$, and $G_{ws}$ is forced to generate an accurate defocus detection map to effectively guide $G_{sr}$. Comprehensive experiments on two defocus detection datasets and one defocus deblurring dataset demonstrate the effectiveness of our framework. \textcolor[rgb]{0.00,0.50,0.50}{Code and model are available at: https://github.com/wdzhao123/APL.}"

Related Material

[pdf] [DOI]