How does Lipschitz Regularization Influence GAN Training?

Yipeng Qin, Niloy Mitra, Peter Wonka

Abstract


Despite the success of Lipschitz regularization in stabilizing GAN training, the exact reason for its effectiveness remains poorly understood. The direct effect of $K$-Lipschitz regularization is to restrict the $L2$-norm of the neural network gradient to be smaller than a threshold $K$ (e.g., $K=1$) such that $\|\nabla f\| \leq K$. In this work, we show that Lipschitz regularization ensures that all loss functions effectively work in the same way. Empirically, we verify our proposition on the MNIST, CIFAR10 and CelebA datasets.
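To make the direct effect $\|\nabla f\| \leq K$ concrete, below is a minimal sketch (not the authors' implementation) of one common way to encourage a $K$-Lipschitz discriminator in practice: a one-sided gradient penalty. The toy network, the `gradient_penalty` helper, and the choice of penalty form are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: penalizing the per-sample gradient L2-norm of a
# discriminator f above a threshold K, one common Lipschitz regularizer.
import torch
import torch.nn as nn

K = 1.0  # Lipschitz threshold, e.g. K = 1 as in the abstract

# Toy discriminator f; the architecture is illustrative only.
f = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))

def gradient_penalty(f, x, K=1.0):
    """Penalize samples where ||grad_x f(x)||_2 exceeds K."""
    x = x.clone().requires_grad_(True)
    out = f(x)
    # Gradient of f w.r.t. its input, kept in the graph so the
    # penalty itself is differentiable.
    grads, = torch.autograd.grad(outputs=out.sum(), inputs=x,
                                 create_graph=True)
    grad_norm = grads.flatten(1).norm(2, dim=1)  # per-sample L2-norm
    return ((grad_norm - K).clamp(min=0) ** 2).mean()

x = torch.randn(16, 2)          # a batch of (real or fake) samples
gp = gradient_penalty(f, x, K)  # add lambda * gp to the GAN loss
print(float(gp))
```

In a training loop, the penalty would typically be added to the discriminator loss with a weight hyperparameter (the `lambda` mentioned in the comment); alternatives such as weight clipping or spectral normalization enforce a Lipschitz bound by different means.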
