The Hessian Penalty: A Weak Prior for Unsupervised Disentanglement
Existing popular methods for disentanglement rely on hand-picked priors and complex encoder-based architectures. In this paper, we propose the Hessian Penalty, a simple regularization function that encourages the input Hessian of a function to be diagonal. Our method is completely model-agnostic and can be applied to any deep generator with just a few lines of code. We show that our method automatically uncovers meaningful factors of variation in the standard basis when applied to ProgressiveGAN across several datasets. Additionally, we demonstrate that our regularization term can be used to identify interpretable directions in BigGAN's latent space in a fully unsupervised fashion. Finally, we provide empirical evidence that our regularization term encourages sparsity when applied to overparameterized latent spaces.
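As a rough illustration of the idea, the off-diagonal entries of a Hessian can be penalized without materializing the Hessian itself, using the identity that, for Rademacher vectors v, the variance of the quadratic form v'Hv equals twice the sum of squared off-diagonal entries of H; each quadratic form can in turn be approximated by a second-order central finite difference. The sketch below is ours, not code from the paper: the function name, the scalar-output assumption, and all parameter choices are illustrative.

```python
import numpy as np

def hessian_penalty(G, z, num_rademacher=4, eps=0.1, rng=None):
    """Estimate the sum of squared off-diagonal Hessian entries of a
    scalar-valued function G at input z (illustrative sketch).

    Uses Var_v[v'Hv] over Rademacher vectors v, which equals
    2 * sum_{i != j} H_ij^2, with each v'Hv approximated by the
    second-order central finite difference
    (G(z + eps*v) - 2 G(z) + G(z - eps*v)) / eps^2.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Rademacher directions: each entry is +1 or -1 with equal probability.
    directions = rng.choice([-1.0, 1.0], size=(num_rademacher, z.size))
    center = G(z)
    vhv = np.array([
        (G(z + eps * v) - 2.0 * center + G(z - eps * v)) / eps**2
        for v in directions
    ])
    # Unbiased sample variance across directions estimates the penalty.
    return vhv.var(ddof=1)
```

For a function with an entangled Hessian such as G(z) = z0 * z1 (off-diagonal entries equal to 1), the estimate concentrates around 4, whereas for a separable function such as G(z) = z0^2 + z1^2 it is exactly 0, since every quadratic form v'Hv equals the trace. A real generator has vector-valued outputs, so in practice the penalty would be aggregated over output units.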