What makes fake images detectable? Understanding properties that generalize

Lucy Chai, David Bau, Ser-Nam Lim, Phillip Isola


The quality of image generation and manipulation is reaching impressive levels, making it exceedingly difficult for a human to distinguish between what is real and what is fake. However, deep networks can still pick up on the subtle artifacts in these doctored images. We seek to understand what properties of these fake images make them detectable and identify what generalizes across different model architectures, datasets, and variations in training. We use a patch-based classifier with limited receptive fields to focus on low-level artifacts rather than global semantics, and use patch-wise predictions to localize the manipulated regions. We further show a technique to exaggerate these detectable properties and demonstrate that even when the image generator is adversarially finetuned against the fakeness classifier, it is still imperfect and makes detectable mistakes in similar regions of the image.
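The patch-based idea above can be illustrated with a minimal sketch: slide a small window over an image so each prediction sees only local statistics, score every patch, and aggregate the overlapping scores into a localization heatmap. This is not the paper's classifier; the patch score here is a hypothetical stand-in (high-frequency energy as a crude artifact proxy), and the patch size, stride, and function names are illustrative assumptions.

```python
import numpy as np

def extract_patches(img, patch=16, stride=8):
    """Slide a small window over the image; each patch sees only
    local, low-level statistics, not global semantics."""
    H, W = img.shape[:2]
    patches, coords = [], []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            patches.append(img[y:y + patch, x:x + patch])
            coords.append((y, x))
    return np.stack(patches), coords

def patch_scores(patches):
    """Hypothetical per-patch 'fakeness' score: mean absolute
    horizontal gradient, a crude proxy for local artifacts.
    A real detector would be a learned classifier."""
    return np.abs(np.diff(patches, axis=2)).mean(axis=(1, 2, 3))

def fakeness_heatmap(img, patch=16, stride=8):
    """Aggregate overlapping patch-wise scores into a per-pixel
    localization map by averaging the patches covering each pixel."""
    H, W = img.shape[:2]
    heat = np.zeros((H, W))
    count = np.zeros((H, W))
    patches, coords = extract_patches(img, patch, stride)
    scores = patch_scores(patches)
    for (y, x), s in zip(coords, scores):
        heat[y:y + patch, x:x + patch] += s
        count[y:y + patch, x:x + patch] += 1
    return heat / np.maximum(count, 1)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))  # toy stand-in for an input image
heat = fakeness_heatmap(img)
print(heat.shape)  # → (64, 64)
```

Regions with consistently high scores in the heatmap would correspond to the localized manipulated regions the abstract describes; with a learned patch classifier in place of the proxy, the same aggregation scheme applies.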
