An Information Theoretic Approach for Attention-Driven Face Forgery Detection

Ke Sun, Hong Liu, Taiping Yao, Xiaoshuai Sun, Shen Chen, Shouhong Ding, Rongrong Ji

Abstract


"Recently, Deepfakes arises as a powerful tool to fool the existing real-world face detection systems, which has received wide attention in both academia and society. Most existing forgery face detection methods use heuristic clues to build a binary forgery detector, which mainly takes advantage of the empirical observation based on abnormal texture, blending clues, or high-frequency noise, etc. However, heuristic clues only reflect certain aspects of the forgery, which lead to model bias or sub-optimization. Our key observation is that most of the forgery clues are hidden in the informative region, which can be measured quantitatively by classical information maximization theory. Motivated by this, we make the first attempt to introduce the self-information metric to enhance the forgery feature representation. The metric can be formulated as a plug-and-play block, termed self-information attention (SIA) module, that can be applied to most recent top-performance deep model. The SIA module can explicitly help the model extract high information features and recalibrate channel-wise feature responses, which improves both model’s performance and generalization with few additional parameters. Extensive experiments on several large-scale benchmarks demonstrate the superiority of the proposed method against the state-of-the-art competitors."
