Human Interaction Learning on 3D Skeleton Point Clouds for Video Violence Recognition

Yukun Su, Guosheng Lin, Jinhui Zhu, Qingyao Wu

Abstract


This paper introduces a new method for recognizing violent behavior by learning contextual relationships between related people from human skeleton points. Unlike previous work, we first formulate 3D skeleton point clouds from human skeleton sequences extracted from videos and then perform interaction learning on these 3D skeleton point clouds. A novel Skeleton Points Interaction Learning (SPIL) module is proposed to model the interactions between skeleton points. Specifically, by constructing a specific weight distribution strategy between local regional points, SPIL selectively focuses on the most relevant points based on their features and spatial-temporal position information. To capture diverse types of relation information, a multi-head mechanism is designed to aggregate different features from independent heads and jointly handle different types of relationships between points. Experimental results show that our model outperforms existing networks and achieves new state-of-the-art performance on video violence datasets.
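The core idea of weighting point-to-point interactions by feature affinity plus spatial-temporal proximity, with independent heads aggregated afterwards, can be sketched as below. This is an illustrative approximation under assumed shapes and a simple additive scoring function; the function name `spil_sketch` and all hyperparameters are hypothetical, not the paper's actual implementation.

```python
import numpy as np

def spil_sketch(feats, pos, num_heads=2):
    """Illustrative multi-head interaction weighting over skeleton points.

    feats: (N, C) per-point features; pos: (N, 3) spatial-temporal coords.
    Returns (N, C) features aggregated from the most relevant points.
    """
    n, c = feats.shape
    assert c % num_heads == 0
    d = c // num_heads
    # Proximity bias: nearer points in space-time receive higher scores.
    dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
    prox = -dist
    outs = []
    for h in range(num_heads):
        f = feats[:, h * d:(h + 1) * d]
        # Combine feature affinity with the positional bias, per head.
        scores = f @ f.T / np.sqrt(d) + prox
        # Row-wise softmax gives the weight distribution over other points.
        w = np.exp(scores - scores.max(axis=1, keepdims=True))
        w /= w.sum(axis=1, keepdims=True)
        outs.append(w @ f)
    # Concatenating heads aggregates the different relation types.
    return np.concatenate(outs, axis=1)
```

Each head learns (here: computes) its own weight distribution, so different heads can emphasize different kinds of point relationships before their outputs are merged.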
