Representation Learning on Visual-Symbolic Graphs for Video Understanding

Effrosyni Mavroudi, Benjamín Béjar Haro, René Vidal


Events in natural videos typically arise from spatio-temporal interactions between actors and objects and involve multiple co-occurring activities and object classes. To capture this rich visual and semantic context, we propose using two graphs: (1) an attributed spatio-temporal visual graph whose nodes correspond to actors and objects and whose edges encode different types of interactions, and (2) a symbolic graph that models semantic relationships. We further propose a graph neural network for refining the representations of actors, objects and their interactions on the resulting hybrid graph. Our framework goes beyond current approaches that assume nodes and edges of the same type, operate on a fixed graph structure and do not use a symbolic graph. In particular, our framework: a) has specialized attention-based aggregation functions for different node and edge types; b) uses visual edge features; c) integrates visual evidence with label relationships; and d) performs global reasoning in the semantic space. Experiments on challenging video understanding tasks, such as temporal action localization on the Charades dataset, show that the proposed method leads to state-of-the-art performance.
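The type-specific, attention-based aggregation with visual edge features described in the abstract can be illustrated with a minimal sketch of one message-passing step on a heterogeneous graph. This is not the paper's implementation: all names (`hetero_attention_step`, the `spatial` edge type, the weight shapes) are illustrative assumptions, using a simple additive scoring scheme over concatenated sender, receiver and edge features.

```python
from collections import defaultdict
import numpy as np

def _softmax(x):
    # numerically stable softmax over a 1-D score vector
    e = np.exp(x - np.max(x))
    return e / e.sum()

def hetero_attention_step(node_feats, edges, W_msg, a_score):
    """One round of type-specific attention-based message passing.

    node_feats: dict node_id -> feature vector of shape (d,)
    edges:      list of (src, dst, edge_type, edge_feat) tuples
    W_msg:      dict edge_type -> (d_out, 2*d + d_e) message matrix
    a_score:    dict edge_type -> (2*d + d_e,) attention scoring vector
    Returns     dict node_id -> aggregated message of shape (d_out,)
    """
    # Group incoming edges by destination node AND edge type, so each
    # edge type gets its own specialized attention/aggregation function.
    groups = defaultdict(list)
    for src, dst, etype, efeat in edges:
        groups[(dst, etype)].append((src, efeat))

    d_out = next(iter(W_msg.values())).shape[0]
    out = {n: np.zeros(d_out) for n in node_feats}

    for (dst, etype), nbrs in groups.items():
        # Concatenate sender, receiver and visual edge features.
        inputs = [np.concatenate([node_feats[s], node_feats[dst], ef])
                  for s, ef in nbrs]
        # Attention weights over neighbors of this edge type.
        alphas = _softmax(np.array([a_score[etype] @ x for x in inputs]))
        # Type-specific linear messages, combined by attention, then
        # summed across edge types into the node's refined representation.
        msgs = np.stack([W_msg[etype] @ x for x in inputs])
        out[dst] += (alphas[:, None] * msgs).sum(axis=0)
    return out
```

A tiny usage example: with 2-D node features, 1-D edge features, and a single `spatial` edge type, two actors sending messages to one object node yields a 3-D refined feature for that node, while nodes with no incoming edges keep a zero message.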
