Hidden Footprints: Learning Contextual Walkability from 3D Human Trails

Jin Sun, Hadar Averbuch-Elor, Qianqian Wang, Noah Snavely

Abstract


Predicting where people can walk in a scene is important for many tasks, including autonomous driving systems and human behavior analysis. Yet learning a computational model for this purpose is challenging due to semantic ambiguity and a lack of labeled data: current datasets only have labels on where people are, not where they could be. We tackle this problem by leveraging information from existing datasets, without any additional labeling. We first augment the set of valid walkable regions by propagating person observations between images, utilizing 3D information and temporal coherence, leading to "Hidden Footprints". We then design a training strategy that combines a class-balanced classification loss with a contextual adversarial loss to learn from sparse observations, thus obtaining a model that predicts a walkability map of a given scene. We evaluate our model on the Waymo and Cityscapes datasets, demonstrating superior performance against baselines and state-of-the-art models.
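The class-balanced classification loss mentioned above addresses the sparsity of observed footprints: walkable pixels are rare relative to the background, so an unweighted loss would be dominated by negatives. A minimal sketch of one common form of class balancing, inverse-frequency weighting of a per-pixel binary cross-entropy, is shown below. This is an illustrative assumption, not the paper's exact formulation, and the function name `class_balanced_bce` is hypothetical.

```python
import numpy as np

def class_balanced_bce(pred, target, eps=1e-7):
    """Class-balanced binary cross-entropy over a walkability map.

    pred:   array of predicted walkability probabilities in [0, 1].
    target: binary array of the same shape (1 = walkable observation).

    Positive (walkable) pixels are typically sparse, so each class is
    reweighted by the inverse of its frequency in the target map.
    Hypothetical illustration; not the paper's exact loss.
    """
    pred = np.clip(pred, eps, 1.0 - eps)  # avoid log(0)
    n_pos = float(target.sum())
    n_neg = float(target.size) - n_pos
    # Inverse-frequency weights so both classes contribute equally.
    w_pos = target.size / (2.0 * max(n_pos, 1.0))
    w_neg = target.size / (2.0 * max(n_neg, 1.0))
    loss = -(w_pos * target * np.log(pred)
             + w_neg * (1.0 - target) * np.log(1.0 - pred))
    return loss.mean()
```

In the paper's full training objective, a term like this would be combined with the contextual adversarial loss, which encourages predicted walkability maps to be plausible beyond the sparse observed footprints.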
