Is Sharing of Egocentric Video Giving Away Your Biometric Signature?
The easy availability of wearable egocentric cameras, and the sense of privacy created by the fact that the wearer is never seen in the captured videos, have led to a tremendous rise in public sharing of such videos. Unlike hand-held cameras, egocentric cameras are mounted on the wearer’s head, which makes it possible to track the wearer’s head motion by observing optical flow in the egocentric videos. In this work, we create a novel kind of privacy attack by extracting the wearer’s gait profile, a well-known biometric signature, from such optical flow. We demonstrate strong wearer recognition based on the extracted gait features, a critical weakness entirely absent in hand-held videos. We demonstrate the following attack scenarios: (1) In a closed-set scenario, we show that it is possible to recognize the wearer of an egocentric video with an accuracy of more than 92.5% on the benchmark video dataset. (2) In an open-set setting, when the system has not seen the camera wearer even once during training, we show that it is still possible to determine that two egocentric videos have been captured by the same wearer, with an Equal Error Rate (EER) of less than 14.35%. (3) We show that it is possible to extract a gait signature even when only sparse optical flow, and no other scene information, is available from the egocentric video; we demonstrate an accuracy of more than 84% for wearer recognition with only global optical flow. (4) While first-person-to-first-person matching does not give us access to the wearer’s face, we show that it is possible to match the extracted gait features against those obtained from a third-person view, such as a surveillance camera observing the wearer against a completely different background at a different time.
In essence, our work indicates that sharing one’s egocentric video should be treated as giving away one’s biometric identity, and we recommend far greater oversight before egocentric videos are shared. The code, trained models, datasets, and annotations are available at https://egocentricbiometric.github.io/
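The intuition behind the attack is that a walking wearer imparts a periodic oscillation to the head-mounted camera, which shows up in the global optical flow of the video. As a minimal illustration of this idea (not the paper's actual model; the signal here is synthetic and the function name is our own), one can recover the dominant oscillation frequency of the per-frame vertical global flow with a simple spectral analysis:

```python
import numpy as np

def dominant_flow_frequency(global_flow_y, fps=30.0):
    """Estimate the dominant oscillation frequency (Hz) of a
    per-frame vertical global optical flow signal."""
    sig = np.asarray(global_flow_y, dtype=float)
    sig = sig - sig.mean()                      # remove DC offset
    spectrum = np.abs(np.fft.rfft(sig))         # magnitude spectrum
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / fps)
    return freqs[1:][np.argmax(spectrum[1:])]   # skip the DC bin

# Synthetic stand-in for a wearer's head bob: ~2 Hz step oscillation
# sampled at 30 fps for 10 s, with additive noise.
t = np.arange(0, 10, 1 / 30.0)
flow_y = np.sin(2 * np.pi * 2.0 * t)
flow_y += 0.1 * np.random.default_rng(0).normal(size=t.size)
freq = dominant_flow_frequency(flow_y, fps=30.0)  # close to 2.0 Hz
```

A real attack, of course, would feed much richer flow features into a learned recognition model rather than a single frequency estimate; this sketch only illustrates why the flow signal carries gait information at all.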