Learning to Censor by Noisy Sampling

Ayush Chopra, Abhinav Java, Abhishek Singh, Vivek Sharma, Ramesh Raskar

Abstract


"Point clouds are an increasingly ubiquitous input modality and the raw signal can be efficiently processed with recent progress in deep learning. This signal may, often inadvertently, capture sensitive information that can leak semantic and geometric properties of the scene which the data owner does not want to share. The goal of this work is to protect sensitive information when learning from point clouds; by censoring signal before the point cloud is released for downstream tasks. Specifically, we focus on preserving utility for perception tasks while mitigating attribute leakage attacks. The key motivating insight is to leverage the localized saliency of perception tasks on point clouds to provide good privacy-utility trade-offs. We realize this through a mechanism called censoring by noisy sampling (CBNS), which is composed of two modules: i) Invariant Sampling: a differentiable point-cloud sampler which learns to remove points invariant to utility and ii) Noise Distortion: which learns to distort sampled points to decouple the sensitive information from utility, and mitigate privacy leakage. We validate the effectiveness of CBNS through extensive comparisons with state-of-the-art baselines and sensitivity analyses of key design choices. Results show that CBNS achieves superior privacy-utility trade-offs."
