Beyond Fixed Grid: Learning Geometric Image Representation with a Deformable Grid

Jun Gao, Zian Wang, Jinchen Xuan, Sanja Fidler

Abstract


In modern computer vision, images are typically represented as a fixed uniform grid with some stride and processed via a deep convolutional neural network. We argue that deforming the grid to better align with the high-frequency image content is a more effective strategy. We introduce Deformable Grid (DefGrid), a learnable neural network module that predicts location offsets of the vertices of a 2-dimensional triangular grid such that the edges of the deformed grid align with image boundaries. We showcase DefGrid in a variety of use cases, i.e., by inserting it as a module at various levels of processing. We utilize DefGrid as an end-to-end learnable geometric downsampling layer that replaces standard pooling methods for reducing feature resolution when feeding images into a deep CNN. We show significantly improved results at the same grid resolution compared to using CNNs on uniform grids for the task of semantic segmentation. We also utilize DefGrid at the output layers for the task of object mask annotation, and show that reasoning about object boundaries on our predicted polygonal grid leads to more accurate results over existing pixel-wise and curve-based approaches. We finally showcase DefGrid as a standalone module for unsupervised image segmentation, showing superior performance over existing superpixel-based approaches. Project website: http://www.cs.toronto.edu/~jungao/def-grid
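The core operation described above, offsetting the vertices of a uniform grid so its edges can track image boundaries, can be sketched in a few lines. This is only an illustrative sketch, not the authors' implementation: in the paper the offsets are predicted by a learned CNN head, whereas here `offset_field` is simply an input array, and the names `uniform_grid`, `deform_grid`, and `max_offset` are hypothetical.

```python
import numpy as np

def uniform_grid(h, w, stride):
    """Vertex positions (row, col) of a uniform grid over an h x w image."""
    ys, xs = np.meshgrid(np.arange(0, h, stride),
                         np.arange(0, w, stride), indexing="ij")
    return np.stack([ys, xs], axis=-1).astype(float)  # shape (H', W', 2)

def deform_grid(vertices, offset_field, max_offset):
    """Apply per-vertex offsets, clamped so neighboring cells cannot fold over.

    In DefGrid the offsets would come from a neural network conditioned on
    image features; here they are taken as given.
    """
    offsets = np.clip(offset_field, -max_offset, max_offset)
    return vertices + offsets
```

A bound on the offset magnitude (here via clamping) is one simple way to keep the deformed grid a valid mesh; downstream layers can then pool features within the deformed cells instead of fixed square windows.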
