Across Scales & Across Dimensions: Temporal Super-Resolution using Deep Internal Learning

Liad Pollak Zuckerman, Eyal Naor, George Pisha, Shai Bagon, Michal Irani

Abstract


When a very fast dynamic event is recorded with a low-framerate camera, the resulting video suffers from severe motion blur (due to exposure time) and motion aliasing (due to low sampling rate in time). True Temporal Super-Resolution (TSR) is more than just Temporal-Interpolation (increasing framerate). It can also recover new high temporal frequencies beyond the temporal Nyquist limit of the input video, thus resolving both motion blur and motion aliasing – effects that temporal frame interpolation (as sophisticated as it may be) cannot undo. In this paper we propose a “Deep Internal Learning” approach for true TSR. We train a video-specific fully convolutional network (FCN) on examples extracted directly from the low-framerate input video. Our method exploits the strong recurrence of small space-time patches inside a single video sequence, both within and across different spatio-temporal scales of the video. We further observe (for the first time) that small space-time patches recur also across dimensions of the video sequence – i.e., by swapping the spatial and temporal dimensions. In particular, the higher spatial resolution of video frames provides strong examples as to how to increase the temporal resolution of that video. Such internal video-specific examples give rise to strong self-supervision, requiring no data but the input video itself. This results in Zero-Shot Temporal-SR of complex videos, which removes both motion blur and motion aliasing, outperforming previous supervised methods trained on external video datasets.
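To make the two sources of internal supervision concrete, the following is a minimal NumPy sketch (not the authors' code; the function names and the simple frame-averaging blur model are illustrative assumptions). It shows (a) how low/high-framerate training pairs can be generated from the input video itself by temporal coarsening, and (b) the "across dimensions" observation: transposing the spatial x-axis with the temporal t-axis turns high spatial frequencies into examples of high temporal frequencies.

```python
import numpy as np

def temporal_lowres_pairs(video, factor=2):
    """Create a (low-framerate, high-framerate) training pair from the video itself.
    video: np.ndarray of shape (T, H, W), grayscale for simplicity.
    The blur model here (averaging `factor` consecutive frames to mimic
    exposure time) is a simplifying assumption, not the paper's exact pipeline.
    """
    T = (video.shape[0] // factor) * factor          # drop trailing frames
    lowres = video[:T].reshape(-1, factor, *video.shape[1:]).mean(axis=1)
    return lowres, video[:T]                         # (T//factor,H,W), (T,H,W)

def swap_space_time(video):
    """Swap the x and t axes of the video: (T, H, W) -> (W, H, T).
    In the transposed volume, sharp spatial structure along x plays the role
    of fine temporal detail, providing extra internal training examples.
    """
    return np.transpose(video, (2, 1, 0))

# Toy usage on a random clip
video = np.random.rand(64, 128, 128).astype(np.float32)
lr, hr = temporal_lowres_pairs(video, factor=2)
xt_view = swap_space_time(video)
print(lr.shape, hr.shape, xt_view.shape)  # (32,128,128) (64,128,128) (128,128,64)
```

A video-specific FCN trained on such (lr, hr) pairs, drawn from both the original and the space-time-swapped volumes, can then be applied to the input video itself to hallucinate the missing high temporal frequencies.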
