Text2LIVE: Text-Driven Layered Image and Video Editing

Omer Bar-Tal, Dolev Ofri-Amar, Rafail Fridman, Yoni Kasten, Tali Dekel

Abstract


"We present a method for zero-shot, text-driven editing of natural images and videos. Given an image or a video and a text prompt, our goal is to edit the appearance of existing objects (e.g., texture) or augment the scene with visual effects (e.g., smoke, fire) in a semantic manner. We train a generator on an \emph{internal dataset}, extracted from a single input, while leveraging an \emph{external} pretrained CLIP model to impose our losses. Rather than directly generating the edited output, our key idea is to generate an \emph{edit layer} (color+opacity) that is composited over the input. This allows us to control the generation and maintain high fidelity to the input via novel text-driven losses applied directly to the edit layer. Our method neither relies on a pretrained generator nor requires user-provided masks. We demonstrate localized, semantic edits on high-resolution images and videos across a variety of objects and scenes."
