Repaint123: Fast and High-quality One Image to 3D Generation with Progressive Controllable Repainting
Junwu Zhang*, Zhenyu Tang, Yatian Pang, Xinhua Cheng, Peng Jin, Yida Wei, Xing Zhou, Munan Ning, Li Yuan*
Abstract
"Recent image-to-3D methods achieve impressive results with plausible 3D geometry due to the development of diffusion models and optimization techniques. However, existing image-to-3D methods suffer from texture deficiencies in novel views, including multi-view inconsistency and quality degradation. To alleviate multi-view bias and enhance image quality in novel-view textures, we present Repaint123, a fast image-to-3D approach for creating high-quality 3D content with detailed textures. Repaint123 proposes a progressively repainting strategy to simultaneously enhance the consistency and quality of textures across different views, generating invisible regions according to visible textures, with the visibility map calculated by the depth alignment across views. Furthermore, multiple control techniques, including reference-driven information injection and coarse-based depth guidance, are introduced to alleviate the texture bias accumulated during the repainting process for improved consistency and quality. For novel-view texture refinement with short-term view consistency, our method progressively repaints novel-view images with adaptive strengths based on visibility, enhancing the balance of image quality and view consistency. To alleviate the accumulated bias as progressively repainting, we control the repainting process by depth-guided geometry and attention-driven reference-view textures. Extensive experiments demonstrate the superior ability of our method to create 3D content with consistent and detailed textures in 2 minutes."