Prompting Language-Informed Distribution for Compositional Zero-Shot Learning
Wentao Bao*, Lichang Chen, Heng Huang, Yu Kong
Abstract
"Compositional zero-shot learning (CZSL) task aims to recognize unseen compositional visual concepts, , sliced tomatoes, where the model is learned only from the seen compositions, , sliced potatoes and red tomatoes. Thanks to the prompt tuning on large pre-trained visual language models such as CLIP, recent literature shows impressively better CZSL performance than traditional vision-based methods. However, the key aspects that impact the generalization to unseen compositions, including the diversity and informativeness of class context, and the entanglement between visual primitives, , state and object, are not properly addressed in existing CLIP-based CZSL literature. In this paper, we propose a model by prompting the language-informed distribution, aka., P LID, for the CZSL task. Specifically, the P LID leverages pre-trained large language models (LLM) to (i ) formulate the language-informed class distributions which are diverse and informative, and (ii ) enhance the compositionality of the class embedding. Moreover, a visual-language primitive decomposition (VLPD) module is proposed to dynamically fuse the classification decisions from the compositional and the primitive space. Orthogonal to the existing literature of soft, hard, or distributional prompts, our method advocates prompting the LLM-supported class distributions, leading to a better zero-shot generalization. Experimental results on MIT-States, UT-Zappos, and C-GQA datasets show the superior performance of the P LID to the prior arts. Our code and models are released: https://github.com/Cogito2012/PLID."