Model-Agnostic Boundary-Adversarial Sampling for Test-Time Generalization in Few-Shot Learning
Few-shot learning is an important research problem that tackles one of the greatest challenges of machine learning: learning a new task from a limited amount of labeled data. We propose a model-agnostic method that improves the test-time performance of any few-shot learning model with no additional training, and is thus free from the training-test domain gap. Based only on the few support samples in a meta-test task, our method generates samples adversarial to the base few-shot classifier's decision boundaries and fine-tunes its embedding function in the direction that increases the classification margins of these adversarial samples. Consequently, the embedding space becomes denser around the labeled samples, which makes the classifier more robust to query samples. Experimenting on miniImageNet, CIFAR-FS, and FC100, we demonstrate that our method brings significant performance improvements to three base methods with varied properties, and achieves state-of-the-art performance on a number of few-shot learning tasks.
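The core test-time procedure described above can be sketched in a toy form. The snippet below is a minimal illustration, not the paper's implementation: it assumes a nearest-prototype classifier, a linear embedding `W` in place of a learned network, boundary-adversarial samples generated by pushing each support sample toward its nearest other-class prototype, and a single numerical-gradient ascent step on the margins of those adversarial samples. All names, dimensions, and hyperparameters (`step`, `lr`, `eps`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy task: 2 classes, 3 labeled support samples each.
dim = 4
support = {0: rng.normal(0.0, 1.0, (3, dim)) + 2.0,
           1: rng.normal(0.0, 1.0, (3, dim)) - 2.0}

W = np.eye(dim)  # stand-in linear embedding, fine-tuned at test time


def embed(x, W):
    return x @ W.T


def prototypes(support, W):
    # per-class mean of embedded support samples
    return {c: embed(s, W).mean(axis=0) for c, s in support.items()}


def margin(z, protos, label):
    # distance to nearest wrong prototype minus distance to own prototype
    d_own = np.linalg.norm(z - protos[label])
    d_other = min(np.linalg.norm(z - p)
                  for c, p in protos.items() if c != label)
    return d_other - d_own


def boundary_adversarial(x, label, protos, W, step=0.5):
    # push a support sample toward the nearest other-class prototype,
    # i.e. toward the classifier's decision boundary
    z = embed(x, W)
    other = min((c for c in protos if c != label),
                key=lambda c: np.linalg.norm(z - protos[c]))
    direction = protos[other] - z
    return x + step * direction / (np.linalg.norm(direction) + 1e-8)


def finetune_step(W, support, adv, lr=0.01, eps=1e-4):
    # one numerical-gradient ascent step on the adversarial margins
    def total_margin(Wf):
        p = prototypes(support, Wf)
        return sum(margin(embed(x, Wf), p, c) for x, c in adv)

    grad = np.zeros_like(W)
    for i in range(W.shape[0]):
        for j in range(W.shape[1]):
            Wp = W.copy(); Wp[i, j] += eps
            Wm = W.copy(); Wm[i, j] -= eps
            grad[i, j] = (total_margin(Wp) - total_margin(Wm)) / (2 * eps)
    return W + lr * grad


# generate boundary-adversarial samples from the support set only
protos0 = prototypes(support, W)
adv = [(boundary_adversarial(x, c, protos0, W), c)
       for c, xs in support.items() for x in xs]

m_before = sum(margin(embed(x, W), protos0, c) for x, c in adv)
W1 = finetune_step(W, support, adv)
protos1 = prototypes(support, W1)
m_after = sum(margin(embed(x, W1), protos1, c) for x, c in adv)
print(m_after > m_before)
```

Note that, as in the abstract, everything here is computed from the support samples alone: no query labels and no further training data are used, only a small test-time adjustment of the embedding.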