Boosting Discriminative Visual Representation Learning with Scenario-Agnostic Mixup
Abstract
Mixup is a well-known data-dependent augmentation technique for DNNs, consisting of two sub-tasks: mixup generation and classification. However, the recently dominant online training method confines mixup to supervised learning (SL), and the objective of the generation sub-task is limited to selected sample pairs rather than the whole data manifold, which might cause trivial solutions. To overcome these limitations, we comprehensively study the objective of mixup generation and propose Scenario-Agnostic Mixup (SAMix) for both SL and self-supervised learning (SSL) scenarios. Specifically, we hypothesize and verify the objective function of mixup generation as optimizing local smoothness between two mixed classes subject to global discrimination from other classes. Accordingly, we propose the η-balanced mixup loss for complementary learning of the two sub-objectives. Meanwhile, a label-free generation sub-network is designed, which effectively provides non-trivial mixup samples and improves transferability. Moreover, to reduce the computational cost of online training, we further introduce a pre-trained version, SAMix^P, achieving more favorable efficiency and generalizability. Extensive experiments on nine SL and SSL benchmarks demonstrate the consistent superiority and versatility of SAMix compared with existing methods.
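To make the two sub-objectives concrete, below is a minimal PyTorch sketch of classic input mixup together with one illustrative way an η weight could balance a local smoothness term against a global discrimination term. This is not the paper's implementation: the function names, the particular form of the global term (suppressing probability mass on classes outside the mixed pair), and the default `eta` are assumptions for illustration only.

```python
# Hedged sketch: input mixup + an eta-weighted combination of two loss terms,
# loosely following the abstract's "local smoothness subject to global
# discrimination" framing. All specifics here are illustrative assumptions.
import torch
import torch.nn.functional as F

def mixup_batch(x, alpha=1.0):
    """Classic input mixup: interpolate a batch with a shuffled copy of itself."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    index = torch.randperm(x.size(0))
    x_mix = lam * x + (1.0 - lam) * x[index]
    return x_mix, index, lam

def eta_balanced_loss(logits, y_onehot, index, lam, eta=0.5):
    """Illustrative eta-weighted sum of a local and a global term (assumed forms)."""
    log_p = F.log_softmax(logits, dim=1)
    y_a, y_b = y_onehot, y_onehot[index]
    # Local smoothness: standard mixup cross-entropy between the two mixed classes.
    local = -(lam * (y_a * log_p).sum(1) + (1.0 - lam) * (y_b * log_p).sum(1)).mean()
    # Global discrimination (assumed form): penalize probability mass leaking
    # to classes other than the two being mixed.
    pair_mask = (y_a + y_b).clamp(max=1.0)
    leak = ((1.0 - pair_mask) * log_p.exp()).sum(1).mean()
    return eta * local + (1.0 - eta) * leak

# Usage with random tensors standing in for a model's forward pass:
x = torch.randn(8, 3, 32, 32)
y = F.one_hot(torch.randint(0, 10, (8,)), num_classes=10).float()
x_mix, index, lam = mixup_batch(x)
logits = torch.randn(8, 10, requires_grad=True)  # stand-in for model(x_mix)
loss = eta_balanced_loss(logits, y, index, lam, eta=0.5)
loss.backward()
```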