The moment distance is defined as the sum of two terms: the moment discrepancy between each source domain and the target domain, and the pairwise moment discrepancy among the source domains themselves. The formula is as follows:
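(A sketch, assuming N source domains and matching of the first K feature moments; the notation and equal weighting below are assumptions rather than the paper's exact formula.)

$$
MD^2(\mathcal{D}_S, \mathcal{D}_T) = \sum_{k=1}^{K} \left( \frac{1}{N} \sum_{i=1}^{N} \left\| \mathbb{E}\big[(\mathbf{X}_{S_i})^k\big] - \mathbb{E}\big[(\mathbf{X}_T)^k\big] \right\|_2 + \binom{N}{2}^{-1} \sum_{i=1}^{N-1} \sum_{j=i+1}^{N} \left\| \mathbb{E}\big[(\mathbf{X}_{S_i})^k\big] - \mathbb{E}\big[(\mathbf{X}_{S_j})^k\big] \right\|_2 \right)
$$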
The objective function is the classification loss on the source domains plus the moment-distance loss above.
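(In symbols, a sketch: $\mathcal{L}_{cls}^{i}$ denotes the classification loss on source domain $i$ and $\lambda$ is an assumed trade-off weight.)

$$
\mathcal{L} = \sum_{i=1}^{N} \mathcal{L}_{cls}^{i} + \lambda \, MD^2(\mathcal{D}_S, \mathcal{D}_T)
$$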
Of the two alignment terms, source-target (S-T) alignment is the more effective.
This is an ICCV 2019 paper.
In practice, there is often an order-of-magnitude gap between the amount of labeled data and unlabeled data, and the distribution of the labeled data can differ greatly from that of the unlabeled data (which can be regarded as the true distribution). This paper aligns the distributions of labeled and unlabeled data through adversarial learning, and borrows the mixup technique to cope with the scarcity of labeled data.
The following is the loss of this paper, which consists of two parts: the first is the classification loss, and the second is the adversarial learning loss.
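(A sketch, with an assumed trade-off weight $\alpha$.)

$$
\mathcal{L} = \mathcal{L}_{cls} + \alpha \, \mathcal{L}_{adv}
$$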
Specifically, the adversarial loss is as follows; it is the loss of a binary classifier that predicts whether a sample is labeled or unlabeled. Through adversarial learning, the features of labeled and unlabeled samples are pulled into the same space.
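(A minimal sketch, assuming a binary discriminator $D$ over features $f(x)$ with label 1 for labeled samples and 0 for unlabeled ones; the feature extractor is trained to fool $D$, e.g. through a gradient-reversal layer.)

$$
\mathcal{L}_{adv} = -\frac{1}{n_l} \sum_{x \in \mathcal{D}_l} \log D(f(x)) \;-\; \frac{1}{n_u} \sum_{x \in \mathcal{D}_u} \log\big(1 - D(f(x))\big)
$$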
In addition, to address the scarcity of labeled samples, the paper uses mixup to exploit the unlabeled samples; the formula is as follows. The class labels of the unlabeled samples are pseudo-labels generated by the classifier, and the discriminator labels are mixed in the same way.
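(A sketch under the assumptions above: $(x_l, y_l)$ is a labeled sample, $x_u$ an unlabeled sample with classifier pseudo-label $\hat{y}_u$, and $1/0$ are the discriminator labels for labeled/unlabeled data.)

$$
\tilde{x} = \lambda x_l + (1-\lambda) x_u, \qquad \tilde{y} = \lambda y_l + (1-\lambda) \hat{y}_u, \qquad \tilde{d} = \lambda \cdot 1 + (1-\lambda) \cdot 0, \qquad \lambda \sim \mathrm{Beta}(\alpha, \alpha)
$$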
The following figure shows the performance on the SVHN dataset and the comparison with state-of-the-art (SOTA) methods.
The whole process is as follows:
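A minimal PyTorch-style sketch of one training step combining the pieces described above (pseudo-labeling, mixup, classification loss, and adversarial alignment via gradient reversal). The module sizes, names (`feat`, `clf`, `disc`, `train_step`), loss weights, and Beta parameter are illustrative assumptions, not the paper's exact settings:

```python
# Minimal sketch (not the paper's exact setup): a feature extractor `feat`,
# a classifier `clf`, and a labeled-vs-unlabeled discriminator `disc` trained
# through a gradient-reversal layer, with mixup between labeled and unlabeled batches.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; flips the gradient in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad):
        return -grad


feat = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128), nn.ReLU())
clf = nn.Linear(128, 10)   # class predictions
disc = nn.Linear(128, 1)   # labeled (1) vs. unlabeled (0)
opt = torch.optim.SGD(
    list(feat.parameters()) + list(clf.parameters()) + list(disc.parameters()), lr=0.01
)


def train_step(x_l, y_l, x_u, alpha=1.0, adv_weight=1.0):
    # 1) pseudo-label the unlabeled batch with the current classifier
    with torch.no_grad():
        y_u = F.softmax(clf(feat(x_u)), dim=1)
    y_l_onehot = F.one_hot(y_l, 10).float()

    # 2) mixup of inputs, class labels, and discriminator labels
    #    (assumes equal labeled/unlabeled batch sizes)
    lam = torch.distributions.Beta(alpha, alpha).sample()
    x_m = lam * x_l + (1 - lam) * x_u
    y_m = lam * y_l_onehot + (1 - lam) * y_u
    d_m = lam * torch.ones(x_l.size(0), 1)  # mixed domain label: lam*1 + (1-lam)*0

    # 3) classification loss on mixed samples (soft targets)
    f_m = feat(x_m)
    cls_loss = -(y_m * F.log_softmax(clf(f_m), dim=1)).sum(dim=1).mean()

    # 4) adversarial loss: disc predicts the mixed domain label, while the
    #    reversed gradient pushes labeled and unlabeled features together
    adv_loss = F.binary_cross_entropy_with_logits(disc(GradReverse.apply(f_m)), d_m)

    loss = cls_loss + adv_weight * adv_loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


# dummy usage
x_l, y_l = torch.randn(16, 3, 32, 32), torch.randint(0, 10, (16,))
x_u = torch.randn(16, 3, 32, 32)
print(train_step(x_l, y_l, x_u))
```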
Result plots:
Wen Li: http://www.vision.ee.ethz.ch/~liwenw/