Reading notes: Unsupervised Domain Adaptation by Backpropagation
About this paper

Unsupervised Domain Adaptation by Backpropagation

Authors: Yaroslav Ganin, Victor Lempitsky

Topic: domain adaptation

From: ICML 2015

The main contribution of this paper is a new method, based on adversarial training, for measuring and reducing the distribution difference between source-domain and target-domain data.

So how is such an unconventional model optimized? Let us explain it intuitively first and then supplement it with the formulas.

First, consider the upper branch (the feature extractor followed by the label classifier). Only the source-domain data is labelled, so only the source-domain data is trained on here. Both the feature extractor and the label classifier minimize the label-classification error so as to obtain good classification results on the source-domain data.
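To make this concrete, here is a minimal PyTorch-style sketch of the upper branch. The module sizes, optimizer settings, and function names are illustrative assumptions, not taken from the paper's code; the point is simply that the feature extractor and the label classifier are trained jointly with a cross-entropy loss on labelled source batches.

```python
import torch
import torch.nn as nn

# Illustrative components: G_f (feature extractor) and G_y (label classifier).
# Sizes assume 28x28 grayscale inputs and 10 classes, purely as an example.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 256), nn.ReLU())
label_classifier = nn.Linear(256, 10)

label_criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    list(feature_extractor.parameters()) + list(label_classifier.parameters()),
    lr=0.01,
)

def source_classification_step(x_src, y_src):
    # Only source samples carry labels, so only they contribute to this loss.
    features = feature_extractor(x_src)
    loss = label_criterion(label_classifier(features), y_src)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```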

Second, consider the lower branch (the feature extractor followed by the domain classifier). Both source-domain and target-domain data are trained on here. The feature extractor tries to maximize the domain-classification error so as to obtain domain-invariant features, while the domain classifier tries to minimize the domain-classification error so that the domain of each feature can be judged accurately. This is where the adversarial idea comes in (much like a GAN!).
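The paper realizes this "minimize vs. maximize" conflict with a gradient reversal layer (GRL): in the forward pass it is the identity, and in the backward pass it multiplies the gradient by a negative factor, so the domain classifier descends its loss while the feature extractor ascends it. Below is a minimal PyTorch sketch of such a layer; the class and function names are illustrative, not the authors' code.

```python
import torch
from torch.autograd import Function

class GradReverse(Function):
    """Identity in the forward pass; multiplies the gradient by -lambda
    in the backward pass, so the feature extractor receives a reversed
    domain-classification gradient."""

    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse and scale the gradient; lambd itself needs no gradient.
        return grad_output.neg() * ctx.lambd, None

def grad_reverse(x, lambd=1.0):
    # Insert between the feature extractor and the domain classifier, e.g.:
    #   domain_logits = domain_classifier(grad_reverse(features, lambd))
    return GradReverse.apply(x, lambd)
```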

To sum up, the roles of the three parts of the model are:

(1) Label classifier: minimizes the label-classification error so that the source-domain data is classified accurately.

(2) Domain classifier: minimizes the domain-classification error so that the domain each feature comes from is identified correctly.

(3) Feature extractor: on the one hand, it minimizes the label-classification error to obtain discriminative features; on the other hand, it maximizes the domain-classification error to obtain domain-invariant features.
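Putting the three parts together, the training objective in the paper can be written (up to notation) as the saddle-point problem below. Here G_f, G_y, G_d are the feature extractor, label classifier, and domain classifier with parameters theta_f, theta_y, theta_d; L_y and L_d are the label and domain losses; d_i = 0 marks source samples; lambda trades off the two terms.

```latex
E(\theta_f, \theta_y, \theta_d)
  = \sum_{i \,:\, d_i = 0} L_y\!\big(G_y(G_f(x_i; \theta_f); \theta_y),\, y_i\big)
  \;-\; \lambda \sum_{i} L_d\!\big(G_d(G_f(x_i; \theta_f); \theta_d),\, d_i\big)

(\hat\theta_f, \hat\theta_y) = \arg\min_{\theta_f, \theta_y} E(\theta_f, \theta_y, \hat\theta_d),
\qquad
\hat\theta_d = \arg\max_{\theta_d} E(\hat\theta_f, \hat\theta_y, \theta_d)
```

The minimization over (theta_f, theta_y) covers items (1) and (3a) above, while the maximization over theta_d, combined with the reversed gradient reaching theta_f, covers items (2) and (3b).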