Transfer Feature Learning with Joint Distribution Adaptation
COMP7404 Group 16 | Qu Ziqi, He Qing, Lu Zhihua, Liu Zhengyang
Domain adaptation aims to learn effective classifiers for a target domain using only labeled data from a source domain. The key challenge is that the source and target domains often follow different distributions.
where M₀ is the MMD matrix for the marginal distributions and M_c (c = 1, …, C) are the MMD matrices for the class-conditional distributions.
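The MMD matrices above can be sketched in NumPy. This is a minimal reconstruction of the standard JDA construction (not the authors' code): M₀ uses all samples in each domain, while each M_c uses only the source samples of class c and the target samples pseudo-labeled c.

```python
import numpy as np

def mmd_matrix(ns, nt, c=None, ys=None, yt_pseudo=None):
    """Build the MMD matrix M_0 (c=None) or M_c for class c.

    Entries follow the outer-product form: +1/n_s^2 within the source,
    +1/n_t^2 within the target, and -1/(n_s * n_t) across domains,
    restricted to class c when c is given. ys are source labels,
    yt_pseudo are target pseudo-labels.
    """
    n = ns + nt
    if c is None:
        # Marginal MMD matrix M_0 over all samples.
        e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])
    else:
        # Conditional MMD matrix M_c over class-c samples only.
        e = np.zeros(n)
        src = np.where(ys == c)[0]               # source samples in class c
        tgt = ns + np.where(yt_pseudo == c)[0]   # target samples pseudo-labeled c
        if len(src):
            e[src] = 1.0 / len(src)
        if len(tgt):
            e[tgt] = -1.0 / len(tgt)
    return np.outer(e, e)
```

Note that tr(AᵀX M₀ XᵀA) then equals the squared MMD between the projected source and target means, which is what the objective minimizes.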
Simultaneously adapts both marginal P(Xs) ≈ P(Xt) and conditional P(Y|Xs) ≈ P(Y|Xt) distributions
Uses pseudo-labels to iteratively improve conditional distribution adaptation
Learns optimal projection A such that Z = AᵀX minimizes domain discrepancy
Outperforms state-of-the-art methods on digit recognition, object classification, and face recognition
JDA iteratively refines pseudo-labels for target domain to improve conditional distribution adaptation
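The iterative procedure above can be sketched end to end. This is a simplified reconstruction under stated assumptions, not the reference implementation: each iteration assembles M = M₀ + Σ_c M_c, solves the generalized eigenproblem (X M Xᵀ + λI) a = φ (X H Xᵀ) a for the k smallest eigenvalues, and refreshes target pseudo-labels with a 1-NN classifier in the projected space (a small ridge term is added to keep the right-hand matrix positive definite).

```python
import numpy as np
from scipy.linalg import eigh

def jda(Xs, Xt, ys, k=10, lam=1.0, n_iter=10):
    """Sketch of the JDA loop. Xs: (d, ns), Xt: (d, nt), ys: (ns,)."""
    X = np.hstack([Xs, Xt])
    d, n = X.shape
    ns, nt = Xs.shape[1], Xt.shape[1]
    H = np.eye(n) - np.ones((n, n)) / n        # centering matrix
    classes = np.unique(ys)
    yt = None                                   # target pseudo-labels
    A = None
    for _ in range(n_iter):
        # M = M_0 (marginal) + sum over classes of M_c (conditional).
        e = np.concatenate([np.ones(ns) / ns, -np.ones(nt) / nt])
        M = np.outer(e, e)
        if yt is not None:
            for c in classes:
                ec = np.zeros(n)
                src = np.where(ys == c)[0]
                tgt = ns + np.where(yt == c)[0]
                if len(src):
                    ec[src] = 1.0 / len(src)
                if len(tgt):
                    ec[tgt] = -1.0 / len(tgt)
                M += np.outer(ec, ec)
        # Generalized eigenproblem: (X M X^T + lam I) a = phi (X H X^T) a.
        lhs = X @ M @ X.T + lam * np.eye(d)
        rhs = X @ H @ X.T + 1e-6 * np.eye(d)    # ridge keeps rhs positive definite
        _, vecs = eigh(lhs, rhs)                # eigenvalues in ascending order
        A = vecs[:, :k]                         # k smallest eigenvectors
        Z = A.T @ X                             # projected features
        Zs, Zt = Z[:, :ns], Z[:, ns:]
        # 1-NN pseudo-labeling of the target in the shared subspace.
        d2 = ((Zt.T[:, None, :] - Zs.T[None, :, :]) ** 2).sum(-1)
        yt = ys[np.argmin(d2, axis=1)]
    return A, yt
```

With yt = None on the first pass, the loop reduces to marginal-only adaptation (TCA-style); the conditional terms enter from the second iteration onward as pseudo-labels become available.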
Format: Ours (Paper) — reproduced accuracy, with the paper-reported value in parentheses.
| Task | Type | NN | PCA | GFK | TCA | TSL | JDA |
|---|---|---|---|---|---|---|---|
Accuracy on 36 cross-domain image tasks across 4 dataset types (Digit, COIL, PIE, SURF/Office), showing knowledge adaptation under different transfer difficulties.
Accuracy (%) on the 36 cross-domain image datasets (4 types)
PIE1→PIE2 task: Accuracy and MMD Distance vs iterations (data from fig4_results.csv)
Figure: (c) TCA and (d) JDA similarity matrices.
Figure: (a) k sensitivity, (b) λ sensitivity, (c) accuracy convergence, (d) distance convergence.
Data source: Embedded 36 cross-domain task results (Paper + Ours), see data.json / _embedded_data.txt.
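A small helper can summarize the reproduced-vs-paper comparison from the embedded results. The schema assumed here (each task mapping to `{"ours": ..., "paper": ...}` in data.json) is hypothetical — adapt the field names to the actual file.

```python
import json
import numpy as np

def summarize(path="data.json"):
    """Return (avg ours, avg paper, avg difference) over all tasks.

    Assumes a hypothetical schema: {"task_name": {"ours": float,
    "paper": float}, ...}; difference is defined as Ours - Paper.
    """
    with open(path) as f:
        results = json.load(f)
    ours = np.array([r["ours"] for r in results.values()])
    paper = np.array([r["paper"] for r in results.values()])
    diff = ours - paper
    return ours.mean(), paper.mean(), diff.mean()
```

The returned averages are what the summary table's "Avg JDA (Ours)", "Avg JDA (Paper)", and "Avg Difference" rows report.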
Comparison of reproduced JDA accuracy against the paper-reported values; Difference = Ours − Paper.
| Dataset Type | JDA (Ours) | JDA (Paper) | Difference (Ours − Paper) |
|---|---|---|---|
Title: Transfer Feature Learning with Joint Distribution Adaptation
Authors: Mingsheng Long, Jianmin Wang, Guiguang Ding, Jiaguang Sun, Philip S. Yu
Conference: ICCV 2013
| Summary | Value |
|---|---|
| Total Tasks | 36 |
| Methods | NN, PCA, GFK, TCA, TSL, JDA |
| Avg JDA (Paper) | - |
| Avg JDA (Ours) | - |
| Avg Difference | - |