In recent years, deep convolutional neural networks have achieved remarkable success in image recognition. This success depends on large-scale labeled training data, yet acquiring labels in real-world scenarios is very costly, which has motivated research on transfer learning. Transfer learning addresses the problem of insufficient training data by relaxing the conventional assumption that training and test data are independent and identically distributed: its goal is to transfer knowledge from a source domain to a target domain, so that a deep learning model can still be trained with existing labeled data even when the target dataset contains only a small amount of labeled data, or none at all, thereby reducing collection cost. Recently, adversarial transfer learning has increasingly been applied to domain adaptation. It trains an additional domain discriminator against the feature extractor in a min-max game: by minimizing the discriminator's loss while reversing its gradient during backpropagation, the feature extractor learns domain-invariant features that bring the source and target distributions closer together. The architecture proposed in this thesis is based on the Collaborative and Adversarial Network (CAN), the first high-accuracy model to pair the features of all blocks with domain discriminators: the shallow blocks learn domain-specific features, while the last block learns domain-invariant features through a gradient reversal layer.
Our architecture likewise uses the features of all blocks to improve transfer, combining them through weighted dense connections together with a gradient reversal layer. In addition, we add a classifier to the domain discriminator so that the network retains its classification ability while learning domain-invariant features, strengthening its domain adaptation capability. In experiments on three transfer tasks of the Office dataset, the proposed architecture demonstrates better transfer ability than CAN, with accuracy about 0.3~0.5 higher in every task.
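As a rough illustration of the gradient reversal mechanism described above, the sketch below shows a gradient reversal layer (GRL) in plain NumPy: the forward pass is the identity, while the backward pass multiplies the incoming gradient by a negative factor, so the feature extractor is pushed to maximize the domain discriminator's loss even as the discriminator minimizes it. This is a minimal conceptual sketch with made-up function names (`grl_forward`, `grl_backward`, scale `lam`), not the thesis implementation.

```python
import numpy as np

def grl_forward(x):
    # Forward pass of the gradient reversal layer: identity.
    return x

def grl_backward(grad_output, lam=1.0):
    # Backward pass: negate (and scale by lam) the gradient flowing
    # from the domain discriminator back into the feature extractor,
    # turning the discriminator's minimization into the extractor's
    # maximization (the adversarial min-max game).
    return -lam * grad_output

# Toy usage: features pass through unchanged in the forward direction.
features = np.array([0.5, -1.2, 3.0])
assert np.allclose(grl_forward(features), features)

# In the backward direction the discriminator's gradient is reversed.
grad_from_discriminator = np.array([0.1, -0.2, 0.3])
grad_to_extractor = grl_backward(grad_from_discriminator, lam=0.5)
# → [-0.05, 0.1, -0.15]
```

In practice this would be implemented as a custom autograd operation inside the network, but the sign flip on the backward path is the entire mechanism.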