Thesis Record 108022003: Detailed Information




Author  Cheng-Hsien Hsieh (謝承憲)    Graduate Program  Master Program in Remote Sensing Science and Technology
Thesis Title  以深度學習進行遙測影像植生區域偵測
(Vegetation Region Detection for RSI Based on Deep Learning)
Related Theses
★ Hyperspectral Image Classification Using Multiple-Kernel Feature Line Embedding
★ License Plate Verification Based on the SIFT Algorithm
★ Improving the Focusing Problem of Traditional Hill-Climbing Algorithms with an Adaptive Weight Estimation Mechanism
★ Face Recognition Using Kernel Fuzzy Nearest Feature Line Transformation
★ Face Recognition Using Fuzzy Nearest Feature Line Transformation
★ 3D Handwritten Chinese Character Feature Extraction Based on Leap Motion
★ Strengthening VPN Identity Authentication with Face Recognition
★ Face Recognition Using Kernel Nearest Feature Line Transformation
★ Fall Detection Using Neighboring Nearest Feature Space Transformation
★ A Spatial, Spectral, and Temporal Deep Learning Framework Using Sentinel-2 Imagery to Map Mangrove Degradation in Southwest Florida Caused by Hurricane Irma in 2017
★ Detecting Pre-Earthquake Ionospheric Anomalies with Deep Learning
★ Applications of Satellite Precipitation Data to High-Impact Weather and Landslide Events
★ Feasibility Assessment of Rainfall Prediction over Taiwan Based on ViT and Himawari-8 Meteorological Satellite Imagery
★ Land Use Analysis of Taiwan Based on SPOT-7 Satellite Imagery
★ Support-Vector-Based Feature Line Transformation for Hyperspectral Image Recognition
Full Text  Available for browsing in the system after 2026-08-01
Abstract (Chinese)  Some remote sensing images contain glare or noise in certain areas, such as around buildings, which easily causes problems in land-cover classification. The objective of this study is to detect green-space coverage in urban areas, that is, to separate vegetation from non-vegetation regions. However, vegetation pixels near such areas are disturbed to some extent, which complicates manual interpretation. We therefore use Sentinel-2 imagery as a reference and manually select vegetation and non-vegetation pixels as the ground truth for training and testing, and the study area is extracted from the radiometrically corrected target image as the source of training and test samples. The study area of this experiment is the Zhunan-Toufen Urban Planning Area, and clear, cloud-free images of this area are selected to reduce errors and improve accuracy during training and testing. Three deep learning approaches, a deep multilayer perceptron (Deep MLP), Spectral DenseNet, and Spectral-Spatial DenseNet, are used for model training, with samples randomly selected as training and test data. Finally, the classification results of the three deep learning approaches are compared with the classification obtained by thresholding the NDVI vegetation index to evaluate the classification ability of the models.
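The NDVI-threshold baseline referred to above can be illustrated with a short Python sketch. This is a minimal example under stated assumptions, not the code used in the thesis: it assumes Sentinel-2 band GeoTIFFs for red (B04) and near-infrared (B08), the rasterio library, and an illustrative threshold of 0.3; the file names in the usage comment are hypothetical.

```python
# Minimal sketch of an NDVI-threshold vegetation mask (illustrative only).
# Assumptions (not from the thesis): Sentinel-2 L2A band GeoTIFFs for red (B04)
# and NIR (B08), the rasterio library, and a threshold of 0.3.
import numpy as np
import rasterio

def ndvi_vegetation_mask(red_path: str, nir_path: str, threshold: float = 0.3) -> np.ndarray:
    """Return a boolean mask of pixels whose NDVI exceeds the threshold."""
    with rasterio.open(red_path) as red_src, rasterio.open(nir_path) as nir_src:
        red = red_src.read(1).astype(np.float32)
        nir = nir_src.read(1).astype(np.float32)
    # NDVI = (NIR - Red) / (NIR + Red); guard against division by zero.
    ndvi = (nir - red) / np.maximum(nir + red, 1e-6)
    return ndvi > threshold

# Example usage with hypothetical file names:
# mask = ndvi_vegetation_mask("T51QUG_B04_10m.tif", "T51QUG_B08_10m.tif")
# print("vegetation pixels:", int(mask.sum()))
```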
Abstract (English)  In particular situations, noise may appear in high-reflectance regions, e.g., built-up areas, and it directly affects the classification results and accuracy in vegetation regions. The purpose of this research is therefore to detect green-space coverage in urban areas, that is, to identify vegetation and non-vegetation areas. However, vegetation pixels near such areas are disturbed to some extent, which causes problems in manual interpretation. Therefore, we use Sentinel-2 imagery as a reference and manually select pixels as the ground-truth data. The training and test samples were then selected from the radiometrically corrected target image based on this ground truth. The research area of this experiment is the "Zhunan-Toufen Urban Planning Area"; clear, cloud-free images of this area are selected, which reduces errors and improves accuracy during training and testing. The experiment uses three different deep learning methods, Deep MLP, Spectral DenseNet, and Spectral-Spatial DenseNet, for model training, and randomly selects samples as training and test data. Finally, the classification results of these three deep learning methods are compared with the classification results obtained from NDVI and EVI vegetation index thresholds to evaluate the classification ability of the models.
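To make the deep learning side more concrete, below is a minimal sketch of a densely connected 1-D classifier over per-pixel spectra, in the spirit of the Spectral DenseNet named in the abstract. It is a sketch under assumptions, not the architecture reported in the thesis: PyTorch, a growth rate of 16, three layers per dense block, and a two-class head are all illustrative choices.

```python
# Minimal sketch of a "spectral" densely connected classifier over per-pixel
# band vectors. Illustrative only; the thesis architecture is not specified here.
import torch
import torch.nn as nn

class SpectralDenseBlock(nn.Module):
    def __init__(self, in_channels: int, growth_rate: int = 16, num_layers: int = 3):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.BatchNorm1d(channels),
                nn.ReLU(inplace=True),
                nn.Conv1d(channels, growth_rate, kernel_size=3, padding=1),
            ))
            channels += growth_rate  # dense connectivity: features are concatenated
        self.out_channels = channels

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class SpectralDenseNet(nn.Module):
    """Classify each pixel from its spectrum (input shape: [batch, 1, n_bands])."""
    def __init__(self, n_bands: int, n_classes: int = 2):
        super().__init__()
        self.stem = nn.Conv1d(1, 16, kernel_size=3, padding=1)
        self.block = SpectralDenseBlock(16)
        self.head = nn.Linear(self.block.out_channels, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.block(self.stem(x))   # [batch, channels, n_bands]
        x = x.mean(dim=2)              # global average pooling over the bands
        return self.head(x)            # class logits (vegetation / non-vegetation)

# Example usage with 10 Sentinel-2 bands (hypothetical):
# model = SpectralDenseNet(n_bands=10)
# logits = model(torch.randn(4, 1, 10))  # -> shape [4, 2]
```

A spectral-spatial variant would, in general practice, operate on small image patches with 2-D or 3-D convolutions instead of per-pixel spectra.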
Keywords (Chinese) ★ Satellite imagery
★ Deep learning
★ Vegetation detection
Keywords (English)
Table of Contents
Abstract (Chinese) vi
Abstract (English) vii
Table of Contents viii
List of Figures x
List of Tables xii
Chapter 1 Introduction 1
1.1 Research Motivation 1
1.2 Research Objectives 4
1.3 Thesis Organization 5
Chapter 2 Related Literature 6
2.1 Related Work 6
2.2 Satellite Remote Sensing Imagery 11
2.3 Radiometric Correction 13
2.4 Vegetation Index (VI) 18
2.5 Multilayer Perceptron (MLP) 19
2.6 Convolutional Neural Network (CNN) 20
2.7 Residual Neural Network 21
2.8 Densely Connected Network (DenseNet) 23
Chapter 3 Methodology 26
3.1 NDVI and EVI2 26
3.2 Deep Multilayer Perceptron 28
3.3 Spectral DenseNet 29
3.4 Spectral-Spatial DenseNet 30
Chapter 4 Experimental Results 31
4.1 Study Area and Data Selection 31
4.2 Experiment Description 32
4.3 Experimental Results 33
Chapter 5 Conclusions and Future Work 38
References 39
Advisor  Ying-Nong Chen (陳映濃)    Date of Approval  2021-07-14