Thesis 110022002: Detailed Record




Author: Wei-Chen Lin (林溦蓁)    Department: Master Program in Remote Sensing Science and Technology
Thesis Title: Identifying Class-Discriminative Spectral Features in a Multispectral Land Cover Classification Network with Modified Explainable Artificial Intelligence Methods
(Chinese title: 修改可解釋人工智慧方法辨識多光譜土地覆蓋分類網路模型之具類別判別性光譜特徵)
Related Theses
★ An Interoperability Solution for IoT Actuation Functions
★ GeoWeb Crawler: An Extensible and Scalable Architecture for Crawling GeoWeb Resources
★ Improvement of a TDR Monitoring Platform and Establishment of Sensor Observation Services
★ Generating Nearshore Bathymetry from High-Resolution Satellite Stereo Pairs
★ Establishing an Open IoT Architecture by Integrating the oneM2M and OGC SensorThings API Standards
★ A Multi-Attribute Indexing Framework for Massive IoT Data
★ An Efficient Recognition System for Heterogeneous Time-Series Data Representations
★ A TOA-reflectance-based Spatial-temporal Image Fusion Method for Aerosol Optical Depth Retrieval
★ An Automatic Embedded Device Registration Procedure for the OGC SensorThings API
★ A Personalized GeoWeb Search Engine Based on Ontology and User Interests
★ Integrating City Models and Open IoT Standards with Ontology for Smart City Applications
★ Concrete Bridge Crack Detection Using UAVs and Image Registration
★ GeoRank: A Geospatial Web Ranking Algorithm for a GeoWeb Search Engine
★ Monitoring Sea Water Coverage by Fusing High Spatiotemporal-Resolution Remote Sensing Images
★ LoRaWAN Positioning based on Time Difference of Arrival and Differential Correction
★ Reverse Engineering Neural Networks to Understand Remote Sensing Data: A Case Study of Landsat 8 Vegetation Classification
Files: the full text can be browsed in the repository system after 2028-12-14.
Abstract (Chinese) Remote sensing satellite imagery provides periodic, wide-area observations and is widely used to analyze the development of and changes in environmental phenomena on Earth. In recent years, with advances in deep learning, many Artificial Neural Networks (ANNs) have been applied to remote sensing data processing tasks such as image fusion, feature extraction, and land use and land cover classification. Although ANN designs have grown increasingly complex in pursuit of better performance, the logic behind them is hidden within many layers of network computation and is difficult for researchers and users to understand, so ANNs are often regarded as "black boxes." To address this problem, the literature has proposed Explainable Artificial Intelligence (XAI) methods to observe how these black boxes operate internally. The objective of this study is therefore to use XAI methods to understand the reasoning of ANN models for remote sensing imagery, with land cover classification chosen as the topic to prove the concept. However, existing XAI methods for Convolutional Neural Network (CNN) models focus on identifying the important spatial features a model uses, while spectral features are also essential information in remote sensing image classification; this study therefore adapts existing XAI methods to identify important spectral features. First, this study designs a deep learning network that extracts spatial and spectral features from multispectral imagery and performs land cover classification. The remote sensing images come from the EuroSAT dataset of Sentinel-2 satellite imagery, and the network achieves over 90% classification accuracy. The procedures of three XAI methods, Guided Backpropagation, Gradient-weighted Class Activation Mapping (Grad-CAM), and Guided Grad-CAM, are then modified so that they can not only identify important spatial features but also separate spectral features. The analyses in this study include: (1) a qualitative analysis that interprets the model's decision basis from visualized saliency maps of each spectral band; (2) a statistical ranking of the saliency maps to analyze the importance of each spectral band when classifying each land cover class; and (3) finally, for each class, retraining a binary classification ANN on the important band combinations identified by the proposed XAI methods to verify their effectiveness. Experimental results show that, compared with the model taking all bands as input, models using the XAI-identified band combinations achieve similarly high accuracy, and for some classes accuracy even improves by 1.5-7%, indicating that the XAI methods can capture the correct spectral information and use it to achieve better performance in remote sensing image classification. Overall, in addition to the original spatial features, the proposed XAI methods can also identify the spectral features that matter for classification, enabling analysis and understanding of the internal mechanisms of neural networks for remote sensing image classification.
Abstract (English) As satellite images provide periodic observations of large areas, Remote Sensing (RS) data can help analyze the development of the Earth and its environmental variation. In recent years, with the advancement of deep learning technology, many Artificial Neural Networks (ANNs) have been proposed to support remote sensing applications such as image fusion, feature extraction, and Land Use and Land Cover (LULC) classification. However, as ANN designs have grown increasingly complex in pursuit of better performance, ANNs have come to be regarded as "black boxes" whose internal logic is hidden behind the scenes. To address this issue, Explainable Artificial Intelligence (XAI) methods were proposed to provide a peek into these black boxes. The objective of this research is therefore to understand the reasoning process of an RS ANN model with XAI methods, with land cover classification chosen to prove the concept. Existing XAI methods designed for Convolutional Neural Networks (CNNs) mainly focus on identifying the important spatial features used by CNN models; however, spectral features are also important in RS image classification. This research therefore modifies existing XAI methods to extract important spectral features. As a first step, this research designs a deep learning network that retrieves spatial and spectral features from multispectral images and performs land cover classification. The EuroSAT dataset, derived from Sentinel-2 satellite imagery, is used, and more than 90% classification accuracy is achieved. Afterward, this research modifies three existing XAI methods, namely Guided Backpropagation, Gradient-weighted Class Activation Mapping (Grad-CAM), and Guided Grad-CAM, to retrieve not only the spatial but also the spectral features learned by the deep learning network.
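The thesis record itself contains no code; as a rough illustration of the two unmodified building blocks named above (the per-band "spectrum separable" modification described in the thesis is not reproduced here), a minimal NumPy sketch of the Guided Backpropagation ReLU rule and the standard Grad-CAM computation might look like this:

```python
import numpy as np

def guided_relu_backward(grad_out, pre_activation):
    """Guided Backpropagation rule for one ReLU layer: a gradient is passed
    back only where the forward input was positive AND the incoming
    gradient is positive; everything else is zeroed out."""
    return grad_out * (pre_activation > 0) * (grad_out > 0)

def grad_cam(activations, gradients):
    """Standard Grad-CAM on a conv layer with K feature maps of size H x W:
    weight each map by the global-average-pooled gradient of the class
    score, sum the weighted maps, and apply ReLU."""
    weights = gradients.mean(axis=(1, 2))                      # alpha_k, shape (K,)
    cam = (weights[:, None, None] * activations).sum(axis=0)   # shape (H, W)
    return np.maximum(cam, 0.0)
```

In a full implementation the ReLU rule would be registered as a backward hook on every ReLU layer, and the Grad-CAM map would be upsampled to the input resolution before visualization.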
The analysis of the results includes: (1) a qualitative analysis based on the visual saliency maps of each spectral band to interpret the reasoning basis of the deep learning network; (2) a quantitative analysis that ranks the important spectral bands of each class based on the saliency maps; and (3) finally, binary classification ANN models trained for each class on the important spectral bands identified by the proposed XAI methods, to verify the identified bands.
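The quantitative step (2) aggregates per-band saliency into a ranking. The thesis's exact ranking rules are in Section 4.2; one plausible scoring, assumed here for illustration, is the mean absolute saliency of each band over all samples of a class:

```python
import numpy as np

def rank_bands(saliency_maps):
    """saliency_maps: array of shape (N, B, H, W) holding the per-band
    saliency of N samples of one land cover class over B spectral bands.
    Score each band by its mean absolute saliency, then rank descending."""
    scores = np.abs(saliency_maps).mean(axis=(0, 2, 3))  # one score per band, shape (B,)
    order = np.argsort(scores)[::-1]                     # band indices, most important first
    return order, scores
```

Running this per class yields a band-importance table comparable to the statistical ranking the abstract describes.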
In summary, the experimental results indicate that models constructed with the identified spectral bands achieve accuracies similar to those of the model using all bands as inputs, and the classification accuracies for some classes even increase by 1.5-7%. Hence, XAI methods can capture the important spectral information in RS land cover classification and could help achieve better accuracy. Overall, this research demonstrates that, besides spatial features, the proposed XAI methods can also identify important spectral features, enabling a better understanding of the underlying mechanisms of an RS land cover classification ANN model.
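The verification step, retraining one binary (one-vs-rest) model per class on only the identified bands, reduces to two data-preparation operations. A minimal sketch, with the helper names chosen here for illustration (the thesis's 13-band Sentinel-2 input is assumed):

```python
import numpy as np

def select_bands(images, band_indices):
    """images: array of shape (N, B, H, W); keep only the spectral bands
    identified as important, so the binary model is retrained on a subset."""
    return images[:, band_indices, :, :]

def one_vs_rest_labels(labels, target_class):
    """Binary labels for the per-class verification model: 1 for samples of
    the target land cover class, 0 for all others."""
    return (labels == target_class).astype(np.int64)
```

The subset model's accuracy can then be compared against the all-band model, as in the abstract's 1.5-7% result.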
Keywords (Chinese)
★ land cover classification (土地覆蓋分類)
★ deep learning (深度學習)
★ explainable artificial intelligence (可解釋人工智慧)
★ multispectral (多光譜)
★ saliency map (顯著圖)
Keywords (English) ★ Land cover classification
★ deep learning
★ Explainable Artificial Intelligence
★ multi-spectral
★ saliency maps
Table of Contents
Abstract (Chinese) i
Abstract iii
Table of Contents v
List of Figures vii
List of Tables xii
1. Introduction 1
1.1 Background 1
1.2 Explainable Artificial Intelligence (XAI) 2
1.3 Objectives 4
2. Literature Review 6
2.1 The “black box” problem 7
2.2 Explainable Artificial Intelligence (XAI) 8
2.3 XAI methods 8
2.3.1 Post-hoc explanation methods 9
3. Methodology 16
3.1 Forward land cover classification model 17
3.2 Dataset 20
3.3 Guided Backpropagation 23
3.4 Grad-CAM 24
3.5 Guided Grad-CAM 24
3.6 Spectrum separable deconvolution modifications 25
4. Experimental Results and Discussion 28
4.1 Saliency maps 30
4.1.1 Guided Backpropagation (GBP) method 32
4.1.2 Grad-CAM (GC) method 44
4.1.3 Guided Grad-CAM (GGC) method 55
4.2 Statistical analysis 66
4.2.1 Ranking rules 66
4.2.2 Ranking scores 66
4.3 Validation 74
5. Conclusions and Future Work 79
References 81
Advisor: 黃智遠    Date of Approval: 2024-01-19
