Master's/Doctoral Thesis 111525007: Detailed Record




Author: 黃皇舜 (Huang-Shun Huang)    Graduate Program: Graduate Institute of Software Engineering
Thesis Title: Parcel Separation and Merging Detection System (坵塊分離、合併檢測系統)
Related Thesis: ★ A Study on Building a Rice Interpretation Model Using Aerial Images of the Yunlin and Changhua Regions
Full Text: viewable only within the system (never open to public access)
Abstract (Chinese): Every year, the government uses parcel vector maps with crop labels (hereafter, parcel maps) to manage parcel information and track how parcels are used. Different crops have different growing seasons, so within a single year parcels may be merged or separated as different crops are planted, and the parcel maps must be updated as the parcels change. However, fewer than 1% of parcels actually change, and manually inspecting every parcel for changes is extremely time-consuming; using an AI classifier to detect changed parcels in aerial images has therefore become an important problem. Against this background, a previous research team proposed a method that uses a binary classifier to judge whether a parcel is in separated use. That work has two drawbacks: first, its method can only find separated parcels, not merged ones, which limits its practical usefulness; second, its model performed poorly in real testing of parcel separation detection. This study therefore proposes a solution for detecting both parcel separation and merging. The solution comprises a data synthesis method that generates "simulated parcel separation patterns," a binary classifier that judges whether a parcel is in separated or merged use, and a way to apply this classifier to the parcel merging detection task. The proposed data synthesis method effectively resolves the extreme class imbalance of the dataset: we train the binary classifier on the synthesized data so that it learns the image features of separated and merged parcels, then use the trained classifier to detect parcels in real aerial images. The contribution of this study is training a highly accurate classifier from a small, extremely imbalanced dataset and applying it to the parcel separation and merging detection tasks. In real testing of parcel separation detection, the proposed method achieved a higher F1-Score than the existing method, and it also achieved a good F1-Score in real testing of parcel separation and merging detection. The benefit of this study is that the classifier can find the fewer-than-1% of parcels that have been separated or merged, saving manpower and time.
Abstract (English): Every year, the government uses polygon vectors containing crop labels to manage information about parcels and monitor their usage. However, different crops have different growing seasons, causing parcels to merge or separate throughout the year due to the planting of different crops. Therefore, these polygon vectors need to be updated whenever there are changes. However, less than 1% of parcels undergo changes, so manually checking each parcel for changes is very time-consuming. As a result, using AI classifiers to detect changes from aerial images has become an important issue.
Previous research has proposed a binary image classifier to determine whether a parcel has been separated. However, this approach has two drawbacks. Firstly, the method can only identify separated parcels, not merged ones, limiting its practical application. Secondly, the model proposed by that research did not perform well in real testing on the parcel separation detection task.
To address these issues, this study proposes a solution for detecting both parcel separations and merges. The solution includes a data synthesis method to generate "simulated parcel separation patterns", a binary classifier to determine parcel separation or merging, and a method to apply this classifier to the detection task. The proposed data synthesis method effectively addresses the issue of extremely imbalanced datasets. We use the synthesized data to train the binary classifier, enabling it to learn the image features of parcel separation and merging, and then use the trained classifier to detect changes in real aerial images.
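The data synthesis idea described above can be sketched in code. The thesis does not spell out the synthesis procedure here, so this is a minimal illustration under assumed details: a single-parcel image is split along a random straight line through its centre, and one side's pixel statistics are shifted to mimic a second crop. The function name `synthesize_separated_parcel` and all parameters are hypothetical.

```python
import numpy as np

def synthesize_separated_parcel(parcel_img, rng=None):
    """Create a simulated 'separated use' sample from a single-parcel image.

    Hypothetical sketch: split the parcel along a random straight line
    through the image centre, then shift brightness on one side so the
    two halves look like different crops.
    """
    rng = np.random.default_rng(rng)
    h, w, _ = parcel_img.shape
    # Random split line: pick an angle; classify pixels by which side of
    # the line through the image centre they fall on.
    theta = rng.uniform(0, np.pi)
    ys, xs = np.mgrid[0:h, 0:w]
    side = (np.cos(theta) * (xs - w / 2) + np.sin(theta) * (ys - h / 2)) > 0
    out = parcel_img.astype(np.float32).copy()
    # Shift gain and offset on one side to imitate a different crop's tone.
    out[side] = np.clip(out[side] * rng.uniform(0.6, 1.4)
                        + rng.uniform(-30, 30), 0, 255)
    return out.astype(np.uint8), side

# Example on a dummy 64x64 RGB "parcel" image of uniform colour.
img = np.full((64, 64, 3), 120, dtype=np.uint8)
sample, mask = synthesize_separated_parcel(img, rng=0)
```

Samples produced this way would carry the positive ("separated use") label, letting the positive class be grown arbitrarily despite the real data's extreme imbalance.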
This study’s contributions include training a high-accuracy classifier using a small, extremely imbalanced dataset and applying it to parcel separation and merging detection tasks. Additionally, the proposed method achieved higher F1-Scores in real testing for parcel separation detection compared to existing methods and also performed well in parcel merging detection. With this classifier, we can identify less than 1% of parcels that have undergone separation or merging, saving both manpower and time.
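The F1-Score cited above is the harmonic mean of precision and recall, which makes it a suitable metric when fewer than 1% of parcels are positive. A minimal, self-contained computation (the counts in the example are made up for illustration, not results from the thesis):

```python
def f1_score(tp, fp, fn):
    """F1 = harmonic mean of precision and recall, from raw counts."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Example: 45 true positives, 5 false positives, 10 false negatives.
score = f1_score(45, 5, 10)
print(round(score, 3))  # → 0.857
```

Because F1 ignores true negatives, a classifier cannot score well here simply by predicting "no change" for every parcel, unlike plain accuracy.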
Keywords:
★ Parcel Separation Detection (坵塊分離檢測)
★ Parcel Merging Detection (坵塊合併檢測)
★ Deep Learning (深度學習)
★ Binary Classifier (二元分類器)
Table of Contents

Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
1 Introduction
1.1 Research Background
1.2 Research Motivation and Objectives
1.2.1 Problem Definition
1.2.2 Binary Classification Class Definitions
1.3 Research Contributions
1.4 Thesis Organization
2 Related Work
2.1 Parcels and Parcel Maps
2.2 Parcel Separation Detection
2.2.1 Synthesis of Separated Parcel Patterns
2.2.2 VGG16-UNet-GAP
2.3 ResNetV2
2.4 DenseNet201
2.5 MobileNetV3Large
2.6 EfficientNetV2
2.7 ViT
2.8 SwinTransformerV2
3 Methodology
3.1 Data Preprocessing
3.1.1 Synthesis of Separated Parcel Patterns
3.1.2 Generating Single-Parcel Images
3.1.3 Normalization and Image Resizing
3.2 Image Classification Model
3.2.1 SwinTransformerV2 Classifier
3.3 Model Training
3.4 Applying the Model to Parcel Separation and Merging Detection
3.4.1 Parcel Separation Detection Task
3.4.2 Parcel Merging Detection Task
4 Experiments and Discussion of Results
4.1 Evaluation Methods
4.1.1 F1-Score
4.1.2 Two-Tailed T-Test
4.2 Experimental Equipment and Environment
4.3 Experimental Datasets
4.3.1 Experiment 1: Internal Testing
4.3.2 Experiment 2: External Testing
4.4 Experiment 1: Internal Testing
4.4.1 Sub-experiment 1: Comparison of Data Generation Methods
4.4.2 Sub-experiment 2: Comparison of Model Architectures
4.5 Experiment 2: External Testing
4.5.1 Motivation and Objectives
4.5.2 Experimental Method
4.5.3 Experimental Results
5 Conclusion and Future Work
5.1 Conclusion
5.2 Future Work
References
Advisors: 梁德容, 張欽圳, 王尉任, 林家瑜    Approval Date: 2024-8-17
