Thesis/Dissertation 109522153: Full Metadata Record

DC field: value [language]
dc.contributor: 資訊工程學系 (Department of Computer Science and Information Engineering) [zh_TW]
dc.creator: 陳昭瑋 [zh_TW]
dc.creator: Chao-Wei Chen [en_US]
dc.date.accessioned: 2022-09-30T07:39:07Z
dc.date.available: 2022-09-30T07:39:07Z
dc.date.issued: 2022
dc.identifier.uri: http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=109522153
dc.contributor.department: 資訊工程學系 (Department of Computer Science and Information Engineering) [zh_TW]
dc.description: 國立中央大學 [zh_TW]
dc.description: National Central University [en_US]
dc.description.abstract [zh_TW, translated]: Rice is one of Taiwan's important crops. From time to time the government needs to know the status of rice cultivation, such as the planted areas and their acreage, in order to compile yield statistics and make related policy decisions. The traditional approach is to interpret and digitize every remote sensing image through manual annotation. With the recent development of artificial intelligence techniques, using such techniques to assist experts in interpreting remote sensing images can reduce the demand for human labor. Our team therefore uses deep learning to build a rice interpretation model that takes aerial images as input and outputs a rice / non-rice classification, reducing the human effort needed to inspect and annotate rice in aerial images.

After obtaining parcel vector map data, our team previously started from the pixel-based UNet-VGG16 of our earlier work [1], added the parcel vector map to the dataset, extracted parcel information from it, and tried to achieve parcel-based interpretation with minimal changes, which resulted in the parcel-based UNet-FNN model [2]. Its design leaves UNet-VGG16 unmodified: feature maps from the later layers of UNet-VGG16 are processed together with the parcel information to produce parcel-based data, which is fed into a second model, an FNN (fully-connected neural network), to perform parcel-based interpretation. This gives more accurate test results than the pixel-based UNet-VGG16, but this way of using parcel information makes model training and testing very time-consuming, and the FNN design requires a large amount of data that conforms to specific rules.

This study uses the parcel information differently from UNet-FNN: the parcel information is applied directly to extract parcel-based image data from the aerial images, and a network architecture different from UNet-FNN, called VGG16BN-G, is proposed to perform parcel-based interpretation directly on the parcel-based image data.

Except for data that cannot be changed because of the UNet-FNN design, this study trains the UNet-VGG16, UNet-FNN, and proposed VGG16BN-G models with training data kept as similar as possible across the adjustable datasets.

The contribution of this study is a parcel-based rice interpretation model, VGG16BN-G. Under the same test design, t-test results show that VGG16BN-G needs only about 20% of UNet-FNN's training time and about 6% of its training data to reach performance comparable to UNet-VGG16 and UNet-FNN with no significant difference, and box plots of the evaluation metrics show that VGG16BN-G is about as stable as UNet-FNN and more stable than UNet-VGG16. Finally, based on our team's experience, we propose a guideline for preparing aerial images for rice interpretation, including a conceptual workflow for checking image quality.
dc.description.abstract [en_US]: Rice is one of the most important crops in Taiwan. The government needs to know the status of rice cultivation from time to time, such as the planting locations and planted area, for yield statistics and decision making. Traditionally, each remote sensing image is interpreted and digitized through manual annotation. In recent years, with the development of artificial-intelligence-related technology, using such technology to assist experts in interpreting remote sensing images can reduce the demand for human resources, reduce possible misjudgments caused by manual interpretation, and improve operational efficiency. Our team therefore uses deep learning to build a rice interpretation model that takes aerial images as input and outputs a classification of rice versus non-rice areas, which reduces the human effort needed to view and annotate rice on aerial images.

In the past, after obtaining the parcel vector map data, our team added the parcel vector map to the dataset, extracted the parcel information from it, and, starting from the pixel-based UNet-VGG16 of our earlier work [1], tried to implement parcel-based interpretation with minimal changes. Instead of modifying the UNet-VGG16 model, the design takes the feature maps from the later layers of UNet-VGG16 and processes them together with the parcel information to generate parcel-based data, which is used as input to another model, an FNN (fully-connected neural network), to achieve parcel-based interpretation [2]. This approach gives more accurate test results than the pixel-based UNet-VGG16, but its use of parcel information is too time-consuming in model training and testing, and the FNN design requires a large amount of data that conforms to certain rules.

This study uses the parcel information in a different way from UNet-FNN: the parcel information is used directly to extract parcel-based image data from the aerial images, and a network architecture different from UNet-FNN, called VGG16BN-G, is proposed to perform parcel-based interpretation directly on the parcel-based image data.

As the contribution of this study, the results of the same test design and t-test show that VGG16BN-G requires only about 20% of the training time and about 6% of the training data of UNet-FNN to achieve performance similar to UNet-VGG16 and UNet-FNN with no significant difference. Box plots show that VGG16BN-G has stability similar to UNet-FNN and better than UNet-VGG16. Finally, based on the experience of our research team, we propose a guideline for preparing aerial images for rice interpretation, including a conceptual workflow for checking image quality.
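The abstracts above describe using the parcel vector map to extract parcel-based image data directly from the aerial images. The following is a minimal illustrative sketch of that kind of step, not the thesis' actual pipeline; the file names ("parcels.shp", "aerial.tif") and the choice of geopandas/rasterio are assumptions made for the example.

    # Illustrative sketch (not the thesis' code): crop one image patch per
    # parcel polygon from an aerial image, masking out pixels that fall
    # outside the parcel.
    import geopandas as gpd
    import rasterio
    from rasterio.mask import mask

    parcels = gpd.read_file("parcels.shp")            # parcel vector map
    with rasterio.open("aerial.tif") as src:          # aerial image
        parcels = parcels.to_crs(src.crs)             # align coordinate systems
        patches = []
        for geom in parcels.geometry:
            # Crop to the parcel's bounding box and zero out pixels
            # outside the parcel polygon.
            patch, _ = mask(src, [geom], crop=True, nodata=0)
            patches.append(patch)                     # array of shape (bands, H, W)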
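The proposed model, VGG16BN-G, classifies such parcel-based image data as rice or non-rice; its exact architecture is not given in this record. Purely as a hedged sketch, a VGG16-with-batch-normalization backbone with a two-class head could look like the following; PyTorch/torchvision, the pooling head, and the input size are assumptions, not the thesis' design.

    # Hedged sketch of a VGG16+BatchNorm parcel classifier (rice / non-rice).
    # This is NOT the thesis' VGG16BN-G architecture; it only shows how a
    # torchvision vgg16_bn backbone could be reused for parcel patches.
    import torch
    import torch.nn as nn
    from torchvision import models

    class ParcelRiceClassifier(nn.Module):
        def __init__(self, num_classes: int = 2):
            super().__init__()
            backbone = models.vgg16_bn(weights=None)   # VGG16 with batch norm
            self.features = backbone.features          # convolutional layers
            self.pool = nn.AdaptiveAvgPool2d(1)        # tolerate varying patch sizes
            self.classifier = nn.Linear(512, num_classes)

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            x = self.features(x)                       # (N, 512, h, w)
            x = self.pool(x).flatten(1)                # (N, 512)
            return self.classifier(x)                  # parcel-level logits

    # Example: one RGB parcel patch resized to 224x224 pixels
    logits = ParcelRiceClassifier()(torch.randn(1, 3, 224, 224))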
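The abstracts also mention comparing the models with a t-test and finding no significant performance difference. A minimal sketch of that kind of check, using SciPy's independent-samples t-test on per-run test metrics (the F1 values below are made up for illustration):

    # Minimal sketch: independent-samples t-test on metrics from repeated runs.
    from scipy import stats

    vgg16bn_g_f1 = [0.91, 0.92, 0.90, 0.93, 0.91]   # illustrative values only
    unet_fnn_f1  = [0.92, 0.91, 0.92, 0.90, 0.92]   # illustrative values only

    t_stat, p_value = stats.ttest_ind(vgg16bn_g_f1, unet_fnn_f1)
    print(f"t = {t_stat:.3f}, p = {p_value:.3f}")   # p >= 0.05 -> no significant difference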
dc.subject: 坵塊 [zh_TW]
dc.subject: 航攝影像 [zh_TW]
dc.subject: 水稻判釋 [zh_TW]
dc.subject: 語意分割 [zh_TW]
dc.subject: 卷積神經網路 [zh_TW]
dc.subject: Parcel [en_US]
dc.subject: Aerial image [en_US]
dc.subject: Rice interpretation [en_US]
dc.subject: Semantic segmentation [en_US]
dc.subject: Convolutional neural network [en_US]
dc.title: 應用卷積神經網路於航攝影像做基於坵塊的水稻判釋之研究 [zh_TW]
dc.language.iso: zh-TW
dc.title: Application of Convolutional Neural Networks to Aerial Images for Parcel-based Rice Interpretation [en_US]
dc.type: 博碩士論文 (master's/doctoral thesis) [zh_TW]
dc.type: thesis [en_US]
dc.publisher: National Central University [en_US]
