Thesis 108552010: Detailed Record




Author: Shao-Kai Wu (吳少凱)   Department: Department of Computer Science and Information Engineering, In-Service Master's Program
Thesis Title: Taiwan Building Recognition Model Based on Object Detection Method
(以物件偵測方法建立台灣建物結構辨識模型)
Related Theses
★ A C Language Extension for Conditional Event-Driven Programming
★ Fingerprint Liveness Detection Based on Wavelet Transform with Aggregated LPQ and LBP Features
★ An MLOps System Applying Automated Testing to Machine Learning Pipelines in Heterogeneous Environments
★ Design of a Board Game for Assisted Programming Learning with Visualized Thinking Tools and Programs as Single Steps
★ Static Analysis and Implementation for TOCTOU Vulnerabilities
★ A Domain-Specific Language for Drawing Wind Power Generation Control Logic
★ Design and Implementation of Expressing Relations Between Mathematical Formulas with Bidirectional Structures in Java
★ A Code Transformation Tool Supporting Modular Rule Creation
★ A Static Type Checker for pandas DataFrame Based on Substitution Semantics
★ Design and Implementation of Automated Time Complexity Analysis: Estimating Embedded-System Power Consumption at the Software Level
★ Implementation and Analysis of a Domain-Specific Language for Seismic Tomography
★ Reducing the Number of EEG Channels for Fatigue Detection with Feature Selection
★ A Mechanism Applying Paper Computation and Digitization to Programming Learning to Visualize Procedural Thinking
★ Array Shape Error Detection Based on Abstract Syntax Trees
★ A Mechanism for Learning Recursive Programs with Functional-Programming Thinking Through Cooperative-Learning Role Division
★ An Ownership System with AST-Based Deep Copy and Flexible Aliasing to Solve Representation Exposure in Java
Files: Full text available for browsing in the system after 2031-08-01.
Abstract (Chinese) In recent years, thanks to the development of convolutional neural network (CNN) technology, classifying building structure or use type from street view imagery has produced solid results. These methods typically label a single building type per street view image, train a recognition model with a CNN classification architecture, and then derive the geographic distribution of building structure types, i.e., the exposure map used in disaster risk analysis. However, classifying whole single images has limited success in many basic scenarios. In crowded urban areas such as Taiwan's, one street view image usually contains buildings of multiple classes, so a model trained with a plain CNN classification architecture performs poorly. In this thesis, we use an object detection method to classify the construction material of each individual building. This fundamentally streamlines the whole classification and recognition pipeline and resolves the problem that previously prevented application in crowded cities. The method achieves a 75% mean Average Precision (mAP) on our Taiwan dataset; in a comparison on a North American dataset, the object detection approach yields roughly a 5% accuracy improvement. We use the YOLOv4 object detection architecture, which classifies the facade structure of individual buildings in street view images (e.g., Google Street View) and links each detection to the building's geographic information, producing a regional exposure map. We also created a benchmark dataset of Taiwanese buildings for training and evaluating CNN object detection or classification models. Finally, we applied the trained model to the Central District of Taichung City to produce a city-scale building structure classification map, refining the image acquisition method and workflow to obtain a map usable for subsequent disaster exposure applications.
Abstract (English) In the past few years, with the development of Convolutional Neural Network (CNN) technology, classifying the structure or use type of buildings from street view images has been extensively researched. These classification methods usually use a typical CNN classification framework, trained with a single label per image. The resulting recognition model is used to obtain an exposure model for earthquake hazard analysis over an entire region, where the exposure model represents the geographical distribution of building structure types in an area. However, classifying a whole single image does not fit many basic scenarios well. In crowded urban areas such as Taiwan's, one street view image usually contains buildings of multiple types, so the weights trained with a simple CNN classification model cannot produce reasonable recognitions. In this paper, we propose an object detection method to classify the construction material of each individual building. This method fundamentally optimizes the entire classification and identification process and properly handles the cases that previously could not be addressed in crowded areas. It achieves a mean Average Precision (mAP) of 75% on our Taiwan dataset. We also compared against a North American dataset, on which our method yields an accuracy improvement of about 6%. The method used in this paper is a CNN-based object detection architecture that classifies the facade structure of individual buildings in street view images (such as Google Street View). After correlating the detections with the buildings' geographic information, we obtain a regional exposure model. We also created a benchmark dataset of buildings in Taiwan that can be used for training and validating CNN object detection and classification models. In addition, we produced a city-level structure classification map for Taichung City.
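The abstract's pipeline begins by fetching facade images from the documented Google Street View Static API [24]. The following is a minimal sketch of that image-acquisition step, not code from the thesis: the function name, default parameter values, and example coordinates are our own assumptions; only the endpoint and parameter names come from Google's documentation.

```python
from urllib.parse import urlencode

def street_view_url(lat, lng, heading, api_key, size="640x640", fov=90, pitch=10):
    """Build a Google Street View Static API request URL for one facade view."""
    params = {
        "size": size,                    # requested image resolution
        "location": f"{lat},{lng}",      # building position from the geocoded address data
        "heading": heading,              # compass direction from camera toward the facade
        "fov": fov,                      # horizontal field of view, in degrees
        "pitch": pitch,                  # slight upward tilt to capture upper floors
        "key": api_key,
    }
    return "https://maps.googleapis.com/maps/api/streetview?" + urlencode(params)
```

Downloading the URL for each building footprint (e.g., `street_view_url(24.1439, 120.6794, heading=45, api_key="...")` for a hypothetical Taichung location) yields the per-building images that the YOLOv4 detector then classifies by facade material.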
Keywords (Chinese) ★ Convolutional neural network
★ Object detection
★ Building classification
★ Street view images
Keywords (English) ★ CNN
★ Object detection
★ Building instance classification
★ Street view images
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgements
Table of Contents
List of Figures
List of Tables
1. Introduction
1.1 What risk is and why exposure matters
1.2 Obtaining an exposure map: the old approach
1.3 Machine learning in the risk field: the new approach
1.4 Current challenges and how we address them
2. Motivation and Objectives
2.1 Building the exposure map
2.2 Problems this study aims to solve
3. Proposed Approach
3.1 The object detection model
3.1.1 Sources of the building structure classes in the dataset
3.1.2 Dataset construction and annotation
3.1.3 The YOLOv4 object detection model and its training results
3.2 Producing the building structure classification map (exposure map)
4. Implementation
4.1 Data cleaning
4.1.1 Taipei City Government: summary of use permits over the years
4.1.2 Ministry of the Interior nationwide address geocoding service (TGOS)
4.1.3 Taichung City Government: ground-surveyed buildings of the former Taichung City (WGS84)
4.1.4 Compiling the dataset for this study (before annotation)
4.2 Google Street View imagery
4.3 Dataset annotation
4.4 Training and model tuning
4.4.1 Model training
4.5 Training results and testing
4.5.1 Training results
4.5.2 Testing the trained model
5. Evaluation and Discussion
5.1 Cross-validation: experiment 1
5.2 Cross-validation: experiment 2
5.3 Building the exposure map
5.3.1 Exposure map: initial results
5.3.2 Exposure map: method refinement
5.3.3 Exposure map: discussion of evaluation methods
6. Conclusions and Outlook
References
References
[1] International Strategy for Disaster Reduction, "Living with risk: A global review of disaster reduction initiatives," 2004.
[2] O. D. Cardona, M. K. Van Aalst, J. Birkmann, M. Fordham, G. McGregor, P. Rosa, R. S. Pulwarty, E. L. F. Schipper, B. T. Sinh, H. Décamps, et al., "Determinants of risk: Exposure and vulnerability," in Managing the Risks of Extreme Events and Disasters to Advance Climate Change Adaptation: Special Report of the Intergovernmental Panel on Climate Change, pp. 65–108, Cambridge University Press, 2012.
[3] OpenStreetMap contributors, "Planet dump retrieved from https://planet.osm.org," https://www.openstreetmap.org, 2017.
[4] D. Anguelov, C. Dulong, D. Filip, C. Frueh, S. Lafon, R. Lyon, A. Ogale, L. Vincent, and J. Weaver, "Google Street View: Capturing the world at street level," Computer, vol. 43, pp. 32–38, June 2010.
[5] M. Wieland, M. Pittore, S. Parolai, J. Zschau, B. Moldobekov, and U. Begaliev, "Estimating building inventory for rapid seismic vulnerability assessment: Towards an integrated approach based on multi-source imaging," Soil Dynamics and Earthquake Engineering, vol. 36, pp. 70–83, 2012.
[6] J. Kang, M. Körner, Y. Wang, H. Taubenböck, and X. X. Zhu, "Building instance classification using street view images," ISPRS Journal of Photogrammetry and Remote Sensing, vol. 145, pp. 44–59, Nov. 2018.
[7] G. C. Iannelli, F. Dell'Acqua, and M. P. Smith, "Extensive exposure mapping in urban areas through deep analysis of street-level pictures for floor count determination," 2017.
[8] D. Gonzalez, D. Rueda-Plata, A. B. Acevedo, J. C. Duque, R. Ramos-Pollán, A. Betancourt, and S. García, "Automatic detection of building typology using deep learning methods on street level images," Building and Environment, vol. 177, p. 106805, June 2020.
[9] D. Rueda-Plata, D. González, A. B. Acevedo, J. C. Duque, and R. Ramos-Pollán, "Use of deep learning models in street-level images to classify one-story unreinforced masonry buildings based on roof diaphragms," Building and Environment, vol. 189, p. 107517, Feb. 2021.
[10] N. Kerle and R. R. Hoffman, "Collaborative damage mapping for emergency response: The role of cognitive systems engineering," Natural Hazards and Earth System Sciences, vol. 13, no. 1, pp. 97–113, 2013.
[11] M. Pittore and M. Wieland, "Toward a rapid probabilistic seismic vulnerability assessment using satellite and ground-based remote sensing," Natural Hazards, vol. 68, no. 1, pp. 115–145, 2013.
[12] H. Santa María, M. A. Hube, F. Rivera, C. Yepes-Estrada, and J. A. Valcárcel, "Development of national and local exposure models of residential structures in Chile," Natural Hazards, vol. 86, no. 1, pp. 55–79, 2017.
[13] A. B. Acevedo, J. D. Jaramillo, C. Yepes, V. Silva, F. A. Osorio, and M. Villar, "Evaluation of the seismic risk of the unreinforced masonry building stock in Antioquia, Colombia," Natural Hazards, vol. 86, no. 1, pp. 31–54, 2017.
[14] J. C. Gomez-Zapata, M. Pittore, F. Cotton, H. Lilienkamp, S. Shinde, P. Aguirre, and H. Santa Maria, "Epistemic uncertainty of probabilistic building exposure compositions in scenario-based earthquake loss models," 2021.
[15] T. Weyand, A. Araujo, B. Cao, and J. Sim, "Google Landmarks Dataset v2: A large-scale benchmark for instance-level recognition and retrieval," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 2572–2581, 2020. https://www.kaggle.com/c/landmark-recognition-2019.
[16] B. Zhou, A. Lapedriza, A. Torralba, and A. Oliva, "Places: An image database for deep scene understanding," Journal of Vision, vol. 17, no. 10, p. 296, 2017.
[17] Taipei City Construction Management Office, "臺北市資料大平臺-歷年使用執照摘要" (Taipei City open data platform: summary of use permits over the years), Dec. 2020. https://data.taipei/#/dataset/detail?id=c876ff02-af2e-4eb8-bd33-d444f5052733.
[18] Taichung City Urban Development Bureau, "臺中市建物WGS84" (Taichung City buildings, WGS84), Sep. 2019. https://opendata.taichung.gov.tw/dataset/bdaa52e5-b5d6-4a62-81b6-d4d5e9728c45.
[19] A. Bochkovskiy, C.-Y. Wang, and H.-Y. M. Liao, "YOLOv4: Optimal speed and accuracy of object detection," tech. rep., 2020. https://github.com/AlexeyAB/darknet.
[20] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 779–788, 2016.
[21] M. Everingham, L. Van Gool, C. K. Williams, J. Winn, and A. Zisserman, "The PASCAL Visual Object Classes (VOC) challenge," International Journal of Computer Vision, vol. 88, no. 2, pp. 303–338, 2010.
[22] Taiwan Ministry of the Interior, "地理資訊圖資雲服務平台TGOS" (TGOS geographic information cloud service platform), May 2020. https://www.tgos.tw/TGOS/Web/Address/TGOS_Address.aspx.
[23] CECI Engineering Consultants, Inc., Taiwan, "108年度臺中市部分地區三維近似化建物模型建置工作採購案工作總報告" (final report on constructing approximate 3D building models for parts of Taichung City, FY2019), tech. rep., 2019.
[24] Google, "Street View Static API overview | Google Developers," May 2020. https://developers.google.com/maps/documentation/streetview/overview#more-info.
[25] Intel, "Computer Vision Annotation Tool (CVAT)," May 2021. https://github.com/openvinotoolkit/cvat.
[26] Google, "Overview | Places API | Google Developers," Sep. 2021. https://developers.google.com/maps/documentation/places/web-service/overview.
Advisor: YungYu Zhuang (莊永裕)   Approval Date: 2021-10-26

For questions about this thesis, please contact the Promotion Services Division of the National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.