Master's/Doctoral Thesis 112325007: Detailed Record




Author: Lian Shen (沈廉)    Department: Master Program in Construction Management, Department of Civil Engineering
Thesis Title: A Study on Object Features and Simulated Directional Recognition at Surplus Earthwork Disposal and Treatment Sites
Related Theses
★ A Study on Building an Automatic Recognition System for H-Beam Steel Components Using Deep Neural Networks
★ A Study on Subcontractors' Quotation Behavior toward Contractors Using Association Rules
★ A Study on Key Factors Affecting the Implementation of Legal Compliance and Anti-Corruption in Taiwanese Engineering Consulting Firms
★ A Study on Quotation Behavior Strategies toward Contractors from the Subcontractor's Perspective
★ Developing an Operations and Maintenance Industry Interface for a Cloud-Based Intelligent Platform Using the SOMCM Clustering Algorithm: A Case Study of a Park in Taoyuan City
★ Comparison and Analysis of Project Management in Contract Dispute Resolution Mechanisms
★ Research on Intelligent Coating Path Optimization for H-Beam Steel Components
★ Evaluating the Operating Performance of Large Turnkey Contractors Using Data Envelopment Analysis: Examples from Listed Companies
★ Exploring Production Efficiency and Resource Planning of Precast Work Items Using a Self-Organizing Map Motion-Trajectory Similarity Measure
★ A Study on Cost Estimation Strategies for Precast Projects
★ An Investigation of Key Inspection Deficiencies in Building Permits for New Construction: Taipei City as an Example
★ A Study on the Adaptive Reuse of Monuments and Historic Buildings Using Grey Relational Analysis
★ Quantification and Prediction of Management Manpower at Construction Sites
★ A Study on Cash Management and Control in Public Infrastructure Projects
★ A Study on an ERP-PDA Integration Model for the Construction Industry
★ A Study on Reservoir Operational Benefit Evaluation: Shihmen Reservoir as an Example
Files: Full text available in the repository system after 2029-07-01 (embargoed)
Abstract (Chinese) With the vigorous growth of the economy and the development of urbanization, major public works and private construction projects generate large volumes of construction by-products every year. The recycling and effective management of surplus earthwork have gradually drawn the attention of the competent authorities, as effective monitoring and management can reduce illegal dumping and environmental damage. Monitoring of surplus earthwork generally relies on personnel stationed at the entrances and exits of soil resource sites, supplemented by government-provided electronic systems for inspection. Unmanned aerial vehicles (UAVs) can capture large volumes of image data from the air in a single flight and have long been used in surveying and monitoring; combined with deep-learning algorithms, they can help identify and monitor objects entering and leaving earthwork disposal and treatment sites.
This study used a UAV to acquire images of machinery at a soil resource site and applied the YOLOv8 algorithm for model training and object detection. Building on the model's directional recognition, simulated directional recognition was performed to analyze the movement direction of objects in the scene, so that suspicious objects on the site can be tracked and monitored. A total of 162 images were used for model training and testing, yielding an average detection rate of 0.806, which meets the expected recognition accuracy; through simulated directional recognition, object movement directions can be tracked. By combining this technology with deep learning, the results should help operators improve on-site monitoring efficiency while reducing the burden of manpower and time.
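As a concrete illustration of the training and detection pipeline the abstract describes, below is a minimal sketch assuming the Ultralytics YOLOv8 Python API. The dataset config `earthwork.yaml`, the `yolov8n.pt` starting weights, the hyperparameters, and the sample image name are illustrative placeholders, not the thesis's actual configuration; whether the reported 0.806 "average detection rate" corresponds to the mAP@0.5 that Ultralytics tooling reports is likewise an assumption.

```python
# Minimal sketch, assuming the Ultralytics YOLOv8 Python API. The dataset
# config "earthwork.yaml", "yolov8n.pt" weights, and hyperparameters are
# illustrative placeholders, not the thesis's actual setup.
from ultralytics import YOLO

# Fine-tune a pretrained YOLOv8 model on UAV images of site machinery.
model = YOLO("yolov8n.pt")
model.train(data="earthwork.yaml", epochs=100, imgsz=640)

# Validate; mAP@0.5 is one common reading of the "average detection rate"
# cited in the abstract (an assumption here, not confirmed by the source).
metrics = model.val()
print(f"mAP@0.5 = {metrics.box.map50:.3f}")

# Detect objects in a single UAV frame and print class, confidence, and box.
for result in model("uav_frame.jpg"):
    for box in result.boxes:
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        print(result.names[int(box.cls)], float(box.conf),
              (round(x1), round(y1), round(x2), round(y2)))
```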
Abstract (English) With the vigorous growth of the economy and urbanization, major public works and private construction projects generate large amounts of construction by-products every year. The recycling and effective management of surplus earthwork have gradually attracted attention from regulatory authorities, since effective monitoring and management can reduce illegal dumping and environmental damage. Traditionally, monitoring at the entrances and exits of soil resource storage sites relies on on-site personnel, supplemented by government-provided electronic systems for inspection. Unmanned aerial vehicles (UAVs) can capture large amounts of image data from the air in a single flight, a practice long established in surveying and monitoring; combined with deep-learning algorithms, they can assist in identifying and monitoring the inflow and outflow of objects at soil resource storage sites.
This study uses UAVs to acquire image data of machinery at a soil resource site and employs the YOLOv8 algorithm for model training and object detection. Building on directional recognition within the model, simulated directional recognition is conducted to analyze the movement direction of objects in the scene, enabling the tracking and monitoring of suspicious objects in the area. A total of 162 images were used for model training and testing, with an average detection rate of 0.806, meeting the expected recognition accuracy; object movement directions can be tracked through simulated directional recognition. By combining UAV technology and deep learning, this research should help operators improve on-site monitoring efficiency while reducing the burden of manpower and time.
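The abstracts describe "simulated directional recognition" only at a high level. One plausible reading, sketched below under stated assumptions, derives each object's movement direction by matching detected bounding-box centroids across consecutive frames; the nearest-centroid matching, the `max_dist` threshold, and all function names are hypothetical illustrations, not the thesis's actual method.

```python
# Hedged illustration: derive movement directions from detections in two
# consecutive UAV frames by matching each current box to the nearest
# centroid in the previous frame. One plausible reading of "simulated
# directional recognition"; all names and thresholds are hypothetical.
import math

def centroid(box):
    """Center point of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def movement_directions(prev_boxes, curr_boxes, max_dist=80.0):
    """Match each current box to the nearest previous centroid and return
    (angle_deg, distance_px) pairs; 0 degrees points along +x, positive
    angles toward +y (downward in image coordinates)."""
    prev_centers = [centroid(b) for b in prev_boxes]
    moves = []
    for b in curr_boxes:
        if not prev_centers:
            break
        cx, cy = centroid(b)
        px, py = min(prev_centers, key=lambda p: math.hypot(cx - p[0], cy - p[1]))
        dx, dy = cx - px, cy - py
        dist = math.hypot(dx, dy)
        if dist <= max_dist:  # ignore implausible jumps between frames
            moves.append((math.degrees(math.atan2(dy, dx)), dist))
    return moves

# Example: one truck-like box shifting right and slightly down between frames.
prev = [(100, 200, 180, 260)]
curr = [(130, 210, 210, 270)]
print(movement_directions(prev, curr))  # ~[(18.4, 31.6)]
```

Nearest-centroid matching is the simplest possible frame-to-frame association; a production tracker would typically use IoU matching or a Kalman-filter tracker instead, but the displacement-vector idea is the same.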
Keywords (Chinese) ★ Earthwork Management
★ Unmanned Aerial Vehicles (UAV)
★ Object Detection
★ YOLOv8 Algorithm
★ Directional Recognition
Keywords (English) ★ Earthwork Management
★ Unmanned Aerial Vehicles (UAV)
★ Object Detection
★ YOLOv8 Algorithm
★ Directional Recognition
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Motivation
1.2 Research Questions
1.3 Research Objectives
1.4 Research Scope and Limitations
1.5 Research Process
Chapter 2 Literature Review
2.1 Earthwork Management
2.1.1 Earthwork Surveying Techniques
2.1.2 Earthwork Transportation Management
2.2 Unmanned Aerial Vehicles (UAV)
2.2.1 UAV Technical Principles
2.2.2 UAV Applications
2.2.3 UAV Image Modeling
2.2.4 UAV Positioning Technology
2.3 LiDAR
2.3.1 LiDAR Technology and Principles
2.3.2 Applications Combining UAV and LiDAR Surveying
2.4 U-Net
2.4.1 U-Net Image Segmentation Principles and Techniques
2.4.2 U-Net Application Domains and Advantages
2.5 Image Segmentation and Recognition
2.5.1 Image Segmentation Principles and Types
2.5.2 R-CNN and YOLO Deep Learning Techniques
2.5.3 YOLOv8
2.5.4 Directional Recognition and Analysis
Chapter 3 Research Methods
3.1 Data Collection
3.2 Data Collection Tools
3.2.1 UAV Specifications and Parameters
3.2.2 Image Capture
3.3 Model Training and Recognition
3.3.1 Model Architecture
3.3.2 Training Process
Chapter 4 Image Recognition Results
4.1 Object Detection Results
4.2 Simulated Directional Recognition
4.3 General Discussion
Chapter 5 Conclusions and Recommendations
5.1 Conclusions
5.2 Recommendations
References
Advisor: Jieh-Haur Chen (陳介豪)    Date of Approval: 2024-07-12