References
[1] Tavakoli, D., A. Heidari, and S.H. Pilehrood (2014). "Properties of Concrete made with Waste Clay Brick as Sand Incorporating Nano SiO2." Indian Journal of Science and Technology, 7(12), 1899.
[2] Construction and Planning Agency, Ministry of the Interior (內政部營建署) (2023). "FY2022 (ROC 111) Plan for Recycling and Processing of Surplus Soil and Rock from Construction Works, Information Exchange, and Total Quantity Control." (in Chinese).
[3] National Land Use Monitoring Integrated Information Network (國土利用監測整合資訊網), <https://landchg.tcd.gov.tw/Module/RWD/Web/pub_exhibit.aspx>.
[4] Lee, S.B., D. Han, and M. Song (2022). "Calculation and Comparison of Earthwork Volume Using Unmanned Aerial Vehicle Photogrammetry and Traditional Surveying Method." Sensors & Materials, 34.
[5] Bügler, M., et al. (2017). "Fusion of photogrammetry and video analysis for productivity assessment of earthwork processes." Computer-Aided Civil and Infrastructure Engineering, 32(2), 107-123.
[6] Liu, Q., et al. "Summary of calculation methods of engineering earthwork." Journal of Physics: Conference Series, IOP Publishing, 032002.
[7] Park, H.C., T.S.N. Rachmawati, and S. Kim (2022). "UAV-Based High-Rise Buildings Earthwork Monitoring-A Case Study." Sustainability, 14(16).
[8] National Land Management Agency, Ministry of the Interior (內政部國土管理署), <https://www.nlma.gov.tw/>.
[9] Lee, S.S., S.I. Park, and J. Seo (2018). "Utilization analysis methodology for fleet telematics of heavy earthwork equipment." Automation in Construction, 92, 59-67.
[10] Kim, S.K., J. Seo, and J.S. Russell (2012). "Intelligent navigation strategies for an automated earthwork system." Automation in Construction, 21, 132-147.
[11] Nex, F. and F. Remondino (2014). "UAV for 3D mapping applications: a review." Applied Geomatics, 6, 1-15.
[12] Zeng, Y., R. Zhang, and T.J. Lim (2016). "Wireless communications with unmanned aerial vehicles: Opportunities and challenges." IEEE Communications Magazine, 54(5), 36-42.
[13] Nex, F., et al. (2022). "UAV in the advent of the twenties: Where we stand and what is next." ISPRS Journal of Photogrammetry and Remote Sensing, 184, 215-242.
[14] Dalla Corte, A.P., et al. (2020). "Measuring Individual Tree Diameter and Height Using GatorEye High-Density UAV-Lidar in an Integrated Crop-Livestock-Forest System." Remote Sensing, 12(5).
[15] Radoglou-Grammatikis, P., et al. (2020). "A compilation of UAV applications for precision agriculture." Computer Networks, 172.
[16] Ota, T., et al. (2017). "Forest Structure Estimation from a UAV-Based Photogrammetric Point Cloud in Managed Temperate Coniferous Forests." Forests, 8(9).
[17] Guan, S.Y., Z. Zhu, and G. Wang (2022). "A Review on UAV-Based Remote Sensing Technologies for Construction and Civil Applications." Drones, 6(5).
[18] Stöcker, C., et al. (2017). "Review of the Current State of UAV Regulations." Remote Sensing, 9(5).
[19] National Land Surveying and Mapping Center, Ministry of the Interior (內政部國土測繪中心), <https://www.nlsc.gov.tw/cp.aspx?n=13658>.
[20] Alzahrani, B., et al. (2020). "UAV assistance paradigm: State-of-the-art in applications and challenges." Journal of Network and Computer Applications, 166.
[21] Siebert, S. and J. Teizer (2014). "Mobile 3D mapping for surveying earthwork projects using an Unmanned Aerial Vehicle (UAV) system." Automation in Construction, 41, 1-14.
[22] Zhou, G.Q., et al. (2021). "Selection of Optimal Building Facade Texture Images From UAV-Based Multiple Oblique Image Flows." IEEE Transactions on Geoscience and Remote Sensing, 59(2), 1534-1552.
[23] Vetrivel, A., et al. (2018). "Disaster damage detection through synergistic use of deep learning and 3D point cloud features derived from very high resolution oblique aerial images, and multiple-kernel-learning." ISPRS Journal of Photogrammetry and Remote Sensing, 140, 45-59.
[24] Outay, F., H.A. Mengash, and M. Adnan (2020). "Applications of unmanned aerial vehicle (UAV) in road safety, traffic and highway infrastructure management: Recent advances and challenges." Transportation Research Part A: Policy and Practice, 141, 116-129.
[25] Duque, L., J. Seo, and J. Wacker (2018). "Bridge Deterioration Quantification Protocol Using UAV." Journal of Bridge Engineering, 23(10).
[26] Yu, Z.W., Y.G. Shen, and C.K. Shen (2021). "A real-time detection approach for bridge cracks based on YOLOv4-FPM." Automation in Construction, 122, 11.
[27] Yang, Z.Y., et al. (2022). "UAV remote sensing applications in marine monitoring: Knowledge visualization and review." Science of the Total Environment, 838.
[28] Wang, X.P., et al. (2022). "Multi-UAV Cooperative Localization for Marine Targets Based on Weighted Subspace Fitting in SAGIN Environment." IEEE Internet of Things Journal, 9(8), 5708-5718.
[29] Nesbit, P.R. and C.H. Hugenholtz (2019). "Enhancing UAV-SfM 3D Model Accuracy in High-Relief Landscapes by Incorporating Oblique Images." Remote Sensing, 11(3).
[30] Blewitt, G. (1997). "Basics of the GPS technique: observation equations." Geodetic applications of GPS, 1, 46.
[31] 先創國際 (Esentra International). "RTK Technology in UAV Power-Line Inspection." (in Chinese). <https://www.esentra.com.tw/2019/04/%E9%9B%BB%E5%8A%9B%E7%84%A1%E4%BA%BA%E6%A9%9F%E5%B7%A1%E6%AA%A2%E4%B8%AD%E7%9A%84rtk-%E6%8A%80%E8%A1%93/>.
[32] Tahar, K.N. and S. Kamarudin (2016). "UAV onboard GPS in positioning determination." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 41, 1037-1042.
[33] Stroner, M., et al. (2021). "Photogrammetry Using UAV-Mounted GNSS RTK: Georeferencing Strategies without GCPs." Remote Sensing, 13(7).
[34] Roriz, R., J. Cabral, and T. Gomes (2022). "Automotive LiDAR Technology: A Survey." IEEE Transactions on Intelligent Transportation Systems, 23(7), 6282-6297.
[35] Villa, F., et al. (2021). "SPADs and SiPMs Arrays for Long-Range High-Speed Light Detection and Ranging (LiDAR)." Sensors, 21(11).
[36] Zhao, J.X., et al. (2019). "Detection and tracking of pedestrians and vehicles using roadside LiDAR sensors." Transportation Research Part C: Emerging Technologies, 100, 68-87.
[37] Zhao, X.M., et al. (2020). "Fusion of 3D LIDAR and Camera Data for Object Detection in Autonomous Vehicle Applications." IEEE Sensors Journal, 20(9), 4901-4913.
[38] Gao, H.B., et al. (2018). "Object Classification Using CNN-Based Fusion of Vision and LIDAR in Autonomous Vehicle Environment." IEEE Transactions on Industrial Informatics, 14(9), 4224-4231.
[39] Li, Y., et al. (2021). "Deep Learning for LiDAR Point Clouds in Autonomous Driving: A Review." IEEE Transactions on Neural Networks and Learning Systems, 32(8), 3412-3432.
[40] Liu, X., et al. (2007). "LiDAR-derived high quality ground control information and DEM for image orthorectification." GeoInformatica, 11, 37-53.
[41] Gneeniss, A., J. Mills, and P. Miller (2013). "Reference LiDAR surfaces for enhanced aerial triangulation and camera calibration." The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, 40, 111-116.
[42] Lin, Y.-C., et al. (2019). "Evaluation of UAV LiDAR for mapping coastal environments." Remote Sensing, 11(24), 2893.
[43] Villanueva, J.R.E., L.I. Martínez, and J.I.P. Montiel (2019). "DEM Generation from Fixed-Wing UAV Imaging and LiDAR-Derived Ground Control Points for Flood Estimations." Sensors, 19(14).
[44] Hu, T.Y., et al. (2021). "Development and Performance Evaluation of a Very Low-Cost UAV-Lidar System for Forestry Applications." Remote Sensing, 13(1).
[45] Ronneberger, O., P. Fischer, and T. Brox "U-Net: Convolutional networks for biomedical image segmentation." Proc., Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18, Springer, 234-241.
[46] Du, G.T., et al. (2020). "Medical Image Segmentation based on U-Net: A Review." Journal of Imaging Science and Technology, 64(2).
[47] Siddique, N., et al. (2021). "U-Net and Its Variants for Medical Image Segmentation: A Review of Theory and Applications." IEEE Access, 9, 82031-82057.
[48] Wei, Y.H., et al. (2022). "Multiscale feature U-Net for remote sensing image segmentation." Journal of Applied Remote Sensing, 16(1).
[49] Kattenborn, T., J. Eichel, and F.E. Fassnacht (2019). "Convolutional Neural Networks enable efficient, accurate and fine-grained segmentation of plant species and communities from high-resolution UAV imagery." Scientific Reports, 9.
[50] Zou, K.L., et al. (2021). "A Field Weed Density Evaluation Method Based on UAV Imaging and Modified U-Net." Remote Sensing, 13(2).
[51] Yang, X.F., et al. (2019). "Road Detection and Centerline Extraction Via Deep Recurrent Convolutional Neural Network U-Net." IEEE Transactions on Geoscience and Remote Sensing, 57(9), 7209-7220.
[52] Zhang, Z.X., Q.J. Liu, and Y.H. Wang (2018). "Road Extraction by Deep Residual U-Net." IEEE Geoscience and Remote Sensing Letters, 15(5), 749-753.
[53] Liu, X.B., et al. (2021). "A Review of Deep-Learning-Based Medical Image Segmentation Methods." Sustainability, 13(3).
[54] Lin, T.-Y., et al. "Microsoft COCO: Common objects in context." Proc., Computer Vision–ECCV 2014: 13th European Conference, Zurich, Switzerland, September 6-12, 2014, Proceedings, Part V 13, Springer, 740-755.
[55] Hafiz, A.M. and G.M. Bhat (2020). "A survey on instance segmentation: state of the art." International Journal of Multimedia Information Retrieval, 9(3), 171-189.
[56] Wei, S.J., et al. (2020). "HRSID: A High-Resolution SAR Images Dataset for Ship Detection and Instance Segmentation." IEEE Access, 8, 120234-120254.
[57] Kirillov, A., et al. "Panoptic segmentation." Proc., IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9404-9413.
[58] Mohan, R. and A. Valada (2021). "EfficientPS: Efficient panoptic segmentation." International Journal of Computer Vision, 129(5), 1551-1579.
[59] Lindeberg, T. (2012). "Scale Invariant Feature Transform." Scholarpedia, 7(5), 10491.
[60] Bay, H., T. Tuytelaars, and L. Van Gool "SURF: Speeded up robust features." Proc., Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006. Proceedings, Part I 9, Springer, 404-417.
[61] Aglave, P. and V.S. Kolkure (2015). "Implementation Of High Performance Feature Extraction Method Using Oriented Fast And Rotated Brief Algorithm." Int. J. Res. Eng. Technol, 4, 394-397.
[62] Peng, X.L., et al. (2013). "Extreme Learning Machine-Based Classification of ADHD Using Brain Structural MRI Data." PLoS ONE, 8(11).
[63] Liao, L.Y., L. Du, and Y.C. Guo (2022). "Semi-Supervised SAR Target Detection Based on an Improved Faster R-CNN." Remote Sensing, 14(1).
[64] He, K.M., et al. (2020). "Mask R-CNN." IEEE Transactions on Pattern Analysis and Machine Intelligence, 42(2), 386-397.
[65] Xu, X.Y., et al. (2022). "Crack Detection and Comparison Study Based on Faster R-CNN and Mask R-CNN." Sensors, 22(3).
[66] Jia, W.K., et al. (2020). "Detection and segmentation of overlapped fruits based on optimized mask R-CNN application in apple harvesting robot." Computers and Electronics in Agriculture, 172.
[67] He, W.F., et al. (2021). "Recognition and detection of aero-engine blade damage based on Improved Cascade Mask R-CNN." Applied Optics, 60(17), 5124-5133.
[68] Diwan, T., G. Anirudh, and J.V. Tembhurne (2023). "Object detection using YOLO: challenges, architectural successors, datasets and applications." Multimedia Tools and Applications, 82(6), 9243-9275.
[69] Redmon, J., et al. "You only look once: Unified, real-time object detection." Proc., IEEE Conference on Computer Vision and Pattern Recognition, 779-788.
[70] Terven, J. and D. Cordova-Esparza (2023). "A comprehensive review of YOLO: From YOLOv1 to YOLOv8 and beyond." arXiv preprint arXiv:2304.00501.
[71] Singh, S., et al. (2021). "Face mask detection using YOLOv3 and faster R-CNN models: COVID-19 environment." Multimedia Tools and Applications, 80(13), 19753-19768.
[72] RangeKing (2023). "Brief summary of YOLOv8 model structure." <https://github.com/ultralytics/ultralytics/issues/189>.
[73] Ultralytics (2023). "YOLOv8." <https://github.com/ultralytics/ultralytics?tab=readme-ov-file>.
[74] Su, M.-C., et al. (2021). "A Projection-Based Human Motion Recognition Algorithm Based on Depth Sensors." IEEE Sensors Journal, 21(15), 16990-16996.
[75] Lee, D., et al. (2023). "UAV Control for Close Tracking of a Flying Object Using Search Region Focused Detector and Target-Visibility Enhancing Control." IEEE Access, 11, 139326-139334.
[76] Construction Surplus Soil and Rock Service Information Center (營建剩餘土石方服務資訊中心), <https://www.soilmove.tw/>.
[77] Wang, G., et al. (2023). "UAV-YOLOv8: A Small-Object-Detection Model Based on Improved YOLOv8 for UAV Aerial Photography Scenarios." Sensors, 23(16).
[78] Wang, C.-Y., A. Bochkovskiy, and H.-Y.M. Liao "YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors." Proc., IEEE/CVF Conference on Computer Vision and Pattern Recognition, 7464-7475.
[79] 泓宇 (2024). "[Object detection] YOLOv8 Explained." (in Chinese). <https://henry870603.medium.com/object-detection-yolov8%E8%A9%B3%E8%A7%A3-fdf8874e5e99>.
[80] Ge, Z., et al. (2021). "YOLOX: Exceeding YOLO series in 2021." arXiv preprint arXiv:2107.08430.
[81] Elfwing, S., E. Uchibe, and K. Doya (2018). "Sigmoid-weighted linear units for neural network function approximation in reinforcement learning." Neural Networks, 107, 3-11.
[82] Chang, C. (2023). "YOLOv8 Model Training and the Meaning of Its Metrics." (in Chinese). <https://claire-chang.com/2023/08/16/yolov8%E6%A8%A1%E5%9E%8B%E8%A8%93%E7%B7%B4%E5%8F%8A%E5%85%B6%E6%8C%87%E6%A8%99%E6%84%8F%E7%BE%A9/>.
[83] Pai, J. (2021). "Mean Average Precision (mAP): A Metric for Evaluating Object Detection Models." (in Chinese). <https://medium.com/lifes-a-struggle/mean-average-precision-map-%E8%A9%95%E4%BC%B0%E7%89%A9%E9%AB%94%E5%81%B5%E6%B8%AC%E6%A8%A1%E5%9E%8B%E5%A5%BD%E5%A3%9E%E7%9A%84%E6%8C%87%E6%A8%99-70a2d2872eb0>.