Thesis 102521604: Detailed Record




Name: Wahyu Rahmaniar (馬妮雅)   Department: Electrical Engineering
Thesis Title: Practical solutions for object identification based on different image acquisitions
(基於不同取像技術之物件辨識解決方案)
Related Theses
★ Control of a hybrid power system with a direct methanol fuel cell ★ Water quality inspection for hydroponic plants using refractive index detection
★ A DSP-based automatic guidance control system for a model car ★ Redesign of motion control for a rotary inverted pendulum
★ Fuzzy control decisions for freeway on/off-ramp signals ★ On the fuzziness of fuzzy sets
★ Further improvement of motion control performance for a dual-mass spring-connected system ★ A vision system for air hockey
★ Robot offense and defense control for air hockey ★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems ★ A real-time recognition system for entrance monitoring
★ Air hockey: human versus robotic arm ★ A mahjong tile recognition system
★ Correlation-error neural networks applied to radiometric measurement of vegetation and soil moisture ★ Standing control of a three-link robot
  1. The author has agreed to make this electronic thesis available immediately.
  2. The released electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) This dissertation proposes three object-identification solutions for images and videos from different applications: segmentation and classification of calcaneal fractures, detection and recognition of moving objects using an unmanned aerial vehicle (UAV), and a people-counting application using an RGB-D camera. The first study proposes a computer-aided method for detecting calcaneal fractures to obtain faster and more detailed results. The anatomical plane orientation of the bone in the input image is first selected to determine the location of the calcaneus; several fragments in the calcaneus image are then detected and labeled by color segmentation. The Sanders system classifies fractures in transverse and coronal images into four types according to the number of fragments; in sagittal images, fractures are classified into three types according to the involvement of the fracture area. The second study designs a novel and efficient technique for detecting and recognizing moving objects in images captured by a UAV. Feature points between two consecutive frames are first found to estimate the camera motion and stabilize the image sequence; the regions of interest (ROI) of objects are then detected as moving-object candidates (foreground). In addition, static and dynamic objects are classified according to the dominant motion vectors in the foreground and background. The third study proposes a new technique for real-time people counting with an RGB-D camera. First, image calibration is proposed to obtain the ratio and offset values between the depth and RGB images. In the depth image, people are detected as the foreground by removing the background. The ROI of each detected person is then registered according to the person's position and mapped to the RGB image. Registered people are tracked in the RGB image with a discriminative correlation filter with channel and spatial reliability. Finally, a person is counted when crossing the line of interest (LOI) with a displacement of more than 2 meters.
Abstract (English) This dissertation proposes practical solutions for object identification based on different image acquisitions: segmentation and classification of calcaneal fractures, detection and recognition of moving objects using an unmanned aerial vehicle (UAV), and a people-counting application using an RGB-D camera. In the first study, a computer-aided method for calcaneal fracture detection is proposed to acquire faster and more detailed observations. First, the anatomical plane orientation of the tarsal bone in the input image is selected to determine the location of the calcaneus. Then, several fragments of the calcaneus image are detected and marked by color segmentation. The Sanders system is used to classify fractures in transverse and coronal images into four types based on the number of fragments. In the sagittal image, fractures are classified into three types based on the involvement of the fracture area. In the second study, a new and efficient technique is proposed for the detection and recognition of moving objects in a sequence of images captured from a UAV. First, feature points between two successive frames are found to estimate the camera movement and stabilize the sequence of images. Then, the regions of interest (ROI) of the objects are detected as moving-object candidates (foreground). Furthermore, static and dynamic objects are classified based on the dominant motion vectors in the foreground and background. In the third study, a new technique for real-time people counting using an RGB-D camera is proposed. First, image calibration is proposed to obtain the ratio and shift values between the depth image and the RGB image. In the depth image, people are detected as the foreground by removing the background. Then, the region of interest (ROI) of each detected person is registered based on the person's location and mapped to the RGB image. The registered people are tracked in the RGB image using a discriminative correlation filter with channel and spatial reliability. Finally, people are counted when they cross the line of interest (LOI) and the displacement distance is more than 2 meters.
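The first study separates bone fragments from the background before counting them; the table of contents names the Otsu method (Section 2.5.2) as the thresholding step. As a minimal, illustrative sketch of that step only (not the thesis implementation, which operates on CT slices), the Otsu threshold selection can be written in pure Python over a list of gray levels:

```python
def otsu_threshold(pixels):
    """Return the gray level that maximizes the between-class variance.

    `pixels` is any iterable of integer gray levels in 0..255. Pixels at
    or below the returned level form one class (background), the rest
    the other (foreground/bone).
    """
    hist = [0] * 256
    for p in pixels:
        hist[p] += 1
    total = sum(hist)
    sum_all = sum(i * h for i, h in enumerate(hist))

    best_t, best_var = 0, -1.0
    w_bg = 0    # running pixel count of the background class
    sum_bg = 0  # running gray-level sum of the background class
    for t in range(256):
        w_bg += hist[t]
        if w_bg == 0:
            continue
        w_fg = total - w_bg
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (sum_all - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a cleanly bimodal image the returned threshold falls between the two modes, so the subsequent contour detection sees each fragment as a connected foreground region.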
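The second study estimates camera motion from feature matches (SURF in the thesis) and compensates it before detecting the foreground. A much-simplified sketch of the underlying idea, assuming a purely translational camera motion (the thesis fits a full affine model and smooths it with a Kalman filter): the component-wise median displacement of matched points approximates the background (camera) motion, and matches with a large residual motion after compensation become moving-object candidates. The `tol` pixel tolerance below is an illustrative assumption, not a value from the thesis.

```python
from statistics import median

def camera_translation(matches):
    """Estimate the global (camera) translation from feature matches.

    `matches` is a list of ((x1, y1), (x2, y2)) point pairs between two
    consecutive frames. The component-wise median of the displacement
    vectors is dominated by the static background, so the minority of
    matches lying on moving objects barely affects the estimate.
    """
    dxs = [x2 - x1 for (x1, _), (x2, _) in matches]
    dys = [y2 - y1 for (_, y1), (_, y2) in matches]
    return median(dxs), median(dys)

def foreground_vectors(matches, cam_dx, cam_dy, tol=2.0):
    """Return matches whose residual motion, after compensating the
    camera translation, exceeds `tol` pixels: moving-object candidates."""
    out = []
    for (x1, y1), (x2, y2) in matches:
        rx = (x2 - x1) - cam_dx
        ry = (y2 - y1) - cam_dy
        if (rx * rx + ry * ry) ** 0.5 > tol:
            out.append(((x1, y1), (x2, y2)))
    return out
```

With, say, seven background matches all shifted by (5, -3) and two matches on a moving vehicle shifted further, the median recovers (5, -3) and only the vehicle's matches survive the residual test.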
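The third study counts a tracked person only when the trajectory crosses the line of interest (LOI) and the displacement exceeds 2 meters. A hedged sketch of just that counting rule: the LOI is assumed to be a horizontal image line, and `meters_per_px` is a hypothetical stand-in for the scale the thesis obtains from its depth/RGB calibration step.

```python
def count_crossing(track, loi_y, meters_per_px, min_disp_m=2.0):
    """Return +1, -1, or 0 for one person's track.

    `track` is the chronological list of (x, y) centroids of a tracked
    person. The person is counted only if the trajectory crosses the
    horizontal line y == loi_y AND the end-to-end displacement exceeds
    `min_disp_m` meters, which filters out people loitering at the line.
    """
    if len(track) < 2:
        return 0
    # End-to-end displacement, converted to meters.
    (x0, y0), (x1, y1) = track[0], track[-1]
    disp_m = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5 * meters_per_px
    if disp_m <= min_disp_m:
        return 0
    # Direction of the first crossing of the line of interest.
    for (_, ya), (_, yb) in zip(track, track[1:]):
        if ya < loi_y <= yb:
            return +1   # "in" (downward in image coordinates)
        if yb <= loi_y < ya:
            return -1   # "out"
    return 0
```

Summing the return value over all finished tracks yields the bi-directional count; a person oscillating around the line without covering 2 meters contributes nothing.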
Keywords (Chinese) ★ biomedical imaging
★ moving object
★ optical flow
★ people counting
★ people detection
★ people tracking
★ surveillance
★ UAV
關鍵字(英) ★ biomedical imaging
★ moving object
★ optical flow
★ people counting
★ people detection
★ people tracking
★ surveillance
★ UAV
Table of Contents
Chinese Abstract II
Abstract III
Acknowledgment IV
Contents V
List of Figures VIII
List of Tables XI
Chapter 1 Introduction 1
1.1 Background and Motivation 1
1.2 Review of Previous Works 2
1.2.1 Paper Review for Bone Fractures Detection 3
1.2.2 Paper Review for Moving Objects Detection by Using UAV 4
1.2.3 Paper Review for People Counting 6
1.3 Thesis Organization 7
Chapter 2 Automated Segmentation and Classification of Calcaneal Fractures in CT Images 9
2.1 Introduction 9
2.2 Fracture Classification in Calcaneus 10
2.3 The Proposed Method 12
2.4 Step 1: Foreground Detection 13
2.4.1 Local Binary Pattern 13
2.4.2 Cascade Classifier 15
2.5 Step 2: Foreground Selection and Object Identification 17
2.5.1 Classification in Coronal and Transverse Images 17
2.5.2 Otsu Method 19
2.5.3 Contour Detection 20
2.5.4 Classification in Sagittal Images 23
2.6 Experimental Results and Discussion 27
2.7 Summary 33
Chapter 3 Detection and Recognition of Multiple Moving Objects for Aerial Surveillance 34
3.1 Introduction 34
3.2 The Proposed Method 35
3.3 Step 1: Pre-processing 37
3.3.1 Speed-Up Robust Features 37
3.3.2 Affine Transformation 39
3.3.3 Kalman Filter 41
3.4 Step 2: Foreground Detection 42
3.5 Step 3: Foreground Selection and Object Identification 45
3.5.1 Farneback Optical Flow 45
3.5.2 Movement Direction 46
3.6 Experimental Results and Discussion 51
3.6.1 Result of Motion Vectors 51
3.6.2 Result of Moving Objects Detection 52
3.7 Summary 57

Chapter 4 Bi-Directional People Counting Using an RGB-D Camera 58
4.1 Introduction 58
4.2 Hardware 58
4.3 The Proposed Method 59
4.4 Step 1: Pre-processing 60
4.5 Step 2: Foreground Detection 63
4.5.1 Foreground Segmentation 63
4.5.2 Person Registration 67
4.6 Step 3: Foreground Selection and Object Identification 68
4.6.1 People Tracking 68
4.6.2 People Counting 69
4.7 Experimental Results and Discussion 72
4.8 Summary 79
Chapter 5 Conclusion and Future Works 80
5.1 Conclusion 80
5.2 Future Works 81
Publications 88
Appendix A 89
Advisor: Wen-June Wang (王文俊)   Date of Approval: 2020-06-03

For questions about this thesis, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail. - Privacy Policy