Thesis Record 100552010: Details




Name: Wang, Te-Yi (王得懿)   Department: In-service Master Program, Department of Computer Science and Information Engineering
Thesis Title: Automatic Vehicle Following a Specific Moving Object
(Chinese title: 跟隨特定物體移動的自走車)
Related Theses
★ Video error concealment for large damaged areas and scene changes
★ Force feedback correction and rendering in a virtual haptic system
★ Multispectral satellite image fusion and infrared image synthesis
★ A laparoscopic cholecystectomy surgery simulation system
★ Dynamically loaded multiresolution terrain modeling in a flight simulation system
★ Wavelet-based multiresolution terrain modeling and texture mapping
★ Multiresolution optical flow analysis and depth computation
★ Volume-preserving deformation modeling for laparoscopic surgery simulation
★ Interactive multiresolution model editing techniques
★ Wavelet-based multiresolution edge tracking for edge detection
★ Multiresolution modeling based on quadric error and attribute criteria
★ Progressive image compression based on integer wavelet transform and grey theory
★ Tactical simulation based on dynamically loaded multiresolution terrain modeling
★ Face detection and feature extraction using spatial relations of multilevel segmentation
★ Wavelet-based image watermarking and compression
★ Appearance-preserving and view-dependent multiresolution modeling
  1. This electronic thesis is licensed for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) In this thesis, we propose an autonomous vehicle system that follows a specific moving object. Using real-time video together with target detection and localization techniques, the vehicle performs active visual tracking and motion control to follow the target.
The system design integrates two parts: a hardware architecture and a software pipeline. The hardware architecture divides into four subsystems: the microcomputer system, the image processing system, the power drive system, and the communication system; each subsystem comprises different hardware and peripheral interfaces. The microcomputer system performs all image processing and algorithmic computation and coordinates every peripheral interface, for example, image capture control of the camera module, transmission of forward/reverse control signals to the motors, and communication with a remote personal computer.
The software pipeline divides into three stages: target detection and localization, target tracking, and control of the vehicle motion system. Target detection and localization uses color detection: the color range is first defined by fitting a Gaussian mixture model, in the YCbCr color space, which is relatively insensitive to lighting changes. The image is then binarized, morphological operations remove noise, and a contour search extracts the feature information of the target's image region. For target tracking, a calibration procedure first builds a look-up table that maps image pixels to real-world distance; the extracted features are converted through this table from pixel measurements into an estimate of the target's distance. Combined with the target's horizontal position in the image, this distance gives the target's direction, left or right of the camera's center line. Trigonometric functions then yield the hypotenuse and acute angle of the corresponding right triangle, and from this distance and direction the vehicle motion system is driven to go straight, rotate, or stop.
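The detection stage described above can be sketched as follows. This is a minimal NumPy illustration, not the thesis implementation: the Cb/Cr bounds are hypothetical placeholders standing in for the range the Gaussian mixture model would supply, and the morphological noise-removal and contour-search steps are omitted for brevity.

```python
import numpy as np

def rgb_to_ycbcr(img):
    """Convert an H x W x 3 RGB image (uint8) to YCbCr (ITU-R BT.601)."""
    img = img.astype(np.float64)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128.0 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128.0 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return np.stack([y, cb, cr], axis=-1)

def detect_target(img, cb_range=(80, 120), cr_range=(150, 256)):
    """Binarize by a Cb/Cr range (stand-in for GMM-derived bounds) and
    return the target region's area and centroid (row, col), or None."""
    ycbcr = rgb_to_ycbcr(img)
    cb, cr = ycbcr[..., 1], ycbcr[..., 2]
    mask = ((cb >= cb_range[0]) & (cb <= cb_range[1]) &
            (cr >= cr_range[0]) & (cr <= cr_range[1]))
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.size, (rows.mean(), cols.mean())

# A synthetic frame: a 20 x 20 pure-red square on a black background.
frame = np.zeros((60, 80, 3), dtype=np.uint8)
frame[20:40, 30:50, 0] = 255          # red patch -> low Cb, high Cr
area, centroid = detect_target(frame)
```

The centroid and area returned here are the same features that the tracking stage feeds into the pixel-to-distance look-up table.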
Finally, this study integrates the software algorithms with the hardware, completing a distance measurement method based on the target's image features and realizing an object-following autonomous vehicle control system.
Abstract (English) In this thesis, we propose an automatic vehicle system that tracks a specified moving object. Using real-time video together with target detection and localization techniques, the vehicle performs active visual tracking and follows the moving object.
The system integrates two parts: the hardware architecture and the software pipeline with its image processing algorithms. The hardware architecture consists of four subsystems: the MCU system, the image processing system, the power drive system, and the communication system. Each subsystem has its own hardware, interfaces, and peripherals. The MCU executes the image processing algorithms and controls all peripherals through these interfaces; for example, it drives image acquisition from the camera module, sends direction signals to the motor controller, and communicates with a remote PC.
The software pipeline consists of three stages: object detection and localization, object tracking, and motion control of the vehicle. Object detection and localization uses color detection: the color range is defined by a Gaussian mixture model (GMM) in the YCbCr color space, which is relatively insensitive to lighting changes. After the color space conversion, features of the target are extracted by thresholding, morphological noise removal, and a contour search. For object tracking, a calibration procedure builds a look-up table that maps image pixels to real-world distance. Combined with the target's horizontal position in the image, this distance indicates whether the target lies to the left or right of the camera's center line. Trigonometric functions then yield the hypotenuse and acute angle of the corresponding right triangle, from which the vehicle is commanded to drive forward, rotate, or stop.
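The look-up table and triangle computation can be sketched as follows. This is a simplified illustration under stated assumptions: the calibration pairs are hypothetical values, and linear interpolation between table entries is an assumption, since the abstract does not specify the interpolation scheme.

```python
import math

# Hypothetical calibration table built once by the calibration step:
# observed target-region height in pixels -> measured forward distance in cm.
CALIB = [(120, 50.0), (80, 100.0), (60, 150.0), (40, 250.0)]

def forward_distance(pixel_height):
    """Look up the forward distance, linearly interpolating between entries."""
    pts = sorted(CALIB)                      # ascending pixel height
    for (h0, d0), (h1, d1) in zip(pts, pts[1:]):
        if h0 <= pixel_height <= h1:
            t = (pixel_height - h0) / (h1 - h0)
            return d0 + t * (d1 - d0)
    raise ValueError("pixel height outside calibrated range")

def heading(pixel_height, lateral_offset_cm):
    """Hypotenuse (line-of-sight distance) and acute angle in degrees of the
    right triangle formed by the forward distance and the lateral offset."""
    d = forward_distance(pixel_height)
    hyp = math.hypot(d, lateral_offset_cm)
    ang = math.degrees(math.atan2(lateral_offset_cm, d))  # + = right of center
    return hyp, ang
```

The sign of the angle encodes left/right of the camera center line, which is the direction cue the motion controller acts on.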
Finally, this research integrates the software algorithms and the hardware devices into a distance measurement method based on image features of the target, realizing an automatic vehicle control system that follows the specified object.
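The final drive decision (forward, rotate, or stop) can be sketched as a threshold rule; the stop range and angular tolerance below are hypothetical values, not the thesis's tuned parameters.

```python
def drive_command(distance_cm, angle_deg,
                  stop_range_cm=60.0, angle_tol_deg=10.0):
    """Map the estimated distance and heading angle to a drive command.
    Stop once the target is inside the stop range; otherwise rotate until
    the target is near the camera center line, then drive forward."""
    if distance_cm <= stop_range_cm:
        return "stop"
    if angle_deg > angle_tol_deg:
        return "rotate_right"
    if angle_deg < -angle_tol_deg:
        return "rotate_left"
    return "forward"
```

On the real vehicle these commands would be translated into forward/reverse signals for the motor controller by the MCU.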
Keywords (Chinese) ★ Autonomous Vehicle
★ Color Detection
★ Contour Search
★ Distance Measurement
★ Object Following
Keywords (English) ★ Automatic Vehicle
★ Color Detection
★ Contour Search
Table of Contents
Chinese Abstract i
Abstract ii
Acknowledgments iv
Table of Contents v
List of Figures vii
List of Tables x
Chapter 1. Introduction 1
1.1 Research background and motivation 1
1.2 System hardware architecture 2
1.3 System software flow 3
1.4 Thesis organization 4
Chapter 2. Related Work 6
2.1 Object detection 6
2.2 Object tracking 9
Chapter 3. System Peripheral Hardware 13
3.1 Microcomputer system 13
3.2 Image processing system 15
3.3 Power drive system 17
3.4 Communication system 21
Chapter 4. Object Detection and Localization 23
4.1 Image preprocessing 25
4.1.1 Color space selection 26
4.1.2 Color space conversion 27
4.2 Gaussian mixture model 28
4.3 Image binarization 32
4.4 Morphological noise removal 33
4.5 Contour search 37
4.6 Centroid and area determination 38
Chapter 5. Object Tracking and the Vehicle Motion System 42
5.1 Training distance estimation by calibration 44
5.2 Estimating actual distance 47
5.3 Stop-range decision 48
5.4 Vehicle motion system 50
Chapter 6. Experiments and Results 52
6.1 Experimental environment 52
6.2 Experimental results 52
6.2.1 Object detection at different times of day, indoors and outdoors 53
6.2.2 Object detection in dark environments and multi-object discrimination 54
6.2.3 Object following by the vehicle 55
6.3 Analysis of experimental results 59
Chapter 7. Conclusions and Future Work 61
7.1 Conclusions 61
7.2 Future work 61
References 63
References [1] S.-Y. Chien, S.-Y. Ma, and L.-G. Chen, “Efficient moving object segmentation algorithm using background registration technique,” IEEE Trans. Circuits Syst. Video Technol., vol.12, no.7, pp.577-586, Jul. 2002.
[2] O. Masoud and N. P. Papanikolopoulos, “A novel method for tracking and counting pedestrians in real-time using a single camera,” IEEE Trans. on Vehicular Technology, vol.50, no.5, pp.1267-1278, Sep. 2001.
[3] Y. C. Chung, J. M. Wang, and S. W. Chen, “Progressive background image generation,” in Proc. of 15th IPPR Conf. on Computer Vision, Graphics and Image Processing, Taiwan, pp.858-865, 2002.
[4] I. Haritaoglu, D. Harwood, and L. S. Davis, “W4: real-time surveillance of people and their activities,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.22, pp.809-830, Aug. 2000.
[5] A. J. Lipton, H. Fujiyoshi, and R. S. Patil, “Moving target classification and tracking from real-time video,” in Proc. IEEE Workshop on Application of Computer Vision, Princeton, NJ, pp.8-14, Oct. 1998.
[6] S. S. Ghidary, Y. Nakata, T. Takamori, and M. Hattori, “Human detection and localization at indoor environment by home robot,” in Proc. of IEEE Int. Conf. on Systems, Man, and Cybernetics, Nashville, TN, vol.2, pp.1360-1365, Oct. 2000.
[7] B. D. Lucas and T. Kanade, “An iterative image registration technique with an application to stereo vision,” DARPA Image Understanding Workshop, San Francisco, CA, pp.121-130, 1981.
[8] Y. L. Tian and A. Hampapur, “Robust salient motion detection with complex background for real-time video surveillance,” in Proc. IEEE Workshop on Motion and Video Computing, Breckenridge, CO, vol.2, pp.30-35, Jan. 2005.
[9] C. Stauffer and W. Grimson, “Adaptive background mixture models for real-time tracking,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, Fort Collins, CO, vol.2, pp.246-252, Jun. 1999.
[10] Y. L. Tian, M. Lu, and A. Hampapur, “Robust and efficient foreground analysis for real-time video surveillance,” in Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR’05), vol.1, pp.1182-1187, Jun. 2005.
[11] H. H. Lin, J. H. Chuang, and T. L. Liu, “Regularized background adaptation: a novel learning rate control scheme for Gaussian mixture modeling,” IEEE Trans. on Image Processing, vol.20, pp.822-836, Mar. 2011.
[12] S. A. El-Azim, I. Ismail, and H. A. El-Latiff, “An efficient object tracking technique using block-matching algorithm,” in Proc. Nineteenth National Radio Science Conf., pp.427-433, 2002.
[13] S. McKenna, S. Jabri, Z. Duric, A. Rosenfeld, and H. Wechsler, “Tracking groups of people,” Computer Vision and Image Understanding, vol.80, pp.42-56, Oct. 2000.
[14] D. Koller, K. Daniilidis, and H. Nagel, “Model-based object tracking in monocular image sequences of road traffic scenes,” Int. Journal of Computer Vision, vol.10, no.3, pp.257-281, Jun. 1993.
[15] I. A. Karaulova, P. M. Hall, and A. D. Marshall, “A hierarchical model of dynamics for tracking people with a single video camera,” in Proc. British Machine Vision Conf., pp.262-352, 2000.
[16] N. Paragios and R. Deriche, “Geodesic active contours and level sets for the detection and tracking of moving objects,” IEEE Trans. Pattern Anal. Machine Intell., vol.22, no.4, pp.266-280, Apr. 2002.
[17] P. B. Chen, C. M. Huang, and L. C. Fu, “A robust visual servo system for tracking an arbitrary-shaped object by a new active contour method,” in Proc. American Control Conf., Boston, MA, vol.2, pp.1516-1521, Jun. 2004.
[18] J. S. Park and J. H. Han, “Contour matching: a curvature-based approach,” Image and Vision Computing, vol.16, pp.181-189, 1998.
[19] D.-S. Jang and H.-I. Choi, “Active models for tracking moving objects,” Pattern Recognition, vol.33, no.7, pp.1135-1146, 2000.
[20] D. Comaniciu, V. Ramesh, and P. Meer, “Kernel-based object tracking,” IEEE Trans. on Pattern Analysis and Machine Intelligence, vol.25, no.5, pp.564-575, May 2003.
[21] M. Sonka, V. Hlavac, and R. Boyle, Image Processing: Analysis and Machine Vision, Brooks/Cole Publishing Company, 1999.
[22] C. Garcia and G. Tziritas, “Face detection using quantized skin color regions merging and wavelet packet analysis,” IEEE Trans. on Multimedia, vol.1, no.3, pp.264-277, Sep. 1999.
[23] S. Suzuki and K. Abe, “Topological structural analysis of digitized binary images by border following,” Computer Vision, Graphics, and Image Processing, vol.30, no.1, pp.32-46, 1985.
[24] D. Douglas and T. Peucker, “Algorithms for the reduction of the number of points required to represent a digitized line or its caricature,” Cartographica: The Int. Journal for Geographic Information and Geovisualization, vol.10, no.2, pp.112-122, 1973.
Advisor: Tseng, Din-Chang (曾定章)   Date of Approval: 2014-7-28
