Thesis 93522041 — Detailed Record




Name Ying-Chieh Liao (廖英傑)   Graduate Department Computer Science and Information Engineering
Thesis Title Eye Gaze Estimation from Iris Images with Free Head Movements
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit searching, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese, translated) Current gaze-tracking systems have many applications, such as home care, mouse control, and online learning. For general-purpose use, where installation time and ease of use matter, a non-intrusive system is preferable: one built by mounting a camera at a fixed location to track the user's gaze. Most non-intrusive systems, however, require the user to keep the head still. This is workable for patients such as those with ALS, but very difficult for ordinary users, since most people cannot hold their heads still for long, so the system's accuracy drops considerably.
To remove this head-movement restriction, this thesis proposes an effective solution. The algorithm takes a 320×240 eye image as input and extracts eight predefined features: the two eye-corner positions, plus the center, axis ratio, and orientation of the ellipse representing the iris contour. These feature values are fed into a neural network trained during calibration, and the network's output is the estimated gaze point. Unlike other methods, the iris contour is represented by an ellipse rather than a circle, since a circle discards much of the contour's information. In addition, the mapping between the input features and the gaze point is approximated with a neural network instead of a second- or third-order polynomial. Feature selection also takes head movement into account: besides the eye center, several other meaningful features are used. Finally, the experimental results confirm that the proposed algorithm can overcome the head-movement restriction.
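The calibration mapping described above — eight gaze features in, a 2-D gaze point out, approximated by a neural network rather than a low-order polynomial — can be sketched as a small multilayer perceptron. The feature layout follows the abstract (two corner points, ellipse center, axis ratio, orientation), but the hidden-layer size, learning rate, and synthetic training data below are illustrative assumptions, not the thesis's actual configuration:

```python
# Sketch of the eight-feature -> gaze-point mapping as a one-hidden-layer
# MLP trained with plain gradient descent. The "calibration" data here is
# synthetic; the thesis trains on real calibration samples.
import numpy as np

rng = np.random.default_rng(0)

# Eight features per sample:
# [corner1_x, corner1_y, corner2_x, corner2_y,
#  ellipse_cx, ellipse_cy, axis_ratio, orientation]
X = rng.uniform(-1.0, 1.0, size=(200, 8))
# Pretend the true feature -> gaze mapping is mildly nonlinear.
W_true = rng.normal(size=(8, 2))
y = np.tanh(X @ W_true)              # 2-D "gaze points"

H = 16                                # hidden units (assumed)
W1 = rng.normal(scale=0.5, size=(8, H)); b1 = np.zeros(H)
W2 = rng.normal(scale=0.5, size=(H, 2)); b2 = np.zeros(2)

lr = 0.05
for _ in range(2000):
    h = np.tanh(X @ W1 + b1)          # forward pass
    pred = h @ W2 + b2
    err = pred - y                    # mean-squared-error gradient
    gW2 = h.T @ err / len(X); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)
    gW1 = X.T @ dh / len(X); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

gaze = np.tanh(X @ W1 + b1) @ W2 + b2  # estimated gaze points
mse = float(np.mean((gaze - y) ** 2))
```

After calibration, each new eye image yields one 8-vector of features, and a single forward pass gives the estimated on-screen gaze point.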
Abstract (English) A gaze estimation method using the eye position and the iris contour is proposed in this thesis. Traditional methods ask users to keep their heads still for a long time, which is exhausting and fatiguing. To create a comfortable environment, users' heads are allowed to move freely. In our approach, the eye region is assumed to lie within the view of the camera, and zoomed, clear eye images are grabbed to increase the accuracy of gaze estimation; the mapping between the gaze points and the eye region is difficult to formulate as a polynomial. Therefore, the center of the eyeball and the shape of the iris contour should both be considered. First, the eye corners are located manually to estimate the center of the eyeball. Using this information, the pose of the head is inferred as a global gaze feature. In addition, the iris center, size, and orientation, called local gaze features, are calculated and integrated to train a neural network (NN). Instead of a polynomial function approximation, the NN is trained to estimate the mapping between the gaze features and the gaze points. Experiments were conducted, and the results demonstrate the effectiveness of the proposed method in gaze estimation. Finally, conclusions are given and future work is suggested.
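The local gaze features above (iris center, size, orientation) come from representing the iris contour as a general ellipse rather than a circle; the reference list includes Fitzgibbon et al.'s direct least-squares ellipse fitting for this purpose. The sketch below is a generic algebraic conic fit via SVD, not the thesis's exact fitting procedure, and the sample contour points and ellipse parameters are made up for illustration:

```python
# Generic least-squares ellipse (conic) fit: given points on the iris
# boundary, recover a x^2 + b xy + c y^2 + d x + e y + f = 0, whose
# center, axis ratio, and orientation could then serve as gaze features.
import numpy as np

# Synthetic "iris contour": ellipse centered at (3, 2) with
# semi-axes 4 and 2, rotated by 0.5 rad.
t = np.linspace(0.0, 2.0 * np.pi, 60, endpoint=False)
cx, cy, a_ax, b_ax, th = 3.0, 2.0, 4.0, 2.0, 0.5
x = cx + a_ax * np.cos(t) * np.cos(th) - b_ax * np.sin(t) * np.sin(th)
y = cy + a_ax * np.cos(t) * np.sin(th) + b_ax * np.sin(t) * np.cos(th)

# Algebraic fit: the conic coefficients span the null space of the
# design matrix [x^2, xy, y^2, x, y, 1]; with noisy points, the
# smallest singular vector is the least-squares solution.
D = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
_, _, Vt = np.linalg.svd(D)
a, b, c, d, e, f = Vt[-1]

# Ellipse center from the zero-gradient condition of the conic:
# 2ax + by + d = 0 and bx + 2cy + e = 0.
center = np.linalg.solve([[2 * a, b], [b, 2 * c]], [-d, -e])
```

A circle fit would force b = 0 and a = c, discarding exactly the axis-ratio and orientation information that the thesis uses to compensate for head movement.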
Keywords ★ iris contour
★ gaze estimation
★ multilayer perceptron
★ iris localization
★ human-computer interface
Table of Contents Chapter 1 Introduction 1
1.1 Motivations 1
1.2 Related Works 3
1.2.1 Eye Anatomy 3
1.2.2 Eye Tracking Techniques 3
1.2.3 Head Mounted Device 4
1.2.4 Electric Skin Potential 4
1.2.5 Eye Image Using Artificial Neural Networks (ANN) 4
1.2.6 Dual Purkinje Image 5
1.2.7 Video-based Iris and Pupil Tracking 6
1.2.8 Pupil-Glint Vector Technique 7
1.3 System Overview 8
1.4 Thesis Organization 9
Chapter 2 Iris Contour Extraction 10
2.1 Iris Contour 10
2.2 Iris Localization 11
(a) Vertical Boundary Identification 13
(b) 8-Connected Component Labeling 14
(c) Horizontal Boundaries Identification 15
2.3 Elliptical Iris Contour Detection 17
Chapter 3 Eye Gaze Determination 23
3.1 Neural Network 23
(a) Face detection24
(b) Function approximation 24
(c) Incident detection 25
3.2 Calibration 25
3.3 Geometrical Features of Eyes 25
(a) Still Head Movement Constraint 26
(b) Without Still Head Movement Constraint 28
(c) Feature Selection 29
(d) Gaze Determination 29
Chapter 4 Experimental Results and Discussions 31
4.1 System Configuration 31
4.2 Experiments 32
4.2.1 Iris Contour Detection 32
4.2.2 Gaze Estimation 33
4.3 Discussions 38
Chapter 5 Conclusions and Future Works 39
5.1 Conclusions 39
5.2 Future Works 39
References 41
References [1] LC Technique. http://www.eyegaze.com/
[2] SensoMotoric Instruments. http://www.smi.de/
[3] Z. Zhu and Q. Ji, “Eye gaze tracking under natural head movements”, IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 1, pp. 918-923, June 2005.
[4] Z. Zhu and Q. Ji, “Eye and gaze tracking for interactive graphic display”, Machine Vision and Applications, vol. 15, no. 3, pp. 139-148, July 2004.
[5] D. Hyun and M. J. Chung, “Non-intrusive eye gaze estimation without knowledge of eye pose”, Proc. of 6th IEEE International Conference on Automatic Face and Gesture Recognition, pp. 785-790, May 2004.
[6] T. Cornsweet and H. Crane, “Accurate two-dimensional eye tracker using first and fourth Purkinje images”, Journal of the Optical Society of America, vol. 63, pp. 921-928, 1973.
[7] S. Baluja and D. Pomerleau, “Non-intrusive gaze tracking using artificial neural networks”, Tech. Rep. CMU-CS-94-102, School of Computer Science, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213, Jan. 1994.
[8] Y. Ebisawa, “Improved video-based eye-gaze detection method”, IEEE Transactions on Instrumentation and Measurement, vol. 47, pp. 948-955, Aug. 1998.
[9] S. W. Shih and J. Liu, “A novel approach to 3-D gaze tracking using stereo cameras”, IEEE Transactions on Systems, Man and Cybernetics, Part B, vol. 34, pp. 234-235, Feb. 2004.
[10] C. H. Morimoto, D. Koons, A. Amir, and M. Flickner, “Pupil detection and tracking using multiple light sources”, Image and Vision Computing, vol. 18, pp. 331-335, 2000.
[11] T. E. Hutchinson, K. P. White, Jr., W. N. Martin, K. C. Reichert, and L. A. Frey, “Human-computer interaction using eye-gaze input”, IEEE Transactions on Systems, Man and Cybernetics, vol. 19, pp. 1527-1534, Nov./Dec. 1989.
[12] A. Fitzgibbon, M. Pilu, and R. B. Fisher, “Direct least square fitting of ellipses”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 21, pp. 476-480, May 1999.
[13] C. M. Privitera and L. W. Stark, “Algorithms for defining visual regions-of-interest: Comparison with eye fixations”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, pp. 970-982, Sep. 2000.
[14] A. L. Yuille, D. S. Cohen, and P. W. Hallinan, “Feature extraction from faces using deformable templates”, Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 104-109, June 1989.
[15] W. Huang, B. Yin, C. Jiang, and J. Miao, “A new approach for eye feature extraction using 3D eye template”, Proc. of 2001 International Symposium on Intelligent Multimedia, Video and Speech Processing, pp. 340-343, May 2001.
[16] Y. Matsumoto and A. Zelinsky, “An algorithm for real-time stereo vision implementation of head pose and gaze direction measurement”, Proc. of Fourth International Conference on Automatic Face and Gesture Recognition, pp. 499-504, March 2000.
[17] K. N. Kim and R. S. Ramakrishna, “Vision-based eye-gaze tracking for human computer interface”, IEEE International Conference on Systems, Man, and Cybernetics, vol. 2, pp. 324-329, Oct. 1999.
[18] G. C. Feng and P. C. Yuen, “Variance projection function and its application to eye detection for human face recognition”, Pattern Recognition Letters, vol. 19, pp. 899-906, July 1998.
[19] Z. H. Zhou and X. Geng, “Projection functions for eye detection”, Pattern Recognition, vol. 37, pp. 1049-1056, May 2004.
[20] S. Haykin, Neural Networks, 2nd edition, Prentice Hall, ch. 4, 1999.
[21] H. A. Rowley, S. Baluja, and T. Kanade, “Neural network-based face detection”, Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pp. 203-208, June 1996.
[22] C. Garcia and M. Delakis, “Convolutional face finder: a neural architecture for fast and robust face detection”, IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 26, pp. 1408-1423, Nov. 2004.
[23] T. Draelos and D. Hush, “A constructive neural network algorithm for function approximation”, IEEE International Conference on Neural Networks, vol. 1, pp. 50-55, June 1996.
[24] X. Jin, D. Srinivasan, and R. L. Cheu, “Classification of freeway traffic patterns for incident detection using constructive probabilistic neural networks”, IEEE Transactions on Neural Networks, vol. 12, pp. 1173-1187, Sept. 2001.
[25] D. Srinivasan, X. Jin, and R. L. Cheu, “Evaluation of adaptive neural network models for freeway incident detection”, IEEE Transactions on Intelligent Transportation Systems, vol. 5, pp. 1-11, March 2004.
[26] J. Zhu and J. Yang, “Subpixel eye gaze tracking”, Proc. of Fifth IEEE International Conference on Automatic Face and Gesture Recognition, pp. 124-129, May 2002.
Advisor Kuo-Chin Fan (范國清)   Date of Approval 2006-7-6
