Thesis 100522097: Detailed Record




Author: Han-ting Hsieh (謝瀚霆)    Department: Computer Science and Information Engineering
Thesis Title: A Mix-Reality-based Interactive War-Gaming Platform (混合實境之兵棋推演互動桌)
Related Theses
★ A Q-Learning-Based Swarm Intelligence Algorithm and Its Applications
★ Development of a Rehabilitation System for Children with Developmental Delays
★ Comparing Teacher Assessment and Peer Assessment from the Perspective of Cognitive Style: From English Writing to Game Making
★ A Prediction Model for Diabetic Nephropathy Based on Laboratory Test Values
★ Design of a Fuzzy-Neural-Network-Based Classifier for Remote Sensing Images
★ A Hybrid Clustering Algorithm
★ Development of Assistive Devices for People with Disabilities
★ A Study of Fingerprint Classifiers
★ A Study of Backlit Image Compensation and Color Quantization
★ Application of Neural Networks to Business Income Tax Audit Case Selection
★ A New Online Learning System and Its Application to Tax Audit Case Selection
★ An Eye-Tracking System and Its Applications to Human-Machine Interfaces
★ Data Visualization Combining Swarm Intelligence and Self-Organizing Maps
★ Development of a Pupil-Tracking System for Human-Machine Interfaces for People with Disabilities
★ An Artificial-Immune-System-Based Online-Learning Neuro-Fuzzy System and Its Applications
★ Application of Genetic Algorithms to Speech Descrambling
  1. The author has consented to make this electronic thesis openly available immediately.
  2. The open-access full text is licensed to users solely for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese) Traditional war gaming uses a table, a map, and game pieces to build a combat-simulation environment: the pieces are moved by hand to deduce, analyze, and record tactical changes, thereby achieving the purpose of tactical planning. Because manual war gaming is time-consuming and its deduction space is limited, computers have now largely replaced manual war games. Although computer war games are superior to manual ones, they demand knowledge of system operation and lack physical, hands-on deduction.
This thesis combines a Microsoft Kinect 3D depth camera with a projector to build a mixed-reality interactive war-gaming table. The table offers many attractive features, such as intuitive, gesture-based operation, large-scale touch functionality, and geographic map data supplied in real time to satisfy different war-gaming needs. The system automatically records the paths taken during a war-game deduction and, after the exercise ends, replays them with animations; this historical trace supports reviewing the scenario deduction and the employment of forces. Furthermore, by using the Kinect depth camera as a touch sensor and treating the tabletop as a touch panel, we not only avoid the inconvenience of relocating a large touch screen but also reach the development goal within a limited budget and without adding equipment.
Finally, every function of the system was verified through a variety of experiments. In the touch-accuracy experiment, the command accuracy reached 97.1%. In the object detection and recognition experiments, the average recognition rate was 97.42% for objects at different angles and 87.49% for moving objects. In addition, we evaluated the system with the System Usability Scale and obtained a score of 77.14.
Abstract (English) A traditional war game builds a combat-simulation environment from a game table, a map, and pieces that represent different forces; staff officers move the pieces around the table to deduce, analyze, and record the changes resulting from simulated military strategies, thereby achieving tactical-planning purposes. Because conventional war gaming is very time-consuming and its operating environment has many limitations, computer-based war games have now largely replaced it. Although computer-based war games outperform traditional ones, they require staff officers to have expertise in operating the systems, and they lack intuitive physical actions during the deduction of military operations.
This thesis integrates a Microsoft Kinect 3D depth camera with a projector to create a mixed-reality interactive war-gaming platform. The platform offers many appealing features, such as intuitive embodied control, large-scale touch-screen functionality, and real-time provision of geographic map data that meets the requirements of different military exercises. The system automatically records the entire deduction procedure of a war game and, once the exercise ends, replays the recorded procedure with animations; this history serves as the basis for reviewing the exercise and the deduction of troop deployments. With this environment, users complete a military exercise deduction entirely through embodied control. In addition, we use the Kinect depth camera as a touch sensor so that the tabletop itself serves as a large-scale touch panel, an arrangement that not only avoids the inconvenience of relocating a large touch screen but also meets the development goal on a small budget.
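To make the depth-camera-as-touch-sensor arrangement concrete, the following Python/NumPy fragment is a minimal sketch of the general technique, not the thesis's actual implementation; the band thresholds, function names, and the use of SciPy's connected-component labeling are illustrative assumptions. It compares each depth frame against a depth map of the empty tabletop captured at startup, treats pixels lying in a thin band just above the surface as touches, and treats a higher band as the hover ("movement") layer.

import numpy as np
from scipy import ndimage  # generic connected-component labeling; the thesis uses its own labeling step

# Illustrative thresholds in millimeters (assumptions, not the thesis's values).
TOUCH_MIN_MM, TOUCH_MAX_MM = 4.0, 20.0   # thin band just above the tabletop
HOVER_MAX_MM = 120.0                     # above the touch band: hovering hands/objects

def touch_and_hover_masks(depth_mm, surface_mm):
    """Classify each pixel of a depth frame against the empty-table surface.

    depth_mm   -- current depth frame (HxW, mm); 0 where the sensor has no reading
    surface_mm -- background depth map of the empty tabletop captured at startup
    Returns boolean (touch_mask, hover_mask) arrays.
    """
    valid = depth_mm > 0
    # The camera looks down at the table, so a smaller depth means higher above it.
    height = surface_mm.astype(np.float32) - depth_mm.astype(np.float32)
    touch = valid & (height >= TOUCH_MIN_MM) & (height <= TOUCH_MAX_MM)
    hover = valid & (height > TOUCH_MAX_MM) & (height <= HOVER_MAX_MM)
    return touch, hover

def touch_points(touch_mask, min_pixels=30):
    """Reduce the touch mask to candidate fingertip positions (row, col)."""
    labels, n = ndimage.label(touch_mask)
    indices = list(range(1, n + 1))
    centers = ndimage.center_of_mass(touch_mask, labels, indices)
    sizes = ndimage.sum(touch_mask, labels, indices)
    # Keep only blobs large enough to be a fingertip, discarding sensor noise.
    return [c for c, s in zip(centers, sizes) if s >= min_pixels]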
Finally, several experiments were designed to evaluate the functionality of the proposed mixed-reality war-gaming platform. In the touch experiments, the command accuracy was 97.1%. The recognition rate was 97.42% for objects placed at different angles and 87.49% for moving objects. In addition, we evaluated the system with the System Usability Scale and obtained a score of 77.14.
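For reference, a System Usability Scale score such as the 77.14 reported above is conventionally computed as in the sketch below (the standard SUS scoring rule, not code from the thesis; the respondent answers shown are made up): each of the 10 items is answered on a 1-5 scale, odd-numbered (positively worded) items contribute (answer - 1), even-numbered (negatively worded) items contribute (5 - answer), the per-respondent sum is multiplied by 2.5 to map onto 0-100, and the scores are averaged over respondents.

def sus_score(answers):
    """Score one respondent's 10 SUS answers (each 1..5) on the 0-100 scale."""
    assert len(answers) == 10 and all(1 <= a <= 5 for a in answers)
    total = sum((a - 1) if i % 2 == 0 else (5 - a)  # items 1,3,5,7,9 vs. 2,4,6,8,10
                for i, a in enumerate(answers))
    return total * 2.5

def mean_sus(all_answers):
    """Average the per-respondent SUS scores across the questionnaire."""
    return sum(sus_score(a) for a in all_answers) / len(all_answers)

# Two made-up respondents, for illustration only:
print(mean_sus([[4, 2, 4, 1, 5, 2, 4, 2, 4, 2],
                [5, 1, 4, 2, 4, 1, 5, 2, 4, 1]]))   # prints 83.75

A score of 77.14 sits above the often-cited SUS benchmark average of 68, consistent with the usability claim in the abstract.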
Keywords (Chinese) ★ depth camera (深度攝影機)
★ mixed reality (混合實境)
★ touch screen interface (觸控螢幕介面)
★ war gaming (兵棋推演)
★ 3D object recognition (三維物件辨識)
Keywords (English) ★ depth cameras
★ mixed reality
★ touch screen interface
★ war game
★ three-dimensional object recognition
Thesis Outline
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1: Introduction
1-1 Research Motivation
1-2 Research Objectives
1-3 Thesis Organization
Chapter 2: Related Work
2-1 War Gaming
2-1-1 Manual War Games
2-1-2 Computer War Games
2-2 Touch-Based Interactive Tables
2-3 Hand Gesture Recognition
2-4 3D Object Recognition
2-5 Kinect for Windows
2-5-1 Hardware Specifications
2-5-2 Depth Detection Range and Accuracy
2-5-3 Kinect Applications
2-6 Introduction to NASA World Wind
2-7 Introduction to Unity
Chapter 3: Methods and Procedures
3-1 Projection Screen Calibration
3-1-1 Obtaining the Projection Screen Boundaries
3-1-2 Coordinate System Transformation
3-2 Depth Plane Map
3-3 Skin Color Detection
3-4 Hand and Object Detection
3-4-1 Erosion and Dilation Algorithms
3-4-2 Labeling Algorithm
3-4-3 Object Orientation and Major-Axis Statistics
3-5 Object Recognition
3-6 Touch Determination
3-6-1 Movement Layer and Touch Layer
3-6-2 Touch Position
Chapter 4: The Interactive War-Gaming Table
4-1 System Environment
4-2 System Overview
4-2-1 War-Gaming Mode
4-2-2 Mission Scheduling Mode
Chapter 5: Experimental Results
5-1 Touch Experiments
5-1-1 Touch Command Accuracy
5-1-2 Time Consumption Comparison
5-2 Object Detection and Recognition Experiments
5-2-1 Object Position Accuracy
5-2-2 Recognition Rate at Different Object Positions
5-2-3 Recognition Rate under Object Rotation
5-2-4 Recognition Rate during Object Movement
5-3 Questionnaire Results
Chapter 6: Conclusions and Future Work
6-1 Conclusions
6-2 Future Work
References
Appendix 1: Least-Squares Singular Value Decomposition
Appendix 2: System Usability Scale (SUS)
Advisor: Mu-Chun Su (蘇木春)    Date of Approval: 2013-08-01