Master's/Doctoral Thesis 106522079: Detailed Record




Author: Wen-Yu Cheng (鄭文喻)    Department: Computer Science and Information Engineering
Thesis Title: A Drawing Scene Construction System based on the Integration of Augmented Reality and Deep Learning
(深度學習結合擴增實境之繪圖場景建構系統)
Related Theses:
★ A Q-Learning-Based Swarm Intelligence Algorithm and Its Applications
★ Development of a Rehabilitation System for Children with Developmental Delays
★ Comparing Teacher Assessment and Peer Assessment from the Perspective of Cognitive Styles: From English Writing to Game Production
★ Design of a Remote-Sensing Image Classifier Based on Fuzzy Neural Networks
★ A Hybrid Clustering Algorithm
★ Development of Assistive Devices for People with Disabilities
★ A Study on Fingerprint Classifiers
★ A Study on Backlit Image Compensation and Color Quantization
★ Application of Neural Networks to Business Income Tax Audit Case Selection
★ A New Online Learning System and Its Application to Tax Audit Case Selection
★ An Eye-Tracking System and Its Applications to Human-Machine Interfaces
★ Data Visualization Combining Swarm Intelligence and Self-Organizing Maps
★ Development of a Pupil-Tracking System for Human-Machine Interfaces for People with Disabilities
★ An Artificial-Immune-System-Based Online Learning Neuro-Fuzzy System and Its Applications
★ Application of Genetic Algorithms to Voice Descrambling
★ Research and Implementation of an Iris Recognition System
  1. The author has agreed to make this electronic thesis available immediately.
  2. The released electronic full text is licensed only for personal, non-commercial retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) For children, drawing is an enjoyable activity for self-expression. Beyond its fun, drawing helps develop children's hand-eye coordination and fosters their visual thinking, creativity, and self-confidence. This thesis proposes a drawing scene construction system that combines drawing with augmented reality (AR) technology and uses low-cost, common drawing tools to give children a multi-layered experience in painting and visualization.
The system uses a mobile device as its human-machine interface and offers simple operation. By combining drawn scenes with AR technology to make space concrete, it helps children learn abstract concepts and enjoy drawing more. The system's functions fall into four parts: (1) drawing object recognition, (2) rotation angle analysis, (3) 3-D model texture generation, and (4) AR scene presentation.
The system currently implements eight types of drawing object models. Experiments reported in the thesis show an average recognition rate of 89% across the eight categories, and in field tests the recognition stability of the constructed AR scenes was 95%, demonstrating that the system performs well in both drawing recognition and scene construction.
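The drawing object recognition module named above builds on CNN-based detectors (the thesis's related-work chapter covers Faster R-CNN). As an illustrative sketch only, not the thesis's implementation, the following NumPy code shows greedy non-maximum suppression, the standard post-processing step such detectors use to discard duplicate boxes over the same drawn object; the boxes, scores, and threshold are made-up example values.

```python
import numpy as np

def nms(boxes, scores, iou_thresh=0.5):
    """Greedy non-maximum suppression over [x1, y1, x2, y2] boxes.

    Keeps the highest-scoring box, drops any remaining box whose IoU
    with it exceeds iou_thresh, and repeats on what is left.
    """
    order = np.argsort(scores)[::-1]          # indices, best score first
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        rest = order[1:]
        # Intersection of box i with every remaining box.
        x1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        y1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        x2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        y2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        order = rest[iou < iou_thresh]        # survivors for the next round
    return keep

# Two heavily overlapping boxes plus one separate box (hypothetical values).
boxes = np.array([[0, 0, 10, 10], [1, 1, 11, 11], [20, 20, 30, 30]], dtype=float)
scores = np.array([0.9, 0.8, 0.7])
print(nms(boxes, scores))  # → [0, 2]
```

Detectors in the Faster R-CNN family apply this step per class before the surviving boxes are handed to later stages.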
Abstract (English) For children, drawing is an enjoyable, self-expressive activity. Drawing is not only fun but also helps children improve their hand-eye coordination, and it can enhance children's visual thinking, creativity, and self-confidence. This thesis proposes a drawing scene construction system that combines drawing with augmented reality technology. The system uses low-cost, common drawing tools to give children a multi-level experience in painting and visualization.
The proposed system offers simple operation through a mobile device serving as the human-machine interface, and it combines the drawing scene with augmented reality technology to turn a 2-D painting into a concrete 3-D scene, helping children learn abstract concepts and enjoy painting more. The system consists of four modules: (1) a drawing object recognition module, (2) a rotation angle analysis module, (3) a 3-D model texture generation module, and (4) an augmented reality scene rendering module.
At present, eight kinds of drawing object models have been implemented in the system. Simulation results showed that the average recognition rate over the eight models reached 89%. A field test also showed that the stability of scene recognition based on augmented reality was about 95%. These results demonstrate that the drawing scene construction system provides accurate recognition and good rendering.
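The rotation angle analysis module is only named in this record, but the thesis's reference list cites principal components analysis (PCA). Purely as a sketch of that technique, under the assumption that the module operates on 2-D points sampled from a drawing's contour, PCA recovers the contour's dominant orientation as the eigenvector belonging to the largest covariance eigenvalue:

```python
import numpy as np

def principal_angle(points):
    """Dominant orientation of a 2-D point set, in degrees within [0, 180).

    PCA: the eigenvector of the covariance matrix with the largest
    eigenvalue points along the direction of greatest spread.
    """
    pts = np.asarray(points, dtype=float)
    centered = pts - pts.mean(axis=0)         # remove the centroid
    cov = np.cov(centered.T)                  # 2x2 covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)    # eigenvalues in ascending order
    major = eigvecs[:, np.argmax(eigvals)]    # principal axis
    # An orientation is sign-ambiguous, so fold the angle into [0, 180).
    return np.degrees(np.arctan2(major[1], major[0])) % 180.0

# A slightly noisy point cloud stretched along the 45-degree diagonal
# (a hypothetical stand-in for contour points extracted from a drawing).
t = np.linspace(0.0, 1.0, 50)
pts = np.stack([t, t + 0.01 * np.sin(20.0 * t)], axis=1)
print(round(principal_angle(pts)))  # → 45
```

On a real drawing, the points would come from an edge detector such as Canny (also cited in the thesis) rather than a synthetic curve.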
Keywords (Chinese) ★ augmented reality (擴增實境)
★ Unity3D
★ object detection (物件偵測)
★ generative adversarial network (生成對抗網路)
★ drawing (繪圖)
Keywords (English) ★ augmented reality
★ Unity3D
★ object detection
★ generative adversarial network
★ drawing
Table of Contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Contents
List of Figures
List of Tables
Chapter 1: Introduction
1-1 Research Motivation
1-2 Research Objectives
1-3 Thesis Organization
Chapter 2: Related Work
2-1 Applications Combining Augmented Reality and Drawing
2-2 Object Detection Techniques
2-3 Faster R-CNN
2-4 GAN
Chapter 3: Methodology
3-1 System Overview
3-1-1 System Architecture and Workflow
3-1-2 Augmented Reality Scene Construction
3-1-3 3-D Model and Texture Design
3-2 Drawing Object Detection
3-3 Rotation Analysis Algorithm
3-3-1 Drawing Object Extraction
3-3-2 Rotation Angle Analysis
3-4 3-D Model Positioning
3-5 3-D Model Texture Generation
Chapter 4: Experimental Design and Result Analysis
4-1 Drawing Object Recognition Experiment
4-1-1 Experimental Design
4-1-2 Experimental Results
4-1-3 Experimental Analysis
4-2 Drawing Object Rotation Angle Experiment
4-2-1 Experimental Design
4-2-2 Experimental Results
4-2-3 Experimental Analysis
4-3 Virtual Scene Construction Experiment
4-3-1 Camera Placement Accuracy Experiment and Results
4-3-2 Drawing Paper Rotation Accuracy Experiment and Results
Chapter 5: Conclusions and Future Work
5-1 Conclusions
5-2 Future Work
References
Appendix 1
Appendix 2
References
[1] P. Milgram, H. Takemura, A. Utsumi, and F. Kishino, "Augmented reality: A class of displays on the reality-virtuality continuum," in Telemanipulator and Telepresence Technologies, vol. 2351, pp. 282-293, 1995.
[2] R. T. Azuma, "A survey of augmented reality," Presence: Teleoperators and Virtual Environments, vol. 6, no. 4, pp. 355-385, 1997.
[3] "SketchAR: How to Draw with AR," SketchAR, [Online]. Available: https://play.google.com/store/apps/details?id=ktech.sketchar&hl=zh_TW. [Accessed 8-Jun-2019].
[4] "Just a Line - Draw Anywhere with AR," Google Creative Lab, [Online]. Available: https://play.google.com/store/apps/details?id=com.arexperiments.justaline&hl=zh_TW. [Accessed 7-Jun-2019].
[5] "Disney Colour and Play," StoryToys Entertainment Limited, [Online]. Available: https://apps.apple.com/ca/app/disney-colour-and-play/id957471210. [Accessed 8 - Jun - 2019].
[6] "Augmented Creativity," Disney Research, [Online]. Available: https://studios.disneyresearch.com/augmented-creativity/. [Accessed 8 - Jun - 2019].
[7] R. Girshick, J. Donahue, T. Darrell, and J. Malik, "Rich feature hierarchies for accurate object detection and semantic segmentation," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 580-587, 2014.
[8] R. Girshick, "Fast R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, pp. 1440-1448, 2015.
[9] S. Ren, K. He, R. Girshick, and J. Sun, "Faster R-CNN: Towards real-time object detection with region proposal networks," in Advances in Neural Information Processing Systems, pp. 91-99, 2015.
[10] K. He, G. Gkioxari, P. Dollár, and R. Girshick, "Mask R-CNN," in Proceedings of the IEEE International Conference on Computer Vision, pp. 2961-2969, 2017.
[11] Y. LeCun, C. Cortes and C. J. Burges, "THE MNIST DATABASE of Handwritten Digits," [Online]. Available: http://yann.lecun.com/exdb/mnist/. [Accessed 10 - Jun - 2019].
[12] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, "You only look once: Unified, real-time object detection," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 779-788, 2016.
[13] W. Liu et al., "SSD: Single shot multibox detector," in European conference on computer vision, pp. 21-37, 2016.
[14] K. E. Van de Sande, J. R. Uijlings, T. Gevers, and A. W. Smeulders, "Segmentation as selective search for object recognition," in ICCV, vol. 1, no. 2, p. 7, 2011.
[15] K. Simonyan and A. Zisserman, "Very Deep Convolutional Networks for Large-Scale Image Recognition," in International Conference on Learning Representations, pp. 1-14, 2014.
[16] C. Szegedy et al., "Going deeper with convolutions," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1-9, 2015.
[17] K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 770-778, 2016.
[18] C. Szegedy, S. Ioffe, V. Vanhoucke, and A. A. Alemi, "Inception-v4, inception-resnet and the impact of residual connections on learning," in Thirty-First AAAI Conference on Artificial Intelligence, 2017.
[19] "Vanishing Gradient Problem," Wikipedia, [Online]. Available: https://en.wikipedia.org/wiki/Vanishing_gradient_problem. [Accessed 10 - Jun - 2019].
[20] I. Goodfellow et al., "Generative adversarial nets," in Advances in neural information processing systems, pp. 2672-2680, 2014.
[21] M. Mirza and S. Osindero, "Conditional Generative Adversarial Nets," arXiv preprint arXiv:1411.1784, 2014.
[22] P. Isola, J.-Y. Zhu, T. Zhou, and A. A. Efros, "Image-to-image translation with conditional adversarial networks," in Proceedings of the IEEE conference on computer vision and pattern recognition, pp. 1125-1134, 2017.
[23] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in International Conference on Medical image computing and computer-assisted intervention, pp. 234-241, 2015.
[24] "Unity-Multiplatform Support," Unity, [Online]. Available: https://unity3d.com/unity/features/multiplatform. [Accessed 12 - Jun - 2019].
[25] "Optimizing Target Detection and Tracking Stability," Vuforia, [Online]. Available: https://library.vuforia.com/articles/Solution/Natural-Features-and-Ratings. [Accessed 10 - Jun - 2019].
[26] "UV mapping-Wikipedia," Wikipedia, [Online]. Available: https://en.wikipedia.org/wiki/UV_mapping. [Accessed 11 - Jun - 2019].
[27] "quickdraw-Dataset," Google Creative Lab, [Online]. Available: https://github.com/googlecreativelab/quickdraw-dataset. [Accessed 13 - Jun - 2019].
[28] "Quick, Draw!," Google Creative Lab, [Online]. Available: https://experiments.withgoogle.com/quick-draw. [Accessed 12 - Jun - 2019].
[29] J. Canny, "A computational approach to edge detection," in Readings in computer vision: Elsevier, pp. 184-203, 1987.
[30] A. Maćkiewicz and W. Ratajczak, "Principal components analysis (PCA)," Computers & Geosciences, vol. 19, no. 3, pp. 303-342, 1993.
[31] "主成分分析(Principal Component Analysis, PCA)," Tommy Huang, [Online]. Available: https://medium.com/@chih.sheng.huang821/%E6%A9%9F%E5%99%A8-%E7%B5%B1%E8%A8%88%E5%AD%B8%E7%BF%92-%E4%B8%BB%E6%88%90%E5%88%86%E5%88%86%E6%9E%90-principle-component-analysis-pca-58229cd26e71. [Accessed 12 - Jun - 2019].
[32] "TurboSquid: Free 3D Models," TurboSquid, [Online]. Available: https://www.turbosquid.com/. [Accessed 12 - Jun - 2019].
Advisor: Mu-Chun Su (蘇木春)    Approval Date: 2019-08-20

For questions about this thesis, please contact the Extension Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.