

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81242


    Title: 基於區域卷積神經網路之第一人稱視角即時手指偵測; Egocentric-View Real-Time Fingertip Detection Based on Regional Convolutional Neural Networks
    Author: Chen, Yung-Han (陳永瀚)
    Contributor: Department of Computer Science and Information Engineering
    Keywords: fingertip detection; smart glasses applications; region proposal network; air-writing
    Date: 2019-08-06
    Upload time: 2019-09-03 15:40:19 (UTC+8)
    Publisher: National Central University
    Abstract: This research investigates real-time fingertip detection in RGB images/frames captured from wearable devices such as smart glasses. First, we established a synthetic dataset using Unity3D, compositing a 3D hand model performing the pointing gesture into natural scenes from the egocentric view. Synthetic data avoid manual labeling errors and quickly provide a large, high-quality benchmark dataset with accurate annotations. We discuss how to make the synthetic images resemble real ones, varying background complexity, lighting, and color contrast to improve model robustness. Second, a modified Mask Region-based Convolutional Neural Network (Mask R-CNN) is proposed, with one region-based CNN for hand detection and a three-layer CNN for locating the fingertip. We employ MobileNetV2 as the backbone network and reduce the number of bottleneck layers to avoid redundant features, and we improve the accuracy of detecting small objects by employing an FPN and RoIAlign. The proposed model achieves fingertip detection in 25 milliseconds per frame (38.8 frames per second) on 640×480 RGB images with a GPU, with an average pixel error of only 8.31 pixels. This processing speed is high enough to enable several applications: by tracing the user's fingertip from the first-person perspective to form writing trajectories, a text input mechanism for smart glasses can be implemented, letting the user write letters or characters in the air and interact with the system using simple gestures; the trajectories are recognized via the Google Input API, which returns candidate characters for the smart-glasses user to select. Experimental results demonstrate the feasibility of this new text input methodology.
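    The accuracy figure quoted above (an average pixel error of 8.31 on 640×480 frames) is the mean Euclidean distance between predicted and ground-truth fingertip coordinates. A minimal sketch of that metric, using hypothetical sample coordinates rather than the thesis's actual data:

    ```python
    import math

    def average_pixel_error(predictions, ground_truths):
        """Mean Euclidean distance (in pixels) between predicted and
        ground-truth fingertip coordinates over a set of frames."""
        if len(predictions) != len(ground_truths):
            raise ValueError("prediction/ground-truth counts must match")
        total = sum(math.dist(p, g) for p, g in zip(predictions, ground_truths))
        return total / len(predictions)

    # Hypothetical (x, y) fingertip locations on two 640x480 frames.
    preds = [(320.0, 240.0), (100.0, 80.0)]
    gts   = [(323.0, 244.0), (106.0, 88.0)]
    print(average_pixel_error(preds, gts))  # → 7.5
    ```

    The same per-frame distances, averaged over a full test set, yield the 8.31-pixel figure reported in the abstract.
    
    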
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File: index.html | Description: — | Size: 0Kb | Format: HTML | Views: 112 | View/Open


    All items in NCUIR are protected by original copyright.
