

    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/81079


    Title: 基於物件檢測的招牌辨識及半自動訓練資料產生器;Object detection for signboard recognition and semi-automatic ground truth generator
    Authors: 洪晨雅;Hong, Chen-Ya
    Contributors: 資訊工程學系;Department of Computer Science and Information Engineering
    Keywords: 深度學習;物件檢測;招牌辨識;deep learning;object detection;signboard recognition
    Date: 2019-07-15
    Issue Date: 2019-09-03 15:33:27 (UTC+8)
    Publisher: 國立中央大學;National Central University
    Abstract:
    Data-driven object detection techniques are widely applied in a variety of practical areas, and many methods have been proposed to improve the accuracy of computer vision applications. In this paper, we propose an automatic signboard detection method and a semi-automatic ground truth generation method to help visually impaired people walk on the streets of Taiwan. We consider that when visually impaired people walk down a street, they may be interested in certain stores; however, no sufficiently large public dataset of Taiwanese store signboards exists. Therefore, we collected images of 14 kinds of stores that are common in people's daily lives. Over 9 million street images were gathered from several major cities in Taiwan, but only about 1% of them contain a signboard. We therefore propose an object detection model that pre-labels uncertain samples, and we design a process around this model to achieve semi-automatic ground truth generation.
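    The pre-labeling of uncertain samples described above can be sketched as a simple confidence triage: detections the model is sure about are auto-accepted, clearly negative ones are discarded, and the uncertain middle band is queued for human review. The detector interface, thresholds, and function names below are illustrative assumptions, not the thesis's actual implementation.

    ```python
    # Hypothetical triage of detector outputs into three buckets.
    # Thresholds are placeholders, not values from the thesis.
    def triage(detections, accept=0.85, reject=0.30):
        """Split (image_id, confidence) pairs into three buckets."""
        auto_label, review_queue, discarded = [], [], []
        for image_id, conf in detections:
            if conf >= accept:
                auto_label.append(image_id)    # trusted pre-label
            elif conf <= reject:
                discarded.append(image_id)     # almost surely no signboard
            else:
                review_queue.append(image_id)  # uncertain: send to human editor
        return auto_label, review_queue, discarded

    auto, review, dropped = triage([("a", 0.95), ("b", 0.50), ("c", 0.10)])
    ```

    Only the middle band needs manual labeling, which is where the claimed savings in time and human effort would come from.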
    Our proposed object detection network is based on Darknet-19, and we improve its accuracy by introducing several techniques: the dilated block, the non-local block, and channel attention. The dilated block and the non-local block enlarge the receptive field so that the network gathers more contextual information, which improves its accuracy. We also introduce a channel attention mechanism that assigns different weights to the feature maps of different channels, further improving accuracy. The proposed network achieves 91% accuracy at 21 FPS.
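    As a rough illustration of the channel attention idea, here is a minimal NumPy sketch in the squeeze-and-excitation style (global average pooling, a small bottleneck, then sigmoid gates per channel). The thesis does not spell out its exact formulation, so the bottleneck shape and random weights below are assumptions for demonstration only.

    ```python
    import numpy as np

    def channel_attention(feature_map, w1, w2):
        """Squeeze-and-excitation-style channel reweighting.
        feature_map: (C, H, W); w1: (C//r, C); w2: (C, C//r)."""
        squeeze = feature_map.mean(axis=(1, 2))          # (C,) global average pool
        hidden = np.maximum(w1 @ squeeze, 0.0)           # ReLU bottleneck
        gates = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))     # per-channel sigmoid weights
        return feature_map * gates[:, None, None]        # rescale each channel

    rng = np.random.default_rng(0)
    x = rng.standard_normal((8, 4, 4))   # toy feature map: 8 channels, 4x4
    w1 = rng.standard_normal((2, 8))     # bottleneck ratio r=4 (8 -> 2)
    w2 = rng.standard_normal((8, 2))
    y = channel_attention(x, w1, w2)
    ```

    Because the gates lie in (0, 1), each channel is attenuated according to its learned importance rather than shifted, which is what lets the network emphasize informative feature maps.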
    The semi-automatic ground truth generation method comprises several applications: a Google Maps tool, the proposed detection network, and an editing tool. The Google Maps tool collects street images as raw data; the detection network filters out the images that contain signboards; and the editing tool is used to verify the correctness of the filtered images.
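    The three-stage flow above (collect, filter, verify) might be outlined as follows; every function here is a hypothetical stand-in for the actual tools (the Google Maps crawler, the detection network, and the GUI editing tool), not their real interfaces.

    ```python
    # Illustrative outline of the collect -> filter -> verify pipeline.
    def collect(locations):
        # Stand-in for the Google Maps street-image crawler.
        return [f"img_{loc}" for loc in locations]

    def detect(image):
        # Stand-in for the detection network: returns a confidence score.
        return 0.9 if "taipei" in image else 0.05

    def pipeline(locations, threshold=0.5):
        raw = collect(locations)
        candidates = [img for img in raw if detect(img) >= threshold]
        # In practice, `candidates` would then be verified by a human
        # with the editing tool before entering the training set.
        return candidates

    result = pipeline(["taipei", "hsinchu"])  # keeps only the signboard hits
    ```

    The point of the design is that humans only ever see the roughly 1% of images the network flags, rather than the full 9-million-image crawl.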
    The purpose of this paper is to provide a method of ground-truth data collection that greatly reduces the time and human resources required.
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses



    All items in NCUIR are protected by copyright, with all rights reserved.
