Master's/Doctoral Thesis 103522019: Detailed Record




Name: 張瓊方 (CHIUNG-FANG CHANG)    Graduate Department: Computer Science and Information Engineering
Thesis Title: Detecting Texts and Graphs in Street View Images by Convolutional Neural Networks
(Chinese title: 使用卷積神經網路偵測街景文字圖案)
Related Theses
★ Implementation of a Cross-Platform Wireless Heart-Rate Analysis System Based on Qt
★ An Extra-Message Transmission Mechanism for VoIP
★ Detection of Transition Effects Related to Highlights in Sports Games
★ Video/Image Content Authentication Based on Vector Quantization
★ A Baseball Highlight Extraction System Based on Transition-Effect Detection and Content Analysis
★ Image/Video Content Authentication Based on Visual Feature Extraction
★ Detecting and Tracking Foreground Objects in Moving Surveillance Videos Using Dynamic Background Compensation
★ Adaptive Digital Watermarking for H.264/AVC Video Content Authentication
★ A Baseball Highlight Extraction and Classification System
★ A Real-Time Multi-Camera Tracking System Using H.264/AVC Features
★ A Preceding-Vehicle Detection Mechanism for Highways Using Implicit Shape Models
★ A Video Copy Detection Mechanism Based on Temporal and Spatial Feature Extraction
★ Vehicular Video Coding Combining Digital Watermarking and Region-of-Interest Bit-Rate Control
★ H.264/AVC Video Encryption/Decryption and Digital Watermarking for Digital Rights Management
★ A News Video Analysis System Based on Text and Anchorperson Detection
★ An H.264/AVC Video Content Authentication Mechanism Based on Digital Watermarking
  1. This electronic thesis has been approved for immediate open access.
  2. For theses already open to the public, the electronic full text is authorized only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast the content without authorization, to avoid violating the law.

Abstract (Chinese) This thesis proposes a mechanism for detecting texts and graphs in street view images. Images captured in street view environments often contain recognizable man-made markers, including traffic signs and shop signs, and these artificial patterns carry information about the image, such as the location where it was taken or the advertising effect of a shop sign. However, the varied patterns and shapes of such objects are difficult to analyze with fixed templates. In addition, street view images often contain cluttered backgrounds (buildings, roads, trees, etc.); signs may overlap in the frame or be occluded by other objects on the street, and weather and lighting conditions also affect the detection results. All of these factors make detecting artificial information in street view images difficult. The proposed detection mechanism consists of two parts. The first part locates the regions of traffic and shop signs in the image: we adopt segmentation based on the Fully Convolutional Network (FCN) and train a detection model for street-view traffic and shop signs so that targets can be confirmed quickly and effectively. The second part extracts texts and logos within those regions. We exploit a common property of traffic and shop signs: regardless of their shapes, they usually consist of a smooth background region on which the texts and logos appear. We construct a smooth-region map from the grayscale gradient magnitude, match it against the regions detected in the first part to confirm the actual sign areas in the frame, and define a probability map of artificial-information positions based on the characteristics of texts and graphs. Finally, Maximally Stable Extremal Regions (MSER), a method well suited to text detection, extracts texts and logos from the regions with high probability. Experimental results show that the mechanism effectively extracts texts and graphs in various kinds of complex street scenes, and we discuss how the FCN can be employed in this application.
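The thesis body is not part of this record, but the first stage described above follows the general FCN segmentation idea: convolutional layers produce a coarse per-location score map, a 1x1 convolution replaces fully connected classification layers, and the scores are upsampled back to the input resolution. The following is only a minimal sketch of that idea in PyTorch; the layer count, channel widths, and two-class (background/sign) setup are illustrative assumptions, not the network actually trained in the thesis.

```python
# Minimal FCN-style sign/background segmenter (illustrative sketch only;
# layer counts and channel widths are arbitrary, not the thesis's model).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MiniFCN(nn.Module):
    def __init__(self, num_classes: int = 2):  # classes: background, sign
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),   # 1/2 resolution
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),   # 1/4 resolution
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),   # 1/8 resolution
        )
        # A 1x1 convolution replaces fully connected layers, so the network
        # accepts arbitrary input sizes and emits a per-pixel score map.
        self.score = nn.Conv2d(128, num_classes, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[2], x.shape[3]
        scores = self.score(self.features(x))
        # Upsample the coarse scores back to input resolution (FCN-style).
        return F.interpolate(scores, size=(h, w), mode="bilinear",
                             align_corners=False)

if __name__ == "__main__":
    model = MiniFCN()
    img = torch.randn(1, 3, 256, 256)                     # dummy street view image
    labels = torch.zeros(1, 256, 256, dtype=torch.long)   # dummy sign mask
    logits = model(img)                                   # (1, 2, 256, 256)
    loss = F.cross_entropy(logits, labels)                # per-pixel training loss
    print(logits.shape, float(loss))
```

The original FCN of Long et al. additionally fuses skip connections from earlier layers before upsampling; this sketch omits that for brevity.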
Abstract (English) Considering that traffic and shop signs appearing in street view images contain useful information, such as the locations of scenes or the effects of advertising billboards, a text and graph detection mechanism for street view images is proposed in this research. Many of these artificial objects are not easy to extract with a fixed template. Besides, cluttered backgrounds containing such items as buildings or trees may block parts of the signs, increasing the challenges of detection, and weather or lighting conditions further complicate the process. The proposed detection mechanism is divided into two parts. In the first, we use the Fully Convolutional Network (FCN) to train a detection model that effectively locates the positions of signs in street view images. In the second, we extract the texts and graphs in the selected areas by exploiting their characteristics. Observing that, regardless of the signs' shapes, the texts/graphs are usually superimposed on smooth areas, we construct smooth-region maps according to the gradient magnitudes and then confirm the actual areas of signs. The texts and graphs can then be extracted by Maximally Stable Extremal Regions (MSER), a method well suited to text detection. Experimental results show that this mechanism can effectively extract texts and graphs in different types of complex street scenes.
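For the second stage, the record gives only the outline (smooth-region map from gradient magnitudes, then MSER inside the confirmed areas), so the following is a rough sketch under stated assumptions: it uses OpenCV's Sobel and MSER implementations, a hypothetical gradient threshold of 20, and a simple center-inside-mask rule in place of the thesis's probability map of artificial-information positions, whose exact definition is not given here.

```python
# Sketch of the second stage: a smooth-region map from gradient magnitudes,
# followed by MSER extraction inside the retained areas. Threshold and
# morphology settings are illustrative guesses, not the thesis's parameters.
import cv2
import numpy as np

def smooth_region_map(gray: np.ndarray, grad_thresh: float = 20.0) -> np.ndarray:
    """Mark pixels with low gradient magnitude as 'smooth' (value 255)."""
    gx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
    magnitude = np.hypot(gx, gy)
    smooth = (magnitude < grad_thresh).astype(np.uint8) * 255
    # Close small holes left by thin strokes sitting on the smooth background.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (5, 5))
    return cv2.morphologyEx(smooth, cv2.MORPH_CLOSE, kernel)

def extract_mser_boxes(gray: np.ndarray, sign_mask: np.ndarray):
    """Run MSER and keep components whose centers fall inside the sign mask."""
    mser = cv2.MSER_create()
    regions, _ = mser.detectRegions(gray)
    boxes = []
    for pts in regions:
        x, y, w, h = cv2.boundingRect(pts.reshape(-1, 1, 2))
        if sign_mask[y + h // 2, x + w // 2] > 0:
            boxes.append((x, y, w, h))
    return boxes

if __name__ == "__main__":
    img = cv2.imread("street_view.jpg")   # hypothetical input image path
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # In the proposed mechanism the mask would come from matching the FCN
    # detections against the smooth-region map; here the map alone stands in.
    mask = smooth_region_map(gray)
    for (x, y, w, h) in extract_mser_boxes(gray, mask):
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 1)
    cv2.imwrite("detected.jpg", img)
```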
Keywords (Chinese) ★ 文字偵測 (text detection)
★ 卷積神經網路 (convolutional neural network)
★ 全卷積神經網路 (fully convolutional network)
★ 最大極值穩定區域 (maximally stable extremal regions)
Keywords (English) ★ text detection
★ convolutional neural network
★ fully convolutional network
★ MSER
Table of Contents
  Abstract (Chinese)
  Abstract (English)
  Table of Contents
  List of Figures
  List of Tables
  Chapter 1: Introduction
    1.1 Research Motivation
    1.2 Research Contributions
    1.3 Thesis Organization
  Chapter 2: Related Work on Text Detection
  Chapter 3: Introduction to Deep Learning
    3.1 Artificial Neural Networks
      3.1.1 Basic Principles of Neural Networks
      3.1.2 Development of Neural Networks
      3.1.3 Training Neural Networks
      3.1.4 Problems of Back-Propagation Neural Networks
    3.2 Development of Deep Learning
    3.3 Convolutional Neural Networks
      3.3.1 Basic Structure of Convolutional Neural Networks
      3.3.2 Development of Convolutional Neural Networks
      3.3.3 Convolutional Neural Network Models
  Chapter 4: Proposed Method
    4.1 Detection of Shop/Traffic Signs
      4.1.1 Introduction to Fully Convolutional Networks
      4.1.2 Network Construction and Training Data
    4.2 Localizing Texts and Graphs
      4.2.1 Confirming the Actual Positions of Signs
      4.2.2 Separating Background Regions within Signs
      4.2.3 Extracting Texts and Graphs
      4.2.4 Locating the Positions of Texts and Graphs
  Chapter 5: Experimental Results
    5.1 Analysis of Network Training Results
    5.2 Comparison of Text/Graph Detection Results
    5.3 Detection Results in Different Scenarios
  Chapter 6: Conclusion and Future Work
  References
References [1] J. J. Lee, P. H. Lee, S. W. Lee, A. Yuille, C. Koch, "AdaBoost for text detection in natural scene." IEEE International Conference on Document Analysis and Recognition (ICDAR), pp. 429-434, 2011.
[2] R. Minetto, N. Thome, M. Cord, "T-HOG: an effective gradient-based descriptor for single line text regions." Pattern Recognition, vol. 46(3), pp. 1078-1090, 2013.
[3] A. Bissacco, M. Cummins, Y. Netzer, H. Neven, "PhotoOCR: reading text in uncontrolled conditions." IEEE International Conference on Computer Vision (ICCV), 2013.
[4] A. Mishra, K. Alahari, C. V. Jawahar, "Top-down and bottom-up cues for scene text recognition." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[5] K. Wang, B. Babenko, S. Belongie, "End-to-end scene text recognition." IEEE International Conference on Computer Vision (ICCV), 2011.
[6] T. Wang, D. J. Wu, A. Coates, A. Y. Ng, "End-to-end text recognition with convolutional neural networks." International Conference on Pattern Recognition (ICPR), 2012.
[7] N. Dalal, B. Triggs, "Histograms of oriented gradients for human detection." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2005.
[8] A. Coates, B. Carpenter, C. Case, S. Satheesh, B. Suresh, T. Wang, D. J. Wu, A. Y. Ng, "Text detection and character recognition in scene images with unsupervised feature learning." IEEE International Conference on Document Analysis and Recognition (ICDAR), pp. 440-445, 2011.
[9] Y. F. Pan, X. Hou, C. L. Liu, "Text localization in natural scene images based on conditional random field." IEEE International Conference on Document Analysis and Recognition (ICDAR), 2009.
[10] J. Matas, O. Chum, M. Urban, T. Pajdla, "Robust wide-baseline stereo from maximally stable extremal regions." Image and Vision Computing, vol. 22(10), pp. 761-767, 2004.
[11] B. Epshtein, E. Ofek, Y. Wexler, "Detecting text in natural scenes with stroke width transform." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2010.
[12] C. Yao, X. Bai, W. Liu, Y. Ma, Z. Tu, "Detecting texts of arbitrary orientations in natural images." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[13] W. Huang, Z. Lin, J. Yang, J. Wang, "Text localization in natural images using stroke feature transform and text covariance descriptors." IEEE International Conference on Computer Vision (ICCV), 2013.
[14] L. Neumann, J. Matas, "Text localization in real-world images using efficiently pruned exhaustive search." IEEE International Conference on Document Analysis and Recognition (ICDAR), 2011.
[15] L. Neumann, J. Matas, "Real-time scene text localization and recognition." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2012.
[16] W. Huang, Q. Yu, X. Tang, "Robust scene text detection with convolution neural network induced MSER trees." European Conference on Computer Vision (ECCV), 2014.
[17] M. Jaderberg, K. Simonyan, A. Vedaldi, A. Zisserman, "Reading text in the wild with convolutional neural networks." International Journal of Computer Vision (IJCV), vol. 116(1), pp. 1-20, 2016.
[18] Z. Zhang, C. Zhang, W. Shen, C. Yao, W. Liu, X. Bai, "Multi-oriented text detection with fully convolutional networks." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[19] T. He, W. Huang, Y. Qiao, J. Yao, "Text-attentional convolutional neural network for scene text detection." IEEE Transactions on Image Processing, vol. 25(6), pp. 2529-2541, 2016.
[20] 葉怡成, 類神經網路模式應用與實作 (Neural Network Models: Applications and Implementation; in Chinese). 儒林圖書有限公司, 2004.
[21] W. S. McCulloch, W. Pitts, "A logical calculus of the ideas immanent in nervous activity." Bulletin of Mathematical Biophysics, vol. 5, pp. 115-133, 1943.
[22] D. O. Hebb, The Organization of Behavior. Wiley, 1949.
[23] F. Rosenblatt, "The perceptron: a probabilistic model for information storage and organization in the brain." Psychological Review, vol. 65, pp. 386-408, 1958.
[24] M. L. Minsky, S. Papert, Perceptrons: An Introduction to Computational Geometry. MIT Press, 1969.
[25] D. E. Rumelhart, G. E. Hinton, R. J. Williams, "Learning representations by back-propagating errors." Nature, vol. 323(6088), pp. 533-536, 1986.
[26] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, "Backpropagation applied to handwritten zip code recognition." Neural Computation, vol. 1(4), pp. 541-551, 1989.
[27] Y. Sun, X. Wang, X. Tang, "Deep learning face representation from predicting 10,000 classes." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1891-1898, 2014.
[28] Y. Taigman, M. Yang, M. Ranzato, L. Wolf, "DeepFace: closing the gap to human-level performance in face verification." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1701-1708, 2014.
[29] D. Silver, A. Huang, C. J. Maddison, A. Guez, L. Sifre, G. van den Driessche, J. Schrittwieser, I. Antonoglou, V. Panneershelvam, M. Lanctot, S. Dieleman, D. Grewe, J. Nham, N. Kalchbrenner, I. Sutskever, T. Lillicrap, M. Leach, K. Kavukcuoglu, T. Graepel, D. Hassabis, "Mastering the game of Go with deep neural networks and tree search." Nature, vol. 529(7587), pp. 484-489, 2016.
[30] J. Deng, W. Dong, R. Socher, "ImageNet: a large-scale hierarchical image database." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2009.
[31] W. Wang, B. C. Ooi, W. Y. Yang, "Effective multimodal retrieval based on stacked auto-encoders." Proceedings of the VLDB Endowment (PVLDB), vol. 7(8), pp. 649-660, 2014.
[32] Y. Li, Z. Hao, H. Lei, "Survey of convolutional neural network." Journal of Computer Applications, vol. 36(9), pp. 2508-2515, 2016.
[33] D. H. Hubel, T. N. Wiesel, "Receptive fields, binocular interaction and functional architecture in the cat's visual cortex." Journal of Physiology, vol. 160, pp. 106-154, 1962.
[34] K. Fukushima, "Neocognitron: a self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position." Biological Cybernetics, vol. 36(4), pp. 193-202, 1980.
[35] Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, "Backpropagation applied to handwritten zip code recognition." Neural Computation, vol. 1(4), pp. 541-551, 1989.
[36] Y. LeCun, L. Bottou, Y. Bengio, P. Haffner, "Gradient-based learning applied to document recognition." Proceedings of the IEEE, vol. 86(11), pp. 2278-2324, 1998.
[37] A. Krizhevsky, I. Sutskever, G. E. Hinton, "ImageNet classification with deep convolutional neural networks." Advances in Neural Information Processing Systems 25 (NIPS), pp. 1097-1105, 2012.
[38] K. Simonyan, A. Zisserman, "Very deep convolutional networks for large-scale image recognition." International Conference on Learning Representations (ICLR), 2015.
[39] C. Szegedy, W. Liu, Y. Jia, "Going deeper with convolutions." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1-9, 2015.
[40] K. He, X. Zhang, S. Ren, "Deep residual learning for image recognition." IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
[41] "Deep Learning Tutorial", Chapter 6.
[42] K. L. Bouman, G. Abdollahian, M. Boutin, E. J. Delp, "A low complexity sign detection and text localization method for mobile applications." IEEE Transactions on Multimedia, pp. 922-934, 2011.
[43] Signs N800 dataset: http://cobweb.ecn.purdue.edu/~ace/kbsigns/
[44] V. Dumoulin, F. Visin, "A guide to convolution arithmetic for deep learning." arXiv preprint arXiv:1603.07285, 2016.
Advisor: 蘇柏齊 (Po-Chyi Su)    Date of Approval: 2017-01-23