Master's/Doctoral Thesis 109521180 - Detailed Record




Name WU, YUEH-TSE (吳岳澤)   Department Department of Electrical Engineering
Thesis title Comparison and Analysis of Blurred License Plate Reconstruction Based on Generative Adversarial Networks (基於生成對抗網路之模糊車牌重建效果比較與分析)
Related theses
★ Control of a direct methanol fuel cell hybrid power system
★ Water quality inspection for hydroponic plants using refractive index detection
★ A DSP-based automatic guidance and control system for a model car
★ Redesign of motion control for a rotary inverted pendulum
★ Fuzzy control decisions for freeway on-/off-ramp traffic signals
★ On the fuzziness of fuzzy sets
★ Further improvement of motion control performance for a dual-mass spring-coupled system
★ A machine vision system for air hockey
★ Robotic offense and defense control for air hockey
★ Attitude control of a model helicopter
★ Stability analysis and design of fuzzy control systems
★ A real-time recognition system for access-control monitoring
★ Air hockey: human versus robotic player
★ A mahjong tile recognition system
★ Correlation-error neural networks applied to radiometric measurement of vegetation and soil moisture
★ Standing control of a three-link robot
  1. The author has agreed to make this electronic thesis openly available immediately.
  2. The released full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese) 從車輛中的行車紀錄器或是從監視器拍攝到的車牌影像,可能會因為拍攝時距離過遠、沒有對焦、車速太快等因素,導致影像變得模糊,使得一般的車牌辨識系統無法精確辨識車牌。雖然已有不少文獻使用生成對抗網路(GAN)去實踐,並有相關的研究成果,但成效不一,有些重建成功率較低、有些只對於特定的模糊方式解模糊較有效。在研究不同文獻後,發現不同的GAN架構,其重建影像的結果就有很明顯的差異。因此本論文將尋找並修改現有的GAN架構,將不同的生成器架構、判別器架構以及損失函數做配對與組合,並比較不同組合下,其重建模糊影像的效果優劣,以找出效果最好的重建效果的組合。另外,我們也對增加影像重建次數是否會使重建效果變好,感到好奇,並給予實驗與分析。
我們把重建成功的標準分為兩種,一為「車牌完全重建正確」,二為「重建出來可以辨識」。其中第一種為對於人眼能勉強判讀模糊影像中的車牌號碼,重建後影像中的車牌完全正確;第二種為車牌模糊程度高至人眼無法判讀,重建後車牌影像可以辨識以作為參考(不見得重建完全正確)。兩種的評估重建成功的指標均使用SSIM (structural similarity)。本論文最後的結果顯示,不論採用哪一種標準,使用DeblurGAN的重建效果最好,其中生成器使用含有全局跳躍連接的ResNet,判別器使用多尺度的PatchGAN,在不分類的情況下,整體avgSSIM為0.8036。關於影像重建次數的部分,如果為第一種車牌,其重建1次的車牌影像的avgSSIM已經達到0.8536,代表重建1次的車牌影像已經夠清楚且完全正確,不需要進行二次重建,重建2次的avgSSIM因為背景或少許區塊與原始影像更不同,因此下降成0.7647。如果為第二種車牌,重建出來的車牌時常不夠清晰或是不完全正確,如果重建出的車牌不夠清晰,可以將車牌進行多次重建,直到車牌可以辨識以作為參考或是無法變更清楚為止。即使重建1次的avgSSIM比重建2次的高,但重建後的車牌能夠辨識比正確更為重要。因此第二種車牌得根據重建出的車牌的情況來決定最佳的影像重建次數。
Abstract (English) License plate images captured by an in-vehicle dashcam or a surveillance camera may be blurred because the subject is too far away, the camera is out of focus, or the vehicle is moving too fast, so an ordinary license plate recognition system cannot identify the plate accurately. Many studies have applied generative adversarial networks (GANs) to this problem, but their results vary: some achieve low reconstruction success rates, and others deblur well only for specific kinds of blur. A survey of the literature shows that different GAN architectures produce markedly different reconstruction results. This thesis therefore selects and modifies existing GAN architectures, pairing different generators, discriminators, and loss functions, and compares how well each combination reconstructs blurred images, in order to find the combination with the best reconstruction quality. In addition, we experimentally analyze whether reconstructing an image more than once improves the result.
We define two criteria for successful reconstruction: first, "the license plate is reconstructed completely correctly," and second, "the reconstructed license plate is legible." Under the first criterion, the plate number in the blurred image can barely be read by the human eye, and every character in the reconstructed image is correct. Under the second, the plate is too blurred for the human eye to read, and the reconstructed image is legible enough to serve as a reference (though not necessarily completely correct). Both criteria are evaluated with SSIM (structural similarity). The results show that, under either criterion, DeblurGAN gives the best reconstruction, using a ResNet generator with a global skip connection and a multi-scale PatchGAN discriminator; without classifying the test images, its overall avgSSIM is 0.8036. Regarding the number of reconstruction passes: for plates of the first type, a single pass already reaches an avgSSIM of 0.8536, meaning the image is clear and fully correct, so a second pass is unnecessary; a second pass lowers the avgSSIM to 0.7647 because the background and a few blocks diverge further from the original image. For plates of the second type, the reconstruction is often not clear or not fully correct; in that case the image can be reconstructed repeatedly until the plate becomes legible enough to serve as a reference or stops getting clearer. Although one pass yields a higher avgSSIM than two, a legible reconstructed plate matters more than a higher score.
Therefore, for the second type, the optimal number of reconstruction passes must be decided from the condition of the reconstructed plate.
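Reconstruction quality above is measured with SSIM. As a rough illustration of what the metric compares, here is a minimal single-window SSIM in Python. This simplified `avg_ssim` helper is an assumption for illustration only: the standard formulation (e.g., `skimage.metrics.structural_similarity`) averages the same statistic over local windows rather than computing it once globally.

```python
import numpy as np

def avg_ssim(a, b, data_range=255.0):
    """Simplified whole-image SSIM (illustration only).

    Shows the three SSIM terms: luminance (means), contrast (variances),
    and structure (covariance), with the usual stabilizing constants.
    """
    c1 = (0.01 * data_range) ** 2  # stabilizer for the luminance term
    c2 = (0.03 * data_range) ** 2  # stabilizer for the contrast/structure term
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (var_a + var_b + c2))
```

Identical images score exactly 1.0; the thesis's avgSSIM figures (e.g., 0.8036) are averages of the windowed SSIM over a set of reconstructed test images.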
Keywords (Chinese) ★ 深度學習
★ 生成對抗網路
★ 影像處理
Keywords (English) ★ deep learning
★ Generative Adversarial Network
★ image processing
Table of contents
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
  1.1 Research background and motivation
  1.2 Literature review
  1.3 Thesis objectives
  1.4 Thesis organization
Chapter 2  System Architecture, Hardware, and Software
  2.1 Hardware architecture
  2.2 Software
  2.3 Image reconstruction pipeline
Chapter 3  Image Acquisition and Reconstruction Success Metrics
  3.1 Image acquisition
  3.2 Reconstruction success metrics
    3.2.1 PSNR
    3.2.2 SSIM
Chapter 4  Introduction to the Basic GAN
  4.1 Basic architecture
  4.2 Discriminator architectures
  4.3 Generator architectures
    4.3.1 UNet
    4.3.2 The ResNet family
Chapter 5  The Four GANs Used in This Study
  5.1 Pix2pix
  5.2 Pix2pixHD
  5.3 LSGAN
  5.4 DeblurGAN
Chapter 6  Experimental Results and Comparison
  6.1 Test results and comparison with ground-truth images
    6.1.1 Comparison within the Pix2pix family
    6.1.2 Comparison of different GANs
    6.1.3 Comparison of different content losses
    6.1.4 Comparison of the number of reconstruction passes
    6.1.5 Analysis of classified test images
  6.2 Test results and comparison without ground-truth images
  6.3 Summary of experimental results
Chapter 7  Conclusion and Future Work
  7.1 Conclusion
  7.2 Future work
References
References [1] V. Moslemi, “De-blurring methodology of license plate using sparse representation,” in Proc. Int. Conf. Comput. Knowl. Eng. (ICCKE), 2012, pp. 34-38.
[2] A. H. Yu, H. Bai, Q. R. Jiang, Z. H. Zhu, C. G. Huang and B. P. Hou, “Blurred license plate recognition via sparse representations,” in Proc. IEEE Conf. Ind. Electron. Appl., 2014, pp. 1657-1661.
[3] J. Fang, Y. Yuan, W. Ji, P. Tang and Y. Zhao, “Licence plate images deblurring with binarization threshold,” in Proc. IEEE Int. Conf. on Imaging Syst. and Tech. (IST), 2015, pp. 1-6.
[4] Y. Kataoka, T. Matsubara and K. Uehara, “Image generation using generative adversarial networks and attention mechanism,” in Proc. IEEE/ACIS Int. Conf. on Comput. and Inf. Science (ICIS), 2016, pp. 1-6.
[5] D. Pathak, P. Krähenbühl, J. Donahue, T. Darrell and A. A. Efros, “Context Encoders: feature learning by inpainting,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2016, pp. 2536-2544.
[6] C. Yang, X. Lu, Z. Lin, E. Shechtman, O. Wang and H. Li, “High-Resolution image inpainting using multi-scale neural patch synthesis,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 4076-4084.
[7] P. Isola, J. Zhu, T. Zhou and A. A. Efros, “Image-to-Image translation with conditional adversarial networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 5967-5976.
[8] J. Zhu, T. Park, P. Isola and A. A. Efros, “Unpaired image-to-image translation using cycle-consistent adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 2242-2251.
[9] C. Li and M. Wand, “Precomputed real-time texture synthesis with Markovian generative adversarial networks,” in Proc. Eur. Conf. Comput. Vis. (ECCV), 2016.
[10] T. Wang, M. Liu, J. Zhu, A. Tao, J. Kautz and B. Catanzaro, “High-Resolution image synthesis and semantic manipulation with conditional GANs,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 8798-8807.
[11] 程義凱, “Reconstruction and recognition of blurred license plate images based on generative adversarial networks,” Master's thesis, Dept. of Electrical Engineering, National Taiwan Ocean University, 2021.
[12] O. Kupyn, V. Budzan, M. Mykhailych, D. Mishkin and J. Matas, “DeblurGAN: blind motion deblurring using conditional adversarial networks,” in Proc. IEEE/CVF Conf. Comput. Vis. Pattern Recognit., 2018, pp. 8183-8192.
[13] I. Gulrajani, F. Ahmed, M. Arjovsky, V. Dumoulin and A. Courville, “Improved training of wasserstein GANs,” in Proc. Adv. Neural Inf. Process. Syst. (NIPS), 2017.
[14] Z.-M. Chen and L.-W. Chang, “Blind motion deblurring via InceptionResDenseNet by using GAN model,” in Proc. IEEE Int. Conf. Acoust. Speech Signal Process. (ICASSP), 2019, pp. 1463-1467.
[15] J. Fan, L. Wu and C. Wen, “Sharp processing of blur image based on generative adversarial network,” in Int. Conf. Adv. Robot. Mechatron. (ICARM), 2020, pp. 437-441.
[16] L. Zhou, W. Min, D. Lin, Q. Han and R. Liu, “Detecting motion blurred vehicle logo in IoV using filter-DeblurGAN and VL-YOLO,” IEEE Trans. Veh. Technol., vol. 69, no. 4, pp. 3604-3614, 2020.
[17] G. Gong and K. Zhang, “Local blurred natural image restoration based on self-reference deblurring generative adversarial networks,” in Proc. IEEE Int. Conf. Signal Image Process Appl. (ICSIPA), 2019, pp. 231-235.
[18] G. -S. Hsu, J. -C. Chen and Y. -Z. Chung, “Application-Oriented license plate recognition,” IEEE Trans. Veh. Technol., vol. 62, no. 2, pp. 552-561, 2013.
[19] G. -S. Hsu, A. Ambikapathi, S. -L. Chung and C. -P. Su, “Robust license plate detection in the wild,” in Proc. IEEE Int. Conf. on Adv. Video Signal Based Surveill. (AVSS), 2017, pp. 1-6.
[20] Z. Liang, B. Yang and H. Xiao, “Using motion deblurring algorithm to improve vehicle recognition via DeblurGAN,” in Int. Conf. Virtual Real. Intell. Syst. (ICVRIS), 2020, pp. 486-489.
[21] S. Gonwirat and O. Surinta, “DeblurGAN-CNN: effective image denoising and recognition for noisy handwritten characters,” IEEE Access, vol. 10, pp. 90133-90148, 2022.
[22] NVIDIA Tesla V100 SXM2 32 GB Specs – TechPowerUp
https://www.techpowerup.com/gpu-specs/tesla-v100-sxm2-32-gb.c3185
[23] GitHub - junyanz/pytorch-CycleGAN-and-pix2pix: Image-to-Image Translation in PyTorch
https://github.com/junyanz/pytorch-CycleGAN-and-pix2pix
[24] GitHub - NVIDIA/pix2pixHD: Synthesizing and manipulating 2048x1024 images with conditional GANs
https://github.com/NVIDIA/pix2pixHD
[25] GitHub - KupynOrest/DeblurGAN: Image Deblurring using Generative Adversarial Networks
https://github.com/KupynOrest/DeblurGAN
[26] WoWtchout - map-based dashcam video-sharing platform - YouTube
https://www.youtube.com/@WoWtchout
[27] Image blurring - OpenCV tutorial (Python) | STEAM 教育學習網
https://steam.oxxostudio.tw/category/python/ai/opencv-blur.html
[28] OpenCV | Motion Blur in Python – GeeksforGeeks
https://www.geeksforgeeks.org/opencv-motion-blur-in-python/
[29] GitHub - eastmountyxz/ImageProcessing-Python
https://github.com/eastmountyxz/ImageProcessing-Python
[30] Peak signal-to-noise ratio - Wikipedia (Chinese edition)
https://zh.wikipedia.org/zh-tw/%E5%B3%B0%E5%80%BC%E4%BF%A1%E5%99%AA%E6%AF%94
[31] Z. Wang, A. C. Bovik, H. R. Sheikh and E. P. Simoncelli, “Image quality assessment: from error visibility to structural similarity,” IEEE Trans. Image Process., vol. 13, no. 4, pp. 600-612, 2004.
[32] Y. Jia, R. Song, S. Chen, G. Wang and C. Yan, “Preliminary results of multipath ghost suppression based on generative adversarial nets in TWRI,” in Proc. IEEE Int. Conf. Signal Image Process. (ICSIP), 2019, pp. 208-212.
[33] controlling patch size · Issue #11 · yenchenlin/pix2pix-tensorflow · GitHub
https://github.com/yenchenlin/pix2pix-tensorflow/issues/11
[34] O. Ronneberger, P. Fischer and T. Brox, “U-Net: convolutional networks for biomedical image segmentation,” in Proc. Med. Image Comput. Comput.-Assist. Interv. (MICCAI), 2015, pp. 234-241.
[35] S. Xie, R. Girshick, P. Dollár, Z. Tu and K. He, “Aggregated residual transformations for deep neural networks,” in Proc. IEEE Conf. Comput. Vis. Pattern Recognit. (CVPR), 2017, pp. 5987-5995.
[36] X. Mao, Q. Li, H. Xie, R. Y. K. Lau, Z. Wang and S. P. Smolley, “Least squares generative adversarial networks,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 2813-2821.
Advisor Wen-June Wang (王文俊)   Review date 2023-06-28
