Name |
陳皇翰(Huang-Han Chen)
Department |
Computer Science and Information Engineering |
Thesis Title |
Split-Generate-Stitch Generation Method for Augmentation Based on Scratch Defect Data (基於刮痕瑕疵資料擴增的分割拼接影像生成) |
Files |
Full text: permanently restricted (never released)
|
Abstract (Chinese) |
On a production line, defect detection and screening is one of the key processes for improving product quality. Traditional inspection relies mainly on human labor; besides its high cost, inspection quality is affected by external factors such as the inspectors' physical condition and the working environment. Using computer vision for defect detection has therefore become common practice in recent years.
Using computer vision to decide whether a part is defective requires a sufficient amount of data to keep accuracy at an acceptable level. Defect data, however, is harder to collect than good-product data, which makes data augmentation indispensable. Common augmentation methods (such as shifting, flipping, and rotation) can produce images that do not match real conditions, and because the resulting defect patterns stay close to the originals, they contribute little to the diversity of augmented features. This study therefore proposes a data augmentation method that generates diverse defect data, converting a limited number of real images into defect images with new appearances. In addition, to keep the generated defects under human control, the generation system incorporates domain knowledge through semantic-segmentation label images. Traditional semantic-segmentation generative models, however, are limited by small datasets and yield unsatisfactory results; inspired by the literature, this study therefore generates defects with a split-and-stitch approach to produce higher-quality augmented images. |
Abstract (English) |
In a production line, defect detection and screening is one of the important processes for improving product quality. The traditional defect inspection method is mainly labor-based; in addition to its high cost, the inspection result is affected by external factors such as the physical condition of the personnel and the environment. The use of computer vision for defect detection has therefore become a common practice in recent years.
A necessary condition for using computer vision to judge defects is a sufficient amount of data to keep accuracy at an acceptable level. However, defect data is more difficult to collect than good-product data, so data augmentation methods are indispensable. Common augmentation methods (such as shifting, flipping, and rotation) may produce images that do not conform to real conditions, and because the resulting defect patterns remain close to the originals, they add limited diversity to the augmented features. Therefore, this study proposes a data augmentation method that generates diverse defect data, converting a limited number of real images into defect images with new appearances to make up for the shortage of defect data. In addition, to keep the generated results under human control, the generation system incorporates domain knowledge through semantic-segmentation label images. However, traditional semantic-segmentation generative models are limited by small datasets and give unsatisfactory results; inspired by the literature, this study therefore generates defects by splitting and stitching to produce higher-quality augmented images. |
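The abstract contrasts classic geometric augmentation with the proposed split-generate-stitch idea. As a rough illustration only, not the thesis's actual implementation, the NumPy sketch below shows the conventional operations and a hypothetical split-generate-stitch skeleton; the names `classic_augment`, `split_generate_stitch`, and the stand-in `generate` callback are all assumptions for this sketch, with `generate` standing in for the thesis's segmentation-guided generator.

```python
import numpy as np

def classic_augment(img):
    """Classic geometric augmentations (shift, flip, rotate): cheap,
    but every output keeps the original defect's appearance."""
    flipped = np.fliplr(img)                 # horizontal flip
    rotated = np.rot90(img)                  # 90-degree rotation
    shifted = np.roll(img, shift=5, axis=1)  # 5-pixel horizontal shift
    return [flipped, rotated, shifted]

def split_generate_stitch(img, mask, generate, grid=(2, 2)):
    """Hypothetical sketch of the split-generate-stitch idea: split the
    image and its segmentation mask into tiles, run a generator only on
    tiles that contain a labeled defect, then stitch the tiles back."""
    h, w = img.shape[:2]
    th, tw = h // grid[0], w // grid[1]
    out = img.copy()
    for r in range(grid[0]):
        for c in range(grid[1]):
            ys = slice(r * th, (r + 1) * th)
            xs = slice(c * tw, (c + 1) * tw)
            if mask[ys, xs].any():           # tile contains a labeled defect
                out[ys, xs] = generate(img[ys, xs], mask[ys, xs])
            # defect-free tiles are copied through unchanged
    return out

# Usage with a stand-in "generator" that merely perturbs intensity:
rng = np.random.default_rng(0)
img = rng.integers(0, 255, (64, 64), dtype=np.uint8)
mask = np.zeros((64, 64), dtype=bool)
mask[10:20, 10:40] = True                    # fake scratch label
fake_gen = lambda tile, m: np.clip(tile.astype(int) + 10, 0, 255).astype(np.uint8)
aug = split_generate_stitch(img, mask, fake_gen)
```

Copying defect-free tiles through unchanged, rather than regenerating the whole image, mirrors the partial-generation idea the thesis compares against full generation in Experiment 2.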
Keywords (Chinese) |
★ Defect Inspection ★ Image Augmentation ★ Semantic Segmentation ★ Computer Vision |
Keywords (English) |
★ Defect Inspection ★ Data Augmentation ★ Semantic Segmentation ★ Computer Vision |
Table of Contents |
Abstract (Chinese)
Abstract (English)
Acknowledgments
Table of Contents
List of Figures
List of Tables
1. Introduction
1-1 Research Background
1-2 Motivation and Objectives
1-3 Contributions
1-4 Thesis Organization
2. Related Work
2-1 Basic Deep-Learning Image Generation Models
2-2 Semantic Segmentation and Hierarchical Image Generation
2-3 U-Net Semantic Segmentation
3. Proposed Solution
3-1 Data Preprocessing
3-2 Synthetic Image Composition
3-2.1 Image Guidance Generator
3-2.2 Basic-Unit Image Generator
3-3 Synthetic Image Refinement
4. Experiments and Discussion
4-1 Evaluation Methods
4-2 Semantic Segmentation Model
4-3 Experimental Datasets
4-4 Experiment 1: Dark-Scratch Augmentation
4-4.1 Motivation and Objectives
4-4.2 Method
4-4.3 Results
4-5 Experiment 2: Full Generation vs. Partial Generation
4-5.1 Motivation and Objectives
4-5.2 Method
4-5.3 Results
4-6 Experiment 3: Light-Scratch Generation Substitution
4-6.1 Motivation and Objectives
4-6.2 Method
4-6.3 Results
5. Conclusion and Future Work
5-1 Summary
5-2 Future Work
References |
Advisor |
De-Ron Liang (梁德容)
|
Approval Date |
2022-09-22 |