Abstract: 宏遠 is a textile manufacturer that uses AOI to detect defects on fabric. However, the current AOI system cannot solve their problem effectively, mainly because "defect" can be subdivided into many categories. The AOI system uses three reflected-light cameras and three transmitted-light cameras and, following its own rules, captures many defect categories such as folds, broken threads, stains, thread-off, and white spots. 宏遠, however, defines defect categories differently from the AOI rules; the defects they care about include holes and warp breaks. Because the two definitions differ, up to 90% of the defect images detected by AOI are false alarms, so 宏遠's professional inspectors must reclassify the flagged images into defective and non-defective. Our goal is to build a deep-learning model that reduces the labor inspectors spend on this reclassification. This work uses an autoencoder to reconstruct the dataset and a CNN to perform the classification. Experimental results show that our model can reduce inspector labor by more than 80% while keeping the FNR below 5% and the FPR below 15%.

The textile company applied AOI to detect defects on fabric automatically. In their case, AOI failed to solve the problem because the defect categories differ. AOI defines defect categories including fold, black thread, broken thread, hooked thread, needle mark, mosquito, stain, thread-off, white spot, and uneven thickness, whereas the textile company uses different categories, including thinning, missing weft, stop mark, weft mark, loose weft, warp break, mechanical section, and fold back. Because of these differences, more than 90% of the captured images contain no real defect (overkill). A professional inspector at the textile company must reclassify these overkill images into real-defect and non-defect images. Our goal in this work is to reduce the professional inspector's workload through two proposed approaches. The first approach builds a tiny, lightweight CNN architecture as the classifier model. The second approach combines an autoencoder, which reconstructs the dataset, with the CNN model built in the first approach. The results show that the models can reduce the professional inspector's workload by up to 90% with a maximum FNR of 5% and an FPR of less than 5%.
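The sketch below illustrates the two-stage pipeline described in the abstract: an autoencoder reconstructs the AOI-captured fabric images, and a small CNN then classifies the reconstructions as defect or non-defect. This is a minimal illustration only; the image size, filter counts, layer depths, and training settings are assumptions for demonstration, not the authors' actual architecture.

```python
# Hypothetical sketch of the autoencoder + tiny-CNN pipeline from the abstract.
# All hyperparameters (input size, filters, epochs) are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SHAPE = (128, 128, 1)  # assumed grayscale AOI crops


def build_autoencoder():
    """Convolutional autoencoder that reconstructs the input image."""
    inp = layers.Input(shape=IMG_SHAPE)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(inp)
    x = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(x)
    encoded = layers.MaxPooling2D(2)(x)
    x = layers.Conv2D(8, 3, activation="relu", padding="same")(encoded)
    x = layers.UpSampling2D(2)(x)
    x = layers.Conv2D(16, 3, activation="relu", padding="same")(x)
    x = layers.UpSampling2D(2)(x)
    decoded = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
    return models.Model(inp, decoded, name="autoencoder")


def build_cnn_classifier():
    """A tiny, lightweight CNN for binary defect / non-defect classification."""
    return models.Sequential([
        layers.Input(shape=IMG_SHAPE),
        layers.Conv2D(16, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(2),
        layers.Flatten(),
        layers.Dense(32, activation="relu"),
        layers.Dense(1, activation="sigmoid"),  # defect vs. non-defect
    ], name="cnn_classifier")


if __name__ == "__main__":
    # Stage 1: train the autoencoder to reconstruct the images (x_train only).
    # autoencoder.fit(x_train, x_train, epochs=50, batch_size=32)
    autoencoder = build_autoencoder()
    autoencoder.compile(optimizer="adam", loss="mse")

    # Stage 2: feed the reconstructed images to the CNN classifier.
    # x_train_rec = autoencoder.predict(x_train)
    # classifier.fit(x_train_rec, y_train, epochs=50, batch_size=32)
    classifier = build_cnn_classifier()
    classifier.compile(optimizer="adam", loss="binary_crossentropy",
                       metrics=["accuracy"])

    autoencoder.summary()
    classifier.summary()
```

In this sketch the autoencoder acts as a preprocessing step that smooths noise in the captured images before classification, which is the role the abstract attributes to the second approach; the first approach would use the tiny CNN on the raw images directly.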