Abstract: Polyester film is a synthetic plastic used in the outer packaging of many products, such as equipment, food, freight, pharmaceutical, and electronic-component packaging. Beyond its diverse applications, polyester film must also meet quality requirements. Traditionally, the film is inspected by human eyes during production, but because the production line moves continuously, manual inspection of a moving process introduces many sources of uncertainty, such as eye fatigue, inexperienced new inspectors, and the difficulty of judging defect size on a moving film. In this thesis we propose a defect classification system for polyester film based on convolutional neural networks (CNNs). The system first trains the original network models, evaluates them, and then improves the CNN, for example by reducing the model, expanding the model, increasing the number of training epochs, and performing error analysis. SinGAN is then used to generate additional samples, compensating for the data imbalance caused by classes with too few samples, so that the system achieves better precision, recall, and accuracy in classifying polyester-film defects such as bubble, hole, crystal, mosquito, carbon, spray, lack (thin spots), and hoard (thick buildup). The system can therefore be used for product inspection in place of manual visual checks, maintaining product quality and stability.
This thesis is divided into five parts. The first part trains three original network models, VGG16, GoogLeNet, and DenseNet; the best-performing of the three proceeds to the next step. The second part reduces the model: constrained variants based on DenseNet are trained in order to understand in detail whether low-level features, high-level features, or model depth is the key factor affecting classification accuracy for polyester-film defects; the result indicates the optimal direction for modification. The third part expands the model, deepening the layers of the convolutional neural network to improve classification results. The fourth part uses SinGAN to generate defect samples, so that the training model does not overfit when the number of defect samples is too small, and high accuracy is maintained. The fifth part increases the number of training epochs and performs error analysis, improving the stability of the model and checking whether the original defect samples contain manual labeling errors, so as to improve the precision, recall, and accuracy of classification.
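The fourth part relies on identifying which defect classes are too small to train on reliably. A minimal sketch of such a check, in pure Python, using the per-class image counts reported in the experiments below (the 25%-of-mean cutoff is illustrative only and not a threshold from the thesis):

```python
# Per-class defect image counts from the collected dataset.
counts = {
    "carbon": 581, "bubble": 428, "mosquito": 32, "spray": 234,
    "lack": 98, "hole": 28, "hoard": 430, "crystal": 460,
}

# Flag classes whose sample count falls far below the mean;
# these are candidates for SinGAN-based sample generation.
mean = sum(counts.values()) / len(counts)      # 2291 / 8 = 286.375
threshold = 0.25 * mean                        # illustrative cutoff, not from the thesis
needs_augmentation = sorted(c for c, n in counts.items() if n < threshold)
print(needs_augmentation)  # → ['hole', 'mosquito']
```

Under this illustrative cutoff both mosquito (32 images) and hole (28 images) are flagged; in the experiments below, mosquito is the class augmented with SinGAN.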
In the experiments, we collected images of defects produced during polyester-film production. There are eight defect types: carbon (581 images), bubble (428), mosquito (32), spray (234), lack (98), hole (28), hoard (430), and crystal (460), for a total of 2,291 images at a resolution of 224×224. Without SinGAN-generated defect samples, after data augmentation the training set contains 6,414 images and the test set 2,750, for a total of 9,164; the accuracy of VGG16 is 88.84%, GoogLeNet 69.93%, and DenseNet 91.38%. When SinGAN is used to generate defect samples, the mosquito class is augmented to 266 images, giving 7,070 training images and 3,030 test images, 10,100 in total. With SinGAN, the accuracy rises to 96.57%, an increase of 5.19%. The improved DenseNet reaches 100% accuracy, 8.62% higher than the original DenseNet. The execution speed is 275 frames per second, and the model size is 45.5 MB.
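The precision, recall, and accuracy figures reported above are the standard confusion-matrix metrics. A self-contained sketch of how they are computed, using a small hypothetical two-class confusion matrix (the numbers are illustrative, not results from the thesis):

```python
# Hypothetical confusion matrix for two defect classes, e.g. ["bubble", "hole"]:
# rows = true class, columns = predicted class.
cm = [[40, 3],
      [2, 25]]

def per_class_metrics(cm, k):
    """Precision and recall for class k from a square confusion matrix."""
    tp = cm[k][k]
    fp = sum(cm[r][k] for r in range(len(cm))) - tp   # predicted k, but wrong
    fn = sum(cm[k]) - tp                              # true k, but missed
    return tp / (tp + fp), tp / (tp + fn)

# Overall accuracy: correct predictions over all predictions.
accuracy = sum(cm[k][k] for k in range(len(cm))) / sum(map(sum, cm))

p0, r0 = per_class_metrics(cm, 0)
print(round(accuracy, 4), round(p0, 4), round(r0, 4))  # → 0.9286 0.9524 0.9302
```

For the eight-class problem in this thesis the same formulas apply per class over an 8×8 confusion matrix, with accuracy taken over the whole test set.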