This study develops a semantic segmentation system for RDL (Redistribution Layer) circuits based on fluorescence microscopy images, addressing a key limitation of AOI (Automated Optical Inspection) in the RDL process: the high transparency of the polyimide (PI) dielectric layer makes conventional bright-field inspection unreliable. To overcome this bottleneck and improve inspection reliability, the study integrates a fluorescence microscopy system with 405 nm excitation and deep-learning-based semantic segmentation to achieve precise pixel-level segmentation of RDL circuits.

For imaging validation, bright-field and fluorescence images of the same field of view were compared using contrast and gray-level distribution analysis, confirming that fluorescence imaging effectively suppresses background-texture interference and enhances circuit boundaries. Building on this optical advantage, the study proposes an annotation-free dataset construction strategy in which high-contrast fluorescence images are used to automatically generate ground-truth circuit masks, replacing the time-consuming and subjective manual annotation process and ensuring highly consistent, objective training data.

For model training, given the high resolution of the original images (2448×2048), the dataset was built by cropping, and five-fold cross-validation was performed strictly at the level of the original field-of-view image pairs to prevent data leakage from overlapping crops and to ensure fair performance evaluation.

Three semantic segmentation models were evaluated: the encoder-decoder U-Net, the real-time-oriented PIDNet, and the Transformer-based SegFormer. Weighing accuracy against efficiency, PIDNet was selected as the final deployment model.
Experimental results on an independent test set show that PIDNet achieved an mIoU of 96.24%, a Precision of 98.12%, and a Recall of 98.05%, combining low false-detection and low miss rates. With an inference speed of 21.36 FPS on full-resolution 2448×2048 images, the system meets the requirements of real-time production-line inspection.
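The annotation-free mask generation relies on the high contrast of the fluorescence images. A minimal sketch of how such masks could be derived is given below; Otsu thresholding and the function names are illustrative assumptions, as the abstract does not specify the exact binarization method.

```python
import numpy as np

def otsu_threshold(gray: np.ndarray) -> int:
    """Return the Otsu threshold (0-255) for an 8-bit grayscale image."""
    hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
    total = hist.sum()
    w0 = np.cumsum(hist) / total                       # background weight up to t
    mu = np.cumsum(hist * np.arange(256)) / total      # cumulative mean mass
    mu_t = mu[-1]                                      # global mean
    w1 = 1.0 - w0
    # Between-class variance; invalid entries (empty classes) are zeroed out.
    with np.errstate(divide="ignore", invalid="ignore"):
        between = (mu_t * w0 - mu) ** 2 / (w0 * w1)
    between[~np.isfinite(between)] = 0.0
    return int(np.argmax(between))

def fluorescence_to_mask(gray: np.ndarray) -> np.ndarray:
    """Binarize a high-contrast fluorescence image into a circuit mask (1 = RDL trace)."""
    t = otsu_threshold(gray)
    return (gray > t).astype(np.uint8)
```

Because the fluorescent traces are far brighter than the suppressed background, a global threshold of this kind separates the two modes of the gray-level histogram without manual tuning.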
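The leakage-safe cross-validation can be illustrated by grouping crops by their source field of view, so that no FOV contributes crops to both the training and validation sides of a fold. This is a minimal sketch assuming each crop carries a FOV identifier; a library utility such as scikit-learn's `GroupKFold` would serve the same purpose.

```python
import numpy as np

def group_five_fold(crop_groups, n_splits=5, seed=0):
    """Yield (train_idx, val_idx) index arrays such that all crops from the
    same original field of view land in the same fold, preventing leakage
    caused by overlapping crops of one image appearing on both sides."""
    groups = np.asarray(crop_groups)
    unique = np.unique(groups)
    rng = np.random.default_rng(seed)
    rng.shuffle(unique)
    for fold_groups in np.array_split(unique, n_splits):
        val = np.isin(groups, fold_groups)
        yield np.where(~val)[0], np.where(val)[0]
```

Splitting at the crop level instead would let near-identical overlapping patches of one FOV appear in both sets and inflate the validation scores.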
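The reported metrics follow from per-pixel confusion counts. The sketch below shows the standard definitions assumed here: mIoU averages the IoU of the circuit (foreground) and background classes, while Precision and Recall are reported for the circuit class; function and key names are illustrative.

```python
import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Compute binary-segmentation metrics from 0/1 prediction and ground-truth
    masks. Assumes both classes are present (no zero-division guard)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    iou_fg = tp / (tp + fp + fn)   # circuit-class IoU
    iou_bg = tn / (tn + fp + fn)   # background-class IoU
    return {
        "mIoU": (iou_fg + iou_bg) / 2,
        "precision": tp / (tp + fp),   # low value => many false detections
        "recall": tp / (tp + fn),      # low value => many missed pixels
    }
```

High Precision and Recall together correspond to the low false-detection and low miss rates highlighted in the results.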