

    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/99474


    Title: Development of an RDL Semantic Segmentation System Based on Fluorescence Microscopy Images (基於螢光顯微影像之RDL語意分割系統開發)
    Author: Chu, Chun-Yen (朱俊諺)
    Contributors: Department of Mechanical Engineering
    Keywords: RDL; Fluorescence Microscopy; Semantic Segmentation
    Date: 2026-01-27
    Upload Time: 2026-03-06 19:05:11 (UTC+8)
    Publisher: National Central University
    Abstract: This study develops an RDL (Redistribution Layer) semantic segmentation system based on fluorescence microscopy images to address a key challenge in Automated Optical Inspection (AOI) of the RDL process: the high transparency of the Polyimide (PI) dielectric layer degrades conventional bright-field inspection. By integrating a fluorescence imaging system with 405 nm excitation and deep-learning semantic segmentation, the proposed solution achieves precise pixel-level segmentation of RDL circuits.
    Comparisons of bright-field and fluorescence images of the same field of view, combined with contrast and gray-level distribution analysis, confirm that fluorescence imaging effectively suppresses background texture interference and enhances circuit boundaries. Leveraging this optical advantage, the study establishes an automated annotation strategy in which high-contrast fluorescence images are used to generate ground-truth masks directly. This approach replaces labor-intensive and subjective manual annotation, ensuring high consistency and objectivity in the training data. For model training, a cropping strategy was applied to handle the high-resolution images (2448×2048), and five-fold cross-validation was performed strictly at the level of original field-of-view image pairs to prevent data leakage from overlapping crops and ensure fair evaluation.
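    The thesis does not publish its annotation or data-split code; the two ideas above can be sketched in a few lines. The sketch below assumes Otsu thresholding for turning a high-contrast fluorescence image into a binary circuit mask (the study's exact binarization rule is not stated here), and assumes crop filenames of the hypothetical form "fov03_crop12" so that every crop can be traced back to its parent field of view when assigning cross-validation folds.

    ```python
    import numpy as np

    def otsu_threshold(gray):
        """Return the Otsu threshold of an 8-bit grayscale image by
        maximizing between-class variance over all candidate thresholds."""
        hist = np.bincount(gray.ravel(), minlength=256).astype(np.float64)
        total = hist.sum()
        cum_w = np.cumsum(hist)                   # pixel count at or below t
        cum_m = np.cumsum(hist * np.arange(256))  # intensity mass at or below t
        best_t, best_var = 0, -1.0
        for t in range(256):
            w0, w1 = cum_w[t], total - cum_w[t]
            if w0 == 0 or w1 == 0:
                continue
            m0 = cum_m[t] / w0
            m1 = (cum_m[-1] - cum_m[t]) / w1
            var = w0 * w1 * (m0 - m1) ** 2
            if var > best_var:
                best_var, best_t = var, t
        return best_t

    def make_mask(fluor_img):
        """Binarize a high-contrast fluorescence image into a circuit mask
        (1 = circuit, 0 = background) without manual labeling."""
        return (fluor_img > otsu_threshold(fluor_img)).astype(np.uint8)

    def grouped_five_folds(crop_names):
        """Assign each crop to one of 5 folds by its parent FOV id.

        All crops cut from the same field of view land in the same fold,
        so overlapping crops can never straddle the train/validation split."""
        fovs = sorted({name.split("_")[0] for name in crop_names})
        fold_of_fov = {fov: i % 5 for i, fov in enumerate(fovs)}
        return {name: fold_of_fov[name.split("_")[0]] for name in crop_names}
    ```

    Grouping by field of view rather than by crop is the essential leakage guard: two crops that share pixels must be evaluated together or not at all.
    
    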
    The study evaluated three semantic segmentation models: the encoder-decoder U-Net, the real-time-oriented PIDNet, and the Transformer-based SegFormer. Weighing accuracy against efficiency, PIDNet was selected as the final deployment model. On an independent test set, PIDNet achieved an mIoU of 96.24%, a Precision of 98.12%, and a Recall of 98.05%, combining low false-alarm and low miss rates. With an inference speed of 21.36 FPS on full 2448×2048 images, the system satisfies the requirements for real-time production-line inspection.
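    The reported figures can be reproduced from a pixel-level confusion matrix. A minimal numpy sketch, assuming the usual two-class convention in which mIoU averages the foreground and background IoU (the thesis's exact metric code is not given here):

    ```python
    import numpy as np

    def binary_seg_metrics(pred, gt):
        """Compute (mIoU, precision, recall) for binary segmentation masks.

        pred, gt: arrays of the same shape, 1 = circuit, 0 = background.
        mIoU averages the IoU of the foreground and background classes."""
        pred = pred.astype(bool)
        gt = gt.astype(bool)
        tp = np.logical_and(pred, gt).sum()    # circuit pixels correctly found
        fp = np.logical_and(pred, ~gt).sum()   # background flagged as circuit
        fn = np.logical_and(~pred, gt).sum()   # circuit pixels missed
        tn = np.logical_and(~pred, ~gt).sum()  # background correctly rejected
        iou_fg = tp / (tp + fp + fn)
        iou_bg = tn / (tn + fn + fp)
        miou = (iou_fg + iou_bg) / 2
        precision = tp / (tp + fp)             # low FP -> few false alarms
        recall = tp / (tp + fn)                # low FN -> few missed defects
        return miou, precision, recall
    ```

    High precision corresponds to the "low false-judgment" property and high recall to the "low miss" property cited above.
    
    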
    Appears in Collections: [Graduate Institute of Mechanical Engineering] Electronic Theses & Dissertations

    Files in This Item:

    File: index.html (0 Kb, HTML, 22 views)


    All items in NCUIR are protected by the original copyright.

