

    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93521


    Title: Routing Demand Prediction Using Placement Features with Extreme Gradient Boosting
    Author: Huang, Shih-Cheng (黃仕成)
    Contributor: Department of Electrical Engineering
    Keywords: extreme gradient boosting
    Date: 2023-10-30
    Upload time: 2024-09-19 17:10:16 (UTC+8)
    Publisher: National Central University
    Abstract: With the advancement of modern semiconductor manufacturing processes, more transistors can be accommodated in the same area, and the complexity of integrated circuits (ICs) has increased accordingly, making IC design notably time-consuming. Within the physical design flow, routing is one of the most time-consuming steps, often taking several hours or even days. In some cases, issues such as routing congestion or design rule check (DRC) violations arise; when they do, the design must return to earlier stages of the flow to fix the problems and then be routed again. Such iterations may repeat several times during the chip design cycle and delay tapeout. Many studies therefore use machine learning algorithms to build models that predict these issues several stages in advance, for example predicting which regions will be congested or will violate design rules after routing, so that engineers can fix problems early and reduce repeated runs of the routing flow. Most of this work uses neural network models, but training a neural network requires substantial computation time and, to avoid overfitting, a large amount of training data; because the chip design flow itself is slow, collecting enough training data is very costly. This thesis instead uses an ensemble learning algorithm, Extreme Gradient Boosting (XGBoost). XGBoost is highly efficient and, compared with traditional gradient boosting, adds a regularization term to its objective function, which helps prevent overfitting and lets the model predict routing results accurately from limited training data. In addition, XGBoost is built on decision trees and therefore offers better interpretability than neural networks. Leveraging these advantages, this thesis analyzes the importance of the input features and filters out redundant ones to reduce training time and improve inference efficiency. For model training, we extract circuit features after placement and use the routing demand after global routing as the prediction target.
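    The workflow summarized in the abstract can be sketched with the xgboost Python package. The snippet below is a minimal illustrative sketch, not the implementation from the thesis: the placeholder data, feature count, hyperparameters (including the regularization terms reg_lambda and reg_alpha), and the importance threshold are all assumptions made for this example.

        # Illustrative sketch (assumed setup): train an XGBoost regressor on
        # placement-stage features to predict per-region routing demand after
        # global routing, then drop low-importance features and retrain.
        import numpy as np
        import xgboost as xgb
        from sklearn.model_selection import train_test_split

        # Placeholder data: rows = chip regions, columns = placement features
        # (e.g., pin density, cell density); values here are random stand-ins.
        rng = np.random.default_rng(0)
        X = rng.random((2000, 12))
        y = rng.random(2000)  # routing demand reported by the global router

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

        model = xgb.XGBRegressor(
            n_estimators=300,
            max_depth=6,
            learning_rate=0.1,
            reg_lambda=1.0,  # L2 regularization term in the XGBoost objective
            reg_alpha=0.0,   # optional L1 term
        )
        model.fit(X_tr, y_tr)

        # Rank features by importance and keep only the informative ones,
        # mirroring the feature-selection step described in the abstract.
        importances = model.feature_importances_
        keep = importances > 0.1 * importances.mean()  # illustrative threshold
        reduced = xgb.XGBRegressor(n_estimators=300, max_depth=6,
                                   learning_rate=0.1, reg_lambda=1.0)
        reduced.fit(X_tr[:, keep], y_tr)
        print("kept features:", int(keep.sum()),
              "test R^2:", reduced.score(X_te[:, keep], y_te))

    Because a gradient-boosted tree model trains quickly and exposes per-feature importance, this kind of prune-and-retrain loop is cheap compared with retraining a neural network, which is consistent with the motivation given in the abstract.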
    Appears in Collections: [Graduate Institute of Electrical Engineering] Doctoral and Master's Theses

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      12


    All items in NCUIR are protected by copyright, with all rights reserved.

