Thesis 105521040 — Detailed Record
Name: Ming-Jie Liu (劉銘傑)   Department: Department of Electrical Engineering
Title: Expandable Neuron Cell with Current-steering Digital-to-analog Converter for Deep-learning Convolutional Neural Network
(Original title: 以Current-steering DAC實現CNN Deep-learning可擴展之神經細胞元)
Files: full text available in the repository system after 2025-9-1
Abstract (Chinese, translated): With the maturing of Internet of Things (IoT) and fifth-generation (5G) communication technologies and the advance of sensor technology, artificial intelligence has again become a hot topic and is increasingly applied in fields such as automated factories, business data analytics, and autonomous driving. AI development is currently at the stage of mathematical-model and algorithm design, with computation carried out on high-end computers or GPU architectures. As the number of deployed sensors grows and deep neural network architectures are adopted, computational complexity rises exponentially, and power consumption rises with it. Compared with the biological neurons of the brain, computer-based computation occupies far more volume and consumes far more power per unit, so much research is moving toward neuron architectures built from analog circuits.
This thesis implements an expandable analog neuron based on the behavior of biological neurons, realized with a digital-to-analog converter (DAC) and a pooling circuit, and verifies its function through a convolutional-neural-network handwritten-digit recognition application. A current-reference circuit converts the input voltage signal into a current, and a current-steering DAC provides the neural-network weighting. Following Q = CV = It, charge is accumulated on a load capacitor, which serves as the accumulator for the convolution operation; a comparator circuit then performs max pooling to extract the features of the input handwritten digits.
The circuit is fabricated in the TSMC 0.18 μm CMOS 1P6M process with a chip area of about 0.4255 mm² (including I/O pads). The supply voltage is 1.8 V, the overall power consumption when processing a 6×10 handwritten-digit input is 670.23 μW (including buffers), and the maximum operating frequency is 4 MHz. The LSB is 103.91 nA for a 0.5 V input and 198.75 nA for a 1 V input, the INL and DNL are both well below 0.5 LSB, and the DAC dynamic range is about 86.35 dB.
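The linearity figures quoted above (LSB of 103.91 nA, INL and DNL below 0.5 LSB) can be illustrated with a short sketch of how DNL and INL are conventionally derived from a DAC's output-current transfer curve. This is a generic endpoint-fit calculation, not the thesis's measurement setup; the 6-bit resolution and the synthetic noise profile are assumptions for illustration.

```python
import numpy as np

def dnl_inl(currents_nA):
    """DNL/INL in LSB units from a monotonic array of DAC output currents."""
    steps = np.diff(currents_nA)                 # actual step between adjacent codes
    lsb = (currents_nA[-1] - currents_nA[0]) / (len(currents_nA) - 1)
    dnl = steps / lsb - 1.0                      # deviation of each step from 1 LSB
    inl = np.cumsum(np.insert(dnl, 0, 0.0))      # running sum of DNL gives INL
    return dnl, inl

bits = 6                                         # assumed resolution
codes = np.arange(2 ** bits)
ideal = codes * 103.91                           # ideal ramp, 103.91 nA per LSB
# synthetic mismatch errors stand in for measured data
measured = ideal + np.random.default_rng(0).normal(0.0, 5.0, ideal.shape)
measured[0], measured[-1] = ideal[0], ideal[-1]  # endpoint-fit convention

dnl, inl = dnl_inl(measured)
print(f"max |DNL| = {np.max(np.abs(dnl)):.3f} LSB, "
      f"max |INL| = {np.max(np.abs(inl)):.3f} LSB")
```

With mismatch errors this small relative to the LSB, both metrics stay well under 0.5 LSB, which is the usual criterion for a DAC with no missing codes.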
Abstract (English): With the evolution of the Internet of Things (IoT), 5th-generation wireless systems, and CMOS sensors, artificial intelligence (AI) has again become a hot topic; it is widely used in applications such as automated factories, business data analysis, and autonomous driving. The development of AI is at the stage of mathematical-model and algorithm design. The complexity of the calculations increases exponentially, as does the power consumption, due to the increase in the number of sensors used and the hidden layers of deep neural network architectures. High-spec computers or GPUs are usually employed to meet the requirements of such large-scale calculations. Compared with the biological neurons of the brain, the volume and unit power consumption of computer calculation are quite large, so many studies are moving toward neuron architectures based on analog circuit design.

This thesis presents the design of an expandable analog neuron cell based on the behavior of biological neurons. The proposed neuron cell is composed of a current-steering digital-to-analog converter (DAC) and a pooling circuit. The current-reference circuit of the current-steering DAC converts the input voltage signal into a current, and the DAC applies the neural-network weighting to that current. Following Q = CV = It, the weighted currents charge a load capacitor, which acts as an accumulator to perform the convolution operation; finally, a comparator circuit performs the max-pooling operation. In this thesis, the proposed neuron cell is used within a convolutional neural network to extract the features of input handwritten digits and recognize them, demonstrating the function of the circuit.

This circuit is designed in the TSMC 0.18 μm CMOS 1P6M process, and the chip area is 0.4255 mm² (including I/O pads). The power consumption is 670.23 μW at a 1.8 V supply voltage when processing a 6×10 handwritten-digit input, and the maximum operating frequency is 4 MHz. The LSB is 103.91 nA for a 0.5 V input and 198.75 nA for a 1 V input, the INL and DNL are both much smaller than 0.5 LSB, and the dynamic range is 86.35 dB.
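The signal chain described in the abstract — input voltage to reference current, current-steering DAC as the weight, charge accumulation on a load capacitor via Q = CV = It, and a comparator for max pooling — can be sketched as a minimal behavioral model. This is an idealized sketch, not the taped-out circuit: the load capacitance, charging window, current-per-volt slope, and the 3-element kernel are all illustrative assumptions.

```python
# Behavioral model of the proposed analog neuron cell (idealized sketch).
C_LOAD = 1e-12      # assumed 1 pF load capacitor
T_CHARGE = 1e-6     # assumed charging window per weighted input

def dac_current(v_in, weight_code, amps_per_volt_lsb=207.82e-9):
    """Current-steering DAC: input voltage -> reference current, scaled by
    the digital weight code (slope chosen to match ~103.91 nA at 0.5 V)."""
    return v_in * amps_per_volt_lsb * weight_code

def convolve_window(pixels, weights):
    """Accumulate charge Q = sum(I_i * t) on the load capacitor; the
    capacitor voltage V = Q / C is the convolution result."""
    q = sum(dac_current(v, w) * T_CHARGE for v, w in zip(pixels, weights))
    return q / C_LOAD

def max_pool(voltages):
    """Comparator-based max pooling over a window of capacitor voltages."""
    return max(voltages)

window = [convolve_window([0.5, 1.0, 0.5], [3, 1, 2]),
          convolve_window([1.0, 1.0, 0.0], [3, 1, 2])]
print(f"pooled output = {max_pool(window):.4f} V")
```

The choice of C and t only scales the output voltage; in the real circuit these set the usable dynamic range of the accumulator before the comparator stage.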
Keywords (Chinese, translated) ★ Convolutional Neural Network
★ Artificial Intelligence
★ Neuron Cell
★ Deep Learning
★ Digital-to-Analog Converter
Keywords (English) ★ CNN
★ AI
★ Neuron Cell
★ Deep-learning
★ Current-steering DAC
Table of Contents
Abstract (Chinese)
Abstract (English)
Table of Contents
List of Figures
List of Tables
Chapter 1  Introduction
1.1  Research Background
1.2  Research Motivation
1.3  Thesis Organization
Chapter 2  Artificial Intelligence
2.1  Overview of Artificial Intelligence
2.1.1  Artificial Intelligence
2.1.2  Machine Learning
2.1.3  Deep Learning
2.2  Neurons
2.3  Neural Networks
Chapter 3  Convolutional Neural Networks
3.1  Overview of Convolutional Neural Networks
3.2  Convolutional Layer
3.3  Activation Functions
3.3.1  Step Function
3.3.2  Sigmoid Function
3.3.3  Hyperbolic Tangent Function
3.3.4  Rectified Linear Unit
3.4  Pooling Layer
Chapter 4  Expandable Analog Neuron Circuit
4.1  Convolutional Layer
4.1.1  Digital Image Processing
4.1.2  Current Reference Circuit
4.1.3  Digital-to-Analog Converter Circuit
4.2  Activation Function and Pooling Layer
4.2.1  Dynamic Comparator
Chapter 5  Layout Considerations, Simulation Results, and Measurement Considerations
5.1  Layout Considerations
5.2  Simulation Results
5.2.1  Current-Steering Digital-to-Analog Converter
5.2.2  Differential Amplifier
5.2.3  Dynamic Comparator
5.2.4  Handwritten-Digit Recognition Features
5.3  Layout Verification Results and Error Explanation
5.3.1  DRC Verification Results
5.3.2  LVS Verification Results
5.4  Measurement Considerations
Chapter 6  Conclusion and Future Work
6.1  Conclusion
6.2  Future Work
References
Advisor: Muh-Tian Shiue (薛木添)   Approval Date: 2020-8-17