Master's/Doctoral Thesis 107222010: Detailed Record




Name: 黃建隆 (Chien-Lung Huang)    Department: Department of Physics (物理學系)
Thesis Title: AdS/CFT Correspondence with Machine Learning
Related Theses
★ 由Quintessence和Phantom組成雙純量場的暗能量模型
★ 自引力球殼穿隧的Hawking輻射
★ Gauss-Bonnet 重力理論中穿隧效應的霍金輻射
★ SL(4,R)理論下的漸近平直對稱轉換
★ 外加B-場下於三維球面上之土坡弦及銳牙弦
★ 克爾-紐曼/共形場中的三點關聯函數
★ 時空的熱力學面向
★ 四維黑洞的全息描述
★ 萊斯納-諾德斯特洛姆黑洞下的成對產生
★ 自旋粒子在萊斯納-諾思通黑洞的生成
★ Pseudo Spectral Method for Holographic Josephson Junction
★ 克爾-紐曼黑洞下的成對產生
★ Holographic Josephson Junction in Various Dimensions
★ Characteristics of Cylindrically Symmetric Spacetimes in General Relativity
★ Force Free Electrodynamics in Extremal Kerr-Newman Black Holes
★ Schwinger Effect in Near Extremal Charged Black Holes
  1. Electronic full-text access: the author has agreed to immediate open access.
  2. The open-access electronic full text is licensed only for personal, non-profit retrieval, reading, and printing by users for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) In 2018, Professor Koji Hashimoto published a paper [1] in which a deep-neural-network (DNN) structure was used to build a model connected to the AdS/CFT correspondence. In this thesis we begin by reconstructing their model and discussing the various problems that arise in the original framework. In Chapter 3, to address these problems, we attempt to build a new learning framework from other machine-learning models. Within this framework we aim to avoid the use of negative data, since we find that such data cannot be obtained in practice; we therefore adopt reinforcement learning (RL) combined with other function approximators (a deep neural network (DNN), a neural ordinary differential equation (Neural ODE), or other approximating functions). However, our results consistently show that, under this problem setting, multiple solutions can correspond to the same data, and in the epilogue we discuss improvements for future work from two different perspectives.
Abstract (English) In 2018, Koji Hashimoto et al. [1] presented a deep-neural-network-like model connected to the AdS/CFT correspondence. We reconstructed their model and found several problems concerning uncertainty, so we attempted to solve them with other learning models. The alternative models combine concepts from reinforcement learning with function approximators such as Neural ODEs and deep neural networks. Our goal is to avoid using negative data during learning, since acquiring such data is problematic in practice. However, the results show that the underlying problem is the non-uniqueness of the solution, and we provide further discussion and possible improvements.
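The following is only a minimal sketch (Python/PyTorch) of the idea summarized in the abstract, under our own assumptions rather than the thesis's actual implementation: following the spirit of [1], the holographic radial coordinate is discretized into network layers, the unknown metric function h(η) at each layer is a trainable weight, boundary response data (φ, π) is propagated layer by layer into the bulk, and a horizon condition supplies the training loss. The layer count, mass and coupling values, the horizon condition, and the two boundary-data pairs are all hypothetical placeholders; only "positive" data is used, mirroring the thesis's stated goal of avoiding negative data.

import torch

N_LAYERS, D_ETA = 10, 0.1   # radial discretization (assumed values)
M2, LAMBDA = -1.0, 1.0      # scalar mass^2 and quartic coupling (assumed values)

# Emergent metric function h(eta), one trainable weight per radial layer.
h = torch.nn.Parameter(torch.ones(N_LAYERS))

def propagate(phi, pi):
    """Push boundary data (phi, pi) through the 'bulk' layers by Euler steps."""
    for k in range(N_LAYERS):
        # One discretized step of an illustrative scalar-field equation of motion.
        phi_new = phi + D_ETA * pi
        pi_new = pi - D_ETA * (h[k] * pi - M2 * phi - LAMBDA * phi**3)
        phi, pi = phi_new, pi_new
    return phi, pi

def horizon_loss(phi, pi):
    # Toy stand-in for a horizon regularity condition used as the loss.
    return (pi - 0.5 * phi) ** 2

# Hypothetical "positive" boundary-data pairs assumed consistent with some bulk geometry.
data = [(torch.tensor(0.5), torch.tensor(0.1)),
        (torch.tensor(1.0), torch.tensor(0.3))]

opt = torch.optim.Adam([h], lr=1e-2)
for epoch in range(200):
    opt.zero_grad()
    loss = sum(horizon_loss(*propagate(phi0, pi0)) for phi0, pi0 in data)
    loss.backward()
    opt.step()

print("learned metric function h(eta):", h.detach().numpy())

The non-uniqueness issue discussed in the thesis shows up here as well: different learned profiles h(η) can reproduce the same small set of boundary pairs, which is why the epilogue discusses further constraints and improvements.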
Keywords (Chinese)
★ 反德西特/共形場論對偶 (AdS/CFT correspondence)
★ 機器學習 (machine learning)
★ 強化學習 (reinforcement learning)
Keywords (English)
★ AdS/CFT Correspondence
★ Machine Learning
★ Reinforcement Learning
Table of Contents
1 Introduction 1
  1.1 AdS/CFT correspondence 2
  1.2 Anti-de Sitter space and Conformal Theory 2
  1.3 Machine Learning 4
2 First step from AdS/CFT to Neural Network 5
  2.1 Bulk Metric 6
  2.2 Equation of Motion 6
  2.3 Boundary condition 7
  2.4 Data and Results 8
  2.5 Problems and Discussion 10
3 Reinforcement learning model 12
  3.1 Markov Decision Process 12
  3.2 Function Approximation 15
  3.3 Algorithm 17
  3.4 Result and Discussion 17
4 Epilogue: Improvements 20
5 Conclusion 22
References
[1] Koji Hashimoto, Sotaro Sugishita, Akinori Tanaka, and Akio Tomiya. Deep learning and the AdS/CFT correspondence. Phys. Rev. D, 98:046019, Aug 2018.
[2] Juan Maldacena. The large N limit of superconformal field theories and supergravity. International Journal of Theoretical Physics, 38(4):1113-1133, 1999.
[3] Alfonso V. Ramallo. Introduction to the AdS/CFT correspondence, 2013.
[4] Russell D. Reed and Robert J. Marks. Neural Smithing: Supervised Learning in Feedforward Artificial Neural Networks. MIT Press, 1999.
[5] Viren Jain, Joseph F. Murray, Fabian Roth, Srinivas Turaga, Valentin Zhigulin, Kevin L. Briggman, Moritz N. Helmstaedter, Winfried Denk, and H. Sebastian Seung. Supervised learning of image restoration with convolutional networks. In 2007 IEEE 11th International Conference on Computer Vision, pages 1-8, 2007.
[6] H. B. Barlow. Unsupervised learning. Neural Computation, 1:295-311, 1989.
[7] Richard S. Sutton and Andrew G. Barto. Reinforcement Learning: An Introduction. The MIT Press, second edition, 2018.
[8] Diederik P. Kingma and Jimmy Ba. Adam: A method for stochastic optimization, 2017.
[9] H. Robbins and S. Monro. A stochastic approximation method. Annals of Mathematical Statistics, 22:400-407, 1951.
[10] Marvin Minsky and Seymour Papert. Perceptrons: An Introduction to Computational Geometry. MIT Press, Cambridge, MA, USA, 1969.
[11] Vincent Francois-Lavet, Peter Henderson, Riashat Islam, Marc G. Bellemare, and Joelle Pineau. An introduction to deep reinforcement learning. CoRR, abs/1811.12560, 2018.
[12] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A. Rusu, Joel Veness, Marc G. Bellemare, Alex Graves, Martin Riedmiller, Andreas K. Fidjeland, Georg Ostrovski, Stig Petersen, Charles Beattie, Amir Sadik, Ioannis Antonoglou, Helen King, Dharshan Kumaran, Daan Wierstra, Shane Legg, and Demis Hassabis. Human-level control through deep reinforcement learning. Nature, 518(7540):529-533, Feb 2015.
[13] Jianzhun Du, Joseph Futoma, and Finale Doshi-Velez. Model-based reinforcement learning for semi-Markov decision processes with neural ODEs. CoRR, abs/2006.16210, 2020.
[14] Tian Qi Chen, Yulia Rubanova, Jesse Bettencourt, and David Duvenaud. Neural ordinary differential equations. CoRR, abs/1806.07366, 2018.
[15] Patrick Kidger, James Morrill, James Foster, and Terry Lyons. Neural controlled differential equations for irregular time series, 2020.
[16] Koji Hashimoto. AdS/CFT correspondence as a deep Boltzmann machine. Physical Review D, 99(10), May 2019.
[17] Tetsuya Akutagawa, Koji Hashimoto, and Takayuki Sumimoto. Deep learning and AdS/QCD. Physical Review D, 102(2), Jul 2020.
[18] Koji Hashimoto, Hong-Ye Hu, and Yi-Zhuang You. Neural ODE and holographic QCD, 2020.
Advisor: 陳江梅 (Chiang-Mei Chen)    Date of Approval: 2021-07-27
