Thesis/Dissertation 106281602: Detailed Record




Name: Reyna Marsya Quita (馬葵娜)    Department: Department of Mathematics
Thesis Title: Conservative Physics-Informed Neural Networks for Generalized Buckley-Leverett Equation Near Critical States
Related Theses:
★ A Study of Inviscid Standing Waves of Gas Flow through Discontinuous Ducts
★ An Iteration Method for the Riemann Problem of Some Degenerate Hyperbolic Balance Laws
★ Application of Image Blurring Methods to Neural Networks for Butterfly Recognition
★ Existence of Generalized Solutions to the Riemann Problem for a Scalar Nonlinear Balance Law
★ Existence and Uniqueness of Solutions to Two-Point Boundary Value Problems for Nonlinear Second-Order Systems of Ordinary Differential Equations
★ Construction of Approximate Solutions to the Cauchy Problem for the Compressible Euler Equations with Near-Sonic Flows
★ Properties of Solutions of Some Degenerate Quasilinear Wave Equations
★ Global Lipschitz Continuous Solutions of Quasilinear Wave Equations with Piecewise Linear Initial Data
★ Advection-Diffusion-Reaction Equations for Equilibrium Models in Hydrogeology
★ Classical Solutions to Perturbed Riemann Problems of Nonlinear Conservation Laws
★ Periodicity of Solutions to Initial-Boundary Value Problems for the BBM and KdV Equations
★ Classical Solutions to Perturbed Riemann Problems of Resonant Conservation Laws
★ Behavior of Shock Wave Solutions of the Slightly Viscous Euler Equations in Compressible Flow
★ Existence of Global Weak Solutions to Initial-Boundary Value Problems for Inhomogeneous Systems of Hyperbolic Conservation Laws
★ Global Weak Solutions to the Cauchy Problem for Nonlinear Balance Laws
★ Some Lemmas on the Global Existence of Entropy Solutions to the Cauchy Problem for a Scalar Hyperbolic Conservation Law
Files: Full text available in the system after 2025-01-31.
Abstract (Chinese): In this dissertation, we provide a modified version of conservative physics-informed neural networks (cPINN) to construct solutions of the Riemann problem for nonlinear conservation laws. These conservation laws may be written in conservative or non-conservative form; in the conservative form, the flux contains a discontinuous perturbation. We demonstrate the results using the generalized Buckley-Leverett equation, which models fluid flow in porous media with discontinuous porosity. By introducing a new unknown, the Buckley-Leverett equation is transformed into a nonlinear resonant system of conservation laws with a discontinuous perturbation term. We use cPINN to construct weak solutions of the Buckley-Leverett system, in both conservative and non-conservative forms, for non-critical and critical Riemann data. Exploiting the structure of the system, we divide the space-time domain into two subdomains: in the first we solve a two-by-two system, and in the second we solve the Riemann problem of a scalar non-convex conservation law, with a different loss function assigned to each subdomain. When the Riemann data are close to the critical states, we introduce a re-scaling technique; a suitable choice of the scaling parameters prevents oscillations in the weak solutions, so that the solutions obtained by cPINN match the exact solutions obtained from classical PDE theory. We also compare the advantages and disadvantages of this method against the WENO5 numerical scheme and find that our method applies to equations in non-conservative form, where WENO5 has difficulties. Finally, we study the feasibility of incorporating meta-learning into the cPINN method.
Abstract (English): In this dissertation, a modified version of conservative physics-informed neural networks (cPINN) is provided to construct solutions of the Riemann problem for hyperbolic scalar conservation laws in non-conservative form and for scalar conservation laws with a discontinuous perturbation in the flux. To demonstrate the results, we use the generalized Buckley-Leverett equation (GBL equation) with discontinuous porosity in porous media. By introducing a new unknown, the GBL equation is transformed into a two-by-two resonant hyperbolic system of conservation laws in conservative form. We test our idea by applying a cPINN algorithm to the GBL equation in both conservative and non-conservative forms, and in both critical and non-critical states. The method combines two different neural networks with corresponding loss functions: one for the two-by-two resonant hyperbolic system, and the other for the scalar conservation law with a discontinuous perturbation term in the non-convex flux. A re-scaling of the unknowns is adopted to avoid oscillations or inaccurate wave speeds in the Riemann solutions for critical Riemann data. The solutions constructed by the modified cPINN match the exact solutions obtained from the theoretical analysis of hyperbolic conservation laws. Finally, we compare the performance of the modified cPINN with the WENO5 numerical method: whereas WENO5 struggles to approximate the solutions of the Riemann problems for the GBL equation in non-conservative form, cPINN works admirably. Moreover, the integration of meta-learning into the initialization of cPINN for the re-scaled GBL equation is explored as an additional step.
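The English abstract above outlines the mechanics of the method: separate neural networks assigned to different regions of the space-time domain, residual losses for the governing equations computed by automatic differentiation, initial-condition losses for the Riemann data, and a coupling condition at the subdomain interface. The following is a minimal, hypothetical PyTorch sketch of that domain-decomposition idea only. The framework choice, network sizes, flux constant M, interface location, Riemann data, and sampling are illustrative assumptions; the dissertation's two-by-two resonant system, discontinuous perturbation term, and re-scaling of the unknowns are not reproduced here.

# Minimal, hypothetical sketch of a cPINN-style setup for a scalar
# conservation law u_t + f(u)_x = 0 with Riemann data. All settings below
# are illustrative assumptions, not the dissertation's actual configuration.
import torch
import torch.nn as nn

torch.manual_seed(0)

def flux(u, M=2.0):
    # Classical non-convex Buckley-Leverett flux (illustrative choice of M).
    return u**2 / (u**2 + M * (1.0 - u)**2)

class MLP(nn.Module):
    # Small fully connected network u_theta(x, t) with tanh activations.
    def __init__(self, width=20, depth=4):
        super().__init__()
        layers, dim = [], 2
        for _ in range(depth):
            layers += [nn.Linear(dim, width), nn.Tanh()]
            dim = width
        layers += [nn.Linear(dim, 1)]
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        return self.net(torch.cat([x, t], dim=1))

def pde_residual(model, x, t):
    # Residual of u_t + f(u)_x = 0 via automatic differentiation.
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    u = model(x, t)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    f = flux(u)
    f_x = torch.autograd.grad(f, x, torch.ones_like(f), create_graph=True)[0]
    return u_t + f_x

# Two subdomain networks: x < 0 and x > 0 (hypothetical interface at x = 0).
net_left, net_right = MLP(), MLP()
params = list(net_left.parameters()) + list(net_right.parameters())
opt = torch.optim.Adam(params, lr=1e-3)

# Hypothetical Riemann data: u(x, 0) = uL for x < 0, uR for x > 0.
uL, uR = 0.9, 0.1

for it in range(2000):
    opt.zero_grad()

    # PDE residual loss on collocation points in each subdomain.
    xl = -torch.rand(256, 1); xr = torch.rand(256, 1)
    tl = torch.rand(256, 1);  tr = torch.rand(256, 1)
    loss_pde = (pde_residual(net_left, xl, tl)**2).mean() \
             + (pde_residual(net_right, xr, tr)**2).mean()

    # Initial-condition loss for the Riemann data.
    x0l = -torch.rand(128, 1); x0r = torch.rand(128, 1)
    t0 = torch.zeros(128, 1)
    loss_ic = ((net_left(x0l, t0) - uL)**2).mean() \
            + ((net_right(x0r, t0) - uR)**2).mean()

    # Interface loss at x = 0: penalize jumps in the state and in the flux.
    xi = torch.zeros(128, 1); ti = torch.rand(128, 1)
    ul_i, ur_i = net_left(xi, ti), net_right(xi, ti)
    loss_if = ((ul_i - ur_i)**2).mean() + ((flux(ul_i) - flux(ur_i))**2).mean()

    loss = loss_pde + loss_ic + loss_if
    loss.backward()
    opt.step()

The interface term penalizes jumps in both the state and the flux across the subdomain boundary; enforcing flux continuity is the conservative coupling that distinguishes this construction from training independent PINNs on each subdomain.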
Keywords (Chinese): ★ Conservative Physics-Informed Neural Networks
★ Deep Learning
★ Hyperbolic Conservation Laws
★ Generalized Buckley-Leverett Equation
★ Riemann Problem
★ Entropy Condition
Keywords (English): ★ Physics-informed Neural Networks (PINN)
★ cPINN
★ Deep Learning
★ Hyperbolic System of Conservation Laws
★ Generalized Buckley-Leverett Equation
★ Riemann Problems
Table of Contents
Abstract (in Chinese) ... i
Abstract ... ii
Acknowledgment ... iii
Contents ... iv
List of Figures ... vi
List of Tables ... xiv
1 Introduction ... 1
2 Preliminaries ... 8
2.1 Theoretical Results on the Riemann Problem of the GBL Equation ... 8
2.2 Physics-Informed Neural Networks (PINN) ... 11
2.3 Conservative Physics-Informed Neural Networks (cPINN) ... 14
2.4 Model-Agnostic Meta Learning (MAML) ... 15
2.5 A New Reptile Initialization based Physics-Informed Neural Network (NRPINN) ... 17
2.6 Meta Learning for Parametrized PDE ... 19
3 cPINN for the Generalized Buckley-Leverett (GBL) Equation ... 23
3.1 cPINN for the GBL Equation in Conservative Form ... 23
3.1.1 Non-Critical States ... 25
3.1.2 Critical States ... 26
3.2 cPINN for the GBL Equation in Non-Conservative Form ... 29
3.2.1 Non-Critical States ... 29
3.2.2 Critical States ... 31
3.3 Meta-Learning for the Weight Initialization in Re-scaling Method ... 32
4 Experiment Results ... 35
4.1 Non-Critical States ... 35
4.1.1 Case 1 ... 36
4.1.2 Case 2 ... 36
4.2 Critical States ... 38
4.2.1 Case 3a ... 38
4.2.2 Case 3b ... 39
4.2.3 Case 4a ... 42
4.2.4 Case 4b ... 42
4.2.5 Case 5a ... 44
4.2.6 Case 5b ... 46
4.3 Comparison with WENO5 ... 46
4.4 Meta-Learning for Weight Initialization in Re-scaled GBL Equation ... 48
4.4.1 Case 3b ... 48
4.4.2 Case 4b ... 52
4.4.3 Case 5b ... 54
5 Conclusions ... 60
A Details of the Experimental Settings ... 65
B Details of the Experimental Settings with Meta-Learning ... 66
C The Loss Plot ... 67
D WENO Results ... 70
Advisor: John M. Hong (洪盟凱)    Date of Approval: 2024-01-12
