References
[1] 黃振興、黃尹男、黃仁傑, "Experimental Study on the Feasibility of Seismic Isolation Design for Microelectronics Factories", National Center for Research on Earthquake Engineering, Report No. NCREE-02-023, 2002.
[2] 栗正暐、黃宣諭, "Key Considerations in the Structural Design of High-Tech Semiconductor Fabs", 土木水利 (Civil and Hydraulic Engineering), Vol. 46, No. 6, Chinese Institute of Civil and Hydraulic Engineering, pp. 30-37.
[3] Zhang Y.A. and Zhu A., "Novel Model-free Optimal Active Vibration Control Strategy Based on Deep Reinforcement Learning", Structural Control and Health Monitoring, Article ID 6770137, 2023.
[4] 李柏陞, "Path Planning of Unmanned Vehicles for Unknown Environments Based on Deep Reinforcement Learning", Tamkang University, Master's thesis, 2023.
[5] 邱唯祐, "Coronary Artery Centerline Tracking in Computed Tomography Angiography Images Using Deep Reinforcement Learning", National Yang Ming Chiao Tung University, Master's thesis, 2023.
[6] Kelly J.M., Earthquake-Resistant Design with Rubber, Springer-Verlag, 1993.
[7] Naeim F. and Kelly J.M., Design of Seismic Isolated Structures: From Theory to Practice, John Wiley & Sons, 1999.
[8] Tsai C.S., Chiang T.C., Chen B.J. and Lin S.B., “An advanced analytical model for high damping rubber bearings”, Earthquake Engineering & Structural Dynamics, 32, pp. 1373-1387, 2003.
[9] Reggio A. and Angelis M.D., “Combined primary-secondary system approach to the design of an equipment isolation system with High-Damping Rubber Bearings”, Journal of Sound and Vibration, 333, pp. 2386-2403, 2014.
[10] Yang J.N., Agrawal A.K., and Samali B., “A Benchmark Problem for Response Control of Wind-Excited Tall Buildings”, Journal of Engineering Mechanics, 130, pp. 437-446, 2004.
[11] Rodellar J., Garcia G., Vidal Y., Acho L. and Pozo F., “Hysteresis based vibration control of base-isolated structures”, Procedia Engineering, 199, pp. 1798-1803, 2017.
[12] Nagarajaiah S., Riley M.A. and Reinhorn A.M., "Control of sliding isolated bridge with absolute acceleration feedback", Journal of Engineering Mechanics, 119, pp. 2317-2332, 1993.
[13] Venanzi I., Ierimonti L. and Materazzi A.L., “Active Base Isolation of Museum Artifacts under Seismic Excitation”, Journal of Earthquake Engineering, 24, pp. 506-527, 2020.
[14] Oh H.E., Ku J.M., Lee D.H., Hong C.S. and Jeong W.B., “Analysis for active isolation of the equipment on flexible beam structure”, Journal of Physics: Conf. Series, 1075, pp. 27-28, 2017.
[15] 陳佳恩, "Analysis and Experimental Verification of a Skyhook Active Isolation System Applied to a Single-Degree-of-Freedom Mechanism", National Central University, Master's thesis, 2021.
[16] 吳柏諺, "Analysis and Experimental Verification of a Skyhook Active Isolation System Applied to Non-Rigid Equipment", National Central University, Master's thesis, 2022.
[17] 蔡元峰, "Numerical Simulation and Experimental Verification of a Stroke-Considering Skyhook Active Isolation System for Equipment", National Central University, Master's thesis, 2022.
[18] Basili M. and Angelis M.D., “Investigation on the optimal properties of semi active control devices with continuous control for equipment isolation”, Scalable Computing: Practice and Experience, 15, pp. 331-343, 2015.
[19] Housner G.W., Bergman L.A., Caughey T.K., Chassiakos A.G., Claus R.O., Masri S.F., and Yao J.T.P., “Structural Control: Past, Present, and Future”, Journal of Engineering Mechanics, 123, pp. 897-971, 1997.
[20] Yoshioka H., Ramallo J.C. and Spencer B.F., "'Smart' Base Isolation Strategies Employing Magnetorheological Dampers", Journal of Engineering Mechanics, 128, pp. 540-551, 2002.
[21] Smith O.J.M., Feedback Control Systems, McGraw-Hill, 1958.
[22] Åström K.J. and Hägglund T., Advanced PID Control, ISA - The Instrumentation, Systems, and Automation Society, 2006.
[23] Bryson A.E. and Ho Y.C., Applied Optimal Control: Optimization, Estimation, and Control, Blaisdell Publishing Company, 1969.
[24] Glover K., Doyle J. and Packard A., “State-space formulae for all stabilizing controllers that satisfy an H-infinity-norm bound and relations to risk sensitivity”, Systems & Control Letters, 14, pp. 165-172, 1989.
[25] Watkins C.J.C.H. and Dayan P., "Q-Learning", Machine Learning, 8, pp. 279-292, 1992.
[26] Sutton R.S. and Barto A.G., Reinforcement Learning: An Introduction, The MIT Press, 1998.
[27] Mnih V., et al., “Human-level control through deep reinforcement learning”, Nature, 518, pp. 529-533, 2015.
[28] Lillicrap T.P., et al., “Continuous control with deep reinforcement learning”, ICLR, arXiv:1509.02971, 2016.
[29] Mnih V., et al., “Asynchronous methods for deep reinforcement learning”, International Conference on Machine Learning, 48, pp. 1928-1937, 2016.
[30] Silver D., et al., “Mastering the game of Go with deep neural networks and tree search”, Nature, 529, pp. 484-489, 2016.
[31] Silver D., et al., “Mastering the game of Go without human knowledge”, Nature, 550, pp. 354-359, 2017.
[32] Haarnoja T., et al., “Soft Actor-Critic: Off-Policy Maximum Entropy Deep Reinforcement Learning with a Stochastic Actor”, International Conference on Machine Learning, 35, pp. 1861-1870, 2018.
[33] 張承熹, "Comparison of LQR and Reinforcement Learning for Inverted Pendulum Control", Chinese Culture University, Master's thesis, 2022.
[34] Gregurić M., Vujić M., Alexopoulos C. and Miletić M., "Application of Deep Reinforcement Learning in Traffic Signal Control: An Overview and Impact of Open Traffic Data", Applied Sciences, 10, 4011, 2020.
[35] 范淳皓, "Automated Crack Segmentation and Detection Based on Deep Reinforcement Learning Neural Networks", National Taiwan University, Master's thesis, 2024.
[36] Xu H., Su X., Wang Y., Cai H., Cui K., and Chen X., “Automatic bridge crack detection using a convolutional neural network”, Applied Sciences, 9, 2867, 2019.
[37] 陳克宜, "Development of an Earthquake-Characteristic Control Module and a Piezoelectric Smart Sliding Seismic Isolation System Using Deep Reinforcement Learning", National Yang Ming Chiao Tung University, Master's thesis, 2022.
[38] Eshkevari S.S., Eshkevari S.S., Sen D. and Pakzad S.N., "Active structural control framework using policy-gradient reinforcement learning", Engineering Structures, 274, 115122, 2023.
[39] 莊竣凱, "Development and Experimental Verification of an Earthquake-Characteristic Prediction Module Using Long Short-Term Memory Neural Networks and a Smart Seismic Isolation and Damping Control System", National Yang Ming Chiao Tung University, Master's thesis, 2021.
[40] Yao J. and Ge Z., “Path-Tracking Control Strategy of Unmanned Vehicle Based on DDPG Algorithm”, Sensors, 22, 7881, 2022.
[41] Kang J.W. and Kim H.S., "Performance Evaluation of Reinforcement Learning Algorithm for Control of Smart TMD", Journal of Korean Association for Spatial Structures, 21, pp. 41-48, 2021.
[42] Liang G., Zhao T. and Wei Y., "DDPG based self-learning active and model-constrained semi-active suspension control", International Conference on Vehicle Control and Intelligence (CVCI), pp. 1-6, 2021.
[43] Yang J., Peng W. and Sun C., “A Learning Control Method of Automated Vehicle Platoon at Straight Path with DDPG-Based PID”, Electronics, 10, 2580, 2021.
[44] Yan R., Jiang R., Jia B., Huang J. and Yang D., "Hybrid Car-Following Strategy Based on Deep Deterministic Policy Gradient and Cooperative Adaptive Cruise Control", IEEE Transactions on Automation Science and Engineering, arXiv:2103.03796, 2022.
[45] Hunt J.J., et al., "Continuous control with deep reinforcement learning", ICLR, arXiv:1509.02971, 2016.
[46] Howard A.G., Zhu M. and Chen B., "MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications", Computer Vision and Pattern Recognition, arXiv:1704.04861, 2017.
[47] He K., Zhang X., Ren S. and Sun J., “Deep Residual Learning for Image Recognition”, Computer Vision and Pattern Recognition, arXiv:1512.03385, 2015.
[48] de Bruin T., Kober J., Tuyls K. and Babuška R., "Experience Selection in Deep Reinforcement Learning for Control", Journal of Machine Learning Research, 19, pp. 1-56, 2018.
[49] Kingma D.P. and Ba J.L., "Adam: A Method for Stochastic Optimization", ICLR, arXiv:1412.6980, 2015.