Master's/Doctoral Thesis 91522049: Detailed Record




Author 黃得原 (De-Yuan Huang)   Department Computer Science and Information Engineering
Thesis Title The Development of an Adaptive Neuro-Fuzzy Network and Its Applications
(適應性模糊化類神經網路及其應用)
Related Theses
★ A Q-Learning-Based Swarm Intelligence Algorithm and Its Applications
★ Development of a Rehabilitation System for Children with Developmental Delay
★ Comparing Teacher Assessment and Peer Assessment from the Perspective of Cognitive Style: From English Writing to Game Making
★ A Prediction Model for Diabetic Nephropathy Based on Laboratory Test Values
★ Design of a Remote-Sensing Image Classifier Based on Fuzzy Neural Networks
★ A Hybrid Clustering Algorithm
★ Development of Assistive Devices for People with Disabilities
★ A Study on Fingerprint Classifiers
★ A Study on Backlight Image Compensation and Color Quantization
★ Application of Neural Networks to Business Income Tax Audit Case Selection
★ A New Online Learning System and Its Application to Tax Audit Case Selection
★ An Eye-Tracking System and Its Applications to Human-Machine Interfaces
★ Data Visualization Combining Swarm Intelligence with Self-Organizing Maps
★ Development of a Pupil-Tracking System as a Human-Machine Interface for People with Disabilities
★ An Artificial-Immune-System-Based Online Learning Neuro-Fuzzy System and Its Applications
★ Application of Genetic Algorithms to Voice Signal Descrambling
  1. This electronic thesis has been approved for immediate open access.
  2. The open-access full text is licensed only for personal, non-commercial searching, reading, and printing for academic research purposes.
  3. Please observe the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese) Traditionally, there have been two approaches to building intelligent machines: neural networks and fuzzy systems. Each has its own strengths, weaknesses, and limitations. The greatest strength of neural networks is their ability to learn; their weakness is that the concepts they induce from large numbers of examples are hidden in a set of network parameters that are too abstract for humans to interpret. The strength of fuzzy systems is that they offer a convenient path for exploiting experts' rules of thumb in many tasks and, more importantly, they provide logical interpretability. The bottleneck in building a fuzzy system, however, is where the necessary fuzzy rules should come from, since a complete and effective rule base cannot be built solely from the verbal heuristics supplied by humans or experts. How to integrate the strengths of neural networks and fuzzy systems has therefore become an important research topic in related fields in recent years.
Learning algorithms for neural networks can be roughly divided into supervised learning, unsupervised learning, and reinforcement learning. This dissertation proposes a reinforcement learning algorithm based on fuzzy hyperrectangular composite neural networks (FHRCNN-Q). In the absence of explicit training data, the algorithm automatically constructs a complete neuro-fuzzy system and, through reinforcement learning, tunes the system's parameters to improve its performance. FHRCNN-Q builds its network architecture through reinforcement learning and uses it to explore the possible solution space. The inverted pendulum system and the back-driving truck problem are used to verify the performance of FHRCNN-Q.
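For reference, the proposed FHRCNN-Q builds on the standard Q-learning update rule (Watkins' Q-learning), whose generic tabular form is

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \Big[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \Big],$$

where $\alpha$ is the learning rate, $\gamma$ the discount factor, and $r_{t+1}$ the immediate reward. How the dissertation attaches Q-values to fuzzy rules is described in the English abstract below and may differ from this tabular form in detail.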
In an unknown environment, a mobile robot's navigation system must read environmental information through its sensors and take appropriate actions based on its perception of that information. This dissertation uses FHRCNN-Q to construct a navigation system for a mobile robot in an unknown environment. Simulations of the mobile robot show that FHRCNN-Q can build, through reinforcement learning, the fuzzy rule base that the robot needs in an unknown environment.
Abstract (English) Over the last few decades, neural networks and fuzzy systems have established their reputation as alternative approaches to information processing. Both have certain advantages over classical methods, especially when vague data or prior knowledge is involved. However, their applicability has suffered from several weaknesses of the individual models. Therefore, combinations of neural networks with fuzzy systems have been proposed, in which both models complement each other.
A neuro-fuzzy network can be defined as a fuzzy system trained with an algorithm derived from neural network theory. The integration of neural networks and fuzzy systems aims to generate a more robust, efficient, and easily interpretable system in which the advantages of each model are kept and their possible disadvantages are removed.
In this dissertation, an implementation of Q-learning based on a fuzzy hyperrectangular composite neural network (FHRCNN) is proposed. The proposed system is referred to as FHRCNN-Q. In FHRCNN-Q, the antecedent parts of the rules express a model of the environment; in other words, the proposed system models the uncertain environment using an FHRCNN.
In the proposed FHRCNN-Q, a fuzzy hyperrectangular composite neural network consists of a set of fuzzy IF-THEN rules that describe the input-output mapping relationship of the network. Simply stated, presenting input data to a hidden node in an FHRCNN-Q is equivalent to firing a fuzzy rule.
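As a rough illustration of this "hidden node equals fuzzy rule" view, the minimal Python sketch below computes a hyperrectangle-style firing strength. The per-dimension form (full membership inside the interval, exponential fall-off outside, sensitivity parameter gamma) is an assumption made for illustration, not the dissertation's exact membership function:

```python
import math

def dim_membership(x: float, low: float, high: float, gamma: float = 1.0) -> float:
    """Per-dimension membership: 1.0 inside [low, high], exponentially
    decaying outside. The fall-off shape and the sensitivity parameter
    gamma are illustrative assumptions, not the dissertation's definition."""
    if low <= x <= high:
        return 1.0
    dist = (low - x) if x < low else (x - high)
    return math.exp(-gamma * dist)

def rule_firing_strength(rule_lows, rule_highs, x, gamma: float = 1.0) -> float:
    """A hyperrectangular rule fires to the degree that the input vector
    lies inside (or near) its hyperrectangle; min acts as a fuzzy AND."""
    return min(dim_membership(xi, lo, hi, gamma)
               for xi, lo, hi in zip(x, rule_lows, rule_highs))

# A 2-D rule covering [0, 1] x [2, 3]:
print(rule_firing_strength([0.0, 2.0], [1.0, 3.0], [0.5, 2.5]))  # inside  -> 1.0
print(rule_firing_strength([0.0, 2.0], [1.0, 3.0], [1.5, 2.5]))  # outside -> ~0.61
```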
The proposed FHRCNN-Q can not only tune its parameters but also incrementally construct its architecture in a reinforcement learning environment. By injecting new rules into the system, an FHRCNN-Q can explore the possible solution space. The inverted pendulum system and the back-driving truck problem were used to demonstrate the performance of the proposed FHRCNN-Q.
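A minimal sketch of what such incremental construction could look like in code, under assumed names and constants (the Rule structure, COVERAGE_THRESHOLD, INIT_HALF_WIDTH, and the epsilon-greedy policy are illustrative choices, not the dissertation's algorithm); it reuses a firing-strength callable such as the one sketched above:

```python
import random
from dataclasses import dataclass
from typing import Callable, List

COVERAGE_THRESHOLD = 0.5  # assumed: weakest acceptable firing strength
INIT_HALF_WIDTH = 0.1     # assumed: half-width of a freshly injected rule

@dataclass
class Rule:
    lows: List[float]      # hyperrectangle lower bounds (antecedent)
    highs: List[float]     # hyperrectangle upper bounds (antecedent)
    q_values: List[float]  # one Q-value per discrete action (consequent)

# With the previous sketch: firing = lambda r, s: rule_firing_strength(r.lows, r.highs, s)
Firing = Callable[[Rule, List[float]], float]

def inject_rule_if_uncovered(rules: List[Rule], state: List[float],
                             n_actions: int, firing: Firing) -> None:
    """Inject a new rule centered on the current state whenever no
    existing rule covers that state strongly enough."""
    if not rules or max(firing(r, state) for r in rules) < COVERAGE_THRESHOLD:
        rules.append(Rule(lows=[s - INIT_HALF_WIDTH for s in state],
                          highs=[s + INIT_HALF_WIDTH for s in state],
                          q_values=[0.0] * n_actions))

def select_action(rules: List[Rule], state: List[float],
                  firing: Firing, epsilon: float = 0.1) -> int:
    """Epsilon-greedy choice over the Q-values of the most strongly
    firing rule (assumes inject_rule_if_uncovered ran first)."""
    best_rule = max(rules, key=lambda r: firing(r, state))
    if random.random() < epsilon:
        return random.randrange(len(best_rule.q_values))
    return max(range(len(best_rule.q_values)), key=lambda a: best_rule.q_values[a])

def q_update(rule: Rule, action: int, reward: float, best_next_q: float,
             alpha: float = 0.1, gamma: float = 0.9) -> None:
    """Temporal-difference update applied to the fired rule's Q-value."""
    rule.q_values[action] += alpha * (reward + gamma * best_next_q
                                      - rule.q_values[action])
```

In the navigation setting described next, `state` would hold the robot's sensor readings and each discrete action a steering command.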
It is important for robots to obtain environmental information by means of their sensory systems and to decide their own behavior when they move in an unknown environment. This dissertation also presents an FHRCNN-Q approach to a navigation system that allows a goal-directed mobile robot to incrementally adapt to an unknown environment. In the FHRCNN-Q, the fuzzy rules that map current sensory inputs to appropriate actions are built through reinforcement learning. Simulation results for a mobile robot illustrate the performance of the proposed navigation system.
Keywords (Chinese) ★ reinforcement learning
★ Q-learning
★ neuro-fuzzy network
★ mobile robot
★ fuzzy hyperrectangular composite neural network (FHRCNN)
Keywords (English) ★ Q-learning
★ neuro-fuzzy system
★ robot
★ navigation
★ reinforcement learning
★ fuzzy hyperrectangular composite neural network
Table of Contents ABSTRACT i
Acknowledgments v
Table of Contents vi
List of Figures viii
List of Tables ix
Nomenclature x
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Overview of the Study 3
1.2.1 Fuzzy Hyperrectangular Composite Neural Network 3
1.2.2 Reinforcement Learning Algorithms 4
1.3 Organization of Dissertation 5
Chapter 2 Related Work 6
2.1 Review of Neuro-Fuzzy Systems 6
2.2 Review of the Reinforcement Learning Scheme 10
2.2.1 Brief Review of Q-learning 14
2.2.2 Connectionist Q-learning (QCON) 15
2.2.3 CMAC-based Q-learning 16
2.2.4 Q-Self Organizing Map (Q-KOHON) 17
2.3 Adaptive Classifier-System-Based Neuro-Fuzzy Inference System 17
2.3.1 The Architecture of the ACSNFIS 18
2.3.2 Learning Mechanisms of the ACSNFIS 19
2.3.3 Updating Weights of the ACSNFIS 21
2.4 Review of Hyperrectangular Composite Neural Network 22
2.4.1 Supervised Decision-Directed Learning (SDDL) Algorithm 24
2.5 Discussion 27
Chapter 3 Fuzzy HyperRectangular Composite Neural Networks in a Reinforcement Learning Environment 29
3.1 Fuzzy HyperRectangular Composite Neural Network (FHRCNN) 29
3.1.1 Architecture of Fuzzy HyperRectangular Composite Neural Networks (FHRCNN) 30
3.2 New Approach to Fuzzy HyperRectangular Composite Neural Networks in a Reinforcement Learning Environment 33
3.2.1 The Architecture of the FHRCNN-Q 33
3.2.2 Construction of the FHRCNN-Q 34
3.3 Learning Algorithm of FHRCNN-Q 36
3.4 Updating Parameters of FHRCNN-Q 39
3.4.1 Updating Rules for the Antecedents 41
3.4.2 Updating Rules for the Consequents 42
3.4.3 Updating Q-values for the Rules 43
3.5 Conclusions 43
Chapter 4 The Reinforcement Learning Approach to Control Problems 44
4.1 The Inverted Pendulum Problem 44
4.2 The Back-Driving Truck Problem 49
4.3 A Reinforcement Learning Approach to Robot Navigation 54
4.3.1 The Simulated Robot Navigation System 54
4.3.2 The Proposed FHRCNN-Q Approach to Robot Navigation 55
4.3.3 Experimental Results of Robot Navigation 56
4.4 Conclusions 59
Chapter 5 Conclusions and Future Work 60
5.1 Conclusions 60
5.2 Future Work 61
References 62
Publications 68
Advisor 蘇木春 (Mu-Chun Su)   Date of Approval 2010-08-04