NCU Institutional Repository (中大機構典藏): Item 987654321/44764


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/44764


    Title: The development of an adaptive neuro-fuzzy network and its applications (適應性模糊化類神經網路及其應用)
    Authors: De-Yuan Huang (黃得原)
    Contributors: Graduate Institute of Computer Science and Information Engineering (資訊工程研究所)
    Keywords: reinforcement learning;Q-learning;neuro-fuzzy network;mobile robot;navigation;fuzzy hyperrectangular composite neural network (FHRCNN)
    Date: 2010-08-04
    Issue Date: 2010-12-09 13:54:43 (UTC+8)
    Publisher: National Central University (國立中央大學)
    Abstract: Two traditional approaches to building intelligent machines are neural networks and fuzzy systems, each with its own strengths and limitations. The greatest strength of a neural network is its ability to learn, but the concepts it induces from training examples are hidden in a set of network parameters that are too abstract for humans to interpret. A fuzzy system, on the other hand, offers a convenient way to exploit experts' rules of thumb in handling many tasks and, more importantly, provides logical explanations for its decisions. The bottleneck in building a fuzzy system, however, is where the necessary fuzzy rules come from: a complete and effective fuzzy rule base cannot be built solely from the verbal heuristics supplied by human experts. How to integrate the complementary strengths of neural networks and fuzzy systems has therefore become an important research topic in recent years.
    Neural-network learning algorithms fall roughly into supervised, unsupervised, and reinforcement learning. This dissertation proposes a reinforcement learning algorithm based on the fuzzy hyperrectangular composite neural network (FHRCNN-Q). Without explicit training data, the algorithm automatically constructs a complete neuro-fuzzy system and tunes the system's parameters through reinforcement learning to improve its performance. FHRCNN-Q builds its network architecture via reinforcement learning and uses it to explore the possible solution space. The inverted pendulum system and the truck-backing problem are used to verify the performance of FHRCNN-Q.
    In an unknown environment, a mobile robot's navigation system must read environmental information through its sensors and take appropriate actions based on what it perceives. This dissertation uses FHRCNN-Q to construct such a navigation system for a mobile robot in an unknown environment. Simulations of a mobile robot show that FHRCNN-Q can build, through reinforcement learning, the fuzzy rule base the robot needs in an unknown environment.
    Over the last few decades, neural networks and fuzzy systems have established their reputation as alternative approaches to information processing. Both have certain advantages over classical methods, especially when vague data or prior knowledge is involved. However, their applicability has suffered from several weaknesses of the individual models. Therefore, combinations of neural networks with fuzzy systems have been proposed, in which the two models complement each other. A neuro-fuzzy network can be defined as a fuzzy system trained with an algorithm derived from neural network theory. The integration of neural networks and fuzzy systems aims at a more robust, efficient, and easily interpretable system in which the advantages of each model are kept and their possible disadvantages are removed. In this dissertation, an implementation of Q-learning based on the fuzzy hyperrectangular composite neural network (FHRCNN) is proposed. The proposed system is referred to as FHRCNN-Q. In FHRCNN-Q, the antecedent parts of the rules express a model of the environment; in other words, the proposed system models the uncertain environment with an FHRCNN.
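    The Q-learning rule underlying FHRCNN-Q can be sketched in its basic tabular form. This is a minimal sketch only: the thesis replaces the lookup table with fuzzy-rule activations, and the dictionary representation, state/action names, and learning constants (alpha, gamma) below are illustrative assumptions, not the thesis's exact formulation.

    ```python
    def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.9):
        """One tabular Q-learning step:
        Q(s, a) <- Q(s, a) + alpha * (r + gamma * max_b Q(s', b) - Q(s, a)).

        Q is a dict mapping (state, action) pairs to values; unseen pairs
        default to 0.0. Returns the updated value for (s, a).
        """
        # Best estimated value achievable from the next state.
        best_next = max(Q.get((s_next, b), 0.0) for b in actions)
        old = Q.get((s, a), 0.0)
        # Move the old estimate toward the bootstrapped target.
        Q[(s, a)] = old + alpha * (r + gamma * best_next - old)
        return Q[(s, a)]
    ```

    In an FHRCNN-Q-style system, the "state" seen by this update would be determined by which fuzzy rules fire for the current sensory input, rather than by a discrete table index.
    
    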
    In the proposed FHRCNN-Q, a fuzzy hyperrectangular composite neural network consists of a set of fuzzy IF-THEN rules that describe the input-output mapping of the network. Simply stated, presenting input data to a hidden node of an FHRCNN-Q is equivalent to firing a fuzzy rule. The proposed FHRCNN-Q can not only tune its parameters but also incrementally construct its architecture in a reinforcement learning environment: by injecting new rules into the system, an FHRCNN-Q can explore the possible solution space. The inverted pendulum system and the truck-backing problem were used to demonstrate the performance of the proposed FHRCNN-Q. It is important for robots moving in an unknown environment to obtain environmental information through their sensory systems and to decide their own behavior accordingly. This dissertation therefore also presents an FHRCNN-Q-based navigation system that allows a goal-directed mobile robot to incrementally adapt to an unknown environment. In this system, the fuzzy rules that map current sensory inputs to appropriate actions are built through reinforcement learning, and simulation results for a mobile robot illustrate the performance of the proposed navigation system.
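    The firing strength of one fuzzy hyperrectangular rule can be sketched as a membership function that is 1 inside an axis-aligned box and decays with distance outside it. The exponential decay and the `gamma` sensitivity constant below are illustrative assumptions for the sketch, not the exact membership form used in the thesis.

    ```python
    import math

    def hyperrect_membership(x, lo, hi, gamma=1.0):
        """Membership of point x in the hyperrectangle [lo, hi] (per dimension).

        Inside the box the membership is exactly 1.0; outside, it decays
        exponentially with the squared distance to the nearest face.
        """
        # Per-dimension distance to the box: 0 when lo[i] <= x[i] <= hi[i].
        d2 = sum(max(l - xi, 0.0, xi - h) ** 2 for xi, l, h in zip(x, lo, hi))
        return math.exp(-gamma * d2)
    ```

    Presenting an input to a hidden node then amounts to evaluating such a membership: a value near 1 means the rule "fires" strongly, and the rule outputs can be combined in proportion to these strengths.
    
    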
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (HTML, 0Kb, 946 views)


    All items in NCUIR are protected by copyright, with all rights reserved.

