Master's/Doctoral Thesis 108522052 - Complete Metadata Record

DC Field    Value    Language
dc.contributor    資訊工程學系    zh_TW
dc.creator    劉宗祐    zh_TW
dc.creator    LAU CHUNG YAU    en_US
dc.date.accessioned    2021-08-11T07:39:07Z
dc.date.available    2021-08-11T07:39:07Z
dc.date.issued    2021
dc.identifier.uri    http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=108522052
dc.contributor.department    資訊工程學系    zh_TW
dc.description    國立中央大學    zh_TW
dc.description    National Central University    en_US
dc.description.abstract    近年來有越來越多關於機器學習的應用,在很多不同的領域中都會使用類神經網路作為輔助的工具來加速以及改善原本工作的表現和效率。雖然機器學習已被廣泛的使用,但是在某一些領域中如果涉及到非常重要的決定的話,則仍然可能會讓人懷疑類神經網路的可信度和可靠度,例如在醫學上的應用。類神經網路讓人懷疑的原因絕大部份是來自於它的「黑盒子」特色。類神經網路的學習只取決於人們給它的資料,但是人們並不懂為甚麼類神經網路會做出那樣的決定。由於類神經網路輸出結果缺乏參考資料,使得人們無法完全信賴類神經網路以及機器學習所帶來的任何決定。在此,我們提出一個規則解釋模組 (Rule-based Explaining Module),針對於使用規則來解釋循環關係性類神經網路 (Recurrent Relational Network) 的輸出結果。我們選擇RRN作為我們解釋的對象,是因為RRN本身擁有關係的特性,能為它學習解釋自身帶來不少好處。規則解釋模組核心的想法是易於實作,並能夠輕易地加入各種類神經網路上,以改善它們輸出結果的說服力。模組需要前置的知識,又或是說規則,來進行學習解釋。我們將會討論相關的研究、我們的方法,以及最後在解數獨上進行實驗。我們實驗使用了Gordon Royle的Minimum Sudoku作為資料集的基底,並加上我們的規則解數獨步驟器 (Rule-based Sudoku Step Solver) 的處理,最後輸出解數獨所需要的規則資料給予規則解釋模組進行學習。最後我們實驗的辨認規則平均準確度達到98%以上。    zh_TW
dc.description.abstract    Nowadays, more and more applications are built on machine learning, and many fields use neural networks as supporting tools to speed up and improve their work. However, in areas that involve crucial decisions, such as medical applications, people may still doubt whether a neural network is trustworthy and reliable. This doubt stems largely from the "black box" nature of neural networks: a network learns only from the data fed to it, and people do not know why a trained network makes a particular decision. Without any reference for its outputs, people cannot fully trust the decisions that neural networks and machine learning produce. We therefore propose the Rule-based Explaining Module (REM), which uses rules to explain the outputs of a Recurrent Relational Network (RRN). We chose the RRN as the target of our explanations because its built-in ability to learn relations is beneficial for learning to explain itself. The core idea of the module is to be easy to implement and easy to attach to different neural networks in order to improve the persuasiveness of their outputs. The module requires prior knowledge, that is, rules, to learn how to explain. We discuss related work and our method, and finally evaluate the model with experiments on solving Sudoku puzzles. We preprocess Gordon Royle's Minimum Sudoku and other Sudoku puzzle datasets with our Rule-based Sudoku Step Solver to generate the rule data for the REM to learn. Our experiments yield a positive result, with an average rule-recognition accuracy of over 98%.    en_US
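For illustration only (not part of the thesis record): the abstract describes a Rule-based Sudoku Step Solver that labels each solving step with the rule justifying it, and those labelled steps serve as training data for the REM. The minimal Python sketch below shows one hypothetical rule step of that kind, a "naked single" check on a 9x9 grid stored as 81 integers with 0 for empty cells; the function names and the (row, col, value, rule) output format are assumptions, not taken from the thesis.

    # Illustrative sketch only; the rule name and output format are hypothetical.

    def candidates(grid, idx):
        """Return the set of values still legal for an empty cell."""
        row, col = divmod(idx, 9)
        box = (row // 3) * 3 + col // 3
        used = set()
        for j, v in enumerate(grid):
            if v == 0:
                continue
            r, c = divmod(j, 9)
            if r == row or c == col or (r // 3) * 3 + c // 3 == box:
                used.add(v)
        return set(range(1, 10)) - used

    def naked_single_step(grid):
        """Find one cell whose candidate set contains a single value.

        Returns (row, col, value, rule_name) or None, i.e. one labelled
        solving step of the kind a rule-based step solver could emit as
        supervision for an explaining module.
        """
        for idx, v in enumerate(grid):
            if v != 0:
                continue
            cands = candidates(grid, idx)
            if len(cands) == 1:
                row, col = divmod(idx, 9)
                return row, col, cands.pop(), "naked_single"
        return None

A full step solver of this kind would presumably apply a library of such rules repeatedly until the puzzle is solved, recording each (step, rule) pair as the rule data mentioned in the abstract.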
dc.subject    可解釋人工智慧    zh_TW
dc.subject    循環關係性類神經網路    zh_TW
dc.subject    數獨    zh_TW
dc.subject    Explainable AI    en_US
dc.subject    Recurrent Relational Network    en_US
dc.subject    Sudoku    en_US
dc.title    基於使用循環關係性類神經網路解數獨之規則解釋模組    zh_TW
dc.language.iso    zh-TW    zh-TW
dc.title    Rule-based Explaining Module for Solving Sudoku using Recurrent Relational Network    en_US
dc.type    博碩士論文    zh_TW
dc.type    thesis    en_US
dc.publisher    National Central University    en_US
