DC 欄位 | 值 | 語言
dc.contributor | 資訊工程學系 | zh_TW
dc.creator | 劉宗祐 | zh_TW
dc.creator | LAU CHUNG YAU | en_US
dc.date.accessioned | 2021-08-11T07:39:07Z | |
dc.date.available | 2021-08-11T07:39:07Z | |
dc.date.issued | 2021 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=108522052 | |
dc.contributor.department | 資訊工程學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW
dc.description | National Central University | en_US
dc.description.abstract | 近年來有越來越多關於機器學習的應用,在很多不同的領域中都會使用類神經網路作為輔助的工具來加速以及改善原本工作的表現和效率。雖然機器學習已被廣泛的使用,但是在某一些領域中如果涉及到非常重要的決定的話,則仍然可能會讓人懷疑類神經網路的可信度和可靠度,例如在醫學上的應用。類神經網路讓人懷疑的原因絕大部份是來自於它的「黑盒子」特色。類神經網路的學習只取決於人們給它的資料,但是人們並不懂為甚麼類神經網路會做出那樣的決定。由於類神經網路輸出結果缺乏參考資料,使得人們無法完全信賴類神經網路以及機器學習所帶來的任何決定。
在此,我們提出一個規則解釋模組 (Rule-based Explaining Module) 針對於使用規則來解釋循環關係性類神經網路 (Recurrent Relational Network) 的輸出結果。我們選擇RRN作為我們解釋的對象是因為RRN本身擁有關係的特性能為它自己本身學習解釋自身帶來不少好處。規則解釋模組核心的想法是易於實作和能夠輕易地加入各種類神經網路上,以改善它們輸出結果的說服力。模組需要前置的知識,又或是說規則來進行學習解釋。我們將會討論相關的研究、我們的方法以及最後在解數獨上進行實驗。我們實驗使用了Gordon Royle的Minimum Sudoku作為資料集的基底,並加上我們的規則解數獨步驟器 (Rule-based Sudoku Step Solver) 的處理,最後輸出解數獨所需要的規則資料給予規則解釋模組進行學習。最後我們實驗的辨認規則平均準確度達到98%以上。 | zh_TW |
dc.description.abstract | Nowadays, more and more applications are built on machine learning, and many fields use neural networks as supporting tools to accelerate and improve their work. However, in areas that involve crucial decisions, such as medical applications, people may still doubt whether a neural network is reliable. This doubt stems largely from the network's "black box" nature: a neural network learns only from the data it is given, and people do not know why a trained network makes a particular decision. Without any reference for its outputs, people cannot fully trust the decisions produced by neural networks and machine learning.
Therefore, we propose a Rule-based Explaining Module (REM) that uses rules to explain the outputs of a Recurrent Relational Network (RRN). We choose the RRN as the target of our explanations because its relational structure is well suited to learning to explain its own decisions. The core idea of the module is that it is easy to implement and can be attached to different neural networks to improve the persuasiveness of their outputs; it requires prior knowledge, i.e. rules, in order to learn to explain. We discuss related work, present our method, and finally evaluate it with experiments on solving Sudoku puzzles. We preprocess Gordon Royle's Minimum Sudoku collection and other Sudoku puzzle datasets with our Rule-based Sudoku Step Solver and generate the rule-labelled data for the REM to learn from (an illustrative sketch of such a step solver follows this record). In our experiments, the average accuracy of the recognized rules exceeds 98%. | en_US
dc.subject | 可解釋人工智慧 | zh_TW
dc.subject | 循環關係性類神經網路 | zh_TW
dc.subject | 數獨 | zh_TW
dc.subject | Explainable AI | en_US
dc.subject | Recurrent Relational Network | en_US
dc.subject | Sudoku | en_US
dc.title | 基於使用循環關係性類神經網路解數獨之規則解釋模組 | zh_TW
dc.language.iso | zh-TW | zh-TW |
dc.title | Rule-based Explaining Module for Solving Sudoku using Recurrent Relational Network | en_US
dc.type | 博碩士論文 | zh_TW
dc.type | thesis | en_US
dc.publisher | National Central University | en_US
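The abstract above describes a Rule-based Sudoku Step Solver that labels each deduced digit with the rule justifying it, but the record gives no implementation details. The following Python sketch is a hypothetical illustration only, assuming a minimal rule set (naked single and hidden single within a row) and an assumed (row, col, value, rule) step format; the thesis's actual solver, rule set, and data format are not shown here and may differ.

```python
# Hypothetical sketch of a rule-based Sudoku step solver that emits
# rule-labelled solving steps. Rule names and step format are assumptions,
# not the thesis's actual implementation.
from typing import List, Optional, Tuple

Grid = List[List[int]]  # 9x9 grid, 0 denotes an empty cell


def candidates(grid: Grid, r: int, c: int) -> set:
    """Digits that can legally be placed at cell (r, c)."""
    used = set(grid[r]) | {grid[i][c] for i in range(9)}
    br, bc = 3 * (r // 3), 3 * (c // 3)
    used |= {grid[br + i][bc + j] for i in range(3) for j in range(3)}
    return set(range(1, 10)) - used


def next_step(grid: Grid) -> Optional[Tuple[int, int, int, str]]:
    """Return one deducible step (row, col, value, rule), or None."""
    # Rule 1: naked single -- an empty cell with exactly one candidate.
    for r in range(9):
        for c in range(9):
            if grid[r][c] == 0:
                cand = candidates(grid, r, c)
                if len(cand) == 1:
                    return r, c, cand.pop(), "naked_single"
    # Rule 2: hidden single -- a digit that fits only one cell in a row.
    for r in range(9):
        for d in range(1, 10):
            spots = [c for c in range(9)
                     if grid[r][c] == 0 and d in candidates(grid, r, c)]
            if len(spots) == 1:
                return r, spots[0], d, "hidden_single_row"
    return None


def solve_with_rules(grid: Grid) -> List[Tuple[int, int, int, str]]:
    """Apply the rules repeatedly, recording every step with its rule label.

    Stops when no rule applies, so hard puzzles may be left partially solved.
    """
    steps = []
    step = next_step(grid)
    while step is not None:
        r, c, v, rule = step
        grid[r][c] = v
        steps.append(step)
        step = next_step(grid)
    return steps
```

Each (row, col, value, rule) step produced this way could serve as a training target for a module that learns to name the rule behind a network's prediction, which is the role the abstract assigns to the REM.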