Master's/Doctoral Thesis 110522104 — Complete Metadata Record

DC Field  Value  Language
dc.contributor  Department of Computer Science and Information Engineering  zh_TW
dc.creator  楊景豐  zh_TW
dc.creator  Ching-Feng Yang  en_US
dc.date.accessioned  2023-08-09T07:39:07Z
dc.date.available  2023-08-09T07:39:07Z
dc.date.issued  2023
dc.identifier.uri  http://ir.lib.ncu.edu.tw:88/thesis/view_etd.asp?URN=110522104
dc.contributor.department  Department of Computer Science and Information Engineering  zh_TW
dc.description  National Central University  zh_TW
dc.description  National Central University  en_US
dc.description.abstract  In recent years, with the rapid development of artificial intelligence (AI), AI has changed our lives and many fields, and its impact is difficult to quantify. Its performance has even surpassed humans in some domains, for example in games such as Go, chess, and Texas Hold'em poker. A question that frequently surfaces, however, is that the decision-making process of AI is often a black box: how does it actually make its decisions? This study proposes a deep learning model based on convolutional neural networks that draws on the way the brain's visual cortex operates, together with the concepts of hierarchical architecture and temporal ordering, to explain the decision-making process of deep learning models. The model uses a multi-layer architecture for image classification: after an image is input, it passes through Gaussian convolution and a feature-enhancement mechanism, and the image features are combined according to temporal order and output to the next layer, much as the visual cortex operates when receiving visual signals, where lower-level neurons combine fine-grained information according to temporal order and propagate it through a hierarchical structure. Finally, a fully connected layer converts the output into the image classification result. In the experiments, two datasets, MNIST and Fashion-MNIST, were used, and both performed well. The features are interpreted at each stage, and through feature visualization one can observe that the features of each layer carry a distinct meaning. This is significant for explainable AI, and it also offers new ideas and methods for the development of machine learning and related fields.  zh_TW
dc.description.abstract  In recent years, with the rapid development of artificial intelligence (AI), it has significantly transformed our lives and various domains, and its impact is difficult to quantify. AI has even surpassed humans in performance in certain areas such as Go, chess, and Texas Hold'em poker. However, the decision-making process of AI is often considered a black box, raising the question of how it actually makes decisions. This research proposes a deep learning model based on convolutional neural networks (CNNs) that incorporates the concepts of multi-layer SOM and the functioning of the visual cortex in the human brain to provide interpretability to the decision-making process of deep learning models. This model uses a multi-layer architecture for image classification. When an image is input, it undergoes Gaussian convolution and feature-enhancement mechanisms. The image features are then combined in a temporal sequence and propagated to the next layer, mimicking the operation of the visual cortex in processing visual signals. Lower-level neurons integrate fine-grained information and transmit it hierarchically through the network structure. Finally, a fully connected layer converts the output into the classification result of the image. In our experiment, two datasets, MNIST and Fashion-MNIST, were used, both yielding favorable performance. At each stage the features were explained, and through feature visualization it was observed that each layer's features had a unique significance. This is of paramount importance for explainable AI, providing new insights and methods for the development of machine learning and related fields.  en_US
dc.subject  Explainable Artificial Intelligence  zh_TW
dc.subject  Deep Learning  zh_TW
dc.subject  Visual Cortex  zh_TW
dc.subject  Self-Organizing Maps  zh_TW
dc.subject  Image Classification  zh_TW
dc.subject  Explainable Artificial Intelligence  en_US
dc.subject  Deep Learning  en_US
dc.subject  Visual Cortex  en_US
dc.subject  Self-Organizing Maps  en_US
dc.subject  Image Classification  en_US
dc.title  An Interpretable Deep Learning Model Based on Convolutional Neural Networks  zh_TW
dc.language.iso  zh-TW  zh-TW
dc.title  A CNN-based Interpretable Deep Learning Model  en_US
dc.type  Master's/Doctoral Thesis  zh_TW
dc.type  thesis  en_US
dc.publisher  National Central University  en_US
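The abstract above describes a pipeline of stacked Gaussian-convolution and feature-enhancement layers whose output is flattened into a fully connected classification layer. The following is a minimal sketch of such a pipeline, not the thesis's actual implementation: the `enhance` step (a contrast normalization) and the untrained classifier weights are illustrative assumptions, and the temporal-combination and SOM components are omitted.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def conv2d(img, kernel):
    # naive "valid" 2-D convolution (no padding)
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def enhance(fmap):
    # hypothetical feature-enhancement step: contrast-normalize the map
    return (fmap - fmap.mean()) / (fmap.std() + 1e-8)

def forward(img, n_layers=2, n_classes=10, rng=None):
    # stack of Gaussian-convolution + enhancement layers,
    # then a fully connected (linear) classification layer
    rng = rng or np.random.default_rng(0)
    k = gaussian_kernel()
    x = img
    for _ in range(n_layers):
        x = enhance(conv2d(x, k))
    feats = x.ravel()
    W = rng.standard_normal((n_classes, feats.size))  # untrained, for illustration only
    logits = W @ feats
    return int(np.argmax(logits))

img = np.random.default_rng(1).random((28, 28))  # MNIST-sized dummy input
print(forward(img))
```

With a 28x28 input and two 5x5 "valid" convolutions, the final feature map is 20x20, so the classifier sees a 400-dimensional feature vector.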
