NCU Institutional Repository (theses and dissertations, past exam questions, journal articles, and research projects available for download): Item 987654321/93446


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/93446


    Title: A CNN-based Interpretable Deep Learning Model
    Authors: Yang, Ching-Feng (楊景豐)
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Explainable Artificial Intelligence; Deep Learning; Visual Cortex; Self-Organizing Maps; Image Classification
    Date: 2023-08-09
    Issue Date: 2024-09-19 17:02:02 (UTC+8)
    Publisher: National Central University
    Abstract: In recent years, artificial intelligence (AI) has developed rapidly, transforming our lives and many fields to a degree that is difficult to quantify. AI has even surpassed human performance in games such as Go, chess, and Texas Hold'em poker. However, its decision-making process is often a black box, raising the question of how it actually makes decisions.
    This research proposes a deep learning model based on convolutional neural networks (CNNs) that incorporates the concepts of multi-layer SOM and the functioning of the visual cortex in the human brain to make the decision-making process of deep learning models interpretable. The model uses a multi-layer architecture for image classification. When an image is input, it undergoes Gaussian convolution and feature-enhancement mechanisms; the image features are then combined in a temporal sequence and propagated to the next layer, mimicking how the visual cortex processes visual signals: lower-level neurons integrate fine-grained information and transmit it hierarchically through the network structure. Finally, a fully connected layer converts the output into the image's classification result.
    In our experiments, two datasets, MNIST and Fashion-MNIST, were used, and both yielded favorable performance. At each stage the features were explained, and feature visualization showed that the features of each layer carry a distinct meaning. This is significant for explainable AI and provides new insights and methods for the development of machine learning and related fields.
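The abstract describes a pipeline of Gaussian convolution, feature enhancement, hierarchical propagation, and a final fully connected classification layer. The thesis full text is not included in this record, so the following numpy sketch is only an illustration of that general shape under stated assumptions: the kernel size, the `enhance` normalization, the per-layer sigmas, and the untrained classifier weights are all hypothetical, and the SOM and temporal-combination stages of the actual model are omitted.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    # 2-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return k / k.sum()

def gaussian_convolve(img, kernel):
    # naive 'same' 2-D convolution with zero padding
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

def enhance(feat):
    # hypothetical feature enhancement: mean-subtract, keep positive responses
    return np.maximum(feat - feat.mean(), 0.0)

def forward(img, sigmas=(0.8, 1.2), n_classes=10, seed=0):
    # stack of Gaussian-convolution + enhancement layers (hierarchical stages),
    # followed by an untrained fully connected head (weights are placeholders)
    feat = img.astype(float)
    for s in sigmas:
        feat = enhance(gaussian_convolve(feat, gaussian_kernel(sigma=s)))
    flat = feat.ravel()
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.01, size=(n_classes, flat.size))
    return W @ flat  # class logits

img = np.zeros((28, 28))
img[10:18, 10:18] = 1.0  # toy 28x28 input in place of an MNIST digit
logits = forward(img)
print(logits.shape)  # (10,)
```

In the actual model, the classification head would be trained and each intermediate feature map could be visualized to interpret what that layer responds to, as the abstract describes.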
    Appears in Collections:[Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item: index.html (0 Kb, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.

