NCU Institutional Repository: Item 987654321/92460


    Please use this identifier to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/92460


    Title: Explainable Artificial Intelligence in Education
    Authors: 李信鋌;Li, Shun-Ting
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Explainable Artificial Intelligence; Machine Learning; Learning Analytics
    Date: 2023-07-05
    Issue Date: 2023-10-04 16:02:07 (UTC+8)
    Publisher: National Central University
    Abstract: Previous research has found that machine learning can predict students' learning outcomes well, and that explainable artificial intelligence (XAI) methods can provide explanations for a model's predictions. However, different explanation methods may yield different interpretations of the same data, leaving teaching teams uncertain about which explanation to trust.
    This study uses the LBLS-467 dataset, which records university students' behavioral logs and questionnaire responses from a course in an online learning environment, and applies machine learning methods to identify high-risk and low-risk students.
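
    The abstract does not say which classifiers or features were used, so the following is only a minimal sketch of the risk-classification step, assuming scikit-learn, a random forest, and hypothetical LBLS-467 column names (login_count, video_time, forum_posts, quiz_score, at_risk) and file name (lbls467.csv).

        import pandas as pd
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import classification_report

        # Hypothetical feature columns standing in for the LBLS-467 behavioral
        # logs and questionnaire items; "at_risk" is the binary target label.
        df = pd.read_csv("lbls467.csv")  # hypothetical file name
        features = ["login_count", "video_time", "forum_posts", "quiz_score"]
        X, y = df[features], df["at_risk"]

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.2, stratify=y, random_state=0)

        model = RandomForestClassifier(n_estimators=200, random_state=0)
        model.fit(X_train, y_train)

        # Precision/recall on the at-risk class is the quantity the abstract
        # cares about ("identified the majority of at-risk students").
        print(classification_report(y_test, model.predict(X_test)))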
    We then apply the XAI methods LIME and SHAP to extract explanations for the model's predictions, and assess the stability and faithfulness of those explanations with six evaluation criteria, helping teachers select better explanations and gain a clearer picture of students' current learning status.
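
    LIME and SHAP are the two XAI methods the abstract names. Continuing the sketch above, this shows one common way to apply each to a single student's record with the lime and shap Python packages; the features and model are still the hypothetical ones from the previous block, and TreeExplainer's return shape varies across shap versions, which the code handles explicitly.

        from lime.lime_tabular import LimeTabularExplainer
        import shap

        # LIME: fit a local surrogate model around one student's record.
        lime_explainer = LimeTabularExplainer(
            X_train.values, feature_names=features,
            class_names=["low-risk", "high-risk"], mode="classification")
        lime_exp = lime_explainer.explain_instance(
            X_test.values[0], model.predict_proba, num_features=4)
        print(lime_exp.as_list())  # [(feature condition, weight), ...]

        # SHAP: Shapley-value attributions for the same record.
        shap_explainer = shap.TreeExplainer(model)
        sv = shap_explainer.shap_values(X_test.iloc[[0]])
        if isinstance(sv, list):    # older shap: one array per class
            sv = sv[1]
        elif sv.ndim == 3:          # newer shap: (samples, features, classes)
            sv = sv[..., 1]
        print(dict(zip(features, sv[0])))  # attributions toward "high-risk"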
    The results show that the classifiers identified the majority of at-risk students; that is, at-risk students can be identified from the LBLS-467 dataset. The XAI methods yield explanations for the model's predictions that further help teachers understand students' learning situations, and the six evaluation criteria single out higher-quality explanations, enabling teaching teams to choose better ones.
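
    The abstract does not list the six evaluation criteria, so the following illustrates only one generic stability-style check, not necessarily any of the thesis's criteria: because LIME's sampling is stochastic, explaining the same record twice and comparing the top-k feature sets (Jaccard similarity) gives a simple stability score. Names continue from the sketches above.

        # One illustrative stability check (hypothetical, not taken from the
        # thesis): explain the same student twice and compare top-k features.
        def top_k_features(explanation, k=4):
            return {name for name, _ in explanation.as_list()[:k]}

        def jaccard(a, b):
            return len(a & b) / len(a | b) if a | b else 1.0

        exp1 = lime_explainer.explain_instance(
            X_test.values[0], model.predict_proba, num_features=4)
        exp2 = lime_explainer.explain_instance(
            X_test.values[0], model.predict_proba, num_features=4)

        stability = jaccard(top_k_features(exp1), top_k_features(exp2))
        print(f"top-4 Jaccard stability: {stability:.2f}")  # 1.0 = fully stable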
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Electronic Thesis & Dissertation

    Files in This Item:

    File: index.html (0Kb, HTML)


    All items in NCUIR are protected by copyright, with all rights reserved.
