NCU Institutional Repository (中大機構典藏): Item 987654321/98419


    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98419


    Title: VISTA Learning Model: Designing for Multimodal Learning through Embodied Interaction, Real-Time AI Assessment, and Third-Person Perspective in Digital Reality (original title: VISTA Learning Model:結合具身互動、AI即時評測與第三人稱視角之多模態數位實境學習設計)
    Authors: Liu, Hui-Ting (劉慧婷)
    Contributors: Department of Computer Science and Information Engineering (資訊工程學系)
    Keywords: VISTA Learning Model; Multimodal Interaction; Real-Time AI Assessment; Digital Reality; Embodied Cognition; Situated Learning; Future Self-Continuity; Practice-Based Learning
    Date: 2025-07-29
    Date Uploaded: 2025-10-17 12:45:45 (UTC+8)
    Publisher: National Central University
    Abstract: This study proposes the VISTA Learning Model (Virtual Integrated Sensorimotor-Situated Training with AI), an integrated learning framework that combines embodied interaction, physical object manipulation, real-time multimodal AI assessment, future-self guided motivation, and third-person perspective observation. The model aims to construct a digitally situated learning environment with contextual simulation and instant feedback to address the authentic needs of multimodal knowledge acquisition in practice-oriented disciplines. Traditional classroom teaching often relies on textual and linguistic content, failing to support learners’ deeper engagement in perception, motor coordination, and spatial understanding. Meanwhile, existing virtual reality systems tend to emphasize visual simulation and controller-based operations, lacking integration of bodily and sensory involvement.
    The study was conducted in a professional hospitality course at a university of technology, dividing participants into three groups based on instructional intervention level: (1) a fully integrated intervention group using the VISTA model; (2) a basic intervention group using digital reality with controller-based interaction; and (3) a control group adopting traditional role-play teaching methods. Pre- and post-tests, along with questionnaires, were employed to analyze group differences in learning outcomes, professional identity, future self-continuity, learning motivation, cognitive load, and sense of agency.
    The results indicate that the fully integrated group significantly outperformed the others in multimodal learning outcomes, professional identity, and future self-continuity. The intervention also effectively reduced extraneous cognitive load while enhancing germane load and sense of agency. The study validates the VISTA model's potential in promoting the integration of embodied and multimodal knowledge, enhancing situated learning motivation and self-awareness, and optimizing cognitive resource allocation. It also offers empirical evidence and practical design implications for the digital transformation of practice-oriented professional education.
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      10


    All items in NCUIR are protected by original copyright.

