NCU Institutional Repository: theses and dissertations, past exam questions, journal articles, and research projects. Item 987654321/98636


    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98636


    Title: 結合批次交叉注意力與混合排序損失以強化多面向評分模型; Enhancing Cross-Prompt Automated Essay Scoring with Batch Cross-Attention and Hybrid Ranking Loss
    Authors: 蔡博維; Tsai, Po-Wei
    Contributors: Department of Computer Science and Information Engineering
    Keywords: Automated Essay Scoring; Ranking Loss; Cross-Attention
    Date: 2025-08-28
    Upload time: 2025-10-17 13:02:03 (UTC+8)
    Publisher: National Central University
    Abstract: With the wide application of natural language processing in education, business, and
    generative tasks, multi-trait scoring has become an important challenge in language understanding.
    However, existing methods mostly score from single samples or sample pairs, lacking a global view
    and therefore prone to inconsistencies in ranking transitivity. To address this limitation, this
    study proposes a cross-prompt multi-trait scoring model that combines Batch Cross-Attention with a
    Hybrid Ranking Loss. Batch Cross-Attention lets the model, during training, treat all texts in the
    same batch simultaneously as Query, Key, and Value, using the attention mechanism to capture
    fine-grained differences between samples and their overall distribution, thereby improving the
    stability and comparability of the ranking. The Hybrid Ranking Loss combines a local pairwise rank
    loss with a global list-wise loss, penalizing local ranking errors while maintaining global
    consistency and avoiding transitivity contradictions. The proposed model accommodates essays,
    automatic question generation, and reviews, achieving consistent evaluation across traits such as
    content, organization, and answerability. Experimental results show that, compared with
    traditional point-wise, pair-wise, and purely list-wise methods, our method achieves significant
    improvements in both score agreement (e.g., QWK) and rank correlation (e.g., Kendall's τ),
    demonstrating the effectiveness and generality of Batch Cross-Attention and the Hybrid Ranking
    Loss.;Language education plays a vital role in globalization and cross-cultural communication,
    and Automated Essay Scoring (AES) has gained increasing attention due to its fast and
    consistent assessment capabilities. Traditional AES methods typically adopt prompt-specific
    training, achieving high accuracy on familiar prompts but lacking generalization ability to
    unseen prompts due to the unavailability of annotated data. To address this, recent cross-prompt
    approaches train and test models across multiple prompts, yet most rely on point-wise or
    pair-wise comparisons that learn only relative rankings between pairs of essays, neglecting the
    positioning of individual essays within the overall distribution. Building upon the MOOSE
    framework, this study proposes a two-stage batch-aware ranking and regression framework. In
    the first stage, we introduce Batch Cross-Attention within the MOOSE architecture, allowing
    all essays in the same mini-batch to attend to each other during forward propagation, thereby
    jointly considering global semantic differences. Optimization employs a combination of
    list-wise and pair-wise losses to ensure both global and local ranking consistency. In the second
    stage, predicted ranking scores are discretized into K bins based on quantiles, and bin position
    embeddings are concatenated with original essay features. A Bin Regressor is then trained with
    mean squared error combined with pair-wise loss to fine-tune the continuous scores.
    Experimental results demonstrate that our method improves ranking transitivity and QWK
    regression accuracy across multiple prompts, yielding more stable and interpretable scoring by
    incorporating global ranking information.
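    The first stage described above can be sketched as follows. This is a minimal illustrative sketch in numpy, not the thesis code: `batch_cross_attention` shows the idea of letting every essay embedding in a mini-batch act as Query, Key, and Value at once, and `hybrid_ranking_loss` shows one plausible way to combine a local pairwise hinge loss with a global list-wise (ListNet-style) loss. All function names, the margin, and the mixing weight `alpha` are assumptions for illustration.

    ```python
    import numpy as np

    def batch_cross_attention(H, d_k=None):
        """Attention over a mini-batch of essay embeddings H of shape (B, d).

        Every essay serves simultaneously as Query, Key, and Value, so each
        refined embedding is a similarity-weighted mixture of the whole batch.
        """
        d_k = d_k or H.shape[1]
        scores = H @ H.T / np.sqrt(d_k)                    # (B, B) pairwise similarities
        weights = np.exp(scores - scores.max(axis=1, keepdims=True))
        weights /= weights.sum(axis=1, keepdims=True)      # row-wise softmax
        return weights @ H                                 # (B, d) batch-aware embeddings

    def hybrid_ranking_loss(pred, gold, alpha=0.5, margin=0.1):
        """Pairwise hinge term (local order) + list-wise softmax term (global order)."""
        B = len(pred)
        # Pairwise: penalize pairs whose predicted order contradicts the gold order.
        pair_terms = [max(0.0, margin - (pred[i] - pred[j]))
                      for i in range(B) for j in range(B)
                      if gold[i] > gold[j]]
        pairwise = sum(pair_terms) / max(len(pair_terms), 1)
        # List-wise: cross-entropy between softmax score distributions (ListNet-style).
        p = np.exp(gold - gold.max()); p /= p.sum()
        q = np.exp(pred - pred.max()); q /= q.sum()
        listwise = -(p * np.log(q + 1e-12)).sum()
        return alpha * pairwise + (1 - alpha) * listwise
    ```

    A prediction that preserves the gold ordering incurs a strictly lower hybrid loss than one that reverses it, which is the property the combined objective is meant to enforce.
    
    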
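    The second stage, quantile binning with bin-position embeddings, can likewise be sketched. A minimal numpy illustration under stated assumptions: the function name, the random embedding table (a stand-in for the learned one), and `emb_dim` are hypothetical; the thesis's Bin Regressor trained with MSE plus pairwise loss is not reproduced here.

    ```python
    import numpy as np

    def quantile_bin_features(rank_scores, essay_feats, K=5, emb_dim=4, seed=0):
        """Discretize stage-1 ranking scores into K quantile bins and concatenate
        a bin-position embedding to each essay's original feature vector."""
        # K-1 interior quantile cut points give K equal-mass bins.
        edges = np.quantile(rank_scores, np.linspace(0, 1, K + 1)[1:-1])
        bins = np.digitize(rank_scores, edges)        # bin index in [0, K-1]
        rng = np.random.default_rng(seed)
        bin_emb = rng.normal(size=(K, emb_dim))       # stand-in for a learned table
        # Augmented features: (N, d + emb_dim), input to the downstream regressor.
        return np.concatenate([essay_feats, bin_emb[bins]], axis=1), bins
    ```

    Quantile edges (rather than fixed-width edges) keep the bins balanced regardless of how the stage-1 scores are distributed, which is what makes the bin index a meaningful global-position signal.
    
    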
    Appears in Collections: [Graduate Institute of Computer Science and Information Engineering] Master's and Doctoral Theses

    Files in this item:

    File        Description    Size    Format    Views
    index.html                 0Kb     HTML      6

    All items in NCUIR are protected by copyright, with all rights reserved.

