NCU Academic Digital Archive (NCU Institutional Repository) - theses and dissertations, past exam questions, journal articles, and research projects: Item 987654321/98032


    Please use this identifier to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98032


Title: SAMFNet: A Segment-Aware Multi-Modal Fusion in Impression Management Behavior Detection with Multiple Instance Learning
    Authors: 鍾承翰;Chung, Cheng-Han
Contributors: Institute of Software Engineering
Keywords: Impression Management; Multiple Instance Learning; Multi-Modal Fusion
    Date: 2025-08-04
    Issue Date: 2025-10-17 12:16:28 (UTC+8)
Publisher: National Central University
Abstract: Asynchronous Video Interviews (AVI) present difficulties in
    detecting brief and context-dependent impression management (IM)
    behaviors. We propose SAMFNet, a Segment-Aware Multi-Modal Fusion
    Network that leverages Multiple Instance Learning (MIL) to identify
    localized IM behaviors without requiring strict temporal alignment.
    SAMFNet integrates text, audio, facial expressions, heart rate variability
    (HRV), and eye movement, enabling robust fusion of behavioral cues.
    Trained on a real-world dataset of 121 applicants, we employ
    leave-one-out cross-validation (LOOCV) to ensure reliable evaluation
    under limited data conditions. SAMFNet achieves accuracies of 92%
(Honest–self-promotion), 82% (Honest–defensive), 94% (Deceptive–image
creation), and 84% (Deceptive–image protection). Compared to
    HireNet’s 74.9% accuracy in overall candidate adequacy prediction,
    SAMFNet demonstrates superior performance in fine-grained IM
    detection. The framework is non-invasive, scalable, and suitable for
    practical AVI applications.
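The abstract's core idea — scoring an interview from its segments via Multiple Instance Learning, so that a behavior occurring in only a few segments is detected without strict temporal alignment — can be illustrated with attention-based MIL pooling. This is a generic sketch, not SAMFNet's actual architecture; the function names and the weight vectors `w_att` and `w_clf` are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def softmax(x):
    """Numerically stable softmax over a 1-D array."""
    e = np.exp(x - np.max(x))
    return e / e.sum()

def mil_attention_pool(segments, w_att, w_clf):
    """Pool per-segment embeddings into one bag-level IM probability.

    segments : (n_segments, d) fused multimodal segment embeddings
    w_att    : (d,) attention scoring vector (hypothetical learned weights)
    w_clf    : (d,) bag-level classifier weights (hypothetical learned weights)
    """
    alpha = softmax(segments @ w_att)            # attention over segments, sums to 1
    bag = alpha @ segments                       # attention-weighted bag embedding
    prob = 1.0 / (1.0 + np.exp(-(bag @ w_clf)))  # sigmoid bag-level score
    return prob, alpha

# Toy usage: 8 segments with 4-dimensional fused embeddings.
rng = np.random.default_rng(0)
segments = rng.normal(size=(8, 4))
prob, alpha = mil_attention_pool(segments, rng.normal(size=4), rng.normal(size=4))
```

Under this formulation the interview-level label supervises the whole bag, while the attention weights `alpha` indicate which segments most influenced the decision — one plausible reading of how localized IM behaviors can be identified without segment-level alignment.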
    Appears in Collections:[Software Engineer] Electronic Thesis & Dissertation

    Files in This Item:

File: index.html  |  Size: 0Kb  |  Format: HTML


    All items in NCUIR are protected by copyright, with all rights reserved.

