RC Version 7.0 © Powered By DSPACE, MIT. Enhanced by NTU Library IR team.


    Please use this permanent URL to cite or link to this item: https://ir.lib.ncu.edu.tw/handle/987654321/98479


    Title: Experience-dependent effects on multisensory speech processing in noisy environments: An fMRI study with expert volleyball players
    Author: Lin, Wan-Ting (林婉婷)
    Contributor: Graduate Institute of Cognitive Neuroscience
    Keywords: Multisensory integration; Audiovisual benefits; Speech in noise; Athletes; Musicians
    Date: 2025-08-27
    Date uploaded: 2025-10-17 12:49:50 (UTC+8)
    Publisher: National Central University
    Abstract:
    Daily communication often takes place in noisy environments. Research has shown that combining multisensory cues (such as visual and auditory) can substantially benefit speech comprehension under non-optimal hearing conditions caused by external or internal noise. Sharper multisensory temporal processing may therefore benefit speech perception in noisy environments. Some studies have reported that musicians and athletes, owing to the multisensory nature of their training, show multisensory benefits. However, little research has compared the effects of different types of multisensory experience (such as sports or music) on speech perception. This study therefore used audiovisual (AV) benefits and the temporal binding window (TBW), which reflects the precision of multisensory integration in the brain, as the main indicators to explore how multisensory integration ability affects the behavior and neural mechanisms of speech recognition in noisy environments among musicians, athletes, and non-experts (the control group).
    In the behavioral experiments, participants first completed a synchrony judgment (SJ) task and a temporal-order judgment (TOJ) task using Mandarin Chinese sentences presented at various stimulus onset asynchronies (SOAs: ±360, ±300, ±240, ±180, ±120, ±60, and 0 ms) to measure multisensory temporal integration. In the SJ task, participants judged whether the stimuli were presented synchronously; in the TOJ task, they judged which stimulus (auditory vs. visual) appeared first. Results showed that musicians and volleyball players had significantly smaller TBWs and just noticeable differences (JNDs), suggesting that training experience may enhance sensitivity to audiovisual synchrony. The second part was a speech-in-noise task (SINT) with three conditions (audio-only, visual-only, and audiovisual) combined with different signal-to-noise ratios (SNR = 0, −6, −9, −12 dB). Participants listened to Chinese target sentences and typed the words they heard on a keyboard. Compared with the control group, musicians and volleyball players showed significant audiovisual benefits under noise, whereas no significant difference was found between musicians and volleyball players. In addition, a narrower TBW was associated with greater audiovisual benefits for speech perception in noise. There was no significant difference in speech intelligibility among the three groups under unisensory (audio-only or visual-only) conditions, indicating that the audiovisual speech advantage mainly came from differences in multisensory integration ability. Moreover, music or sports training experience (e.g., total years of training) was positively correlated with audiovisual speech benefits.
These results emphasize the importance of integrating visual and auditory information for speech perception under challenging listening conditions and point to possible facilitative effects of multisensory training activities, such as music and sports, on multisensory speech processing.
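The TBW and JND above are typically read off psychometric curves fitted to the SJ and TOJ responses. A minimal non-parametric sketch in Python of one common approach (linear interpolation of criterion crossings; the response proportions below are invented for illustration, not the thesis's data):

```python
import numpy as np

# Hypothetical response proportions at each SOA (ms; negative = audio leads).
soas = np.array([-360, -300, -240, -180, -120, -60, 0,
                 60, 120, 180, 240, 300, 360], dtype=float)
# SJ task: proportion of "synchronous" responses (peaks near 0 ms).
p_sync = np.array([0.05, 0.10, 0.20, 0.45, 0.75, 0.95, 0.98,
                   0.94, 0.70, 0.40, 0.18, 0.08, 0.04])
# TOJ task: proportion of "visual first" responses (monotonically rising).
p_vf = np.array([0.02, 0.05, 0.10, 0.20, 0.35, 0.45, 0.50,
                 0.60, 0.75, 0.85, 0.92, 0.96, 0.99])

def tbw_from_sj(soa, p, criterion=0.5):
    """Temporal binding window: distance between the two SOAs where the
    synchrony curve crosses `criterion`, interpolated on each flank."""
    peak = int(np.argmax(p))
    left = np.interp(criterion, p[:peak + 1], soa[:peak + 1])
    # Right flank is decreasing, so reverse it for np.interp.
    right = np.interp(criterion, p[peak:][::-1], soa[peak:][::-1])
    return right - left

def jnd_from_toj(soa, p):
    """JND: half the SOA distance between the 25% and 75% points of the
    'visual first' psychometric function."""
    q25 = np.interp(0.25, p, soa)
    q75 = np.interp(0.75, p, soa)
    return (q75 - q25) / 2

tbw = tbw_from_sj(soas, p_sync)   # width of the synchrony window, ms
jnd = jnd_from_toj(soas, p_vf)    # temporal-order sensitivity, ms
```

A narrower TBW and smaller JND, as reported for the musicians and volleyball players, both indicate finer audiovisual temporal resolution.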
    In the fMRI experiment, we compared brain activation between groups under three speech conditions (audio-only, visual-only, and audiovisual) at three noise levels (no noise, SNR = 0 dB, and SNR = −12 dB). The task was to choose, from four options via a key press, the word that had been presented auditorily in the target sentence. Results showed that under unisensory conditions (audio-only and visual-only), the control group exhibited higher activation than the volleyball players in frontal areas closely related to cognitive functions, including the middle frontal gyrus (MFG) and the inferior frontal gyrus (IFG). Under audiovisual conditions, compared with the control group, volleyball players showed stronger activation in the superior temporal gyrus (STG), one of the key hubs for audiovisual integration, and the middle temporal gyrus (MTG), which links speech information to meaning and context. In addition, activation in these two areas was positively correlated with the volleyball players' speech recognition accuracy, indicating that their greater activation of integration-related brain areas may be related to their audiovisual speech processing ability.
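The SNR levels above denote the speech-to-noise power ratio in dB. A minimal sketch of mixing a target sentence with masking noise at a desired SNR (the signals and 16 kHz sampling rate are assumptions for illustration, not the thesis's stimulus-generation code):

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db` dB, then return the mixture."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    # Desired noise power: p_speech / 10^(snr_db / 10).
    target = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target / p_noise)

# Stand-in signals: a 220 Hz tone as "speech", white noise as the masker.
rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.linspace(0, 1, 16000))
noise = rng.standard_normal(16000)
mix = mix_at_snr(speech, noise, -12.0)  # noise is 12 dB above the speech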
    Overall, these results indicate that multisensory integration ability is associated with speech-in-noise perception and may be further enhanced by music and sports training. Neuroimaging data also suggest that the multisensory nature of sports experience may enhance the efficiency of brain areas involved in multisensory speech processing. These findings have implications for the potential use of multisensory training, such as sports or music activities, to enhance speech processing in noisy environments.
    Appears in Collections: [Graduate Institute of Cognitive Neuroscience] Theses and Dissertations

    Files in this item:

    File: index.html (0 Kb, HTML)


    All items in NCUIR are protected by copyright.

