<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>DSpace collection: 博碩士論文</title>
    <link>https://ir.lib.ncu.edu.tw/handle/987654321/365</link>
    <description />
    <textInput>
      <title>The collection's search engine</title>
      <description>Search the Channel</description>
      <name>s</name>
      <link>https://ir.lib.ncu.edu.tw/simple-search</link>
    </textInput>
    <item>
      <title>絕對音感、自閉症特質與靜息態功能性連結關聯性之行為與腦造影研究;Relationship between absolute pitch, autistic traits and resting-state functional connectivity: Behavioral and neuroimaging studies</title>
      <link>https://ir.lib.ncu.edu.tw/handle/987654321/99424</link>
      <description>title: 絕對音感、自閉症特質與靜息態功能性連結關聯性之行為與腦造影研究;Relationship between absolute pitch, autistic traits and resting-state functional connectivity: Behavioral and neuroimaging studies abstract: 絕對音感（Absolute Pitch, AP）與自閉症光譜障礙（Autism Spectrum Disorder, ASD）常共同出現在特定個體中，兩者皆具有遺傳性與連續性，並與非典型的大腦連結模式相關。值得注意的是，在自閉症族群中，絕對音感能力的盛行率估計介於 5% 至 11% 之間。為解釋此絕對音感與自閉症特質之間的重疊現象，真實映射理論認為兩者可能共享某些特定的認知風格與神經連結特徵。然而，目前仍不清楚：（1）自閉症特質是否會影響絕對音感音樂家的表現；（2）這種關聯是否涉及自閉症三大腦網絡模型中的功能性連結異常。為釐清上述問題，我們進行了行為與靜息態功能性磁振造影實驗，以探討絕對音感能力與自閉症特質之間的關聯。
行為實驗共招募了120 名受試者參與，依據絕對音感能力篩選準確度，將音樂家與非音樂家分為絕對音感組、非絕對音感組與非音樂家組，並比較其自閉症特質與音樂表現。所有受試者皆完成自閉症光譜量表，以評估自閉症特質的五個面向。此外，亦施測一系列與絕對音感能力相關的作業，包括音感調整測驗、相對音感辨識與音樂能力測驗。行為結果顯示，絕對音感組音樂家在「想像力」與「社交溝通」相關的自閉症特質分量表上得分顯著高於非絕對音感組與非音樂家。此外，在音樂能力測驗上絕對音感組音樂家音高辨識能力較佳，但在節奏辨識能力方面則無顯著關聯。
我們運用靜息態功能性磁振造影技術探討絕對音感能力與自閉症特質在自閉症三大腦網絡模型中的功能性連結變化。招募了80名受試者參與，依據絕對音感能力篩選準確度，將音樂家與非音樂家分為絕對音感組、非絕對音感組與非音樂家組，並比較其自閉症特質與靜息態大腦功能之關聯性。結果顯示絕對音感組音樂家在預設模式網絡、警覺網絡與額頂葉網絡等自閉症相關核心腦區的功能性連結，與非絕對音感組及非音樂家組相比具有顯著差異。種子至體素與網絡內外分析結果進一步指出，絕對音感組音樂家呈現局部過度連結與整體低連結，且此連結模式與自閉症特質得分呈正相關。特別的是，在額頂葉網絡中，絕對音感組音樂家於中央前迴與額中迴呈現較高的活化程度，且此活化與自閉症特質及音樂節奏辨識能力呈正相關，可能反映兩者在此腦區的共同功能性參與。
實驗結果顯示，來自行為與靜息態功能性磁振造影分析的結果提供了證據，支持絕對音感與自閉症特質之間的關聯性，並指出此關聯可能源自自閉症三大腦網絡模型中的功能性連結異常。儘管音樂訓練經驗與自閉症特質未呈現顯著相關，我們仍觀察到與自閉症特質及音樂能力相關的腦區活化。進一步地，我們排除行為共變數的影響，例如開始音樂訓練的年齡與音樂訓練累積時數。結果顯示，絕對音感音樂家的優勢仍然顯現在顳上迴、緣上迴、黑索氏迴、中央前迴、中央後迴、顳平面等腦區。此結果揭示，長期音樂訓練可能透過大腦可塑性，強化特定神經網絡的功能參與，並為理解音樂能力與自閉症特質的交互作用提供新的研究視角。
;Absolute pitch (AP) and autism spectrum disorder (ASD) frequently co-occur in individuals. Both are heritable and continuous traits and are associated with atypical patterns of brain connectivity. Notably, the prevalence of AP in individuals with ASD has been estimated at 5% to 11%. To account for the overlap between absolute pitch and autistic traits, the veridical mapping theory proposes that the two may share specific cognitive styles and neural connectivity features. However, it remains unclear (1) whether autistic traits influence absolute pitch performance and (2) whether this relationship involves functional connectivity abnormalities within the three core large-scale networks implicated in autism. To address these questions, we conducted both behavioral and resting-state functional magnetic resonance imaging (rs-fMRI) experiments to investigate the association between absolute pitch ability and autistic traits.
In the behavioral experiment, a total of 120 participants were recruited. Based on the accuracy of an absolute pitch screening test, musicians and non-musicians were categorized into AP, non-AP, and non-musician groups. All participants completed the Autism Spectrum Quotient (AQ), which evaluates autistic traits across five domains. In addition, a battery of AP-related tasks was administered, including the Pitch Adjustment Test (PAT), the Relative Pitch Test (RP), and the Advanced Measures of Music Audiation (AMMA). The behavioral results showed that AP musicians scored significantly higher than both non-AP musicians and non-musicians on the AQ subscales related to imagination and social communication. Furthermore, AP musicians outperformed the other groups in tonal ability, while no significant group effects were observed for rhythm ability.
We employed resting-state functional magnetic resonance imaging (rs-fMRI) to examine how AP ability and autistic traits relate to functional connectivity within the triple-network model of autism. We recruited 80 participants and found significant differences in functional connectivity among AP musicians, non-AP musicians, and non-musicians in core autism-related networks, including the Default Mode Network (DMN), Salience Network (SN), and Fronto-Parietal Network (FPN). Seed-to-voxel and within- and between-network analyses further demonstrated that the AP group exhibited a pattern of local hyperconnectivity alongside global hypoconnectivity, which was positively correlated with autistic trait scores. Notably, within the FPN, AP musicians showed increased activation in the precentral gyrus (preCG) and middle frontal gyrus (MFG). This activation was positively correlated with both autistic traits and rhythm ability, which may reflect shared functional involvement in this brain region.
The observed behavioral and neural profiles suggest a meaningful convergence between AP and specific autistic traits, indicating that this relationship may be driven by functional connectivity abnormalities within the triple-network model of autism. Although musical training experience was not significantly correlated with autistic traits, we observed neural activation in regions associated with both autistic traits and musical abilities. To further clarify these results, we controlled for behavioral covariates such as the age of onset of musical training and cumulative training hours. The findings revealed that AP musicians continued to exhibit enhanced activation in key auditory and sensorimotor regions. Overall, long-term musical training may enhance neuroplasticity and shed new light on the link between musical ability and autistic traits.
</description>
      <pubDate>Fri, 06 Mar 2026 10:57:56 GMT</pubDate>
    </item>
    <item>
      <title>音頻拓樸對時間性音高知覺的影響;Tonotopic Effects on Temporal-Based Pitch Perception</title>
      <link>https://ir.lib.ncu.edu.tw/handle/987654321/99421</link>
      <description>title: 音頻拓樸對時間性音高知覺的影響;Tonotopic Effects on Temporal-Based Pitch Perception abstract: 音高知覺是語音理解、音樂欣賞以及聽覺場域分析中的關鍵因素。音高的產生大致來自兩個主要的機制理論：時間理論，源自神經元相位鎖定的放電模式以追蹤聲波的週期性；以及位置理論，源自耳蝸音頻拓樸(tonotopy)上的分佈。雖然兩種機制皆獲得支持，但它們的交互作用仍未被完全釐清，尤其在高頻音，調幅(AM)扮演更重要的角色。音訊中的調幅或包絡如何影響音高知覺，以及載波特性如何影響時間性音高 (time pitch)，仍是未解之謎。 本研究使用轉置音(transposed tones)，將位置(載波)與時間(包絡)線索分離，以探討音頻拓樸對時間性音高的影響。
實驗一，包絡被轉置到 1至10 kHz 的高頻載波以及噪音載波上。透過音高辨別、音程辨別與旋律識別任務，我們確認高頻聲音中的時間包絡能引發強而明顯的音高知覺，其表現隨著載波頻率增加而提升，且在純音載波上優於噪音載波。這些結果顯示，時間包絡對音高知覺的重要性，及其所受音頻拓樸的影響。
實驗二進一步檢驗多載波的轉置音。雖然諧波會產生基音的音高知覺，但此現象在轉置後並不存在。我們發現，轉置音的音高知覺乃是由最低頻載波的包絡週期性所決定的。此外，我們更發現，包絡頻率比載波更能決定轉置音的音高知覺。
為了分析這些結果，我們採用全域希爾伯特頻譜分析（Holo-Hilbert Spectral Analysis, HHSA），這是一種非線性方法，特別適合分析非線性與非穩態訊號，能提供載波頻率與調幅頻率的二維呈現。結果顯示，HHSA 呈現出的主要調幅頻率與轉置音的音高知覺完全相符。
總結而言，本研究證明了時間性的資訊(包絡)能夠在高頻聲音中提供強而明顯的音高知覺。同時，HHSA 提供一個良好的聽覺訊號分析，可以分析出與音高知覺相符的調幅頻率。這些基於轉置音實驗與 HHSA 的研究，不僅深化了在音高知覺中，音頻拓樸與時間訊息交互作用的理解，也為聽覺輔具的發展提出了新的方向。
;Pitch perception is essential to speech understanding, music appreciation, and auditory scene analysis. It arises from two complementary mechanisms: the time theory, derived from phase-locked neural firing patterns tracking waveform periodicity, and the place theory, derived from excitation along the cochlear tonotopic map. While both mechanisms are supported, their interaction remains unresolved, particularly in high-frequency hearing where amplitude modulation (AM) cues dominate. Although AM is preserved throughout the auditory system, how it contributes to pitch perception and how carrier properties shape temporal pitch remain open questions.
This dissertation uses transposed tones, which dissociate spectral (carrier/place) and temporal (envelope/time) cues, to probe tonotopic influences on temporal-based pitch. In Experiment 1, AM envelopes were transposed onto carriers from 1 to 10 kHz and onto noise carriers. Pitch discrimination, interval discrimination, and melody identification tasks confirmed that temporal envelope fluctuations in high-frequency sounds evoke a robust pitch percept. Performance improved with increasing carrier frequency and was stronger for tonal than noise carriers. These findings indicate that pitch information provided by temporal envelope is more pronounced than previously assumed and shaped by tonotopic position.
Experiment 2 extended this by examining transposed tones with multiple carriers. Although harmonic complexes normally produce a fundamental pitch, this phenomenon was not preserved after transposition. We found that the envelope periodicity at the lowest-frequency tonotopic location dominates the pitch of transposed tones. Furthermore, pitch perception of transposed tones was driven more by envelope frequency than by carrier spectrum.
To analyze these results, we employed Holo-Hilbert Spectral Analysis (HHSA), a nonlinear method providing a two-dimensional representation of instantaneous frequency and AM. Unlike Fourier or wavelet analyses, HHSA is adaptive and suitable for nonlinear and non-stationary signals such as speech or music. HHSA consistently revealed the dominant AM frequency that matched perceived pitch.
In summary, this work demonstrates that temporal envelope cues can support robust pitch perception at high frequencies. HHSA further provides a powerful analytic framework to reveal the AM dynamics underlying these percepts. These findings from transposed-tone experiments and HHSA advance understanding of the interplay between spectral and temporal coding in pitch perception and suggest new directions for auditory prosthetics.
</description>
      <pubDate>Fri, 06 Mar 2026 10:57:43 GMT</pubDate>
    </item>
    <item>
      <title>專業排球訓練者的跨感官經驗對嘈雜環境中語句辨識影響的腦神經機制;Experience-dependent effects on multisensory speech processing in noisy environments: An fMRI study with expert volleyball players</title>
      <link>https://ir.lib.ncu.edu.tw/handle/987654321/98479</link>
      <description>title: 專業排球訓練者的跨感官經驗對嘈雜環境中語句辨識影響的腦神經機制;Experience-dependent effects on multisensory speech processing in noisy environments: An fMRI study with expert volleyball players abstract: 日常的對話經常發生於充滿噪音的環境，研究指出，在此種環境中，結合多感官提示(例如: 視覺與聽覺)有助於提升語句理解能力。因此，更敏銳的多感官同步處理能力可能在噪音環境中提升語句理解，其中，音樂家與運動員可能因其多感官訓練經驗，而展現出較佳的整合優勢。然而，至今尚缺乏比較不同類型的多感官經驗 (如：運動、音樂等) 之多感官整合能力對語句理解的影響。因此，本研究以視聽線索效益 (audiovisual benefits) 和反映大腦多感官訊息整合精確度的時間整合窗口 (temporal binding window, TBW) 為主要指標，探討音樂家、運動員和非專業者作為對照組的多感官訓練經驗如何影響在噪音環境中語句辨識之行為與腦神經機制。
本研究的行為實驗包含兩個部分。第一部分為同步判斷作業 (synchrony judgment task, SJ) 與時序判斷作業 (temporal-order judgment task, TOJ)，受試者需對在不同刺激起始非同步 (stimulus onset asynchrony, SOA: ±360, ±300, ±240, ±180, ±120, ±60和0 ms)經由視覺與聽覺呈現的中文語句進行判斷。在同步判斷作業中，受試者需判斷刺激是否同步呈現；在時序判斷作業中，則需判定哪一個刺激先出現。結果顯示，音樂家與排球員的時間整合窗口及恰辨差 (just noticeable difference, JND) 皆較小，推測多感官訓練經驗可能提升其對視聽同步性的敏感度。第二部分為噪音語句辨識任務 (speech-in-noise task, SINT)，包括純音訊、純視覺與視聽整合的語句三種條件，並搭配不同訊噪比 (signal-to-noise ratio, SNR = 0, −6, −9, −12 dB)。受試者需聆聽目標中文語句並以鍵盤輸入所聽到的語句。結果顯示，相較於對照組，音樂家與排球員在噪音條件下展現出較高的視聽線索效益 (audiovisual benefits)。此外，視聽效益越大者，其視聽整合的時間窗口越窄，代表多感官整合能力越好，其同步知覺也越敏銳。三組在單感官 (純音訊或純視覺) 條件下的語句辨識無顯著差異，表示語句辨識優勢主要來自多感官整合能力的差異。同時，也發現音樂與運動訓練的持續時間與視聽效益呈正相關。這些結果顯示整合視覺與聽覺訊息對於噪音環境下語句辨識的重要性，並指出音樂與運動等多感官訓練活動可能促進噪音環境中的語句處理能力。
功能性磁振造影 (functional magnetic resonance imaging, fMRI) 實驗中，測試了運動員和對照組在三種不同視聽條件下之腦區活化情形，受試者需在純音訊、純視覺和視聽整合三種條件搭配三種訊噪比 (SNRs: 無噪音、0 dB、−12 dB) 下進行語句聽辨，並以磁振造影相容之按鍵盒從四個選項中選出語句中曾播放過的單詞做答。結果顯示，在單一感官條件下 (純音訊和純視覺)，對照組在與認知功能高度相關的額葉區域，包含額中回 (middle frontal gyrus, MFG) 和額下回 (inferior frontal gyrus, IFG)會比排球員有更高的激活。在視聽條件下，排球員於視聽整合樞紐區之一的顳上回 (superior temporal gyrus, STG)，以及負責將詞彙訊息連結至語義的顳中回 (middle temporal gyrus, MTG)，有更顯著的活化。此外，這兩區域的活化程度和語句辨識準確率呈現正相關，顯示了運動員較多的整合相關腦區激活可能跟視聽語句處理能力有關聯。
整體而言，本研究結果指出，多感官整合能力與噪音環境中語句辨識的表現有關聯，而這些能力可能透過音樂和運動訓練進一步增強。神經影像資料也支持了運動經驗能強化語句處理中整合視聽覺訊息相關腦區的效能。研究結果期能對於利用多感官訓練來增進在嘈雜環境中的語句處理能力有助益。
;Daily communication often takes place in noisy environments. Research has shown that combining multisensory cues (such as visual and auditory) can substantially benefit speech comprehension under non-optimal hearing conditions, whether the noise is external or internal. Therefore, more acute multisensory processing ability may benefit speech perception in noisy environments. Some studies have reported that musicians and athletes, owing to the multisensory nature of their training experience, show multisensory benefits. However, little research has compared the effects of different types of multisensory experience (such as sports or music) on speech perception. Therefore, this study used audiovisual (AV) benefits and the temporal binding window (TBW), which reflects the precision of multisensory integration in the brain, as the main indicators to explore how multisensory integration ability affects the behavioral and neural mechanisms of speech recognition in noisy environments among musicians, athletes, and non-experts (as a control group).
In the behavioral experiments, participants first completed the synchrony judgment (SJ) and temporal-order judgment (TOJ) tasks using Mandarin Chinese sentences presented at various stimulus onset asynchronies (SOAs: ±360, ±300, ±240, ±180, ±120, ±60, and 0 ms) to measure multisensory integration. In the SJ task, participants judged whether the stimuli were presented synchronously; in the TOJ task, they judged which stimulus (auditory vs. visual) appeared first. Results showed that musicians and volleyball players had significantly smaller TBWs and just noticeable differences (JNDs), suggesting that training experience may enhance sensitivity to audiovisual synchrony. The second part was the speech-in-noise task (SINT), which included three conditions (audio-only, visual-only, and audiovisual) combined with different signal-to-noise ratios (SNR = 0, −6, −9, −12 dB). Participants were required to listen to target Chinese sentences and type the words they heard on a keyboard. Compared with the control group, musicians and volleyball players showed significant audiovisual benefits under noise conditions, whereas no significant difference existed between musicians and volleyball players. In addition, a narrower TBW was associated with greater audiovisual benefits for speech perception in noise. Moreover, there was no significant difference in speech intelligibility among the three groups under unisensory (audio-only or visual-only) conditions, indicating that the audiovisual speech advantage mainly came from differences in multisensory integration ability. Finally, music or sports training experience (e.g., total length of experience in years) was positively correlated with audiovisual speech benefits.
These results emphasize the importance of integrating visual and auditory information for speech perception under challenging listening conditions and point to the possible facilitation effects of multisensory training activities such as music and exercise on multisensory speech processing.
In the fMRI experiment, we compared brain activation among groups under three speech conditions (audio-only, visual-only, and audiovisual) at three noise levels (no noise and SNRs of 0 and −12 dB). The task was to choose, from four options via a key-press response, a word that had been presented auditorily in the target sentence. Results showed that under unisensory conditions (audio-only and visual-only), the control group exhibited higher activation than the volleyball players in frontal areas highly related to cognitive functions, including the middle frontal gyrus (MFG) and the inferior frontal gyrus (IFG). Under audiovisual conditions, compared with the control group, volleyball players had stronger activation in the superior temporal gyrus (STG), one of the key hubs for audiovisual integration, and the middle temporal gyrus (MTG), which is responsible for linking speech information to meaning and context. In addition, the activation levels of these two areas were positively correlated with the volleyball players′ speech recognition accuracy, indicating that their greater activation of integration-related brain areas may be related to their audiovisual speech processing ability.
Overall, these results indicate that multisensory integration ability is associated with speech-in-noise perception. Neuroimaging data also supported that the multisensory nature of sports experience may enhance the effectiveness of brain areas involved in multisensory speech processing. Findings have implications for the potential use of multisensory training, such as via sports or music activities, to enhance speech processing in noisy environments.
</description>
      <pubDate>Fri, 17 Oct 2025 04:49:50 GMT</pubDate>
    </item>
    <item>
      <title>跨文化臉孔記憶辨識：同種族效應、情緒表情與跨群接觸之交互作用;Cross-Cultural Face Memory Recognition: The Interplay of the Own-Race Effect, Emotional Expression, and Intergroup Contact.</title>
      <link>https://ir.lib.ncu.edu.tw/handle/987654321/98475</link>
      <description>title: 跨文化臉孔記憶辨識：同種族效應、情緒表情與跨群接觸之交互作用;Cross-Cultural Face Memory Recognition: The Interplay of the Own-Race Effect, Emotional Expression, and Intergroup Contact. abstract: 本論文旨在探討不同文化群體在人臉辨識記憶中的認知與社會機制，聚焦於同種族效應（Own-Race Effect, ORE）、情緒表達，以及跨種族接觸等因素。研究基於知覺專精（perceptual expertise）、社會分類（social categorization）與注意力導向編碼（attention-based encoding）等理論，設計兩項實驗。第一項實驗以受試者內設計（within-subject design）探討臉孔種族（亞洲、黑人、白人）、參與者族群（台灣人、居住於台灣的非裔後裔、居住於台灣的白人）及臉部情緒表達（憤怒、恐懼、中性）對記憶表現的交互影響。由於試次數不足及白人受試者人數有限，第二項實驗將情緒表達設計為受試者間變項（between-subject factor），並僅納入台灣人與非裔後裔兩組受試者，排除白人群體，以持續探討相同變項的影響。在編碼階段，受試者觀看嵌入中性背景的人臉，並進行性別判斷任務。隨後進行突襲式辨識記憶測驗，分別就人臉與背景在不同區塊中進行「舊／新」判斷，並以1至5分評估其記憶信心水準。此外，研究亦使用一份全面的跨種族接觸問卷，評估受試者一生中於各種社會情境中與不同族群互動的經驗。研究結果顯示，兩組族群皆展現明顯的同種族記憶優勢，並伴隨不對稱的異族效應。非裔後裔受試者不僅對黑人臉孔記憶較佳，對亞洲人臉亦優於白人臉，顯示其在台灣的社會接觸與曝光經驗可能提高其動機與個體化處理傾向。台灣受試者則在黑人臉孔的記憶表現上優於白人臉孔，儘管其自評與白人互動的頻率較高。此結果可由「顯著性導向編碼」（distinctiveness-based encoding）解釋，即在社會上較少見或知覺上更突出的臉孔在編碼階段更容易吸引注意力。此外，情緒表達並未顯著調節記憶表現，無主效應亦無交互作用，顯示在本研究情境中，種族處理與接觸經驗可能為記憶表現的主導因素。綜合而言，本研究透過整合情緒與社會接觸變項，深化對同種族效應的理解，並強調視覺注意、熟悉度與群際互動等因素如何在多元文化脈絡中影響人臉記憶。研究結果亦對實務應用領域（如目擊者辨識與跨文化溝通）提出潛在啟發。;This thesis investigates the cognitive and social mechanisms underlying face recognition memory across different cultural groups, with a focus on the Own-Race Effect (ORE), emotional expression, and interracial contact. Drawing on theories of perceptual expertise, social categorization, and attention-based encoding, two experiments were conducted. The first, using a within-subject design, examined how face race (Asian, Black, Caucasian), participant ethnicity (Taiwanese, African Descendants in Taiwan, Caucasians in Taiwan), and emotional expression (angry, fearful, neutral) jointly influence memory performance. Due to the limited number of trials and an insufficient number of Caucasian participants, the second experiment examined the same factors with emotional expression as a between-subject factor, including only Taiwanese and African Descendant participants and excluding Caucasians.
In the encoding phase, participants viewed faces embedded in neutral backgrounds and made gender judgments. A surprise recognition memory test followed, in which participants made old/new judgments for faces and for backgrounds in separate blocks and rated their memory confidence on a scale of 1 to 5. Additionally, a comprehensive interracial contact questionnaire assessed participants’ lifetime exposure to different racial groups across social settings. Results revealed robust own-race recognition advantages in both ethnic groups, accompanied by asymmetric cross-race effects. African Descendant participants displayed significantly enhanced memory not only for Black faces but also for Asian faces compared to Caucasian faces, likely reflecting increased motivation and individuation due to their exposure and social contact in Taiwan. Taiwanese participants demonstrated significantly better memory for Black faces than for Caucasian faces, despite reporting more frequent contact with the latter. This pattern was interpreted through the lens of distinctiveness-based encoding, whereby socially rare or perceptually salient faces attract greater attention at encoding. Moreover, emotional expression did not significantly modulate recognition memory performance: there were no main effects or interactions. This finding suggests that race-based processing and contact experience may play a more dominant role in memory outcomes under these conditions. These findings advance current understanding of the ORE by integrating emotional and social exposure variables and underscore the nuanced ways in which visual attention, familiarity, and intergroup dynamics shape memory for faces in multicultural contexts. The study offers new insights into the interplay between cognitive mechanisms and social experience, with implications for applied settings such as eyewitness identification and intercultural communication.
</description>
      <pubDate>Fri, 17 Oct 2025 04:49:24 GMT</pubDate>
    </item>
  </channel>
</rss>

