This study investigates how users' trust in artificial intelligence (AI) recommendation systems varies across application contexts, and whether AI literacy and perceived task risk moderate this relationship. As AI technologies become increasingly embedded in everyday decision-making, trust is a key determinant of whether users accept AI recommendations, yet how contextual attributes and individual characteristics interact to shape the formation of trust remains underexplored.

To address this gap, the study adopts a scenario-based experimental questionnaire that manipulates application type (utilitarian vs. hedonic) and task risk (high vs. low), with AI literacy measured as a continuous moderator. A total of 325 valid responses were collected and analyzed using multiple linear regression to test the hypotheses.

The results show that: (1) application context has a significant main effect, with utilitarian tasks eliciting higher trust in AI recommendations than hedonic ones; (2) AI literacy interacts significantly with application type: users with higher AI literacy report greater trust in utilitarian contexts but show little change in hedonic ones, indicating a conditional moderation effect; (3) perceived risk also moderates the effect of application type on trust: under low-risk conditions, utilitarian contexts significantly enhance trust, whereas under high-risk conditions the difference diminishes and the two contexts tend to converge.

These findings extend prior work on how task attributes and user traits jointly shape trust in AI, and they suggest that AI deployment strategies should tailor interface design and trust-building mechanisms to the context type and users' risk sensitivity, offering theoretical and practical implications for AI system development and trust calibration.
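The hypothesis tests described above amount to an OLS model with two interaction terms: application context crossed with AI literacy, and application context crossed with risk level. The sketch below shows how such a moderated regression could be specified in Python with statsmodels; the variable names (trust, context, risk, literacy) and the synthetic data are illustrative assumptions, not the study's actual instrument or dataset.

```python
# Minimal sketch of the moderated regression described in the abstract,
# assuming hypothetical variable names and synthetic data (the study's
# dataset is not reproduced here).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 325  # sample size reported in the abstract

df = pd.DataFrame({
    # 0 = hedonic, 1 = utilitarian (manipulated application context)
    "context": rng.integers(0, 2, n),
    # 0 = low risk, 1 = high risk (manipulated task risk)
    "risk": rng.integers(0, 2, n),
    # AI literacy, measured as a continuous moderator
    "literacy": rng.normal(0, 1, n),
})
# Synthetic outcome built to mimic the reported pattern: a positive
# utilitarian main effect, amplified by literacy, dampened by high risk.
df["trust"] = (
    3.5
    + 0.6 * df["context"]
    + 0.3 * df["context"] * df["literacy"]
    - 0.4 * df["context"] * df["risk"]
    + rng.normal(0, 1, n)
)

# Main effects plus the two hypothesized interaction terms.
model = smf.ols(
    "trust ~ C(context) * literacy + C(context) * C(risk)", data=df
).fit()
print(model.summary())
```

In this specification, the coefficient on the context-by-literacy term captures finding (2), and the coefficient on the context-by-risk term captures finding (3); simple-slope or subgroup comparisons would then unpack each significant interaction.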