As artificial intelligence technologies mature, anthropomorphic chatbots are increasingly embedded in service interactions to enhance user experience and emotional engagement. However, when algorithmic errors cause recommendation failures, they not only undermine functional trust but also amplify emotional disappointment through unmet expectations, negatively affecting users' overall service evaluations and usage intentions. Understanding users' psychological processes after a recommendation failure, and how their trust can then be rebuilt, is therefore a pressing issue. This study employs Social Presence Theory to distinguish the mentalizing and self-referencing dimensions, integrates Recovery Expectation Theory with trust theory to construct a Psychological Expectation Integration Model of the trust-rebuilding process, and incorporates task criticality as a moderator of the effects of recovery expectations on trust, offering a fuller account of how users respond when an anthropomorphic chatbot's recommendation fails. Using a combination of online and paper-based questionnaires, 454 valid responses were collected and analyzed with structural equation modeling. The results show that anthropomorphic design elicits social presence, which in turn shapes users' recovery expectations, only through the mediating mechanism of self-referencing, confirming that users more readily attribute a human mind to a chatbot when the interaction evokes emotional projection and social connection. Emotional recovery expectations exert the strongest positive effect on trust rebuilding, indicating that users particularly value whether the chatbot can alleviate negative emotions through emotional understanding and empathy. Task criticality negatively moderates only the link between functional recovery expectations and trust, suggesting that functional compensation meets rational resistance in high-stakes tasks, whereas the affective trust built by emotional compensation is less dependent on task-related risk. These findings clarify the mediating roles of social presence and recovery expectations in rebuilding trust after recommendation failures and offer theoretical and practical guidance for anthropomorphic chatbot design and compensation strategies.