To ensure instructional stability and economic efficiency in low-resource language (LRL) adaptation, this study addresses the instructional fragmentation, high inference costs, and reduced rule adherence that arise during long-form LLM interactions. Focusing on Taiwanese Hokkien, a language with scarce parallel corpora, we propose a novel Cross-Model Teacher-Student framework that integrates communicative learning with a Knowledge Crystallization mechanism. We employ Gemini 2.5 Pro as the Teacher model, which generates synthetic data and guides Student models (GPT-5.1 and DeepSeek-v3) through interactive dialogue. To avoid the fragmentation and cost of stacking raw dialogue, the Knowledge Crystallization mechanism is designed not merely to accommodate memory constraints but to distill ephemeral interactions into high-purity linguistic rules via metacognitive reflection, yielding portable rules that transfer linguistic expertise from closed-source giants to more cost-effective models. Experimental results demonstrate that interactive strategies significantly outperform passive demonstration in error correction. Notably, DeepSeek-v3 exhibits exceptional adaptability, activating latent multilingual capabilities to approach the performance of current state-of-the-art LLMs without any parameter updates. An efficiency analysis identifies a peak at five interaction turns, empirically confirming that crystallization is essential for maintaining high information density and controlling token consumption. This work establishes a cost-effective, fine-tuning-free paradigm for aligning general-purpose models with LRLs, contributing to the digital preservation of Taiwanese Hokkien.
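For readers who prefer a concrete picture, the interaction loop summarized above could be organized roughly as in the minimal Python sketch below. This is an illustrative assumption only, not the authors' implementation: every name here (Session, teacher_feedback, student_translate, crystallize, MAX_TURNS) is a hypothetical placeholder, and a real system would call the Gemini 2.5 Pro, GPT-5.1, or DeepSeek-v3 APIs instead of the stub functions shown.

```python
"""Illustrative sketch of a cross-model teacher-student loop with knowledge
crystallization. All names are hypothetical placeholders, not a published API."""

from dataclasses import dataclass, field
from typing import List

MAX_TURNS = 5  # the efficiency analysis in the abstract reports a peak at 5 turns


@dataclass
class Session:
    crystallized_rules: List[str] = field(default_factory=list)  # distilled linguistic rules
    dialogue: List[str] = field(default_factory=list)            # raw teacher-student exchanges


def teacher_feedback(source: str, draft: str) -> str:
    """Stub for the Teacher model (e.g. Gemini 2.5 Pro) critiquing a student draft."""
    return f"Feedback on draft for '{source}': check tone sandhi and particle usage."


def student_translate(source: str, rules: List[str]) -> str:
    """Stub for a Student model (e.g. GPT-5.1 or DeepSeek-v3) translating with the
    crystallized rules prepended to its prompt instead of the full raw dialogue."""
    return f"[draft translation of '{source}' conditioned on {len(rules)} rules]"


def crystallize(dialogue: List[str]) -> List[str]:
    """Stub for the metacognitive-reflection step: compress the raw exchange into a
    small set of high-purity rules so the dialogue itself can be discarded."""
    return [f"Rule distilled from {len(dialogue)} feedback turns"]


def run_episode(source_sentence: str) -> str:
    session = Session()
    draft = student_translate(source_sentence, session.crystallized_rules)
    for _ in range(MAX_TURNS):
        # Interactive correction: the teacher critiques, the student retries.
        feedback = teacher_feedback(source_sentence, draft)
        session.dialogue.append(feedback)
        draft = student_translate(source_sentence, session.crystallized_rules)
    # Replace the ephemeral dialogue with compact rules to curb token growth.
    session.crystallized_rules.extend(crystallize(session.dialogue))
    session.dialogue.clear()
    return draft


if __name__ == "__main__":
    print(run_episode("明仔載會落雨無?"))
```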