In this study, we propose a timbre transfer model based on the Diffusion architecture that converts musical pieces performed by various instruments into erhu performances. Our model uses a Pitch Encoder and a Loudness Encoder to extract the pitch and loudness features of the music; these features are then fed as conditioning inputs into the Diffusion Model-based Decoder to generate high-quality erhu timbre. In the experimental section, we systematically evaluated the model's performance using Pitch Accuracy, Cosine Similarity, and Fréchet Audio Distance. The results show that our model achieves a pitch accuracy of 95% to 96% and that the generated erhu timbre closely matches real erhu performances. Furthermore, ablation experiments confirmed the importance of the Loudness Encoder, ensuring that the model correctly generates silent waveforms when given silent input. This study demonstrates the potential of Diffusion-based timbre transfer models in the field of music generation and provides new insights for future research in music generation and timbre transfer.