Master's and Doctoral Theses: Detailed Record for 111522160




Name: Yu-Wen Lei (雷聿文)    Department: Computer Science and Information Engineering
Thesis Title: Melodic Skeleton Generation using Simulated Annealing
Related Theses
★ A Grouping Mechanism Based on Social Relationships in edX Online Discussion Forums
★ A 3D Visualized Facebook Interaction System Built with Kinect
★ An Assessment System for Smart Classrooms Built with Kinect
★ An Intelligent Urban Route-Planning Mechanism for Mobile Device Applications
★ Dynamic Texture Transfer Based on Analysis of Key Momentum Correlations
★ A Seam-Carving System That Preserves Straight-Line Structures in Images
★ A Community Recommendation Mechanism Built on an Open Online Community Learning Environment
★ System Design of an Interactive Situated Learning Environment for English as a Foreign Language
★ An Emotional Color Transfer Mechanism with Skin-Color Preservation
★ A Gesture Recognition Framework for Virtual Keyboards
★ Error Analysis of Fractional-Power Grey Generating Prediction Models and Development of a Computer Toolbox
★ Real-Time Human Skeleton Motion Construction Using Inertial Sensors
★ Real-Time 3D Modeling Based on Multiple Cameras
★ A Grouping Mechanism for Genetic Algorithms Based on Complementarity and Social Network Analysis
★ A Virtual Instrument Performance System with Real-Time Hand Tracking
★ A Real-Time Virtual Instrument Performance System Based on Neural Networks
Files: the full text will be available for online browsing after 2026-07-05
Abstract (Chinese) This study proposes a melodic skeleton generation method based on the simulated annealing algorithm, aiming to develop an approach that can randomly generate melodic structures that are musically coherent and meaningful with respect to the input music while maintaining computational efficiency. The melodic skeleton is a key foundation of the music composition process: it provides the primary melodic framework and rhythmic structure, laying the groundwork for the development of more elaborate melodic detail. The proposed method exploits the strengths of simulated annealing in exploring a vast search space and avoiding local optima, generating melodic skeletons by identifying the relatively important musical elements in a melody and designing criteria for evaluating the quality of the generated melodies.
We describe in detail how the simulated annealing algorithm is applied to this problem, including the representation of the solution space, the design of the objective functions, and the annealing process. To validate the effectiveness of the proposed method, we conducted similarity comparison experiments against the original compositions. The results show that the simulated annealing-based method can randomly generate diverse, high-quality melodic skeletons that conform to musical prior knowledge (see Appendix 1). In addition, we explore potential applications of the method in areas such as automatic accompaniment generation, style transformation, dataset creation, music theory education tools, and interactive music generation systems.
In summary, this study introduces a robust and innovative method for generating melodic skeletons that leverages the randomness of the simulated annealing algorithm together with prior knowledge from music theory, and it shows broad promise for future applications in automatic accompaniment generation, style transformation, dataset creation, music theory education tools, and interactive music generation systems.
Abstract (English) This study proposes a melodic skeleton generation method based on the simulated annealing algorithm, aiming to develop a method that can randomly generate musically coherent and meaningful melodic structures while maintaining computational efficiency. Melodic skeletons are a key foundation of the music composition process, providing the primary melodic framework and rhythmic structure and thus laying the groundwork for the development of more elaborate melodic detail. This study's method leverages the strengths of the simulated annealing algorithm in exploring a vast search space and avoiding local optima by identifying the relatively important musical elements in melodies and designing criteria to evaluate the quality of the generated melodies.
We provide a detailed explanation of how the simulated annealing algorithm is implemented for this problem, including the representation of the solution space, the design of the objective functions, and the annealing process. To validate the effectiveness of the proposed method, we conducted similarity comparison experiments against the original compositions; the results can be found in Appendix 1. They show that the simulated annealing-based method can randomly generate diverse, high-quality melodic skeletons consistent with musical prior knowledge. Additionally, we explored the potential applications of this method in fields such as automatic accompaniment generation, style transformation, dataset creation, music theory education tools, and interactive music generation systems.
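The record does not include the thesis's code, but the workflow described above can be illustrated with a minimal sketch. Everything below is hypothetical: the encoding of a skeleton as a fixed-length list of (MIDI pitch, duration) pairs, the pitch and duration vocabularies, and the toy cost function are illustrative stand-ins for the thesis's music-theory-based objective functions, not the author's implementation.

import math
import random

SCALE = [60, 62, 64, 65, 67, 69, 71, 72]  # assumed pitch vocabulary (C major, MIDI numbers)
DURATIONS = [0.5, 1.0, 2.0]                # assumed rhythmic values, in beats

def random_skeleton(length=8):
    # Sample an initial candidate uniformly from the assumed vocabularies.
    return [(random.choice(SCALE), random.choice(DURATIONS)) for _ in range(length)]

def neighbor(skeleton):
    # Propose a nearby solution by re-sampling one note's pitch or duration.
    s = list(skeleton)
    i = random.randrange(len(s))
    pitch, dur = s[i]
    if random.random() < 0.5:
        s[i] = (random.choice(SCALE), dur)
    else:
        s[i] = (pitch, random.choice(DURATIONS))
    return s

def cost(skeleton):
    # Toy objective: penalize large melodic leaps and non-tonic endings.
    leaps = sum(abs(b[0] - a[0]) for a, b in zip(skeleton, skeleton[1:]))
    cadence = 0 if skeleton[-1][0] % 12 == 0 else 10  # prefer ending on C
    return leaps + cadence

def simulated_annealing(t0=10.0, cooling=0.995, steps=5000):
    current = random_skeleton()
    best = current
    t = t0
    for _ in range(steps):
        candidate = neighbor(current)
        delta = cost(candidate) - cost(current)
        # Always accept improvements; accept worse moves with probability exp(-delta/t),
        # which lets the search escape local optima while the temperature is high.
        if delta <= 0 or random.random() < math.exp(-delta / t):
            current = candidate
            if cost(current) < cost(best):
                best = current
        t *= cooling  # geometric cooling schedule
    return best

if __name__ == "__main__":
    print(simulated_annealing())

Replacing the toy cost with weighted music-theory criteria (interval content, contour, rhythmic salience) recovers the general shape of the method the abstract describes.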
In conclusion, this study introduces a robust and innovative method for generating melodic skeletons, utilizing the randomness of the simulated annealing algorithm and the prior knowledge of music theory. In the future, this method has broad application prospects in automatic accompaniment generation, style transformation, dataset creation, music theory education tools, and interactive music generation systems.
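The table of contents below lists cosine similarity, dynamic time warping (DTW), a DTW-based similarity, and Fréchet Audio Distance as the objective metrics for the similarity comparison experiments. As an illustration of the DTW-based comparison, here is a minimal sketch over two pitch sequences; the normalization in dtw_similarity is one common choice, as the record does not spell out the thesis's exact definition.

def dtw_distance(a, b):
    # Classic dynamic-programming DTW between two numeric sequences
    # (e.g., MIDI pitch contours), with absolute difference as local cost.
    inf = float("inf")
    n, m = len(a), len(b)
    d = [[inf] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            local = abs(a[i - 1] - b[j - 1])
            d[i][j] = local + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def dtw_similarity(a, b):
    # Map the unbounded distance into (0, 1]; higher means more similar.
    return 1.0 / (1.0 + dtw_distance(a, b) / max(len(a), len(b)))

# Example: compare a generated skeleton's pitch contour to the original melody.
print(dtw_similarity([60, 64, 67, 72], [60, 62, 64, 67, 72]))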
Keywords (Chinese and English)
★ simulated annealing algorithm
★ melody skeleton
★ style transformation
★ similarity comparison
★ accompaniment generation
Table of Contents
Chinese Abstract i
English Abstract ii
Table of Contents iv
List of Figures vi
List of Tables vii
I Introduction 1
II Related Works 4
III Background 6
3-1 Musical Structure 6
3-1-1 Duration 6
3-1-2 Pitch 8
3-1-3 Spiral Array 8
3-2 Simulated Annealing Algorithm 12
IV Method 15
4-1 Architecture Overview 15
4-2 Data Preprocessing 15
4-3 Algorithm Implementation Framework 16
4-4 Objective Functions Design 20
V Experiments 23
5-1 Dataset 23
5-2 Characteristics 23
5-3 Objective Metrics 23
5-3-1 Cosine Similarity 24
5-3-2 Dynamic Time Warping 24
5-3-3 Dynamic Time Warping Similarity 25
5-3-4 Fréchet Audio Distance 27
5-4 Evaluation 28
5-4-1 Japanese Nakashi 30
5-4-2 Traditional Jiangnan Style 32
5-4-3 Hakka Folk Song 34
5-4-4 Traditional Qin Style 37
5-4-5 Taiwanese Opera 39
5-4-6 Summary 41
VI Conclusions and Future Work 43
6-1 Conclusions 43
6-2 Future Work 43
References 45
Appendix 48
Appendix 1 48
Appendix 2 49
Advisor: Kuo-Chen Shih (施國琛)    Date of Approval: 2024-07-13
