References
[1] P. Shang, S. Ni, and L. Zhou, “A probabilistic and random method for the generation of Bai nationality music fragments,” in 2021 IEEE 4th International Conference on Multimedia Information Processing and Retrieval (MIPR), 2021, pp. 303–307.
[2] L. Mou, Y. Sun, Y. Tian, Y. Sun, Y. Liu, Z. Zhang, R. He, J. Li, J. Li, Z. Li, F. Gao, Y. Shi, and R. Jain, “MemoMusic 3.0: Considering context at music recommendation and combining music theory at music generation,” in 2023 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2023, pp. 296–301.
[3] W. Wang, X. Li, C. Jin, D. Lu, Q. Zhou, and Y. Tie, “CPS: Full-song and style-conditioned music generation with linear transformer,” in 2022 IEEE International Conference on Multimedia and Expo Workshops (ICMEW), 2022, pp. 1–6.
[4] A. Remesh, A. P. K, and M. S. Sinith, “Symbolic domain music generation system based on LSTM architecture,” in 2022 Second International Conference on Next Generation Intelligent Systems (ICNGIS), 2022, pp. 1–4.
[5] D. Gangal and Y. Kadam, “Unleashing the melodic potential: Music generation with char RNNs,” in 2023 2nd International Conference on Futuristic Technologies (INCOFT), 2023, pp. 1–6.
[6] S. S. Patil, S. H. Patil, A. M. Pawar, R. Shandilya, A. K. Kadam, R. B. Jadhav, and M. S. Bewoor, “Music generation using RNN-LSTM with GRU,” in 2023 International Conference on Integration of Computational Intelligent System (ICICIS), 2023, pp. 1–5.
[7] M. Singhal, B. Saxena, A. P. Singh, and A. Baranwal, “Study of the effectiveness of generative
adversarial networks towards music generation,” in 2023 Second International Conference on
Informatics (ICI), 2023, pp. 1–5.
[8] C.-F. Huang and C.-Y. Huang, “Emotion-based AI music generation system with CVAE-GAN,” in 2020 IEEE Eurasia Conference on IOT, Communication and Engineering (ECICE), 2020, pp. 220–222.
[9] S. Sajad, S. Dharshika, and M. Meleet, “Music generation for novices using recurrent neural network (RNN),” in 2021 International Conference on Innovative Computing, Intelligent Communication and Smart Electrical Systems (ICSES), 2021, pp. 1–6.
[10] J. Wang and C. Li, “Chinese style pop music generation based on recurrent neural network,” in
2022 IEEE 5th Advanced Information Management, Communicates, Electronic and Automation
Control Conference (IMCEC), vol. 5, 2022, pp. 513–516.
[11] L. Yi, H. Hu, J. Zhao, and G. Xia, “AccoMontage2: A complete harmonization and accompaniment arrangement system,” 2022.
[12] Z. Wang, Y. Zhang, Y. Zhang, J. Jiang, R. Yang, J. Zhao, and G. Xia, “PianoTree VAE: Structured representation learning for polyphonic music,” 2020.
[13] Y. Zhao, X. Liu, and T. Su, “Piano accompaniment features and performance processing based
on music feature matching algorithm,” in 2021 IEEE International Conference on Advances in
Electrical Engineering and Computer Applications (AEECA), 2021, pp. 525–529.
[14] H. Niu, “Accompaniment generation based on deep learning and genetic algorithm,” in 2023
IEEE International Conference on Control, Electronics and Computer Technology (ICCECT),
2023, pp. 58–65.
[15] H. Liu, “Improvisational dance piano accompaniment system based on BP neural network,” in 2022 International Conference on Computers and Artificial Intelligence Technologies (CAIT), 2022, pp. 21–25.
[16] Q. Wang, S. Zhang, and L. Zhou, “Emotion-guided music accompaniment generation based on
variational autoencoder,” in 2023 International Joint Conference on Neural Networks (IJCNN),
2023, pp. 1–8.
[17] B. Banar and S. Colton, “Autoregressive self-evaluation: A case study of music generation using
large language models,” in 2023 IEEE Conference on Artificial Intelligence (CAI), 2023, pp.
264–265.
[18] N. Imasato, K. Miyazawa, C. Duncan, and T. Nagai, “Using a language model to generate music
in its symbolic domain while controlling its perceived emotion,” IEEE Access, vol. 11, pp.
52412–52428, 2023.
[19] M. R. Bjare, S. Lattner, and G. Widmer, “Exploring sampling techniques for generating melodies with a transformer language model,” in Proceedings of the 24th International Society for Music Information Retrieval Conference. ISMIR, Dec. 2023, pp. 810–816. [Online]. Available: https://doi.org/10.5281/zenodo.10265411
[20] J. Liu, Y. Dong, Z. Cheng, X. Zhang, X. Li, F. Yu, and M. Sun, “Symphony generation with permutation invariant language model,” in Proceedings of the 23rd International Society for Music Information Retrieval Conference. ISMIR, Nov. 2022, pp. 551–558. [Online]. Available: https://doi.org/10.5281/zenodo.7316722
[21] Y. Guo, Y. Liu, T. Zhou, L. Xu, and Q. Zhang, “An automatic music generation and evaluation method based on transfer learning,” PLOS ONE, vol. 18, no. 5, pp. 1–21, May 2023. [Online]. Available: https://doi.org/10.1371/journal.pone.0283103
[22] A. Radford, J. Wu, R. Child, D. Luan, D. Amodei, and I. Sutskever, “Language models are
unsupervised multitask learners,” 2019.
[23] A. Vaswani, N. Shazeer, N. Parmar, J. Uszkoreit, L. Jones, A. N. Gomez, L. Kaiser, and
I. Polosukhin, “Attention is all you need,” 2023.
[24] R. S. Sutton and A. G. Barto, Reinforcement Learning: An Introduction. MIT Press, 2018.
[25] J. Schulman, F. Wolski, P. Dhariwal, A. Radford, and O. Klimov, “Proximal policy optimization
algorithms,” 2017.
[26] L. von Werra, Y. Belkada, L. Tunstall, E. Beeching, T. Thrush, N. Lambert, and S. Huang, “TRL: Transformer reinforcement learning,” https://github.com/huggingface/trl, 2020.
[27] V. Mnih, K. Kavukcuoglu, D. Silver, A. A. Rusu, J. Veness, M. G. Bellemare, A. Graves,
M. Riedmiller, A. K. Fidjeland, G. Ostrovski, S. Petersen, C. Beattie, A. Sadik, I. Antonoglou,
H. King, D. Kumaran, D. Wierstra, S. Legg, and D. Hassabis, “Human-level control through
deep reinforcement learning,” Nature, vol. 518, no. 7540, pp. 529–533, 2015.
[28] T. P. Lillicrap, J. J. Hunt, A. Pritzel, N. Heess, T. Erez, Y. Tassa, D. Silver, and D. Wierstra,
“Continuous control with deep reinforcement learning,” 2019.
[29] J. Schulman, S. Levine, P. Moritz, M. I. Jordan, and P. Abbeel, “Trust region policy optimization,” 2017.
[30] C. Finn, P. Abbeel, and S. Levine, “Model-agnostic meta-learning for fast adaptation of deep networks,” 2017.