References
[1] C.A.P.E. “Statistics on the number of applicants for music subjects over the years.”
(2024), [Online]. Available: https://www.cape.edu.tw/statistics/ (visited on 05/07/2024).
[2] 我是江老師. “Piano accompanist’s monthly salary? How much per hour? And what does the job actually involve?” (2020), [Online]. Available: https://youtu.be/8MBaTBXLzEw?t=100 (visited on 05/07/2024).
[3] A. Défossez, N. Usunier, L. Bottou, and F. Bach, Music source separation in the waveform domain, 2021.
[4] E. Cano, D. FitzGerald, A. Liutkus, M. D. Plumbley, and F.-R. Stöter, “Musical source
separation: An introduction,” IEEE Signal Processing Magazine, vol. 36, no. 1, pp. 31–
40, 2019.
[5] Z. Rafii, A. Liutkus, F.-R. Stöter, S. I. Mimilakis, D. FitzGerald, and B. Pardo, “An
overview of lead and accompaniment separation in music,” IEEE/ACM Transactions on
Audio, Speech, and Language Processing, vol. 26, no. 8, pp. 1307–1335, 2018.
[6] M. E. P. Davies, “Towards automatic rhythmic accompaniment,” Ph.D. dissertation, Queen Mary, University of London, 2007.
[7] Y. Li, “Application of computer-based auto accompaniment in music education,” International Journal of Emerging Technologies in Learning (iJET), vol. 15, no. 6, pp. 140–
151, 2020.
[8] X. Zhang and C. Liu, “Design of piano automatic accompaniment system based on artificial intelligence algorithm,” in International Conference on Computational Finance
and Business Analytics, Springer, 2023, pp. 249–258.
[9] N. Orio, S. Lemouton, and D. Schwarz, “Score following: State of the art and new developments,” in Proceedings of the International Conference on New Interfaces for Musical Expression (NIME), 2003.
[10] M. Dorfer, A. Arzt, and G. Widmer, “Towards score following in sheet music images,”
arXiv preprint arXiv:1612.05050, 2016.
[11] S. Ji, J. Luo, and X. Yang, “A comprehensive survey on deep music generation: Multi-level representations, algorithms, evaluations, and future directions,” arXiv preprint arXiv:2011.06801, 2020.
[12] C. Hernandez-Olivan and J. R. Beltran, “Music composition with deep learning: A review,” Advances in Speech and Music Technology: Computational Aspects and Applications, pp. 25–50, 2022.
[13] A. Solanki and S. Pandey, “Music instrument recognition using deep convolutional neural networks,” International Journal of Information Technology, vol. 14, no. 3, pp. 1659–
1668, 2022.
[14] K. Racharla, V. Kumar, C. B. Jayant, A. Khairkar, and P. Harish, “Predominant musical
instrument classification based on spectral features,” in 2020 7th International Conference on Signal Processing and Integrated Networks (SPIN), IEEE, 2020, pp. 617–622.
[15] E. Manilow, G. Wichern, and J. Le Roux, “Hierarchical musical instrument separation,”
in ISMIR, 2020, pp. 376–383.
[16] P. Mangla, “Spotify music recommendation systems,” in PyImageSearch, P. Chugh,
A. R. Gosthipaty, S. Huot, K. Kidriavsteva, and R. Raha, Eds., 2023.
[17] Ableton. “Ableton Live 11 Lite.” (2024), [Online]. Available: https://www.ableton.com/en/live/ (visited on 06/04/2024).
[18] Apple. “Logic Pro.” (2024), [Online]. Available: https://www.apple.com/tw/logic-pro/ (visited on 06/04/2024).
[19] PreSonus. “Studio One.” (2024), [Online]. Available: https://www.presonus.com/en/studio-one.html (visited on 06/04/2024).
[20] Ronimusic. “Amazing Slow Downer.” (2024), [Online]. Available: https://www.ronimusic.com/ (visited on 06/04/2024).
[21] forScore. “forScore: Turbocharge your sheet music.” (2024), [Online]. Available: https://forscore.co/ (visited on 06/04/2024).
[22] ISMIR. “International Society for Music Information Retrieval.” (2024), [Online]. Available: https://ismir.net/ (visited on 05/06/2024).
[23] X. Zhao, Q. Tuo, R. Guo, and T. Kong, “Research on music signal processing based on
a blind source separation algorithm,” Annals of Emerging Technologies in Computing
(AETiC), vol. 6, no. 4, 2022.
[24] Y. Mitsufuji, G. Fabbro, S. Uhlich, and F.-R. Stöter, Music Demixing Challenge 2021,
2021.
[25] G. Fabbro, S. Uhlich, C.-H. Lai, et al., “The Sound Demixing Challenge 2023 Music Demixing Track,” arXiv preprint arXiv:2308.06979, 2023.
[26] Z. Wang, K. Zhang, Y. Wang, et al., “SongDriver: Real-time music accompaniment generation without logical latency nor exposure bias,” in Proceedings of the 30th ACM International Conference on Multimedia, 2022, pp. 1057–1067.
[27] F. Ding and Y. Cui, “MuseFlow: Music accompaniment generation based on flow,” Applied Intelligence, vol. 53, no. 20, pp. 23029–23038, 2023.
[28] C. Brazier and G. Widmer, “Improving real-time score following in opera by combining
music with lyrics tracking,” arXiv preprint arXiv:2110.02592, 2021.
[29] Antescofo. “Metronaut app.” (2024), [Online]. Available: https://metronautapp.com/zh-TW (visited on 06/04/2024).
[30] P. Comon, “Independent component analysis, a new concept?” Signal Processing, vol. 36,
no. 3, pp. 287–314, 1994.
[31] D. Lee and H. S. Seung, “Algorithms for non-negative matrix factorization,” Advances in Neural Information Processing Systems, vol. 13, 2000.
[32] A. Maćkiewicz and W. Ratajczak, “Principal components analysis (PCA),” Computers &
Geosciences, vol. 19, no. 3, pp. 303–342, 1993.
[33] F.-R. Stöter, S. Uhlich, A. Liutkus, and Y. Mitsufuji, “Open-Unmix – a reference implementation for music source separation,” Journal of Open Source Software, vol. 4, no. 41, p. 1667, 2019.
[34] Z. Rafii, A. Liutkus, F.-R. Stöter, S. I. Mimilakis, and R. Bittner, The MUSDB18 corpus
for music separation, Dec. 2017.
[35] Y. Luo and J. Yu, “Music Source Separation with Band-split RNN,” arXiv preprint arXiv:2209.15174, 2022.
[36] D. Stoller, S. Ewert, and S. Dixon, “Wave-U-Net: A multi-scale neural network for end-to-end audio source separation,” arXiv preprint arXiv:1806.03185, 2018.
[37] A. Défossez, “Hybrid spectrogram and waveform source separation,” arXiv preprint
arXiv:2111.03600, 2021.
[38] S. Rouard, F. Massa, and A. Défossez, “Hybrid transformers for music source separation,” in ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and
Signal Processing (ICASSP), IEEE, 2023, pp. 1–5.
[39] M. Miron, J. Janer Mestres, and E. Gómez Gutiérrez, “Generating data to train convolutional neural networks for classical music source separation,” in Proceedings of the 14th Sound and Music Computing Conference (SMC), Espoo, Finland: Aalto University, 2017, pp. 227–233.
[40] C.-Y. Chiu, W.-Y. Hsiao, Y.-C. Yeh, Y.-H. Yang, and A. Wen-Yu Su, “Mixing-Specific
Data Augmentation Techniques for Improved Blind Violin/Piano Source Separation,”
arXiv preprint arXiv:2008.02480, 2020.
[41] R. Hennequin, A. Khlif, F. Voituret, and M. Moussallam, “Spleeter: A fast and efficient
music source separation tool with pre-trained models,” Journal of Open Source Software,
vol. 5, no. 50, p. 2154, 2020.
[42] M. Heydari and Z. Duan, “Don't look back: An online beat tracking method using RNN
and enhanced particle filtering,” in ICASSP 2021-2021 IEEE International Conference
on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2021, pp. 236–240.
[43] B. Di Giorgi, M. Mauch, and M. Levy, “Downbeat tracking with tempo-invariant convolutional neural networks,” arXiv preprint arXiv:2102.02282, 2021.
[44] F. Henkel, S. Balke, M. Dorfer, and G. Widmer, “Score following as a multi-modal reinforcement learning problem,” Transactions of the International Society for Music Information Retrieval, vol. 2, no. 1, pp. 67–81, 2019.
[45] P. Cano, A. Loscos, and J. Bonada, “Score-performance matching using HMMs,” in Proceedings of the International Computer Music Conference (ICMC), 1999.
[46] H. Sakoe and S. Chiba, “Dynamic programming algorithm optimization for spoken word
recognition,” IEEE Transactions on Acoustics, Speech, and Signal Processing, vol. 26,
no. 1, pp. 43–49, 1978.
[47] A. Arzt, G. Widmer, and S. Dixon, “Adaptive distance normalization for real-time music tracking,” in 2012 Proceedings of the 20th European Signal Processing Conference
(EUSIPCO), 2012, pp. 2689–2693.
[48] N. Takahashi, T. Yoshihisa, Y. Sakurai, and M. Kanazawa, “A parallelized data stream
processing system using dynamic time warping distance,” in 2009 International Conference on Complex, Intelligent and Software Intensive Systems, IEEE, 2009, pp. 1100–
1105.
[49] I.-C. Wei and L. Su, “Online music performance tracking using parallel dynamic time
warping,” in 2018 IEEE 20th International Workshop on Multimedia Signal Processing
(MMSP), 2018, pp. 1–6.
[50] S. Dixon, “Live tracking of musical performances using on-line time warping,” in Proceedings of the 8th International Conference on Digital Audio Effects (DAFx), 2005, pp. 92–97.
[51] A. Arzt and G. Widmer, “Towards effective ’any-time’ music tracking,” in STAIRS 2010: Proceedings of the Fifth Starting AI Researchers’ Symposium, NLD: IOS Press, 2010, pp. 24–36.
[52] Y.-J. Lin, H.-K. Kao, Y.-C. Tseng, M. Tsai, and L. Su, “A human-computer duet system
for music performance,” in Proceedings of the 28th ACM International Conference on
Multimedia, ser. MM '20, ACM, Oct. 2020.
[53] Python. “Multiprocessing — process-based parallelism.” (2024), [Online]. Available: https://docs.python.org/3/library/multiprocessing.html (visited on 06/02/2024).
[54] Python. “PyAudio package.” (2024), [Online]. Available: https://people.csail.mit.edu/hubert/pyaudio/ (visited on 06/02/2024).
[55] B. McFee, M. McVicar, D. Faronbi, et al., librosa/librosa: 0.10.2.post1, 2024.
[56] Z. K. Abdul and A. K. Al-Talabani, “Mel frequency cepstral coefficient and its applications: A review,” IEEE Access, vol. 10, pp. 122136–122158, 2022.
[57] J. Thickstun, Z. Harchaoui, and S. M. Kakade, “Learning features of music from scratch,”
in International Conference on Learning Representations (ICLR), 2017.
[58] J. Thickstun, Z. Harchaoui, D. P. Foster, and S. M. Kakade, “Invariances and data augmentation for supervised music transcription,” in International Conference on Acoustics,
Speech, and Signal Processing (ICASSP), 2018.
[59] F. J. Muneratti Ortega, Expressive solo violin, 2021.
[60] H.-W. Dong, C. Zhou, T. Berg-Kirkpatrick, and J. McAuley, Bach violin dataset, 2021.
[61] R. Bittner, J. Salamon, M. Tierney, M. Mauch, C. Cannam, and J. Bello, “MedleyDB: A multitrack dataset for annotation-intensive MIR research,” in Proceedings of the 15th International Society for Music Information Retrieval Conference (ISMIR), Oct. 2014.
[62] L. Yu-Jie. “Music source separation dataset.” (2024), [Online]. Available: https://drive.google.com/drive/folders/1IPGv2l-6QjIwMtAq9m0ijQ-ilvTCFjfU?usp=sharing (visited on 06/24/2024).
[63] E. Vincent, R. Gribonval, and C. Févotte, “Performance measurement in blind audio source separation,” IEEE Transactions on Audio, Speech, and Language Processing, vol. 14, pp. 1462–1469, Aug. 2006.
[64] L. Yu-Jie. “Audio files of music source separation results for different melodies.” (2024), [Online]. Available: https://github.com/a0950088/Master/tree/main/paper/resource/4.1 (visited on 07/15/2024).
[65] L. Yu-Jie. “MIDI files and result audio used for the system’s tracking results at different tempi.” (2024), [Online]. Available: https://github.com/a0950088/Master/tree/main/paper/resource/4.2.1 (visited on 06/11/2024).
[66] kopikostar. “Beethoven’s ‘Spring Sonata’ Op. 24, allegro.” (2024), [Online]. Available: https://youtu.be/uDSfijK1qxo?list=PLc0i4xi7nsQRG0UTdRKRc2bxjh9pHYYJw (visited on 06/03/2024).
[67] L. Yu-Jie. “Audio files of the Beethoven music source separation results.” (2024), [Online]. Available: https://github.com/a0950088/Master/tree/main/paper/resource/4.2.2/music%20source%20separation (visited on 06/11/2024).
[68] L. Yu-Jie. “Audio files of tracking results with different features.” (2024), [Online]. Available: https://github.com/a0950088/Master/tree/main/paper/resource/4.2.2 (visited on 06/11/2024).
[69] 林承勳. “Academia Sinica Research for You: Want an impromptu concert tonight? Let an AI virtual musician make it happen!” (2021), [Online]. Available: https://research.sinica.edu.tw/ai-virtual-musician-li-su/ (visited on 06/23/2024).