References
[1] M. Yates, A. Kelemen, and C. Sik Lanyi, “Virtual reality gaming in the rehabilitation of the upper extremities post-stroke,” Brain Injury, vol. 30, no. 7, pp. 855–863, 2016.
[2] A. H. Sadeghi, A. R. Wahadat, A. Dereci, et al., “Remote multidisciplinary heart team meetings in immersive virtual reality: A first experience during the COVID-19 pandemic,” BMJ Innovations, vol. 7, no. 2, 2021.
[3] S. Kavanagh, A. Luxton-Reilly, B. Wuensche, and B. Plimmer, “A systematic review of virtual reality in education,” Themes in Science and Technology Education, vol. 10, no. 2, pp. 85–119, 2017.
[4] E. Degli Innocenti, M. Geronazzo, D. Vescovi, et al., “Mobile virtual reality for musical genre learning in primary education,” Computers & Education, vol. 139, pp. 102–117, 2019.
[5] J. Pirker and A. Dengel, “The potential of 360° virtual reality videos and real VR for education—A literature review,” IEEE Computer Graphics and Applications, vol. 41, no. 4, pp. 76–89, 2021.
[6] R. McCloy and R. Stone, “Virtual reality in surgery,” BMJ, vol. 323, no. 7318, pp. 912–915, 2001.
[7] S. Serafin, C. Erkut, J. Kojs, N. C. Nilsson, and R. Nordahl, “Virtual reality musical instruments: State of the art, design principles, and future directions,” Computer Music Journal, vol. 40, no. 3, pp. 22–40, 2016.
[8] A. Broersen and A. Nijholt, “Developing a virtual piano playing environment,” in IEEE International Conference on Advanced Learning Technologies (ICALT 2002), 2002, pp. 278–282.
[9] A. Goodwin and R. Green, “Key detection for a virtual piano teacher,” in 2013 28th International Conference on Image and Vision Computing New Zealand (IVCNZ 2013), IEEE, 2013, pp. 282–287.
[10] T. Ishiyama and T. Kitahara, “A prototype of virtual drum performance system with a head-mounted display,” in 2019 IEEE Conference on Virtual Reality and 3D User Interfaces (VR), IEEE, 2019, pp. 990–991.
[11] K. E. Onderdijk, L. Bouckaert, E. Van Dyck, and P.-J. Maes, “Concert experiences in virtual reality environments,” Virtual Reality, pp. 1–14, 2023.
[12] C. Jin, F. Wu, J. Wang, Y. Liu, Z. Guan, and Z. Han, “MetaMGC: A music generation framework for concerts in metaverse,” EURASIP Journal on Audio, Speech, and Music Processing, vol. 2022, no. 1, pp. 1–15, 2022.
[13] B. F. G. Katz, D. Poirier-Quinot, and J.-M. Lyzwa, “La Vierge 2020: Reconstructing a virtual concert performance through historic auralisation of Notre-Dame Cathedral,” in 2021 Immersive and 3D Audio: from Architecture to Automotive (I3DA), IEEE, 2021, pp. 1–9.
[14] M. Karjalainen, T. Mäki-Patola, A. Kanerva, and A. Huovilainen, “Virtual air guitar,” Journal of the Audio Engineering Society, vol. 54, no. 10, pp. 964–980, 2006.
[15] L. S. Figueiredo, J. M. X. N. Teixeira, A. S. Cavalcanti, V. Teichrieb, and J. Kelner, “An open-source framework for air guitar games,” in 2009 VIII Brazilian Symposium on Games and Digital Entertainment, 2009, pp. 74–82.
[16] T. Ooaku, T. D. Linh, M. Arai, T. Maekawa, and K. Mizutani, “Guitar chord recognition based on finger patterns with deep learning,” in Proceedings of the 4th International Conference on Communication and Information Processing, 2018, pp. 54–57.
[17] 周恒瑋, “Design of a virtual guitar music performance system based on deep learning (基於深度學習之虛擬吉他音樂演奏系統設計),” 2020.
[18] R. S. Armiger and R. J. Vogelstein, “Air-Guitar Hero: A real-time video game interface for training and evaluation of dexterous upper-extremity neuroprosthetic control algorithms,” in 2008 IEEE Biomedical Circuits and Systems Conference, IEEE, 2008, pp. 121–124.
[19] Anotherway. “Unplugged: Air Guitar.” (2023), [Online]. Available: https://unpluggedairguitar.com/ (visited on 06/15/2023).
[20] L. R. Skreinig, A. Stanescu, S. Mori, et al., “AR Hero: Generating interactive augmented reality guitar tutorials,” in 2022 IEEE Conference on Virtual Reality and 3D User Interfaces Abstracts and Workshops (VRW), IEEE, 2022, pp. 395–401.
[21] D. Meagher, I. Murphy, M. Mulligan, P. Bolger, A. Leahy, and H. Moss, “Developing an air guitar group for an inpatient psychiatry unit: A pilot study,” Music and Medicine, 2020.
[22] T. Vanhatalo, P. Legrand, M. Desainte-Catherine, et al., “A review of neural network-based emulation of guitar amplifiers,” Applied Sciences, vol. 12, no. 12, p. 5894, 2022.
[23] C. J. Steinmetz and J. D. Reiss, “Efficient neural networks for real-time modeling of analog dynamic range compression,” in Audio Engineering Society Convention 152, 2022.
[24] D. Sudholt, A. Wright, C. Erkut, and V. Välimäki, “Pruning deep neural network models of guitar distortion effects,” IEEE/ACM Transactions on Audio, Speech, and Language Processing, vol. 31, pp. 256–264, 2023.
[25] A. Wright, E.-P. Damskägg, L. Juvela, and V. Välimäki, “Real-time guitar amplifier emulation with deep learning,” Applied Sciences, vol. 10, no. 3, p. 766, Jan. 2020.
[26] P. Bognár, “Audio effect modeling with deep learning methods,” Ph.D. dissertation, Vienna, 2022.
[27] E.-P. Damskägg, L. Juvela, E. Thuillier, and V. Välimäki, “Deep learning for tube amplifier emulation,” in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2019, pp. 471–475.
[28] IK Multimedia. “AmpliTube 5: The tone.” (2023), [Online]. Available: https://www.ikmultimedia.com/products/amplitube5/?pkey=amplitube-5#thetone (visited on 06/14/2023).
[29] Wiwi. “So 66% of pop songs use these 3 chord progressions? The 2019 pop song chord survey (原來百分之 66 的流行歌,都是用這 3 種和弦進行?2019 流行歌和弦大調查).” (2019), [Online]. Available: https://youtu.be/zL_14UGziy4 (visited on 06/14/2023).
[30] Fender. “Fender ’68 Custom Princeton Reverb.” (2023), [Online]. Available: https://www.muziker.hu/fender-68-custom-princeton-reverb (visited on 06/14/2023).
[31] Musora. “What guitar pedals should you buy? (beginner’s guide).” (2022), [Online]. Available: https://youtu.be/VfYM-wWJNWw (visited on 06/14/2023).
[32] D. Roos. “How making music with MIDI works.” (2008), [Online]. Available: https://entertainment.howstuffworks.com/midi.htm (visited on 06/14/2023).
[33] S. Han, B. Liu, R. Cabezas, et al., “MEgATrack: Monochrome egocentric articulated hand-tracking for virtual reality,” ACM Trans. Graph., vol. 39, no. 4, Aug. 2020.
[34] S. Han, P.-C. Wu, Y. Zhang, et al., “UmeTrack: Unified multi-view end-to-end hand tracking for VR,” in SIGGRAPH Asia 2022 Conference Papers, ser. SA ’22, Daegu, Republic of Korea: Association for Computing Machinery, 2022.
[35] D. Abdlkarim, M. Di Luca, P. Aves, et al., “A methodological framework to assess the accuracy of virtual reality hand-tracking systems: A case study with the Oculus Quest 2,” bioRxiv preprint, 2022.
[36] A. Adhikari, T. S. Rao, K. Kar, and T. Dhruw, “Computer vision based virtual musical instruments,” Mathematical Statistician and Engineering Applications, vol. 71, no. 4, pp. 9600–9608, 2022.
[37] J. McGowan, G. Leplâtre, and I. McGregor, “CymaSense: A novel audio-visual therapeutic tool for people on the autism spectrum,” in Proceedings of the 19th International ACM SIGACCESS Conference on Computers and Accessibility, 2017, pp. 62–71.
[38] J. A. Deja, J. P. Tobias, R. C. Roque, D. David, and K. Chan, “Towards modeling guitar chord fretboard finger positioning using electromyography,” in Proceedings of the 17th Philippine Computing Science Congress, Cebu, Philippines, 2017, pp. 16–18.
[39] V. Nagpurkar, N. Pattankar, T. Nayak, A. D'Souza, and N. Henriques, “GuitarGuru: A realtime guitar chords detection system,” in 2023 International Conference on Communication System, Computing and IT Applications (CSCITA), 2023, pp. 107–112.
[40] J. Pakarinen and D. T. Yeh, “A review of digital techniques for modeling vacuum-tube guitar amplifiers,” Computer Music Journal, vol. 33, no. 2, pp. 85–100, 2009.
[41] D. T.-M. Yeh, “Digital implementation of musical distortion circuits by analysis and simulation,” Ph.D. dissertation, Stanford University, 2009.
[42] B. Kuznetsov, J. D. Parker, and F. Esqueda, “Differentiable IIR filters for machine learning applications,” in Proc. Int. Conf. Digital Audio Effects (eDAFx-20), 2020, pp. 297–303.
[43] J. D. Parker, F. Esqueda, and A. Bergner, “Modelling of nonlinear state-space systems using a deep neural network,” in Proceedings of the International Conference on Digital Audio Effects (DAFx), Birmingham, UK, 2019, pp. 2–6.
[44] J. Wilczek, A. Wright, V. Välimäki, and E. Habets, “Virtual analog modeling of distortion circuits using neural ordinary differential equations,” arXiv preprint arXiv:2205.01897, 2022.
[45] E. Perez, F. Strub, H. De Vries, V. Dumoulin, and A. Courville, “FiLM: Visual reasoning with a general conditioning layer,” in Proceedings of the AAAI Conference on Artificial Intelligence, vol. 32, 2018.
[46] OpenXR Toolkit. “OpenXR hand tracking.” (2023), [Online]. Available: https://mbucchia.github.io/OpenXR-Toolkit/hand-tracking.html (visited on 06/14/2023).
[47] N. Juillerat, S. M. Arisona, and S. Schubiger-Banz, “Enhancing the quality of audio transformations using the multi-scale short-time fourier transform,” in Proceedings of the 10th IASTED International Conference on Signal and Image Processing, vol. 623, 2008, p. 054.
[48] R. Yamamoto, E. Song, and J.-M. Kim, “Parallel WaveGAN: A fast waveform generation model based on generative adversarial networks with multi-resolution spectrogram,” in ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), IEEE, 2020, pp. 6199–6203.
[49] W. Jang, D. Lim, J. Yoon, B. Kim, and J. Kim, “UnivNet: A neural vocoder with multi-resolution spectrogram discriminators for high-fidelity waveform generation,” arXiv preprint arXiv:2106.07889, 2021.
[50] A. van den Oord, S. Dieleman, H. Zen, et al., “WaveNet: A generative model for raw audio,” arXiv preprint arXiv:1609.03499, 2016.
[51] Audacity. “Audacity real-time effect.” (2023), [Online]. Available: https://support.audacityteam.org/audio-editing/using-realtime-effects (visited on 06/15/2023).
[52] GuitarML. “PedalNetRT: Real-time guitar pedal emulation.” (2020), [Online]. Available: https://github.com/GuitarML/PedalNetRT (visited on 07/11/2023).