References
[1] Ghani, D. A., & Ishak, M. S. B. A. (2011). Preserving Wayang Kulit for Future Generations. IEEE MultiMedia, 18(4), 70-74.
[2] Ghani, D. B. A. (2012). Seri Rama: Converting a shadow play puppet to street fighter. IEEE Computer Graphics and Applications, 32(1), 8-11.
[3] Yang, L., Zhang, L., Dong, H., Alelaiwi, A., & El Saddik, A. (2015). Evaluating and improving the depth accuracy of Kinect for Windows v2. IEEE Sensors Journal, 15(8), 4275-4285.
[4] Jais, H. M., Mahayuddin, Z. R., & Arshad, H. (2015, August). A review on gesture recognition using Kinect. In Electrical Engineering and Informatics (ICEEI), 2015 International Conference on (pp. 594-599). IEEE.
[5] Aziz, M. A. A., Niu, J., Zhao, X., & Li, X. (2016). Efficient and robust learning for sustainable and reacquisition-enabled hand tracking. IEEE Transactions on Cybernetics, 46(4), 945-958.
[6] Morariu, V. I., Harwood, D., & Davis, L. S. (2013). Tracking people's hands and feet using mixed network and/or search. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(5), 1248-1262.
[7] Wen, L., Lei, Z., Lyu, S., Li, S. Z., & Yang, M. H. (2016). Exploiting hierarchical dense structures on hypergraphs for multi-object tracking. IEEE Transactions on Pattern Analysis and Machine Intelligence, 38(10), 1983-1996.
[8] Choi, J., & Maurer, M. (2016). Local volumetric hybrid-map-based simultaneous localization and mapping with moving object tracking. IEEE Transactions on Intelligent Transportation Systems, 17(9), 2440-2455.
[9] Zhang, X., Li, W., Ye, X., & Maybank, S. (2015). Robust hand tracking via novel multi-cue integration. Neurocomputing, 157, 296-305.
[10] Wu, X., Mao, X., Chen, L., Xue, Y., & Rovetta, A. (2015). Depth image-based hand tracking in complex scene. Optik-International Journal for Light and Electron Optics, 126(20), 2757-2763.
[11] Kim, J., Yu, S., Kim, D., Toh, K. A., & Lee, S. (2017). An adaptive local binary pattern for 3D hand tracking. Pattern Recognition, 61, 139-152.
[12] Sun, L., Liu, G., & Liu, Y. (2014). 3D hand tracking with head mounted gaze-directed camera. IEEE Sensors Journal, 14(5), 1380-1390.
[13] Kusumanugraha, S., Ito, A., Mikami, K., & Kondo, K. (2011, October). An Analysis of Indonesian Traditional "Wayang Kulit" Puppet 3D Shapes Based on Their Roles in the Story. In Culture and Computing (Culture Computing), 2011 Second International Conference on (pp. 147-148). IEEE.
[14] Tomo, T. P., Enriquez, G., & Hashimoto, S. (2015, December). Indonesian puppet theater robot with gamelan music emotion recognition. In Robotics and Biomimetics (ROBIO), 2015 IEEE International Conference on (pp. 1177-1182). IEEE.
[15] Kia, K. K., & Chan, Y. M. (2009, August). A study on the visual styles of Wayang Kulit Kelantan and its capturing methods. In Computer Graphics, Imaging and Visualization, 2009. CGIV '09. Sixth International Conference on (pp. 423-428). IEEE.
[16] Ahmad, J., & Jamaludin, Z. (2014, September). Embedding interaction design in Wayang Kulit mathematics courseware. In User Science and Engineering (i-USEr), 2014 3rd International Conference on (pp. 7-12). IEEE.
[17] Ghani, D. A. (2011, April). Wayang Kulit: Digital puppetry character rigging using Maya MEL language. In Modeling, Simulation and Applied Optimization (ICMSAO), 2011 4th International Conference on (pp. 1-5). IEEE.
[18] Grahita, B., Komma, T., & Kushiyama, K. (2013, September). CG Programming Approach to Generate Pattern of Wayang Beber Pacitan Character's Cloth. In Culture and Computing (Culture Computing), 2013 International Conference on (pp. 183-184). IEEE.
[19] Bhawar, P., Ayer, N., & Sahasrabudhe, S. (2013, December). Methodology to create optimized 3D models using Blender for Android devices. In Technology for Education (T4E), 2013 IEEE Fifth International Conference on (pp. 139-142). IEEE.
[20] Baglivo, A., Ponti, F. D., De Luca, D., Guidazzoli, A., Liguori, M. C., & Fanini, B. (2013, October). X3D/X3DOM, Blender Game Engine and OSG4WEB: open source visualisation for cultural heritage environments. In Digital Heritage International Congress (DigitalHeritage), 2013 (Vol. 2, pp. 711-718). IEEE.
[21] Dere, S., Sahasrabudhe, S., & Iyer, S. (2010, July). Creating open source repository of 3D models of laboratory equipments using Blender. In Technology for Education (T4E), 2010 International Conference on (pp. 149-156). IEEE.
[22] Starzyk, J. A., & Raif, P. (2013, April). Cognitive agent and its implementation in the blender game engine environment. In Computational Intelligence for Human-like Intelligence (CIHLI), 2013 IEEE Symposium on (pp. 1-8). IEEE.
[23] Kadam, K., & Iyer, S. (2014, July). Improvement of Problem Solving Skills in Engineering Drawing Using Blender Based Mental Rotation Training. In Advanced Learning Technologies (ICALT), 2014 IEEE 14th International Conference on (pp. 401-402). IEEE.
[24] Haschka, T., Dauchez, M., & Henon, E. (2015, March). Visualization of molecular properties at the quantum mechanical level using Blender. In Virtual and Augmented Reality for Molecular Science (VARMS@IEEEVR), 2015 IEEE 1st International Workshop on (pp. 7-13). IEEE.
[25] Kadam, K., & Iyer, S. (2015, July). Impact of Blender based 3D mental rotation ability training on engineering drawing skills. In Advanced Learning Technologies (ICALT), 2015 IEEE 15th International Conference on (pp. 370-374). IEEE.
[26] Jiyuan, L., & Wenfeng, H. (2016, May). Development of puzzle game about children's etiquette based on Unity3D. In Software Engineering, Artificial Intelligence, Networking and Parallel/Distributed Computing (SNPD), 2016 17th IEEE/ACIS International Conference on (pp. 495-500). IEEE.
[27] Jie, J., Yang, K., & Haihui, S. (2011, October). Research on the 3D game scene optimization of mobile phone based on the Unity 3D engine. In Computational and Information Sciences (ICCIS), 2011 International Conference on (pp. 875-877). IEEE.
[28] Mattingly, W. A., Chang, D. J., Paris, R., Smith, N., Blevins, J., & Ouyang, M. (2012, July). Robot design using Unity for computer games and robotic simulations. In Computer Games (CGAMES), 2012 17th International Conference on (pp. 56-59). IEEE.
[29] Pires, F. A., Santos, W. M., Andrade, K. D. O., Caurin, G. A., & Siqueira, A. A. (2014, May). Robotic platform for telerehabilitation studies based on Unity game engine. In Serious Games and Applications for Health (SeGAH), 2014 IEEE 3rd International Conference on (pp. 1-6). IEEE.
[30] Bartneck, C., Soucy, M., Fleuret, K., & Sandoval, E. B. (2015, August). The robot engine: Making the Unity 3D game engine work for HRI. In Robot and Human Interactive Communication (RO-MAN), 2015 24th IEEE International Symposium on (pp. 431-437). IEEE.
[31] Zhong, H., & Xiao, J. (2015, September). Apply technology acceptance model with big data analytics and Unity game engine. In Software Engineering and Service Science (ICSESS), 2015 6th IEEE International Conference on (pp. 19-24). IEEE.
[32] Adinandra, S., Adhilaga, N. A., & Erfawan, D. (2015, October). WayBot: A low cost manipulator for playing Javanese puppet. In Information Technology and Electrical Engineering (ICITEE), 2015 7th International Conference on (pp. 376-381). IEEE.
[33] Ahmed, M., Idrees, M., ul Abideen, Z., Mumtaz, R., & Khalique, S. (2016, July). Deaf talk using 3D animated sign language: A sign language interpreter using Microsoft's Kinect v2. In SAI Computing Conference (SAI), 2016 (pp. 330-335). IEEE.
[34] Liu, L., & Mehrotra, S. (2016, August). Patient walk detection in hospital room using Microsoft Kinect V2. In Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the (pp. 4395-4398). IEEE.
[35] Samoil, S., & Yanushkevich, S. N. (2016, July). Multispectral hand recognition using the Kinect v2 sensor. In Evolutionary Computation (CEC), 2016 IEEE Congress on (pp. 4258-4264). IEEE.
[36] Noonan, P. J., Ma, J., Cole, D., Howard, J., Hallett, W. A., Glocker, B., & Gunn, R. (2015, October). Simultaneous multiple Kinect v2 for extended field of view motion tracking. In Nuclear Science Symposium and Medical Imaging Conference (NSS/MIC), 2015 IEEE (pp. 1-4). IEEE.
[37] Darby, J., Sánchez, M. B., Butler, P. B., & Loram, I. D. (2016). An evaluation of 3D head pose estimation using the Microsoft Kinect v2. Gait & Posture, 48, 83-88.
[38] Capecci, M., Ceravolo, M. G., Ferracuti, F., Iarlori, S., Longhi, S., Romeo, L., ... & Verdini, F. (2016, August). Accuracy evaluation of the Kinect v2 sensor during dynamic movements in a rehabilitation scenario. In Engineering in Medicine and Biology Society (EMBC), 2016 IEEE 38th Annual International Conference of the (pp. 5409-5412). IEEE.
[39] Corti, A., Giancola, S., Mainetti, G., & Sala, R. (2016). A metrological characterization of the Kinect V2 time-of-flight camera. Robotics and Autonomous Systems, 75, 584-594.
[40] Ning, X., & Guo, G. (2013). Assessing spinal loading using the Kinect depth sensor: A feasibility study. IEEE Sensors Journal, 13(4), 1139-1140.
[41] Wang, J., Xiong, Z., Wang, Z., Zhang, Y., & Wu, F. (2016). FPGA Design and Implementation of Kinect-Like Depth Sensing. IEEE Transactions on Circuits and Systems for Video Technology, 26(6), 1175-1186.
[42] Stommel, M., Beetz, M., & Xu, W. (2014). Inpainting of Missing Values in the Kinect Sensor's Depth Maps Based on Background Estimates. IEEE Sensors Journal, 14(4), 1107-1116.
[43] Liu, K., Chen, C., Jafari, R., & Kehtarnavaz, N. (2014). Fusion of inertial and depth sensor data for robust hand gesture recognition. IEEE Sensors Journal, 14(6), 1898-1903.
[44] Landau, M. J., Choo, B. Y., & Beling, P. A. (2016). Simulating Kinect infrared and depth images. IEEE Transactions on Cybernetics, 46(12), 3018-3031.
[45] Hou, J., Gao, H., & Li, X. (2016). DSets-DBSCAN: A parameter-free clustering algorithm. IEEE Transactions on Image Processing, 25(7), 3182-3193.
[46] Edla, D. R., & Jana, P. K. (2012). A prototype-based modified DBSCAN for gene clustering. Procedia Technology, 6, 485-492.
[47] Kumar, K. M., & Reddy, A. R. M. (2016). A fast DBSCAN clustering algorithm by accelerating neighbor searching using Groups method. Pattern Recognition, 58, 39-48.
[48] Dudik, J. M., Kurosu, A., Coyle, J. L., & Sejdić, E. (2015). A comparative analysis of DBSCAN, K-means, and quadratic variation algorithms for automatic identification of swallows from swallowing accelerometry signals. Computers in biology and medicine, 59, 10-18.
[49] Shen, J., Hao, X., Liang, Z., Liu, Y., Wang, W., & Shao, L. (2016). Real-Time Superpixel Segmentation by DBSCAN Clustering Algorithm. IEEE Transactions on Image Processing, 25(12), 5933-5942.
[50] D’Orazio, T., Marani, R., Renó, V., & Cicirelli, G. (2016). Recent trends in gesture recognition: how depth data has improved classical approaches. Image and Vision Computing, 52, 56-72.
[51] Ibañez, R., Soria, Á., Teyseyre, A., Rodríguez, G., & Campo, M. (2017). Approximate string matching: A lightweight approach to recognize gestures with Kinect. Pattern Recognition, 62, 73-86.
[52] Cheng, H., Yang, L., & Liu, Z. (2016). Survey on 3D hand gesture recognition. IEEE Transactions on Circuits and Systems for Video Technology, 26(9), 1659-1673.
[53] Ren, Z., Yuan, J., Meng, J., & Zhang, Z. (2013). Robust part-based hand gesture recognition using Kinect sensor. IEEE Transactions on Multimedia, 15(5), 1110-1120.
[54] Zhou, Y., Jiang, G., & Lin, Y. (2016). A novel finger and hand pose estimation technique for real-time hand gesture recognition. Pattern Recognition, 49, 102-114.
[55] Cheng, H., Dai, Z., Liu, Z., & Zhao, Y. (2016). An image-to-class dynamic time warping approach for both 3D static and trajectory hand gesture recognition. Pattern Recognition, 55, 137-147.
[56] Plouffe, G., & Cretu, A. M. (2016). Static and dynamic hand gesture recognition in depth data using dynamic time warping. IEEE Transactions on Instrumentation and Measurement, 65(2), 305-316.
[57] Wang, C., Liu, Z., & Chan, S. C. (2015). Superpixel-based hand gesture recognition with Kinect depth camera. IEEE Transactions on Multimedia, 17(1), 29-39.
[58] Theodoridis, S., Pikrakis, A., Koutroumbas, K., & Cavouras, D. (2010). Introduction to pattern recognition: A MATLAB approach. Academic Press.
[59] Nguyen-Dinh, L. V., Roggen, D., Calatroni, A., & Tröster, G. (2012, November). Improving online gesture recognition with template matching methods in accelerometer data. In Intelligent Systems Design and Applications (ISDA), 2012 12th International Conference on (pp. 831-836). IEEE.
[60] Yun, L., Lifeng, Z., & Shujun, Z. (2012). A hand gesture recognition method based on multi-feature fusion and template matching. Procedia Engineering, 29, 1678-1684.
[61] Rose, E. J., Racadio, R., Wong, K., Nguyen, S., Kim, J., & Zahler, A. (2017). Community-Based User Experience: Evaluating the Usability of Health Insurance Information with Immigrant Patients. IEEE Transactions on Professional Communication, 60(2), 214-231.
[62] Zhou, F., Ji, Y., & Jiao, R. J. (2014). Prospect-theoretic modeling of customer affective-cognitive decisions under uncertainty for user experience design. IEEE Transactions on Human-Machine Systems, 44(4), 468-483.
[63] Bao, Y., Wu, H., & Liu, X. (2017). From Prediction to Action: Improving User Experience with Data-Driven Resource Allocation. IEEE Journal on Selected Areas in Communications.
[64] Rauschenberger, M., Schrepp, M., Cota, M. P., Olschner, S., & Thomaschewski, J. (2013). Efficient measurement of the user experience of interactive products. How to use the User Experience Questionnaire (UEQ). Example: Spanish language version. International Journal of Interactive Multimedia and Artificial Intelligence, 2(1), 39-45.
[65] https://docs.unity3d.com/ScriptReference/ParticleSystem.html. Accessed July 18, 2017.
[66] http://homes.cs.washington.edu/~edzhang/tutorials/kinect2/kinect3.html. Accessed July 18, 2017.
[67] https://www.freesoundeffects.com/. Accessed July 18, 2017.
[68] http://www.fromtexttospeech.com/. Accessed July 18, 2017.
[69] https://goo.gl/forms/HFlGdjEwjpvQMhO32. Accessed July 18, 2017.
[70] http://www.dataschool.io/simple-guide-to-confusion-matrix-terminology/. Accessed July 18, 2017.