References
1. S. Melumad and M. T. Pham, “The Smartphone as a Pacifying Technology,” Journal of Consumer Research 47(2), 237-255 (2020).
2. IDC, “Global Shipments of AR/VR Headsets Decline in 2022 While Rebounding in 2023,” https://www.idc.com/promo/arvr.
3. Proven Reality, “Applications of Augmented Reality: 8 Key Industries to Consider,” https://provenreality.com/augmented-reality/applications-of-augmented-reality-8-key-industries-to-consider/.
4. T. Armstrong and B. O. Olatunji, “Eye tracking of attention in the affective disorders: A meta-analytic review and synthesis,” Clinical Psychology Review 32(8), 704-723 (2012).
5. J. Z. Lim, J. Mountstephens, and J. Teo, “Emotion Recognition Using Eye-Tracking: Taxonomy, Review and Current Challenges,” Sensors 20(8), 2384 (2020).
6. G. Roper-Hall, “Louis Émile Javal (1839–1907): The Father of Orthoptics,” American Orthoptic Journal 57(1), 131-136 (2007).
7. M. Płużyczka, “The First Hundred Years: a History of Eye Tracking as a Research Method,” Applied Linguistics Papers 25(4), 101-106 (2018).
8. A. Patney, M. Salvi, J. Kim, A. Kaplanyan, C. Wyman, N. Benty, D. Luebke, and A. Lefohn, “Towards foveated rendering for gaze-tracked virtual reality,” ACM Trans. Graph. 35(6), 1-12 (2016).
9. A. Kar and P. Corcoran, “A Review and Analysis of Eye-Gaze Estimation Systems, Algorithms and Performance Evaluation Methods in Consumer Platforms,” IEEE Access 5, 16495-16519 (2017).
10. P. Blignaut, “Mapping the pupil-glint vector to gaze coordinates in a simple video-based eye tracker,” J. Eye Movement Res. 7(1), 1-11 (2014).
11. K. Harezlak, P. Kasprowski, and M. Stasch, “Towards accurate eye tracker calibration – methods and procedures,” Procedia Computer Science 35, 1073-1081 (2014).
12. Z. R. Cherif, A. Nait-Ali, J. F. Motsch, and M. O. Krebs, “An adaptive calibration of an infrared light device used for gaze tracking,” Proc. 19th IEEE Instrum. Meas. Technol. Conf. 2, 1029-1033 (2002).
13. J. Wang, G. Zhang, and J. Shi, “2D gaze estimation based on pupil-glint vector using an artificial neural network,” Appl. Sci. 6(6), 174 (2016).
14. Z. Zhu and Q. Ji, “Eye and gaze tracking for interactive graphic display,” Mach. Vis. Appl. 15(3), 139-148 (2004).
15. C. Jian-Nan, Z. Chuang, Y. Yan-Tao, L. Yang, and Z. Han, “Eye gaze calculation based on nonlinear polynomial and generalized regression neural network,” Int. Conf. Natural Comput. 5, 617-623 (2009).
16. T. Nagamatsu, Y. Iwamoto, J. Kamahara, N. Tanaka, and M. Yamamoto, “Gaze estimation method based on an aspherical model of the cornea: surface of revolution about the optical axis of the eye,” in Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications (2010), 255-258.
17. F. L. Coutinho and C. H. Morimoto, “Free head motion eye gaze tracking using a single camera and multiple light sources,” Brazilian Symposium on Computer Graphics and Image Processing 19, 171-178 (2006).
18. I. Bacivarov, M. Ionita, and P. Corcoran, “Statistical models of appearance for eye tracking and eye-blink detection and measurement,” IEEE Transactions on Consumer Electronics 54(3), 1312-1320 (2008).
19. P. Koutras and P. Maragos, “Estimation of eye gaze direction angles based on active appearance models,” in 2015 IEEE International Conference on Image Processing (ICIP) (IEEE, 2015), 2424-2428.
20. W. Wang, Y. Huang, and R. Zhang, “Driver gaze tracker using deformable template matching,” Proceedings of 2011 IEEE International Conference on Vehicular Electronics and Safety, 244-247 (2011).
21. M. Reinders, “Eye tracking by template matching using an automatic codebook generation scheme,” in Third Annual Conference of ASCI, ASCI, Delft (1997), 85-91.
22. S. Ramadan, W. Abd-Almageed, and C. E. Smith, “Eye Tracking Using Active Deformable Models,” in ICVGIP (2002).
23. I. F. Ince and J. W. Kim, “A 2D eye gaze estimation system with low-resolution webcam images,” EURASIP Journal on Advances in Signal Processing 2011, 1-11 (2011).
24. G. Lippmann, “Épreuves réversibles donnant la sensation du relief,” J. Phys. Theor. Appl. 7(1), 821-825 (1908).
25. D. Lanman and D. Luebke, “Near-eye light field displays,” ACM Trans. Graph. 32(6), 1-10 (2013).
26. K. Hong, J. Yeom, C. Jang, J. Hong, and B. Lee, “Full-color lens-array holographic optical element for three-dimensional optical see-through augmented reality,” Opt. Lett. 39, 127-130 (2014).
27. C. C. Sun, “Simplified model for diffraction analysis of volume holograms,” Opt. Eng. 42, 1184-1185 (2003).
28. O. I. Abiodun et al., “State-of-the-art in artificial neural network applications: A survey,” Heliyon 4(11), e00938 (2018).
29. S. Haykin, Neural Networks and Learning Machines (Pearson Education India, 2009).
30. D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-propagating errors,” Nature 323(6088), 533-536 (1986).
31. S. Ruder, “An overview of gradient descent optimization algorithms,” arXiv preprint arXiv:1609.04747 (2016).
32. M. Andrychowicz, M. Denil, S. Gomez, M. W. Hoffman, D. Pfau, T. Schaul, B. Shillingford, and N. De Freitas, “Learning to learn by gradient descent by gradient descent,” Advances in Neural Information Processing Systems 29 (2016).
33. F. Chollet, Deep Learning with Python (Simon and Schuster, 2021).
34. K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” https://arxiv.org/abs/1409.1556.
35. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2015), 1-9.
36. R. Girshick, J. Donahue, T. Darrell, and J. Malik, “Rich feature hierarchies for accurate object detection and semantic segmentation,” in Proceedings of the IEEE conference on computer vision and pattern recognition (2014), 580-587.
37. K. He et al., “Spatial pyramid pooling in deep convolutional networks for visual recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence 37(9), 1904-1916 (2015).
38. P. Sermanet, D. Eigen, X. Zhang, M. Mathieu, R. Fergus, and Y. LeCun, “OverFeat: Integrated recognition, localization and detection using convolutional networks,” in International Conference on Learning Representations (ICLR) (2014).
39. W. Wang and J. Gang, “Application of convolutional neural network in natural language processing,” in 2018 International Conference on Information Systems and Computer Aided Education (ICISCAE) (IEEE, 2018), 64-70.
40. T. Ishikawa, S. Baker, I. Matthews, and T. Kanade, “Passive driver gaze tracking with active appearance models,” in Proc. 11th World Congress on Intelligent Transportation Systems (2004).
41. K. Yamashiro, D. Deguchi, T. Takahashi, I. Ide, H. Murase, K. Higuchi, and T. Naito, “Automatic calibration of an in-vehicle gaze tracking system using driver's typical gaze behavior,” in 2009 IEEE Intelligent Vehicles Symposium (IEEE, 2009), 998-1003.
42. H. C. Lee, D. T. Luong, C. W. Cho, E. C. Lee, and K. R. Park, “Gaze tracking system at a distance for controlling IPTV,” IEEE Trans. Consum. Electron. 56(4), 2577-2583 (2010).
43. J. W. Lee, C. W. Cho, K. Y. Shin, E. C. Lee, and K. R. Park, “3D gaze tracking method using Purkinje images on eye optical model and pupil,” Opt. Lasers Eng. 50(5), 736-751 (2012).
44. K. Takemura, K. Takahashi, J. Takamatsu, and T. Ogasawara, “Estimating 3-D point-of-regard in a real environment using a head-mounted eye-tracking system,” IEEE Trans. Human-Mach. Syst. 44(4), 531-536 (2014).
45. W. J. Ryan, A. T. Duchowski, and S. T. Birchfield, “Limbus/pupil switching for wearable eye tracking under variable lighting conditions,” in Proceedings of the 2008 symposium on Eye tracking research & applications (2008), 61-64.
46. M. Stengel, S. Grogorick, M. Eisemann, E. Eisemann, and M. A. Magnor, “An affordable solution for binocular eye tracking and calibration in head-mounted displays,” in Proceedings of the 23rd ACM International Conference on Multimedia (2015), 15-24.