References
[1] N. H. Liu, C. Y. Chiang, and H. C. Chu, “Recognizing the Degree of Human Attention Using EEG Signals from Mobile Sensors,” Sensors, vol. 13, no. 8, pp. 10273-10286, 2013.
[2] C. M. Chen, J. Y. Wang, and C. M. Yu, “Assessing the Attention Levels of Students by Using a Novel Attention Aware System based on Brainwave Signals,” 2015 IIAI 4th International Congress on Advanced Applied Informatics, pp. 379-384, 2015.
[3] M. Raca and P. Dillenbourg, “System for Assessing Classroom Attention,” Proceedings of the Third International Conference on Learning Analytics and Knowledge, pp. 265-269, 2013.
[4] J. Zaletelj and A. Košir, “Predicting students’ attention in the classroom from Kinect facial and body features,” EURASIP Journal on Image and Video Processing, pp. 1-12, 2017.
[5] E. Murphy-Chutorian and M. M. Trivedi, “Head Pose Estimation in Computer Vision: A Survey,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 31, no. 4, pp. 607-626, 2009.
[6] K. Hara and R. Chellappa, “Growing Regression Forests by Classification: Applications to Object Pose Estimation,” European Conference on Computer Vision, pp. 552-567, 2014.
[7] X. Zhen, Z. Wang, M. Yu, and S. Li, “Supervised descriptor learning for multi-output regression,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1211-1218, 2015.
[8] R. O. Mbouna, S. G. Kong, and M. G. Chun, “Visual Analysis of Eye State and Head Pose for Driver Alertness Monitoring,” IEEE Transactions on Intelligent Transportation Systems, vol. 14, no. 3, pp. 1462-1469, 2013.
[9] S. Tulyakov and N. Sebe, “Regressing a 3D Face Shape from a Single Image,” IEEE International Conference on Computer Vision, pp. 3748-3755, 2015.
[10] R. Poppe, “A survey on vision-based human action recognition,” Image and Vision Computing, vol. 28, no. 6, pp. 976-990, 2010.
[11] A. F. Bobick and J. W. Davis, “The recognition of human movement using temporal templates,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 23, no. 3, pp. 257-267, 2001.
[12] A. A. Efros, A. C. Berg, G. Mori, and J. Malik, “Recognizing action at a distance,” Proceedings Ninth IEEE International Conference on Computer Vision, vol. 2, pp. 726-733, 2003.
[13] P. Scovanner, S. Ali, and M. Shah, “A 3-dimensional sift descriptor and its application to action recognition,” Proceedings of the 15th ACM international conference on Multimedia, pp. 357-360, 2007.
[14] D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” International Journal of Computer Vision, vol. 60, no. 2, pp. 91-110, 2004.
[15] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1-8, 2008.
[16] H. Wang, A. Klaser, C. Schmid, and C. L. Liu, “Action recognition by dense trajectories,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 3169-3176, 2011.
[17] H. Wang and C. Schmid, “Action Recognition with Improved Trajectories,” IEEE International Conference on Computer Vision, pp. 3551-3558, 2013.
[18] Wikipedia: Kinect. [Online]. Available: https://en.wikipedia.org/wiki/Kinect. [Accessed: 13-Jun-2018].
[19] T. K. Ho, “Random Decision Forests,” Proceedings of the 3rd International Conference on Document Analysis and Recognition, pp. 278-282, 1995.
[20] Y. Cheng, “Mean Shift, Mode Seeking, and Clustering,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 8, pp. 790-799, 1995.
[21] J. Shotton, A. Fitzgibbon, M. Cook, T. Sharp, M. Finocchio, R. Moore, A. Kipman, and A. Blake, “Real-time human pose recognition in parts from single depth images,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 1297-1304, 2011.
[22] Z. Cao, T. Simon, S. E. Wei, and Y. Sheikh, “Realtime Multi-Person 2D Pose Estimation using Part Affinity Fields,” Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 7291-7299, 2017.
[23] M. Andriluka, L. Pishchulin, P. Gehler, and B. Schiele, “2D Human Pose Estimation: New Benchmark and State of the Art Analysis,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 3686-3693, 2014.
[24] Wikipedia: 感知機 (Perceptron). [Online]. Available: https://zh.wikipedia.org/wiki/感知機. [Accessed: 20-Jun-2018].
[25] 蘇木春 and 張孝德, Machine Learning: Neural Networks, Fuzzy Systems, and Genetic Algorithms (in Chinese), 2nd ed., 全華科技圖書, 2012.
[26] IBM Deep learning architectures. [Online]. Available: https://www.ibm.com/developerworks/library/cc-machine-learning-deep-learning-architectures/index.html. [Accessed: 21-Jun-2018].
[27] Deep learning for complete beginners: convolutional neural networks with keras. [Online]. Available: https://cambridgespark.com/content/tutorials/convolutional-neural-networks-with-keras/index.html. [Accessed: 21-Jun-2018].
[28] Wikipedia: Convolutional Neural Network. [Online]. Available: https://en.wikipedia.org/wiki/Convolutional_neural_network. [Accessed: 21-Jun-2018].
[29] OpenPose GitHub repository. [Online]. Available: https://github.com/CMU-Perceptual-Computing-Lab/openpose. [Accessed: 19-Jun-2018].
[30] 蘇昭銘, “漫談疲勞駕駛” [On Fatigue Driving] (in Chinese), 中華民國運輸協會, 運輸人通訊, no. 41, 2005.
[31] Dlib CNN face detector example. [Online]. Available: http://dlib.net/cnn_face_detector.py.html. [Accessed: 12-Jun-2018].
[32] A. Bulat and G. Tzimiropoulos, “How far are we from solving the 2D & 3D Face Alignment problem? (and a dataset of 230,000 3D facial landmarks),” International Conference on Computer Vision, pp. 1021-1030, 2017.
[33] 2D and 3D FAN GitHub code. [Online]. Available: https://github.com/1adrianb/2D-and-3D-face-alignment. [Accessed: 12-Jun-2018].
[34] A. Newell, K. Yang, and J. Deng, “Stacked hourglass networks for human pose estimation,” European Conference on Computer Vision, pp. 483-499, 2016.
[35] A. Bulat and G. Tzimiropoulos, “Binarized Convolutional Landmark Localizers for Human Pose Estimation and Face Alignment with Limited Resources,” International Conference on Computer Vision, pp. 3726-3734, 2017.
[36] 68 facial landmarks (300-W). [Online]. Available: https://ibug.doc.ic.ac.uk/resources/300-W/. [Accessed: 12-Jun-2018].
[37] T. Soukupová and J. Čech, “Real-Time Eye Blink Detection using Facial Landmarks,” 21st Computer Vision Winter Workshop, pp. 1-8, 2016.
[38] B. Shankar, D. Jayachandra, and K. K. Hati, “Face Pose Estimation From Rigid Face Landmarks For Driver Monitoring Systems,” Electronic Imaging, pp. 83-88, 2017.
[39] K. Simonyan and A. Zisserman, “Very Deep Convolutional Networks for Large-Scale Image Recognition,” International Conference on Learning Representations, pp. 1-14, 2015.
[40] J. Redmon, S. Divvala, R. Girshick, and A. Farhadi, “You Only Look Once: Unified, Real-Time Object Detection,” IEEE Conference on Computer Vision and Pattern Recognition, pp. 779-788, 2016.
[41] YOLOv3 Comparison to Other Detectors. [Online]. Available: https://pjreddie.com/darknet/yolo/. [Accessed: 18-Jul-2018].
[42] S. Abtahi, M. Omidyeganeh, S. Shirmohammadi, and B. Hariri, “YawDD: A Yawning Detection Dataset,” Proceedings of the 5th ACM Multimedia Systems Conference, pp. 24-28, 2014.
[43] N. Gourier, D. Hall, and J. L. Crowley, “Estimating Face Orientation from Robust Detection of Salient Facial Features,” ICPR International Workshop on Visual Observation of Deictic Gestures, pp. 17-25, 2004.
[44] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet Classification with Deep Convolutional Neural Networks,” Advances in Neural Information Processing Systems, pp. 1097-1105, 2012.
[45] T. Y. Lin, M. Maire, S. Belongie, L. Bourdev, R. Girshick, J. Hays, P. Perona, D. Ramanan, C. L. Zitnick, and P. Dollár, “Microsoft COCO: Common Objects in Context,” European Conference on Computer Vision, pp. 740-755, 2014.