References
[1] J. Wu, P. Davuluri, K. R. Ward, C. Cockrell, R. Hobson, and K. Najarian, “Fracture detection in traumatic pelvic CT images,” Int. J. Biomed. Imaging, vol. 2012, 2012.
[2] H. R. Roth, Y. Wang, J. Yao, L. Lu, J. E. Burns, and R. M. Summers, “Deep convolutional networks for automated detection of posterior-element fractures on spine CT,” in Proc. of the SPIE Med. Imaging, 2016, p. 97850P.
[3] Y. D. Pranata et al., “Deep learning and SURF for automated classification and detection of calcaneus fractures in CT images,” Comput. Methods Programs Biomed., vol. 171, pp. 27–37, 2019.
[4] M. Yazdi and T. Bouwmans, “New trends on moving object detection in video images captured by a moving camera: A survey,” Comput. Sci. Rev., vol. 28, pp. 157–177, 2018.
[5] A. F. M. S. Saif, A. S. Prabuwono, and Z. R. Mahayuddin, “Moving object detection using dynamic motion modelling from UAV aerial images,” Sci. World J., vol. 2014, pp. 1–12, 2014.
[6] J. Maier and M. Humenberger, “Movement detection based on dense optical flow for unmanned aerial vehicles,” Int. J. Adv. Robot. Syst., vol. 10, no. 2, pp. 146–157, 2013.
[7] B. Kalantar, S. Bin Mansor, A. Abdul Halin, H. Z. M. Shafri, and M. Zand, “Multiple moving object detection from UAV videos using trajectories of matched regional adjacency graphs,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 9, pp. 5198–5213, 2017.
[8] Y. Wu, X. He, and T. Q. Nguyen, “Moving object detection with a freely moving camera via background motion subtraction,” IEEE Trans. Circuits Syst. Video Technol., vol. 27, no. 2, pp. 236–248, 2017.
[9] S. Cai, Y. Huang, B. Ye, and C. Xu, “Dynamic illumination optical flow computing for sensing multiple mobile robots from a drone,” IEEE Trans. Syst. Man, Cybern. Syst., vol. 48, no. 8, pp. 1370–1382, 2017.
[10] S. Minaeian, J. Liu, and Y. J. Son, “Effective and efficient detection of moving targets from a UAV’s camera,” IEEE Trans. Intell. Transp. Syst., vol. 19, no. 2, pp. 497–506, 2018.
[11] D. Ryan, S. Denman, C. Fookes, and S. Sridharan, “Scene invariant multi camera crowd counting,” Pattern Recognit. Lett., vol. 44, pp. 98–112, 2014.
[12] O. Perdikaki, S. Kesavan, and J. M. Swaminathan, “Effect of traffic on sales and conversion rates of retail stores,” Manuf. Serv. Oper. Manag., vol. 14, no. 1, pp. 145–162, Dec. 2012.
[13] T. M. Tsai, P. C. Yang, and W. N. Wang, “Pilot study toward realizing social effect in O2O commerce services,” in Proc. of IEEE International Conference on Pervasive Computing and Communication Workshops, 2015, pp. 372–377.
[14] N. C. Tang, Y. Y. Lin, M. F. Weng, and H. Y. M. Liao, “Cross-camera knowledge transfer for multiview people counting,” IEEE Trans. Image Process., vol. 24, no. 1, pp. 80–93, 2015.
[15] S. Sun, N. Akhtar, H. Song, C. Zhang, J. Li, and A. Mian, “Benchmark data and method for real-time people counting in cluttered scenes using depth sensors,” IEEE Trans. Intell. Transp. Syst., vol. 20, no. 10, pp. 3599–3612, 2019.
[16] N. Ahmed, A. Ghose, A. K. Agrawal, C. Bhaumik, V. Chandel, and A. Kumar, “SmartEvacTrak: A people counting and coarse-level localization solution for efficient evacuation of large buildings,” in Proc. of International Conference on Pervasive Computing and Communication Workshops, 2015, pp. 372–377.
[17] W. J. Wang, H. G. Chou, C. H. Chiu, Y. P. Liao, and P. J. Lee, “A DSP embedded control system with people number counting for energy saving,” in Proc. of Int. Conf. Informatics Appl., 2013, pp. 138–142.
[18] X. Yang, W. Yin, L. Li, and L. Zhang, “Dense people counting using IR-UWB radar with a hybrid feature extraction method,” IEEE Geosci. Remote Sens. Lett., vol. 16, no. 1, pp. 30–34, 2019.
[19] J. W. Choi, X. Quan, and S. H. Cho, “Bi-directional passing people counting system based on IR-UWB radar sensors,” IEEE Internet Things J., vol. 5, no. 2, pp. 512–522, 2018.
[20] S. Kumar, T. K. Marks, and M. Jones, “Improving person tracking using an inexpensive thermal infrared sensor,” in Proc. of IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops, 2014, pp. 217–224.
[21] H. Li, E. C. L. Chan, X. Guo, J. Xiao, K. Wu, and L. M. Ni, “Wi-Counter: Smartphone-based people counter using crowdsourced wi-fi signal data,” IEEE Trans. Human-Machine Syst., vol. 45, no. 4, pp. 442–452, 2015.
[22] O. T. Ibrahim, W. Gomaa, and M. Youssef, “CrossCount: A deep learning system for device-free human counting using WiFi,” IEEE Sens. J., vol. 19, no. 21, pp. 9921–9928, 2019.
[23] Z. Q. H. Al-Zaydi, D. L. Ndzi, Y. Yang, and M. L. Kamarudin, “An adaptive people counting system with dynamic features selection and occlusion handling,” J. Vis. Commun. Image Represent., vol. 39, pp. 218–225, 2016.
[24] H. C. Chen, Y. C. Chang, N. J. Li, C. F. Weng, and W. J. Wang, “Real-time people counting method with surveillance cameras implemented on embedded system,” in Proc. of World Congress on Engineering and Computer Science, 2013, pp. 512–515.
[25] A. Tokta and A. K. Hocaoglu, “A fast people counting method based on optical flow,” in Proc. of Int. Conf. Artif. Intell. Data Process., 2018, pp. 1–4.
[26] J. Garcia, A. Gardel, I. Bravo, J. L. Lazaro, M. Martinez, and D. Rodriguez, “Directional people counter based on head tracking,” IEEE Trans. Ind. Electron., vol. 60, no. 9, pp. 3991–4000, 2013.
[27] E. Bondi, L. Seidenari, A. D. Bagdanov, and A. Del Bimbo, “Real-time people counting from depth imagery of crowded environments,” in Proc. of IEEE International Conference on Advanced Video and Signal-Based Surveillance, 2014, pp. 337–342.
[28] S. I. Cho and S. J. Kang, “Real-time people counting system for customer movement analysis,” IEEE Access, vol. 6, pp. 55264–55272, 2018.
[29] K. M. Moussa, M. A. E. Hassaan, A. N. Moharram, and M. D. Elmahdi, “The role of multidetector CT in evaluation of calcaneal fractures,” Egypt. J. Radiol. Nucl. Med., vol. 46, no. 2, pp. 413–421, Jun. 2015.
[30] M. Galluzzo et al., “Calcaneal fractures: radiological and CT evaluation and classification systems,” Acta Biomed., vol. 89, no. 1-S, pp. 138–150, Jan. 2018.
[31] S. A. Swanson, M. P. Clare, and R. W. Sanders, “Management of intra-articular fractures of the calcaneus,” Foot Ankle Clin., vol. 13, no. 4, pp. 659–678, Dec. 2008.
[32] R. Sanders, P. Fortin, T. DiPasquale, and A. Walling, “Operative treatment in 120 displaced intraarticular calcaneal fractures. Results using a prognostic computed tomography scan classification,” Clin. Orthop. Relat. Res., no. 290, pp. 87–95, May 1993.
[33] R. Sanders, “Displaced intra-articular fractures of the calcaneus,” J. Bone Joint Surg. Am., vol. 82, no. 2, pp. 225–250, Feb. 2000.
[34] D. L. Janzen, D. G. Connell, P. L. Munk, R. E. Buckley, R. N. Meek, and M. T. Schechter, “Intraarticular fractures of the calcaneus: value of CT findings in determining prognosis,” Am. J. Roentgenol., vol. 158, no. 6, pp. 1271–1274, Jun. 1992.
[35] S. Rammelt, A. L. Godoy-Santos, W. Schneiders, G. Fitze, and H. Zwipp, “Foot and ankle fractures during childhood: review of the literature and scientific evidence for appropriate treatment,” Rev. Bras. Ortop., vol. 51, no. 6, pp. 630–639, 2016.
[36] K. Badillo, J. A. Pacheco, S. O. Padua, A. A. Gomez, E. Colon, and J. A. Vidal, “Multidetector CT evaluation of calcaneal fractures,” Radiographics, vol. 31, no. 1, pp. 81–92, 2011.
[37] D. Huang, C. Shan, M. Ardebilian, Y. Wang, and L. Chen, “Local binary patterns and its application to facial image analysis: A survey,” IEEE Trans. Syst. Man, Cybern. Part C, vol. 41, pp. 765–781, 2011.
[38] O. Ludwig, U. Nunes, B. Ribeiro, and C. Premebida, “Improving the generalization capacity of cascade classifiers,” IEEE Trans. Cybern., vol. 43, no. 6, pp. 2135–2146, Dec. 2013.
[39] H. Yu and P. Moulin, “Regularized adaboost learning for identification of time-varying content,” IEEE Trans. Inf. Forensics Secur., vol. 9, no. 10, pp. 1606–1616, Oct. 2014.
[40] P. Viola and M. J. Jones, “Robust real-time face detection,” Int. J. Comput. Vis., vol. 57, no. 2, pp. 137–154, May 2004.
[41] S. Suzuki and K. Abe, “Topological structural analysis of digitized binary images by border following,” Comput. Vision, Graph. Image Process., vol. 30, no. 1, pp. 32–46, 1985.
[42] X. Yang, X. Shen, J. Long, and H. Chen, “An improved median-based Otsu image thresholding algorithm,” AASRI Procedia, vol. 3, pp. 468–473, Jan. 2012.
[43] R. C. Gonzalez and R. E. Woods, Digital Image Processing. Prentice Hall, 2002.
[44] J. Seo, S. Chae, J. Shim, D. Kim, C. Cheong, and T.-D. Han, “Fast contour-tracing algorithm based on a pixel-following method for image sensors,” Sensors, vol. 16, no. 3, Mar. 2016.
[45] Y. Zhu and C. Huang, “An improved median filtering algorithm for image noise reduction,” Phys. Procedia, vol. 25, pp. 609–616, Jan. 2012.
[46] C.-M. Chen et al., “Automatic contrast enhancement of brain MR images using hierarchical correlation histogram analysis,” J. Med. Biol. Eng., vol. 35, no. 6, pp. 724–734, 2015.
[47] T. Harnroongroj, T. Harnroongroj, T. Suntharapa, and M. Arunakul, “The new intra-articular calcaneal fracture classification system in term of sustentacular fragment configurations and incorporation of posterior calcaneal facet fractures with fracture components of the calcaneal body,” Acta Orthop. Traumatol. Turc., vol. 50, no. 5, pp. 519–526, Oct. 2016.
[48] The wiki-based collaborative Radiology resource, https://radiopaedia.org/. [Accessed: 09-Jan-2018].
[49] Fractures and dislocations of the tarsal bones, https://aneskey.com/fractures-and-dislocations-of-the-tarsal-bones/. [Accessed: 09-Jan-2018].
[50] L. Leal-Taixé, A. Milan, K. Schindler, D. Cremers, I. Reid, and S. Roth, “Tracking the trackers: an analysis of the state of the art in multiple object tracking,” arXiv preprint arXiv:1704.02781, 2017.
[51] H. Bay, A. Ess, T. Tuytelaars, and L. Van Gool, “Speeded-Up Robust Features (SURF),” Comput. Vis. Image Underst., vol. 110, no. 3, pp. 346–359, Jun. 2008.
[52] T. N. Shene, K. Sridharan, and N. Sudha, “Real-time SURF-based video stabilization system for an FPGA-driven mobile robot,” IEEE Trans. Ind. Electron., vol. 63, no. 8, pp. 5012–5021, 2016.
[53] W. Rahmaniar and W.-J. Wang, “A novel object detection method based on Fuzzy sets theory and SURF,” in Proc. of International Conference on System Science and Engineering, 2015, pp. 570–584.
[54] S. Kumar, H. Azartash, M. Biswas, and T. Nguyen, “Real-time affine global motion estimation using phase correlation and its application for digital image stabilization,” IEEE Trans. Image Process., vol. 20, no. 12, pp. 3406–3418, 2011.
[55] C. Wang, J. Kim, K. Byun, J. Ni, and S. Ko, “Robust digital image stabilization using the Kalman filter,” IEEE Trans. Consum. Electron., vol. 55, no. 1, pp. 6–14, Feb. 2009.
[56] Y. G. Ryu and M. J. Chung, “Robust online digital image stabilization based on point-feature trajectory without accumulative global motion estimation,” IEEE Signal Process. Lett., vol. 19, no. 4, pp. 223–226, 2012.
[57] G. Farneback, “Two-frame motion estimation based on polynomial expansion,” in Proc. of Scandinavian Conference on Image Analysis, 2003, vol. 2749, no. 1, pp. 363–370.
[58] R. J. O. Cayon, “Online Video Stabilization for UAV,” Politecnico di Milano, 2013.
[59] J. Li, T. Xu, and K. Zhang, “Real-time feature-based video stabilization on FPGA,” IEEE Trans. Circuits Syst. Video Technol., vol. 27, no. 4, pp. 907–919, 2017.
[60] S. Hong, A. Dorado, G. Saavedra, J. C. Barreiro, and M. Martinez-Corral, “Three-dimensional integral-imaging display from calibrated and depth-hole filtered kinect information,” J. Disp. Technol., vol. 12, no. 11, pp. 1301–1308, Nov. 2016.
[61] M. Muja and D. G. Lowe, “Fast approximate nearest neighbors with automatic algorithm configuration,” in Proc. of International Conference on Computer Vision Theory and Applications, 2009, pp. 331–340.
[62] M. A. Fischler and R. C. Bolles, “Random sample consensus: a paradigm for model fitting with applications to image analysis and automated cartography,” Commun. ACM, vol. 24, no. 6, pp. 381–395, Jun. 1981.
[63] Center for Research in Computer Vision at the University of Central Florida, http://crcv.ucf.edu/data/UCF_Aerial_Action.php. [Accessed: 02-Jul-2018].
[64] RGB-D Camera Module - iSSA-SYS003-01, http://issatek.com. [Accessed: 13-Jan-2020].
[65] Jetson TX2 Module, https://developer.nvidia.com/embedded/jetson-tx2. [Accessed: 13-Jan-2020].
[66] L. Xu and E. Oja, “Randomized Hough transform (RHT): Basic mechanisms, algorithms, and computational complexities,” Comput. Vis. Image Underst., vol. 57, no. 2, pp. 131–154, 1993.
[67] R. K. K. Yip, P. K. S. Tam, and D. N. K. Leung, “Modification of Hough transform for circles and ellipses detection using a 2-dimensional array,” Pattern Recognit., vol. 25, no. 9, pp. 1007–1022, 1992.
[68] A. Lukežič, T. Vojíř, L. Čehovin Zajc, J. Matas, and M. Kristan, “Discriminative correlation filter tracker with channel and spatial reliability,” Int. J. Comput. Vis., vol. 126, no. 7, pp. 671–688, 2018.
[69] S. Boyd, N. Parikh, E. Chu, B. Peleato, and J. Eckstein, “Distributed optimization and statistical learning via the alternating direction method of multipliers,” Found. Trends Mach. Learn., vol. 3, no. 1, pp. 1–122, 2010.