References
1. A. Zeng, S. Song, S. Welker, J. Lee, A. Rodriguez, and T. Funkhouser, “Learning synergies between pushing and grasping with self-supervised deep reinforcement learning,” IEEE Int. C. Int. Robot.’2018 Digest, 4238-4245 (2018).
2. S. Borhade, M. Shah, P. Jadhav, D. Rajurkar, and A. Bhor, “Advanced driver assistance system,” I. Conf. Sens. Technol.’2012 Digest, 718-722 (2012).
3. C.-H. Chen, and K.-T. Song, “Complete coverage motion control of a cleaning robot using infrared sensors,” IEEE Int. C. Mech.’2005 Digest, 543-548 (2005).
4. F.-J. Zhao, H.-J. Guo, and K. Abe, “A mobile robot localization using ultrasonic sensors in indoor environment,” IEEE Int. Worksh. Rob.’1997 Digest, 52-57 (1997).
5. R. C. Smith, and P. Cheeseman, “On the representation and estimation of spatial uncertainty,” Int. J. Robotics. Res. 5, 56-68 (1986).
6. Z. Gu, and H. Liu, “A survey of monocular simultaneous localization and mapping,” CAAI T. Intelligen. S. 10, 499-507 (2015).
7. R. Mur-Artal, and J. D. Tardós, “ORB-SLAM2: An open-source SLAM system for monocular, stereo, and RGB-D cameras,” IEEE T. Robot. 33, 1255-1262 (2017).
8. S. Kohlbrecher, O. Von Stryk, J. Meyer, and U. Klingauf, “A flexible and scalable SLAM system with full 3D motion estimation,” IEEE Int. S. Saf. Sec. R.’2011 Digest, 155-160 (2011).
9. M. Labbé, and F. Michaud, “Long-term online multi-session graph-based SPLAM with memory management,” Auton. Robot. 42, 1133-1150 (2018).
10. S. Thrun, and A. Bücken, “Integrating grid-based and topological maps for mobile robot navigation,” P. Nat. C. Art. Int.’1996 Digest, 944-951 (1996).
11. S. Thrun, “Learning metric-topological maps for indoor mobile robot navigation,” Artif. Intell. 99, 21-71 (1998).
12. R. B. Rusu, N. Blodow, Z. Marton, A. Soos, and M. Beetz, “Towards 3D object maps for autonomous household robots,” IEEE Int. C. Int. Robot.’2007 Digest, 3191-3198 (2007).
13. R. B. Rusu, Z. C. Marton, N. Blodow, M. Dolha, and M. Beetz, “Towards 3D point cloud based object maps for household environments,” Robot. Auton. Syst. 56, 927-941 (2008).
14. R. A. Newcombe, S. Izadi, O. Hilliges, D. Molyneaux, D. Kim, A. J. Davison, P. Kohli, J. Shotton, S. Hodges, and A. W. Fitzgibbon, “Kinectfusion: Real-time dense surface mapping and tracking,” IEEE Int. S. Mixed. Aug.’2011 Digest, 127-136 (2011).
15. S. Bauer, A. Seitel, H. Hofmann, T. Blum, J. Wasza, M. Balda, H.-P. Meinzer, N. Navab, J. Hornegger, and L. Maier-Hein, Real-time range imaging in health care: a survey (Springer, 2013).
16. M. Nießner, M. Zollhöfer, S. Izadi, and M. Stamminger, “Real-time 3D reconstruction at scale using voxel hashing,” ACM T. Graphic. 32, 169 (2013).
17. L. Gallo, A. P. Placitelli, and M. Ciampi, “Controller-free exploration of medical image data: Experiencing the Kinect,” CBMS 24, 1-6 (2011).
18. L. Vera, J. Gimeno, I. Coma, and M. Fernández, “Augmented mirror: interactive augmented reality system based on kinect,” IFIP C. Hum. Comp. Int.’2011 Digest, 483-486 (2011).
19. 周錫珉, “Design of octal tic-tac-toe coded structured light for three-dimensional spatial geometric measurement,” Master’s thesis, Department of Optics and Photonics, National Central University (2018).
20. 林坤政, “A study of linear laser scanning and structured-light projection scanning for indoor point cloud construction,” Master’s thesis, Department of Optics and Photonics, National Central University (2017).
21. K. Fukushima, S. Miyake, and T. Ito, “Neocognitron: A neural network model for a mechanism of visual pattern recognition,” IEEE T. Syst. Man. Cyb. 13, 826-834 (1983).
22. A. Krizhevsky, I. Sutskever, and G. E. Hinton, “ImageNet classification with deep convolutional neural networks,” Adv. Neural. Inform. Pr. 25, 1097-1105 (2012).
23. K. Simonyan, and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” ICLR’2015 Conf. Proceedings (2015).
24. C. Szegedy, W. Liu, Y. Jia, P. Sermanet, S. Reed, D. Anguelov, D. Erhan, V. Vanhoucke, and A. Rabinovich, “Going deeper with convolutions,” P. IEEE C. Comp. Vis. Pa.’2015 Digest, 1-9 (2015).
25. K. He, X. Zhang, S. Ren, and J. Sun, “Deep residual learning for image recognition,” P. IEEE C. Comp. Vis. Pa.’2016 Digest, 770-778 (2016).
26. O. Russakovsky, J. Deng, H. Su, J. Krause, S. Satheesh, S. Ma, Z. Huang, A. Karpathy, A. Khosla, and M. Bernstein, “ImageNet large scale visual recognition challenge,” Int. J. Comput. Vision. 115, 211-252 (2015).
27. B. Zhou, A. Lapedriza, J. Xiao, A. Torralba, and A. Oliva, “Learning deep features for scene recognition using places database,” Adv. Neural. Inform. Pr. 27, 487-495 (2014).
28. B. Zhou, H. Zhao, X. Puig, S. Fidler, A. Barriuso, and A. Torralba, “Scene parsing through ADE20K dataset,” P. IEEE C. Comp. Vis. Pa.’2017 Digest, 633-641 (2017).
29. B. Zhou, H. Zhao, X. Puig, T. Xiao, S. Fidler, A. Barriuso, and A. Torralba, “Semantic understanding of scenes through the ADE20K dataset,” Int. J. Comput. Vision. 127, 302-321 (2019).
30. 王祖鎧, “Artificial-intelligence automatic modeling of three-dimensional point clouds assisted by two-dimensional optical images,” Master’s thesis, Department of Optics and Photonics, National Central University (2017).
31. G. Bradski, and A. Kaehler, Learning OpenCV: Computer vision with the OpenCV library (O’Reilly Media, Inc., 2008).
32. F. Cheevasuvit, H. Maitre, and D. Vidal-Madjar, “A robust method for picture segmentation based on a split-and-merge procedure,” Comput. Vision. Graph. 34, 268-281 (1986).
33. S. Y. Chen, W. C. Lin, and C. T. Chen, “Split-and-merge image segmentation based on localized feature analysis and statistical tests,” CVGIP Graph. Model. Im. Proc. 53, 457-475 (1991).
34. T. Pavlidis, and Y. T. Liow, “Integrating region growing and edge detection,” IEEE T. Pattern Anal. 12, 225-233 (1990).
35. S. L. Horowitz, and T. Pavlidis, “Picture segmentation by a directed split-and-merge procedure,” IJCPR, 424-433 (1974).
36. R. Adams, and L. Bischof, “Seeded region growing,” IEEE T. Pattern Anal. 16, 641-647 (1994).
37. M. Herbin, N. Bonnet, and P. Vautrot, “A clustering method based on the estimation of the probability density function and on the skeleton by influence zones. Application to image processing,” Pattern Recogn. Lett. 17, 1141-1150 (1996).
38. E. H. Ruspini, “A new approach to clustering,” Inform. Control. 15, 22-32 (1969).
39. A. Touzani, and J. G. Postaire, “Clustering by mode boundary detection,” Pattern Recogn. Lett. 9, 1-12 (1989).
40. J. Guo, J. Kim, and C. C. J. Kuo, “Fast and accurate moving object extraction technique for MPEG-4 object-based video coding,” Proc. SPIE 3653, 1210-1221 (1998).
41. E. Choi, and P. Hall, “Data sharpening as a prelude to density estimation,” Biometrika 86, 941-947 (1999).
42. D. Comaniciu, and P. Meer, “Mean shift: A robust approach toward feature space analysis,” IEEE T. Pattern Anal. 24, 603-619 (2002).
43. K. Popat, and R. W. Picard, “Cluster-based probability model and its application to image and texture processing,” IEEE T. Image. Process. 6, 268-284 (1997).
44. T.-Y. Lin, M. Maire, S. Belongie, J. Hays, P. Perona, D. Ramanan, P. Dollár, and C. L. Zitnick, “Microsoft COCO: Common objects in context,” Eur. C. Comp. Vis. 13, 740-755 (2014).
45. H. Zhao, J. Shi, X. Qi, X. Wang, and J. Jia, “Pyramid scene parsing network,” P. IEEE C. Comp. Vis. Pa.’2017 Digest, 2881-2890 (2017).
46. N. Ketkar, Introduction to PyTorch (Springer, 2017).
47. M. Hansard, S. Lee, O. Choi, and R. P. Horaud, Time-of-flight cameras: principles, methods and applications (Springer Science & Business Media, 2012).
48. S. Foix, G. Alenya, and C. Torras, “Lock-in time-of-flight (ToF) cameras: A survey,” IEEE Sens. J. 11, 1917-1926 (2011).
49. B. Kang, S.-J. Kim, S. Lee, K. Lee, J. D. Kim, and C.-Y. Kim, “Harmonic distortion free distance estimation in ToF camera,” Proc. SPIE 7864, 786403 (2011).
50. A. Kolb, E. Barth, R. Koch, and R. Larsen, “Time-of-flight cameras in computer graphics,” Comput. Graph. Forum 29, 141-159 (2010).
51. K. Shoemake, “Animating rotation with quaternion curves,” Proc. SIGGRAPH’85, 245-254 (1985).
52. MathWorks, “What Is Camera Calibration?,” https://www.mathworks.com/help/vision/ug/camera-calibration.html.
53. A. Fetić, D. Jurić, and D. Osmanković, “The procedure of a camera calibration using Camera Calibration Toolbox for MATLAB,” MIPRO 35, 1752-1757 (2012).
54. J. Heikkila, and O. Silven, “A four-step camera calibration procedure with implicit image correction,” CVPR 97, 1106-1112 (1997).
55. Z. Zhang, “A flexible new technique for camera calibration,” IEEE T. Pattern Anal. 22, 1330-1334 (2000).
56. E. Rublee, V. Rabaud, K. Konolige, and G. R. Bradski, “ORB: An efficient alternative to SIFT or SURF,” ICCV’2011 Digest, 2564-2571 (2011).
57. H. Bay, T. Tuytelaars, and L. Van Gool, “SURF: Speeded up robust features,” Eur. C. Comp. Vis., 404-417 (2006).
58. E. Rosten, and T. Drummond, “Machine learning for high-speed corner detection,” Eur. C. Comp. Vis.’2006 Digest, 430-443 (2006).
59. D. G. Lowe, “Distinctive image features from scale-invariant keypoints,” Int. J. Comput. Vision. 60, 91-110 (2004).
60. P. L. Rosin, “Measuring corner properties,” Comput. Vis. Image Und. 73, 291-307 (1999).
61. M. Calonder, V. Lepetit, C. Strecha, and P. Fua, “BRIEF: Binary robust independent elementary features,” Eur. C. Comp. Vis.’2010 Digest, 778-792 (2010).
62. S. Fuchs, “Multipath interference compensation in time-of-flight camera images,” Int. C. Patt. Recog.’2010 Digest, 3583-3586 (2010).
63. J. Mure-Dubois, and H. Hügli, “Real-time scattering compensation for time-of-flight camera,” ICVS 5 (2007).
64. M. Reynolds, J. Doboš, L. Peel, T. Weyrich, and G. J. Brostow, “Capturing time-of-flight data with confidence,” CVPR’2011 Digest, 945-952 (2011).
65. Z. Zhang, “Iterative point matching for registration of free-form curves and surfaces,” Int. J. Comput. Vision. 13, 119-152 (1994).
66. H. Hoppe, T. DeRose, T. Duchamp, J. McDonald, and W. Stuetzle, “Surface reconstruction from unorganized points,” Proc. SIGGRAPH’92, 71-78 (1992).
67. Y. Yu, “Surface reconstruction from unorganized points using self-organizing neural networks,” IEEE Visualization 99, 61-64 (1999).
68. M. Pauly, M. Gross, and L. P. Kobbelt, “Efficient simplification of point-sampled surfaces,” IEEE Visualization 2002, 163-170 (2002).
69. R. B. Rusu, “Semantic 3D object maps for everyday manipulation in human living environments,” KI-Künstliche Intelligenz 24, 345-348 (2010).