Thesis/Dissertation 985401015: Detailed Record




Name  Jun-Wei Chang (張峻瑋)    Graduating Department  Department of Electrical Engineering
Thesis Title  電腦視覺應用於智慧型控制與人體姿態辨識
(Implementation of computer vision for intelligent control and posture recognition)
Related Theses
★ 直接甲醇燃料電池混合供電系統之控制研究
★ 利用折射率檢測法在水耕植物之水質檢測研究
★ DSP主控之模型車自動導控系統
★ 旋轉式倒單擺動作控制之再設計
★ 高速公路上下匝道燈號之模糊控制決策
★ 模糊集合之模糊度探討
★ 雙質量彈簧連結系統運動控制性能之再改良
★ 桌上曲棍球之影像視覺系統
★ 桌上曲棍球之機器人攻防控制
★ 模型直昇機姿態控制
★ 模糊控制系統的穩定性分析及設計
★ 門禁監控即時辨識系統
★ 桌上曲棍球:人與機械手對打
★ 麻將牌辨識系統
★ 相關誤差神經網路之應用於輻射量測植被和土壤含水量
★ 三節式機器人之站立控制
Files
  1. The electronic full text of this thesis is approved for immediate open access.
  2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Abstract (Chinese)  This dissertation is based on research in computer vision techniques and implements three types of applications: human posture recognition, object grasping with a robot arm, and terrain recognition for a wheeled mobile robot. In the posture recognition part, a Kinect depth image sensor captures the images, and image processing techniques such as image subtraction, morphological operations, and connected-component labeling are applied as preprocessing to segment the human silhouette. The centroid of the silhouette then divides the body into upper and lower halves, and the ratio of their maximum widths identifies whether the current posture is kneeling. Next, star skeletonization extracts the feature vector of the body, and a Learning Vector Quantization (LVQ) neural network classifies the current posture as sitting, stooping, lying, standing, or side sitting. Finally, because the feature vectors of side sitting and standing are very similar and difficult for the LVQ network to separate, the body's width-to-height ratio is further compared to confirm whether the posture is standing or side sitting. In the study of object grasping with a robot arm, the arm's structure and backlash make it difficult to move the gripper accurately to the target position. A stereo vision device is therefore used to measure the gripper's actual position in space, and a fuzzy controller is designed to compensate the arm's position error and improve control accuracy. With the proposed compensation method, the robot arm moves accurately to the target and successfully grasps the object. The third study uses an XtionPro depth image sensor to capture images of the ground in front of the robot. Through geometric analysis of the image, a virtual flat-terrain image is constructed as the reference for evaluating the flatness of the terrain ahead. The virtual image is compared with the actually captured ground image, and pixels showing large differences are classified as uneven regions. In this way the sensor detects the positions of obstacles and hollows ahead and, together with the movement strategy proposed in this study, drives the robot safely to the goal position.
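The silhouette segmentation and kneeling check described above can be illustrated with a short sketch. The following Python/OpenCV snippet is a minimal, hypothetical realization, not the dissertation's code: the depth frame is assumed to be pre-normalized to 8 bits, and the threshold value, kernel size, KNEEL_RATIO bound, and the direction of the width comparison are all placeholder assumptions.

import cv2
import numpy as np

KNEEL_RATIO = 1.5  # hypothetical bound on the lower/upper body width ratio

def segment_silhouette(depth, background):
    """Return a binary mask of the largest foreground blob in a depth frame."""
    # Image subtraction against a stored background frame (8-bit assumed).
    diff = cv2.absdiff(depth, background)
    _, mask = cv2.threshold(diff, 30, 255, cv2.THRESH_BINARY)  # 30: assumed tolerance
    # Morphological opening removes speckle noise.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)
    # Connected-component labeling keeps only the largest blob (the person).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    if n < 2:
        return np.zeros_like(mask)
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # label 0 is the background
    return (labels == largest).astype(np.uint8) * 255

def is_kneeling(mask):
    """Split the silhouette at its centroid row and compare maximum widths."""
    m = cv2.moments(mask, binaryImage=True)
    if m["m00"] == 0:
        return False
    cy = int(m["m01"] / m["m00"])            # centroid row
    widths = np.count_nonzero(mask, axis=1)  # silhouette width of every row
    upper = widths[:cy].max(initial=0)
    lower = widths[cy:].max(initial=0)
    # Assumed criterion: a kneeling silhouette shows a markedly wider lower
    # half; the dissertation's actual comparison and threshold may differ.
    return upper > 0 and lower / upper > KNEEL_RATIO

In this sketch, a frame that passes is_kneeling would be reported as kneeling directly; otherwise the star-skeleton feature vector would be extracted and fed to the LVQ network, with the width-to-height check reserved for separating standing from side sitting.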
Abstract (English)  This dissertation presents three studies based on computer vision: human posture recognition, object grasping by a robot arm, and terrain traversability estimation for a wheeled mobile robot. In the first study, an effective posture recognition method is proposed based on depth images captured by a Kinect sensor. Several image processing techniques are applied to extract features of the human posture, such as body-width ratios and the star skeleton. The ratio of the maximum upper-body width to the maximum lower-body width quickly determines whether the current posture is kneeling. A Learning Vector Quantization (LVQ) neural network then recognizes four categories of posture: forward sitting, stooping, lying, and other. A final check distinguishes standing from non-forward (side) sitting. The second study utilizes stereo vision to enhance the positioning accuracy of a robot arm without mounting any additional sensors on the arm. Owing to backlash, the gripper cannot reach the target object accurately; stereo vision is therefore applied to measure the gripper's actual position, and a fuzzy controller compensates the position error. After compensation, the robot arm successfully grasps the target object, as demonstrated in the experiments. The third study develops and implements a fast terrain traversability estimation method using an XtionPro depth image sensor. A virtual flat-terrain image is created and compared with the captured image of the upcoming terrain to extract terrain features, from which obstacles and hollows are detected. A movement strategy is then proposed that enables the robot to react to obstacles and hollows and reach the goal position.
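The flat-terrain comparison in the third study can be sketched in the same hedged spirit. The snippet below synthesizes the depth an ideally flat floor would return for each image row from simple ray geometry and flags pixels that deviate from it. HEIGHT_M, TILT_RAD, VFOV_RAD, and TOL_M are placeholder values, and the code treats the reported depth as distance along each ray, which may differ from the sensor's actual depth convention and from the dissertation's calibrated geometry.

import numpy as np

HEIGHT_M = 0.40              # assumed sensor mounting height (m)
TILT_RAD = np.deg2rad(30.0)  # assumed downward tilt of the sensor
VFOV_RAD = np.deg2rad(45.0)  # assumed vertical field of view
TOL_M = 0.05                 # assumed flatness tolerance (m)

def virtual_flat_depth(rows):
    """Per-row depth (m) an ideally flat floor would return."""
    # Each image row sees a ray at a different angle below the horizontal.
    row_angles = TILT_RAD + np.linspace(-VFOV_RAD / 2, VFOV_RAD / 2, rows)
    # Rays at or above the horizon never hit the floor; clipping avoids
    # division by zero and gives them a very large expected depth.
    return HEIGHT_M / np.sin(np.clip(row_angles, 1e-3, None))

def classify_terrain(depth_img):
    """Label each pixel: 0 = flat, 1 = obstacle (nearer), 2 = hollow (farther)."""
    rows, _ = depth_img.shape
    expected = virtual_flat_depth(rows)[:, None]  # broadcast down each column
    labels = np.zeros(depth_img.shape, dtype=np.uint8)
    labels[depth_img < expected - TOL_M] = 1      # nearer than the virtual floor
    labels[depth_img > expected + TOL_M] = 2      # farther than the virtual floor
    return labels

Pixels labeled 1 or 2 mark candidate obstacles and hollows whose positions would feed the movement strategy; the dissertation additionally processes the depth image before the comparison, which this sketch omits.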
Keywords (Chinese)  ★ 影像處理 (image processing)
★ 模糊控制 (fuzzy control)
★ 姿態辨識 (posture recognition)
★ 機器手臂 (robot arm)
★ 移動式機器人 (mobile robot)
Keywords (English)  ★ Image processing
★ fuzzy control
★ posture recognition
★ robot arm
★ mobile robot
Table of Contents  Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
Contents IV
List of Figures VII
List of Tables XI
Chapter 1  Introduction 1
1.1 Background and Motivation 1
1.2 Organization and Main Tasks 3
Chapter 2  Human Posture Recognition Based on Images Captured by the Kinect Sensor 5
2.1 Introduction 5
2.2 Hardware 6
2.3 Depth Image Processing 7
2.3.1 Human Silhouette Segmentation 7
2.3.2 Feature Extraction 8
2.4 LVQ Neural Network and a Final Identification 13
2.4.1 Inputs Arrangement in LVQ Neural Network 14
2.4.2 Feature Vectors Normalization 18
2.4.3 The Operation of the LVQ Network 19
2.4.4 One More Check 20
2.4.5 Procedure of the Recognition Process 23
2.5 Experiment Results and Discussion 24
2.6 Summary 30
Chapter 3  Implementation of an Object-Grasping Robot Arm Using Stereo Vision Measurement and Fuzzy Control 31
3.1 Introduction 31
3.2 Description of Experimental Platform 32
3.2.1 The Robot Arm 32
3.2.2 The Stereo Vision Device 34
3.2.3 The Laptop Computer and Software 34
3.2.4 The Batteries 35
3.3 Stereo Vision Method for Object Position Measurement 35
3.3.1 Target Object Identification 36
3.3.2 Target Object Position Measurement 37
3.4 Robot Arm Inverse Kinematics Analysis 39
3.5 Fuzzy Control for Position Error Compensation 43
3.6 Experimental Results and Discussion 46
3.7 Summary 53
Chapter 4  Uneven Terrain Estimation and Movement Strategy Planning for a Real Mobile Robot 54
4.1 Introduction 54
4.2 Hardware 55
4.3 Terrain Surface Feature Estimation 56
4.3.1 Analysis of the Depth Image and Its Coordinates 56
4.3.2 Image Processing for the Depth Image 60
4.3.3 Terrain Estimation 63
4.3.4 Obstacle Judgment 68
4.4 The Movement Strategy of the Robot 69
4.5 Fuzzy Control 74
4.6 Experiment and Simulation Results 75
4.6.1 Terrain Estimation 76
4.6.2 Movement Trajectory Simulation 79
4.6.3 The Mobile Robot Motion Experiment 81
4.7 Summary 83
Chapter 5  Conclusions and Future Prospects 84
5.1 Conclusions 84
5.2 Future Prospects 85
References 87
Publication list 95
Advisor  Wen-June Wang (王文俊)    Date of Approval  2016-8-19
