Electronic Thesis 110323125: Detailed Record




Name: Jen-Hao Yang (楊仁皓)    Department: Department of Mechanical Engineering
Thesis Title: Intelligent Human-Robot Collaboration System for Fine Product Assembly
(Chinese title: 精微產品組裝的智能人機協作系統)
Related Theses
★ 微波化學強化碳化矽表面拋光之研究
★ 智慧製造垂直系統整合之資產管理殼
★ 應用於智慧製造之網宇實體系統訓練資料異常檢知
★ 應用深度學習與物聯網評估CNC加工時間
★ 混合視覺與光達感測的感知融合機器人定位系統
★ 結合遺傳演算法與類神經網路之分散式機械結構最佳化系統之研究
★ 以資料分散式服務發展智慧產品與其系統之研究
★ YOLOv7 模型於小物件檢測之改良與應用
★ 應用分治法於刀具壽命預測模型之研究
★ 基於多通道 RSS 的可見光定位系統之設計與其訊號處理方法
★ 自動化工作站排程系統之設計與應用
★ 基於區塊鏈之去中心化製造執行系統
★ 應用於專案排程之混合蟻群演算法
  1. The author has agreed to make the electronic full text of this thesis openly available immediately.
  2. The open-access electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese): The manufacturing industry is facing labor shortages as well as the challenge of small order volumes with short delivery times. Conventional production methods can no longer meet these demands, and robotic arms have therefore taken on a key role. Collaborative robotic arms can work alongside humans and adapt to a low-volume, high-variety production model. They are characterized by safety, flexibility, and human-robot collaboration, and can share the same workspace with human operators. Nevertheless, their application still faces challenges, such as the lack of human behavior recognition and the need to integrate various systems.
The goal of this research is to address the efficiency of humans and robots working in a shared workspace. The proposed intelligent human-robot collaboration system uses a stereo camera for human behavior recognition and a wearable bracelet to detect human gestures. Deep learning models are then developed to recognize human behaviors from the data collected by the camera and the bracelet, and to control the robotic arm to assist human operation. The results demonstrate the feasibility and effectiveness of the proposed approach in recognizing human behaviors in real time, so the system can understand the user's motion intentions more comprehensively and let the robot intervene in the assembly at the right time. This design substantially improves the human behavior recognition capability of existing human-robot collaboration applications and advances the possibility of humans and robots working together in the same workspace.
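The abstract describes fusing stereo-camera skeleton data with bracelet gesture signals in a deep learning model. As a rough illustration only, not the thesis implementation, the sketch below shows one way such a recognizer could be structured with a bidirectional LSTM in PyTorch; the feature dimensions, number of action classes, and concatenation-based fusion are all assumptions made for this example.

```python
# A minimal sketch (assumed design, not the thesis code) of a sequence classifier that
# fuses per-frame 3D skeleton features (stereo camera) with bracelet gesture features.
import torch
import torch.nn as nn

class ActionRecognizer(nn.Module):
    def __init__(self, skeleton_dim=54, bracelet_dim=8, hidden=128, num_classes=6):
        super().__init__()
        # Bidirectional LSTM over a sliding window of fused per-frame features.
        self.lstm = nn.LSTM(input_size=skeleton_dim + bracelet_dim,
                            hidden_size=hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, skeleton_seq, bracelet_seq):
        # skeleton_seq: (batch, frames, skeleton_dim); bracelet_seq: (batch, frames, bracelet_dim)
        x = torch.cat([skeleton_seq, bracelet_seq], dim=-1)   # simple feature-level fusion
        out, _ = self.lstm(x)
        return self.head(out[:, -1])                          # one action label per window

# Usage example on synthetic data: classify a 30-frame window.
model = ActionRecognizer()
logits = model(torch.randn(1, 30, 54), torch.randn(1, 30, 8))
print(logits.argmax(dim=-1))   # predicted action class index
```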
Abstract (English): The manufacturing industry is facing labor shortages and the challenge of small order volumes with short delivery times. Conventional production methods can no longer cope with these demands, and robotic arms have become a key player. Collaborative robotic arms can work in tandem with humans and adapt to the 'low-volume, high-variety' production model. These robots are characterized by safety, flexibility, and human-robot collaboration, and are able to work with humans in the same workplace. Nevertheless, the application of collaborative robotic arms still faces some challenges, such as the lack of human behavior recognition and the need to integrate various systems.
The aim of this research is to address the efficiency of humans and robots working in a shared workspace. The proposed intelligent human-robot collaboration system uses a stereo camera for human behavior recognition and a wearable bracelet to detect human gestures. A deep learning model is then developed to recognize human behaviors from the data collected by the camera and the bracelet, and to control a robotic arm that assists human operations. The results demonstrate the feasibility and effectiveness of the proposed approach in recognizing human behaviors in real time. Therefore, the system can understand the user's movement intentions more comprehensively and allow the robot to intervene in the assembly at the right time. The proposed system addresses the lack of human behavior recognition in existing human-robot collaborative applications, enabling humans and robots to work together in the same workspace.
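To illustrate the "intervene at the right time" idea stated above, the sketch below shows a minimal, hypothetical decision rule: the robot is commanded only when the camera-based action label and the bracelet-based gesture label agree consistently and confidently over a short window. The action and gesture names, thresholds, and trigger commands are invented for illustration; the thesis's actual decision module may differ, and in the real system the trigger would presumably be sent to the arm controller (e.g., over ROS, which the thesis lists among its related technologies).

```python
# A minimal sketch (hypothetical labels and thresholds, not the thesis code) of a
# decision module that debounces recognition results before commanding the robot.
from collections import deque

TRIGGER_RULES = {                       # (action, gesture) pairs that request robot help
    ("reach_for_part", "open_palm"): "deliver_part",
    ("hold_assembly", "fist"): "hold_fixture",
}

class DecisionModule:
    def __init__(self, window=5, min_confidence=0.8):
        self.history = deque(maxlen=window)   # recent (action, gesture) observations
        self.min_confidence = min_confidence

    def update(self, action, action_conf, gesture, gesture_conf):
        """Return a robot command string, or None if the robot should stay idle."""
        if min(action_conf, gesture_conf) < self.min_confidence:
            self.history.clear()              # low-confidence frames reset the vote
            return None
        self.history.append((action, gesture))
        if len(self.history) == self.history.maxlen and len(set(self.history)) == 1:
            return TRIGGER_RULES.get(self.history[0])
        return None

# Usage example: five consistent, confident observations trigger a part delivery.
dm = DecisionModule()
for _ in range(5):
    cmd = dm.update("reach_for_part", 0.93, "open_palm", 0.88)
print(cmd)   # -> "deliver_part"
```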
Keywords (Chinese): ★ 人機協作
★ 手勢辨識
★ 人類行為辨識
★ 深度學習
Keywords (English): ★ Human-Robot Collaboration
★ Gesture Recognition
★ Human Behavior Recognition
★ Deep Learning
Table of Contents:
Abstract (Chinese) I
Abstract (English) II
Acknowledgements III
Table of Contents IV
List of Figures VI
List of Tables X
Chapter 1: Introduction 1
1-1 Research Background 1
1-2 Literature Review 5
1-3 Research Motivation 11
1-4 Thesis Organization 12
Chapter 2: Related Technologies 13
2-1 Robot Operating System (ROS) 13
2-2 Deep Learning Techniques 14
2-3 Human Action Recognition 23
Chapter 3: Research Methodology 24
3-1 System Architecture 24
3-2 Gesture Recognition Module 25
3-3 Human Action Recognition Module 31
3-4 Decision Module 33
Chapter 4: Experimental Design and Results 34
4-1 Experimental Equipment 34
4-2 Experimental Design 39
4-3 Model Training 50
4-4 Experimental Results 54
Chapter 5: Discussion 75
5-1 Human Action Recognition Module 75
5-2 Gesture Recognition Module 75
5-3 Decision Module 77
Chapter 6: Results and Discussion 78
6-1 Specific Contributions 78
6-2 Limitations and Scope 79
6-3 Future Outlook 79
References 81
Advisor: Chin-Te Lin (林錦德)    Review Date: 2023-07-27
