Master's/Doctoral Thesis 107552024: Detailed Record




Name: Li-An Chung (鍾禮安)    Department: Computer Science and Information Engineering (In-Service Master's Program)
Thesis Title: Deep Neural Network-based Smart Finger Sleeve Applying to Assembly Activity Recognition
(Chinese title: 基於深度神經網路的智慧指套應用於裝配動作辨識)
Related Theses
★ An Intelligent Controller Development Platform Integrating the GRAFCET Virtual Machine
★ Design and Implementation of a Distributed Industrial Electronic Kanban Network System
★ Design and Implementation of a Dual-Touch Screen Based on a Two-Camera Vision System
★ An Embedded Computing Platform for Intelligent Robots
★ An Embedded System for Real-Time Moving Object Detection and Tracking
★ A Multiprocessor Architecture and Distributed Control Algorithm for Solid-State Drives
★ A Human-Machine Interaction System Based on Stereo-Vision Gesture Recognition
★ Robot System-on-Chip Design Integrating Biomimetic Intelligent Behavior Control
★ Design and Implementation of an Embedded Wireless Image Sensor Network
★ A License Plate Recognition System Based on a Dual-Core Processor
★ Continuous 3D Gesture Recognition Based on Stereo Vision
★ Design and Hardware Implementation of a Miniature, Ultra-Low-Power Wireless Sensor Network Controller
★ Real-Time Face Detection, Tracking, and Recognition in Streaming Video: An Embedded System Design
★ Embedded Hardware Design of a Fast Stereo Vision System
★ Design and Implementation of a Real-Time Continuous Image Stitching System
★ An Embedded Gait Recognition System Based on a Dual-Core Platform
Files: Full text available in the system after 2025-7-20
Abstract (Chinese): When products are mass-produced in a factory, production-line operators are assigned to assemble them according to product specifications and output volume. However, human operation errors during assembly can crack products or leave screws missing, which degrades outgoing product quality and increases manufacturing cost.
This thesis therefore proposes having the operator wear a finger sleeve with a nine-axis sensor attached while holding an automatic screwdriver, so that motion-sensing data can be collected. The sensing data are filtered with a Kalman filter and segmented into actions by zero-crossing rate. A Mahony filter then computes an error model to correct the sensor data and produce quaternions, which are used as features for a one-dimensional convolutional neural network that classifies six basic assembly actions: (1) driving a screw at the upper right, (2) driving a screw at the lower right, (3) driving a screw diagonally upward to the right, (4) driving a screw diagonally downward to the left, (5) fastening a screw downward, and (6) raising and lowering the hand. The order in which these six actions occur is used to judge whether the product is assembled correctly.
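As a rough illustration of the zero-crossing-rate segmentation step mentioned above (a sketch only; the window length, hop size, dead-band, threshold, and synthetic signal are assumptions, not values taken from the thesis), a sliding-window zero-crossing rate can be computed on a filtered sensor axis and the windows whose rate exceeds a threshold kept as candidate action segments:

```python
import numpy as np

def zero_crossing_rate(x, win=50, hop=25, deadband=0.05):
    """Sliding-window zero-crossing rate of a 1-D signal.

    A crossing is counted only when both neighbouring samples lie
    outside the +/- deadband, so sensor noise around zero during
    rest does not register as motion.
    """
    rates = []
    for start in range(0, len(x) - win + 1, hop):
        w = x[start:start + win]
        big = np.abs(w) > deadband
        cross = (np.signbit(w[:-1]) != np.signbit(w[1:])) & big[:-1] & big[1:]
        rates.append(np.sum(cross) / (win - 1))
    return np.array(rates)

def segment_by_zcr(x, win=50, hop=25, deadband=0.05, thresh=0.05):
    """Return (start, end) sample ranges of windows whose
    zero-crossing rate exceeds `thresh` (candidate action segments)."""
    rates = zero_crossing_rate(x, win, hop, deadband)
    return [(i * hop, i * hop + win) for i, r in enumerate(rates) if r > thresh]

if __name__ == "__main__":
    # Synthetic stand-in for a filtered gyroscope axis: rest with small
    # noise, plus one oscillatory burst representing a screwing motion.
    rng = np.random.default_rng(0)
    signal = 0.01 * rng.standard_normal(1000)
    t = np.arange(200) / 100.0
    signal[400:600] += np.sin(2 * np.pi * 5 * t)
    print(segment_by_zcr(signal))
```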
The experimental results show that calibrating the sensors with the Mahony filter and then computing quaternions as features improves the recognition rate by 72% for the one-dimensional convolutional neural network (1D CNN), by 30% for the long short-term memory model (LSTM), and by 70% for the CNN-LSTM. With the calibrated data, the 1D CNN reaches an average recognition rate of 94%, which is 39% higher than the LSTM model and 3% higher than the CNN-LSTM. This method can therefore prevent operators' screw-fastening errors, reduce the cost of training operators' motions, and improve the currently insufficient intelligent management of production-line staff.
Abstract (English): When mass-producing products in a factory, production-line operators are assigned to carry out product assembly according to product specifications and output volume. However, when production-line employees assemble products, human operation errors are prone to cause product cracking or missing screws, which in turn affects overall product quality and manufacturing cost.
Therefore, this paper proposes that the operator wear a finger sleeve with a nine-axis sensor attached and hold an automatic screwdriver to collect motion-sensing data. The sensing data are filtered with a Kalman filter and segmented into actions by zero-crossing rate. A Mahony filter is then used to compute an error model that corrects the sensor data and yields quaternions, which serve as features for a one-dimensional convolutional neural network that classifies six basic operator assembly actions: (1) driving a screw at the upper right, (2) driving a screw at the lower right, (3) driving a screw diagonally upward to the right, (4) driving a screw diagonally downward to the left, (5) fastening a screw downward, and (6) raising and lowering the hand. The sequence of these six assembly actions is used to judge whether the assembled product is correct.
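For reference on the Mahony-filter correction step above, the following is a minimal, generic accelerometer-plus-gyroscope Mahony update in Python (the gain, sample rate, and proportional-only form are assumptions, not the error model implemented in the thesis): the gravity direction predicted by the current quaternion is compared with the measured one, and the resulting error corrects the gyroscope rates before the quaternion is integrated.

```python
import numpy as np

def mahony_update(q, gyro, accel, dt, kp=1.0):
    """One proportional-only Mahony step fusing gyroscope (rad/s) and
    accelerometer (direction only) into quaternion q = [w, x, y, z].
    Returns the updated, normalised quaternion."""
    w, x, y, z = q

    # Gravity direction in the sensor frame predicted by q
    # (third row of the body-to-world rotation matrix).
    v = np.array([
        2.0 * (x * z - w * y),
        2.0 * (w * x + y * z),
        w * w - x * x - y * y + z * z,
    ])

    # Error = cross product of measured and estimated gravity directions.
    a = accel / np.linalg.norm(accel)
    e = np.cross(a, v)

    # Feed the error back into the gyro rates (proportional term only;
    # the full filter also integrates e to estimate gyro bias).
    g = gyro + kp * e

    # Quaternion derivative qdot = 0.5 * q (x) [0, g], then Euler integration.
    qdot = 0.5 * np.array([
        -x * g[0] - y * g[1] - z * g[2],
         w * g[0] + y * g[2] - z * g[1],
         w * g[1] - x * g[2] + z * g[0],
         w * g[2] + x * g[1] - y * g[0],
    ])
    q = q + qdot * dt
    return q / np.linalg.norm(q)

# Example: integrate a constant rotation about the z-axis at 100 Hz.
q = np.array([1.0, 0.0, 0.0, 0.0])
for _ in range(100):
    q = mahony_update(q, gyro=np.array([0.0, 0.0, 0.5]),
                      accel=np.array([0.0, 0.0, 9.81]), dt=0.01)
print(q)
```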
The experimental results show that calibrating the sensors with the Mahony filter and then computing quaternions as features increases the recognition rate of the one-dimensional convolutional neural network (1D CNN) by 72%, of the long short-term memory model (LSTM) by 30%, and of the CNN-LSTM by 70%. On the calibrated data, the 1D CNN achieves an average recognition rate of 94%, which is 39% higher than the LSTM model and 3% higher than the CNN-LSTM. Using this method can prevent operators' screw-fastening errors, reduce the cost of operator training, and address the lack of intelligent management of production-line employees.
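For orientation, a minimal Keras sketch of a 1D CNN of the kind compared above is given below; the layer sizes, the 100-sample window length, and the use of the four quaternion components as input channels are assumptions rather than the architecture reported in the thesis.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

NUM_CLASSES = 6     # the six basic assembly actions
WINDOW_LEN = 100    # assumed samples per segmented action window
NUM_CHANNELS = 4    # quaternion components (w, x, y, z)

def build_1d_cnn():
    """Small 1D CNN over fixed-length quaternion time series."""
    return keras.Sequential([
        keras.Input(shape=(WINDOW_LEN, NUM_CHANNELS)),
        layers.Conv1D(32, kernel_size=5, activation="relu"),
        layers.MaxPooling1D(pool_size=2),
        layers.Conv1D(64, kernel_size=5, activation="relu"),
        layers.GlobalAveragePooling1D(),
        layers.Dense(64, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_1d_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Dummy data with the expected shapes; replace with real segmented,
# Mahony-corrected quaternion windows and their action labels.
x_train = np.random.randn(320, WINDOW_LEN, NUM_CHANNELS).astype("float32")
y_train = np.random.randint(0, NUM_CLASSES, size=(320,))
model.fit(x_train, y_train, epochs=2, batch_size=32, verbose=0)
```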
Keywords (Chinese) ★ nine-axis controller
★ deep learning
Keywords (English) ★ CNN
★ Deep learning
Table of Contents
Chinese Abstract i
English Abstract ii
Acknowledgments iii
Table of Contents iv
List of Figures vii
List of Tables ix
1. Introduction 1
1-1 Research Background 1
1-2 Research Objectives 4
1-3 Thesis Organization 5
2. Literature Review 6
2-1 Recurrent Neural Networks 6
2-2 Convolutional Neural Networks 7
2-2-1 One-Dimensional Convolutional Neural Networks 8
2-3 Long Short-Term Memory and Bidirectional LSTM Networks 9
2-4 Kalman Filter 12
2-5 Mahony Filter Quaternion Fusion 12
3. Screw-Fastening Action Recognition System 15
3-1 High-Level Embedded System Design Methodology 15
3-1-1 IDEF0 17
3-1-2 GRAFCET 18
3-2 Screw-Fastening Action Recognition System IDEF0 20
3-2-1 Sensor Calibration Module 21
3-2-2 Sensor Fusion Module 22
3-2-3 Sensor Motion Segmentation Module 22
3-2-4 Action Recognition Module 23
3-3 Screw-Fastening Action Recognition System GRAFCET 24
3-3-1 Sensor Calibration GRAFCET 25
3-3-2 Sensor Fusion GRAFCET 27
3-3-3 Sensor Motion Segmentation GRAFCET 28
3-3-4 Action Recognition Module GRAFCET 29
4. Experimental Results and Analysis 30
4-1 Embedded Hardware and Software Development Platform 30
4-1-1 MetaMotion C Sensor 30
4-1-2 Deep Learning Development Platform 32
4-2 Assembly Action Sensing Database Construction Process 33
4-3 Assembly Action Sensing Database Preprocessing 37
4-3-1 Sensor Calibration with the Mahony Filter 37
4-3-2 Action Sensing Database 43
4-3-3 Kalman-Filtered Motion Segmentation 45
4-4 Evaluation Metrics 49
4-5 Deep Learning Experiment Comparison 51
4-5-1 1D CNN Experiment Comparison 51
4-5-2 LSTM Experiment Comparison 60
4-5-3 Recognition Rate Comparison Before and After Calibration 63
5. Conclusion and Future Work 65
5-1 Conclusion 65
5-2 Future Work 66
References 66
References
[1] Bastian C. Müller, The Duy Nguyen, Quang-Vinh Dang, Bui Minh Duc, Günther Seliger, Jörg Krüger and Holger Kohl, "Motion tracking applied in assembly for worker training in different locations," The 23rd CIRP Conference on Life Cycle Engineering, Volume 48, pp. 460-465, 2016
[2] Gartner, "Gartner Says Global End-User Spending on Wearable Devices to Total $52 Billion in 2020," 2019 [Online]. Available: https://www.gartner.com/en/newsroom/press-releases/2019-10-30-gartner-says-global-end-user-spending-on-wearable-dev
[3] Yi-Chen Huang, Tsung-Long Chen, Bo-Chun Chiu, Chih-Wei Yi, Chung-Wei Lin, Yu-Jung Yeh and Lun-Chia Kuo, "Calculate Golf Swing Trajectories from IMU Sensing Data," 2012 41st International Conference on Parallel Processing Workshops, Pittsburgh, PA, pp. 505-513, 2012
[4] D. Anguita, A. Ghio, L. Oneto and X. Parra Perez, "A public domain dataset for human activity recognition using smartphones," Proceedings of the 21st European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN), pp. 437-442, 2013
[5] Y. Guo, L. Yang, X. Ding, J. Han and Y. Liu, "OpenSesame: Unlocking smart phone through handshaking biometrics," 2013 Proceedings IEEE INFOCOM, Turin, pp. 365-369, 2013
[6] F. Pilati, M. Faccio, M. Gamberi and A. Regattieri, "Learning manual assembly through real-time motion capture for operator training with augmented reality," 10th Conference on Learning Factories, pp. 189-195, 2020
[7] C. Munroe, Y. Meng, H. Yanco and M. Begum, "Augmented reality eyeglasses for promoting home-based rehabilitation for children with cerebral palsy," 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), Christchurch, pp. 565-565, 2016
[8] M. T. D. G. N. M. Karunarathna, C. S. A. Siriwardana and M. Y. R. Dharmawardana, "An Activity Analysis to Investigate the Root Causes of Worker Productivity Losses in Sri Lankan Building Construction Project," 2019 Moratuwa Engineering Research Conference (MERCon), Moratuwa, Sri Lanka, pp. 412-417, 2019
[9] Y.-S. Lee and S.-B. Cho, "Activity Recognition Using Hierarchical Hidden Markov Models on a Smartphone with 3D Accelerometer," Hybrid Artificial Intelligent Systems: 6th International Conference, pp. 460-467, 2011
[10] S. Dara and P. Tumma, "Feature Extraction By Using Deep Learning: A Survey," 2018 Second International Conference on Electronics, Communication and Aerospace Technology (ICECA), Coimbatore, pp. 1795-1801, 2018
[11] F. J. Ordóñez and D. Roggen, "Deep Convolutional and LSTM Recurrent Neural Networks for Multimodal Wearable Activity Recognition," Sensors, vol. 16, p. 115, 2016
[12] N. Michel, "Recurrent neural networks: overview and perspectives," Proceedings of the 2003 International Symposium on Circuits and Systems, pp. III-III, 2003
[13] S. Albawi, T. A. Mohammed and S. Al-Zawi, "Understanding of a convolutional neural network," 2017 International Conference on Engineering and Technology (ICET), pp. 1-6, 2017
[14] R. R. Drumond, B. A. D. Marques, C. N. Vasconcelos and E. W. G. Clua, "PEEK - An LSTM Recurrent Network for Motion Classification from Sparse Data," VISIGRAPP, 2018
[15] S. Albawi, T. A. Mohammed and S. Al-Zawi, "Understanding of a convolutional neural network," 2017 International Conference on Engineering and Technology, pp. 1-6, 2017
[16] R. E. Kalman, "A new approach to linear filtering and prediction problems," Journal of Basic Engineering, Volume 82, no. 1, pp. 35-45, 1960
[17] MbientLab, MetaWear C product specification v1.0, 2018 [Online]. Available: https://mbientlab.com/documents/MetaWearC-CPRO-PS.pdf
[18] Keras, Keras API, 2020 [Online]. Available: https://keras.io/api/
[19] D. Tedaldi, A. Pretto and E. Menegatti, "A robust and easy to implement method for IMU calibration without external equipments," 2014 IEEE International Conference on Robotics and Automation (ICRA), pp. 3042-3049, 2014
[20] H. El-Ghaish, M. E. Hussien, A. Shoukry and R. Onai, "Human Action Recognition Based on Integrating Body Pose, Part Shape, and Motion," IEEE Access, vol. 6, pp. 49040-49055, 2018
[21] P. Elias, J. Sedmidubsky and P. Zezula, "Understanding the Gap between 2D and 3D Skeleton-Based Action Recognition," 2019 IEEE International Symposium on Multimedia (ISM), pp. 192-1923, 2019
[22] S. Huang, J. Tang, J. Dai and Y. Wang, "Signal Status Recognition Based on 1DCNN and Its Feature Extraction Mechanism Analysis," Sensors, vol. 19, 2019
[23] F. Hernández, L. F. Suárez, J. Villamizar and M. Altuve, "Human Activity Recognition on Smartphones Using a Bidirectional LSTM Network," 2019 XXII Symposium on Image, Signal Processing and Artificial Vision (STSIVA), Bucaramanga, Colombia, pp. 1-5, 2019
Advisor: Chen Qing-han (陳慶瀚)    Date of Approval: 2020-8-24
