Thesis 111522150 — Detailed Record




Name: Shao-Yu Cheng (程劭予)    Department: Computer Science and Information Engineering (資訊工程學系)
Title: A Size Estimation Mechanism for Assisting Industrial Component Recognition
(用於輔助工業零件辨識之尺寸估算系統)
Related Theses
★ A Splay-Tree-Based Android Binder Driver
★ A Study on Applying Incremental Learning to the Interpretation of Multiple Crop Types
★ Applying Classification-Reconstruction Learning to Detect Novel Parcels in Aerial Photographs
★ Fine-Grained Classification of Real Industrial Component Images Using Textureless 3D CAD Models Combined with Length Measurement
★ A Dynamic Global Computing Platform Built on a Parallel Job System
★ Garbage Collection of Active Objects Using a Weighted Reference Counting Algorithm
★ A Dynamically Load-Balanced Computing Framework for Maximum Likelihood Estimation
★ A Study on Strategies for Dynamic P2P System Reorganization Using Multiple System Load Metrics
★ A Hadoop-Based Framework for Cloud Application Feature Extraction and Computation Monitoring
★ An Adaptive Computing Model for Large-Scale Dynamic Distributed Systems
★ A Cloud Service Platform Providing Elastic Virtual Data Centers
★ A Resource Management Center for an Elastic Virtual Machine-Room Cloud Service Platform
★ A Dynamically Adaptive Computing Framework for Auto-Provisioning Cloud Systems
★ Heuristic Scheduling Strategies for Linearly Dependent and Independent Jobs
★ An Efficient Distributed Hierarchical Clustering Algorithm for Large Datasets
★ A Multi-Agent Dynamically Adaptive Computation Management Framework for Hybrid Cloud Environments
Files: full text not available through the system (permanently restricted)
Abstract
With the rapid advancement of technology, many innovations have gradually been applied to the industrial sector, ushering in the Fourth Industrial Revolution, also known as Industry 4.0. Among these innovations, the combination of Artificial Intelligence (AI) and Augmented Reality (AR) has brought numerous advantages to industry. In modern factories, technicians can use AR glasses to view real-time identification results of components during assembly, which requires highly accurate models. However, relying solely on the appearance and features of components can lead to confusion when dealing with components that look similar but differ in size, resulting in decreased model accuracy. Additionally, external factors such as viewing angle, lighting conditions, and background can also affect the model's accuracy. To address this issue, we developed a system that, under conditions of minimal environmental interference, uses the HoloLens 2, which is equipped with a depth sensor, to capture RGB and depth images of a single component; the depth images provide the distance between the sensor and the object. From these images, our proposed method finds the minimum rectangle that encloses the component and estimates its actual width and length. We compare this method with the K-means clustering algorithm, which can also find the minimum enclosing rectangle and estimate dimensions. The purpose of finding the minimum rectangle is to standardize components of various shapes, providing a uniform size reference and reducing computational complexity. Finally, the estimated size information is used to post-process the model's recognition results. Experimental results show that our proposed method significantly outperforms the K-means clustering algorithm in size estimation and demonstrate that, with the assistance of size information, the model's accuracy is effectively improved.
Keywords (Chinese and English)
★ Augmented Reality (擴增實境)
★ HoloLens 2
★ Depth Image (深度影像)
★ Size Estimation (尺寸估算)
Table of Contents
Abstract (Chinese) i
Abstract (English) ii
Table of Contents iii
List of Figures iv
List of Tables v
1. Introduction 1
1-1 Research Background 1
1-2 Research Motivation and Objectives 1
1-3 Thesis Organization 3
2. Background 4
2-1 HoloLens 2 4
2-2 Unity 6
2-3 HoloLens 2 Sensor Streaming 6
2-4 MRTK3 7
2-5 OpenCV 7
3. Related Work 8
3-1 Applications and Importance of Depth Cameras for Understanding Object Size 8
3-2 Finding an Object's Enclosing Rectangle from Depth Images and Estimating Its Size 8
4. System Architecture and Workflow 10
4-1 System Architecture 10
4-2 System Workflow 11
5. Method for Estimating Object Size 14
5-1 Minimum Enclosing Rectangle of an Object 14
5-1-1 Grayscale Conversion 14
5-1-2 Gaussian Blur 15
5-1-3 Edge Detection 17
5-1-4 Dilation and Erosion 19
5-1-5 Finding the Minimum Enclosing Rectangle 21
5-2 Coordinate Transformation 23
5-3 Size Estimation 30
6. Experiments and Results 34
6-1 Experimental Environment and Setup 34
6-2 Comparison of Size Estimation Methods 35
6-3 Comparison of Model Accuracy 41
7. Conclusions and Future Research Directions 45
References 46
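The coordinate-transformation and size-estimation steps outlined above rest on the standard pinhole camera model: a length of p pixels observed at depth Z corresponds to a physical length of p·Z/f, where f is the focal length in pixels from the camera's intrinsic matrix. The sketch below shows that conversion; the focal length, depth, and pixel width are made-up illustrative values, not the HoloLens 2's actual calibration or the thesis's measurements.

```python
def pixel_length_to_metric(pixel_length: float, depth_m: float, focal_px: float) -> float:
    """Convert a length in pixels to meters via the pinhole camera model.

    pixel_length: side of the minimum enclosing rectangle, in pixels
    depth_m:      sensor-to-object distance from the depth image, in meters
    focal_px:     focal length in pixels (from the intrinsic matrix)
    """
    return pixel_length * depth_m / focal_px

# Example with assumed (not actual HoloLens 2) values:
fx = 1500.0        # hypothetical focal length in pixels
depth = 0.5        # object 0.5 m from the sensor
width_px = 300.0   # rectangle width measured in the image
width_m = pixel_length_to_metric(width_px, depth, fx)
print(round(width_m * 100, 1))  # prints 10.0 (centimeters)
```

Applying the same formula to both sides of the minimum rectangle yields the estimated width and length that the system feeds back into the recognition step.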
References
[1] A. G. Frank, L. S. Dalenogare, and N. F. Ayala, "Industry 4.0 technologies: Implementation patterns in manufacturing companies," International Journal of Production Economics, vol. 210, pp. 15-26, 2019.
[2] T. Kalsoom et al., "Impact of IoT on Manufacturing Industry 4.0: A new triangular systematic review," Sustainability, vol. 13, no. 22, p. 12506, 2021.
[3] S. Sundaram and A. Zeid, "Artificial intelligence-based smart quality inspection for manufacturing," Micromachines, vol. 14, no. 3, p. 570, 2023.
[4] M. Javaid, A. Haleem, R. P. Singh, and R. Suman, "Substantial capabilities of robotics in enhancing industry 4.0 implementation," Cognitive Robotics, vol. 1, pp. 58-75, 2021.
[5] V. Reljić, I. Milenković, S. Dudić, J. Šulc, and B. Bajči, "Augmented reality applications in industry 4.0 environment," Applied Sciences, vol. 11, no. 12, p. 5592, 2021.
[6] S. K. Jagatheesaperumal, M. Rahouti, K. Ahmad, A. Al-Fuqaha, and M. Guizani, "The duo of artificial intelligence and big data for industry 4.0: Applications, techniques, challenges, and future research directions," IEEE Internet of Things Journal, vol. 9, no. 15, pp. 12861-12885, 2021.
[7] E. Marino, L. Barbieri, F. Bruno, and M. Muzzupappa, "Assessing user performance in augmented reality assembly guidance for industry 4.0 operators," Computers in Industry, vol. 157, p. 104085, 2024.
[8] H. Subakti and J. R. Jiang, "Indoor augmented reality using deep learning for industry 4.0 smart factories," in 2018 IEEE 42nd Annual Computer Software and Applications Conference (COMPSAC), vol. 2, pp. 63-68, IEEE, July 2018.
[9] T. Akhmetov, "Industrial Safety using Augmented Reality and Artificial Intelligence," Doctoral dissertation, Nazarbayev University School of Engineering and Digital Sciences, 2023.
[10] J. S. Devagiri, S. Paheding, Q. Niyaz, X. Yang, and S. Smith, "Augmented Reality and Artificial Intelligence in industry: Trends, tools, and future challenges," Expert Systems with Applications, vol. 207, p. 118002, 2022.
[11] A. Malta, M. Mendes, and T. Farinha, "Augmented reality maintenance assistant using YOLOv5," Applied Sciences, vol. 11, no. 11, p. 4758, 2021.
[12] I. K. Mirani, C. Tianhua, M. A. A. Khan, S. M. Aamir, and W. Menhaj, "Object Recognition in Different Lighting Conditions at Various Angles by Deep Learning Method," arXiv preprint arXiv:2210.09618, 2022.
[13] H. Patel, "A Comprehensive Study on Object Detection Techniques in Unconstrained Environments," arXiv preprint arXiv:2304.05295, 2023.
[14] "HoloLens 2 Hardware," Microsoft, 2023. [Online]. Available: https://www.microsoft.com/en-us/hololens/hardware. [Accessed: Jun. 20, 2024].
[15] "Unity," Unity Technologies, 2023. [Online]. Available: https://unity.com/. [Accessed: Jun. 20, 2024].
[16] J. C. Dibene and E. Dunn, "HoloLens 2 Sensor Streaming," arXiv preprint arXiv:2211.02648, 2022.
[17] J. C. Dibene, "hl2ss," GitHub, 2023. [Online]. Available: https://github.com/jdibenes/hl2ss. [Accessed: Jun. 20, 2024].
[18] "MRTK3 Overview," Microsoft, 2023. [Online]. Available: https://learn.microsoft.com/en-us/windows/mixed-reality/mrtk-unity/mrtk3-overview/. [Accessed: Jun. 20, 2024].
[19] "OpenCV Documentation," OpenCV, 2023. [Online]. Available: https://docs.opencv.org/4.x/. [Accessed: Jun. 20, 2024].
[20] F. P. W. Lo, Y. Sun, J. Qiu, and B. Lo, "Image-based food classification and volume estimation for dietary assessment: A review," IEEE Journal of Biomedical and Health Informatics, vol. 24, no. 7, pp. 1926-1939, 2020.
[21] D. S. Lee and S. K. Kwon, "Amount Estimation Method for Food Intake Based on Color and Depth Images through Deep Learning," Sensors, vol. 24, no. 7, p. 2044, 2024.
[22] S. SrirangamSridharan, O. Ulutan, S. N. T. Priyo, S. Rallapalli, and M. Srivatsa, "Object localization and size estimation from RGB-D images," arXiv preprint arXiv:1808.00641, 2018.
[23] "Color Conversions," OpenCV, 2023. [Online]. Available: https://docs.opencv.org/4.x/de/d25/imgproc_color_conversions.html. [Accessed: Jun. 20, 2024].
[24] "Image Filtering," OpenCV, 2023. [Online]. Available: https://docs.opencv.org/4.x/d4/d13/tutorial_py_filtering.html. [Accessed: Jun. 20, 2024].
[25] "Canny Edge Detection," OpenCV, 2023. [Online]. Available: https://docs.opencv.org/4.x/da/d22/tutorial_py_canny.html. [Accessed: Jun. 20, 2024].
[26] "Erosion and Dilation," OpenCV, 2023. [Online]. Available: https://docs.opencv.org/4.x/db/df6/tutorial_erosion_dilatation.html. [Accessed: Jun. 20, 2024].
[27] Z. Zhao, Y. Zhu, Y. Li, Z. Qiu, Y. Luo, C. Xie, and Z. Zhang, "Multi-camera-based universal measurement method for 6-DOF of rigid bodies in world coordinate system," Sensors, vol. 20, no. 19, p. 5547, 2020.
[28] D. Wang, J. Yue, P. Chai, H. Sun, and F. Li, "Calibration of camera internal parameters based on grey wolf optimization improved by Levy flight and mutation," Scientific Reports, vol. 12, no. 1, p. 7828, 2022.
[29] D. Ungureanu, F. Bogo, S. Galliani, P. Sama, X. Duan, C. Meekhof, et al., "HoloLens 2 Research Mode as a Tool for Computer Vision Research," arXiv preprint arXiv:2008.11239, 2020.
[30] "Issue #47," hl2ss GitHub repository, 2023. [Online]. Available: https://github.com/jdibenes/hl2ss/issues/47. [Accessed: Jun. 20, 2024].
Advisor: 王尉任    Date of Approval: 2024-07-24
