Master's/Doctoral Thesis 106521083 Detailed Record




Author  Yuan-Hao Jheng (鄭元豪)    Department  Department of Electrical Engineering
Thesis Title  Deep-Learning-Based Feature Reconstruction and Deformation of 3D Medical Protectors
Related Theses
★ Control of a Hybrid Power Supply System with a Direct Methanol Fuel Cell
★ Water Quality Inspection for Hydroponic Plants Using Refractive-Index Measurement
★ DSP-Based Automatic Guidance and Control System for a Model Car
★ Redesign of the Motion Control of a Rotary Inverted Pendulum
★ Fuzzy Control Decisions for Freeway On-Ramp and Off-Ramp Signals
★ A Study on the Fuzziness of Fuzzy Sets
★ Further Improvement of the Motion Control Performance of a Dual-Mass Spring-Coupled System
★ Machine Vision System for Air Hockey
★ Robotic Offense and Defense Control for Air Hockey
★ Attitude Control of a Model Helicopter
★ Stability Analysis and Design of Fuzzy Control Systems
★ Real-Time Recognition System for Access Control and Monitoring
★ Air Hockey: Human versus Robot Arm
★ Mahjong Tile Recognition System
★ Application of a Correlation-Error Neural Network to Radiometric Measurement of Vegetation and Soil Water Content
★ Standing Control of a Three-Link Robot
  1. The electronic full text of this thesis is authorized for immediate open access.
  2. Once the open-access date has been reached, the electronic full text is licensed only for searching, reading, and printing by individual users for non-profit academic research.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast the work without authorization.

Abstract (Chinese)  This thesis designs a deep-learning-based network architecture to reconstruct and deform 3D medical protectors, building the corresponding protector for three different conditions: de Quervain syndrome and carpal tunnel syndrome for the hand, and insoles for the foot. At present, 3D medical protectors are drawn manually for each patient's differently sized hand or foot, which is time-consuming and labor-intensive. We therefore use deep learning to train an AutoEncoder network that automatically constructs a 3D medical protector matching the size of the input data, removing the intermediate manual drawing step and making protector production accurate and efficient.
We use 3D scans of the author's own hands and feet as training data and manually draw the corresponding 3D medical protectors as the training ground truth. Points are then sampled uniformly from the surfaces of both, so that the training data and the ground truth enter the AutoEncoder as point clouds. During encoding and decoding, the network learns the main features of the latent code in the middle layer, and as training proceeds the decoder's reconstructions come closer and closer to the ground truth; when training is finished, the learned weights are kept. We then scale and rotate the 3D scans of the author's own hands and feet to create test data and feed them, again as point clouds, into the trained AutoEncoder. Using the saved weights, the network reconstructs and deforms a 3D medical protector, and its output is a point-cloud protector that fits the size of the test data. To assess the quality of the reconstructed output, we use two evaluation metrics, MMD-CD and JSD. Finally, the point-cloud protector is converted back to a surface mesh and printed with a 3D printer.
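The pipeline described in the abstract (uniform surface sampling, a point-cloud AutoEncoder with a latent code, max pooling over per-point features) can be illustrated with a minimal sketch. The PyTorch model below is an assumption-laden illustration rather than the thesis's actual architecture: the 2048-point output, the 128-dimensional latent code, and the layer widths are placeholder choices.

```python
# Minimal point-cloud AutoEncoder sketch in PyTorch.
# NOTE: layer widths, the 2048-point output, and the 128-D latent code are
# illustrative assumptions, not the architecture used in the thesis.
import torch
import torch.nn as nn

class PointCloudAutoEncoder(nn.Module):
    def __init__(self, num_points=2048, latent_dim=128):
        super().__init__()
        self.num_points = num_points
        # Encoder: a shared per-point MLP implemented with 1x1 convolutions.
        self.encoder = nn.Sequential(
            nn.Conv1d(3, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 128, 1), nn.ReLU(),
            nn.Conv1d(128, latent_dim, 1),
        )
        # Decoder: fully connected layers mapping the latent code to N x 3 points.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, 512), nn.ReLU(),
            nn.Linear(512, num_points * 3),
        )

    def forward(self, points):                        # points: (B, N, 3)
        x = points.transpose(1, 2)                    # (B, 3, N) for Conv1d
        per_point = self.encoder(x)                   # (B, latent_dim, N)
        latent = torch.max(per_point, dim=2).values   # order-invariant max pooling
        recon = self.decoder(latent)                  # (B, num_points * 3)
        return recon.view(-1, self.num_points, 3), latent
```

Max pooling over the per-point features makes the latent code independent of the ordering of the input points, which is the standard reason it appears in point-cloud encoders (cf. Section 3.3.1 in the table of contents); reconstruction is typically trained against a point-set loss such as the Chamfer distance.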
Abstract (English)  The purpose of this thesis is to design a deep-learning-based network architecture that reconstructs and deforms 3D medical protectors. Three types of protector are targeted: protectors for de Quervain syndrome and for carpal tunnel syndrome on the hand, and corrective insoles for the foot. In current practice, designers draw each protector manually, which is time-consuming. We therefore train an AutoEncoder network that reconstructs a 3D medical protector automatically so that it fits the size of the input data, reducing the time and labor involved while keeping protector production accurate and efficient.
First, we use a 3D scanner to collect data of the author's hands and feet as training data, and the corresponding protectors are drawn manually and used as the training ground truth. Points are sampled uniformly from the surfaces of the training data and the ground truth, and both are fed into the AutoEncoder architecture as point clouds. The network learns the main features of the latent code during encoding and decoding, and as training proceeds the decoder's reconstructions come closer and closer to the ground truth. When training is complete, the trained weights are saved. We then scale and rotate the 3D scans of the author's hands and feet to obtain test data and feed them into the trained AutoEncoder in the same way; the network reconstructs a 3D medical protector that fits the size of the test data. To evaluate the experimental results quantitatively, we apply the MMD-CD and JSD metrics. Finally, the resulting 3D medical protector is printed with a 3D printer.
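The abstract names MMD-CD and JSD as the evaluation metrics. The sketch below shows how these are commonly computed for sets of point clouds (in the sense used in the point-cloud generation literature, e.g. Achlioptas et al.); it assumes NumPy arrays normalized to the unit cube, and the function names, the 28^3 voxel grid, and the matching convention are illustrative assumptions rather than the thesis's implementation.

```python
# Sketch of the evaluation metrics mentioned in the abstract.
# ASSUMPTIONS: point clouds are (N, 3) NumPy arrays scaled into [0, 1]^3;
# the 28^3 grid and these helper names are illustrative, not from the thesis.
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, 3) and b (M, 3)."""
    d = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)  # (N, M) squared distances
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def mmd_cd(reference_set, generated_set):
    """Minimum matching distance: match each reference cloud to its closest
    generated cloud under the Chamfer distance, then average the distances."""
    return float(np.mean([min(chamfer_distance(r, g) for g in generated_set)
                          for r in reference_set]))

def jsd_voxel(set_a, set_b, resolution=28):
    """Jensen-Shannon divergence between the voxel-occupancy distributions
    of two sets of point clouds."""
    def occupancy(clouds):
        hist = np.zeros((resolution,) * 3)
        for pts in clouds:
            idx = np.clip((pts * resolution).astype(int), 0, resolution - 1)
            np.add.at(hist, (idx[:, 0], idx[:, 1], idx[:, 2]), 1)  # accumulate point counts
        p = hist.ravel()
        return p / p.sum()

    def kl(p, q):  # KL divergence in bits, restricted to p > 0
        mask = p > 0
        return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

    p, q = occupancy(set_a), occupancy(set_b)
    m = 0.5 * (p + q)
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

Roughly, MMD-CD measures how well each reference shape is matched by some reconstructed shape, while JSD compares the overall spatial occupancy of the two sets, so together they give a per-shape and a set-level view of reconstruction quality.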
Keywords  ★ reconstruction and deformation
★ 3D medical protector
★ deep learning
★ point cloud
★ AutoEncoder
Table of Contents  Abstract (Chinese) ii
Abstract (English) iii
Acknowledgements iv
List of Figures vii
List of Tables ix
Chapter 1 Introduction 1
1.1 Motivation and Background 1
1.2 Literature Review 1
1.3 Research Objectives 3
1.4 Thesis Organization 4
Chapter 2 System Architecture, Hardware, and Software 5
2.1 System Architecture 5
2.2 Hardware 5
2.3 Software 10
Chapter 3 Main Method and the AutoEncoder Network 12
3.1 3D Data Preprocessing 14
3.1.1 Noise Removal 14
3.1.2 Surface and Edge Smoothing 16
3.1.3 Generating Data of Different Sizes and Angles 17
3.1.4 Uniform Surface Point Sampling of 3D Data 19
3.2 Deep-Learning Reconstruction and Deformation of 3D Medical Protectors 20
3.3 Properties of Point Cloud Data 22
3.3.1 Max Pooling 24
3.4 AutoEncoder Network 25
3.4.1 Network Architecture 28
3.4.2 Metric Learning Loss 29
3.4.3 Minimum Matching Distance 32
Chapter 4 Data Training and Testing 33
4.1 Training Data for the AutoEncoder Network 33
4.2 Test Data for the AutoEncoder Network 34
Chapter 5 Experimental Results 37
5.1 AutoEncoder Reconstruction Tests 37
5.1.1 Test Results 39
5.2 Reconstruction and Deformation of 3D Medical Protectors 41
5.2.1 Reconstruction and Deformation Results 45
Chapter 6 Conclusions and Future Work 53
6.1 Conclusions 53
6.2 Future Work 54
References 55
Advisor  Wen-June Wang (王文俊)    Date of Approval  2019-07-15
