Master's/Doctoral Thesis 107522094 — Detailed Record




Name: 楊章豪 (Zhang-Hao Yang)    Department: Computer Science and Information Engineering (資訊工程學系)
Thesis title: 用於社群網路壓縮的階層式複數區塊自動編碼器
(A Hierarchical Multi-Block Autoencoder on Social Network Compression)
Related theses
★ A grouping mechanism based on social relationships in edX online discussion boards
★ A 3D visualized Facebook interaction system built with Kinect
★ A Kinect-based assessment system for smart classrooms
★ An intelligent metropolitan route-planning mechanism for mobile-device applications
★ Dynamic texture transfer based on analysis of key-momentum correlations
★ A seam-carving system that preserves straight-line structures in images
★ A community recommendation mechanism for open online community learning environments
★ System design of an interactive situated learning environment for English as a foreign language
★ An emotional color-transfer mechanism with skin-color preservation
★ A gesture-recognition framework for virtual keyboards
★ Error analysis of fractional-power grey generating prediction models and development of a computer toolbox
★ Real-time human skeleton motion construction using inertial sensors
★ Real-time 3D modeling with multiple cameras
★ A genetic-algorithm grouping mechanism based on complementarity and social network analysis
★ A virtual musical instrument performance system with real-time hand tracking
★ A real-time virtual musical instrument performance system based on neural networks
  1. This electronic thesis is licensed for immediate open access.
  2. The open-access full text is licensed only for personal, non-profit retrieval, reading, and printing for academic research purposes.
  3. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.

Abstract (Chinese): 隨著機器學習的日益普及,越來越多的產業引入機器學習來輔助產業發展,也使得它更加融入我們的生活,與之對應的所需技術也層出不窮。然而擴展到更多領域的機器學習都勢必會經歷到的瓶頸,那就是設備資源的限制。以常用在圖形辨識任務的卷積神經網路來說,作為輸入資料的圖片可以自由縮放成訓練所需的尺寸。但是,對於社群網路來說,社群網路圖的圖象化尺寸遠超過一般的圖形資料,並且難以割捨其中的資料內容而無法使用一般的縮放技術,也就不可能以普通的操作進行機器學習的訓練。我們提出了一套應用於動態社群網路分析時所使用的系統。藉由我們提出的階層式群集演算法,並採用多區塊分割法,最後結合 autoencoder 所形成的複合壓縮技術。在確保資料不失真以及壓縮效率上取得最佳平衡。實驗證明,我們所提出的方法能大幅增加神經網路模型能處理的社群網路資料量,並且能減少預測模型的運算負擔,同時也可降低對硬體設備的依賴程度。
Abstract (English): With the increasing popularity of machine learning, more and more industries have introduced it to assist their development, integrating it ever more deeply into our lives, and the technologies it requires continue to emerge. However, as machine learning expands into more fields, it inevitably runs into a bottleneck: the limitation of hardware resources. Convolutional neural networks, commonly used in image recognition tasks, take images as input that can be freely rescaled to the size required for training. For a social network, however, the image representation of the network graph is far larger than ordinary image data, and its content cannot be discarded, so general rescaling techniques do not apply and ordinary operations cannot be used to train a machine learning model. We propose a system for dynamic social network analysis: a composite compression technique that combines our hierarchical clustering algorithm with a multi-block partitioning method and an autoencoder, achieving the best balance between keeping the data free of distortion and compression efficiency. Experiments show that the proposed method greatly increases the amount of social network data a neural network model can process, reduces the computational burden of the prediction model, and lowers the dependence on hardware resources.
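The abstract outlines a three-step pipeline: hierarchically cluster the network, partition it into multiple blocks, and compress each block with an autoencoder. The sketch below illustrates that flow only, not the thesis's HM-AE: a simple degree-ordered grouping stands in for the proposed hierarchical clustering (whose criteria are not given here), and a one-hidden-layer linear autoencoder trained by plain gradient descent stands in for the real model. All names, such as `train_linear_autoencoder`, are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy social network: symmetric 0/1 adjacency matrix for 40 nodes.
n = 40
adj = (rng.random((n, n)) < 0.1).astype(float)
adj = np.triu(adj, 1)
adj = adj + adj.T

# Step 1 (stand-in for hierarchical clustering): order nodes by degree.
# Step 2 (multi-block partition): cut the ordering into equal-sized blocks.
order = np.argsort(adj.sum(axis=1))
blocks = np.array_split(order, 4)

def train_linear_autoencoder(x, hidden, steps=500, lr=0.05):
    """One-hidden-layer linear autoencoder trained with plain gradient descent."""
    d = x.shape[1]
    w_enc = rng.normal(0, 0.1, (d, hidden))
    w_dec = rng.normal(0, 0.1, (hidden, d))
    for _ in range(steps):
        z = x @ w_enc            # encode: compressed representation
        recon = z @ w_dec        # decode: reconstruction
        err = recon - x
        w_dec -= lr * (z.T @ err) / len(x)
        w_enc -= lr * (x.T @ (err @ w_dec.T)) / len(x)
    return w_enc, w_dec

# Step 3: compress each block's rows of the adjacency matrix independently.
codes = []
for block in blocks:
    sub = adj[block]                       # rows belonging to this block
    w_enc, w_dec = train_linear_autoencoder(sub, hidden=8)
    z = sub @ w_enc                        # compressed block representation
    codes.append((block, z, w_dec))
    mse = np.mean((z @ w_dec - sub) ** 2)
    print(f"block size {len(block)}: code shape {z.shape}, reconstruction MSE {mse:.4f}")
```

Each block is compressed independently, so the per-block code matrices `z` are far smaller than the full adjacency matrix; in the thesis, this per-block compression is what keeps large social networks within the memory budget of a downstream prediction model.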
Keywords (Chinese): ★ Machine learning (機器學習)
★ Deep learning (深度學習)
★ Autoencoder (自動編碼器)
★ Community detection (社區檢測)
★ Cluster analysis (群集分析)
★ Dynamic social network analysis (動態社群網路分析)
Keywords (English)
Table of contents:
中文摘要 (Chinese abstract)
Abstract
Contents
List of figures
List of tables
1 Introduction
2 Related work
2.1 Dimensionality reduction and machine learning feature extraction methods
2.2 Autoencoder
2.3 Multi-level / hierarchical autoencoder applications
3 Preliminary
4 Proposed Model: HM-AE
4.1 Network transformation
4.2 Hierarchical clustering
4.3 HM-AE learning
5 Experiment
5.1 Accuracy discussion
5.2 Hyperparameter setting discussion
5.3 Threshold influence
6 Conclusion
7 References
Advisor: 施國琛    Date of approval: 2020-07-24
