Thesis 107221020: Detailed Record




Name: Yu-Chung Tsai (蔡予中)    Department: Department of Mathematics
Thesis Title: Spatial Search using Submodular Deep Compressed Sensing
Related Theses
★ 3D Map Exploration and Search using Topological Fourier Sparse Set
★ Evasion and Pursuit Games via Adaptive Submodularity
★ Learning Spatial Search and Map Exploration using Adaptive Submodular Inverse Reinforcement Learning
★ Maximal Coverage Problems with Routing Constraints using Cross-Entropy Monte Carlo Tree Search
★ Localizing Complex Terrain for Quadruped Robots using Adaptive Submodularity
★ Informative Trajectory Planning using Cost-Effective Spanning Trees
★ Map Explorations via Dynamic Tree-Structured Graph
  1. The full text of this electronic thesis is approved for immediate open access.
  2. Open-access electronic full texts are licensed to users for academic research purposes only: personal, non-profit retrieval, reading, and printing.
  3. Please comply with the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast without authorization.

Abstract (Chinese) Submodular functions have attracted the attention of the AI community because of their wide range of applications (e.g., object search and 3D map reconstruction).
Searching for victims is a key technology in search-and-rescue operations, but finding the optimal search path is an NP-hard problem.
Since the objective function of spatial search is submodular, greedy algorithms can produce near-optimal solutions.
However, learning a submodular function is a major challenge, because a submodular function over N sets has $2^N$ possible outputs.
The state-of-the-art approach uses compressed sensing to learn the submodular function in the Fourier domain and then recover it in the spatial domain.
However, the number of Fourier bases required is related to the amount of sensing overlap among the sets.
To overcome this problem, this research proposes a submodular deep compressed sensing (SDCS) approach to learning submodular functions.
The algorithm learns an autoencoder together with Fourier coefficients; the learned network can then predict the $2^N$ values of the submodular function.
Experiments demonstrate that this algorithm is more efficient than the benchmark approach.
Abstract (English) The AI community has been paying attention to submodular functions due to their various applications (e.g., target search and 3D mapping).
Searching for victims is the key to search-and-rescue operations, but finding an optimal search path is an NP-hard problem.
Since the objective function of spatial search is submodular, greedy algorithms can generate near-optimal solutions.
However, learning submodular functions is challenging, since a set function over N sets has $2^N$ possible outcomes.
The state-of-the-art approach is based on compressed sensing techniques: it learns submodular functions in the Fourier domain and then recovers them in the spatial domain.
However, the number of Fourier bases required is related to the amount of sensing overlap among the sets.
To overcome this issue, this research proposes a submodular deep compressed sensing (SDCS) approach to learning submodular functions.
The algorithm consists of learning autoencoder networks and Fourier coefficients.
The learned networks can be applied to predict the $2^N$ values of submodular functions.
Experiments demonstrate that the algorithm is more efficient than the benchmark approach.
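The near-optimal greedy strategy described in the abstract can be illustrated with a minimal sketch. The coverage sets below are hypothetical toy data, and this is a generic greedy coverage example, not the SDCS algorithm proposed in the thesis:

```python
# Minimal sketch of greedy submodular maximization (illustrative only;
# the coverage data and names below are hypothetical, not from the thesis).

# Each candidate sensing location covers a set of grid cells.
coverage = {
    "A": {1, 2, 3},
    "B": {3, 4},
    "C": {4, 5, 6, 7},
    "D": {1, 7},
}

def f(selected):
    """Coverage objective: number of distinct cells covered (monotone submodular)."""
    covered = set()
    for s in selected:
        covered |= coverage[s]
    return len(covered)

def greedy(k):
    """Repeatedly add the location with the largest marginal gain.

    For a monotone submodular objective, this achieves a (1 - 1/e)
    approximation of the optimal k-element subset."""
    selected = []
    for _ in range(k):
        best = max((s for s in coverage if s not in selected),
                   key=lambda s: f(selected + [s]) - f(selected))
        selected.append(best)
    return selected

print(greedy(2))  # → ['C', 'A']: 'C' covers the most cells, then 'A' adds the most new ones
```

The greedy rule only needs marginal gains f(S ∪ {s}) − f(S), which is why having a learned model that can cheaply evaluate the submodular function (as SDCS aims to) makes near-optimal search planning tractable.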
Keywords (Chinese) ★ Submodularity (次模性)
★ Search (搜尋)
★ Compressed Sensing (壓縮感知)
★ Autoencoder (自動編碼)
★ Deep Learning (深度學習)
Keywords (English) ★ Submodularity
★ Search
★ Compressed Sensing
★ Autoencoder
★ Deep Learning
Table of Contents
摘要 (Chinese Abstract) i
Abstract ii
Acknowledgments iii
Contents iv
Figures vi
Tables x
1 Introduction 1
1.1 Publication Note 4
2 Relevant Work 5
2.1 Probabilistic Search 5
2.2 Sparse Regression 6
2.3 Submodularity 6
2.4 Learning Submodular Functions 7
2.5 Deep Compressed Sensing 7
2.6 Multi-layer Convolutional Sparse Coding 8
3 Problem Reformulation 9
3.1 Learning Submodular Functions via Compressed Sensing 10
3.2 Learning Submodular Functions via Deep Compressed Sensing 11
3.3 Learning Submodular Functions via Multi-Layer Convolutional Sparse Coding 14
4 The Proposed Algorithms 16
4.1 SDCS Algorithm 16
4.2 ML-CSC Algorithm 17
4.3 Search Algorithm 19
4.4 Theoretical Guarantees 20
5 Experiments 21
5.1 Reconstruction Experiments 21
5.1.1 Experimental Setup 21
5.1.2 Reconstruction Results vs. Sparsity 26
5.1.3 Greedy Results vs. Sparsity 27
5.1.4 Computational Time 30
5.2 Search Experiments 31
5.2.1 Experimental Setup 34
5.2.2 ETTD and Success Rate 34
6 Conclusions 37
References 38
Advisor: Kuo-Shih Tseng (曾國師)    Approval Date: 2020-07-20

For questions about this thesis, please contact the Promotion Services Division, National Central University Library, TEL: (03)422-7151 ext. 57407, or by e-mail.