Thesis Record 103523032 — Detailed Information




Author: Hsin-Hung Tsai (蔡炘宏)    Department: Communication Engineering
Title: Design and Simulation of Cooperative Transmission Policies for Two-User Energy Harvesting Networks
(二使用者於能量採集網路架構之合作式傳輸策略設計及模擬)
Related theses
★ Joint precoder and decoder design based on interference alignment for multi-user MIMO systems
★ Sparse multipath channel tracking and estimation for OFDM systems using compressed sensing
★ Channel equalization and resource allocation algorithms for mobile LTE uplink SC-FDMA systems
★ Factor-graph-based sparse spectrum detection for cognitive radio systems
★ Sparse Spectrum Detection with Sub-blocks Partition for Cognitive Radio Systems
★ Pilot-based channel estimation for relay networks in multipath channel environments
★ Cost-game-based resource allocation and user partitioning for device-to-device communications
★ Joint precoder design, signal alignment, and antenna selection for multi-user two-way relay networks
★ Multi-user beamforming and opportunistic scheduling in transparent hierarchical cellular systems
★ Design and simulation of optimal transmission policies for energy harvesting relay networks
★ Design and simulation of transmission policies with energy harvesting in cognitive radio relay networks
★ Design and simulation of optimal transmission policies in cognitive radio from a green-energy perspective
★ Design and simulation of Q-learning-based two-way energy harvesting communications
★ Design and simulation of simultaneous wireless information and power transfer in MIMO systems
★ Design and simulation of wirelessly powered device-to-device communications in cellular systems
★ Design and simulation of beamforming and resource allocation in wirelessly powered cellular networks
Files: Full text not publicly available (permanently restricted)
Abstract (Chinese): This thesis focuses on the design and simulation of transmission policies for a cooperative communication system that combines two currently active research topics: relay networks and energy harvesting networks. Unlike traditional communication, data transmission between two nodes is no longer limited by a poor channel state; a relay node can instead be found to help forward the data. Moreover, communication nodes used to rely on a deployed power grid or periodic battery replacement for transmission energy; with the rise of green communication and improvements in battery hardware, the concept of letting nodes harvest energy from the environment has been proposed. This work therefore considers two transmitting nodes with energy harvesting capability that must decide, for any combination of energy arrival state, battery state, and channel state, whether to engage in cooperative communication.
After quantizing each state, we adopt the Markov decision process (MDP) and Q-learning methods; by designing and tuning the various parameters, we analyze how to optimize system performance and find the optimal decision policy in each state. We also find that a proper design of the energy quantum required per transmission increases the system's diversity order and further encourages cooperative communication.
In summary, this work studies how users that can harvest energy from the environment may, when their own channel condition is poor, cooperate with another user to maximize the benefit to both.
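The decision problem described above (quantized battery and channel states, a cooperate/no-cooperate action, long-term reward) can be sketched as a small discounted MDP solved by value iteration. All state-space sizes, rewards, harvesting probabilities, and transitions below are illustrative assumptions for a toy model, not the thesis's actual parameters:

```python
import numpy as np

# Toy quantized state space: (battery level, channel state).
# Sizes, rewards, and probabilities are hypothetical.
N_BATTERY = 4          # battery levels 0..3 (in energy quanta)
N_CHANNEL = 2          # channel states: 0 = bad, 1 = good
ACTIONS = [0, 1]       # 0 = no cooperation, 1 = cooperate
GAMMA = 0.9            # discount factor of the MDP

n_states = N_BATTERY * N_CHANNEL

def idx(b, h):
    """Flatten (battery, channel) into a single state index."""
    return b * N_CHANNEL + h

# Assumed model: cooperating spends one energy quantum and earns a
# higher reward when the channel is good; one quantum is harvested
# with probability 0.5 per slot; the channel is i.i.d. good/bad.
P = np.zeros((len(ACTIONS), n_states, n_states))  # P[a, s, s']
R = np.zeros((len(ACTIONS), n_states))            # R[a, s]
for b in range(N_BATTERY):
    for h in range(N_CHANNEL):
        s = idx(b, h)
        for a in ACTIONS:
            cost = 1 if (a == 1 and b > 0) else 0
            R[a, s] = (2.0 if h == 1 else 0.5) if cost else 0.2
            b_harv = min(b - cost + 1, N_BATTERY - 1)  # harvest w.p. 0.5
            b_stay = b - cost                          # no arrival w.p. 0.5
            for h2 in (0, 1):                          # i.i.d. channel, p = 0.5
                P[a, s, idx(b_harv, h2)] += 0.25
                P[a, s, idx(b_stay, h2)] += 0.25

# Value iteration for the discounted MDP.
V = np.zeros(n_states)
for _ in range(500):
    Q = R + GAMMA * P @ V          # Q[a, s]
    V_new = Q.max(axis=0)
    if np.max(np.abs(V_new - V)) < 1e-9:
        V = V_new
        break
    V = V_new

policy = Q.argmax(axis=0)          # optimal action for each state
print(policy.reshape(N_BATTERY, N_CHANNEL))
```

Under these assumed numbers, value iteration converges quickly and the resulting policy cooperates when the battery holds energy, mirroring the structure the thesis looks for in each quantized state.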
Abstract (English): With the rise of green communication and advances in battery technology, using energy harvesting (EH) nodes as cooperative relays is a promising way to perpetually power wireless sensor networks. In this thesis, we investigate optimal transmission policies for decode-and-forward (DF) relay networks in which two users capable of energy harvesting cooperate: a user with sufficient energy for transmission serves as a relay for the other user, whose channel condition is worse. To maximize the long-term reward of the system and find the optimal policies, we formulate the problem as a discounted Markov decision process (MDP) and within a Q-learning framework. By designing and adjusting the parameters, we observe that the outage probability decreases as the solar cell size and battery size increase. In addition, the numerical results show that the system tends toward non-cooperation as the signal-to-noise ratio (SNR) approaches infinity, and that reducing the size of the energy quantum improves the diversity order, yielding better performance and encouraging cooperative communication.
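The Q-learning framework mentioned in the abstract can be illustrated with a minimal model-free sketch: the agent observes a quantized (battery, channel) state, chooses whether to cooperate via an epsilon-greedy rule, and updates its action values from sampled rewards. The toy environment, rewards, and learning parameters below are assumptions for illustration only:

```python
import random

random.seed(0)  # fixed seed so the sketch is reproducible

N_BATTERY, N_CHANNEL = 4, 2       # hypothetical quantization
ACTIONS = (0, 1)                  # 0 = no cooperation, 1 = cooperate
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1 # learning rate, discount, exploration

# Q-table over all (state, action) pairs, initialized to zero.
Q = {((b, h), a): 0.0 for b in range(N_BATTERY)
     for h in range(N_CHANNEL) for a in ACTIONS}

def step(state, action):
    """Hypothetical environment: cooperating spends one energy quantum
    and pays off when the channel is good; harvesting is Bernoulli."""
    b, h = state
    spend = 1 if (action == 1 and b > 0) else 0
    reward = (2.0 if h == 1 else 0.5) if spend else 0.2
    b2 = min(b - spend + random.randint(0, 1), N_BATTERY - 1)
    h2 = random.randint(0, 1)     # i.i.d. two-state channel
    return (b2, h2), reward

state = (0, 0)
for _ in range(50_000):
    # epsilon-greedy action selection
    if random.random() < EPS:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: Q[(state, a)])
    nxt, r = step(state, action)
    best_next = max(Q[(nxt, a)] for a in ACTIONS)
    # Q-learning temporal-difference update
    Q[(state, action)] += ALPHA * (r + GAMMA * best_next - Q[(state, action)])
    state = nxt

# Greedy policy learned from the Q-table, one action per state.
policy = {(b, h): max(ACTIONS, key=lambda a: Q[((b, h), a)])
          for b in range(N_BATTERY) for h in range(N_CHANNEL)}
print(policy)
```

Unlike the MDP formulation, this variant needs no explicit transition probabilities, which is the practical appeal of Q-learning when the harvesting statistics are unknown.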
Keywords (Chinese):
★ Cooperative communication (合作式通訊)
★ Markov decision process (馬可夫決策過程)
★ Energy harvesting (能量採集)
Keywords (English):
★ Cooperative communication
★ Energy harvesting
★ Markov decision process
★ Q-learning
Table of Contents
Acknowledgements i
Abstract (Chinese) ii
Abstract (English) iii
Table of Contents iv
List of Figures vii
List of Tables viii
Chapter 1 Introduction 1
1-1 Research Background 1
1-2 Literature Review 2
Chapter 2 Background Theory 4
2-1 Energy Harvesting 4
2-2 Relay Networks 4
2-3 Forwarding Algorithms 5
2-4 Outage Probability and Success Probability 5
2-5 Maximal-Ratio Combining 6
2-6 Markov Decision Process 6
2-7 Q-learning 7
Chapter 3 System Architecture: Cooperative Communication via a Markov Decision Process 8
3-1 System Model 8
3-2 System Assumptions 9
3-3 Same-Action System Architecture (Type 1) 9
3-3-1 Action Structure 9
3-3-2 Signal Transmission 10
3-4 Different-Action System Architecture (Type 2) 12
3-4-1 Action Structure 12
3-4-2 Signal Transmission 13
3-5 Markov Decision Process Definitions 19
3-5-1 Set of Decision Epochs 19
3-5-2 Set of All Possible System States 19
3-5-3 Set of Available Actions 19
3-5-4 State- and Action-Dependent Transition Probabilities 19
3-5-5 State- and Action-Dependent Immediate Rewards 20
Chapter 4 Derivations and Proofs 22
4-1 Relation Between Channel Quality and Values 22
4-2 Relation Between Battery Level and Values 24
4-3 Behavior as the SNR Approaches Infinity 27
4-4 Relation Between System Actions and Diversity Order 29
4-5 Observations on Battery Size and Policy Structure 31
Chapter 5 System Architecture: Cooperative Communication via Q-learning 32
5-1 Introduction to Q-learning 32
5-2 Q-learning Problem Model 33
5-3 Q-learning Algorithm 34
5-4 Action Selection Methods 34
5-5 Learning Rate Design 35
Chapter 6 Simulation Results and Discussion 36
Chapter 7 Conclusion 44
Appendix A State Transition Probability Model 45
Appendix B Rewards for the Same-Action MDP (Type 1) 49
Appendix C Rewards for the Different-Action MDP (Type 2) 53
References 62
Advisor: Meng-Lin Ku (古孟霖)    Approval Date: 2015-07-23
