DC 欄位 | 值 | 語言 |
dc.contributor | 通訊工程學系 | zh_TW |
dc.creator | 楊宗祐 | zh_TW |
dc.creator | Zong-You Yang | en_US |
dc.date.accessioned | 2024-08-20T07:39:07Z | |
dc.date.available | 2024-08-20T07:39:07Z | |
dc.date.issued | 2024 | |
dc.identifier.uri | http://ir.lib.ncu.edu.tw:444/thesis/view_etd.asp?URN=111523037 | |
dc.contributor.department | 通訊工程學系 | zh_TW |
dc.description | 國立中央大學 | zh_TW |
dc.description | National Central University | en_US |
dc.description.abstract | 隨著自動駕駛技術的快速發展,車載感測器能夠更精確地感知車輛周圍的環境,然而,這也導致了大量的感測數據產生,進而使自動駕駛系統面臨計算能力限制所帶來的延遲增加和能耗過大的挑戰。為了緩解車載計算的負擔,將計算任務轉移至外部計算資源,並通過雲端、邊緣計算節點或附近車載設備進行協作計算,成為一種解決方案。本文提出了一種基於賽局理論和深度強化學習的車載計算任務卸載策略。首先,我們設計了一個綜合考量延遲、功耗和計算資源租賃成本的任務成本函數,旨在制定卸載策略並評估不同卸載決策的優劣。接著,我們將車輛間競爭計算資源的問題描述為一個賽局,並證明該賽局存在納許均衡解。最後,我們將賽局問題整合至多代理人強化學習問題中,採用MATD3架構進行模型訓練,以尋找賽局中的最優策略均衡解。該方法能根據當前網路環境和任務特性,選擇最佳的計算任務卸載決策。在滿足時間容忍度要求的前提下,顯著提高計算任務的完成率並降低任務完成成本。實驗結果顯示,與以往研究相比,本文提出的方法能更好地評估不同卸載決策,制定出最佳的卸載策略,從而降低任務成本並提升任務完成率,為自動駕駛等應用提供更加靈活和高效的解決方案。 | zh_TW |
dc.description.abstract | With the rapid advancement of autonomous driving technology, vehicular sensors can perceive the vehicle's surrounding environment with increasing accuracy. However, this advancement generates substantial amounts of sensor data, which presents significant challenges for autonomous driving systems, particularly increased latency and excessive energy consumption arising from limited on-board computational capacity. To mitigate the burden on vehicular computing resources, offloading computational tasks to external resources through collaboration with cloud servers, edge computing nodes, or nearby vehicular devices has emerged as a viable solution. This thesis proposes a vehicular computing task offloading strategy grounded in game theory and deep reinforcement learning. First, a comprehensive task cost function is designed that accounts for latency, power consumption, and the cost of renting computational resources; this function is used to formulate offloading strategies and to evaluate the relative merits of different offloading decisions. Next, the competition for computational resources among vehicles is framed as a game, and the existence of a Nash equilibrium is proved. Finally, the game is integrated into a multi-agent reinforcement learning framework, and the MATD3 architecture is used for model training to identify the optimal equilibrium strategy of the game. The proposed method selects the best computational task offloading decision based on the current network environment and task characteristics. While satisfying time-tolerance requirements, it significantly improves the completion rate of computational tasks and reduces task completion cost. Experimental results show that, compared with previous studies, the proposed method better evaluates different offloading decisions and formulates optimal offloading strategies, thereby reducing task cost and improving task completion rate, offering a more flexible and efficient solution for applications such as autonomous driving. | en_US |
dc.subject | 自動駕駛 | zh_TW |
dc.subject | 車聯網 | zh_TW |
dc.subject | 計算卸載 | zh_TW |
dc.subject | 協作計算 | zh_TW |
dc.subject | 賽局理論 | zh_TW |
dc.subject | 多代理人強化學習 | zh_TW |
dc.subject | Autonomous Driving | en_US |
dc.subject | Vehicular Networks | en_US |
dc.subject | Computing Offloading | en_US |
dc.subject | Collaborative Computing | en_US |
dc.subject | Game Theory | en_US |
dc.subject | Multi-Agent Reinforcement Learning | en_US |
dc.title | 車載網路中基於多代理人強化學習和賽局理論之最小化任務成本的方法 | zh_TW |
dc.title | Using Multi-Agent Reinforcement Learning and Game Theory to Minimize the Task Cost in Vehicular Networks | en_US |
dc.language.iso | zh-TW | zh-TW |
dc.type | 博碩士論文 | zh_TW |
dc.type | thesis | en_US |
dc.publisher | National Central University | en_US |
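
The abstract above describes a task cost function that jointly weighs latency, power consumption, and resource-rental cost when choosing an offloading target. The following is a minimal, self-contained sketch of such a weighted cost model with a simple per-task comparison over candidate targets. All names and numbers here (Task, Target, w_delay, the data rates, prices, and deadline penalty) are hypothetical placeholders for illustration only; they are not the thesis's actual formulation, which solves the problem as a game among vehicles trained with the MATD3 multi-agent architecture rather than by this greedy selection.

```python
# Illustrative sketch: weighted task cost = latency + energy + rental cost,
# evaluated for each candidate offloading target. All parameters are assumed.
from dataclasses import dataclass

@dataclass
class Task:
    data_bits: float      # input size to transmit (bits)
    cycles: float         # CPU cycles required to finish the task
    deadline_s: float     # time-tolerance requirement (seconds)

@dataclass
class Target:
    name: str
    rate_bps: float         # uplink data rate to this target (bits/s); inf for local execution
    cpu_hz: float           # computing capability assigned to the task (cycles/s)
    tx_power_w: float       # vehicle transmit power while uploading (W); 0 for local
    local_power_w: float    # vehicle CPU power draw if executed locally (W); 0 when offloaded
    price_per_cycle: float  # rental price charged by the target (0 for local)

def task_cost(task, tgt, w_delay=1.0, w_energy=0.5, w_price=0.2):
    """Weighted cost of running `task` on `tgt`; returns (cost, latency)."""
    tx_delay = 0.0 if tgt.rate_bps == float("inf") else task.data_bits / tgt.rate_bps
    exec_delay = task.cycles / tgt.cpu_hz
    latency = tx_delay + exec_delay
    # Energy seen by the vehicle: transmission energy when offloading, CPU energy when local.
    energy = tgt.tx_power_w * tx_delay + tgt.local_power_w * exec_delay
    rental = tgt.price_per_cycle * task.cycles
    cost = w_delay * latency + w_energy * energy + w_price * rental
    if latency > task.deadline_s:  # violating the time tolerance incurs a large penalty
        cost += 1e3
    return cost, latency

if __name__ == "__main__":
    task = Task(data_bits=2e6, cycles=5e8, deadline_s=0.5)
    targets = [
        Target("local", float("inf"), 1e9, 0.0, 0.9, 0.0),
        Target("edge",  20e6, 8e9, 0.3, 0.0, 1e-10),
        Target("cloud", 10e6, 20e9, 0.3, 0.0, 5e-11),
    ]
    for t in targets:
        c, d = task_cost(task, t)
        print(f"{t.name:>5}: cost={c:.3f}, latency={d * 1e3:.1f} ms")
    best = min(targets, key=lambda t: task_cost(task, t)[0])
    print("chosen target:", best.name)
```

The greedy minimum here only stands in for a single vehicle's best response; in the thesis the competing vehicles' decisions interact, and the Nash-equilibrium strategy is learned with MATD3 instead of being computed by direct enumeration.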