    Please use this permanent URL to cite or link to this item: http://ir.lib.ncu.edu.tw/handle/987654321/83816


    Title: 基於多代理人強化學習方法多架無人機自主追蹤之研究;Multi-agent Reinforcement Learning for Autonomous Tracking Using a Swarm of UAVs
    Author: 張登凱;Chang, Deng-Kai
    Contributor: Department of Communication Engineering
    Keywords: Multi-agent reinforcement learning;Unmanned aerial vehicles (UAVs);Localization and tracking;Constrained Markov decision process
    Date: 2020-07-23
    Upload time: 2020-09-02 17:09:48 (UTC+8)
    Publisher: National Central University
    Abstract: In this thesis, we aim to design an autonomous tracking system for a swarm of unmanned aerial vehicles (UAVs) to localize a radio frequency (RF) mobile target.
    In this system, each UAV, equipped with an omnidirectional received signal strength (RSS) sensor, cooperatively searches for the target at a specified tracking accuracy.
    To achieve rapid tracking and localization in a highly dynamic channel environment (e.g., time-varying transmit power and intermittent signals), we formulate the UAV flight decision problem as a constrained Markov decision process (CMDP).
    The main objective of this formulation is to avoid redundant UAV flight decisions.
    We then propose an enhanced multi-agent reinforcement learning scheme that coordinates multiple UAVs to perform real-time target tracking missions cooperatively.
    The core of the proposed scheme is a feedback control system that takes into account the uncertainty of the channel estimate.
    We also prove that the proposed algorithm converges to the optimal decision.
    Finally, we evaluate the proposed framework by constructing a highly dynamic channel environment and generating synthetic data. The simulation results show that the proposed scheme outperforms traditional reinforcement learning algorithms (i.e., Q-learning and multi-agent Q-learning), reducing the searching time by 30% to 50% and increasing the successful localization probability by 20%. Together with the mathematical analysis, these results indicate that the proposed framework completes the tracking and localization task within a finite time. (Illustrative sketches of a CMDP formulation and a constrained multi-agent Q-learning update follow this record.)
    Appears in Collections: [Graduate Institute of Communication Engineering] Electronic Theses and Dissertations
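For readers unfamiliar with the term, a constrained Markov decision process of the kind named in the abstract is conventionally written as below. This is generic textbook notation, not the thesis's exact formulation: r is a tracking reward, c is a cost charged to redundant flight decisions, C_max is the cost budget, \pi is the flight-decision policy, and \gamma is a discount factor.

\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, r(s_t, a_t)\right]
\quad \text{subject to} \quad
\mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty}\gamma^{t}\, c(s_t, a_t)\right] \le C_{\max}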
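Likewise, the sketch below shows one common way to combine per-UAV Q-learning with a Lagrangian penalty for such a constraint and an RSS-shaped reward. Every name, constant, and reward shape here is an assumption made for illustration only; the thesis's enhanced algorithm, its feedback control system, and its convergence proof are not reproduced.

# Illustrative only: independent Q-learning per UAV with a Lagrangian penalty
# for a flight-decision cost constraint. All state/action encodings, constants,
# and reward shapes are hypothetical, not taken from the thesis.
import numpy as np

N_AGENTS, N_STATES, N_ACTIONS = 3, 100, 5   # hypothetical: 3 UAVs, discretized positions, 5 headings
ALPHA, GAMMA = 0.1, 0.95                    # learning rate and discount factor
LAMBDA_LR, COST_BUDGET = 0.01, 0.2          # dual step size and per-step cost budget

Q = np.zeros((N_AGENTS, N_STATES, N_ACTIONS))   # one independent Q-table per UAV
lam = 0.0                                       # Lagrange multiplier for the constraint


def rss_reward(distance_to_target):
    """Toy RSS-shaped reward: less path loss (closer target) gives a higher reward."""
    path_loss_db = 20.0 * np.log10(max(distance_to_target, 1.0))  # log-distance path-loss model
    return -path_loss_db / 100.0


def act(agent, state, epsilon=0.1):
    """Epsilon-greedy flight decision for one UAV."""
    if np.random.rand() < epsilon:
        return np.random.randint(N_ACTIONS)
    return int(Q[agent, state].argmax())


def step(agent, state, action, next_state, distance_to_target, redundant):
    """One constrained Q-learning update for one UAV, with a Lagrangian cost penalty."""
    global lam
    reward = rss_reward(distance_to_target)
    cost = 1.0 if redundant else 0.0                  # charge redundant flight decisions
    shaped = reward - lam * cost                      # penalized reward
    td_target = shaped + GAMMA * Q[agent, next_state].max()
    Q[agent, state, action] += ALPHA * (td_target - Q[agent, state, action])
    lam = max(0.0, lam + LAMBDA_LR * (cost - COST_BUDGET))   # dual ascent toward the budget

In a full system, each UAV would call act() to choose a heading and step() after observing the next RSS measurement; the dual variable rises whenever the average cost exceeds the budget, which discourages redundant flight decisions over time.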

    Files in This Item:

    File          Description    Size    Format    Views
    index.html                   0Kb     HTML      109


    All items in NCUIR are protected by the original copyright.

