Graduate Thesis 111522133: Detailed Record




Name  Ting-Kai Weng (翁庭凱)    Department  Computer Science and Information Engineering
Thesis Title  Randomized patrolling schedules to counter adversaries with varying attack durations
Related Theses
★ PXGen: A Post-hoc Explainability Method for Generative Models
★ Multi-Robot Patrol Scheduling on Tree Structures to Minimize Maximum Latency
★ Environmental Anomaly Detection Using Temporal Graph Convolutional Networks
Files  Full text viewable in the system (available after 2027-07-31)
Abstract (Chinese)  We study an extended zero-sum patrolling security game in which the attacker freely chooses the time, location, and duration of an attack, examined under three distinct attacker models. In this game, the attacker's payoff is the utility gained from the attack minus the penalty incurred when caught by the patrolling defender. Our main goal is to minimize the attacker's total payoff. To this end, we transform the game into a combinatorial minimax problem with an explicit objective function.

In the case without a capture penalty, we show that the optimal strategy reduces to minimizing either the expected hitting time or the expected return time, depending on the attacker model in use. We further find that, under high penalties, increasing the randomness of the patrol schedule significantly lowers the attacker's expected payoff. To address the challenges of the general case, we define a bi-criteria optimization problem and compare four algorithms designed to balance the trade-off between maximizing expected reward and increasing the randomness of the patrol schedule.
Abstract (English)  We explore an extended version of the zero-sum patrolling security game in which the attacker has the flexibility to decide the timing, location, and duration of their attack, examined under three distinct attacker models. In this game, the attacker's payoff is determined by the utility gained from the attack minus any penalty incurred if caught by the patrolling defender. Our primary objective is to minimize the attacker's overall payoff. To achieve this, we transform the game into a combinatorial minimax problem with a clearly defined objective function.
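A hedged sketch of that minimax formulation follows; the record does not give the thesis's notation, so every symbol here is an assumption: π is the defender's randomized patrol schedule, (v, t, d) the attacker's choice of site, start time, and duration, U_v(d) the utility of an attack of duration d at v, P a capture penalty, and q_π(v, t, d) the probability that the patroller visits v during the attack window under π.

    \min_{\pi} \; \max_{v,\, t,\, d} \;
      \Big[ \bigl(1 - q_{\pi}(v, t, d)\bigr)\, U_v(d) \;-\; q_{\pi}(v, t, d)\, P \Big]

Under these assumptions, setting P = 0 leaves only the visit probability in the objective, which is consistent with the no-penalty result below where hitting and return times become the relevant quantities.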

In cases where there is no penalty for getting caught, we establish that the optimal strategy involves minimizing either the expected hitting time or the expected return time, contingent on the attacker model employed. Furthermore, we find that enhancing the randomness of the patrol schedule significantly reduces the attacker's expected payoff in scenarios involving high penalties. To address the challenges presented in general scenarios, we formulate a bi-criteria optimization problem and compare four algorithms designed to balance the trade-off between maximizing expected rewards and increasing randomness in patrol scheduling.
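The hitting-time, return-time, and randomness quantities above can be made concrete with a small numerical sketch. The snippet below assumes the randomized patrol schedule is modeled as an ergodic Markov chain over patrol sites and uses entropy rate as one possible measure of schedule randomness; the toy transition matrix and the final weighted score are purely illustrative and are not the thesis's four algorithms.

import numpy as np

def stationary_distribution(P):
    """Stationary distribution of an ergodic transition matrix P (rows sum to 1)."""
    n = P.shape[0]
    # Solve pi P = pi together with sum(pi) = 1 as a least-squares system.
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.append(np.zeros(n), 1.0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

def expected_hitting_times(P, target):
    """Expected number of steps to first reach `target` from every state."""
    n = P.shape[0]
    others = [i for i in range(n) if i != target]
    Q = P[np.ix_(others, others)]            # transitions among non-target states
    h = np.linalg.solve(np.eye(len(others)) - Q, np.ones(len(others)))
    full = np.zeros(n)
    full[others] = h                         # hitting time from the target itself is 0
    return full

def entropy_rate(P, pi):
    """Entropy rate (nats/step) of the chain: how unpredictable the next move is."""
    with np.errstate(divide="ignore", invalid="ignore"):
        logP = np.where(P > 0, np.log(P), 0.0)
    return float(-np.sum(pi[:, None] * P * logP))

# Toy 3-site patrol policy (row = current site, column = next site).
P = np.array([[0.1, 0.6, 0.3],
              [0.5, 0.1, 0.4],
              [0.4, 0.4, 0.2]])
pi = stationary_distribution(P)
return_times = 1.0 / pi                      # Kac's formula: expected return time per site
H = entropy_rate(P, pi)

# Illustrative scalarization of the two criteria: fast revisits (small worst-case
# return time) versus an unpredictable schedule (high entropy rate).
lam = 0.5
score = lam * return_times.max() - (1 - lam) * H
print(return_times, H, score)

Sweeping lam between 0 and 1 traces one possible trade-off curve between revisit speed and unpredictability, which is the kind of balance the four compared algorithms are described as targeting.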
Keywords (Chinese) ★ Robot patrol
★ Game theory
★ Machine learning
★ Traveling salesman problem
Keywords (English) ★ Robot patrol
★ Game theory
★ Machine learning
★ Traveling salesman problem
Table of Contents
Chinese Abstract i
English Abstract ii
Table of Contents iii
List of Figures v
Introduction 1
Related Work 3
Problem Definition 5
Defender Strategy 7
Experiments 16
Conclusion 27
Bibliography 28
Advisor  Hao-Tsung Yang (楊晧琮)    Approval Date  2024-08-21
