References
[1] Noa Agmon, Vladimir Sadov, Gal A Kaminka, and Sarit Kraus. The impact of adversarial knowledge on adversarial planning in perimeter patrol. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems - Volume 1, pages 55–62. International Foundation for Autonomous Agents and Multiagent Systems, 2008.
[2] Bo An, Eric Shieh, Milind Tambe, Rong Yang, Craig Baldwin, Joseph DiRenzo, Ben Maule, and Garrett Meyer. PROTECT - a deployed game theoretic system for strategic security allocation for the United States Coast Guard. AI Magazine, 33(4):96, 2012.
[3] Ahmad Bilal Asghar and Stephen L Smith. Stochastic patrolling in adversarial settings. In 2016 American Control Conference (ACC), pages 6435–6440. IEEE, 2016.
[4] Nicola Basilico. Recent trends in robotic patrolling. Current Robotics Reports,
3(2):65–76, 2022.
[5] Nicola Basilico, Giuseppe De Nittis, and Nicola Gatti. Adversarial patrolling with
spatially uncertain alarm signals. Artificial Intelligence, 246:220–257, 2017.
[6] Nicola Basilico, Nicola Gatti, and Francesco Amigoni. Leader-follower strategies for
robotic patrolling in environments with arbitrary topologies. In Proceedings of The
8th International Conference on Autonomous Agents and Multiagent Systems-Volume
1, pages 57–64. International Foundation for Autonomous Agents and Multiagent
Systems, 2009.
[7] Nicola Basilico, Nicola Gatti, and Francesco Amigoni. Patrolling security games:
Definition and algorithms for solving large instances with single patroller and single
intruder. Artificial Intelligence, 184:78–123, 2012.
[8] Irwan Bello, Hieu Pham, Quoc V Le, Mohammad Norouzi, and Samy Bengio.
Neural combinatorial optimization with reinforcement learning. arXiv preprint
arXiv:1611.09940, 2016.
[9] Branislav Bošanský, Viliam Lisý, Michal Jakob, and Michal Pěchouček. Computing time-dependent policies for patrolling games with mobile targets. In The 10th International Conference on Autonomous Agents and Multiagent Systems - Volume 3, pages 989–996. International Foundation for Autonomous Agents and Multiagent Systems, 2011.
[10] Amílcar Branquinho, Ana Foulquié-Moreno, Manuel Mañas, Carlos Álvarez-Fernández, and Juan E Fernández-Díaz. Multiple orthogonal polynomials and random walks. arXiv preprint arXiv:2103.13715, 2021.
[11] Salih Çam. Asset allocation with combined models based on game-theory approach and Markov chain models. EKOIST Journal of Econometrics and Statistics, (39):26–36, 2023.
[12] Jewgeni H Dshalalow and Ryan T White. Current trends in random walks on random
lattices. Mathematics, 9(10):1148, 2021.
[13] Nicola Gatti. Game theoretical insights in strategic patrolling: Model and algorithm
in normal-form. In ECAI, pages 403–407, 2008.
[14] Mishel George, Saber Jafarpour, and Francesco Bullo. Markov chains with maximum
entropy for robotic surveillance. IEEE Transactions on Automatic Control, 2018.
[15] André Hottung, Bhanu Bhandari, and Kevin Tierney. Learning a latent search space for routing problems using variational autoencoders. In International Conference on Learning Representations, 2021.
[16] Kyle Hunt and Jun Zhuang. A review of attacker-defender games: Current state and
paths forward. European Journal of Operational Research, 313(2):401–417, 2024.
[17] Stef Janssen, Diogo Matias, and Alexei Sharpanskykh. An agent-based empirical
game theory approach for airport security patrols. Aerospace, 7(1):8, 2020.
[18] Minsu Kim, Jinkyoo Park, et al. Learning collaborative policies to solve np-hard
routing problems. Advances in Neural Information Processing Systems, 34:10418–
10430, 2021.
[19] Lisa-Ann Kirkland, Alta De Waal, and Johan Pieter De Villiers. Evaluation of a pure-strategy Stackelberg game for wildlife security in a geospatial framework. In Southern African Conference for Artificial Intelligence Research, pages 101–118. Springer, 2020.
[20] Wouter Kool, Herke Van Hoof, and Max Welling. Attention, learn to solve routing
problems! arXiv preprint arXiv:1803.08475, 2018.
[21] Zhongkai Li, Chengcheng Huo, and Xiangwei Qi. Analysis and study of several
game algorithms for public safety. In 2020 International Conference on Computer
Engineering and Application (ICCEA), pages 575–579. IEEE, 2020.
[22] Qiang Ma, Suwen Ge, Danyang He, Darshan Thaker, and Iddo Drori. Combinatorial
optimization by graph pointer networks and hierarchical reinforcement learning. In
AAAI Workshop on Deep Learning on Graphs: Methodologies and Applications, 2020.
[23] Jie Min and Tomasz Radzik. Bamboo garden trimming problem. In SOFSEM 2017:
Theory and Practice of Computer Science: 43rd International Conference on Current
Trends in Theory and Practice of Computer Science, Limerick, Ireland, January 16-
20, 2017, Proceedings, volume 10139, page 229. Springer, 2017.
[24] Thanh H. Nguyen, Debarun Kar, Matthew Brown, Arunesh Sinha, Albert Xin Jiang,
and Milind Tambe. Towards a science of security games. In B. Toni, editor, New
Frontiers of Multidisciplinary Research in STEAM-H, 2016.
[25] Rushabh Patel, Pushkarini Agharkar, and Francesco Bullo. Robotic surveillance and Markov chains with minimal weighted Kemeny constant. IEEE Transactions on Automatic Control, 60(12):3156–3167, 2015.
[26] James Pita, Manish Jain, Janusz Marecki, Fernando Ordóñez, Christopher Portway, Milind Tambe, Craig Western, Praveen Paruchuri, and Sarit Kraus. Deployed ARMOR protection: the application of a game theoretic model for security at the Los Angeles International Airport. In Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems: Industrial Track, pages 125–132. International Foundation for Autonomous Agents and Multiagent Systems, 2008.
[27] Jerry H Ratcliffe, Travis Taniguchi, Elizabeth R Groff, and Jennifer D Wood. The Philadelphia foot patrol experiment: A randomized controlled trial of police patrol effectiveness in violent crime hotspots. Criminology, 49(3):795–831, 2011.
[28] Sukanya Samanta, Goutam Sen, and Soumya Kanti Ghosh. A literature review on
police patrolling problems. Annals of Operations Research, 316(2):1063–1106, 2022.
[29] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. Advances in Neural Information Processing Systems, 28, 2015.
[30] Yevgeniy Vorobeychik, Bo An, Milind Tambe, and Satinder P Singh. Computing
solutions in infinite-horizon discounted adversarial patrolling games. In ICAPS, 2014.
[31] Hao-Tsung Yang, Shih-Yu Tsai, Kin Sum Liu, Shan Lin, and Jie Gao. Patrol scheduling against adversaries with varying attack durations. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, pages 1179–1188, 2019.