References
[1] T. Qiu, J. Chi, X. Zhou, Z. Ning, M. Atiquzzaman, and D. O. Wu, “Edge computing
in industrial internet of things: Architecture, advances and challenges,” IEEE
Communications Surveys & Tutorials, vol. 22, no. 4, pp. 2462–2488, 2020.
[2] T. Liu, L. Fang, Y. Zhu, W. Tong, and Y. Yang, “A near-optimal approach for online
task offloading and resource allocation in edge-cloud orchestrated computing,” IEEE
Transactions on Mobile Computing, vol. 21, no. 8, pp. 2687–2700, 2022.
[3] C. You, K. Huang, H. Chae, and B.-H. Kim, “Energy-efficient resource allocation for
mobile-edge computation offloading,” IEEE Transactions on Wireless Communications,
vol. 16, no. 3, pp. 1397–1411, 2017.
[4] S. M. A. Huda and S. Moh, “Deep reinforcement learning-based computation offloading
in UAV swarm-enabled edge computing for surveillance applications,” IEEE
Access, vol. 11, pp. 68269–68285, 2023.
[5] G. Ji, B. Zhang, Z. Yao, and C. Li, “A reverse auction based incentive mechanism
for mobile crowdsensing,” in ICC 2019 - 2019 IEEE International Conference on
Communications (ICC), 2019, pp. 1–6.
[6] C.-L. Hu, K.-Y. Lin, and C. K. Chang, “Incentive mechanism for mobile crowdsensing
with two-stage Stackelberg game,” IEEE Transactions on Services Computing, vol. 16,
no. 3, pp. 1904–1918, 2023.
[7] C. Su, F. Ye, T. Liu, Y. Tian, and Z. Han, “Computation offloading in hierarchical
multi-access edge computing based on contract theory and Bayesian matching game,”
IEEE Transactions on Vehicular Technology, vol. 69, no. 11, pp. 13686–13701, 2020.
[8] N. Zhao, W. Du, F. Ren, Y. Pei, Y.-C. Liang, and D. Niyato, “Joint task offloading,
resource sharing and computation incentive for edge computing networks,” IEEE
Communications Letters, vol. 27, no. 1, pp. 258–262, 2023.
[9] N. Zhao, Y. Pei, Y.-C. Liang, and D. Niyato, “Deep-reinforcement-learning-based
contract incentive mechanism for joint sensing and computation in mobile crowdsourcing
networks,” IEEE Internet of Things Journal, vol. 11, no. 7, pp. 12755–12767, 2024.
[10] C. Wang, W. Lu, S. Peng, Y. Qu, G. Wang, and S. Yu, “Modeling on energy-efficiency
computation offloading using probabilistic action generating,” IEEE Internet
of Things Journal, vol. 9, no. 20, pp. 20681–20692, 2022.
[11] K. Zheng, G. Jiang, X. Liu, K. Chi, X. Yao, and J. Liu, “DRL-based offloading for
computation delay minimization in wireless-powered multi-access edge computing,”
IEEE Transactions on Communications, vol. 71, no. 3, pp. 1755–1770, 2023.
[12] H. Yu, Q. Wang, and S. Guo, “Energy-efficient task offloading and resource scheduling
for mobile edge computing,” in 2018 IEEE International Conference on Networking,
Architecture and Storage (NAS), 2018, pp. 1–4.
[13] T. X. Tran and D. Pompili, “Joint task offloading and resource allocation for multi-server
mobile-edge computing networks,” IEEE Transactions on Vehicular Technology,
vol. 68, no. 1, pp. 856–868, 2019.
[14] H. Tran-Dang and D.-S. Kim, “Impact of task splitting on the delay performance of
task offloading in the IoT-enabled fog systems,” in 2021 International Conference on
Information and Communication Technology Convergence (ICTC), 2021, pp. 661–
663.
[15] K. Zhang, Y. Mao, S. Leng, Q. Zhao, L. Li, X. Peng, L. Pan, S. Maharjan, and
Y. Zhang, “Energy-efficient offloading for mobile edge computing in 5G heterogeneous
networks,” IEEE Access, vol. 4, pp. 5896–5907, 2016.
[16] C.-F. Liu, M. Bennis, M. Debbah, and H. V. Poor, “Dynamic task offloading and resource
allocation for ultra-reliable low-latency edge computing,” IEEE Transactions
on Communications, vol. 67, no. 6, pp. 4132–4150, 2019.
[17] Q. Luo, C. Li, T. H. Luan, and W. Shi, “Collaborative data scheduling for vehicular
edge computing via deep reinforcement learning,” IEEE Internet of Things Journal,
vol. 7, no. 10, pp. 9637–9650, 2020.
[18] J. X. Liao and X. W. Wu, “Resource allocation and task scheduling scheme in priority-based
hierarchical edge computing system,” in 2020 19th International Symposium
on Distributed Computing and Applications for Business Engineering and Science
(DCABES), 2020, pp. 46–49.
[19] H. Ye, G. Y. Li, and B.-H. F. Juang, “Deep reinforcement learning based resource
allocation for V2V communications,” IEEE Transactions on Vehicular Technology,
vol. 68, no. 4, pp. 3163–3173, 2019.
[20] L. Zhang, Y. Sun, Y. Tang, H. Zeng, and Y. Ruan, “Joint offloading decision and
resource allocation in MEC-enabled vehicular networks,” in 2021 IEEE 93rd Vehicular
Technology Conference (VTC2021-Spring), 2021, pp. 1–5.
[21] P. Bolton and M. Dewatripont, Contract Theory. MIT Press, 2004.
[22] Y. Zhang, L. Song, W. Saad, Z. Dawy, and Z. Han, “Contract-based incentive mechanisms
for device-to-device communications in cellular networks,” IEEE Journal on
Selected Areas in Communications, vol. 33, no. 10, pp. 2144–2155, 2015.
[23] G. Ji, Z. Yao, B. Zhang, and C. Li, “A reverse auction-based incentive mechanism for
mobile crowdsensing,” IEEE Internet of Things Journal, vol. 7, no. 9, pp. 8238–8248,
2020.
[24] M. Zeng, Y. Li, K. Zhang, M. Waqas, and D. Jin, “Incentive mechanism design for
computation offloading in heterogeneous fog computing: A contract-based approach,”
in 2018 IEEE International Conference on Communications (ICC), 2018, pp. 1–6.
[25] Z. Zhou, H. Liao, X. Zhao, B. Ai, and M. Guizani, “Reliable task offloading for
vehicular fog computing under information asymmetry and information uncertainty,”
IEEE Transactions on Vehicular Technology, vol. 68, no. 9, pp. 8322–8335, 2019.
[26] Z. Zhou, H. Liao, B. Gu, S. Mumtaz, and J. Rodriguez, “Resource sharing and task
offloading in iot fog computing: A contract-learning approach,” IEEE Transactions
on Emerging Topics in Computational Intelligence, vol. 4, no. 3, pp. 227–240, 2020.
[27] S. M. A. Kazmi, T. N. Dang, I. Yaqoob, A. Manzoor, R. Hussain, A. Khan, C. S.
Hong, and K. Salah, “A novel contract theory-based incentive mechanism for cooperative
task-offloading in electrical vehicular networks,” IEEE Transactions on
Intelligent Transportation Systems, vol. 23, no. 7, pp. 8380–8395, 2022.
[28] Z. Hu, Z. Zheng, L. Song, T. Wang, and X. Li, “UAV offloading: Spectrum trading
contract design for UAV-assisted cellular networks,” IEEE Transactions on Wireless
Communications, vol. 17, no. 9, pp. 6093–6107, 2018.
[29] Z. Zhou, P. Liu, J. Feng, Y. Zhang, S. Mumtaz, and J. Rodriguez, “Computation
resource allocation and task assignment optimization in vehicular fog computing: A
contract-matching approach,” IEEE Transactions on Vehicular Technology, vol. 68,
no. 4, pp. 3113–3125, 2019.
[30] M. Diamanti, P. Charatsaris, E. E. Tsiropoulou, and S. Papavassiliou, “Incentive
mechanism and resource allocation for edge-fog networks driven by multi-dimensional
contract and game theories,” IEEE Open Journal of the Communications Society,
vol. 3, pp. 435–452, 2022.
[31] Y. Li, B. Yang, H. Wu, Q. Han, C. Chen, and X. Guan, “Joint offloading decision
and resource allocation for vehicular fog-edge computing networks: A contract-Stackelberg
approach,” IEEE Internet of Things Journal, vol. 9, no. 17, pp. 15969–15982, 2022.
[32] H. Liu, H. Zhao, L. Geng, and W. Feng, “A policy gradient based offloading scheme
with dependency guarantees for vehicular networks,” in 2020 IEEE Globecom Workshops
(GC Wkshps), 2020, pp. 1–6.
[33] F. Jiang, X. Zhu, and C. Sun, “Double DQN based computing offloading scheme for
fog radio access networks,” in 2021 IEEE/CIC International Conference on Communications
in China (ICCC), 2021, pp. 1131–1136.
[34] C. Qiu, Y. Hu, Y. Chen, and B. Zeng, “Deep deterministic policy gradient (DDPG)-
based energy harvesting wireless communications,” IEEE Internet of Things Journal,
vol. 6, no. 5, pp. 8577–8588, 2019.
[35] N. C. Luong, D. T. Hoang, S. Gong, D. Niyato, P. Wang, Y.-C. Liang, and D. I. Kim,
“Applications of deep reinforcement learning in communications and networking: A
survey,” IEEE Communications Surveys & Tutorials, vol. 21, no. 4, pp. 3133–3174,
2019.
[36] Y. Zhan, C. H. Liu, Y. Zhao, J. Zhang, and J. Tang, “Free market of multi-leader
multi-follower mobile crowdsensing: An incentive mechanism design by deep reinforcement
learning,” IEEE Transactions on Mobile Computing, vol. 19, no. 10, pp.
2316–2329, 2020.
[37] C. Chen, S. Gong, W. Zhang, Y. Zheng, and Y. C. Kiat, “Deep reinforcement learning
based contract incentive for UAVs and energy harvest assisted computing,” in GLOBECOM
2022 - 2022 IEEE Global Communications Conference, 2022, pp. 2224–2229.
[38] N. H. Tran, W. Bao, A. Zomaya, M. N. H. Nguyen, and C. S. Hong, “Federated
learning over wireless networks: Optimization model design and analysis,” in IEEE
INFOCOM 2019 - IEEE Conference on Computer Communications, 2019, pp. 1387–
1395.
[39] T. Liu, J. Li, F. Shu, M. Tao, W. Chen, and Z. Han, “Design of contract-based
trading mechanism for a small-cell caching system,” IEEE Transactions on Wireless
Communications, vol. 16, no. 10, pp. 6602–6617, 2017.
[40] W. Hou, H. Wen, N. Zhang, J. Wu, W. Lei, and R. Zhao, “Incentive-driven task
allocation for collaborative edge computing in industrial internet of things,” IEEE
Internet of Things Journal, vol. 9, no. 1, pp. 706–718, 2022.