References
[1] A. Hard, K. Rao, R. Mathews, F. Beaufays, S. Augenstein, H. Eichner, C. Kiddon, and D. Ramage, “Federated Learning for Mobile Keyboard Prediction,” arXiv, vol. abs/1811.03604, 2018.
[2] H. B. McMahan, E. Moore, D. Ramage, S. Hampson, and B. A. y Arcas, “Communication-Efficient Learning of Deep Networks from Decentralized Data,” in Proceedings of the 20th International Conference on Artificial Intelligence and Statistics (AISTATS), 2017, pp. 1273-1282.
[3] T. Li, A. K. Sahu, M. Sanjabi, M. Zaheer, A. Talwalkar, and V. Smith, “Federated Optimization in Heterogeneous Networks,” arXiv, vol. abs/1812.06127, 2018.
[4] S. P. Karimireddy, S. Kale, M. Mohri, S. J. Reddi, S. U. Stich, and A. T. Suresh, “SCAFFOLD: Stochastic Controlled Averaging for Federated Learning,” in Proceedings of the 37th International Conference on Machine Learning (ICML), 2020, pp. 5132-5143.
[5] J. Wang, Q. Liu, H. Liang, G. Joshi, and H. V. Poor, “Tackling the Objective Inconsistency Problem in Heterogeneous Federated Optimization,” arXiv, vol. abs/2007.07481, 2020.
[6] D. A. E. Acar, Y. Zhao, R. M. Navarro, M. Mattina, P. N. Whatmough, and V. Saligrama, “Federated Learning Based on Dynamic Regularization,” arXiv, vol. abs/2111.04263, 2021.
[7] M. Duan, D. Liu, X. Chen, Y. Tan, J. Ren, L. Qiao, and L. Liang, “Astraea: Self-Balancing Federated Learning for Improving Classification Accuracy of Mobile Deep Learning Applications,” 2019 IEEE International Conference on Computer Design (ICCD), 2019.
[8] Y. Zhao, M. Li, L. Lai, N. Suda, D. Civin, and V. Chandra, “Federated Learning with Non-IID Data,” arXiv, vol. abs/1806.00582, 2018.
[9] L. Huang, Y. Yin, Z. F. Zhang, H. Deng, and D. Liu, “LoAdaBoost: loss-based AdaBoost federated machine learning with reduced computational complexity on IID and non-IID intensive care data,” arXiv, vol. abs/1811.12629, 2020.
[10] E. Jeong, S. Oh, H. Kim, J. Park, M. Bennis, and S.-L. Kim, “Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data,” arXiv, vol. abs/1811.11479, 2018.
[11] J. Konečný, H. B. McMahan, F. X. Yu, P. Richtárik, A. T. Suresh, and D. Bacon, “Federated Learning: Strategies for Improving Communication Efficiency,” arXiv, vol. abs/1610.05492, 2016.
[12] S. Caldas, J. Konečný, H. B. McMahan, and A. Talwalkar, “Expanding the Reach of Federated Learning by Reducing Client Resource Requirements,” arXiv, vol. abs/1812.07210, 2018.
[13] F. Sattler, S. Wiedemann, K.-R. Müller, and W. Samek, “Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data,” IEEE Transactions on Neural Networks and Learning Systems, vol. 31, pp. 3400-3413, 2020.
[14] X. Zhang, M. Hong, S. V. Dhople, W. Yin, and Y. Liu, “FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data,” arXiv, vol. abs/2005.11418, 2020.
[15] T. Nishio and R. Yonetani, “Client Selection for Federated Learning with Heterogeneous Resources in Mobile Edge,” 2019 IEEE International Conference on Communications (ICC), pp. 1-7, 2019.
[16] J. Kang, Z. Xiong, D. T. Niyato, H. Yu, Y.-C. Liang, and D. I. Kim, “Incentive Design for Efficient Federated Learning in Mobile Networks: A Contract Theory Approach,” 2019 IEEE VTS Asia Pacific Wireless Communications Symposium (APWCS), pp. 1-5, 2019.
[17] H. T. Nguyen, V. Sehwag, S. Hosseinalipour, C. G. Brinton, M. Chiang, and H. V. Poor, “Fast-Convergent Federated Learning,” IEEE Journal on Selected Areas in Communications, vol. 39, pp. 201-218, 2021.
[18] M. Mohri, G. Sivek, and A. T. Suresh, “Agnostic Federated Learning,” arXiv, vol. abs/1902.00146, 2019.
[19] T. Li, M. Sanjabi, and V. Smith, “Fair Resource Allocation in Federated Learning,” arXiv, vol. abs/1905.10497, 2019.
[20] K. Bonawitz, V. Ivanov, B. Kreuter, A. Marcedone, H. B. McMahan, S. Patel, D. Ramage, A. Segal, and K. Seth, “Practical Secure Aggregation for Privacy-Preserving Machine Learning,” in Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security (CCS), 2017, pp. 1175-1191.
[21] N. Agarwal, A. T. Suresh, F. Yu, S. Kumar, and H. B. McMahan, “cpSGD: Communication-efficient and differentially-private distributed SGD,” arXiv, vol. abs/1805.10559, 2018.
[22] G. Xu, H. Li, S. Liu, K. Yang, and X. Lin, “VerifyNet: Secure and Verifiable Federated Learning,” IEEE Transactions on Information Forensics and Security, vol. 15, pp. 911-926, 2020.
[23] S. J. Reddi, Z. B. Charles, M. Zaheer, Z. Garrett, K. Rush, J. Konečný, S. Kumar, and H. B. McMahan, “Adaptive Federated Optimization,” arXiv, vol. abs/2003.00295, 2020.
[24] T. Li, A. K. Sahu, M. Zaheer, M. Sanjabi, A. Talwalkar, and V. Smith, “FedDANE: A Federated Newton-Type Method,” 2019 53rd Asilomar Conference on Signals, Systems, and Computers, pp. 1227-1231, 2019.
[25] X. Li, K. Huang, W. Yang, S. Wang, and Z. Zhang, “On the Convergence of FedAvg on Non-IID Data,” arXiv, vol. abs/1907.02189, 2019.
[26] H. Wang, M. Yurochkin, Y. Sun, D. Papailiopoulos, and Y. Khazaeni, “Federated Learning with Matched Averaging,” arXiv, vol. abs/2002.06440, 2020.
[27] E. Jeong, S. Oh, H. Kim, J. Park, M. Bennis, and S.-L. Kim, “Communication-Efficient On-Device Machine Learning: Federated Distillation and Augmentation under Non-IID Private Data,” arXiv, vol. abs/1811.11479, 2018.
[28] D. Li and J. Wang, “FedMD: Heterogenous Federated Learning via Model Distillation,” arXiv, vol. abs/1910.03581, 2019.
[29] Z. Zhu, J. Hong, and J. Zhou, “Data-Free Knowledge Distillation for Heterogeneous Federated Learning,” in Proceedings of the 38th International Conference on Machine Learning, Proceedings of Machine Learning Research, 2021, pp. 12878-12889.
[30] G. Lee, M. Jeong, Y. Shin, S. Bae, and S.-Y. Yun, “Preservation of the Global Knowledge by Not-True Distillation in Federated Learning,” in Advances in Neural Information Processing Systems, vol. 35, 2022, pp. 38461-38474.
[31] P. Qi, X. Zhou, Y. Ding, Z. Zhang, S. Zheng, and Z. Li, “FedBKD: Heterogenous Federated Learning via Bidirectional Knowledge Distillation for Modulation Classification in IoT-Edge System,” IEEE Journal of Selected Topics in Signal Processing, vol. 17, pp. 189-204, 2023.
[32] M. G. Arivazhagan, V. Aggarwal, A. K. Singh, and S. Choudhary, “Federated Learning with Personalization Layers,” arXiv, vol. abs/1912.00818, 2019.
[33] A. Fallah, A. Mokhtari, and A. Ozdaglar, “Personalized Federated Learning with Theoretical Guarantees: A Model-Agnostic Meta-Learning Approach,” in Advances in Neural Information Processing Systems, vol. 33, 2020, pp. 3557-3568.
[34] P. P. Liang, T. Liu, L. Ziyin, R. Salakhutdinov, and L.-P. Morency, “Think Locally, Act Globally: Federated Learning with Local and Global Representations,” arXiv, vol. abs/2001.01523, 2020.
[35] V. Smith, C.-K. Chiang, M. Sanjabi, and A. Talwalkar, “Federated Multi-Task Learning,” arXiv, vol. abs/1705.10467, 2017.
[36] H. Eichner, T. Koren, H. B. McMahan, N. Srebro, and K. Talwar, “Semi-Cyclic Stochastic Gradient Descent,” arXiv, vol. abs/1904.10120, 2019.
[37] M. Khodak, M.-F. Balcan, and A. Talwalkar, “Adaptive Gradient-Based Meta-Learning Methods,” arXiv, vol. abs/1906.02717, 2019.
[38] Q. Li, B. He, and D. Song, “Model-Contrastive Federated Learning,” arXiv, vol. abs/2103.16257, 2021.
[39] S. Kalra, J. Wen, J. C. Cresswell, M. Volkovs, and H. R. Tizhoosh, “Decentralized federated learning through proxy model sharing,” Nature Communications, vol. 14, 2023.
[40] Y. Huang, L. Kong, Q. Li, and B. Zhang, “Decentralized Federated Learning via Mutual Knowledge Distillation,” 2023 IEEE International Conference on Multimedia and Expo (ICME), pp. 342-347, 2023.
[41] A. Gholami, N. Torkzaban, and J. S. Baras, “Trusted Decentralized Federated Learning,” 2022 IEEE 19th Annual Consumer Communications & Networking Conference (CCNC), pp. 1-6, 2022.
[42] Z. Tang, S. Shi, B. Li, and X. Chu, “GossipFL: A Decentralized Federated Learning Framework With Sparsified and Adaptive Communication,” IEEE Transactions on Parallel and Distributed Systems, vol. 34, pp. 909-922, 2023.
[43] N. Masmoudi and W. Jaafar, “OCD-FL: A Novel Communication-Efficient Peer Selection-based Decentralized Federated Learning,” arXiv, vol. abs/2403.04037, 2024.
[44] C. Hu, J. Jiang, and Z. Wang, “Decentralized Federated Learning: A Segmented Gossip Approach,” arXiv, vol. abs/1908.07782, 2019.
[45] X. Li, B. Chen, and W. Lu, “FedDKD: Federated learning with decentralized knowledge distillation,” Applied Intelligence, pp. 1-17, 2022.
[46] Y. Huang, C. Bert, S. Fischer, M. Schmidt, A. Dörfler, A. Maier, R. Fietkau, and F. Putz, “Continual Learning for Peer-to-Peer Federated Learning: A Study on Automated Brain Metastasis Identification,” arXiv, vol. abs/2204.13591, 2022.
[47] K. Chang, N. Balachandar, C. K. Lam, D. Yi, J. M. Brown, A. L. Beers, B. R. Rosen, D. Rubin, and J. Kalpathy-Cramer, “Distributed deep learning networks among institutions for medical imaging,” Journal of the American Medical Informatics Association (JAMIA), vol. 25, pp. 945-954, 2018.
[48] J. Xu, B. S. Glicksberg, C. Su, P. Walker, J. Bian, and F. Wang, “Federated Learning for Healthcare Informatics,” arXiv, vol. abs/1911.06270, 2020.
[49] "The Complete Works of William Shakespeare by William Shakespeare," Project Gutenberg, 1994.
[50] S. Caldas, S. M. K. Duddu, P. Wu, T. Li, J. Konečný, H. B. McMahan, V. Smith, and A. Talwalkar, “LEAF: A Benchmark for Federated Settings,” arXiv, vol. abs/1812.01097, 2018.
[51] Y. Hou, J. Li, Z. He, A. Yan, X. Chen, and J. McAuley, “Bridging Language and Items for Retrieval and Recommendation,” arXiv, vol. abs/2403.03952, 2024.
[52] G. Cohen, S. Afshar, J. Tapson, and A. van Schaik, “EMNIST: an extension of MNIST to handwritten letters,” arXiv, vol. abs/1702.05373, 2017.