jjzgeeks/Federated_learning_papers

About Resource Allocation

  1. Adaptive Federated Learning in Resource Constrained Edge Computing Systems. Shiqiang Wang, Tiffany Tuor, Theodoros Salonidis, Kin K. Leung, Christian Makaya, Ting He, Kevin Chan. IEEE Journal on Selected Areas in Communications, 2019. p1.
    code & data: adaptive-federated-learning
  2. Fair Resource Allocation in Federated Learning. Tian Li, Maziar Sanjabi, Ahmad Beirami, Virginia Smith. ICLR 2020. p2.
    code & data: fair_flearn
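The fair resource allocation paper above (q-FFL) upweights clients with higher local loss so no device is systematically underserved. A minimal pure-Python sketch of that reweighting idea, not the authors' exact q-FedAvg solver; the client updates and losses below are made up for illustration:

```python
# Sketch of loss-weighted aggregation in the spirit of q-FFL:
# clients with higher local loss receive proportionally more weight.
# q = 0 recovers plain uniform (FedAvg-style) averaging.

def q_weighted_average(updates, losses, q):
    """Average per-parameter client updates, weighting client k by losses[k]**q."""
    weights = [loss ** q for loss in losses]
    total = sum(weights)
    dim = len(updates[0])
    return [
        sum(w * u[i] for w, u in zip(weights, updates)) / total
        for i in range(dim)
    ]

updates = [[1.0, 0.0], [0.0, 1.0]]   # hypothetical client updates
losses = [4.0, 1.0]                  # hypothetical client training losses

print(q_weighted_average(updates, losses, q=0))  # uniform: [0.5, 0.5]
print(q_weighted_average(updates, losses, q=1))  # skewed toward the high-loss client
```

Raising q interpolates from plain averaging toward minimizing the worst-off client's loss.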

About Communication-Efficient Federated Learning

  1. Robust and Communication-Efficient Federated Learning From Non-i.i.d. Data. Felix Sattler, Simon Wiedemann, Klaus-Robert Müller, Wojciech Samek. IEEE Transactions on Neural Networks and Learning Systems, vol. 31, no. 9, pp. 3400-3413, Sept. 2020. p1.
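Compression schemes like the sparse ternary compression in the paper above transmit only the largest-magnitude entries of each update. A minimal top-k sparsification sketch (illustrative only, not the paper's exact codec):

```python
# Top-k gradient sparsification: keep only the k largest-magnitude
# entries of an update and zero out the rest before transmission.

def top_k_sparsify(update, k):
    """Return a copy of `update` with all but the k largest |entries| zeroed."""
    keep = set(
        sorted(range(len(update)), key=lambda i: abs(update[i]), reverse=True)[:k]
    )
    return [v if i in keep else 0.0 for i, v in enumerate(update)]

grad = [0.1, -2.0, 0.05, 1.5, -0.3]
print(top_k_sparsify(grad, k=2))  # [0.0, -2.0, 0.0, 1.5, 0.0]
```

In practice the zeroed residual is accumulated locally and added back in later rounds so the compression error does not bias training.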

About Security and Privacy

Backdoor Attacks

  1. How To Backdoor Federated Learning. Eugene Bagdasaryan, Andreas Veit, Yiqing Hua, Deborah Estrin, Vitaly Shmatikov. AISTATS 2020, PMLR 108:2938-2948. p1, codes.
  2. DBA: Distributed Backdoor Attacks against Federated Learning. Chulin Xie, Keli Huang, Pin-Yu Chen, Bo Li. ICLR 2020. p2.
  3. 3DFed: Adaptive and Extensible Framework for Covert Backdoor Attack in Federated Learning. Haoyang Li, Qingqing Ye, Haibo Hu, Jin Li, Leixia Wang, Chengfang Fang, Jie Shi. IEEE Symposium on Security and Privacy (SP) 2023. p3, codes.
  4. IBA: Towards Irreversible Backdoor Attacks in Federated Learning. Thuy Dung Nguyen, Tuan A. Nguyen, Anh Tran, Khoa D. Doan, Kok-Seng Wong. NeurIPS 2023. p4, codes.
  5. Attack of the Tails: Yes, You Really Can Backdoor Federated Learning. Hongyi Wang, Kartik Sreenivasan, Shashank Rajput, Harit Vishwakarma, Saurabh Agarwal, Jy-yong Sohn, Kangwook Lee, Dimitris Papailiopoulos. NeurIPS 2020. p5, codes.
  6. A3FL: Adversarially Adaptive Backdoor Attacks to Federated Learning. Hangfan Zhang, Jinyuan Jia, Jinghui Chen, Lu Lin, Dinghao Wu. NeurIPS 2023. p6, codes.
  7. Neurotoxin: Durable Backdoors in Federated Learning. Zhengming Zhang, Ashwinee Panda, Linyue Song, Yaoqing Yang, Michael Mahoney, Prateek Mittal, Kannan Ramchandran, Joseph Gonzalez. ICML 2022, PMLR 162:26429-26446. p7, codes.
  8. Get Rid of Your Trail: Remotely Erasing Backdoors in Federated Learning. M. Alam, H. Lamri, M. Maniatakos. IEEE Transactions on Artificial Intelligence, vol. 5, no. 12, pp. 6683-6698, Dec. 2024, doi: 10.1109/TAI.2024.3465441. p8, codes.
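Several of the attacks above (notably "How To Backdoor Federated Learning") rely on model replacement: a malicious client scales its submitted update so that, after uniform averaging, the global model lands near the attacker's target. A toy sketch of just that scaling arithmetic, with made-up numbers and benign updates assumed to be near zero; real attacks must also evade anomaly detection:

```python
# Model replacement: with n clients averaged uniformly, the attacker submits
# n * (target - global) so the aggregated model moves (approximately) to `target`.

def fedavg_step(global_model, updates):
    """Apply the uniform average of client updates to the global model."""
    n = len(updates)
    return [g + sum(u[i] for u in updates) / n for i, g in enumerate(global_model)]

n = 5
global_model = [0.0, 0.0]
target = [1.0, -1.0]                       # attacker's backdoored model (toy)
benign = [[0.0, 0.0]] * (n - 1)            # assume benign updates ~ 0 (toy)
malicious = [n * (t - g) for t, g in zip(target, global_model)]

new_global = fedavg_step(global_model, benign + [malicious])
print(new_global)  # [1.0, -1.0]: the attacker has replaced the global model
```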

Backdoor Defenses

  1. DeepSight: Mitigating Backdoor Attacks in Federated Learning Through Deep Model Inspection. Phillip Rieger, Thien Duc Nguyen, Markus Miettinen, Ahmad-Reza Sadeghi. NDSS 2022. p1.
  2. Backdoor Defense with Machine Unlearning. Yang Liu, Mingyuan Fan, Cen Chen, Ximeng Liu, Zhuo Ma, Li Wang, Jianfeng Ma. IEEE INFOCOM 2022, pp. 280-289. p2.
  3. Mitigating Distributed Backdoor Attack in Federated Learning Through Mode Connectivity. Kane Walter, Meisam Mohammady, Surya Nepal, Salil S. Kanhere. ASIA CCS 2024. p3.

Privacy Attacks

  1. Federated Learning Vulnerabilities: Privacy Attacks with Denoising Diffusion Probabilistic Models. Hongyan Gu, Xinyi Zhang, Jiang Li, Hui Wei, Baiqi Li, Xinli Huang. WWW 2024. p1.
  2. Inverting Gradients - How Easy Is It to Break Privacy in Federated Learning? Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller. NeurIPS 2020. p2.
  3. Deep Leakage from Gradients. Ligeng Zhu, Zhijian Liu, Song Han. NeurIPS 2019. p3.
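The gradient-leakage attacks above exploit the fact that shared gradients encode the private training data. In the simplest possible case, a one-sample least-squares step with a bias term, the data can be read off in closed form: the weight gradient is the input scaled by the error, and the bias gradient is exactly that error. A toy illustration of this leakage (my own minimal example, far simpler than the papers' optimization-based reconstructions):

```python
# For loss (w.x + b - y)^2 on one sample: dL/dw = 2*err*x and dL/db = 2*err,
# so an eavesdropper computes x = (dL/dw) / (dL/db) exactly.

def gradients(w, b, x, y):
    """Gradients of the squared error (w.x + b - y)^2 for a single sample."""
    err = sum(wi * xi for wi, xi in zip(w, x)) + b - y
    gw = [2 * err * xi for xi in x]
    gb = 2 * err
    return gw, gb

w, b = [0.5, -0.25], 0.0
x_private, y = [3.0, 4.0], 1.0       # the client's private sample

gw, gb = gradients(w, b, x_private, y)
x_recovered = [g / gb for g in gw]   # element-wise ratio reveals x exactly
print(x_recovered)  # [3.0, 4.0]
```

For deep networks no such closed form exists, which is why Zhu et al. and Geiping et al. instead optimize dummy inputs until their gradients match the observed ones.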

Privacy Defenses

  1. Concealing Sensitive Samples against Gradient Leakage in Federated Learning. Jing Wu, Munawar Hayat, Mingyi Zhou, Mehrtash Harandi. AAAI 2024. p1, codes.
  2. More Than Enough is Too Much: Adaptive Defenses Against Gradient Leakage in Production Federated Learning. Fei Wang, Ethan Hugh, Baochun Li. IEEE/ACM Transactions on Networking. p2.
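A common baseline that the adaptive defenses above build upon is to clip and perturb gradients before sharing, in the style of differentially private SGD. A minimal sketch with made-up clipping and noise parameters, purely to show the mechanics:

```python
import math
import random

def sanitize_gradient(grad, clip_norm=1.0, noise_std=0.1, rng=None):
    """Clip the gradient to `clip_norm` in L2 norm, then add Gaussian noise."""
    rng = rng or random.Random(0)
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    return [g * scale + rng.gauss(0.0, noise_std) for g in grad]

grad = [3.0, 4.0]                # L2 norm 5 -> clipped down to norm 1
print(sanitize_gradient(grad))   # roughly [0.6, 0.8], plus noise
```

The trade-off is accuracy: more noise leaks less but slows convergence, which is exactly the tension the adaptive defenses above try to manage.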

Poisoning Attacks

  1. Data Poisoning Attacks Against Federated Learning Systems. Vale Tolpegin, Stacey Truex, Mehmet Emre Gursoy, Ling Liu. ESORICS 2020. p1.
  2. Manipulating the Byzantine: Optimizing Model Poisoning Attacks and Defenses for Federated Learning. Virat Shejwalkar, Amir Houmansadr. NDSS 2021. p2.

Defenses against Poisoning Attacks

  1. Byzantine-Robust Decentralized Federated Learning. Minghong Fang, Zifan Zhang, Prashant Khanduri, Songtao Lu, Yuchen Liu, Neil Gong. CCS 2024. p1.
  2. Byzantine-robust Decentralized Federated Learning via Dual-domain Clustering and Trust Bootstrapping. Peng Sun, Xinyang Liu, Zhibo Wang, Bo Liu. CVPR 2024. p2.
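A standard robust-aggregation baseline behind Byzantine-robust schemes like those above is the coordinate-wise median: unlike the mean, a single outlier client cannot drag it arbitrarily far. A minimal sketch with made-up client updates:

```python
import statistics

def coordinate_median(updates):
    """Aggregate client updates by taking the median of each coordinate."""
    return [statistics.median(col) for col in zip(*updates)]

benign = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9]]
byzantine = [[100.0, -100.0]]                 # one malicious client

all_updates = benign + byzantine
mean = [sum(col) / len(col) for col in zip(*all_updates)]
median = coordinate_median(all_updates)
print(mean)    # pulled far from the benign cluster by the outlier
print(median)  # stays near the benign updates
```

The optimized attacks in the Shejwalkar-Houmansadr paper show that such per-coordinate rules can still be evaded by updates crafted to sit just inside the robust range, which motivates the clustering and trust-bootstrapping defenses above.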

About Catastrophic Forgetting

  1. A Data-Free Approach to Mitigate Catastrophic Forgetting in Federated Class Incremental Learning for Vision Tasks. Sara Babakniya, Zalan Fabian, Chaoyang He, Mahdi Soltanolkotabi, Salman Avestimehr. NeurIPS 2023. p1.
