
FL-Simulator

Pytorch implementations of some general federated optimization methods.

Basic Methods

$FedAvg$: Communication-Efficient Learning of Deep Networks from Decentralized Data

$FedProx$: Federated Optimization in Heterogeneous Networks

$FedAdam$: Adaptive Federated Optimization

$SCAFFOLD$: SCAFFOLD: Stochastic Controlled Averaging for Federated Learning

$FedDyn$: Federated Learning Based on Dynamic Regularization

$FedCM$: FedCM: Federated Learning with Client-level Momentum

$FedSAM/MoFedSAM$: Generalized Federated Learning via Sharpness Aware Minimization

$FedSpeed$: FedSpeed: Larger Local Interval, Less Communication Round, and Higher Generalization Accuracy
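
All of these methods build on the basic $FedAvg$ round: sampled clients run local SGD, and the server averages their parameters weighted by local sample counts. The following is a minimal, self-contained sketch of that round for orientation only (model and loader names are placeholders, not this repository's implementation):

```python
# Minimal sketch of one FedAvg communication round; illustrative only,
# not this repository's implementation.
import copy
import torch

def fedavg_round(global_model, client_loaders, local_epochs=5, lr=0.1):
    """Run local SGD on each sampled client, then average by sample count."""
    client_states, client_sizes = [], []
    for loader in client_loaders:
        local_model = copy.deepcopy(global_model)
        optimizer = torch.optim.SGD(local_model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(local_epochs):
            for x, y in loader:
                optimizer.zero_grad()
                loss_fn(local_model(x), y).backward()
                optimizer.step()
        client_states.append(local_model.state_dict())
        client_sizes.append(len(loader.dataset))

    # FedAvg aggregation: w_global = sum_i (n_i / n_total) * w_i
    total = float(sum(client_sizes))
    avg_state = {}
    for key in client_states[0]:
        avg_state[key] = sum(
            (n / total) * state[key].float()
            for state, n in zip(client_states, client_sizes)
        )
    global_model.load_state_dict(avg_state)
    return global_model
```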

Usage

Training

FL-Simulator runs on a single CPU/GPU to simulate the federated learning (FL) training process with the PyTorch framework. For example, to train centralized FL with the FedAvg method on ResNet-18 and the CIFAR-10 dataset (10% of 100 total clients active per round, with a heterogeneous Dirichlet-0.6 data split), you can use:

```
python train.py --non-iid --dataset CIFAR-10 --model ResNet18 --split-rule Dirichlet --split-coef 0.6 --active-ratio 0.1 --total-client 100
```

Other hyperparameters are introduced in the train.py file.
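
For reference, a Dirichlet split such as --split-rule Dirichlet --split-coef 0.6 draws each class's per-client proportions from a Dirichlet($\alpha$) distribution, where a smaller $\alpha$ yields more heterogeneous client label distributions. Below is a rough sketch of this standard partitioning technique (illustrative only; the repository's own splitting code may differ):

```python
# Sketch of a Dirichlet non-IID label split; illustrative only,
# not this repository's splitting code.
import numpy as np

def dirichlet_split(labels, num_clients=100, alpha=0.6, seed=0):
    """Partition sample indices across clients with per-class Dirichlet shares.

    Smaller alpha -> more skewed (heterogeneous) client label distributions.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Draw this class's share for every client from Dirichlet(alpha).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return client_indices
```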

How to create a new method?

FL-Simulator pre-defines the basic Server class and Client class, which are executed according to the vanilla $FedAvg$ algorithm. If you want to define a new method, you can first define a new server file with:

  • process_for_communication( ): how your method preprocesses the variables to be communicated to clients

  • global_update( ): how your method performs the update on the global model

  • postprocess( ): how your method processes the variables received from local clients

Then, you can define a new client file for this new method.
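
Assuming the base Server class exposes the three hooks listed above, a new server might look roughly like the sketch below. The method names come from this README, but the signatures, the structure of the communicated variables, and the base class usage are hypothetical, not the repository's exact API:

```python
# Illustrative sketch of a new server built on the three hooks described
# above. Signatures and variable layouts are hypothetical assumptions.
import torch

class MyNewServer(Server):  # hypothetical base class pre-defined by the repo
    def process_for_communication(self, client, comm_vars):
        # Preprocess what is sent to the client, e.g. attach the current
        # global parameters plus any method-specific state (momentum,
        # control variates, ...).
        comm_vars["global_params"] = [
            p.detach().clone() for p in self.model.parameters()
        ]

    def global_update(self, received):
        # Update the global model; here, a plain average of the local
        # parameters returned by clients (assumed as lists of tensors).
        with torch.no_grad():
            for i, p in enumerate(self.model.parameters()):
                p.copy_(torch.stack([r["params"][i] for r in received]).mean(dim=0))

    def postprocess(self, received):
        # Process other variables returned by clients, e.g. accumulate
        # correction terms before the next communication round.
        pass
```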

Experiments

We train ResNet-18-GN on the CIFAR-10 dataset for T=1000 communication rounds and report the test accuracy (%) of the global model.

CIFAR-10 (ResNet-18-GN), T=1000, with 10% participation of 100 clients (bs=50, local epochs=5):

| Method   | IID   | Dir-0.6 | Dir-0.3 | Dir-0.1 |
|----------|-------|---------|---------|---------|
| FedAvg   | 82.52 | 80.65   | 79.75   | 77.31   |
| FedProx  | 82.54 | 81.05   | 79.52   | 76.86   |
| FedAdam  | 84.32 | 82.56   | 82.12   | 77.58   |
| SCAFFOLD | 84.88 | 83.53   | 82.75   | 79.92   |
| FedDyn   | 85.46 | 84.22   | 83.22   | 78.96   |
| FedCM    | 85.74 | 83.81   | 83.44   | 78.92   |
| MoFedSAM | 87.24 | 85.74   | 85.14   | 81.58   |
| FedSpeed | 87.72 | 86.05   | 85.25   | 82.05   |

CIFAR-10 (ResNet-18-GN), T=1000, with 5% participation of 200 clients (bs=25, local epochs=5):

| Method   | IID   | Dir-0.6 | Dir-0.3 | Dir-0.1 |
|----------|-------|---------|---------|---------|
| FedAvg   | 81.09 | 79.93   | 78.66   | 75.21   |
| FedProx  | 81.56 | 79.49   | 78.76   | 75.84   |
| FedAdam  | 83.29 | 81.22   | 80.22   | 75.83   |
| SCAFFOLD | 84.24 | 83.01   | 82.04   | 78.23   |
| FedDyn   | 81.11 | 80.25   | 79.43   | 75.43   |
| FedCM    | 83.77 | 82.01   | 80.77   | 75.91   |
| MoFedSAM | 86.27 | 84.71   | 83.44   | 79.02   |
| FedSpeed | 86.87 | 85.07   | 83.94   | 79.66   |

ToDo

  • Decentralized FL Implementation.

Citation

If this codebase helps you, please cite our FedSpeed paper:

@article{sun2023fedspeed,
  title={Fedspeed: Larger local interval, less communication round, and higher generalization accuracy},
  author={Sun, Yan and Shen, Li and Huang, Tiansheng and Ding, Liang and Tao, Dacheng},
  journal={arXiv preprint arXiv:2302.10429},
  year={2023}
}
