This is the implementation of our paper "Exploring Selective Layer Fine-Tuning in Federated Learning".
Install Python environment with conda:
conda create -n fedsel python=3.9 -y
conda activate fedsel
Then install the Python packages in requirements.txt:
pip3 install -r requirements.txt
NOTE: you may need to check the versions of some packages, such as torch and transformers.
The CLIP and BERT models need to be downloaded from Hugging Face to the ../models/ directory. Alternatively, you can set MODEL_PATH to the official Hugging Face name of the model to enable automatic download.
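As a rough illustration (the exact checkpoints below are assumptions; use whichever CLIP/BERT variants your experiments need), the models can be pre-downloaded into ../models/ with the transformers library:

```python
# Sketch: pre-download model weights into ../models/ (checkpoint names are assumptions).
from transformers import AutoModel, AutoTokenizer, CLIPModel, CLIPProcessor

SAVE_DIR = "../models"

# A CLIP checkpoint (illustrative choice).
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
clip_processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
clip.save_pretrained(f"{SAVE_DIR}/clip-vit-base-patch32")
clip_processor.save_pretrained(f"{SAVE_DIR}/clip-vit-base-patch32")

# A BERT-family checkpoint from the arguments table below (e.g., roberta-base).
encoder = AutoModel.from_pretrained("roberta-base")
tokenizer = AutoTokenizer.from_pretrained("roberta-base")
encoder.save_pretrained(f"{SAVE_DIR}/roberta-base")
tokenizer.save_pretrained(f"{SAVE_DIR}/roberta-base")
```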
For CIFAR-10:
python src/data_helpers/prepare_cifar.py --dataset=cifar10 --clip --client_num_in_total=100 --alpha=0.1
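The --alpha flag typically corresponds to the concentration parameter of a Dirichlet-based non-IID split (an assumption here; smaller values give more skewed label distributions across clients). A minimal sketch of such a partition, illustrative rather than the repository's actual prepare_cifar.py logic:

```python
# Illustrative Dirichlet label partition (not the repo's exact code).
import numpy as np

def dirichlet_partition(labels, num_clients=100, alpha=0.1, seed=0):
    """Split sample indices across clients; smaller alpha -> more label skew."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for c in np.unique(labels):
        idx = rng.permutation(np.where(labels == c)[0])
        # Per-client proportions for this class drawn from Dir(alpha).
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cut_points = (np.cumsum(proportions) * len(idx)).astype(int)[:-1]
        for client_id, part in enumerate(np.split(idx, cut_points)):
            client_indices[client_id].extend(part.tolist())
    return client_indices
```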
For DomainNet and XGLUE:
You can download the datasets from their official websites and put them in the ../data/ directory; they will be prepared automatically when running the experiments.
python src/server.py --dataset=[Dataset name] --model_type=[Model name] --strategy=[Strategy name] --n_layers=[Min_n_layers] --n_layers_inc=[Max_n_layers]
Argument | Description | Choices |
---|---|---|
dataset | The name of the dataset | cifar10, domainnet, xglue |
model_type | The name of the model | clip, xlm-roberta-large, roberta-large, xlm-roberta-base, roberta-base |
strategy | The layer selection strategy | full, pro, top, bottom, both, sgn, rgn |
n_layers | The minimal number of selected layers in each client | 1, 2 (integer) |
n_layers_inc | The maximal number of selected layers in each client | 0, 4 (integer) |
For example, run the proposed method:
python src/server.py --dataset=domainnet --model_type=clip --strategy=pro --n_layers=1 --n_layers_inc=4
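To illustrate what a layer selection strategy amounts to in practice, here is a hypothetical sketch (not the repository's implementation) in which all parameters are frozen except those of the selected transformer layers; the strategy names follow the table above:

```python
# Hypothetical sketch of selective layer fine-tuning (not the repo's implementation).
import re
import torch.nn as nn

def pick_layer_ids(strategy: str, n: int, total_layers: int) -> list:
    """Illustrative choices: 'top' = last n layers, 'bottom' = first n layers."""
    if strategy == "top":
        return list(range(total_layers - n, total_layers))
    if strategy == "bottom":
        return list(range(n))
    return list(range(total_layers))  # 'full': fine-tune every layer

def freeze_except(model: nn.Module, selected_ids) -> None:
    """Keep only parameters of the selected transformer layers trainable.

    Assumes parameter names contain 'layer.<idx>', as in Hugging Face
    encoders; adapt the pattern to the actual model architecture.
    """
    pattern = re.compile(r"\blayer\.(\d+)\b")
    selected = set(selected_ids)
    for name, param in model.named_parameters():
        match = pattern.search(name)
        param.requires_grad = bool(match) and int(match.group(1)) in selected
```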
If you use this code in your research or find it helpful, please consider citing our paper:
@misc{sun2024exploring,
title={Exploring Selective Layer Fine-Tuning in Federated Learning},
author={Yuchang Sun and Yuexiang Xie and Bolin Ding and Yaliang Li and Jun Zhang},
year={2024},
eprint={2408.15600},
archivePrefix={arXiv},
primaryClass={cs.LG},
url={https://arxiv.org/abs/2408.15600},
}
If you have any questions, please feel free to contact us via [email protected].
The initial implementation of this repo is based on pFedLA. We thank the authors for their contribution.