This project separates a Federated Unlearning (FU) training-and-unlearning pipeline into three parts: [1] the training environment, [2] server and client behavior definitions (OOP), and [3] experiment assembly. The motivation is to give researchers the control and extensibility familiar from Federated Learning (FL) research. This top-level README describes only the general usage of the folders below; each folder contains its own detailed README.
[1], [2] include:
- env_generator: generates the federated learning training environment, e.g. IID or Dirichlet data partitions.
- FedAvg: FedAvg server and client, the vanilla baseline of federated learning research.
- FedNaiveRetrain: naive retrain server and client.
- FedRecover: alias of FedNaiveRetrain.
- FedSGA: server and client supporting stochastic gradient ascent (SGA) for unlearning.
- FedQuickDrop: our work.
- utils: datasets, models, and others.
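As a concrete illustration of what env_generator produces, below is a minimal sketch of a Dirichlet (non-IID) label partition. The function name `dirichlet_partition` and its signature are hypothetical; the repo's actual API may differ.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Partition sample indices across clients with a Dirichlet(alpha) prior.

    Smaller alpha -> more skewed (non-IID) per-client label distributions.
    Hypothetical helper for illustration; see env_generator for the real one.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Draw each client's share of class-c samples from a Dirichlet prior.
        proportions = rng.dirichlet([alpha] * num_clients)
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, cuts)):
            client_indices[client].extend(part.tolist())
    return [np.array(ci) for ci in client_indices]
```

With `alpha` large (e.g. 100) the split approaches IID; with `alpha` small (e.g. 0.1) each client sees only a few classes.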
[3] includes:
- merit: additional verification methods, e.g. the logits-distribution check described in the Kaggle NeurIPS 2023 Machine Unlearning Competition.
- reproduce: artifact evaluation. (If you want to quickly evaluate the work, go to this folder directly.)
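One simple proxy for a logits-distribution check is to compare the unlearned model's predictive distribution on the forget set against a retrained reference. The sketch below computes a mean per-sample KL divergence from raw logits; this is an illustrative stand-in, not the competition's exact metric or the merit folder's actual implementation.

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax over the class axis.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def mean_kl(logits_a, logits_b):
    """Average per-sample KL(softmax(a) || softmax(b)) over a forget set.

    A value near 0 suggests the unlearned model's predictions on the
    forgotten samples are close to the retrained reference's predictions.
    """
    p = softmax(np.asarray(logits_a, dtype=float))
    q = softmax(np.asarray(logits_b, dtype=float))
    return float(np.mean(np.sum(p * (np.log(p) - np.log(q)), axis=-1)))
```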
If you are not familiar with conda/PyTorch environment installation, we provide a tutorial in conda_env for your reference.
How to use the architecture?
- generate an FL environment using the methods in env_generator.
- choose or customize servers and clients.
- assemble your own experiment.
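The three steps above might be wired together roughly as follows. The `Client` and `Server` classes here are hypothetical stand-ins (a toy FedAvg round on a quadratic loss), meant only to show the assembly pattern; the real APIs live in the FedAvg and related folders.

```python
import numpy as np

class Client:
    """Hypothetical client: one local gradient step on a quadratic loss."""
    def __init__(self, data, lr=0.1):
        self.data = data  # local samples (here: scalar data points)
        self.lr = lr

    def local_update(self, w):
        # Gradient of 0.5 * mean((w - x)^2) w.r.t. w is (w - mean(x)).
        grad = w - self.data.mean()
        return w - self.lr * grad

class Server:
    """Hypothetical FedAvg-style server: average the client updates."""
    def __init__(self, clients, w0=0.0):
        self.clients = clients
        self.w = w0

    def round(self):
        updates = [c.local_update(self.w) for c in self.clients]
        self.w = float(np.mean(updates))
        return self.w

# Assemble: environment -> clients -> server -> training rounds.
rng = np.random.default_rng(0)
clients = [Client(rng.normal(loc=m, size=50)) for m in (-1.0, 0.0, 1.0)]
server = Server(clients)
for _ in range(100):
    server.round()
```

Swapping in a different server/client pair (e.g. an SGA-based unlearning client) changes the behavior without touching the environment generation or the assembly loop, which is the point of the separation.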