0.30.2 - 2022-12-13
- feat: expose the CP submission batch size in execute_experiment
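As a rough sketch, the new option could be wired in like this (the keyword name task_submission_batch_size and the surrounding call shape are assumptions inferred from this entry; check the execute_experiment signature of your substrafl version):

from substrafl.experiment import execute_experiment

def run(client, algo, strategy, train_data_nodes, evaluation_strategy, aggregation_node):
    # Submit the compute plan (CP) tasks in batches; the keyword name below
    # is an assumption based on this changelog entry.
    return execute_experiment(
        client=client,
        algo=algo,
        strategy=strategy,
        train_data_nodes=train_data_nodes,
        evaluation_strategy=evaluation_strategy,
        aggregation_node=aggregation_node,
        num_rounds=3,
        task_submission_batch_size=100,
    )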
0.30.1 - 2022-10-14
- fix: install your current package as a local dependency
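For context, declaring the package you are developing as a local dependency might look like this (a hedged sketch; the path and package names are hypothetical):

from pathlib import Path

from substrafl.dependency import Dependency

# Install your in-development package inside the task environment.
# The path below is hypothetical; point it at your own package root.
dependency = Dependency(
    pypi_dependencies=["numpy"],
    local_dependencies=[Path.cwd() / "my_package"],
)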
0.30.0 - 2022-09-26
Removed
- Return statement of both predict and _local_predict methods from Torch Algorithms.
Tests
- Update the Client: it now takes a backend type instead of debug=True plus an environment variable to set the spawner (#210); see the sketch after this list
- Do not use Model.category since this field is being removed from the SDK
- Update the tests and benchmark to reflect the Metrics changes in substratools (#24)
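A minimal sketch of the new Client construction (the string value "subprocess" is an assumption; check the substra SDK docs for the accepted backend types):

import substra

# Before: substra.Client(debug=True) plus an environment variable chose the spawner.
# Now the backend is selected explicitly at construction time.
client = substra.Client(backend_type="subprocess")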
Changed
- NOTABLE CHANGES due to breaking changes in substra-tools:
  - the opener only exposes get_data and fake_data methods
  - the results of the above methods are passed under the datasamples key within the inputs dict arg of all tools methods (train, predict, aggregate, score)
  - all methods (train, predict, aggregate, score) now take a task_properties argument (dict) in addition to inputs and outputs
  - the rank of a task, previously passed under the rank key within the inputs, is now given in the task_properties dict under the rank key
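Concretely, a user-side tools method now has the following shape (a minimal sketch under the assumptions above, not the exact substra-tools registration API):

def train(inputs, outputs, task_properties):
    # The opener's get_data result now arrives under the "datasamples" key.
    datasamples = inputs["datasamples"]
    # The rank moved from inputs["rank"] to task_properties["rank"].
    rank = task_properties["rank"]
    ...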
This means that all opener.py files should be changed from:
import substratools as tools

class TestOpener(tools.Opener):
    def get_X(self, folders):
        ...

    def get_y(self, folders):
        ...

    def fake_X(self, n_samples=None):
        ...

    def fake_y(self, n_samples=None):
        ...
to:
import substratools as tools

class TestOpener(tools.Opener):
    def get_data(self, folders):
        ...

    def fake_data(self, n_samples=None):
        ...
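For instance, a concrete get_data could bundle features and targets into a single structure (a hypothetical sketch; the file layout and dict shape are assumptions, any return type works):

import os

import numpy as np
import substratools as tools

class TestOpener(tools.Opener):
    def get_data(self, folders):
        # Hypothetical layout: one x.npy / y.npy pair per data sample folder.
        x = np.concatenate([np.load(os.path.join(f, "x.npy")) for f in folders])
        y = np.concatenate([np.load(os.path.join(f, "y.npy")) for f in folders])
        return {"x": x, "y": y}  # this whole object ends up in inputs["datasamples"]

    def fake_data(self, n_samples=None):
        n = n_samples or 10
        return {"x": np.random.rand(n, 5), "y": np.random.randint(0, 2, size=n)}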
This also implies that metrics now have access to the results of get_data and not only get_y as previously. Users should adapt all of their metrics files accordingly, e.g.:
import numpy as np
import substratools as tools

class AUC(tools.Metrics):
    def score(self, inputs, outputs):
        """AUC"""
        y_true = inputs["y"]
        ...

    def get_predictions(self, path):
        return np.load(path)

if __name__ == "__main__":
    tools.metrics.execute(AUC())
could be replaced with:
import numpy as np
import substratools as tools

class AUC(tools.Metrics):
    def score(self, inputs, outputs, task_properties):
        """AUC"""
        datasamples = inputs["datasamples"]
        y_true = ...  # getting the target from the whole datasamples

    def get_predictions(self, path):
        return np.load(path)

if __name__ == "__main__":
    tools.metrics.execute(AUC())
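With an opener shaped like the sketch above, extracting the target would reduce to y_true = inputs["datasamples"]["y"]; the exact access pattern depends on whatever structure your own get_data returns.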
- BREAKING CHANGE: the train and predict methods of all substrafl algos now take datasamples as an argument instead of X and y. This impacts user code only if those methods are overridden instead of using the _local_train and _local_predict methods.
- BREAKING CHANGE: the result of the get_data method from the opener is automatically provided to the given dataset as an __init__ argument, instead of x and y being passed to the train and predict methods of all Torch*Algo classes. The user dataset should be adapted accordingly, e.g.:
import torch
from torch.utils.data import Dataset

from substrafl.algorithms.pytorch import TorchFedAvgAlgo

class MyDataset(Dataset):
    def __init__(self, x, y, is_inference=False) -> None:
        ...

class MyAlgo(TorchFedAvgAlgo):
    def __init__(
        self,
    ):
        torch.manual_seed(seed)
        super().__init__(
            model=my_model,
            criterion=criterion,
            optimizer=optimizer,
            index_generator=index_generator,
            dataset=MyDataset,
        )
should be replaced with:
import torch
from torch.utils.data import Dataset

from substrafl.algorithms.pytorch import TorchFedAvgAlgo

class MyDataset(Dataset):
    def __init__(self, datasamples, is_inference=False) -> None:
        ...

class MyAlgo(TorchFedAvgAlgo):
    def __init__(
        self,
    ):
        torch.manual_seed(seed)
        super().__init__(
            model=my_model,
            criterion=criterion,
            optimizer=optimizer,
            index_generator=index_generator,
            dataset=MyDataset,
        )
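To make the new contract concrete, a fleshed-out dataset could look as follows (a sketch assuming the hypothetical {"x": ..., "y": ...} datasamples layout from the opener example above; substrafl instantiates the class itself and passes the opener output as datasamples):

import torch
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, datasamples, is_inference=False) -> None:
        # `datasamples` is whatever the opener's get_data returned.
        self.x = datasamples["x"]
        self.y = datasamples["y"]
        self.is_inference = is_inference

    def __getitem__(self, idx):
        x = torch.as_tensor(self.x[idx], dtype=torch.float32)
        if self.is_inference:
            return x
        return x, torch.as_tensor(self.y[idx], dtype=torch.float32)

    def __len__(self):
        return len(self.x)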