Releases: Substra/substrafl
0.34.0rc0
0.34.0 - 2023-01-30
Added
- chore: Add contributing, contributors & code of conduct files (#68)
Removed
- chore: Test only field for datasamples (#67)
Changed
- feat: remove RemoteDataMethod and change the RemoteMethod class to be fully flexible regarding the function name.
The substra-tools methods are now generic and load the inputs depending on the content of the inputs dictionary (#59)
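A purely illustrative sketch of that genericity (the function below is hypothetical, not the substra-tools internals): a single entry point no longer hard-codes a train or predict signature; it forwards the dictionaries it receives, and the registered function picks what it needs from `inputs`.

```python
# Hypothetical sketch, not substra-tools internals: one generic entry point
# forwards the raw dictionaries; the registered function reads what it needs
# (e.g. inputs["datasamples"], inputs["shared_state"]) regardless of its name.
def generic_entry_point(registered_function, inputs: dict, outputs: dict, task_properties: dict):
    return registered_function(inputs=inputs, outputs=outputs, task_properties=task_properties)
```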
0.33.0
0.33.0rc2
0.33.0rc1
0.30.2 - 2022-12-13
- feat: expose the CP submission batch size in execute_experiment
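A hedged usage sketch: the keyword exposing the batch size is an assumption here (`task_submission_batch_size`); check the `execute_experiment` signature for the real name.

```python
from substrafl import execute_experiment

# Sketch only: the surrounding variables (client, strategy, nodes, ...) are
# assumed to be defined as in a standard substrafl experiment, and the
# batch-size keyword name below is an assumption, not verified.
compute_plan = execute_experiment(
    client=client,
    strategy=strategy,
    train_data_nodes=train_data_nodes,
    evaluation_strategy=evaluation_strategy,
    aggregation_node=aggregation_node,
    num_rounds=num_rounds,
    experiment_folder="./experiment_summaries",
    dependencies=dependencies,
    task_submission_batch_size=100,  # assumed keyword: tasks registered per batch
)
```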
0.30.1 - 2022-10-14
- fix: install your current package as local dependency
0.30.0 - 2022-09-26
Removed
- Return statement of both `predict` and `_local_predict` methods from Torch Algorithms.
Tests
- Update the Client: it takes a backend type instead of `debug=True` plus an environment variable to set the spawner (#210); a minimal sketch follows this list
- Do not use Model.category since this field is being removed from the SDK
- Update the tests and benchmark with the change on Metrics from substratools (#24)
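For illustration, a minimal sketch of the new Client instantiation, assuming the `backend_type` keyword of the Substra SDK:

```python
import substra

# Before: substra.Client(debug=True) plus an environment variable chose the spawner.
# After (sketch, values assumed from the Substra SDK): the backend is explicit.
client = substra.Client(backend_type="subprocess")  # or "docker", "remote"
```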
Changed
- NOTABLE CHANGES due to breaking changes in substra-tools:
  - the opener only exposes `get_data` and `fake_data` methods
  - the results of the above methods are passed under the `datasamples` key within the `inputs` dict arg of all tools methods (train, predict, aggregate, score)
  - all methods (train, predict, aggregate, score) now take a `task_properties` argument (dict) in addition to `inputs` and `outputs`
  - the `rank` of a task, previously passed under the `rank` key within the inputs, is now given in the `task_properties` dict under the `rank` key
This means that all `opener.py` files should be changed from:
```python
import substratools as tools


class TestOpener(tools.Opener):
    def get_X(self, folders):
        ...

    def get_y(self, folders):
        ...

    def fake_X(self, n_samples=None):
        ...

    def fake_y(self, n_samples=None):
        ...
```
to:
```python
import substratools as tools


class TestOpener(tools.Opener):
    def get_data(self, folders):
        ...

    def fake_data(self, n_samples=None):
        ...
```
This also implies that metrics now have access to the results of `get_data` and not only `get_y` as previously. The user should adapt all of their metrics files accordingly, e.g.:
```python
import numpy as np
import substratools as tools


class AUC(tools.Metrics):
    def score(self, inputs, outputs):
        """AUC"""
        y_true = inputs["y"]
        ...

    def get_predictions(self, path):
        return np.load(path)


if __name__ == "__main__":
    tools.metrics.execute(AUC())
```
could be replaced with:
```python
import numpy as np
import substratools as tools


class AUC(tools.Metrics):
    def score(self, inputs, outputs, task_properties):
        """AUC"""
        datasamples = inputs["datasamples"]
        y_true = ...  # getting the target from the whole datasamples

    def get_predictions(self, path):
        return np.load(path)


if __name__ == "__main__":
    tools.metrics.execute(AUC())
```
- BREAKING CHANGE: the `train` and `predict` methods of all substrafl algos now take `datasamples` as argument instead of `X` and `y`. This impacts user code only if those methods are overwritten instead of using the `_local_train` and `_local_predict` methods (a minimal sketch follows the dataset example below).
- BREAKING CHANGE: the result of the `get_data` method from the opener is automatically provided to the given `dataset` as an `__init__` arg, instead of `x` and `y` within the `train` and `predict` methods of all `Torch*Algo` classes. The user `dataset` should be adapted accordingly, e.g.:
```python
import torch
from torch.utils.data import Dataset

from substrafl.algorithms.pytorch import TorchFedAvgAlgo


class MyDataset(Dataset):
    def __init__(self, x, y, is_inference=False) -> None:
        ...


class MyAlgo(TorchFedAvgAlgo):
    def __init__(
        self,
    ):
        # my_model, criterion, optimizer, index_generator and seed are defined
        # elsewhere by the user
        torch.manual_seed(seed)
        super().__init__(
            model=my_model,
            criterion=criterion,
            optimizer=optimizer,
            index_generator=index_generator,
            dataset=MyDataset,
        )
```
should be replaced with:
```python
import torch
from torch.utils.data import Dataset

from substrafl.algorithms.pytorch import TorchFedAvgAlgo


class MyDataset(Dataset):
    def __init__(self, datasamples, is_inference=False) -> None:  # datasamples replaces x and y
        ...


class MyAlgo(TorchFedAvgAlgo):
    def __init__(
        self,
    ):
        torch.manual_seed(seed)
        super().__init__(
            model=my_model,
            criterion=criterion,
            optimizer=optimizer,
            index_generator=index_generator,
            dataset=MyDataset,
        )
```
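And a minimal sketch of the first BREAKING CHANGE above, for users who overwrote `train` directly; the `shared_state` argument is illustrative, check the actual `Torch*Algo` signatures:

```python
class MyAlgo(TorchFedAvgAlgo):
    # before (sketch): def train(self, x, y, shared_state=None): ...
    def train(self, datasamples, shared_state=None):  # datasamples replaces x and y
        ...
```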
0.32.0 - 2022-11-22
Added
- The metric registration is simplified. The user can now directly write a score function within
their script, and directly register it by specifying the right dependencies and permissions.
The score function must have `(datasamples, predictions_path)` as signature. (#47)

Example of the new metric registration:

```python
import numpy as np

metric_deps = Dependency(pypi_dependencies=["numpy==1.23.1"])

permissions_metric = Permissions(public=True)


def mse(datasamples, predictions_path):
    y_true = datasamples["target"]
    y_pred = np.load(predictions_path)
    return np.mean((y_true - y_pred) ** 2)


metric_key = add_metric(
    client=substra_client,
    permissions=permissions_metric,
    dependencies=metric_deps,
    metric_function=mse,
)
```
- Doc on the model loading page (#40)
- The round 0 is now exposed: centralized strategies (FedAvg, NR, Scaffold) can be evaluated before any training. The round 0 is skipped for single-org strategies and cannot be evaluated before training (#46)
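A hedged sketch of evaluating at round 0, assuming the `EvaluationStrategy` API of this release (the `rounds` parameter name and import path are assumptions):

```python
from substrafl.evaluation_strategy import EvaluationStrategy

# Sketch: include round 0 in the evaluated rounds to score the centralized
# strategy before any training; `rounds` as a list of round numbers is assumed.
evaluation_strategy = EvaluationStrategy(
    test_data_nodes=test_data_nodes,
    rounds=[0, num_rounds],  # round 0 = evaluation before training starts
)
```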
Changed
- Github actions on Ubuntu 22.04 (#52)
- torch algo: test that `with_batch_norm_parameters` only concerns the running mean and variance of the batch norm layers (#30)
- torch algo: `with_batch_norm_parameters` also takes into account the `torch.nn.LazyBatchNorm{x}d` layers (#30); see the first sketch after this list
- chore: use the generic task (#31)
- Apply changes from algo to function in substratools (#34)
- add a `tools_functions` method to `RemoteDataMethod` and `RemoteMethod` to return the function(s) to send to `tools.execute`.
- Register functions in substratools using the `@tools.register` decorator (#37); see the second sketch after this list
- Update substratools Docker image (#49)
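To make the two `with_batch_norm_parameters` items above concrete, here is a plain PyTorch sketch (not the substrafl implementation) of which buffers are meant by "running mean and variance", including the lazy variants:

```python
import torch

# Plain PyTorch sketch: with_batch_norm_parameters concerns the running
# statistics (running_mean / running_var buffers) of these layers, now
# including the torch.nn.LazyBatchNorm{x}d variants.
BATCH_NORM_LAYERS = (
    torch.nn.BatchNorm1d,
    torch.nn.BatchNorm2d,
    torch.nn.BatchNorm3d,
    torch.nn.LazyBatchNorm1d,
    torch.nn.LazyBatchNorm2d,
    torch.nn.LazyBatchNorm3d,
)


def batch_norm_running_stats(model: torch.nn.Module):
    """Yield the running mean and variance of every batch norm layer in the model."""
    for module in model.modules():
        if isinstance(module, BATCH_NORM_LAYERS):
            yield module.running_mean, module.running_var
```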
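And a sketch of the decorator-based registration from (#37); the argument-less `tools.execute()` entry point is assumed from the names mentioned in these notes (`@tools.register`, `tools.execute`):

```python
import substratools as tools


# Sketch: register a generic function whose arguments follow the
# (inputs, outputs, task_properties) convention from the 0.30.0 notes;
# the argument-less tools.execute() call is an assumption.
@tools.register
def train(inputs, outputs, task_properties):
    ...


if __name__ == "__main__":
    tools.execute()
```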
Fixed
- Fix python 3.10 compatibility by catching OSError for Notebooks (#51)
- Free disk space in main github action to run the CI (#48)
- local dependencies are installed in one `pip` command to optimize the installation and avoid incompatibility errors (#39)
- Fix error when installing the current package as a local dependency (#41)
- Fix flake8 repo for pre-commit (#50)
0.30.1
fix: install your current package as local dependency
0.21.7
fix: typo on "local_worker" to "local-worker"
0.21.6
fix: install current package as local dependency