Releases: zama-ai/concrete-ml
v1.0.0
Summary
This version features a stable API, better inference performance, and user-friendly error reporting. Most importantly, tools have been added to make your model deployment in cloud environments hassle-free. Support for Apple Silicon was also added.
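As a quick illustration of the new deployment flow, here is a minimal sketch assuming the FHEModelDev / FHEModelClient / FHEModelServer classes of concrete.ml.deployment; exact method signatures may differ between versions:

```python
import numpy as np
from sklearn.datasets import make_classification
from concrete.ml.sklearn import LogisticRegression
from concrete.ml.deployment import FHEModelClient, FHEModelDev, FHEModelServer

# Train and compile a built-in model as usual
X, y = make_classification(n_samples=100, n_features=4, random_state=0)
model = LogisticRegression()
model.fit(X, y)
model.compile(X)

# Developer side: serialize the compiled model into a deployment folder
FHEModelDev(path_dir="deployment", model=model).save()

# Client side: generate keys and quantize + encrypt an input
client = FHEModelClient(path_dir="deployment", key_dir="keys")
client.generate_private_and_evaluation_keys()
evaluation_keys = client.get_serialized_evaluation_keys()
encrypted_input = client.quantize_encrypt_serialize(X[:1])

# Server side: run inference directly on the encrypted payload
server = FHEModelServer(path_dir="deployment")
server.load()
encrypted_result = server.run(encrypted_input, evaluation_keys)

# Client side: decrypt and dequantize the result
prediction = client.deserialize_decrypt_dequantize(encrypted_result)
```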
Links
Docker Image: zamafhe/concrete-ml:v1.0.0
pip: https://pypi.org/project/concrete-ml/1.0.0
Documentation: https://docs.zama.ai/concrete-ml
v1.0.0
Feature
- Add structured pruning to QNNs (70bff38)
- Add accumulator rounding (8ee9267)
- Add bit-width and value range report per layer (dd37f4e)
- Extend deployment features (AWS and Docker) (77e2d80)
- Add sentiment analysis deployment use-case (96c9158)
- Support ONNX operators Gather, Slice, Shape, ConstantOfShape (667e9ae)
- Include quant and dequant steps in QuantizedModule's forward method (55369ed)
- Add scikit-learn model serialization (ae7658b)
- Add CIFAR-10 8-bit model deployment (9177058)
- Support pandas and list inputs in predict and compile methods for NNs (6a5e619)
- Add example of model deployment to use-cases (be7bcb0)
- Support pandas, list and torch inputs for trees and linear models (8156bc9)
- Simplify the API by removing n_bits for compile_brevitas_qat (d081212)
Fix
- Fix CI packaging (bdda1da)
- Fix deploy_to_aws for Python 3.10 (7665809)
- Fix non-quantized NN constant folding bug (bdc04c4)
- Make the client-server API support Tweedie models (0de7398)
- QNN API improvements and pruning fix (33c4bf0)
- Set specific dependency versions (9729dc6)
- Fix flaky client server (3eb86c1)
- Fix issues with pytest and macOS (a0c22fa)
Documentation
- Add tree experiments (e1c0ce0)
- Add chapter on optimization and simulation (5454620)
- Update simulation (d298459)
- Good quantization configurations for target accumulator bit-widths (ef6355f)
- Add rounding documentation (b7f56d5)
- Add an example that separates encryption, FHE execution and decryption (d4681fb)
v0.6.1
Summary
This Concrete-ML release adds support for:
- 16-bit built-in NN models,
- 20+ bit purely leveled (i.e., very fast) linear models, which makes them match floating-point models in terms of accuracy.
New tutorials show how to train large neural networks either from scratch or by transfer learning, how to convert them into FHE-friendly models, and finally how to evaluate them in FHE and with simulation. The release adds tools that leverage FHE simulation to select optimal parameters that speed up the inference of neural networks. Python 3.10 support is included in this release.
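As a rough illustration of the 20+ bit leveled linear models, the sketch below uses the 0.6-era built-in model API; the n_bits constructor argument and the execute_in_fhe predict flag are assumptions from that API and may differ in other versions:

```python
import numpy as np
from sklearn.datasets import make_regression
from concrete.ml.sklearn import LinearRegression

X, y = make_regression(n_samples=200, n_features=8, noise=1.0, random_state=0)

# With 20+ bits of precision, quantization error becomes negligible and
# the model matches its floating-point counterpart in accuracy. Linear
# models are purely leveled (no programmable bootstrapping), so FHE
# inference stays fast.
model = LinearRegression(n_bits=20)
model.fit(X, y)
model.compile(X)

# Run the first few samples in FHE (execute_in_fhe=False would use the
# clear quantized path instead)
y_fhe = model.predict(X[:5], execute_in_fhe=True)
```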
Links
Docker Image: zamafhe/concrete-ml:v0.6.1
pip: https://pypi.org/project/concrete-ml/0.6.1
Documentation: https://docs.zama.ai/concrete-ml
v0.6.1
Feature
- Support 20+ bit linear models (4f112ca)
- Add Python 3.10 support (aede49b)
- Add a CIFAR-10 CNN with 8-bit accumulators and show p_error search (35715e2)
- Add tutorials for transfer learning for CIFAR-10/100 (42405c5)
- Add CIFAR-10 VGG CNN with split clear/FHE compilation (637c272)
- Change the license (a52d917)
- Add support for global_p_error (b54fcac)
Fix
- Fix flaky FHE vs VirtualLib overflow (1780cd5)
- Ensure all operations in QNNs are done in FP64 (52e87b7)
- Raise error when model results are mismatched between Concrete-ML and VL (b7fa8c1)
- Set specific dependency versions (f2dfc3e)
- Fix flaky client server API (1495214)
- Fix issues with pytest and macOS (5196c68)
v0.5.1
Summary
The main objective of this release is to fix some issues caused by recent updates in dependencies, and to extend Python 3.7 support from version 3.7.14 down to 3.7.1.
Links
Docker Image: zamafhe/concrete-ml:v0.5.1
pip: https://pypi.org/project/concrete-ml/0.5.1
Documentation: https://docs.zama.ai/concrete-ml
v0.5.1
Feature
- Extend Python 3.7.14 support to 3.7.1 (eb212bf)
Fix
- Fix an issue with LinearRegression (ebd06b4)
v0.5.0
Summary
The main objective of this release is to add Python 3.7 support.
Links
Docker Image: zamafhe/concrete-ml:v0.5.0
pip: https://pypi.org/project/concrete-ml/0.5.0
Documentation: https://docs.zama.ai/concrete-ml
v0.5.0
Feature
- Python 3.7 support (fef90d1)
- Remove constraints in numpy_reduce_sum (89668bf)
- Check if a network imported with import_qat=True is quantized (24e8f88)
Documentation
- Move Titanic notebook to use_case_examples and clean the SentimentClassification notebook (14502f0)
v0.4.0
Summary
This version of Concrete-ML extends support for quantization-aware training (QAT) of neural networks, adds tree-ensemble regressors, and includes additional linear regression models. For custom models, first-class support for neural networks quantized with Brevitas was added: a dedicated function imports models containing Brevitas layers directly. Design rules for such Brevitas networks are detailed in the documentation. Moreover, quantization-aware training is now the default for built-in neural networks, giving good accuracy out of the box with low bit-widths for weights, activations, and accumulators. Tree-based RandomForest and XGBoost regression models are now supported, and the linear regressors are complemented by the Ridge, Lasso, and ElasticNet models. Many example notebooks were added, showing how to use the new models as well as more complex use-cases such as sentiment analysis, MNIST classification, and Kaggle Titanic dataset classification.
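A hypothetical sketch of the new Brevitas import path: the compile_brevitas_qat_model entry point and its n_bits argument follow the 0.4-era API (n_bits was later removed in v1.0.0), and the network definition is an illustrative example, not a model from the release:

```python
import torch
import brevitas.nn as qnn
from concrete.ml.torch.compile import compile_brevitas_qat_model

class TinyQATNet(torch.nn.Module):
    """A small QAT network following the documented Brevitas design rules."""

    def __init__(self):
        super().__init__()
        # Quantize inputs, weights and activations to low bit-widths so the
        # accumulator stays within the supported range
        self.quant_in = qnn.QuantIdentity(bit_width=3, return_quant_tensor=True)
        self.fc1 = qnn.QuantLinear(10, 32, bias=True, weight_bit_width=3)
        self.act = qnn.QuantReLU(bit_width=3, return_quant_tensor=True)
        self.fc2 = qnn.QuantLinear(32, 2, bias=True, weight_bit_width=3)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(self.quant_in(x))))

# Calibration data: a representative sample of the training inputs
x_calib = torch.randn(100, 10)

# One dedicated call imports the Brevitas layers and compiles to FHE
quantized_module = compile_brevitas_qat_model(TinyQATNet(), x_calib, n_bits=3)
```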
Links
Docker Image: zamafhe/concrete-ml:v0.4.0
pip: https://pypi.org/project/concrete-ml/0.4.0
Documentation: https://docs.zama.ai/concrete-ml/v/0.4/
v0.4.0
Feature
- Add an XGBoost regression tutorial (#1911) (174f6f7)
- Add encrypted sentiment analysis demo (fd684df)
- Make net_inputs and net_outputs optional (5ed9476)
- Add RandomForestRegressor (0c7853c)
- Add XGBRegressor (9736557)
- Add tree regressor (5b12d53)
- Add Ridge, Lasso and ElasticNet regression models (675c7b3)
- Import quantized Brevitas ONNX graphs and upgrade QAT notebook (13d8d74)
Fix
- Remove pygraphviz dependency (f708ba7)
- Fix flaky client server (#1927) (d162cd6)
- Fix non-deterministic Tweedie overflow bug (922d60e)
- XGBRegressor verifies that n_targets is 1 (a15fe9d)
- Make linear models with fit_intercept=False possible (90a50b4)
- Add quantize_inputs_with_net_outputs_precision to calibration process (1dd9ba3)
Documentation
- Update p_error with API call (d214216)
- Improve ClassifierComparison notebook (f45b79f)
- Integration of API docs with lazydocs (985d0d5)
- Major revision of Inner Workings and integration of Quantization Aware Training (accfd3e)
- Improve contribution doc (0a0534a)
- Add new models to the docs (081d8f9)
- Be more precise on installation (88fed0b)
- Improve our README (b2d81e4)
- Decrease net_outputs values from 8 to 5 bits in notebooks (49717bf)
- Add comment about tqdm in linear.md (ba06e5e)
- Update the LICENSE (30b2b27)
- Add tqdm and remove inference slicing in Titanic notebook (4419438)
v0.3.0
Summary
Concrete-ML now lets the user deploy models in a client-server setting, separating encryption and decryption from execution, which can now be done by a remote machine. The release adds support for new models and new neural network layers, and also allows importing ONNX models directly, thus supporting some Keras/TensorFlow models. Furthermore, this release provides initial support for importing Quantization Aware Training neural networks, which contain quantizers in the operation graph and can be built with Brevitas.
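For example, a user-supplied ONNX model (such as one exported from Keras/TensorFlow via tf2onnx) can now be compiled directly. A minimal sketch, assuming the compile_onnx_model helper and the explicit quantize/run/dequantize methods of the QuantizedModule of that era (the encrypt_run_decrypt call matches the v0.2.0 API rename noted below):

```python
import numpy as np
import onnx
from concrete.ml.torch.compile import compile_onnx_model

# Load a model exported from Keras/TensorFlow (e.g., with tf2onnx)
onnx_model = onnx.load("model.onnx")

# Representative calibration data for post-training quantization
x_calib = np.random.uniform(-1, 1, size=(100, 10))
quantized_module = compile_onnx_model(onnx_model, x_calib, n_bits=2)

# Quantize the input, run in FHE, then dequantize the result
q_x = quantized_module.quantize_input(x_calib[:1])
q_y = quantized_module.forward_fhe.encrypt_run_decrypt(q_x)
y = quantized_module.dequantize_output(q_y)
```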
Links
Docker Image: zamafhe/concrete-ml:v0.3.0
pip: https://pypi.org/project/concrete-ml/0.3.0
Documentation: https://docs.zama.ai/concrete-ml
v0.3.0
Feature
- Allow recompiling from ONNX model (9b69e73)
- Add support for p_error (fe03441)
- Add GELU activation (c732d15)
- Add random_state to models for client-server reproducibility (6f887d0)
- Add QAT notebook (64c4512)
- Import Brevitas QAT networks (6c40a0c)
- Integrate the encrypt/decrypt API (3c2a68a)
- Support more input types in predict() (76e142c)
- Support more input types in fit() (1fa74f7)
- Add Round and Pow operators (d6880ce)
- Ability to import Quantization Aware Training networks (c1bb947)
- Compile user-supplied ONNX to support Keras/TF (fae3dc5)
- Implement Generalized Linear Regression models (8e8e025)
- Add SoftSign activation (3ce338e)
- Add more activations (e43ce5c)
- Implement Poisson Regression (09eefa5)
- Use the 8 bits of precision of Concrete Numpy (249c712)
- Add ONNX Flatten support (c5f215f)
- Handle more tree-based classifiers (950cc6c)
- Add Batch Normalization ONNX operator (7969739)
- Add Where, Greater, Mul, Sub ONNX operator support (f939149)
- Add ONNX Average Pooling and Pad operators (40f1ef9)
- Add more activation functions (26b2221)
Fix
- Make tree inference faster by creating new numpy boolean operators (206caa5)
- Set a compatible version for protobuf (97ccfc0)
- Improve IRIS FCNN FHE accuracy and visualization (02e497c)
- Replace init call by set_params (111419e)
- Fix wrong fit_benchmark in linear models (f257def)
- Fix GridSearchCV on trees (b614285)
- Support decision tree with custom classes (baa3b4d)
Documentation
- Major refresh of the 0.3 doc (e5e3205)
- Add sentiment classification notebook (68ae7d0)
- Restrict hyperparameters in Titanic notebook for faster inference (9b63c8a)
- QAT explanation (9430ba6)
- Document ONNX compilation (972d05e)
- Explain quantized vs float ops and fusing (fb9b409)
- Add doc for pandas support (34652ce)
- Add notebook and docs for the client-server API (99ff1e7)
- Developing custom models (1c6f571)
- Explain built-in quantized neural networks (972d464)
- Add a notebook for the Kaggle Titanic competition (0a44853)
v0.2.1
Summary
This release fixes issues caused by updates to dependencies that were not pinned to a fixed version.
Links
Docker Image: zamafhe/concrete-ml:v0.2.1
pip: https://pypi.org/project/concrete-ml/0.2.1
Documentation: https://docs.zama.ai/concrete-ml
v0.2.1
Fix
- Set a compatible version for protobuf (4dd46cf)
- Force ONNX package version to 1.11.0 in CLM 0.2 (fa90586)
Documentation
- Update the LICENSE (38c1062)
v0.2.0
Summary
Use Concrete Numpy 0.5.
Add multi-class classification to XGBoost.
Fix some minor broken links and issues.
Links
Docker Image: zamafhe/concrete-ml:v0.2.0
pip: https://pypi.org/project/concrete-ml/0.2.0
Documentation: https://docs.zama.ai/concrete-ml/ (old link https://docs.zama.ai/concrete-ml/0.2.0 has been moved)
v0.2.0
Breaking Changes (as compared to 0.1.x)
The `run` method is renamed to `encrypt_run_decrypt`, following changes in Concrete-Numpy 0.5.0. Individual APIs to encrypt, run, and decrypt separately will be available in a future release of Concrete-ML.
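Illustrative before/after for the rename; `model` and `q_input` are hypothetical placeholders for a compiled model and its quantized input:

```python
# Concrete-ML 0.1.x (Concrete-Numpy 0.4): the old entry point
# q_result = model.forward_fhe.run(q_input)

# Concrete-ML 0.2.0 (Concrete-Numpy 0.5): one call encrypts, runs and decrypts
q_result = model.forward_fhe.encrypt_run_decrypt(q_input)
```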
Feature
- Some breaking changes in Concrete Numpy API (ecbb26e)
- Using Concrete Numpy 0.5 (ee5987b)
- Add multiclass XGBoost (9247f35)
- Adding a `set_version_and_push` command to the makefile (ab853c2)
- Add multiclass capability to decision trees (df0a64d)
Fix
- Fix PyPI's Homepage button link (bf52514)
Breaking
- The API `.forward_fhe.run()` has been renamed to `.forward_fhe.encrypt_run_decrypt()` (ecbb26e)
Documentation
- Reorganize index page (f1bafff)
v0.1.1
Summary
Use Concrete Numpy 0.4.
Fix some minor broken links and issues.
Links
Docker Image: zamafhe/concrete-ml:v0.1.1
pip: https://pypi.org/project/concrete-ml/0.1.1
Documentation: https://docs.zama.ai/concrete-ml/0.1.1
v0.1.1
Feature
- Add multiclass capability to decision trees (
6f9651e
) - Update to Concrete Numpy 0.4.0 and update the theme. (
27839be
)
Documentation
- Reorganize index page (349c947)
v0.1.0
Summary
First release of the Concrete-ML package.
Links
Docker Image: zamafhe/concrete-ml:v0.1.0
pip: https://pypi.org/project/concrete-ml/0.1.0
Documentation: https://docs.zama.ai/concrete-ml/0.1.0
v0.1.0
Feature
- Add tests for more torch functions that are supported, mention them in the docs (0478854)
- Add FHE in XGBoost notebook (1367d4e)
- Make all classifier demos run in FHE for the datasets and in VL for the domain grid (d95af58)
- Remove workaround reshape and remaining 3D matmul (28ea1eb)
- Change predict to predict_proba for average_precision (a057881)
- Allow FHE on XGBoost (7b5c118)
- Add CNN notebook (4acca2f)
- Optimize QuantizeAdd to use TLUs when one of the inputs is a constant (1ffcdfb)
- Different n_bits for weights/activations/outputs (321d151)
- Add virtual lib management to SklearnLinearModelMixin (596d16e)
- Add quantized CNN (1a78593)
- Start refactoring tree-based models (8e62cf8)
- Set symmetric quantization by default in PTQ (8fcd307)
- Add random forest + benchmark (5630f17)
- Allow base_score with XGBoost (17d5cc4)
- Add predict_proba to logistic regression (9aaeec5)
- Add XGBoost (699603d)
- Add NN regression benchmarks (9de2ba4)
- Add symmetric quantization (needed for tree output) (4a173ee)
- Implement LinearSVC (d048077)
- Implement LinearSVRegression (36df77e)
- Remove identity nodes from ONNX models (9719c08)
- Add binary + multiclass logistic regression (85c25df)
- Improve r2 test for low-variance targets (44ec0b3)
- Add sklearn linear regression model (060a4c6)
- Add virtual lib basic class (ad32509)
- Improve NN benchmarks (ae8313e)
- Add NN benchmarks and sklearn wrapper for FHE NNs (e73a514)
- More efficient numpy_gemm, since traced (609f1df)
- Integrate hummingbird (01c3a4a)
- Add ONNX quantized implementation for MatMul and Add (716fc43)
- Allow multiple inputs for a QuantizedModule (1fa530d)
- Allow QuantizedModule to handle complicated NN topologies (da91e40)
- Let's allow (alpha, beta) == (1, 0) in Gemm (4b9927a)
- Manage constant folding in PTQ (a0c56d7)
- Replace numpy.isclose with r2 score (65f0a6e)
- Replace the torch quantization functions with ones usable with ONNX (ecdeb50)
- Add test when input is float to quantized module (d58910d)
- Let the user choose the error type (e5d7440)
- Post-training quantization for ONNX repr (8b051df)
- Adding more activations and numpy functions (73d885c)
- Let's have relu and relu6 (f64c3bf)
- Add quantized tanh (ca9c6e5)
- Add classification benchmarks, fix bugs in DecisionTreeClassifier (d66d7bf)
- Provide quantized versions of ONNX ops (b63eca2)
- Add darglint as a plugin of flake8 (bb568e2)
- Use ONNX as intermediate format to convert torch models to numpy (072bd63)
- Add decision trees + update notebook (db163f5)
- Restore quantized model benchmarks (d1cfc4e)
- Port quantization and torch from concrete-numpy (a525e8b)
Fix
- Remove fixmes, add HardSigmoid (847db99)
- Docs (8096acc)
- Safer default parameter for ensemble methods (8da0988)
- Increase n_bits for clear vs quantized comparison for decision tree (b9f1206)
- Fix notebook on macOS + some warnings (ab2a821)
- XGBoost: handle the edge case where n_estimators = 1 (3673584)
- Issues in Classifier Comparison notebook (3053085)
- One more bug about convergence (c6cee4e)
- Fix convergence issues in tests (7b92bd8)
- Remove metric evaluation for n_bits < 16 (7c4bd0e)
- Wrong XGBoost init (2ed49b6)
- Workaround while #518 is being investigated (7f521f9)
- Looks like a mistake (69e9b15)
- Speed up QNN tests (9d07f5c)
- Workaround for segfaults on macOS (798662f)
- Remove check_r2_score with argmax predictions (7d52750)
- Review (82abb12)
- Fully connected notebook (1f7b92e)