
v0.1.0

@bcm-at-zama released this 04 Apr 16:26 · 94d351d

Summary

First release of the Concrete-ML package

Links

Docker Image: zamafhe/concrete-ml:v0.1.0
pip: https://pypi.org/project/concrete-ml/0.1.0
Documentation: https://docs.zama.ai/concrete-ml/0.1.0
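
To pin this exact release, the package and image above can be fetched with the standard pip and Docker commands:

```
pip install concrete-ml==0.1.0
docker pull zamafhe/concrete-ml:v0.1.0
```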


Feature

  • Add tests for more supported torch functions and mention them in the docs (0478854)
  • Add FHE in xgboost notebook (1367d4e)
  • Make all classifier demos run in FHE for the datasets and in the Virtual Lib (VL) for the domain grid (d95af58)
  • Remove workaround reshape and remaining 3dmatmul (28ea1eb)
  • Change predict to predict_proba for average_precision (a057881)
  • Allow FHE on xgboost (7b5c118)
  • Add CNN notebook (4acca2f)
  • Optimize QuantizeAdd to use TLUs when one of the inputs is a constant (1ffcdfb)
  • Different n_bits for weights/activations/outputs (321d151)
  • Add virtual lib management to SklearnLinearModelMixin (596d16e)
  • Add quantized CNN (1a78593)
  • Start refactoring tree based models (8e62cf8)
  • Set symmetric quantization by default in PTQ (8fcd307)
  • Add random forest + benchmark (5630f17)
  • Allow base_score with xgboost (17d5cc4)
  • Add predict_proba to logistic regression (9aaeec5)
  • Add xgboost (699603d)
  • Add NN regression benchmarks (9de2ba4)
  • Add symmetric quantization (needed for tree output) (4a173ee)
  • Implement LinearSVC (d048077)
  • Implement LinearSVRegression (36df77e)
  • Remove identity nodes from ONNX models (9719c08)
  • Add binary + multiclass logistic regression (85c25df); a workflow sketch follows this list
  • Improve r2 test for low-variance targets (44ec0b3)
  • Add sklearn linear regression model (060a4c6)
  • Add virtual lib basic class (ad32509)
  • Improve NN benchmarks (ae8313e)
  • Add NN benchmarks and sklearn wrapper for FHE NNs (e73a514)
  • Make numpy_gemm more efficient, since it is traced (609f1df)
  • Integrate hummingbird (01c3a4a)
  • Add ONNX quantized implementation for MatMul and Add (716fc43)
  • Allow multiple inputs for a QuantizedModule (1fa530d)
  • Allow QuantizedModule to handle complicated NN topologies (da91e40)
  • Allow (alpha, beta) == (1, 0) in Gemm (4b9927a)
  • Manage constant folding in PTQ (a0c56d7)
  • Replace numpy.isclose with r2 score (65f0a6e)
  • Replace the torch quantization functions with ones usable with ONNX (ecdeb50)
  • Add test when input is float to quantized module (d58910d)
  • Let the user choose the error type (e5d7440)
  • Post training quantization for ONNX repr (8b051df)
  • Add more activations and numpy functions (73d885c)
  • Add relu and relu6 (f64c3bf)
  • Add quantized tanh (ca9c6e5)
  • Add classification benchmarks, fix bugs in DecisionTreeClassifier (d66d7bf)
  • Provide quantized versions of ONNX ops (b63eca2)
  • Add darglint as a flake8 plugin (bb568e2)
  • Use ONNX as an intermediate format to convert torch models to numpy (072bd63); a torch sketch follows below
  • Add decision trees + update notebook (db163f5)
  • Restore quantized model benchmarks (d1cfc4e)
  • Port quantization and torch from concrete-numpy (a525e8b)
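
Taken together, the classifier entries above add an sklearn-style API for FHE inference. The sketch below shows that workflow under stated assumptions: the `concrete.ml.sklearn` import path, the `n_bits` quantization parameter, and the `compile()` / `execute_in_fhe` calls are taken from later 0.x Concrete-ML documentation and are not confirmed against the v0.1.0 source.

```python
# Minimal sketch of the sklearn-style FHE workflow (0.x-era API).
# ASSUMPTIONS: the concrete.ml.sklearn path, compile(), and the
# execute_in_fhe flag come from later 0.x docs, not the v0.1.0 source.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split

from concrete.ml.sklearn import LogisticRegression  # assumed import path

X, y = make_classification(n_samples=100, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# n_bits controls the post-training quantization width (see the
# "Different n_bits for weights/activations/outputs" entry).
model = LogisticRegression(n_bits=3)
model.fit(X_train, y_train)

# Compile the quantized model to an FHE circuit on a representative set.
model.compile(X_train)

probas = model.predict_proba(X_test)  # clear, quantized inference
y_pred = model.predict(X_test, execute_in_fhe=True)  # encrypted inference
```

The Virtual Lib entries above refer to a simulation mode that runs the same quantized circuit without encryption; per the classifier demo entry, the demos use FHE for the datasets and the Virtual Lib for the domain grid.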

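The torch entries (ONNX as an intermediate format, post-training quantization of the ONNX representation, quantized ONNX ops) describe compiling torch models through ONNX into quantized numpy. Here is a hypothetical sketch of that path; `compile_torch_model` and its signature are taken from later Concrete-ML documentation, and its availability in v0.1.0 is an assumption.

```python
# Hypothetical sketch of the torch -> ONNX -> quantized numpy path.
# ASSUMPTION: compile_torch_model is documented in later Concrete-ML
# releases; its presence and signature in v0.1.0 are not confirmed.
import numpy
import torch

from concrete.ml.torch.compile import compile_torch_model  # assumed path

class TinyMLP(torch.nn.Module):
    """Small network built only from ops listed above (Gemm + ReLU)."""

    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(4, 8)
        self.fc2 = torch.nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

# A representative input set drives the post-training quantization.
input_set = numpy.random.uniform(-1, 1, size=(100, 4)).astype(numpy.float32)
quantized_module = compile_torch_model(TinyMLP(), input_set, n_bits=3)
# The returned QuantizedModule is the multi-input, complex-topology
# container mentioned in the entries above.
```
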
Fix

  • Remove fixmes, add HardSigmoid (847db99)
  • Docs (8096acc)
  • Safer default parameter for ensemble methods (8da0988)
  • Increase n_bits for clear vs quantized comparison for decision tree (b9f1206)
  • Fix notebook on macOS + some warnings (ab2a821)
  • Handle the XGBoost edge case where n_estimators = 1 (3673584)
  • Issues in the Classifier Comparison notebook (3053085)
  • One more bug about convergence (c6cee4e)
  • Fix convergence issues in tests (7b92bd8)
  • Remove metric evaluation for n_bits < 16 (7c4bd0e)
  • Wrong xgboost init (2ed49b6)
  • Workaround while #518 is being investigated (7f521f9)
  • Looks like a mistake (69e9b15)
  • Speed up qnn tests (9d07f5c)
  • Workaround for segfaults on macOS (798662f)
  • Remove check_r2_score with argmax predictions (7d52750)
  • Review (82abb12)
  • Fully connected notebook (1f7b92e)
  • When testing determinism, tolerate issues in the underlying Concrete Numpy (6595495)
  • Change the md5, even if the license hasn't changed (182084f)
  • Decision tree bug (84a65e4)
  • Remove GPL lib + update sphinx-zama-theme to ^2.2.0 (65aa1b2)
  • Remove HardSigmoid and Tanhshrink for now, since there are precision issues (51c0bc5)
  • Remove the fully connected FHE vs quantization comparison (1c527be)
  • Use right imports in docs (9fe43bf)
  • Change qvalues to values in quantized module and fix iris notebook mistake (11c5616)
  • Fix wrong fixture for a list, fix a flaky decision tree test, and add a fixture to check that model execution is good (cc3c0b6)
  • Add missing docstrings (0c164f5)
  • Fix incomplete docstrings, found thanks to darglint (45d4fca)

Documentation

  • Refresh notebooks (ff771aa)
  • Update the theme (0d1e672)
  • Update simple example readme (21d9a77)
  • Readme (029237a)
  • Update compute with quantization (b836811)
  • Rewrite the developer section for Quantization, show how to work with quantized operators (436e71e)
  • Add Pruning docs (33b044f)
  • Add info on skorch (6b3ca04)
  • Add documentation (ed9ee3f)
  • Add documentation (c4e73ec)
  • Improve quantization explanation in the User Guide (4508282)
  • Add a summary of our results (1046cc2)
  • Write Virtual Lib documentation for release (4f68f3f)
  • Add hummingbird usage (05103b3)
  • Update docs for release (95a1669)
  • Update our project setup doc (beef6c9)
  • Update README (51ed1be)
  • Add automatic ToC to README (4d51c96)
  • Add source in docs (37227c6)
  • Small update to the docker set up instructions (833d6e4)
  • Update contributing to mention make conformance (bff86ca)
  • No need to update releasing.md (179e235)
  • Add a pruning section (c977a32)
  • No dedicated RF or SVM notebook (6307508)
  • Warn the user that GLM and PoissonRegression are not yet natively supported in the package (e3e0234)
  • Add Random Forest to our classifier comparison (858f193)
  • Add XGBClassifier to our classifier comparison (eff1b15)
  • Update our documentation (2b16560)
  • Add a comparison of our classifiers (ce0d24b)
  • Make the plan for the documentation (0306cee)
  • Add a sentence about quantized module 237 (a440de3)
  • Use 2.1.0 theme (4fb1445)
  • Add starter docs for how ONNX is used internally (16978b6)
  • Add relevant docs from concrete-numpy (235322a)
  • Check mdformat (c29504a)