
v0.6.1

@bcm-at-zama bcm-at-zama released this 12 Jan 14:49
· 8 commits to release/0.6.x since this release

Summary

This Concrete-ML release adds support for:

  • 16-bit built-in NN models,
  • 20+ bit purely leveled (i.e., very fast) linear models, which match floating-point models in terms of accuracy

New tutorials show how to train large neural networks, either from scratch or by transfer learning, how to convert them into FHE-friendly models, and finally how to evaluate them in FHE and with simulation. The release also adds tools that leverage FHE simulation to select optimal parameters that speed up neural network inference. Python 3.10 support is included in this release.
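FHE-friendly models operate on integers, so converting a floating-point model means quantizing its weights and activations to a fixed bit-width (16 bits for the built-in NN models mentioned above). The sketch below illustrates plain uniform quantization with hypothetical helper names; it is an illustration of the idea, not Concrete-ML's actual API.

```python
# Illustration only: uniform quantization to n_bits, the kind of
# integer representation FHE-friendly models rely on. Function names
# here are ours, not part of Concrete-ML.

def quantize(values, n_bits):
    """Map floats to integers in [0, 2**n_bits - 1] with a uniform scale."""
    lo, hi = min(values), max(values)
    scale = (hi - lo) / (2**n_bits - 1)
    return [round((v - lo) / scale) for v in values], scale, lo

def dequantize(q_values, scale, zero_point):
    """Recover approximate floats from the quantized integers."""
    return [q * scale + zero_point for q in q_values]

values = [0.0, 0.5, 1.0]
q, scale, zero_point = quantize(values, 16)
recovered = dequantize(q, scale, zero_point)
# With 16 bits, the round-trip error stays below one quantization step.
```

At higher bit-widths the quantization step shrinks, which is why the 20+ bit linear models in this release can match floating-point accuracy.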

Links

Docker Image: zamafhe/concrete-ml:v0.6.1
pip: https://pypi.org/project/concrete-ml/0.6.1
Documentation: https://docs.zama.ai/concrete-ml

v0.6.1

Feature

  • Support 20+ bits linear models (4f112ca)
  • Add Python 3.10 support (aede49b)
  • Add a CIFAR-10 CNN with 8-bit accumulators and show p_error search (35715e2)
  • Add tutorials for transfer learning for CIFAR-10/100 (42405c5)
  • Add CIFAR-10 VGG CNN with split clear/FHE compilation (637c272)
  • Change the license (a52d917)
  • Add support for global_p_error (b54fcac)
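The `p_error` and `global_p_error` settings bound the probability that a single table lookup (programmable bootstrap), or the circuit as a whole, returns a wrong result. Assuming independent failures across `n` lookups (a simplifying assumption for illustration, not necessarily the exact model Concrete uses), the two quantities are related as `global_p_error = 1 - (1 - p_error)**n`. A small sketch, with helper names of our own choosing:

```python
# Illustration of how a per-lookup error rate and a circuit-wide error
# rate relate, assuming independent failures. Names are ours, not the
# Concrete-ML API.

def circuit_p_error(p_error, n_lookups):
    """Probability that at least one of n_lookups bootstraps errs."""
    return 1 - (1 - p_error) ** n_lookups

def per_lookup_p_error(global_target, n_lookups):
    """Per-lookup error rate needed to stay under a circuit-wide target."""
    return 1 - (1 - global_target) ** (1 / n_lookups)
```

Loosening either bound lets the compiler pick smaller cryptographic parameters, which is what makes the p_error search shown in the CIFAR-10 example speed up inference.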

Fix

  • Flaky FHE vs VirtualLib overflow (1780cd5)
  • Ensure all operations in QNNs are done in FP64 (52e87b7)
  • Raise error when model results are mismatched between Concrete-ML and VL (b7fa8c1)
  • Set specific dependency versions (f2dfc3e)
  • Flaky client server API (1495214)
  • Issues with pytest and macOS (5196c68)

Documentation

  • Add a showcase of use-cases and tutorials (36adc09)
  • Add global_p_error (b6b4d7a)
  • Add CIFAR-10/100 examples for the fine-tuning approach (45a4f66)
  • Fully connected NN on MNIST using 16b in VL (f0be5f3)
  • Provide an image filtering demo app (fd11f25)