diff --git a/README.md b/README.md
index 9cd488eee..e650f0973 100644
--- a/README.md
+++ b/README.md
@@ -207,7 +207,7 @@ Concrete ML built-in models have APIs that are almost identical to their scikit-
 - [Encrypted Large Language Model](use_case_examples/llm/): converting a user-defined part of a Large Language Model for encrypted text generation. This demo shows the trade-off between quantization and accuracy for text generation and shows how to run the model in FHE.
 - [Private inference for federated learned models](use_case_examples/federated_learning/): private training of a Logistic Regression model and then importing the model into Concrete ML and performing encrypted prediction.
-- [Titanic](use_case_examples/titanic/KaggleTitanic.ipynb): solving the [Kaggle Titanic competition](https://www.kaggle.com/c/titanic/). Implemented with XGBoost from Concrete ML, this example comes as a companion of the [Kaggle notebook](https://www.kaggle.com/code/concretemlteam/titanic-with-privacy-preserving-machine-learning).
+- [Titanic](use_case_examples/titanic/KaggleTitanic.ipynb): solving the [Kaggle Titanic competition](https://www.kaggle.com/c/titanic/). Implemented with XGBoost from Concrete ML, this example comes as a companion of the [Kaggle notebook](https://www.kaggle.com/code/concretemlteam/titanic-with-privacy-preserving-machine-learning), and was the subject of a blogpost in [KDnuggets](https://www.kdnuggets.com/2022/08/machine-learning-encrypted-data.html).
 - [CIFAR10 FHE-friendly model with Brevitas](use_case_examples/cifar/cifar_brevitas_training): training a VGG9 FHE-compatible neural network using Brevitas, and a script to run the neural network in FHE. Execution in FHE takes ~4 minutes per image and shows an accuracy of 88.7%.
 - [CIFAR10 / CIFAR100 FHE-friendly models with Transfer Learning approach](use_case_examples/cifar/cifar_brevitas_finetuning): series of three notebooks, that convert a pre-trained FP32 VGG11 neural network into a quantized model using Brevitas. The model is fine-tuned on the CIFAR data-sets, converted for FHE execution with Concrete ML and evaluated using FHE simulation. For CIFAR10 and CIFAR100, respectively, our simulations show an accuracy of 90.2% and 68.2%.
diff --git a/docs/guides/prediction_with_fhe.md b/docs/guides/prediction_with_fhe.md
index df85a5b08..e201a3b7f 100644
--- a/docs/guides/prediction_with_fhe.md
+++ b/docs/guides/prediction_with_fhe.md
@@ -112,11 +112,12 @@ class FCSmall(nn.Module):
     def __init__(self, input_output):
         super().__init__()
         self.quant_input = qnn.QuantIdentity(bit_width=3)
         self.fc1 = qnn.QuantLinear(in_features=input_output, out_features=input_output, weight_bit_width=3, bias=True)
+        self.quant_2 = qnn.QuantIdentity(bit_width=3)
         self.act_f = nn.ReLU()
         self.fc2 = qnn.QuantLinear(in_features=input_output, out_features=input_output, weight_bit_width=3, bias=True)
 
     def forward(self, x):
-        return self.fc2(self.act_f(self.fc1(self.quant_input(x))))
+        return self.fc2(self.quant_2(self.act_f(self.fc1(self.quant_input(x)))))
 
 torch_model = FCSmall(3)
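
The docs change above inserts a `QuantIdentity` re-quantization step after the float `ReLU`, so that the second `QuantLinear` receives quantized inputs, which Brevitas QAT models generally need to compile with Concrete ML. Below is a minimal sketch, not part of the diff, of how the patched `FCSmall` model could be compiled and run, assuming Concrete ML's `compile_brevitas_qat_model` API; the calibration set (name, size, and random values) is illustrative only.

```python
import numpy
import torch.nn as nn
import brevitas.nn as qnn

from concrete.ml.torch.compile import compile_brevitas_qat_model


class FCSmall(nn.Module):
    """Small fully connected QAT model, as patched in the docs guide above."""

    def __init__(self, input_output):
        super().__init__()
        self.quant_input = qnn.QuantIdentity(bit_width=3)
        self.fc1 = qnn.QuantLinear(in_features=input_output, out_features=input_output, weight_bit_width=3, bias=True)
        # Re-quantize after the float ReLU so fc2 gets quantized inputs
        self.quant_2 = qnn.QuantIdentity(bit_width=3)
        self.act_f = nn.ReLU()
        self.fc2 = qnn.QuantLinear(in_features=input_output, out_features=input_output, weight_bit_width=3, bias=True)

    def forward(self, x):
        return self.fc2(self.quant_2(self.act_f(self.fc1(self.quant_input(x)))))


torch_model = FCSmall(3)

# Illustrative calibration data: 100 random samples with 3 features
calibration_set = numpy.random.uniform(-1, 1, size=(100, 3)).astype(numpy.float32)

# Compile the Brevitas QAT model into an FHE circuit
quantized_module = compile_brevitas_qat_model(torch_model, calibration_set)

# Run encrypted inference on one sample (use fhe="simulate" for fast simulation)
y = quantized_module.forward(calibration_set[0:1], fhe="execute")
```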