
Add SGD regressor #374

Merged
merged 8 commits into main from feat/add_sgd_regressor
Oct 23, 2023

Conversation

@andrei-stoian-zama (Collaborator) commented Oct 19, 2023

@andrei-stoian-zama andrei-stoian-zama requested a review from a team as a code owner October 19, 2023 13:43
@cla-bot cla-bot bot added the cla-signed label Oct 19, 2023
@bcm-at-zama (Collaborator) left a comment

Thanks a lot, this was quick.

A few questions:

  • Are we not adding the classifier at the same time?
  • Also, I think there are a few places in the docs where the new model should be mentioned, like here.
  • And nothing to change on the test side?

benchmarks/common.py (review thread resolved)
@RomanBredehoft (Collaborator) left a comment

Thanks for this, though there are a few missing changes. Also, maybe create an issue for the workaround!

@bcm-at-zama (Collaborator)

@RomanBredehoft: how was coverage at 100% if, as you say, @andrei-stoian-zama missed a few things in the tests?

@kcelia (Collaborator) commented Oct 19, 2023

@andrei-stoian-zama, I think you need to update:

  • tests/common/test_skearn_model_lists.py::test_sklearn_args | test_models_and_datasets | test_get_sklearn_models -> update these to the new number of models and generated datasets (e.g., assert test_counter == 20, ...)
  • tests/sklearn/test_dump_onnx.py -> a fun exercise where you have to check that the dumped model looks right lol
  • add it to _classifier_models in src/concrete/ml/pytest/utils.py
  • import it in tests/common/test_skearn_model_lists.py
  • add a tolerance threshold to check_correctness_with_sklearn in tests/sklearn/test_sklearn_models.py
  • import it in src/concrete/ml/sklearn/__init__.py: from .linear_model import ElasticNet, Lasso, LinearRegression, ...

If you encounter a flaky failure in tests/sklearn/test_sklearn_models.py::test_separated_inference for KNeighborsClassifier, it's normal: https://github.com/zama-ai/concrete-ml/actions/runs/6575120273/job/17863923434?pr=372 has to be merged first.
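The first item in the checklist above describes a hard-coded model count that must be bumped whenever a model is added. A minimal sketch of that pattern (the registry contents, count, and function name here are illustrative, not the actual test code):

```python
# Illustrative sketch of a hard-coded model-count check, as described in the
# checklist above. The registry and expected count are made up for this example.
MODEL_REGISTRY = [
    "LinearRegression",
    "Lasso",
    "ElasticNet",
    "SGDRegressor",  # newly added model: the expected count below was bumped too
]

EXPECTED_NUMBER_OF_MODELS = 4


def check_model_registry():
    """Fail loudly if a model was added without updating the expected count."""
    assert len(MODEL_REGISTRY) == EXPECTED_NUMBER_OF_MODELS, (
        f"Expected {EXPECTED_NUMBER_OF_MODELS} models, found {len(MODEL_REGISTRY)}: "
        "did you forget to update the count after adding a model?"
    )


check_model_registry()
```

The point of such a deliberately manual assertion is that adding a new model breaks the test until the author acknowledges it, so new models cannot slip into the library without their tests being reviewed.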

@bcm-at-zama (Collaborator)

> @RomanBredehoft : how was coverage 100%, if you say that @andrei-stoian-zama missed few things in tests?

OK, my bad. Thanks to is_public_cml_model, the models are called, but yes, the pytest runs are red, as explained by @RomanBredehoft / @kcelia.

@bcm-at-zama (Collaborator)

I had in mind that 100% coverage => tests are OK, since a failing test usually means coverage < 100%, but it's not always the case, as shown here.

@RomanBredehoft (Collaborator)

Yeah, the model is still called and tested; it's just that some tests need to account for it manually. And that's intentional, by the way, to make sure everything is as expected when adding new models (at least for the dump and test_sklearn_list tests).

@bcm-at-zama (Collaborator)

> and it's intentional btw, to make sure everything is expected when adding new models (at least for the dump and test_sklearn_list tests)

Yes, I like the fact that we check that we really test the new models. Things that are too automatic can easily hide mistakes we don't see.

fd0r previously approved these changes Oct 20, 2023
@kcelia (Collaborator) commented Oct 20, 2023

I updated my previous message.

@kcelia (Collaborator) commented Oct 23, 2023

You haven't added it to src/concrete/ml/sklearn/__init__.py

@kcelia (Collaborator) left a comment

Thanks!

@RomanBredehoft (Collaborator) left a comment

Great, thanks!

@github-actions

⚠️ Known flaky tests have been re-run ⚠️

One or several tests initially failed but were detected as known flaky tests. They therefore have been re-run and passed. See below for more details.

Failed tests details

Known flaky tests that initially failed:

  • tests/torch/test_brevitas_qat.py::test_brevitas_tinymnist_cnn[True-True-3]

@github-actions

Coverage passed ✅

Coverage details

---------- coverage: platform linux, python 3.8.18-final-0 -----------
Name    Stmts   Miss  Cover   Missing
-------------------------------------
TOTAL    6306      0   100%

51 files skipped due to complete coverage.

@andrei-stoian-zama andrei-stoian-zama merged commit abb143c into main Oct 23, 2023
8 checks passed
@andrei-stoian-zama andrei-stoian-zama deleted the feat/add_sgd_regressor branch October 23, 2023 09:36