Problem with test_tools::assert_noise_distribution #423
Comments
Think I found another issue with noise estimation that's small enough that I won't create a separate ticket. In the noise formula code, I believe this should be changed to:

`let b = (1 << base_log.0) as f64;`
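To illustrate the difference (a minimal sketch, not the actual concrete-npe code; `decomposition_base` is a made-up helper): the decomposition base that enters the noise formulas is b = 2^base_log, so casting `base_log` directly would plug in log2(b) instead of b.

```rust
// Minimal sketch, not the actual concrete-npe code: the decomposition base
// used in the noise formulas is b = 2^base_log, so the conversion to f64
// has to shift, not just cast.
fn decomposition_base(base_log: usize) -> f64 {
    // buggy version: `base_log as f64` would plug log2(b) into the formula
    // fixed version: b = 2^base_log
    (1u64 << base_log) as f64
}

fn main() {
    // e.g. with base_log = 7 the base is 128, not 7
    assert_eq!(decomposition_base(7), 128.0);
    println!("b = {}", decomposition_base(7));
}
```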
Hello @mvanbeirendonck 👋,

Thanks for reporting those problems 🙂

For the first bug you noticed regarding `assert_noise_distribution`, we are aware of the problem.

For the second problem you mentioned regarding the base log, I created another ticket to track it (#129), as the problem is fairly separate.

Best,
Alex
Hi @aPere3,

Thanks for your reply! Next time I'll remember to just create two separate tickets 😉

I would also be interested in other ways to measure and verify the output noise. Is there actually any particular reason that you need to use statistical tests? Isn't the output noise deterministic? Of course, there is the extra FFT noise that I imagine makes this hard to measure.

Is it the concrete channel in the FHE.org Discord that you like to use for these types of questions? Or is there another Discord dedicated to concrete?

Feel free to close this issue since you are aware of the problem with `assert_noise_distribution`.

Best,
Hello @mvanbeirendonck 👋,

The goal of this statistical test is to verify that the implementation of the operators matches the implementations of the noise formulas. For the operators implemented in concrete that use only integer arithmetic, I think the result is indeed deterministic, because both the integer arithmetic and the algorithms are deterministic. On the other hand, as soon as we use the FFT, we rely on floating point arithmetic and dynamic algorithm selection (by fftw), which I think makes us lose the determinism. This being said, I am probably not the most knowledgeable on this topic 😅, and you will probably get better answers if you ask on the forum.

Best,
Alex
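For illustration, here is a rough sketch (made-up helper names, not Concrete's test code) of the kind of check such a statistical test boils down to: draw many noise samples, estimate their variance, and compare it to the variance predicted by the noise formula within some tolerance.

```rust
// Rough illustration, not Concrete's test code: compare the empirical variance
// of torus-valued noise samples against a variance predicted by a noise formula.
fn empirical_variance(samples: &[f64]) -> f64 {
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    samples.iter().map(|x| (x - mean).powi(2)).sum::<f64>() / n
}

fn noise_matches_prediction(samples: &[f64], predicted_variance: f64, rel_tol: f64) -> bool {
    let measured = empirical_variance(samples);
    (measured - predicted_variance).abs() <= rel_tol * predicted_variance
}

fn main() {
    // hypothetical samples already expressed on the torus, in [-0.5, 0.5)
    let samples = vec![0.001, -0.002, 0.0005, -0.0007, 0.0012];
    let predicted_variance = 1.5e-6; // would come from a noise formula in practice
    println!("{}", noise_matches_prediction(&samples, predicted_variance, 0.5));
}
```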
Describe the bug
There appears to be an inconsistency in `test_tools::assert_noise_distribution`. This function computes the modular distance of the errors on the torus:
https://github.com/zama-ai/concrete/blob/fbca4f8e998628e08a235550e4bb30fdae126972/concrete-core/src/backends/core/private/mod.rs#L119-L122
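As a sketch of the idea (a simplified illustration, not the linked implementation): the modular distance between two `u64` values interpreted as torus elements can be obtained from their wrapping difference, mapped into [-0.5, 0.5).

```rust
// Sketch only, not the linked implementation: modular distance of two u64
// values interpreted as elements of the torus T = R/Z.
fn torus_modular_distance(a: u64, b: u64) -> f64 {
    // wrapping difference, reinterpreted as a signed offset
    let diff = a.wrapping_sub(b) as i64;
    // scale from integer units to the torus, landing in [-0.5, 0.5)
    diff as f64 / 2f64.powi(64)
}

fn main() {
    // values on opposite sides of 0 are close on the torus
    let d = torus_modular_distance(1, u64::MAX);
    println!("{d}"); // ~ 2 / 2^64, i.e. essentially 0
}
```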
However, it is often called with a std/variance that is the output of `concrete-npe`:

https://github.com/zama-ai/concrete/blob/fbca4f8e998628e08a235550e4bb30fdae126972/concrete-core/src/backends/core/private/crypto/bootstrap/fourier/tests.rs#L362-L372
Those standard deviations/variances are not on the torus, but are given as `u32` or `u64` values.
Because of this inconsistency, even if you introduce maximum errors of ±0.5 on the torus in any of the tested routines, the tests will still pass. There are other tests that use `assert_noise_distribution` where the std or variance input doesn't come from `concrete-npe`; those don't seem to have this problem.
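To make the scale mismatch concrete, here is a hedged sketch (made-up helper, not Concrete's API): a standard deviation expressed in `u64` integer units is 2^64 times larger than the same standard deviation expressed on the torus, so a torus-scale error of up to ±0.5 is negligible against it and the assertion cannot fail.

```rust
// Sketch of the unit mismatch, not Concrete's code: the same noise level
// expressed on the torus vs. in u64 integer units.
fn std_torus_to_u64_units(std_on_torus: f64) -> f64 {
    // one full turn of the torus corresponds to 2^64 integer steps
    std_on_torus * 2f64.powi(64)
}

fn main() {
    let std_on_torus = 1e-6;
    let std_in_u64_units = std_torus_to_u64_units(std_on_torus);
    // The worst possible error on the torus is 0.5; against a bound expressed
    // in u64 units it looks vanishingly small, so the test cannot fail.
    println!("std in u64 units: {std_in_u64_units:e}"); // ~1.8e13
    println!("max torus error / std in u64 units: {:e}", 0.5 / std_in_u64_units);
}
```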
To Reproduce

One of the easiest ways to reproduce: just comment out something critical in the external product. The bootstrapping tests will still pass.