1.9.1: improbable but not impossible divide-by-zero #1606
(I love diving into floating point.) Yes, the problem arises when all three components have values small enough that the squared length underflows to zero. As you've mentioned, we could reject all points in the disk/sphere where the norm is less than some small value; a good lower limit would be one large enough that its square doesn't underflow. I think it might make sense to create a dedicated function for this.
The old method had a floating-point weakness in which all three vector components, when small enough, can yield a vector length that underflows to zero, leading to a bogus [+/- infinity, +/- infinity, +/- infinity] result. This change also eliminates the `random_in_unit_sphere()` function, and does everything inside the `random_unit_vector()` function, which allows us to compute the vector length only once and then re-use it for normalization. Resolves #1606
So the correct implementation would be:

```cpp
inline vec3 random_unit_vector() {
    while (true) {
        // Extend the upper bound by one ULP so that 1.0 itself is reachable
        // (or add the epsilon inside vec3::random).
        auto p = vec3::random(-1, 1 + std::numeric_limits<double>::epsilon());
        auto lensq = p.length_squared();
        if (0 < lensq && lensq <= 1)
            return p / std::sqrt(lensq);
    }
}
```

I believe :) Maybe this could be fixed as part of #1637.
I think you're missing the explanation above. Regarding your comment about excluding 1.0 from the random range: this doesn't matter. It changes the range, yes, but not the uniformity. Indeed, we can select uniformly distributed points from a square of any width, as the resulting accepted points are then converted to normalized vectors. The disk radius could be any value.
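A quick empirical sketch of that claim (not from the thread; the seed, widths, and bin count are arbitrary): sample from squares of several half-widths, reject points outside the inscribed disk, normalize, and histogram the resulting angles. The histogram comes out flat for every width.

```cpp
#include <cmath>
#include <cstdio>
#include <random>

int main() {
    const double pi = 3.14159265358979323846;
    for (double w : {1.0, 2.0, 100.0}) {           // square half-widths to compare
        std::mt19937 rng(42);
        std::uniform_real_distribution<double> dist(-w, w);
        int bins[8] = {0};
        for (int accepted = 0; accepted < 100000; ) {
            double x = dist(rng), y = dist(rng);
            double r2 = x*x + y*y;
            if (r2 == 0 || r2 > w*w) continue;     // rejection test, scaled to w
            // The normalized direction (x,y)/sqrt(r2) has angle atan2(y,x);
            // bucket it into 8 equal angular bins.
            int bin = int((std::atan2(y, x) + pi) / (2*pi) * 8) % 8;
            bins[bin]++;
            ++accepted;
        }
        std::printf("w = %g:", w);
        for (int count : bins) std::printf(" %d", count);  // ~12500 in each bin
        std::printf("\n");
    }
}
```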
That statement is incorrect. In your original statement you correctly identified that the squaring is the potentially problematic part.
That said, while squaring has the potential to underflow to zero, we are talking about numbers that are already virtually zero, so underflow in a single vector component is not a problem. A problem does arise, however, when all 3 components end up being 0, resulting in a squared length of 0 and a division by 0 in the normalization of the vector.
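A tiny demonstration of that failure mode (a sketch, with an arbitrarily chosen component value): each component is a perfectly ordinary double, but the squared length falls below the smallest subnormal, so it becomes exactly zero and the normalization divides by zero.

```cpp
#include <cmath>
#include <iostream>

int main() {
    double c = 1e-170;            // an ordinary, representable double
    double lensq = 3 * c * c;     // ~3e-340 is below the smallest subnormal
                                  // (~4.9e-324), so the product underflows to 0.0
    std::cout << lensq << '\n';   // prints 0
    // Normalizing the vector (c, c, c) then divides by sqrt(0):
    std::cout << (c / std::sqrt(lensq)) << '\n';   // prints inf
}
```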
Yes, I flipped that. Good catch.
Perhaps, but we never test single components — only the vector length.
I'll re-open this to play with small denormal components. I'm curious how things degrade when trying to normalize a vector when you only have a couple of bits of precision left.
As a side note, all of this is quite dependent on the values coming out of the double random function, given the exponential but piecewise-linear nature of floating point.
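To illustrate (assuming the simple `std::rand()`-based generator from the book's rtweekend.h, which is an assumption here): such a generator's outputs land on an evenly spaced grid, so a component is either exactly zero or bounded well away from the denormal range.

```cpp
#include <cstdio>
#include <cstdlib>

// Assumed implementation, matching the book's rtweekend.h: a real in [0,1).
inline double random_double() {
    return std::rand() / (RAND_MAX + 1.0);
}

// A real in [min,max).
inline double random_double(double min, double max) {
    return min + (max - min) * random_double();
}

int main() {
    // Outputs of random_double(-1,1) land on a grid with spacing 2/(RAND_MAX+1):
    // a component is either exactly 0 or roughly 1e-9 or more in magnitude (for
    // a 31-bit RAND_MAX). Denormal components simply cannot occur; the only
    // dangerous draw is all three components hitting exactly 0 at once.
    std::printf("grid spacing = %g\n", 2.0 / (RAND_MAX + 1.0));
}
```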
It's worth noting that the current (as of 2024-12-30) implementation computes the squared length once inside `random_unit_vector()`. Given this, the original mitigation proposed with this issue is unnecessary. @MaliusArth's suggestion of using an epsilon-extended upper bound is likewise not needed. Given all this, a simple test for greater than zero should be fine.
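Put together, a sketch of that final form (assuming the book's `vec3` interface; the exact lower bound in the published code may differ):

```cpp
inline vec3 random_unit_vector() {
    while (true) {
        auto p = vec3::random(-1, 1);
        auto lensq = p.length_squared();
        // Reject points outside the unit sphere, and reject a zero squared
        // length so the division below can never produce infinities.
        if (0 < lensq && lensq <= 1)
            return p / std::sqrt(lensq);
    }
}
```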
Inside `random_unit_vector()`, if `random_in_unit_sphere()` returns a vector very close to zero, passing that to `unit_vector()` could result in a divide-by-zero fault. One way to avoid this would be if the rejection method in `random_in_unit_sphere()` also rejected near-zero values. There is a `near_zero()` function, introduced later (in 10.3), that would be useful here.
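A minimal sketch of that proposed mitigation (assuming the book's `vec3::random(min,max)` and the `near_zero()` helper from 10.3):

```cpp
vec3 random_in_unit_sphere() {
    while (true) {
        auto p = vec3::random(-1, 1);
        // Reject points outside the unit sphere, and also reject points so
        // close to the origin that normalizing them would be unsafe.
        if (p.length_squared() < 1 && !p.near_zero())
            return p;
    }
}
```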