Faster Approximate Intersections Code #192
Basically, the routine divides the initial curve into about 50 segments, and if any pair of those segments intersects, it zooms in on that segment and does 50 sub-segments within that 1/50th of the segment; with numpy acceleration this goes lightning fast. The typical error in a typical curve is about 1e-5 or so, and we deal with t values near 1e-3. But if the intersection occurs very near one of the 50 segment breaks while actually falling in the adjacent segment, the routine can false-negative a clear intersection. For example, it hits on segment 30 at t=0.999995, but the error is such that the intersection is actually in segment 31 at t=0.000004; the zoom then searches segment 30, fails to find the intersection, and defaults out declaring the first hit a false positive, which actually results in a false negative. The fixed recursion:

    for i, hit in enumerate(where_hit):
        # Zoomed min + segment intersected.
        # Fractional guess within the intersected segment.
        at_guess = ta[0] + (hit[1] + ta_hit[i]) * step_a
        bt_guess = tb[0] + (hit[0] + tb_hit[i]) * step_b
        if depth == enhancements:
            # We've enhanced as best we can; yield the current + segment t-value as our answer.
            yield at_guess, bt_guess
        else:
            yield from self._find_intersections_intercept(
                segment1,
                segment2,
                fun1,
                fun2,
                ta=(at_guess - step_a / 2, at_guess + step_a / 2, at_guess),
                tb=(bt_guess - step_b / 2, bt_guess + step_b / 2, bt_guess),
                samples=enhance_samples,
                depth=depth + 1,
                enhancements=enhancements,
                enhance_samples=enhance_samples,
            )

This fixes the issue, generally by recentering the zoom on the full fractional hint ±step/2.
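To see why recentering helps, here is a tiny self-contained sketch (with made-up numbers in the spirit of the boundary scenario above) comparing the old zoom window, which covers only the segment that registered the hit, with the recentered ±step/2 window:

```python
# Hypothetical numbers illustrating the boundary failure mode described above.
samples = 50
step = 1.0 / (samples - 1)
seg_start = 30 * step                     # start of the segment that registered the hit
guess = seg_start + 0.999995 * step       # fractional guess at the very end of that segment
true_t = seg_start + step + 4e-6 * step   # actual root just inside the next segment

old = (seg_start, seg_start + step)           # old zoom window: the hit segment only
new = (guess - step / 2, guess + step / 2)    # fixed zoom: centered on the guess

assert not (old[0] <= true_t <= old[1])   # old window misses -> false negative
assert new[0] <= true_t <= new[1]         # recentered window still contains the root
```

The recentered window always contains the true root as long as the guess error is under step/2, regardless of which side of a segment break the root falls on.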
@tatarize, fantastic contribution to enhancing intersection speed! The current implementation is effective for the QuadraticBezier and Line classes. However, a limitation arises because the .points() method is absent for the Path class. Appreciate your efforts!

    import numpy as np

    def find_intersections(
        segment1,
        segment2,
        samples=50,
        ta=(0.0, 1.0, None),
        tb=(0.0, 1.0, None),
        depth=0,
        enhancements=2,
        enhance_samples=50,
    ):
        if depth == 0:
            # Quick fail. There are no intersections without overlapping quick bounds.
            try:
                s1x = [e.real for e in segment1.bpoints()]
                s2x = [e.real for e in segment2.bpoints()]
                if min(s1x) > max(s2x) or max(s1x) < min(s2x):
                    return
                s1y = [e.imag for e in segment1.bpoints()]
                s2y = [e.imag for e in segment2.bpoints()]
                if min(s1y) > max(s2y) or max(s1y) < min(s2y):
                    return
            except AttributeError:
                pass
        assert samples >= 2
        a = np.linspace(ta[0], ta[1], num=samples)
        b = np.linspace(tb[0], tb[1], num=samples)
        step_a = a[1] - a[0]
        step_b = b[1] - b[0]
        # Changes from the original code start here.
        j = [segment1.point(t) for t in a]
        k = [segment2.point(t) for t in b]
        ax1, bx1 = np.meshgrid(np.real(j[:-1]), np.real(k[:-1]))
        ax2, bx2 = np.meshgrid(np.real(j[1:]), np.real(k[1:]))
        ay1, by1 = np.meshgrid(np.imag(j[:-1]), np.imag(k[:-1]))
        ay2, by2 = np.meshgrid(np.imag(j[1:]), np.imag(k[1:]))
        denom = (by2 - by1) * (ax2 - ax1) - (bx2 - bx1) * (ay2 - ay1)
        qa = (bx2 - bx1) * (ay1 - by1) - (by2 - by1) * (ax1 - bx1)
        qb = (ax2 - ax1) * (ay1 - by1) - (ay2 - ay1) * (ax1 - bx1)
        hits = np.dstack(
            (
                denom != 0,  # Cannot be parallel.
                np.sign(denom) == np.sign(qa),  # D and Qa must have the same sign.
                np.sign(denom) == np.sign(qb),  # D and Qb must have the same sign.
                abs(denom) >= abs(qa),  # |D| >= |Qa| (else t not between 0 and 1).
                abs(denom) >= abs(qb),  # |D| >= |Qb| (else t not between 0 and 1).
            )
        ).all(axis=2)
        where_hit = np.argwhere(hits)
        if len(where_hit) != 1 and step_a < 1e-10:
            # Hits are becoming unstable; give the last best value.
            if ta[2] is not None and tb[2] is not None:
                yield ta[2], tb[2]
            return
        # Calculate the t values for the intersections.
        ta_hit = qa[hits] / denom[hits]
        tb_hit = qb[hits] / denom[hits]
        for i, hit in enumerate(where_hit):
            at = ta[0] + float(hit[1]) * step_a  # Zoomed min + segment intersected.
            bt = tb[0] + float(hit[0]) * step_b
            a_fractional = ta_hit[i] * step_a  # Fractional guess within the intersected segment.
            b_fractional = tb_hit[i] * step_b
            if depth == enhancements:
                # We've enhanced as best we can; yield the current + segment t-value as our answer.
                yield at + a_fractional, bt + b_fractional
            else:
                yield from find_intersections(
                    segment1,
                    segment2,
                    ta=(at, at + step_a, at + a_fractional),
                    tb=(bt, bt + step_b, bt + b_fractional),
                    samples=enhance_samples,
                    depth=depth + 1,
                    enhancements=enhancements,
                    enhance_samples=enhance_samples,
                )
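For reference, the hit test at the heart of the meshgrid code is the standard parametric line-line intersection. Here is a minimal scalar sketch of the same denom/qa/qb computation for a single pair of line segments (not the author's code, just an illustration of the formulas with one hand-picked pair of crossing diagonals):

```python
import numpy as np

# Segment A from (0,0) to (1,1); segment B from (0,1) to (1,0).
# The two diagonals of the unit square cross at (0.5, 0.5).
ax1, ay1, ax2, ay2 = 0.0, 0.0, 1.0, 1.0
bx1, by1, bx2, by2 = 0.0, 1.0, 1.0, 0.0

denom = (by2 - by1) * (ax2 - ax1) - (bx2 - bx1) * (ay2 - ay1)
qa = (bx2 - bx1) * (ay1 - by1) - (by2 - by1) * (ax1 - bx1)
qb = (ax2 - ax1) * (ay1 - by1) - (ay2 - ay1) * (ax1 - bx1)

# Same five conditions as the np.dstack(...).all(axis=2) test above.
hit = (denom != 0
       and np.sign(denom) == np.sign(qa)
       and np.sign(denom) == np.sign(qb)
       and abs(denom) >= abs(qa)
       and abs(denom) >= abs(qb))
ta, tb = qa / denom, qb / denom  # parametric positions of the crossing
print(hit, ta, tb)  # True 0.5 0.5
```

The meshgrid version evaluates exactly this arithmetic for all 49×49 segment pairs at once, which is where the numpy speedup comes from.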
The code only really does each individual intersection between segment pairs, and generalizing to whole paths might not work directly: rather than order 2 at most, an entire path can be order n, and it can absolutely self-intersect any number of times. You'd still do best either to brute-force all the pairwise intersections or to use a more extreme data structure like the one Esser50K was trying to build (to bring it below O(N²)). You'd probably just use this to check each segment within each path against each other; the fast-fail and fast per-pair operation would cover most of the work. I actually use this code and similar bits elsewhere, and decisions about which segments to check, rather than the path as a whole, are a lot easier to make per individual use case. Sometimes you care about self-intersections, sometimes you don't. Sometimes you're dealing with lines, and there are much easier methods.
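A brute-force path-vs-path loop along those lines could look like the sketch below. `_Seg` is a hypothetical stand-in exposing the same `.bpoints()` shape as svgpathtools segments, and the per-pair routine is passed in as a parameter (e.g. the `find_intersections` code above):

```python
from itertools import product

class _Seg:
    """Hypothetical stand-in with .bpoints() returning complex control points."""
    def __init__(self, *pts):
        self._pts = pts
    def bpoints(self):
        return self._pts

def bbox_overlap(seg1, seg2):
    # Quick-bounds reject, identical in spirit to the depth == 0 fast-fail above.
    s1x = [p.real for p in seg1.bpoints()]
    s2x = [p.real for p in seg2.bpoints()]
    if min(s1x) > max(s2x) or max(s1x) < min(s2x):
        return False
    s1y = [p.imag for p in seg1.bpoints()]
    s2y = [p.imag for p in seg2.bpoints()]
    return not (min(s1y) > max(s2y) or max(s1y) < min(s2y))

def path_intersections(path_a, path_b, pair_fn):
    # O(N*M) over segment pairs; pair_fn yields (t1, t2) pairs for one
    # segment pair, e.g. the find_intersections routine above.
    for i, j in product(range(len(path_a)), range(len(path_b))):
        if bbox_overlap(path_a[i], path_b[j]):
            for t1, t2 in pair_fn(path_a[i], path_b[j]):
                yield i, j, t1, t2
```

The bounding-box reject makes the quadratic pair loop cheap in practice, since most pairs never reach the expensive per-pair call.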
I needed intersection code from this project, and beyond being a bit slow, it also has some issues with being a bit inaccurate and not dealing with things like arcs at times. @Esser50K has a PR trying to speed this up with an acceleration structure, and I have another to avoid the initial call altogether if the quick-bounds don't overlap. This might be used instead of the current code. See #188 as an attempt to help with this, which I would assume is now likely moot.
Example code to test against:
It's random, so you'll get some variance, but here's the output:
So, on average, the error for the new routine is ~1e-11 (with default settings; it can reach ~1e-15), which is much less than the ~1e-7 of the current routine (for quad-quad in my test).
The algorithm itself is fairly simple. I cut both segments into 50 line segments. I make a 50×50 mesh of all the segments from both pieces and calculate whether they intercept anywhere. Then I take that 1/50th of each segment and recursively do this again within it. We perform a set number of these enhancements on the found point, and the final routine call adds the tiny fractional amount within the last field of view. We don't create new shapes; we just create some lines and make them smaller and smaller.
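As a rough back-of-envelope (my own accounting, not the author's): each pass shrinks the t-window by the step count, and the final linear interpolation inside the last window contributes an error roughly proportional to that window squared, which is consistent with the ~1e-11 figure:

```python
# With samples=50 there are 49 steps per pass, and the initial pass plus
# two default enhancements zoom in three times in total.
samples, enhancements = 50, 2
window = 1.0
for _ in range(enhancements + 1):
    window /= samples - 1  # zoom into one cell of the 49-step grid
print(window)       # ~8.5e-6: width of the final t-window
# The last pass interpolates linearly inside that window; for a smooth
# curve the linearization error scales roughly with window**2 ~ 7e-11.
print(window ** 2)
```

More enhancements shrink the window further, which is presumably how the ~1e-15 best case is reached.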
The current routine still finds some extra phantom points; this one shouldn't find any phantom point unless, with low enhancements, the linearization gets a false positive. It could also produce some false negatives if the polyline approximation doesn't have an intersection.
In the standard case this would be entirely able to find all the intersections between, say, an arc and an nth-order Bézier curve. We're just linearizing and zooming in repeatedly, which should work for any segments.
The test had it 50x faster.