Eliminating the sawtooth #41
That looks like the uplink being down for a short period. A bunch of pings queue up waiting to be sent, and when the link comes back up and the pongs all return, they appear as a slope from 80 ms down to 0 ms (the most recent ping), since the X axis is the time each ping was queued. You can tell it's the uplink because the down latency doesn't spike. It's also not a hiccup in Crusader, since there are multiple samples, which you can see from the cursor snaps in the UI or from the shape of the down latency here. I see this behavior with my Intel AX210 when it's scanning channels.
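A minimal sketch of that mechanism (hypothetical numbers, not Crusader code): pings queued during an 80 ms outage are all released at once, so each one's RTT is the release time minus its send time, and plotting RTT against send time gives a straight line of slope -1.

```rust
// Sketch: pings go out every 5 ms, the uplink is down from t = 0 ms
// to t = 80 ms, and every ping queued during the outage is released
// the instant the link comes back. RTT plotted against *send* time
// falls linearly from 80 ms to 0 ms: the sawtooth's back edge.
fn main() {
    let interval_ms = 5.0; // ping cadence (hypothetical)
    let outage_end_ms = 80.0; // uplink comes back; queue drains at once

    for i in 0..=16 {
        let send = interval_ms * i as f64;
        // Queued pings are all released together at outage_end_ms;
        // anything sent after that goes through immediately.
        let recv = if send < outage_end_ms { outage_end_ms } else { send };
        println!("sent at {send:5.1} ms -> RTT {:5.1} ms", recv - send);
    }
}
```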
I'm not suggesting that Crusader is gathering incorrect data. I'm wondering whether there is a better way to display it. Let's leave this question alone for a bit while you iron out …
An isochronous stream like VoIP is better than ping.
How to do that in Crusader? |
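(For illustration only; nothing in this thread suggests Crusader has such a feature today.) An isochronous probe sends on a fixed clock and never waits for the previous reply, unlike a stop-and-wait ping. A minimal Rust sketch, with a made-up echo server address and 4-byte sequence packet:

```rust
// Hypothetical sketch of an isochronous UDP probe (not Crusader's API):
// packets leave on a fixed 20 ms clock whether or not earlier replies
// have arrived, unlike a stop-and-wait ping. The echo server address
// and the 4-byte sequence-number packet format are made up.
use std::net::UdpSocket;
use std::time::{Duration, Instant};

fn main() -> std::io::Result<()> {
    let socket = UdpSocket::bind("0.0.0.0:0")?;
    socket.connect("192.0.2.1:9000")?; // placeholder echo server
    socket.set_nonblocking(true)?;

    let interval = Duration::from_millis(20);
    let start = Instant::now();
    let mut next_send = start;

    for seq in 0u32..500 {
        // Send on schedule; never block waiting for the previous pong.
        socket.send(&seq.to_be_bytes())?;

        // Drain whatever replies happen to have arrived by now.
        let mut buf = [0u8; 4];
        while let Ok(4) = socket.recv(&mut buf) {
            let echoed = u32::from_be_bytes(buf);
            println!("seq {echoed} back after {:?}", start.elapsed());
        }

        next_send += interval;
        if let Some(wait) = next_send.checked_duration_since(Instant::now()) {
            std::thread::sleep(wait);
        }
    }
    Ok(())
}
```

The point of the fixed send clock is that every probe gets its own timestamp even while earlier ones are stuck in a queue, so an outage shows up as a cluster of late arrivals rather than a single long ping.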
Would it be correct to say that an RTC connection is isochronous? Here's a wacky idea for Crusader 0.4... I'm becoming fond of the display provided by the VSee Network Stability Tester (bit.ly/VSee). Specifically, it appears to create an RTC connection, then continually monitor latency and packet loss to provide interesting charts. The Crusader "Monitor" tab could fire up an RTC connection, then run comparable tests... (@Zoxc - sorry - this means more work for you...)
I have always been surprised by the sawtooth display that appears in latency charts. Here's an example:
It doesn't match my intuition of the physical effects that underlie the measurements. I can see why the front edge is so sharp: there was a big increase. But why does the back edge always slope down? And why is it always so linear?
I think I figured it out by watching this latency chart:
The spike began at -1.11 seconds, and it was back to (near) zero at -1.03 seconds. That's a difference of 80 msec, and lo and behold, the spike was 80 msec tall. Aha! And it seems to match the other observations I've made.
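The match isn't a coincidence. Here's a one-line model (an assumption, but it fits the numbers above): suppose every ping queued during a d-millisecond outage is released at the same instant T. Then a ping sent at time t measures

```latex
\[
  \mathrm{RTT}(t) = T - t, \qquad t \in [T - d,\; T].
\]
```

Plotted against send time, the back edge is a line of slope -1: it falls d ms over d ms of the x-axis, so an 80 ms spike comes out exactly 80 ms wide.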
My conclusion is that Crusader is "deaf" during the time it's waiting for a response to a UDP ping. When the response finally arrives (say, 80 msec later), Crusader records the ping time. If the next UDP ping returns within a couple of msec, the line connecting the two samples shows that linear, downward slope.
This means the plot doesn't correctly represent what's really happening.
Am I guessing the correct mechanism? Thanks.