Implement RPM methodology (draft-02) #562

Open · wants to merge 2 commits into master

Conversation

@FelixGaudin

Implementation for this issue, adding the RPM methodology.

The goal of this methodology is to measure responsiveness under working conditions.
It also measures throughput while the loaded-latency measurement is running.
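
For readers unfamiliar with the approach, here is a minimal sketch of the loaded-latency idea: latency probes keep running while a download or upload phase saturates the link. The helper names (`measureRtt`, `runThroughputTest`) are placeholders for illustration, not the PR's actual API.

```typescript
// Sketch only: probe latency while the regular throughput test saturates the link.
// measureRtt and runThroughputTest are hypothetical helpers, not this PR's functions.
async function measureLoadedLatency(
  measureRtt: () => Promise<number>,      // one round-trip time in ms
  runThroughputTest: () => Promise<void>, // the download or upload phase
  intervalMs = 100
): Promise<number[]> {
  const samples: number[] = [];
  let running = true;

  // Keep sampling latency in the background while the link is loaded.
  const probe = (async () => {
    while (running) {
      samples.push(await measureRtt());
      await new Promise(resolve => setTimeout(resolve, intervalMs));
    }
  })();

  await runThroughputTest(); // saturate the link with the normal speed test
  running = false;
  await probe;

  return samples; // loaded-latency samples, usable for RPM or "added latency"
}
```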

If you have any questions about the methodology, please let me know. You can also follow this link to join the Slack: https://join.slack.com/t/teamrpm/shared_invite/zt-1nt7d2yj8-bPnO8d3xwjkA7pbYRGq_Lw

@adolfintel
Member

This looks very useful and well written.

I honestly don't have time at the moment to test this thoroughly, write the documentation, and update the backend, but I don't want this to go to waste. I'll contact you soon.

@peacepenguin

This works really well, thanks so much @FelixGaudin for putting this together. I'm already using your fork in production to test for bufferbloat on my internal networks, in addition to the public bufferbloat test I also use regularly: https://www.waveform.com/tools/bufferbloat

The measurement units are a little rough for me to understand, though. I can't quite grasp what the RPM represents or what a 'good' RPM value is.

Then the "Factor of latency increase" i think means this: assume a non-loaded baseline ping latency of 10ms. Then a measured Download factor of latency increase of 2.0. I think means: while under the download test, the latency increased to 20ms.

The public Waveform bufferbloat test, for example, just lists a 'download latency +' value; the closer it is to 0 under load, the better.

I'll take a look at the code in a fork and see if I can add the raw 'added latency' values for up/down. I assume they're already being collected and used by your new code additions.
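
For reference, a minimal sketch of how an 'added latency' number (like Waveform's 'download latency +') could be derived from the same latency samples; the function names are illustrative, not this PR's code.

```typescript
// Illustrative helpers only, not this PR's implementation.
function median(values: number[]): number {
  const sorted = [...values].sort((a, b) => a - b);
  const mid = Math.floor(sorted.length / 2);
  return sorted.length % 2 ? sorted[mid] : (sorted[mid - 1] + sorted[mid]) / 2;
}

// Extra delay introduced while the link is saturated; closer to 0 is better.
function addedLatencyMs(unloadedSamples: number[], loadedSamples: number[]): number {
  return median(loadedSamples) - median(unloadedSamples);
}
```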

I'd also like to add storage of all this new latency data to the database storage mechanism, and to the generated result image for sharing.

Functionality-wise this seems to work great, though! Thanks!

@FelixGaudin
Author

The RPM is the number of round trips per minute. The metric is designed so that bigger is better: a poor value is around 300 RPM (200 ms), while a good value is high, e.g. 2400 RPM (25 ms).
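
In other words, RPM is just the reciprocal of the round-trip time scaled to a minute. A rough sketch of the conversion (illustrative code, not this PR's implementation):

```typescript
// 60 000 ms in a minute divided by the round-trip time in ms.
const rpmFromLatency = (rttMs: number): number => 60_000 / rttMs;
const latencyFromRpm = (rpm: number): number => 60_000 / rpm;

console.log(rpmFromLatency(200)); // 300 RPM  -> poor
console.log(rpmFromLatency(25));  // 2400 RPM -> good
```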

The idea of the factor of increase is to have a simpler metric: when you tell non-IT people that their link increases the latency by 50 ms, it's not easy for them to understand, but if you say the time taken is 5 times bigger, that's more understandable.

Also, the raw difference is sensitive to the unloaded value. For example, going from 5 ms (unloaded) to 25 ms (loaded) gives a difference of 20 ms, and going from 200 ms (unloaded) to 220 ms (loaded) gives the same 20 ms difference, but the impact is not the same.
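
A minimal sketch of the ratio, using the numbers above (illustrative code, not this PR's implementation): the factor normalizes by the unloaded baseline, unlike the raw difference.

```typescript
// Factor of latency increase: how many times slower a round trip becomes under load.
const latencyIncreaseFactor = (unloadedMs: number, loadedMs: number): number =>
  loadedMs / unloadedMs;

console.log(latencyIncreaseFactor(5, 25));    // 5.0 -> large relative impact
console.log(latencyIncreaseFactor(200, 220)); // 1.1 -> same +20 ms, much smaller impact
```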

So I think using a factor of latency increase can be better, but maybe there is a better way to name it or to present it.

Anyway, thank you a lot for your interest in this feature!

@vincejv

vincejv commented Aug 16, 2024

Is there any reason why this isn't merged yet?
Also, this should address issue #479, correct?

@adolfintel
Member

There is no particular reason, @vincejv; the project was simply unmaintained at the time because I was too busy working (still am).

If we want to add this feature, it needs to be rebased onto the current master. There's not a whole lot of point at this stage, since I'm planning a full rewrite of the codebase with version 6.0.

@peacepenguin

I've used the feature extensively in a custom build with the RPM feature merged in.

I love the concept, and it generally works great!

But sometimes the test duration randomly increases, so there may be some kind of bug with the overall test timing. I'm not sure whether that's intentional or needed to gather the RPM measurement.

One thing I'd like to see changed: the latency checks could be represented as a latency graph over the duration of the test.

With shading or something in the background, you could see during which phase of the speed test (upload vs. download) the latency spiked or increased.

That way a nice flat line indicates perfect bufferbloat mitigation, and any increase in latency during the test is visible as a simple rise of the latency line on the graph.
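
A hypothetical data shape for such a graph, where each latency probe is tagged with the test phase so the chart could shade download vs. upload regions (the type and field names are assumptions, not this PR's code):

```typescript
// Hypothetical sample format for a latency-over-time chart.
type TestPhase = "idle" | "download" | "upload";

interface LatencySample {
  tMs: number;      // time since the start of the test
  rttMs: number;    // measured round-trip time
  phase: TestPhase; // which part of the speed test was running
}

// A flat line across all phases would indicate well-mitigated bufferbloat;
// a rise during "download" or "upload" shows exactly where latency degrades.
function maxRttPerPhase(samples: LatencySample[]): Record<TestPhase, number> {
  const worst: Record<TestPhase, number> = { idle: 0, download: 0, upload: 0 };
  for (const s of samples) worst[s.phase] = Math.max(worst[s.phase], s.rttMs);
  return worst;
}
```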

The RPM measurement just isn't meaningful to the uninitiated. I'd rather know, in normal millisecond units, what the latency is unloaded vs. downloading vs. uploading.

Hope that helps! I love the app itself, and the RPM features as they are now, too!

Thanks for the hard work on this! Looking forward to seeing the rewrite you're planning!
