Lower throughput numbers reported with v3.16 #1768

Open
pb8o opened this issue Sep 18, 2024 · 5 comments
Comments


pb8o commented Sep 18, 2024

Context

  • Version of iperf3: iperf3 3.16 and above

  • Hardware: HP ProBook laptop

  • Operating system (and distribution, if any): Ubuntu 22.04

Please note: iperf3 is supported on Linux, FreeBSD, and macOS.
Support may be provided on a best-effort basis to other UNIX-like
platforms. We cannot provide support for building and/or running
iperf3 on Windows, iOS, or Android.

  • Other relevant information (for example, non-default compilers,
    libraries, cross-compiling, etc.):

Bug Report

  • Expected Behavior

Consistent reported throughput between versions 3.15 and earlier and versions 3.16 and later

  • Actual Behavior

iperf3 3.16 reports much lower throughput numbers

  • Steps to Reproduce
# ip netns exec netns-master-0 ./iperf3-amd64-3.15 -c 192.168.0.2 -n $(( 2**20 * 100 )) -N
Connecting to host 192.168.0.2, port 5201
[  5] local 192.168.0.1 port 40766 connected to 192.168.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-0.03   sec   100 MBytes  30.0 Gbits/sec  135   1.07 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-0.03   sec   100 MBytes  30.0 Gbits/sec  135             sender
[  5]   0.00-0.03   sec  97.6 MBytes  29.0 Gbits/sec                  receiver

iperf Done.
# ip netns exec netns-master-0 ./iperf3-amd64-3.16 -c 192.168.0.2 -n $(( 2**20 * 100 )) -N
Connecting to host 192.168.0.2, port 5201
[  5] local 192.168.0.1 port 40798 connected to 192.168.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   100 MBytes   838 Mbits/sec  187   1.22 MBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-1.00   sec   100 MBytes   838 Mbits/sec  187             sender
[  5]   0.00-1.00   sec   100 MBytes   838 Mbits/sec                  receiver

iperf Done.
  • Possible Solution

Sending more data seems to help

root@2245840c43f0:/firecracker# ip netns exec netns-master-0 ./iperf3-amd64-3.16 -c 192.168.0.2 -n $(( 2**20 * 50000 )) -N
Connecting to host 192.168.0.2, port 5201
[  5] local 192.168.0.1 port 40890 connected to 192.168.0.2 port 5201
[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec  4.46 GBytes  38.3 Gbits/sec   90    874 KBytes
[  5]   1.00-2.00   sec  4.22 GBytes  36.2 Gbits/sec  240   1007 KBytes
[  5]   2.00-3.00   sec  4.26 GBytes  36.6 Gbits/sec    0    973 KBytes
[  5]   3.00-4.00   sec  4.05 GBytes  34.8 Gbits/sec   45   1018 KBytes
[  5]   4.00-5.00   sec  4.29 GBytes  36.9 Gbits/sec    0   1.02 MBytes
[  5]   5.00-6.00   sec  4.20 GBytes  36.1 Gbits/sec    0    930 KBytes
[  5]   6.00-7.00   sec  4.02 GBytes  34.5 Gbits/sec    0    993 KBytes
[  5]   7.00-8.00   sec  4.12 GBytes  35.3 Gbits/sec    0    973 KBytes
[  5]   8.00-9.00   sec  4.03 GBytes  34.6 Gbits/sec   45   1.10 MBytes
[  5]   9.00-10.00  sec  4.35 GBytes  37.3 Gbits/sec    0   1.03 MBytes
[  5]  10.00-11.00  sec  4.47 GBytes  38.4 Gbits/sec    0    877 KBytes
[  5]  11.00-12.00  sec  2.36 GBytes  20.3 Gbits/sec    0    950 KBytes
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-12.00  sec  48.8 GBytes  34.9 Gbits/sec  420             sender
[  5]   0.00-12.00  sec  48.8 GBytes  34.9 Gbits/sec                  receiver

iperf Done.
@davidBar-On
Contributor

The reason for this issue is that in the multi-thread versions (starting from 3.16), iperf3 checks whether the -n bytes or -k blocks have been sent only at the end of each periodic throughput-report interval (-i), and the statistics are computed over the full interval. You can see that the 3.15 test took 0.03 sec while the 3.16 test took 1 sec (the default interval).

Can you explain the reason for running this test? I am asking to understand whether and why this is a useful test case. The point is that even with 3.15 the output statistics are questionable, since there are different overheads at the beginning and end of the test. E.g. it may be that sending took only 0.02 sec, but the statistics were computed over 0.03 sec because of these overheads.
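
If that is indeed the cause, then shortening the reporting interval should also shrink the window over which the -n check and the statistics are done. This is an untested sketch based on the explanation above, not a verified fix:

# Hypothetical workaround: with -i 0.1 the -n byte-count check runs every
# 0.1 s instead of every 1 s, so the test should end (and the statistics be
# computed) over a window much closer to the actual transfer time.
ip netns exec netns-master-0 ./iperf3-amd64-3.16 -c 192.168.0.2 \
    -n $(( 2**20 * 100 )) -N -i 0.1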


pb8o commented Sep 20, 2024

We are trying to test that a token-bucket rate limiter is working as expected. In particular, here we want to make sure that the throughput at the beginning (the N "burst" bytes) is higher than the throughput after the burst is over:

  1. check throughput up to BURST bytes
  2. check throughput after the burst
  3. the throughput of 1) should be higher than that of 2)

I think sending more data helps, as otherwise the transfer is too quick and, as you said, the numbers get skewed. (A rough sketch of the two-phase measurement is at the end of this comment.)

Just for context, this is throughput to/from a VM, so it all stays within the same host.
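
Roughly, the two phases could look like this. This is a hedged sketch, not our actual test harness; BURST_BYTES is a placeholder for the configured bucket size, and the namespace/addresses are the same as in the commands above:

# Phase 1: send exactly the burst budget; this should go out at the
# unthrottled rate while the token bucket still has tokens.
BURST_BYTES=$(( 2**20 * 100 ))   # placeholder for the configured burst size
ip netns exec netns-master-0 ./iperf3-amd64-3.16 -c 192.168.0.2 -n $BURST_BYTES -N

# Phase 2: run for a fixed time right afterwards, while the bucket is
# (mostly) drained, to measure the sustained rate-limited throughput.
ip netns exec netns-master-0 ./iperf3-amd64-3.16 -c 192.168.0.2 -t 10 -N

# Expectation: the phase-1 bitrate is clearly higher than the phase-2 bitrate.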

@davidBar-On
Contributor

Submitted PR #1775, which terminates the test immediately when all the data has been sent/received, similar to the pre-3.16 (pre-multi-thread) versions. If possible, it would be helpful if you could test using the PR's code, to give better confidence that it works properly.
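
One way to try the PR's code (a sketch, assuming the esnet/iperf repository and the standard autotools build; adjust paths to your setup):

# Fetch the PR branch via GitHub's pull/<ID>/head ref and build it.
git clone https://github.com/esnet/iperf.git && cd iperf
git fetch origin pull/1775/head:pr-1775 && git checkout pr-1775
./configure && make

# Then rerun the original reproduction command with the freshly built binary
# (src/iperf3 in the build tree, or the installed one after `make install`).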


bmah888 commented Sep 24, 2024

Independent of the PR, I'm not sure whether iperf3 is the best tool for testing a token-bucket rate limiter, because iperf3 is designed to operate on much longer timeframes than the depth of a token bucket. It might be better to look at the timing of individual packets in traces to see whether the rate limiter is behaving as desired.
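
For example (a hedged sketch; the interface name inside the namespace is a placeholder), one could capture the test traffic and inspect the inter-packet gaps:

# Capture the iperf3 traffic inside the namespace while the test runs
# (interface name is a placeholder).
ip netns exec netns-master-0 tcpdump -i eth0 -w /tmp/ratelimit.pcap 'port 5201'

# Print per-packet time deltas; a sustained rate limit should show up as
# evenly spaced packets once the initial burst is exhausted.
tcpdump -r /tmp/ratelimit.pcap -ttt | head -n 50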


pb8o commented Sep 25, 2024

Thanks @davidBar-On, I can confirm the PR restores the previous behavior. Thank you for acting so quickly!

@bmah888 yes, I agree this may not be the best way to validate the functionality on our end. We may want to revisit this.
