feat: benchmark trace context #905
Conversation
Codecov Report
@@ Coverage Diff @@
## master #905 +/- ##
============================================
+ Coverage 82.76% 82.79% +0.03%
Complexity 839 839
============================================
Files 113 113
Lines 2936 2936
Branches 251 251
============================================
+ Hits 2430 2431 +1
Misses 394 394
+ Partials 112 111 -1
Continue to review full report at Codecov.
@BenchmarkMode({Mode.Throughput, Mode.AverageTime})
@Fork(1)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 50_000, time = 1, timeUnit = TimeUnit.MILLISECONDS)
Why run for only a single millisecond? That feels so short it could cause unstable results.
For more readable output, I would instead run longer iterations of one second each and do 20–30 of them.
To make the output easier to read, you can also set the output time unit to milliseconds, microseconds, or nanoseconds; that avoids the scientific notation.
We should decrease the number of iterations but increase the runtime of each one. Good idea.
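The suggested settings could look roughly like the following. This is a hypothetical sketch of the configuration being discussed, not the final annotations in this PR; the iteration count (25) and output unit (microseconds) are illustrative choices within the reviewer's suggested range.

```java
// Hypothetical sketch: fewer, longer iterations, with a fixed output
// time unit so scores are printed as plain numbers, not scientific notation.
@BenchmarkMode({Mode.Throughput, Mode.AverageTime})
@Fork(1)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 25, time = 1)     // 20-30 one-second iterations
@OutputTimeUnit(TimeUnit.MICROSECONDS)      // readable per-operation numbers
```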
@TearDown(Level.Iteration)
public void tearDown() {
  this.spanContext = spanContextList.get(++iteration % spanContextList.size());
}
Why switch each iteration?
Because we want to test all the pairs in the benchmark without writing a loop inside the benchmark method.
Do you expect them to behave differently? If so, using @Param would make it easier to detect a change.
If there is no difference, why not just loop over them?
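The @Param alternative mentioned here could be sketched as below. This is hypothetical: the class name, field names, the input labels, and the `buildInputFor` helper are all made up for illustration; JMH runs the benchmark once per parameter value and reports a separate score for each, so per-input differences show up directly.

```java
// Hypothetical sketch: JMH reports one score per @Param value, making
// per-input differences visible instead of averaged together.
@State(Scope.Thread)
public class FromByteArrayBenchmark {
    @Param({"short-trace", "long-trace"})  // made-up input labels
    public String inputKind;

    private byte[] input;

    @Setup
    public void setup() {
        input = buildInputFor(inputKind);  // hypothetical helper
    }
}
```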
We discussed it with @carlosalberto, @bogdandrutu, and @jkwatson: we should have a set of different traceIds/spans and take the average time over them.
There is no loop because in JMH we should not put an ordinary for/while loop directly inside a @Benchmark method.
I think it makes sense to have different values for sanity, but this way, if for some strange reason they behave even a bit differently (it doesn't take much), you will get a score that goes up and down between iterations, which will be hard to explain and understand.
I would recommend iterating across all of them in the benchmark method. If you want to minimize the impact of the iteration itself, store them in an array instead of a list and do a classic for-loop over it.
If you still want the score to be per toByteArray call, you can use the @OperationsPerInvocation annotation and set its value to the size of the array.
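The round-robin selection discussed in this thread (rotate through a fixed pool via a modulo-incremented index, as the @TearDown hook does per iteration) can be sketched in plain Java outside JMH. The class and the string pool here are made-up stand-ins for the span-context list:

```java
// Minimal sketch (no JMH): rotate through a fixed pool of inputs using a
// modulo-incremented index, wrapping around at the end of the array.
public class RoundRobin {
    private final String[] pool;  // stand-in for the span-context pool
    private int index = 0;

    RoundRobin(String[] pool) {
        this.pool = pool;
    }

    // Returns the next element; after the last element, wraps back to the first.
    String next() {
        String current = pool[index];
        index = (index + 1) % pool.length;
        return current;
    }

    public static void main(String[] args) {
        RoundRobin rr = new RoundRobin(new String[] {"a", "b", "c"});
        // Four calls wrap back to the first element.
        System.out.println(rr.next() + rr.next() + rr.next() + rr.next());  // prints abca
    }
}
```

An array plus integer index keeps the per-call overhead to one array load and one modulo, which is the reviewer's point about preferring an array over a list here.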
@sfriberg my opinion on loops in JMH is based on this sample, which does include a loop together with @OperationsPerInvocation: https://hg.openjdk.java.net/code-tools/jmh/file/1a48e97d9dea/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_11_Loops.java
But I don't have a lot of experience with JMH.
Maybe it would be better to do something like this:
@Benchmark
@OperationsPerInvocation(3)
public void measureToByteArray(Blackhole bh) {
  bh.consume(binaryTraceContext.fromByteArray(firstByteArray));
  bh.consume(binaryTraceContext.fromByteArray(secondByteArray));
  bh.consume(binaryTraceContext.fromByteArray(thirdByteArray));
}
Yeah, generally it is nice to avoid loops, but in cases like this it is fine to iterate over the entries.
I see two options to consider.
For the first one, you need to weigh the overhead of the modulo calculation against the rest of the benchmarked code, to ensure it doesn't end up being the main part of your execution:
private static final byte[][] bytes = ...;
private int index = 0;

@Benchmark
public SpanContext measureToByteArray() {
  SpanContext result = binaryTraceContext.fromByteArray(bytes[index]);
  index = (index + 1) % bytes.length;
  return result;
}
To ensure a side effect, you can also do a basic add, as in this case; it should be trivial enough not to really impact your benchmark, and it probably generates less code than the Blackhole call:
private static final int COUNT = X;

@Benchmark
@OperationsPerInvocation(COUNT)
public int measureToByteArray() {
  int result = 0;
  for (int i = 0; i < COUNT; i++) {
    result += binaryTraceContext.fromByteArray(bytes[i]).length;
  }
  return result;
}
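For clarity on what @OperationsPerInvocation(COUNT) does to the reported score: JMH simply divides the measured time per invocation by COUNT when reporting average time, so each of the COUNT calls inside the loop is scored individually. A trivial sketch of that arithmetic, with made-up numbers:

```java
// Sketch of the score normalization behind @OperationsPerInvocation:
// per-op time = measured time for one benchmark invocation / ops per invocation.
public class OpsPerInvocation {
    static double perOpNanos(double invocationNanos, int opsPerInvocation) {
        return invocationNanos / opsPerInvocation;
    }

    public static void main(String[] args) {
        // Hypothetical: one invocation of the loop took 300 ns and covered 3 calls.
        System.out.println(perOpNanos(300.0, 3));  // prints 100.0
    }
}
```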
Good idea, I will try to rewrite the Http and Binary benchmarks.
@DotSpy what's the status of this PR? I'm sure at the very least it needs a rebase after all this time.
ping @DotSpy. I'd like to either close this, or get it up to date and merged.
Hello @jkwatson, it was on hold because we were thinking of removing Binary; if it is removed, this should be closed, otherwise I will update it and we can merge.
@DotSpy There currently isn't a binary format in the project. I'm not sure of the timeline for it coming back.
@jkwatson I will close it.