
feat: benchmark trace context #905

Closed · wants to merge 1 commit

Conversation

@DotSpy (Member) commented Feb 21, 2020

No description provided.

@codecov-io

Codecov Report

Merging #905 into master will increase coverage by 0.03%.
The diff coverage is n/a.

Impacted file tree graph

@@             Coverage Diff              @@
##             master     #905      +/-   ##
============================================
+ Coverage     82.76%   82.79%   +0.03%     
  Complexity      839      839              
============================================
  Files           113      113              
  Lines          2936     2936              
  Branches        251      251              
============================================
+ Hits           2430     2431       +1     
  Misses          394      394              
+ Partials        112      111       -1
Impacted Files                                         | Coverage Δ           | Complexity Δ
...elemetry/sdk/trace/export/BatchSpansProcessor.java  | 90.55% <0%> (+0.78%) | 7% <0%> (ø) ⬇️

Legend: Δ = absolute <relative> (impact), ø = not affected, ? = missing data

@BenchmarkMode({Mode.Throughput, Mode.AverageTime})
@Fork(1)
@Warmup(iterations = 5, time = 1)
@Measurement(iterations = 50_000, time = 1, timeUnit = TimeUnit.MILLISECONDS)
Contributor

Why run for only a single millisecond? That feels so short that it could cause unstable results.

Member Author

[attached screenshot of benchmark output]
It does tons of ops/s, no need to run it for longer.

Contributor

From a readability standpoint of the output data, I would probably run longer iterations of a second each and run 20-30 of them instead.

To make the output a bit more readable, you can also set the output time unit to milli, micro, or nano; that makes the results easier to read than the scientific notation.

Member Author

We should decrease the number of iterations but increase the runtime of each, good idea.
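
A minimal JMH sketch of the settings being discussed, assuming the suggestion above is adopted: fewer but longer iterations (seconds rather than a single millisecond) and an explicit output time unit so scores are printed per operation instead of in scientific notation. The class and method names here are illustrative, not the PR's actual code.

  import java.util.concurrent.TimeUnit;
  import org.openjdk.jmh.annotations.*;

  @BenchmarkMode({Mode.Throughput, Mode.AverageTime})
  @OutputTimeUnit(TimeUnit.NANOSECONDS)    // readable per-op numbers instead of scientific notation
  @Fork(1)
  @Warmup(iterations = 5, time = 1)        // 5 warmup iterations of 1 s each
  @Measurement(iterations = 20, time = 1)  // 20-30 measured iterations of 1 s each, as suggested
  @State(Scope.Thread)
  public class TraceContextBenchmarkSettings {

    @Benchmark
    public long placeholder() {
      // Stand-in body; the real benchmarks would call toByteArray/fromByteArray here.
      return System.nanoTime();
    }
  }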


  @TearDown(Level.Iteration)
  public void tearDown() {
    this.spanContext = spanContextList.get(++iteration % spanContextList.size());
  }
Contributor

Why switch each iteration?

Member Author

Because we want to test all the pairs in the benchmark without putting a loop inside the benchmark method.

Contributor

Do you expect them to behave differently? If that is the case, using params would make it easier to detect a change.
If there is no difference, why not just loop over them? A sketch of the params approach follows.
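
A hedged sketch of the @Param alternative mentioned above: each input becomes its own benchmark variant, so a change in one of them shows up as a separate score instead of being averaged into a single number. The header values and the trivial benchmark body are illustrative stand-ins, not the PR's actual code.

  import java.util.concurrent.TimeUnit;
  import org.openjdk.jmh.annotations.*;

  @State(Scope.Thread)
  @OutputTimeUnit(TimeUnit.NANOSECONDS)
  public class ParameterizedContextBenchmark {

    // JMH runs the benchmark once per value and reports a separate score for each.
    @Param({
      "00-ff000000000000000000000000000041-ff00000000000041-01",
      "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-00"
    })
    public String traceparent;

    @Benchmark
    public int parse() {
      // Stand-in work only; the real benchmark would parse the context here.
      return traceparent.length();
    }
  }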

Member Author

We discussed it with @carlosalberto, @bogdandrutu and @jkwatson: we should have a set of different traceIds/spans and take the average time over them.
No loop, because in JMH we should not write a plain for/while loop directly in a @Benchmark method.

Contributor

I think it makes sense to have different values for sanity, but this way, if for some strange reason they behave a bit differently (it doesn't take much), you will get a score that goes up and down between iterations, which will be hard to explain and understand.

I would recommend iterating across all of them in the benchmark method. If you want to minimize the impact of iteration, store them as an array instead of a list and do a classic for-loop over it.

If you still want the score to be per toByteArray call, you can use the @OperationsPerInvocation annotation and set its value to the size of the array.

Member Author

@sfriberg my opinion on loops in JMH is based on this example, which includes a loop and OperationsPerInvocation: https://hg.openjdk.java.net/code-tools/jmh/file/1a48e97d9dea/jmh-samples/src/main/java/org/openjdk/jmh/samples/JMHSample_11_Loops.java
But I don't have a lot of experience with JMH.
Maybe it would be better to do something like this:

  @Benchmark
  @OperationsPerInvocation(3)
  public void measureFromByteArray(Blackhole bh) {
    bh.consume(binaryTraceContext.fromByteArray(firstByteArray));
    bh.consume(binaryTraceContext.fromByteArray(secondByteArray));
    bh.consume(binaryTraceContext.fromByteArray(thirdByteArray));
  }

Contributor

Yeah, generally it is nice to avoid loops, but in cases like this it is nice to iterate over the entries.

I see two options to consider.
For the first one you need to consider the overhead of the modulo calculation compared to the rest of the benchmarked code, to make sure that doesn't end up being the main part of your execution:

  private static final byte[][] bytes = ...;
  private int index = 0;

  @Benchmark
  public SpanContext measureFromByteArray() {
    SpanContext result = binaryTraceContext.fromByteArray(bytes[index]);
    index = (index + 1) % bytes.length;
    return result;
  }

To ensure a side effect, you can also do a basic add, as in this case, which should be trivial enough to not really impact your benchmark, and it probably generates less code than the Blackhole call.

  private static final int COUNT = X;

  @Benchmark
  @OperationsPerInvocation(COUNT)
  public int measureFromByteArray() {
    int result = 0;
    for (int i = 0; i < COUNT; i++) {
      result += binaryTraceContext.fromByteArray(bytes[i]).hashCode();
    }
    return result;
  }
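
For completeness, a self-contained sketch of the second option with the state wiring that the fragment above leaves out. The input data and the decode() stand-in are assumptions; the real benchmark would call binaryTraceContext.fromByteArray on pre-serialized span contexts instead.

  import java.nio.charset.StandardCharsets;
  import org.openjdk.jmh.annotations.*;

  @State(Scope.Thread)
  public class LoopedDecodeSketch {

    private static final int COUNT = 3;
    private byte[][] inputs;

    @Setup
    public void setup() {
      // Pre-build COUNT different inputs once, outside the measured code.
      inputs = new byte[COUNT][];
      for (int i = 0; i < COUNT; i++) {
        inputs[i] = ("span-context-" + i).getBytes(StandardCharsets.UTF_8);
      }
    }

    @Benchmark
    @OperationsPerInvocation(COUNT) // keeps the reported score per call, not per loop
    public int measureDecode() {
      int result = 0;
      for (int i = 0; i < COUNT; i++) {
        // Accumulating a cheap derived value creates a data dependency on every call,
        // so the work cannot be dead-code eliminated without needing a Blackhole.
        result += decode(inputs[i]);
      }
      return result;
    }

    private static int decode(byte[] bytes) {
      // Trivial stand-in so the sketch runs without the project API on the classpath.
      int h = 0;
      for (byte b : bytes) {
        h = 31 * h + b;
      }
      return h;
    }
  }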

Member Author

Good idea, I will try to rewrite the Http and Binary benchmarks.

@jkwatson (Contributor)

@DotSpy what's the status of this PR? I'm sure at the very least it needs a rebase after all this time.

@jkwatson (Contributor)

Ping @DotSpy. I'd like to either close this or get it up-to-date and merged.

@DotSpy (Member Author) commented Jul 22, 2020

Hello @jkwatson, it was on hold because we were thinking about removing Binary. If it gets removed, this should be closed; otherwise I will update it and we can merge.

@jkwatson (Contributor)

@DotSpy There currently isn't a binary format in the project. I'm not sure of the timeline for it coming back.

@DotSpy (Member Author) commented Jul 31, 2020

@jkwatson I will close it.

@DotSpy DotSpy closed this Jul 31, 2020