Stress testing best practices? #3667
Hi, one thing I'm seeing is that you are capturing
I'm using
Overall the way the API is used is correct; this should be working. There must be some detail that's wrong. Could I get a pointer to
Sure, it's pretty straightforward.

```go
// IteratorRange represents a range of numbers to chunk slices into.
type IteratorRange struct{ From, To uint64 }

// GenerateRanges generates a list of ranges from min to max with a batch size.
func GenerateRanges(min, max uint64, batchSize uint64) []IteratorRange {
	if batchSize == 0 {
		batchSize = 1
	}
	var ranges []IteratorRange
	// this has to be from <= max to include the last batch
	for from := min; from <= max; from += batchSize {
		// without -1 the first batch will be 10-20 and the second batch will be 20-30
		// with -1 the first batch will be 10-19 and the second batch will be 20-29
		to := (from + batchSize) - 1
		if to > max {
			to = max
		}
		ranges = append(ranges, IteratorRange{from, to})
	}
	return ranges
}
```
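Not from the thread, but as a quick sanity check the output for a small input can be printed directly. This sketch repeats `IteratorRange` and `GenerateRanges` from the comment above so it compiles standalone:

```go
package main

import "fmt"

// IteratorRange and GenerateRanges are copied from the comment above
// so this example runs on its own.
type IteratorRange struct{ From, To uint64 }

func GenerateRanges(min, max uint64, batchSize uint64) []IteratorRange {
	if batchSize == 0 {
		batchSize = 1
	}
	var ranges []IteratorRange
	for from := min; from <= max; from += batchSize {
		to := (from + batchSize) - 1
		if to > max {
			to = max
		}
		ranges = append(ranges, IteratorRange{from, to})
	}
	return ranges
}

func main() {
	// The final range is clamped to max, so the last batch may be short.
	fmt.Println(GenerateRanges(10, 35, 10)) // [{10 19} {20 29} {30 35}]
}
```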
Well, it looks like To is inclusive (it's
I'm working on implementing load tests for a Raft implementation on top of Pebble, and I'm running into sync issues when inserting many large batches.
Here's a snippet of log storage and fetching:
Here's my stress test:
I'm consistently getting 10+ `pebble: not found` errors on keys despite all batches having sync points plus a final flush to disk, so I suspect I'm doing something wrong. I was reading the docs on large batches, but I can't see what I'm doing wrong. I've noticed that most of the keys that aren't found are the same keys, so I suspect there are internal sync issues, but I expected the forced sync points to alleviate that. What is the right way to sync large batch inserts for immediate reads in stress testing?
Jira issue: PEBBLE-40