Update the vulture to use backend search #1297

Merged: 3 commits, merged on Feb 23, 2022
2 changes: 2 additions & 0 deletions CHANGELOG.md

```diff
@@ -1,5 +1,7 @@
 ## main / unreleased

+* [CHANGE] Vulture now exercises search at any point during the block retention to test full backend search.
+  **BREAKING CHANGE** Dropped `tempo-search-retention-duration` parameter. [#1297](https://github.com/grafana/tempo/pull/1297) (@joe-elliott)
 * [FEATURE]: v2 object encoding added. This encoding adds a start/end timestamp to every record to reduce proto marshalling and increase search speed.
   **BREAKING CHANGE** After this rollout the distributors will use a new API on the ingesters. As such you must rollout all ingesters before rolling the
   distributors. Also, during this period, the ingesters will use considerably more resources and as such should be scaled up (or incoming traffic should be
```
11 changes: 6 additions & 5 deletions cmd/tempo-vulture/main.go

```diff
@@ -37,7 +37,6 @@ var (
 	tempoReadBackoffDuration     time.Duration
 	tempoSearchBackoffDuration   time.Duration
 	tempoRetentionDuration       time.Duration
-	tempoSearchRetentionDuration time.Duration

 	logger *zap.Logger
 )
@@ -64,7 +63,6 @@ func init() {
 	flag.DurationVar(&tempoReadBackoffDuration, "tempo-read-backoff-duration", 30*time.Second, "The amount of time to pause between read Tempo calls")
 	flag.DurationVar(&tempoSearchBackoffDuration, "tempo-search-backoff-duration", 60*time.Second, "The amount of time to pause between search Tempo calls. Set to 0s to disable search.")
 	flag.DurationVar(&tempoRetentionDuration, "tempo-retention-duration", 336*time.Hour, "The block retention that Tempo is using")
-	flag.DurationVar(&tempoSearchRetentionDuration, "tempo-search-retention-duration", 10*time.Minute, "The ingester retention we expect to be able to search within")
 }

 func main() {
@@ -174,7 +172,7 @@ func main() {
 	if tickerSearch != nil {
 		go func() {
 			for now := range tickerSearch.C {
-				_, seed := selectPastTimestamp(startTime, now, interval, tempoSearchRetentionDuration)
+				_, seed := selectPastTimestamp(startTime, now, interval, tempoRetentionDuration)
 				log := logger.With(
 					zap.String("org_id", tempoOrgID),
 					zap.Int64("seed", seed.Unix()),
@@ -337,8 +335,11 @@ func searchTag(client *util.Client, seed time.Time) (traceMetrics, error) {
 	)
 	logger.Info("searching Tempo")

-	// Use the search API to find details about the expected trace
-	resp, err := client.Search(fmt.Sprintf("%s=%s", attr.Key, attr.Value.GetStringValue()))
+	// Use the search API to find details about the expected trace. give an hour range
+	// around the seed.
+	start := seed.Add(-30 * time.Minute).Unix()
+	end := seed.Add(30 * time.Minute).Unix()
+	resp, err := client.SearchWithRange(fmt.Sprintf("%s=%s", attr.Key, attr.Value.GetStringValue()), start, end)
 	if err != nil {
 		logger.Error(fmt.Sprintf("failed to search traces with tag %s: %s", attr.Key, err.Error()))
 		tm.requestFailed++
```