Recommendations on updating export interval for currency service #1330
Comments
Thanks for reporting this. A 1-second export interval is quite aggressive. This should be left as the default, which is every 60 seconds. Also, it overrides temporality to be delta when all other metrics emitted by the demo are cumulative. I'll get both of these issues fixed up.
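For deployments where rebuilding the app isn't an option, both settings can in principle be expressed through the standard OTel SDK environment variables. A minimal sketch, assuming a docker-compose style deployment; the service name and compose layout are illustrative, and C++ SDK support for these variables may vary:

```yaml
# Hypothetical compose override for the currency service.
# Per the OTel spec, OTEL_METRIC_EXPORT_INTERVAL is in milliseconds.
services:
  currencyservice:
    environment:
      - OTEL_METRIC_EXPORT_INTERVAL=60000  # back to the 60-second default
      - OTEL_EXPORTER_OTLP_METRICS_TEMPORALITY_PREFERENCE=cumulative
```

Whether these take effect depends on the SDK honoring them, which is exactly the open question for the C++ instrumentation here.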
@puckpuck thanks! I didn't want to assume it was an issue with the app, since the error was technically coming from my backend. I am still curious how someone would address this in a scenario where they don't have access to modify the app. Say I owned the metrics pipeline, and a team's app was exporting too aggressively like this. Would I use the collector to just drop points? Or batch/aggregate them?
A collector in the pipeline could be used to batch this up, yes.
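A minimal collector pipeline along those lines might look like the following sketch; the googlecloud exporter is an assumption based on the GCP backend mentioned in this thread, and it requires credentials to be configured separately:

```yaml
# Sketch of a collector config that batches metrics before export.
receivers:
  otlp:
    protocols:
      grpc:

processors:
  batch:
    timeout: 60s  # accumulate up to a minute of data per export request

exporters:
  googlecloud:  # contrib exporter; assumes GCP credentials are available

service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [googlecloud]
```

Note that the batch processor groups export requests rather than merging individual datapoints, so it reduces request frequency but does not by itself collapse same-series points the way the metrics transform processor aims to.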
Question
Hi, I'm trying to run the demo on GCP and hitting a consistent error which I think could be fixed by changing the export interval of the currency service from 1s to 5s+. I only need this due to a GCP-specific limitation on the monitoring backend, so I was hoping there was a way to do this with configuration instead of recompiling the app.
More specifically, the collector returns an error on export from the GCP metrics service indicating that points are being sent too frequently. I see this occasionally from other demo components too, but it is pretty consistent from the currency service.
First I tried fixing this with collector processors (like the metrics transform processor) to aggregate points in the same batch. I also tried setting the OTEL_METRIC_EXPORT_INTERVAL env var on the currency service pod, but it doesn't look like that had any effect (I'm not sure whether the C++ instrumentation handles that environment variable). Wondering if you have any recommendations for tweaking these settings, or for handling this kind of rate limiting/aggregation in the collector.