Add detailed tracking for start-of-invocation parse time #2823
@jtcohen6 I've been thinking about this as a breakdown of parse time by node type. Maybe an example payload would be:
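A hypothetical sketch of such a payload (the field names and shape here are illustrative assumptions, not dbt's actual tracking schema):

```python
# Hypothetical tracking payload: parse time broken down by node type.
# All field names are illustrative only, not dbt's actual schema.
payload = {
    "project_id": "abc123",  # anonymized project identifier
    "parse_times_ms": {
        "models": 412,
        "tests": 38,
        "snapshots": 12,
        "macros": 905,
        "seeds": 7,
        "sources": 21,
    },
    "total_parse_time_ms": 1395,
}

# Sanity check: per-type timings sum to the reported total.
assert sum(payload["parse_times_ms"].values()) == payload["total_parse_time_ms"]
```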
I think it might be challenging to break down parse times across file loading / Jinja interpolation / data encoding/decoding. Instead, we can use this dataset to understand empirical performance across node types, then drill into specific performance improvements in a testing/sample project using something like cProfile and snakeviz. That's just my 2 cents - you buy it?
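As a sketch of that cProfile approach on a sample project (the `parse_project` function here is a stand-in workload, not dbt's real parsing entry point):

```python
import cProfile
import io
import pstats

def parse_project():
    # Stand-in workload; in practice you would profile dbt's actual
    # parsing entry point inside a sample project.
    return sum(i * i for i in range(100_000))

profiler = cProfile.Profile()
profiler.enable()
parse_project()
profiler.disable()

# Dump stats to a file that snakeviz can visualize: `snakeviz parse.prof`
profiler.dump_stats("parse.prof")

# Or print the hottest entries directly, sorted by cumulative time.
stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
print(stream.getvalue())
```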
Totally. I imagined the difficulties here to be reversed, whereby it'd be reasonable to break down parse time by functional area of the codebase and quite challenging to break it down by specific node. I'm not exactly sure why I thought that, but I'm equally happy with the approach you outline.
We settled on tracking the higher-level timing info that is already being recorded by #2883.
Describe the feature
Proposal: Extend our existing opt-out anonymous usage tracking to present a detailed picture of project size, resource distribution, and start-of-invocation steps with associated execution times.
As for the implementation, I'm taking some indirect inspiration from Snowplow's performance timing context. I'm envisioning a custom structured event that does two things:
Found 1 model, 1 test, 1 snapshot, 1 analysis, 143 macros, 0 operations, 4 seed files, 0 sources
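A minimal sketch of recording per-step timings at the start of an invocation, which could then be attached to such a structured event (names like `track_step` are illustrative, not dbt's API):

```python
import time
from contextlib import contextmanager

step_timings = []

@contextmanager
def track_step(name):
    # Record wall-clock time for one start-of-invocation step.
    start = time.perf_counter()
    try:
        yield
    finally:
        step_timings.append(
            {"step": name, "elapsed_ms": (time.perf_counter() - start) * 1000}
        )

with track_step("load_project_config"):
    time.sleep(0.01)  # stand-in for real work
with track_step("parse_manifest"):
    time.sleep(0.02)  # stand-in for real work

# The resulting list could be sent as part of a tracking event payload.
for timing in step_timings:
    print(f"{timing['step']}: {timing['elapsed_ms']:.1f} ms")
```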
Describe alternatives you've considered
During dbt run, we could take the difference between the invocation start and the first model run to approximate the total time spent on start-of-invocation work.
Who will this benefit?