Support compiling/executing SQL in the context of a model #46
Comments
This is a good idea! Thanks for opening, @drewbanin. What I'm wondering: should compiling/running manifest nodes be a totally separate process from compiling/running arbitrary dbt code? If we know that a file maps to a node in the manifest, might we want to actually compile that node directly? So, rather than grabbing the SQL from
One important difference is that the

This gets slightly trickier for running SQL / previewing data, since we wouldn't want to actually execute the dbt model / materialization. In that case, we'd have to compile the node (as above) and then pass the compiled SQL into the

A downside of this approach is that the RPC server will compile the node's

I think that limitation applies to both approaches, though. If a model's SQL references
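To make the two approaches above concrete, here is a minimal sketch of what the request payloads might look like. This assumes the dbt RPC server's JSON-RPC interface, with a project-level `compile` method that accepts a model selector and a `compile_sql` method that takes base64-encoded SQL; the exact parameter names here are an assumption, not confirmed by this thread.

```python
import base64
import json
import uuid

def compile_node_request(model_name):
    """Payload asking the server to compile an existing manifest node
    directly, scoped via model selection (the first approach above).
    Parameter names are assumed, not taken from this thread."""
    return {
        "jsonrpc": "2.0",
        "method": "compile",
        "id": str(uuid.uuid4()),
        "params": {"models": model_name},
    }

def compile_sql_request(raw_sql, name="adhoc"):
    """Payload for compiling arbitrary SQL. The RPC server is assumed
    to expect the SQL body base64-encoded in the "sql" param."""
    return {
        "jsonrpc": "2.0",
        "method": "compile_sql",
        "id": str(uuid.uuid4()),
        "params": {
            "sql": base64.b64encode(raw_sql.encode()).decode(),
            "name": name,
            "timeout": 60,
        },
    }

print(json.dumps(compile_sql_request("select * from {{ ref('my_model') }}"), indent=2))
```

Under this sketch, the question in the comment is whether an editor that knows a file's manifest node should send the first payload instead of the second, so the server compiles the real node rather than an anonymous SQL snippet.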
+1: a dbt Cloud user is trying to access
This issue has been marked as Stale because it has been open for 180 days with no activity. If you would like the issue to remain open, please remove the stale label or comment on the issue, or it will be closed in 7 days.
A customer reached out about

We are not going to be able to solve this within the constraints of the
Describe the feature
Today, the rpc server supports compiling/executing SQL in the context of a dbt project. This is good! Unfortunately, dbt uses a fake/stub compilation context for these compile/execute tasks, so model-specific variables like `this` or macros like `is_incremental()` return incorrect values in compilation. Instead, the rpc server should provide a mechanism for specifying the node (eg. by unique_id or path) which should provide the compilation context for the rpc method.

Describe alternatives you've considered
Not sure how else something like this could work... possibly by providing a set of context overrides alongside the rpc call?
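The mechanism proposed in the feature description could be sketched as an extra parameter on the existing SQL-compilation method. Everything below except the `node` key mirrors the assumed `compile_sql` request shape; `node` is the hypothetical addition that names the manifest entry (here by `unique_id`) whose context should be used when rendering, so that `this` and `is_incremental()` resolve correctly.

```python
import base64
import uuid

def compile_sql_in_node_context(raw_sql, unique_id):
    """Hypothetical request shape for the proposed feature: compile
    arbitrary SQL, but resolve the compilation context from a specific
    manifest node instead of dbt's stub context. The "node" parameter
    does not exist today; it is the suggestion from this issue."""
    return {
        "jsonrpc": "2.0",
        "method": "compile_sql",
        "id": str(uuid.uuid4()),
        "params": {
            "sql": base64.b64encode(raw_sql.encode()).decode(),
            "name": "adhoc",
            "timeout": 60,
            # Proposed addition: the manifest node supplying the context
            "node": unique_id,
        },
    }
```

The alternative floated above (context overrides passed alongside the call) would instead replace `node` with an explicit dict of values, e.g. a `context_overrides` parameter, at the cost of the caller having to know which context members matter.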
Additional context
Who will this benefit?
Users who leverage the rpc server as a part of their dbt code development workflow; dbt Cloud users
Are you interested in contributing this feature?
I would be happy to help :)