Fixes #1611
The culprit was `get_ancestor_ephemeral_nodes`, which looked at the set of all selected models and then asked for all ancestors of each. If you select all of your models, that means we go through all models for every model. Oops!

This fix is kind of a hack, but it just adds all ephemeral nodes to the run list. Since ephemeral nodes run really fast (because they don't actually run), that seems fine. We will have to compile each ephemeral node regardless of your selection, but that is still O(n) 🙂
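To illustrate the scaling difference, here is a hedged sketch (the `parents` dict and node names are made up, not dbt's actual data structures): walking ancestors once per selected node costs up to O(n) per node, so selecting all n nodes is O(n^2), while unconditionally adding every ephemeral node is a single linear pass.

```python
# Hypothetical DAG: each node maps to the set of its direct parents.
parents = {
    "model_a": set(),
    "eph_1": {"model_a"},
    "eph_2": {"eph_1"},
    "model_b": {"eph_2"},
}
ephemeral = {"eph_1", "eph_2"}


def ancestors(node):
    """Walk all ancestors of one node -- up to O(n) by itself."""
    seen = set()
    stack = list(parents[node])
    while stack:
        cur = stack.pop()
        if cur not in seen:
            seen.add(cur)
            stack.extend(parents[cur])
    return seen


# Old approach: one ancestor walk per selected node. Selecting all
# n nodes means n walks of up to n nodes each -- O(n^2).
selected = set(parents)
slow = selected | {
    a for node in selected for a in ancestors(node) if a in ephemeral
}

# New approach: just union in every ephemeral node -- O(n). It may
# over-include ephemeral nodes no selected model depends on, which is
# fine because ephemeral nodes don't actually run.
fast = selected | ephemeral
```

With everything selected the two approaches pick the same set; the hack only ever adds extra ephemeral nodes, never real work.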
When we implement partial parsing/compilation it's possible this will become a pain point again, but only if you have an enormous number of ephemeral models and are selecting a tiny subset of your graph. I'll be quite happy to revisit the problem then, as we could definitely choose to memoize node selection (it would also make the case of many overlapping `+my_model` calls faster).

I also removed the
`manifest.to_flat_graph()`
call that happens in context creation, which is another O(n^2) thing. This is kind of a breaking change, but I'm not sorry: we never documented the `graph` entry in the environment, so people shouldn't have been using it anyway.
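The memoization idea mentioned above could be sketched like this (illustrative only; `PARENTS` and the node names are hypothetical): caching each node's ancestor set means overlapping `+my_model`-style selections share work instead of re-walking their common ancestry.

```python
from functools import lru_cache

# Hypothetical DAG: node -> tuple of direct parents.
PARENTS = {
    "raw": (),
    "staging": ("raw",),
    "model_a": ("staging",),
    "model_b": ("staging",),
}


@lru_cache(maxsize=None)
def ancestors(node):
    """Memoized ancestor lookup: each node's ancestor set is computed
    at most once, so selections like +model_a and +model_b reuse the
    cached result for their shared parent instead of re-walking it."""
    result = set()
    for parent in PARENTS[node]:
        result.add(parent)
        result |= ancestors(parent)  # cache hit after the first call
    return frozenset(result)
```

Because the recursion goes through the cache, `ancestors("model_b")` reuses the already-computed `ancestors("staging")` from the `+model_a` walk.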