
Fix interrupted InferenceContext call chains #2209

Merged

Conversation

jacobtylerwalls
Member

Type of Changes

Type
βœ“ πŸ› Bug fix
βœ“ πŸ”¨ Refactoring

Description

ClassDef.getitem() and infer_argument() both had interrupted call chains where the InferenceContext wasn't passed all the way through to infer(). This caused performance problems in packages such as sqlalchemy that rely on these features.

Closes pylint-dev/pylint#8150

Linting the example from pylint-dev/pylint#8150 now takes half as much time.
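To make the bug pattern concrete, here is a minimal, self-contained sketch of an "interrupted call chain" — this is illustrative code, not astroid's actual API. A shared context memoizes inference results; a helper that forgets to forward the context forces the expensive work to be redone on every call:

```python
# Hypothetical sketch (not astroid internals): a context that caches
# inference results, and two helpers -- one drops the context, one
# forwards it all the way through to infer().

class InferenceContext:
    def __init__(self):
        self.cache = {}

CALLS = {"expensive": 0}  # counts how often real inference work runs

def infer(node, context=None):
    if context is None:            # context dropped -> fresh cache every time
        context = InferenceContext()
    if node not in context.cache:
        CALLS["expensive"] += 1    # stands in for the expensive inference
        context.cache[node] = f"type-of-{node}"
    return context.cache[node]

def getitem_broken(node, context=None):
    return infer(node)             # bug: context not forwarded

def getitem_fixed(node, context=None):
    return infer(node, context)    # fix: pass context through the chain

ctx = InferenceContext()
for _ in range(3):
    getitem_broken("n", ctx)
broken_calls = CALLS["expensive"]  # inference redone on every call

CALLS["expensive"] = 0
ctx = InferenceContext()
for _ in range(3):
    getitem_fixed("n", ctx)
fixed_calls = CALLS["expensive"]   # cached in the shared context after one call
```

With the broken helper the expensive step runs once per call; with the fixed helper it runs once total, which is the kind of repeated work the PR eliminates.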

@jacobtylerwalls jacobtylerwalls added topic-performance pylint-tested PRs that don't cause major regressions with pylint labels Jun 11, 2023
@jacobtylerwalls jacobtylerwalls added this to the 3.0.0a5 milestone Jun 11, 2023
Collaborator

@DanielNoord DanielNoord left a comment


I'd love to talk about your strategy finding these fixes. Seems like you have a better strategy than I have πŸ˜„

@codecov

codecov bot commented Jun 11, 2023

Codecov Report

Merging #2209 (c477795) into main (1fbbf25) will increase coverage by 0.00%.
The diff coverage is 100.00%.


@@           Coverage Diff           @@
##             main    #2209   +/-   ##
=======================================
  Coverage   92.68%   92.68%           
=======================================
  Files          94       94           
  Lines       10828    10830    +2     
=======================================
+ Hits        10036    10038    +2     
  Misses        792      792           
Flag      Coverage Δ
linux     92.44% <100.00%> (+<0.01%) ⬆️
pypy      87.64% <100.00%> (+<0.01%) ⬆️
windows   92.28% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.

Impacted Files                              Coverage Δ
astroid/arguments.py                        99.24% <100.00%> (ø)
astroid/interpreter/dunder_lookup.py        100.00% <100.00%> (ø)
astroid/nodes/scoped_nodes/scoped_nodes.py  91.96% <100.00%> (ø)

@jacobtylerwalls
Member Author

  1. @mbyrnepr2 pointed me in the right direction to the enum brain
  2. a profile showed that is_enum_subclass() was the culprit, and the methods around mro() were taking about as long
  3. wasted some effort trying to cache is_enum_subclass()
  4. realized that if caching wasn't working, then the InferenceContext was unnecessarily being refreshed
  5. added a bunch of print statements to find out why the InferenceContexts were not the same
  6. set a breakpoint, looked at the call stack, and found where they went missing! 😄
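The profiling step above (step 2) can be reproduced with the standard-library cProfile module. This is a generic sketch, not the actual astroid session: is_enum_subclass() and lint() here are stand-in names for whatever hot function and entry point you are investigating.

```python
# Sketch of the profiling step: run the workload under cProfile and sort
# by cumulative time so hot functions surface at the top of the report.
import cProfile
import io
import pstats

def is_enum_subclass(cls):
    # Stand-in for the real expensive check found in the profile.
    return sum(i for i in range(1000)) >= 0

def lint(n):
    # Stand-in entry point that hammers the expensive check.
    for _ in range(n):
        is_enum_subclass(object)

profiler = cProfile.Profile()
profiler.enable()
lint(50)
profiler.disable()

stream = io.StringIO()
pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(5)
report = stream.getvalue()
print(report)
```

Functions that dominate cumulative time (here, the stand-in is_enum_subclass) appear near the top of the printed stats, which is how a culprit like this one gets identified before moving on to print statements and breakpoints.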

@jacobtylerwalls jacobtylerwalls merged commit 61ca2e8 into pylint-dev:main Jun 12, 2023
@jacobtylerwalls jacobtylerwalls deleted the cache-is-enum-subclass branch June 12, 2023 10:46
Development

Successfully merging this pull request may close these issues.

SQLAlchemy 2.0.0 takes forever to lint