
Improve performance of keepalive rescheduling #8662

Merged
bdraco merged 9 commits into master from keepalive_reschedule_churn on Aug 9, 2024

Conversation

@bdraco (Member) commented Aug 8, 2024

What do these changes do?

If not all handlers were done, the keep-alive timer would get rescheduled every second until the current time exceeded self._keepalive_time + self._keepalive_timeout.

Instead we reschedule the timer for the expected keep-alive close time if the timer handle fires too early (a sketch follows the summary below).

Are there changes in behavior for the user?

no

Is it a substantial burden for the maintainers to support this?

no

Related issue: #8613

Before, we would see 1 timer handle created every second for 60s per connection. After, we get 1 timer handle total, armed for when the keepalive timeout is due.
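
For illustration, here is a minimal, hypothetical sketch of the rescheduling idea (the class, its touch() method, and every name beyond self._keepalive_time / self._keepalive_timeout are invented for this example; this is not the actual aiohttp web_protocol.py implementation):

```python
import asyncio


class KeepaliveSketch:
    """Hypothetical sketch of the early-fire rescheduling described above."""

    def __init__(self, loop: asyncio.AbstractEventLoop, keepalive_timeout: float) -> None:
        self._loop = loop
        self._keepalive_timeout = keepalive_timeout
        self._keepalive_time = loop.time()  # last time the connection went idle
        # One timer handle, armed for the current close deadline.
        self._keepalive_handle = loop.call_later(keepalive_timeout, self._process_keepalive)

    def touch(self) -> None:
        # Called when a request finishes: push the deadline forward without
        # re-arming the timer.  The outstanding handle will notice it fired
        # early and reschedule itself exactly once.
        self._keepalive_time = self._loop.time()

    def _process_keepalive(self) -> None:
        self._keepalive_handle = None
        close_time = self._keepalive_time + self._keepalive_timeout
        now = self._loop.time()
        if now < close_time:
            # Timer fired too early: re-arm once for the expected close time
            # instead of re-checking every second until the deadline passes.
            self._keepalive_handle = self._loop.call_at(close_time, self._process_keepalive)
            return
        self._close_connection()

    def _close_connection(self) -> None:
        print("keepalive timeout reached; closing connection")
```

The key property is that touch() never allocates a new timer handle; at most one handle is outstanding per connection, and it re-arms itself at most once per early fire.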

(Screenshots: timer handle churn before the change, after the change, and after_take_2.)

Commit messages:

If not all handlers were done, the keep-alive would get rescheduled every second until the current time was greater than self._keepalive_time + self._keepalive_timeout.

Instead we will schedule the timer for either 1s later or self._keepalive_time + self._keepalive_timeout, whichever is greater, to avoid creating many timer handles.
@bdraco bdraco added this to the 3.10.3 milestone Aug 8, 2024
@bdraco bdraco added backport-3.10 Trigger automatic backporting to the 3.10 release branch by Patchback robot backport-3.11 Trigger automatic backporting to the 3.11 release branch by Patchback robot labels Aug 8, 2024
@bdraco (Member, Author) commented Aug 8, 2024

Before, we would see 1 timer handle created every second for 60s per connection. After, we get 1 timer handle total.

codecov bot commented Aug 8, 2024

Codecov Report

All modified and coverable lines are covered by tests ✅

Project coverage is 97.97%. Comparing base (406cd2c) to head (63531a8).

Additional details and impacted files
@@           Coverage Diff           @@
##           master    #8662   +/-   ##
=======================================
  Coverage   97.97%   97.97%           
=======================================
  Files         107      107           
  Lines       33791    33794    +3     
  Branches     3969     3969           
=======================================
+ Hits        33107    33110    +3     
  Misses        507      507           
  Partials      177      177           
Flag Coverage Δ
CI-GHA 97.88% <100.00%> (+<0.01%) ⬆️
OS-Linux 97.53% <75.00%> (-0.01%) ⬇️
OS-Windows 95.91% <100.00%> (+<0.01%) ⬆️
OS-macOS 97.20% <75.00%> (-0.01%) ⬇️
Py-3.10.11 97.34% <100.00%> (+<0.01%) ⬆️
Py-3.10.14 97.27% <75.00%> (-0.01%) ⬇️
Py-3.11.9 97.51% <100.00%> (+<0.01%) ⬆️
Py-3.12.4 97.62% <100.00%> (+<0.01%) ⬆️
Py-3.8.10 95.55% <100.00%> (+<0.01%) ⬆️
Py-3.8.18 97.06% <75.00%> (-0.01%) ⬇️
Py-3.9.13 97.21% <100.00%> (+<0.01%) ⬆️
Py-3.9.19 97.15% <75.00%> (-0.01%) ⬇️
Py-pypy7.3.16 96.74% <75.00%> (+<0.01%) ⬆️
VM-macos 97.20% <75.00%> (-0.01%) ⬇️
VM-ubuntu 97.53% <75.00%> (-0.01%) ⬇️
VM-windows 95.91% <100.00%> (+<0.01%) ⬆️

Flags with carried forward coverage won't be shown.


@psf-chronographer psf-chronographer bot added the bot:chronographer:provided There is a change note present in this PR label Aug 9, 2024
@bdraco (Member, Author) commented Aug 9, 2024

I went to do some archeology before marking this ready, as the previous code didn't make much sense to me.

@bdraco (Member, Author) commented Aug 9, 2024

It looks like the 1s delay was added in ad2366e.

Previously the timer was scheduled for next, but it looks like that caused a problem for websockets, and that code has since been removed. It also looks like it caused a problem where it might fire too often, but that's not an issue with this implementation because we use max(next, now + 1), so it will never fire more than once per second.
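
As a rough sketch of that clamp (next_keepalive_when is a made-up helper name, not aiohttp code):

```python
def next_keepalive_when(now: float, keepalive_time: float, keepalive_timeout: float) -> float:
    """When to fire the keepalive timer: the expected close time, clamped so
    the callback never runs more often than once per second."""
    next_close = keepalive_time + keepalive_timeout
    return max(next_close, now + 1.0)


# Deadline already passed or imminent: clamp to the 1s floor.
assert next_keepalive_when(now=10.0, keepalive_time=5.0, keepalive_timeout=2.0) == 11.0
# Deadline in the future: fire exactly at the deadline, one handle total.
assert next_keepalive_when(now=10.0, keepalive_time=10.0, keepalive_timeout=30.0) == 40.0
```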

@bdraco bdraco marked this pull request as ready for review August 9, 2024 00:26
CHANGES/8662.misc.rst (outdated review thread, resolved)
@bdraco bdraco marked this pull request as draft August 9, 2024 14:25
aiohttp/web_protocol.py (outdated review thread, resolved)
CHANGES/8662.misc.rst (outdated review thread, resolved)
@bdraco (Member, Author) commented Aug 9, 2024

Thanks. I like the new version much better.

@bdraco bdraco marked this pull request as ready for review August 9, 2024 16:55
@bdraco bdraco merged commit be23d16 into master Aug 9, 2024
37 of 38 checks passed
@bdraco bdraco deleted the keepalive_reschedule_churn branch August 9, 2024 16:55
patchback bot (Contributor) commented Aug 9, 2024

Backport to 3.10: 💚 backport PR created

✅ Backport PR branch: patchback/backports/3.10/be23d16fa95e77516b7199d9b0ae8a08e8c941f4/pr-8662

Backported as #8670

🤖 @patchback: I'm built with octomachinery and my source is open at https://github.com/sanitizers/patchback-github-app.

patchback bot (Contributor) commented Aug 9, 2024

Backport to 3.11: 💚 backport PR created

✅ Backport PR branch: patchback/backports/3.11/be23d16fa95e77516b7199d9b0ae8a08e8c941f4/pr-8662

Backported as #8671

🤖 @patchback: I'm built with octomachinery and my source is open at https://github.com/sanitizers/patchback-github-app.

patchback bot pushed a commit that referenced this pull request Aug 9, 2024
bdraco added a commit that referenced this pull request Aug 9, 2024
bdraco added a commit that referenced this pull request Aug 9, 2024