[Meta] Performance #1361
TarikGul added the I7 - Optimization 🚴 (Make Sidecar drive faster) and I9 - Meta (Meta discussions about Sidecar) labels on Nov 28, 2023
This issue has been mentioned on Polkadot Forum. There might be relevant details there: https://forum.polkadot.network/t/scaling-down-of-parity-s-public-infrastructure/4697/6
This was referenced Aug 30, 2024
Summary
Recently we have been posed with the question of how we can make Sidecar more performant. This can have multiple positive side effects, such as lower latency, lower server costs, and a more responsive API. As shown below, there are multiple avenues that can all contribute to Sidecar's performance, some internal and some external.
NOTE: everything I cover below is in reference to the `/blocks/{blockId}` endpoint. The majority of the other endpoints are very performant and don't have the same issues the blocks endpoint does.

My Environment
MacBook Pro 16-inch, M2, 32 GB memory.
I run Sidecar with `--inspect` in order to profile and inspect performance. I am also running an archive node locally.
External
Subway
Currently in beta, this library acts as an RPC middleware proxy that can cache RPC calls to the node. I ran 60-second load tests on Sidecar with and without Subway to demonstrate the effect. An important caveat is that I am running an archive node locally, which doesn't show the full power of Subway; when querying a remote archive node, the benefits of the cache are much more pronounced.

Without:
With:
Overall we can see a substantial benefit from the RPC cache layer, which increases Sidecar's throughput.
Local RPC vs non-local RPC node
/blocks/7753833 → 1477 ms (node hosted in Germany)
/blocks/7753833 → 479 ms (node hosted on my local machine)

This is fairly obvious, so I didn't dig too deep into it, but having your server closer to your RPC node lowers latency.
Internal
Let's talk about RPC requests :). Currently, a lot of the overhead for RPC calls comes from non-batched calls (calls that aren't wrapped in Promise.all). When you profile requests with `--inspect`, you can see the server sitting idle, waiting for responses before it can continue its operations.
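To illustrate the idle time, here is a minimal, self-contained sketch (mock timers, not Sidecar code) comparing calls awaited one by one against the same calls batched with Promise.all:

```typescript
// Illustrative sketch, not Sidecar code: two mock RPC calls of ~50 ms
// each, awaited one by one vs. batched with Promise.all.
const mockRpc = (result: string, ms = 50): Promise<string> =>
  new Promise((resolve) => setTimeout(() => resolve(result), ms));

// Sequential awaits: the server idles ~50 ms per call, ~100 ms total.
async function sequential(): Promise<number> {
  const start = Date.now();
  await mockRpc('blockHash');
  await mockRpc('parentHash');
  return Date.now() - start;
}

// Batched: both calls are in flight at once, ~50 ms total.
async function batched(): Promise<number> {
  const start = Date.now();
  await Promise.all([mockRpc('blockHash'), mockRpc('parentHash')]);
  return Date.now() - start;
}

(async () => {
  console.log((await sequential()) > (await batched())); // prints "true"
})();
```

The idle period doesn't shrink per call; batching simply overlaps the waits so the server pays for the slowest call once instead of paying for every call in sequence.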
No Fees

Calculating fees has a few fundamental problems:

We call Promises inside of a loop.
We can't batch those Promises.
Therefore, when I added an option for `/blocks/{blockId}?noFee=true`, our performance went up dramatically.

With noFees:
Without noFees:
The fees section should be optimized with an algorithm that iterates through the extrinsics, builds a map of the Promises needed to fetch the fee information, batches those Promises, and then applies the results back to their correct extrinsics.
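That algorithm might be sketched along these lines; the names here (`Extrinsic`, `queryFeeInfo`, `attachFees`) are hypothetical stand-ins, not Sidecar's actual types or RPC wrappers:

```typescript
// Hypothetical sketch of batching fee lookups instead of awaiting in a loop.
interface Extrinsic { index: number; hex: string; }
interface FeeInfo { partialFee: number; }

// Stand-in for a per-extrinsic fee RPC query; the real call would hit the node.
const queryFeeInfo = (hex: string): Promise<FeeInfo> =>
  Promise.resolve({ partialFee: hex.length });

async function attachFees(extrinsics: Extrinsic[]) {
  // 1. Iterate the extrinsics and collect the fee Promises without awaiting.
  const feePromises = extrinsics.map((ext) => queryFeeInfo(ext.hex));
  // 2. Fire the whole batch at once instead of awaiting inside the loop.
  const fees = await Promise.all(feePromises);
  // 3. Apply each result back to its extrinsic by index.
  return extrinsics.map((ext, i) => ({ ...ext, fee: fees[i] }));
}
```

The key design point is that step 1 never awaits, so the per-extrinsic waits overlap instead of accumulating one after another.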
Getting Hash of a block number
Before each call to `/blocks/{blockId}`, if the passed-in `blockId` is a number, we first fetch its corresponding blockHash. If a blockHash is passed in instead, there is a small, but real, improvement in performance. More importantly, for fees we also need the previous blockHash, which means that when the user passes in a number we can batch those two calls together, since we only need to subtract 1 from the blockId.

Similarly, for `/blocks/head` we make an RPC call in the controller to retrieve the header. Can this also be batched somewhere?
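Concretely, the two hash lookups could be batched along these lines; `getBlockHash` here is a mock stand-in for the node's hash-by-height RPC, not Sidecar's actual code:

```typescript
// Sketch: when the user passes a block number, fetch the block's hash
// and its parent's hash concurrently, since the parent height is just
// blockId - 1. getBlockHash is a mock stand-in for the real RPC call.
const getBlockHash = (height: number): Promise<string> =>
  Promise.resolve(`0xhash-${height}`);

async function getHashPair(blockId: number): Promise<[string, string]> {
  const [hash, parentHash] = await Promise.all([
    getBlockHash(blockId),
    getBlockHash(blockId - 1),
  ]);
  return [hash, parentHash];
}
```

This collapses two sequential round trips into one wait, which matters most when the node is remote.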
Finalization

A small performance improvement can also be had via the `finalizes` field in the controller config. If set to false, this reduces the number of RPC calls by one, which is one less idle period for the server. We should also expose an override query param that sets finalized to false, thereby saving a call.

Overall
It's quite clear that awaiting promises individually has an impact on server and endpoint performance in Sidecar, most of all when calculating fees. These findings should be used to optimize how we query a block's information, and also as motivation to document how to speed up Sidecar's endpoints using the available query params and/or external tooling.
Edit: I will add more findings and measurements below as they become available.