Transition to dynamic PoV metering #992
Comments
And what happens if you use more PoV size than there is left for the block? There is also #209, which will improve the precision of the measured PoV size, but only after the transaction was applied.
"systematic over-estimation": https://forum.polkadot.network/t/eliminating-pre-dispatch-weight/400/7?u=alex
Yeah, I wasn't able to find this issue. I think this is more or less a duplicate of that.
Closing as duplicate then; will move the conversation to this issue.
This issue is a follow-up from this forum conversation: Eliminating pre-dispatch weight.
Our objective is to transition from pre-dispatch PoV weight to dynamic PoV metering. This will eliminate the need to benchmark the PoV weight component, bringing us closer to liberating developers from writing benchmarks (see the ongoing effort to move to dynamic ref_time metering here, also mentioned in the post above).
Benefits
Approach
1. Client approach
Using this method, we'll design a host function that outputs the PoV size. Fetching this data from the collator is straightforward, since it owns a ProofRecorder that can be queried for this information.
For validators, the mechanism isn't as straightforward. The db is not available, so we would need an implementation that constructs another recorder from the StorageProof to track read accesses. Such an implementation would also need to return exactly the same values as the one recorded by the collator.
Another downside of this approach is that any breaking change could cause consensus issues, as two conflicting clients would disagree on the weight value.
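The consistency requirement above can be illustrated with a small sketch. The types here are illustrative stand-ins (the real ProofRecorder and StorageProof live in sp-trie, and the host-function name is hypothetical): the point is that a `proof_size` host function must report the same value whether it is backed by the collator's live recorder or by a recorder rebuilt from the shipped StorageProof on the validator side.

```rust
use std::collections::BTreeMap;

/// Collator side: records every trie node touched during execution.
/// (Illustrative stand-in for sp-trie's ProofRecorder.)
#[derive(Default)]
struct ProofRecorder {
    nodes: BTreeMap<Vec<u8>, Vec<u8>>, // node hash -> encoded node
}

impl ProofRecorder {
    fn record(&mut self, hash: Vec<u8>, node: Vec<u8>) {
        self.nodes.insert(hash, node);
    }

    /// What a hypothetical `proof_size` host function would expose.
    fn estimate_encoded_size(&self) -> usize {
        self.nodes.values().map(|n| n.len()).sum()
    }

    /// The proof shipped to validators alongside the block.
    fn to_storage_proof(&self) -> Vec<Vec<u8>> {
        self.nodes.values().cloned().collect()
    }
}

/// Validator side: rebuild the size estimate from the proof alone.
fn size_from_proof(proof: &[Vec<u8>]) -> usize {
    proof.iter().map(|n| n.len()).sum()
}

fn main() {
    let mut recorder = ProofRecorder::default();
    recorder.record(b"hash-a".to_vec(), vec![0u8; 100]);
    recorder.record(b"hash-b".to_vec(), vec![0u8; 250]);
    // Both sides must agree on the metered size, or consensus breaks.
    let collator_view = recorder.estimate_encoded_size();
    let validator_view = size_from_proof(&recorder.to_storage_proof());
    assert_eq!(collator_view, validator_view);
    println!("{}", collator_view); // 350
}
```

Any divergence between the two computations (e.g. after a breaking change to the recording logic in one client) is exactly the consensus hazard described below.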
2. Runtime approach
With a runtime approach, we would need to keep track of all storage keys read in a global HashSet, so as not to double-count repeated accesses to the same key. This would work alongside a global counter that keeps track of the current PoV value of the transaction.
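A minimal sketch of this bookkeeping, assuming illustrative names (`PovMeter` and its fields are not actual Substrate types): a set of keys already read plus a running counter, charging each key's proof contribution only on its first access.

```rust
use std::collections::HashSet;

/// Sketch of runtime-side PoV metering (illustrative, not a real
/// Substrate type): repeated reads of the same key add nothing to the
/// storage proof, so only the first access is charged.
struct PovMeter {
    seen: HashSet<Vec<u8>>, // keys already counted toward the proof
    pov_bytes: u64,         // running PoV estimate for the transaction
}

impl PovMeter {
    fn new() -> Self {
        Self { seen: HashSet::new(), pov_bytes: 0 }
    }

    /// Charge a read of `key` yielding `value_len` bytes; a key seen
    /// before is skipped, since the proof contains it only once.
    fn on_read(&mut self, key: &[u8], value_len: u64) {
        if self.seen.insert(key.to_vec()) {
            self.pov_bytes += key.len() as u64 + value_len;
        }
    }
}

fn main() {
    let mut meter = PovMeter::new();
    meter.on_read(b"balance:alice", 16);
    meter.on_read(b"balance:alice", 16); // repeated read: not re-counted
    meter.on_read(b"balance:bob", 16);
    println!("{}", meter.pov_bytes); // (13 + 16) + (11 + 16) = 56
}
```

In a real runtime this state would have to live in the globals mentioned in the notes below, since wasm runtime code cannot use ordinary thread-local or static mutable state directly.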
Notes:
RuntimeCell and RuntimeRefCell