sweeper+contractcourt: deadline aware HTLC claims & commitment confirmation #4215
Interesting issue, I'd like to work on it. My thoughts on the main question:

Two factors contribute to the probability of confirmation: how much fee to pay, and when to broadcast. The assumptions are,

The most aggressive approach is to broadcast the transaction with the max fee immediately. It has the highest chance of success and costs the most; the tradeoff, as always, is between risk and cost. Suppose we have N blocks until the deadline is reached. A naive approach is to increase the fee by a delta value each block until the deadline, where the delta is

The actual value of N matters. If N is small, say 1 or 2 blocks, we don't have many choices but to bump it ASAP. As a starting point, according to this PR, we may check the value of N against 34: if the deadline is 34 blocks away, we wait; otherwise, we take action. While the 34 specifies an upper bound, we also want a lower bound. That is, if the deadline is only a few blocks away (say

In summary,

We also want to make these parameters configurable, leaving users the choice to CPFP in their own way (or they can use

Possible ways to bump the fee

Another factor is the total value of the HTLC outputs to be swept. The smaller it gets, the less likely a counterparty will appear, and the less money we will lose if the worst-case scenario happens. This can further be reflected in
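The naive approach above (bump by a fixed delta each block until the deadline) can be sketched as follows, assuming the delta is chosen so that the max fee is reached exactly at the deadline; the function and parameter names are illustrative, not lnd's API:

```python
def fee_schedule(start_fee: float, max_fee: float, blocks_left: int) -> list:
    """Naive linear bump: raise the fee rate by a fixed delta each block,
    so that max_fee is paid exactly at the deadline block."""
    if blocks_left <= 1:
        # Deadline imminent: no room to ramp up, go straight to the max fee.
        return [max_fee]
    delta = (max_fee - start_fee) / (blocks_left - 1)
    return [start_fee + delta * i for i in range(blocks_left)]
```

For example, starting at 1 sat/vB with a max of 10 sat/vB and 4 blocks left, the schedule is 1, 4, 7, 10.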
Some thoughts/questions that I have around deadline-aware sweeping:
Can this be flattened to just "how much to pay at block
Indeed, "when to broadcast" is not accurate. What matters is that we do it early. My assumption is that, if transaction A
Nope, I don't think it matters. Moreover, I think the fee rate behaves pretty randomly...
I think this value is set by the user? There is this
I don't think you need to wait until expiry. Assuming both are published to the same mempool, B will have either replaced A or B is rejected (BIP125). Both can't exist at the same time.
Yes, the definition of max fee can be kept as a precondition. It was more of a broader question about how to define that. The
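For reference, the BIP125 behavior mentioned above (B either replaces A or is rejected) boils down to the replacement paying a strictly higher absolute fee than the original, with the extra fee covering the relay cost of the replacement itself. A simplified sketch of that check for the single-conflict case, assuming bitcoind's default incremental relay fee of 1 sat/vB:

```python
# Default incremental relay fee (sat/vbyte); an assumption matching
# bitcoind's default policy, not something lnd exposes directly.
MIN_RELAY_FEE_RATE = 1.0

def can_replace(old_fee: int, new_fee: int, new_vsize: int) -> bool:
    """Simplified BIP125 rules 3 and 4 for a single direct conflict:
    the replacement must pay a higher absolute fee than the original,
    and the additional fee must cover relaying the replacement itself."""
    if new_fee <= old_fee:
        return False
    return new_fee - old_fee >= MIN_RELAY_FEE_RATE * new_vsize
```

So a 200-vbyte replacement paying only 100 sats more than the original is rejected, while one paying 300 sats more goes through.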
More discussion on this issue,
Sorry, maybe I wasn’t clear about my statements. I was referring to the occurrence of an event in terms of its probability; in particular, I was making the assumption that when a transaction is broadcast earlier, it has a higher chance of being confirmed before the deadline. If it helps, the statement is similar to: if I go to work at 8 am, I’d have a higher chance of not being late than if I go at 9 am. If this assumption is correct, then it means we want to broadcast early even if the fee paid stays unchanged. We’d prefer
I think it’s up to the user to decide these values while we provide some defaults?
In the short term this may help; essentially, I think a machine learning model is being developed here... It’s an interesting approach, though historical data has only limited efficacy, especially in the long run. I will try to derive some statistics and see what I get ;)
Some thoughts from offline discussion with @halseth:
@joostjager so I did a very simple experiment on the functions; here's the summary.

Performance difference among different fee rate increment functions

tl;dr

The last one works best.

Build the simulation

Then we have two derived parameters,

Then we have a function,

The simulation result

Within the assumptions made in this simulation, the previous assumption, that the probability of confirmation increases if broadcast earlier, can now be validated as true.

Others

Out of curiosity, I've also investigated the success rates of different fee rates and deadline deltas. If interested, here's the report.
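The per-transaction replay described in the report can be sketched roughly as follows, assuming each block contributes one threshold fee rate (e.g. a historical percentile) and the transaction confirms in the first block whose threshold its fee rate meets; all names here are illustrative:

```python
def simulate(fee_fn, threshold_rates, deadline):
    """Replay one sweep attempt. At block offset x before the deadline,
    the tx confirms if fee_fn(x) meets that block's threshold fee rate
    (e.g. a historical percentile). Returns (confirmed, blocks_used)."""
    for x in range(deadline):
        if fee_fn(x) >= threshold_rates[x]:
            return True, x + 1
    return False, deadline
```

For example, a linearly increasing fee function run against a window of falling threshold rates confirms as soon as the two curves cross.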
Excellent report @yyforyongyu! One question about the set of fee functions you evaluated: how'd you arrive at this set? Did you do any sort of "hyperparameter searching", or were they selected after the fact based on some of the findings in the report re the relationship between the params and resulting heuristics?
@Roasbeef No fancy tricks, just basic exploratory data analysis. These functions were chosen based on the findings as a starting point.
@yyforyongyu, that is an interesting exploration. Great approach with taking historical blocks and defining a percentile for getting our tx included. One thing that I think could also be added is the goal of minimizing the fee paid. That is important because, if fees paid didn't matter, we could just always publish at 1000 sat/vB from the start. On the one hand it is important to get the tx confirmed before the deadline, but because inclusion in a block is never guaranteed, it is probably fair to define a parameter that sets the minimally required probability of getting in on time. This should be pretty high; I'd say >99.5%, maybe even higher. Given that parameter, the question becomes: which fee function minimizes the fee paid? A simulation could look like this:

Then report the percentage of txes that were confirmed in time and the average fee paid.
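The proposed evaluation loop might look roughly like this, with `windows` being many historical runs of per-block threshold fee rates; the names and data shapes are assumptions for illustration:

```python
def evaluate(fee_fn, windows, deadline):
    """Run fee_fn over many historical windows of per-block threshold
    fee rates. Report the fraction of txes confirmed before the deadline
    and the average fee rate paid by the confirmed ones."""
    confirmed, fees = 0, []
    for rates in windows:
        for x in range(deadline):
            if fee_fn(x) >= rates[x]:
                confirmed += 1
                fees.append(fee_fn(x))
                break
    success = confirmed / len(windows)
    avg_fee = sum(fees) / len(fees) if fees else 0.0
    return success, avg_fee
```

Candidate fee functions can then be compared by keeping only those whose success rate clears the required probability (say 99.5%) and picking the one with the lowest average fee.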
One other thing to note is that the deadlines differ greatly between the various output types. Commitment outputs can have weeks to be swept, while HTLCs can have a much lower number of blocks. I can imagine the ideal fee function could be different for each.
Took the ideas from @joostjager and made a new report to answer the question,

Simulation Results

With

Under this scope, the function that delays the increase yields the best result, i.e. the one with the lowest average fee rate.

Simulation Setup

The simulations are run within block heights 200,000 to 643,200. Blocks before 200,000 are treated as outliers. The

The simulation is run 1 million times over three percentiles: 10th, 30th, and 60th. A new variable, the inclusion rate, is introduced here to reflect the fact that random events in the blockchain space may lead to fatal results that cause the definite exclusion of a transaction. Detailed results can be found in this report.

Aside from the optimization

IMO, optimization should be pursued when all the functions have an almost equal chance of success given a

It then makes sense to use the function that minimizes the fee paid. As in the case,

Optimization should come second, as our primary goal is to get the transaction included in a block. In this case, we should probably use up the

In the end, I think it's up to the users to decide how much risk they will take, with the premise that they are fully informed.
Great continuation of the investigation @yyforyongyu !
This is interesting. So apparently all three functions have very high success rates, but differ a lot in the total fees paid. I'd say that deciding on the best function to use depends very much on how much money is at stake. Suppose for each of those one million transactions there is 0.04 BTC at stake. We'll assume that that money is lost when the tx isn't confirmed in time, and that the tx size is 200 bytes. For f(x) = x³, we'd lose 0.04 * 16 = 0.64 BTC and pay 16 (fee rate) * 200 (size) * 1,000,000 = 32 BTC in fees, for a total cost of 32.64 BTC. A dramatic difference. So this is another way to approach the optimization: minimize the total of miner fees plus damages from missed deadlines, given the cost of the damage.
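The cost accounting described here can be sketched as follows; the helper name is hypothetical, fee rates are in sat/byte, and values in BTC:

```python
SATS_PER_BTC = 100_000_000

def total_cost_btc(n_tx, value_btc, vsize, fee_rate, n_failed):
    """Total cost = miner fees paid across all txes, plus the value lost
    on missed deadlines (failed txes are assumed to lose their full value).
    fee_rate is in sat/vbyte, value_btc is the per-tx amount at stake."""
    fees_btc = fee_rate * vsize * n_tx / SATS_PER_BTC
    losses_btc = n_failed * value_btc
    return fees_btc + losses_btc
```

With the numbers above (one million 200-byte txes at 16 sat/byte, 16 failures of 0.04 BTC each), this gives 32 BTC in fees plus 0.64 BTC in losses.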
I think that just running simulations for the 10th percentile only is sufficient. Aside from a few preferential txes, a rational miner would include txes at the 10th percentile, I'd say. Final comment is that

The question is what function minimizes the total cost when run in your 1,000,000 tx simulation with historical block data. Note that
So, one thing led to another... I've switched from the simple simulations above to Monte Carlo simulation so that a more rigorous conclusion can be made. Previous models were built on the assumption that historical fee rates give no information on how to choose the current fee rate, so the models used only the time variable (height). However, as pointed out by @joostjager, the

In order to tackle the optimization problem, I've created two reports. The first report performs a hypothesis test validating the assumption that fee rates are correlated, which can be found here. The second report focuses on optimizing fee functions using Monte Carlo simulation, which can be found here. The reports are summarized below.

Randomness in fee rates

A runs test is performed to show that there are correlations among fee rates, which indicates that a better fee function should take previous fee rates into account. Out of curiosity, I've also applied Approximate Entropy to both the fee rates and the changes in fee rates. Interestingly, the randomness in the fee rates decreased over time, while the randomness in the changes in fee rates increased.

Simulation setup

There are three simulations, each run with 100 trials.

Unlike the previous simulation, where the

Now we turn to the results.
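For reference, a runs test of the kind used in the first report can be sketched like this: dichotomize the series around its median and compare the observed number of runs against its expectation under randomness. This is the standard Wald-Wolfowitz form; the report's exact variant may differ:

```python
import math

def runs_test_z(series):
    """Wald-Wolfowitz runs test on a numeric series, dichotomized around
    its median. Returns the z statistic; a large |z| suggests the sequence
    is not random, i.e. successive values are correlated."""
    median = sorted(series)[len(series) // 2]
    signs = [x >= median for x in series]
    # A run ends wherever two consecutive signs differ.
    runs = 1 + sum(1 for a, b in zip(signs, signs[1:]) if a != b)
    n1 = sum(signs)
    n2 = len(signs) - n1
    expected = 2 * n1 * n2 / (n1 + n2) + 1
    variance = (2 * n1 * n2 * (2 * n1 * n2 - n1 - n2)) / (
        (n1 + n2) ** 2 * (n1 + n2 - 1))
    return (runs - expected) / math.sqrt(variance)
```

An alternating series yields a strongly positive z (too many runs), while a trending series yields a strongly negative z (too few runs); both reject randomness.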
Simulation results

The following table summarizes the findings. This time, the initial

From the experiment,
The simulations used a transaction of 0.04 BTC and 200
The following table gives the results with
Cost based on transaction size and value

It should be no surprise that a different transaction size/value will yield a very different result. In general, the cost for each fee function can be calculated using the following formula,

cost = (1 - λ)⋅r + λ⋅ρ

The cost can be interpreted as the money cost per unit of transaction size, as in

Based on different ρ and

The result is summarized as follows,
Optimal function based on the value density (sats/vbyte).
If we assume our average transaction size is 200

To see the detailed analysis, check here.
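The cost formula above transcribes directly to code. My reading of the symbols is an assumption, since they are defined in the linked report: λ as the probability of missing the deadline, r as the fee rate paid on success, and ρ as the value density of the transaction (value divided by size, in sat/vbyte):

```python
def expected_cost(fail_prob, avg_fee_rate, value_density):
    """cost = (1 - λ)·r + λ·ρ, in sat/vbyte: with probability (1 - λ) the
    tx confirms and pays fee rate r; with probability λ it misses the
    deadline and the full value density ρ is lost."""
    return (1 - fail_prob) * avg_fee_rate + fail_prob * value_density
```

For a 0.04 BTC output in a 200-vbyte transaction, ρ = 4,000,000 / 200 = 20,000 sat/vbyte, so even a tiny failure probability dominates the fee term.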
This commit adds a new interface, `FeeFunction`, to deal with calculating fee rates. In addition, a simple linear function is implemented, hence `LinearFeeFunction`, which will be used to calculate fee rates when bumping fees. Check lightningnetwork#4215 for other types of fee functions that can be implemented.
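The commit describes a Go interface in lnd; purely as an illustration of the shape it describes, here is a rough Python analogue of what `FeeFunction` and `LinearFeeFunction` might look like. All signatures here are guesses, not lnd's actual API:

```python
from abc import ABC, abstractmethod

class FeeFunction(ABC):
    """Maps blocks elapsed since broadcast to a fee rate (sat/vbyte)."""

    @abstractmethod
    def fee_rate(self, blocks_elapsed: int) -> float:
        ...

class LinearFeeFunction(FeeFunction):
    """Rises linearly from start_rate to end_rate over `width` blocks,
    then stays capped at end_rate."""

    def __init__(self, start_rate: float, end_rate: float, width: int):
        self.start_rate = start_rate
        self.end_rate = end_rate
        self.delta = (end_rate - start_rate) / width

    def fee_rate(self, blocks_elapsed: int) -> float:
        rate = self.start_rate + self.delta * blocks_elapsed
        return min(rate, self.end_rate)
```

The interface boundary lets other shapes from this issue (delayed increase, cubic, history-aware) be swapped in without touching the fee-bumping caller.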
One of the primary goals of anchor outputs is to be able to adjust fee rates to ensure that both the commitment transaction and the possibly contested HTLCs on the transaction get into the block in time. With our inclusion of anchor output enabled commitments, we're now able to do this using CPFP on the main commitment transaction, and either RBF or attaching a new input+output pair for the HTLC transactions. As we're concerned with the critical case of proper multi-hop HTLC resolution, we should expand all sub-systems related to commitment confirmation and HTLC claims to be able to ratchet up the fee rate with a deadline: the expiry of the CTLV delta between HTLC pairs.
Steps to Completion
Expand the commitment anchor sweeping by the sweeper to allow the caller to specify a new constraint: a confirmation deadline. We visited this route in the past with the hypothetical "super sweeper" sub-system. One open question here still is: do we bump every block that we aren't confirmed, until we reach a max fee, or do we wait for a period of T blocks, then ratchet all the way up to our final fee?
Expose this new deadline information in either `PendingChannels` or `PendingSweeps` (walletkit sub-server).

Dynamically bump the posted transaction fee as we gradually approach the deadline.
This is related to #610, as we'll also want the initiator of the channel to be able to set the base commitment fee, and also update it as they wish. In addition, after that issue, we should start to target a higher confirmation goal (in blocks) than we do atm, since we only want to ensure the fee is enough to land in the mempool. Once in the mempool, we should use CPFP to bump accordingly as described above.
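The two bump policies posed in the open question above can be contrasted with a small sketch; the names and the default wait period are made up for illustration:

```python
def bump_every_block(start, max_fee, deadline, height):
    """Strategy 1: start bumping immediately, ramping linearly so the
    max fee is reached at the deadline."""
    frac = min(height / deadline, 1.0)
    return start + (max_fee - start) * frac

def wait_then_ratchet(start, max_fee, deadline, height, wait=None):
    """Strategy 2: sit at the initial fee for `wait` blocks, then jump
    straight to the max fee. The half-deadline default is arbitrary."""
    if wait is None:
        wait = deadline // 2
    return start if height < wait else max_fee
```

Strategy 1 spends more in the typical case but confirms earlier; strategy 2 saves fees when the mempool clears on its own, at the cost of a late, expensive jump. That tradeoff is exactly what the simulations in this thread try to quantify.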