sweeper+contractcourt: deadline aware HTLC claims & commitment confirmation #4215

Closed
2 of 3 tasks
Roasbeef opened this issue Apr 22, 2020 · 22 comments · Fixed by #5148, #8424 or #8667
Assignees
Labels
anchors · chain handling · commitments (Commitment transactions containing the state of the channel) · contracts · fees (Related to the fees paid for transactions, both LN and funding/commitment transactions) · P1 (MUST be fixed or reviewed) · safety (General label for issues/PRs related to the safety of using the software)
Milestone

Comments

@Roasbeef
Member

Roasbeef commented Apr 22, 2020

One of the primary goals of anchor outputs is to be able to adjust fee rates to ensure that both the commitment transaction and the possibly contested HTLCs on that transaction get into a block in time. With our inclusion of anchor-output-enabled commitments, we're now able to do this using CPFP on the main commitment transaction, and either RBF or attaching a new input+output pair for the HTLC transactions. As we're concerned with the critical case of proper multi-hop HTLC resolution, we should expand all sub-systems related to commitment confirmation and HTLC claims to be able to ratchet up the fee rate with a deadline: the expiry of the CLTV delta between HTLC pairs.

Steps to Completion

  • Expand the commitment anchor sweeping by the sweeper to allow the caller to specify a new constraint: a confirmation deadline. We visited this route in the past with the hypothetical "super sweeper" sub-system. One open question here is still: do we bump every block we remain unconfirmed until we reach a max fee, or do we wait for a period of T blocks, then ratchet all the way up to our final fee?

  • Expose this new deadline information in either PendingChannels, or PendingSweeps (walletkit sub-server).

  • Dynamically bump the posted transaction fee as we gradually approach the deadline

This is related to #610, as we'll also want the initiator of the channel to be able to set the base commitment fee, and also update it as they wish. In addition, once that issue is addressed, we should start to target a higher confirmation goal (in blocks) than we do atm, since we only need the fee to be enough to land in the mempool. Once in the mempool, we should use CPFP to bump accordingly as described above.
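As a rough illustration of the first step above (purely hypothetical names and shapes, not lnd's actual sweeper types), the per-input sweep parameters could grow a deadline field next to the existing fee preference:

```go
// Hypothetical sketch only: illustrative field and type names, not lnd's
// actual sweeper API.
package sweep

// FeePreference mirrors the idea of a caller-supplied fee constraint.
type FeePreference struct {
	ConfTarget uint32  // desired confirmation target, in blocks
	FeeRate    float64 // explicit fee rate in sat/vbyte, if set
}

// Params is what a caller could pass per swept input: the fee preference
// plus a hard deadline (absolute block height) by which the sweep must
// confirm, e.g. the CLTV expiry of the corresponding HTLC.
type Params struct {
	Fee              FeePreference
	DeadlineHeight   uint32  // bump more aggressively as this approaches
	MaxFeeAllocation float64 // fraction of the output value allowed to go to fees
}
```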

@Roasbeef Roasbeef added commitments Commitment transactions containing the state of the channel safety General label for issues/PRs related to the safety of using the software fees Related to the fees paid for transactions (both LN and funding/commitment transactions) contracts chain handling anchors labels Apr 22, 2020
@Roasbeef Roasbeef added this to the 0.11.0 milestone Apr 22, 2020
@Roasbeef Roasbeef changed the title sweeper+contractcourt: deadline aware HTLC claims sweeper+contractcourt: deadline aware HTLC claims & commitment confirmation Apr 22, 2020
@Roasbeef Roasbeef modified the milestones: 0.11.0, 0.12.0 Jun 9, 2020
@yyforyongyu
Collaborator

Interesting issue, I'd like to work on it. My thoughts on the main question,

One open question here is still: do we bump every block we remain unconfirmed until we reach a max fee, or do we wait for a period of T blocks, then ratchet all the way up to our final fee?

Two factors, how much fee to pay and when to broadcast, contribute to the probability of confirmation.

The assumptions are,

  • the probability of confirmation increases if a larger fee is paid, and,
  • the probability of confirmation increases if broadcast earlier.

The most aggressive approach is to broadcast the transaction using the max fee immediately. It has the highest chance of success and costs the most. And the tradeoff is, of course, always between the risk and cost.

Suppose we have N blocks till the deadline is reached

A naive approach is to increase the fee by a delta value each block till the deadline, where the delta is (max_fee - initial_fee) / N. Of course, we don't have to use a linear function. The fact is, when the deadline is far away, we are less stressed and thus in less of a hurry to bump the fee. To reflect this, a polynomial or exponential function may be applied, e.g., delta = ax² + b, where x is the time and a, b are some constants.
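To make the linear variant concrete (illustrative numbers only, taken from the example used later in this thread): with initial_fee = 1 sat/vbyte, max_fee = 100 sats/vbyte, and N = 10 blocks to the deadline, the per-block increment would be delta = (100 - 1) / 10 = 9.9 sats/vbyte.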

The actual value of N matters

If N is small, say 1 or 2 blocks, we don't have much choice but to bump it asap.
If N is huge, say hundreds of blocks, we have enough time to wait for confirmation.

As a starting point, according to this PR, we may check the N value against 34: if the deadline is more than 34 blocks away, we wait. Otherwise, we take action.

While the 34 specifies an upper bound, we also want a lower bound. That is, if the deadline is only a few blocks away (say X), we will use up our max fee immediately. Again, according to this PR, we set the X to be equal to S, which is 12.

In summary,

  • we apply a min_deadline_delta of 12 and max_deadline_delta of 34.
  • If the deadline_delta is below min_deadline_delta, we use up the max fee immediately.
  • If the deadline_delta is above max_deadline_delta (rare), we do nothing but wait.
  • If it's in between, we apply a time-sensitive algorithm to bump the fee.

We also want to make the parameters configurable, leaving users a choice to CPFP in their own way (or they can use bumpclosefee).
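A minimal Go sketch of the thresholding idea above (the constant names, the base/max fee values, and the linear interpolation between the two thresholds are illustrative choices, not lnd code):

```go
package main

import "fmt"

const (
	minDeadlineDelta = 12 // at or below this, jump straight to the max fee
	maxDeadlineDelta = 34 // at or above this, keep waiting at the base fee
)

// bumpedFeeRate returns the fee rate (sats/vbyte) to use when the deadline
// is deadlineDelta blocks away. Between the two thresholds it interpolates
// linearly; a polynomial curve could be substituted here.
func bumpedFeeRate(baseFee, maxFee float64, deadlineDelta int) float64 {
	switch {
	case deadlineDelta <= minDeadlineDelta:
		return maxFee
	case deadlineDelta >= maxDeadlineDelta:
		return baseFee
	default:
		// Progress from 0 (just under maxDeadlineDelta) to 1 (at
		// minDeadlineDelta).
		progress := float64(maxDeadlineDelta-deadlineDelta) /
			float64(maxDeadlineDelta-minDeadlineDelta)
		return baseFee + progress*(maxFee-baseFee)
	}
}

func main() {
	for _, delta := range []int{40, 34, 23, 12, 2} {
		fmt.Printf("delta=%d -> %.1f sats/vbyte\n", delta, bumpedFeeRate(1, 100, delta))
	}
}
```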

Possible ways to bump fee
Suppose there are 100 units in (max_fee - initial_fee), at block delta 12 we want to spend all, and at block delta 34 we want to spend none. These functions, linear, quadratic, and exponential, are plotted here.

Another factor is the total value of the HTLC outputs to be swept. The smaller it gets, the less likely a counterparty will appear and the less money we will lose in case the worst scenario happens. This can further be reflected in min_deadline_delta if needed.

@joostjager
Contributor

Some thoughts/questions that I have around deadline-aware sweeping:

  • how much fee to pay and when to broadcast

Can this be flattened to just "how much to pay at block expiry - x"? We can broadcast every block if we want to. Of course the fee can't be lowered, meaning that some broadcasts may be no-ops.

  • Does history matter at all? If we are at block expiry - x, is it relevant what we did at expiry - x - 1? I would think it isn't relevant.

  • What is the "max fee"? If we need to claim a 1 BTC htlc, are we willing to pay 1 BTC in fees at block expiry - 1? This can be very expensive if you've been offline for a while. A more sane maximum is probably a fee estimate with a very high probability of confirming in the next block, capped by the value at stake.

  • Intuitively I agree with a non-linear function. It would be nice to have more evidence for that. Maybe it is an idea to run a simulation with historical block data and fee estimates to try out different curves. Observe the success rate (swept before deadline) and fee paid.

  • I would avoid using highly specific constants like 12 and 34. It should be possible to solve this problem in a more generic way.

@yyforyongyu
Collaborator

Can this be flattened to just "how much to pay at block expiry - x"?

Indeed, "when to broadcast" is not accurate. What matters is we do it early. My assumption is that, if transaction A
is broadcast at block expiry - x, transaction B, which is the same as A, is broadcast at expiry - x - 1, and we check the result at expiry, then transaction B should have a higher chance to be confirmed than A does.

Does history matter at all?

Nope, I don't think it matters. Moreover, I think the fee rate behaves pretty randomly...

What is the "max fee"?

I think this value is set by the user? There is this MaxFeeAllocation, defaulting to 0.5.
My understanding is that the precondition here is that we have a max fee defined already, and we want to know how to spend it. We can only offer best-effort delivery, increasing the chance of success while keeping the cost low. Once we've reached the max fee, we've done the job of spending it.

@joostjager
Contributor

we check the result at expiry, then transaction B should have a higher chance to be confirmed than A does.

I don't think you need to wait until expiry. Assuming both are published to the same mempool, B will have either replaced A or B is rejected (BIP125). Both can't exist at the same time.

My understanding is that, the precondition here is we have a max fee defined already, and we want to know how to spend it.

Yes, the definition of the max fee can be kept as a precondition. It was more a broader question about how to define that. The MaxFeeAllocation isn't ideal, but we can work with it.

@yyforyongyu
Collaborator

More discussion on this issue,

I don't think you need to wait until expiry. Assuming both are published to the same mempool, B will have either replaced A or B is rejected (BIP125). Both can't exist at the same time.

Sorry, maybe I wasn’t clear about my statements. I was referring to the occurrence of an event in terms of its probability, in particular, I was making the assumption that when a transaction is broadcast earlier, it has a higher chance to be confirmed before the deadline.

If it helps, the statement is similar to, if I go to work at 8 am I’d have a higher chance of not being late versus going to work at 9 am.

If this assumption is correct, then it means we want to broadcast early even if the fee paid stays unchanged. We'd prefer expiry - x - 1 over expiry - x. Otherwise, the only factor under this context is the fee, and the complexity would be greatly decreased.

I would avoid using highly specific constants like 12 and 34. It should be possible to solve this problem in a more generic way.

I think it’s up to the user to decide these values while we provide some defaults?

Intuitively I agree with a non-linear function. It would be nice to have more evidence for that. Maybe it is an idea to run a simulation with historical block data and fee estimates to try out different curves. Observe the success rate (swept before deadline) and fee paid.

In the short term this may help; essentially I think a machine learning model is being developed here... It's an interesting approach, though historical data only has limited efficacy, especially in the long run. I will try to derive some statistics and see what I get ;)

@joostjager
Contributor

Some thoughts from offline discussion with @halseth:

  • Ideally we don't rely on fee estimators, especially in the neutrino case. An alternative is a configured fixed (high) fee rate that is approached exponentially when moving towards the deadline. That probably does require some warm-up time after a restart, so that we don't immediately use the max rate if there is just one block left right after the restart.
  • Need to be careful with moving sweeper inputs to another fee rate bucket, because the old bucket may no longer be able to be RBF-bumped (not enough absolute fee).

@yyforyongyu
Collaborator

@joostjager so I did a very simple experiment on the functions, here's the summary.

Performance difference among different fee rate increment functions

tl;dr
The earlier we increase our fee rate, the more likely we will succeed. Specifically, among the following functions,

  • f(x) = x
  • f(x) = x³
  • f(x) = 1 + (x - 1)³

The last one works best.

Build the simulation
Suppose we have the following parameters,

  • base_fee_rate is the starting fee rate, in sats/vbyte;
  • max_fee_rate is the max fee rate we will use;
  • deadline_delta is the number of blocks left for us to try.

Then we have two derived parameters,

  • fee_rate_delta, which is max_fee_rate - base_fee_rate,
  • delta_position, which is the index of the position in deadline_delta.

Then we have a function, F, that takes the position within deadline_delta (normalized to [0, 1]) and outputs a percentage, which specifies how much of the fee_rate_delta to use. Three functions are used, with the x-axis and y-axis both ranging in [0, 1] (a small code sketch follows the list below),

  • a linear function, f(x) = x;
  • a cubic function, f(x) = x³, which increases the fee faster in the later stage. The curve steepens as it reaches the end. 12.5% of the gain happens in the first half, and 87.5% happens in the second half.
  • a cubic function, f(x) = 1 + (x-1)³, which increases the fee faster in the early stage. The curve flattens as it reaches the end. 87.5% of the gain happens in the first half, and 12.5% happens in the second half.
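To make the three curves concrete, here is a small Go sketch (an illustrative rendering of the functions above, not the notebook code) that maps the normalized position x in [0, 1] to a fee rate:

```go
package main

import (
	"fmt"
	"math"
)

// Each function maps progress x in [0, 1] to the fraction of
// fee_rate_delta (max_fee_rate - base_fee_rate) to spend at that point.
var feeFuncs = map[string]func(x float64) float64{
	"linear":      func(x float64) float64 { return x },
	"cubic_delay": func(x float64) float64 { return math.Pow(x, 3) },
	"cubic_eager": func(x float64) float64 { return 1 + math.Pow(x-1, 3) },
}

// feeRate computes the fee rate at position deltaPosition out of
// deadlineDelta blocks, for the given curve.
func feeRate(curve func(float64) float64, baseFeeRate, maxFeeRate float64,
	deltaPosition, deadlineDelta int) float64 {

	x := float64(deltaPosition) / float64(deadlineDelta)
	return baseFeeRate + curve(x)*(maxFeeRate-baseFeeRate)
}

func main() {
	// The example from the report: base 1 sat/vbyte, max 100 sats/vbyte,
	// deadline_delta of 10 blocks, evaluated halfway through.
	for name, f := range feeFuncs {
		fmt.Printf("%-12s halfway: %6.2f sats/vbyte\n",
			name, feeRate(f, 1, 100, 5, 10))
	}
}
```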

The simulation result
A simple simulation is performed from block height 590000 to 643200. At any block, as long as the chosen fee rate is greater than a certain percentile of the fee rates in the block, it's considered a success.
The detailed report is in a Jupyter notebook, which can be found here. A glance at the results is shown below, with a max_fee_rate of 100 sats/vbyte, a base_fee_rate of 1 sat/vbyte, and a deadline_delta of 10 blocks,

| fee rate percentile | f(x) = x | f(x) = x³ | f(x) = 1 + (x-1)³ |
| --- | --- | --- | --- |
| 10 | 0.98208680 | 0.96971861 | 0.98781978 |
| 30 | 0.97806432 | 0.96188042 | 0.98516945 |
| 60 | 0.96994417 | 0.94417398 | 0.98086502 |

Within the assumptions made in this simulation, the earlier assumption, that the probability of confirmation increases if broadcast earlier, can now be validated as true.

Others
The dataset is parsed from btcd's blocks_ffldb files. Relevant data is extracted and dumped into CSV files for further analysis; I've made them publicly available, here are the links, blocks.csv and transactions.csv.

Out of curiosity, I've also investigated the success rates of different fee rates and deadline deltas. If interested, here's the report.

@Roasbeef
Member Author

Excellent report @yyforyongyu! One question about the set of fee functions you evaluated: how'd you arrive at this set? Did you do any sort of "hyperparameter searching", or were they selected after the fact based on some of the findings in the report re the relationship between the params and resulting heuristics?

@yyforyongyu
Collaborator

@Roasbeef No fancy tricks, just basic exploratory data analysis. These functions were chosen based on the findings as a starting point.

@joostjager
Contributor

@yyforyongyu, that is an interesting exploration. Great approach with taking historical blocks and defining a percentile for getting our tx included.

One thing that I think could also be added is the goal of minimizing the fee paid. That is important, because if fees paid didn't matter, we could just always publish at 1000 sat/b from the start.

On the one hand, it is important to get the tx confirmed before the deadline. But because inclusion in a block is never guaranteed, it is probably fair to define a parameter that sets the minimum required probability of getting in on time. This should be pretty high. I'd say >99.5%, maybe even higher.

Given that parameter, the question becomes: which fee function minimizes the fee paid?

A simulation could look like this:

  • repeat 10000 times:
    • select random block height (from historical data set)
    • select random deadline delta (for example [10..100])
    • run simulation using the fee function under test
    • record whether tx was confirmed in time in the simulation and at which fee rate

Then report the percentage of txes that were confirmed in time and the average fee paid.
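A sketch of that simulation loop in Go (the data source confirmsAt, the fee bounds, and the block-height range are placeholders standing in for whatever the historical dataset provides, not the actual notebook code):

```go
package main

import (
	"fmt"
	"math/rand"
)

// simulate runs one sweep attempt starting at `height` with `deadline`
// blocks to go, bumping the fee each block according to feeFunc, and
// reports whether it confirmed in time and the last fee rate used.
// confirmsAt is a placeholder for "would a tx at this fee rate have made
// it into this historical block" (e.g. fee rate >= some block percentile).
func simulate(height, deadline int, feeFunc func(x float64) float64,
	confirmsAt func(height int, feeRate float64) bool) (bool, float64) {

	const baseFee, maxFee = 1.0, 100.0
	var feeRate float64
	for i := 1; i <= deadline; i++ {
		x := float64(i) / float64(deadline)
		feeRate = baseFee + feeFunc(x)*(maxFee-baseFee)
		if confirmsAt(height+i, feeRate) {
			return true, feeRate
		}
	}
	return false, feeRate
}

func main() {
	// Placeholder data source: a real run would look up historical
	// block fee-rate percentiles instead.
	confirmsAt := func(height int, feeRate float64) bool {
		return feeRate >= rand.Float64()*50
	}
	linear := func(x float64) float64 { return x }

	var confirmed int
	var totalFee float64
	const trials = 10000
	for n := 0; n < trials; n++ {
		height := 590000 + rand.Intn(50000) // random historical height
		deadline := 10 + rand.Intn(91)      // deadline delta in [10..100]
		ok, fee := simulate(height, deadline, linear, confirmsAt)
		if ok {
			confirmed++
			totalFee += fee
		}
	}
	fmt.Printf("confirmed in time: %.2f%%, avg fee rate: %.2f sats/vbyte\n",
		100*float64(confirmed)/trials, totalFee/float64(confirmed))
}
```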

@halseth
Contributor

halseth commented Sep 1, 2020

One other thing to note is that the deadlines differ greatly between the various output types. Commitment outputs can have weeks to be swept, while HTLCs can have a much lower number of blocks. I can imagine the ideal fee function could be different for each.

@yyforyongyu
Collaborator

Took the ideas from @joostjager and made a new report to answer the question,

which fee function minimizes the fee paid?

Simulation Results

With max_fee_rate of 1000 sats/vbyte, and 10th percentile of the block fee rate as target, we have the following result,

| fee function | success rate | mean fee rate | failed attempts | total attempts |
| --- | --- | --- | --- | --- |
| f(x) = x³ | 0.9999840 | 16.3552 | 16 | 10078504 |
| f(x) = x | 0.9999950 | 39.0167 | 5 | 3207207 |
| f(x) = 1 + (x-1)³ | 0.9999990 | 74.6236 | 1 | 2330916 |

Under this scope, the function that delays the increase yields the best result, having the lowest average fee rate.

Simulation Setup

The simulations are run within block height 200,000 to 643,200. Blocks before 200,000 are treated as outliers. The max_fee_rate is set to 1000 sats/vbyte. For each simulation,

  • a random block is selected from the range defined;
  • a random deadline delta is selected from the range 6 to 144 (roughly one hour to one day);
  • with the random block and deadline delta, the aforementioned three functions are run and their successful attempts and fee rates are recorded.

The simulation is run 1 million times over the three percentiles, 10th, 30th, and 60th. A new variable, inclusion rate, is introduced here to reflect the fact that random events in the blockchain space may lead to fatal results that cause a definite exclusion of a transaction.

Detailed results can be found in this report.

Aside from the optimization

IMO, optimization should be pursued when all the functions have an almost equal chance of success given a max_fee_rate and deadline_delta. For instance, when using a max_fee_rate of 1000 and a deadline_delta of 10, we got,

With max_fee_rate=1000, delta=10
[name: linear      ] <P10: 1.00000000> <P30: 1.00000000> <P60: 1.00000000>
[name: cubic_delay ] <P10: 1.00000000> <P30: 1.00000000> <P60: 1.00000000>
[name: cubic_eager ] <P10: 1.00000000> <P30: 1.00000000> <P60: 1.00000000>

It then makes sense to use the function that minimizes the fee paid. Contrast that with this case,

With max_fee_rate=100, delta=10
[name: cubic_delay ] <P10: 0.96971861> <P30: 0.96188042> <P60: 0.94417398>
[name: linear      ] <P10: 0.98208680> <P30: 0.97806432> <P60: 0.96994417>
[name: cubic_eager ] <P10: 0.98781978> <P30: 0.98516945> <P60: 0.98086502>

Optimization should come second, as our primary goal is to get the transaction included in a block. In this case, we probably should use up the max_fee_rate asap.

In the end, I think it's up to the users to decide how much risk they will take, provided that they are fully informed.

@joostjager
Contributor

joostjager commented Sep 3, 2020

Great continuation of the investigation @yyforyongyu !

With max_fee_rate of 1000 sats/vbyte, and 10th percentile of the block fee rate as target, we have the following result

This is interesting. So apparently all three functions have very high success rates, but differ a lot in the total fees paid. I'd say that deciding on the best function to use is very much dependent on how much money is at stake.

Suppose for each of those one million transactions, there is 0.04 BTC at stake. We'll assume that that money is lost when the tx isn't confirmed in time. The tx size is 200 bytes.

For f(x) = x³, we'd lose 0.04 * 16 = 0.64 BTC and pay 16 (fee rate) * 200 (size) * 1000000 = 32 BTC in fees. Total cost is 32.64 BTC.
For f(x) = 1 + (x-1)³, we'd lose only 0.04 * 1 = 0.04 BTC, but pay 74 * 200 * 1000000 = 148 BTC in fees. Total cost is 148.04 BTC.

A dramatic difference.

So this is another way to approach the optimization. Minimize the total of miner fees and damages because deadlines were missed, given the cost of the damage.

The simulation is run 1 million times over the three percentiles, 10th, 30th, and 60th.

I think that just running simulations for the 10th percentile only is sufficient. Aside from a few preferential txes, a rational miner would include txes at the 10th percentile I'd say.

Final comment is that max_fee_rate can also be considered as being part of the fee function, not a fixed parameter. So the same function can compete against itself (with the optimization goal above) for different max fee rates.

The question is what function

fee_rate(tx_size, deadline_delta, deadline_missed_cost)

minimizes the total cost when run in your 1000000 tx simulation with historical block data. Note that max_fee_rate isn't a parameter here, but an integral part of the fee function.
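In code terms, the function under test would then have a shape roughly like this (illustrative signature only, not an existing lnd type):

```go
package feesim

// FeeRateFunc is the fee function under test: given the tx size in vbytes,
// the number of blocks left until the deadline, and the cost (in sats) of
// missing that deadline, it returns the fee rate (sats/vbyte) to broadcast
// at. Any maximum fee rate is internal to the function rather than an
// external parameter.
type FeeRateFunc func(txSizeVBytes, blocksLeft int, deadlineMissedCost int64) float64
```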

@yyforyongyu
Collaborator

So, one thing led to another... I've switched from the simple simulations above to Monte Carlo simulation so that a more rigorous conclusion can be made. Previous models were built upon the assumption that historical fee rates gave no information on how to choose the current fee rate, so the models used only the time variable (height). However, as pointed out by @joostjager, the max_fee_rate may not be a fixed value if we want to reach an optimal cost.

In order to tackle the optimization problem, I've created two reports. The first report was created to perform a hypothesis test, validating the assumption that fee rates are correlated, which can be found here. A second report focused on optimizing fee functions using Monte Carlo Simulation, which can be found here.

Reports are summarized below.

Randomness in fee rates

A runs test is performed to show that there are correlations among fee rates, which indicates that a better fee function should take previous fee rates into account.

Out of curiosity, I've also applied Approximate Entropy to both the fee rates and the changes in the fee rates. Interestingly, the randomness in fee rates decreased over time, while the randomness in the changes in fee rates was increasing.

Simulation setup

There are three simulations, each run with 100 trials.

  • For the first one, as a starting point, a sample size of 1000 is used. Each simulation is run with a fixed deadline_delta, ranging from 2 to 100, stepped by 1. Once the group means are obtained, a one-way ANOVA test (with confidence level 99.9%) is performed.
  • For those that failed the test, a second simulation with a sample size of 10,000 is used.
  • Finally, as pointed out later, a sample size of 100,000 is used for deadline_delta ranging from 10 to 50, stepped by 10. This range is chosen because it has lots of uncertainty (large variance). A step of 10 is used instead of 1 due to limited computing power.

Unlike the previous simulations, where the deadline_delta was randomly chosen, a fixed deadline_delta is used here because we don't know the true distribution of the deadline_delta. If it's randomly chosen, we are assuming it follows a uniform distribution, which, IMO, is unlikely the case. Plus, as mentioned by @halseth, we would have very different deadline_delta values based on the various output types.

Now we turn to results.

@yyforyongyu
Collaborator

Simulation results

The following table summarizes the findings. This time, the initial max_fee_rate is taken from the max(blocks[height-deadline_delta:height]). And a moving window with a length of deadline_delta is used to update the max_fee_rate. Overall, the max_fee_rate is updated from height-deadline_delta to height+deadline_delta. Refer to Example of the Algorithm under the section Optimization on Fee Rate Functions in the report for details.

From the experiment,

  • when deadline_delta is above 31, the best one is the cubic delay function (f(x) = x³).
  • when deadline_delta is between 29 and 31, the linear function (f(x) = x) performs the best.
  • when deadline_delta is between 26 and 28, the cubic delay function (f(x) = x³) performs the best.
  • when deadline_delta is between 13 and 25, the linear function (f(x) = x) performs the best.
  • when deadline_delta is less than 13, the cubic eager function (f(x) = 1 + (x-1)³) performs the best.

The simulations used a transaction of 0.04 BTC and 200 vbytes. Costs are measured in sats/vbyte, calculated as,

cost = num_of_success × avg_fee_rate + num_of_failed × 0.04 / 200

The following table gives the results with deadline_delta 70, sample size 100,000 and 100 trials. To see the full results, check here.

| name | fee_rate_mean | fee_rate_std | ideal_fee_rate_mean | ideal_fee_rate_std | costs_mean | costs_std | failure_mean | failure_std |
| --- | --- | --- | --- | --- | --- | --- | --- | --- |
| linear | 15.2402 | 0.118585 | 2.83858 | 0.0483287 | 15.2402 | 0.118585 | 0 | 0 |
| cubic_delay | 10.3778 | 0.106819 | 2.83517 | 0.0548109 | 10.4438 | 0.144552 | 3.3e-06 | 5.8698e-06 |
| cubic_eager | 22.0289 | 0.140953 | 2.8276 | 0.0504101 | 22.0289 | 0.140953 | 0 | 0 |

ideal_fee_rate simply uses the minimal fee rate found in the next deadline_delta blocks. It can be interpreted as the minimal cost, given that our transaction value is greater than the fee paid.

Cost based on transaction size and value

It should be no surprise that a different transaction size/value will yield a very different result. In general, the cost for each fee function can be calculated using the following formula,

cost = (1 - λ)⋅r + λρ

The cost can be interpreted as the money cost per unit of transaction size, in sats/vbyte, where,

  • λ is the function's failure rate
  • r is the function's fee rate
  • ρ is the value of tx_value / tx_size, the value density.

Based on different ρ and deadline_delta values, the cost of the functions varies, thus we would have an optimal choice. Also notice that, when the failure rates are the same among functions, the cost is solely determined by the functions' fee rates. When that happens, we can simply ignore ρ.
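As a sanity check against the deadline_delta = 70 table above (illustrative arithmetic only): for cubic_delay, λ ≈ 3.3 × 10⁻⁶, r ≈ 10.38 sats/vbyte, and ρ = 0.04 BTC / 200 vbyte = 20,000 sats/vbyte, so cost ≈ (1 − 3.3 × 10⁻⁶) × 10.38 + 3.3 × 10⁻⁶ × 20,000 ≈ 10.44 sats/vbyte, matching the costs_mean column.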

The result is summarized as follows,

  • when the deadline_delta is below 40, performance of the three functions are affected by the value density.
  • when the deadline_delta is between 40 and 80, the function cubic eager is out of the picture. The competing candidates are function linear and function cubic delay.
  • when the deadline_delta is above 80, function cubic delay always performs the best.

Optimal function based on the value density (sats/vbyte).

| deadline_delta | lower ρ (sats/vbyte) | upper ρ (sats/vbyte) | in between |
| --- | --- | --- | --- |
| 10 | 656.39: cubic_delay | 9008.22: cubic_eager | linear |
| 20 | 6783.27: cubic_delay | 106809.83: cubic_eager | linear |
| 30 | 22797.08: cubic_delay | 437775.44: cubic_eager | linear |
| 40 | 65983.36: cubic_delay | 65983.36: linear | n/a |
| 50 | 206116.79: cubic_delay | 206116.79: linear | n/a |
| 60 | 430591.74: cubic_delay | 430591.74: linear | n/a |
| 70 | 1473457.84: cubic_delay | 1473457.84: linear | n/a |
| 80 | 3321953.66: cubic_delay | 3321953.66: linear | n/a |
| 90 | cubic_delay | cubic_delay | n/a |
| 100 | cubic_delay | cubic_delay | n/a |

If we assume our average transaction size is 200 vbytes, then we have,

| deadline_delta | lower value (BTC) | upper value (BTC) | in between |
| --- | --- | --- | --- |
| 10 | 0.00131278: cubic_delay | 0.01801644: cubic_eager | linear |
| 20 | 0.01356654: cubic_delay | 0.21361966: cubic_eager | linear |
| 30 | 0.04559416: cubic_delay | 0.87555088: cubic_eager | linear |
| 40 | 0.13196672: cubic_delay | 0.13196672: linear | n/a |
| 50 | 0.41223358: cubic_delay | 0.41223358: linear | n/a |
| 60 | 0.86118348: cubic_delay | 0.86118348: linear | n/a |
| 70 | 2.94691568: cubic_delay | 2.94691568: linear | n/a |
| 80 | 6.64390732: cubic_delay | 6.64390732: linear | n/a |
| 90 | cubic_delay | cubic_delay | n/a |
| 100 | cubic_delay | cubic_delay | n/a |

To see the detailed analysis, check here.

yyforyongyu added a commit to yyforyongyu/lnd that referenced this issue Mar 2, 2024
This commit adds a new interface, `FeeFunction`, to deal with
calculating fee rates. In addition, a simple linear function is
implemented, hence `LinearFeeFunction`, which will be used to calculate
fee rates when bumping fees. Check lightningnetwork#4215 for other type of fee functions
that can be implemented.
yyforyongyu added a commit that referenced this issue Apr 19, 2024
This commit adds a new interface, `FeeFunction`, to deal with
calculating fee rates. In addition, a simple linear function is
implemented, hence `LinearFeeFunction`, which will be used to calculate
fee rates when bumping fees. Check #4215 for other type of fee functions
that can be implemented.