
[5/5]sweep: introduce budget-based deadline aware fee bumper #8424

Merged

Conversation


@yyforyongyu yyforyongyu commented Jan 25, 2024

This PR introduces a budget-based deadline-aware fee bumper that solves the following issues,

  1. overshooting fees - the sweeper sometimes sweeps an input without considering its cost.
  2. RBF oblivious - the sweeper cannot properly create RBF-compliant txns.
  3. naive fee management - the sweeper relies solely on the fee estimator to make decisions on fee rates.

In addition, this PR and its dependencies have refactored the sweeper so it's easier to reason about its architecture and to implement more advanced fee bumping strategies in the future.

Replaces #7549 and fixes #4215

Definitions

A budget is the amount of sats you are willing to pay when sweeping a given input. Often it's proportional to the bitcoin value of the input: for instance, if you are willing to pay 50% to sweep an HTLC of 100,000 sats, the budget would be 50,000 sats. The budget also depends on the context, or the type of input,

  • for a CPFP-purpose anchor, although its value is only 330 sats, the budget would be the value under protection, which is the value of the incoming or outgoing HTLC with the smallest CLTV value.
  • for first-level HTLC txns, the budget is proportional to the HTLC value, although this budget cannot be spent from the HTLC output directly because its value must all go to the second-level HTLC output, which means we need to borrow this budget from other inputs.

Because we define the budget beforehand, we make sure we won't end up paying more in fees than the input's value.
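To make the arithmetic concrete, here's a minimal sketch of a budget computed as a ratio of the input's value (the names are hypothetical illustrations, not lnd's actual API):

```go
package main

import "fmt"

// Amount is a stand-in for btcutil.Amount: satoshis as int64.
type Amount int64

// budgetForInput is a hypothetical helper: the budget is a fixed
// ratio of the input's value, e.g. 50% of a 100,000-sat HTLC.
func budgetForInput(value Amount, ratio float64) Amount {
	return Amount(float64(value) * ratio)
}

func main() {
	htlc := Amount(100_000)
	fmt.Println(budgetForInput(htlc, 0.5)) // 50000 sats
}
```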

A deadline defines a time preference, expressed in blocks, by which an input must be confirmed, as otherwise we may lose money. Among all the outputs generated from an FC, only three of them are time-sensitive, as detailed below,

  • to_local output, which is currently swept using a conf target of 6. There's no time pressure on sweeping this UTXO as it can only be swept by us.
  • first-level outgoing HTLCs, which are currently swept using a conf target of 6. The deadline is the CLTV value from this HTLC's corresponding incoming HTLC: once that CLTV is reached, our upstream peer can take the incoming HTLC via the timeout path, and our downstream peer can take the outgoing HTLC via the preimage path. In this PR we use the nLockTime field as the deadline, since the deadline defines how different inputs can be grouped together: because of the SINGLE|ANYONECANPAY sighash flag, only txns sharing the same nLockTime can be merged into the same sweeping tx, thus we use this field as the deadline for outgoing HTLCs (see the grouping sketch after this list).
  • first-level incoming HTLCs, which are also swept using a conf target of 6. The deadline is defined as the CLTV value used by the remote. Since we must already have the preimage to perform this sweep, we must make sure it confirms before this HTLC is collected by the remote via the timeout path.
  • second-level transactions, which are swept using a conf target of 6. There is no time pressure here.
  • anchor output before the FC is confirmed, which is used for CPFP. At the moment we already have a deadline defined for this type of sweeping, and we will continue to use it.
  • anchor output after the FC is confirmed: there's no time pressure here. Although it can be collected by anyone after 16 blocks (15 once the FC is confirmed), it doesn't provide much economic incentive to sweep it.
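To make the nLockTime constraint above concrete, here's a minimal sketch (hypothetical types, not the PR's actual code) of grouping inputs by deadline height, since only inputs sharing one nLockTime may land in the same sweeping tx:

```go
package sweep

// sweepInput is a hypothetical, stripped-down pending input: only
// the deadline height matters for this sketch.
type sweepInput struct {
	outpoint       string
	deadlineHeight int32
}

// groupByDeadline clusters inputs so each set shares one deadline
// height, which becomes the sweeping tx's nLockTime; inputs with
// different deadlines can never share a tx under SINGLE|ANYONECANPAY.
func groupByDeadline(inputs []sweepInput) map[int32][]sweepInput {
	sets := make(map[int32][]sweepInput)
	for _, inp := range inputs {
		sets[inp.deadlineHeight] = append(
			sets[inp.deadlineHeight], inp,
		)
	}
	return sets
}
```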

Design

On a high level, when inputs are added to the UtxoSweeper via the entry point SweepInput, the sweeper will periodically check their states and ask the UtxoAggregator to group these inputs into different input sets. Each set is sent to the Bumper, which constructs an RBF-compliant tx, broadcasts it, monitors it, and performs fee bumps if needed. The Bumper consults the FeeFunction for the fee rate to use. In detail (see the interface sketch after this list),

  • UtxoAggregator is an interface that defines a ClusterInputs method which returns a list of InputSet interfaces. There are two implementations of UtxoAggregator, SimpleAggregator which retains the old clustering logic, and a new BudgetAggregator which groups inputs based on budget and deadline.
  • Bumper is implemented by TxPublisher, which takes care of tx creation and broadcast. It leverages TestMempoolAccept to make the broadcast RBF-aware.
  • FeeFunction is implemented by LinearFeeFunction, a simple function that increases the fee rate linearly based on the blocks left till the deadline. There are more advanced algorithms in #4215, which can be easily implemented in the future.
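For orientation, a simplified sketch of how these pieces might fit together (signatures abbreviated and hypothetical; the PR's real interfaces carry more context and richer return types):

```go
package sweep

// Input is a stand-in for the sweeper's pending input type.
type Input struct{}

// UtxoAggregator groups pending inputs into input sets.
type UtxoAggregator interface {
	ClusterInputs([]Input) []InputSet
}

// InputSet is one group of inputs swept together in a single tx.
type InputSet interface {
	Inputs() []Input
	Budget() int64         // total sats we're willing to pay in fees
	DeadlineHeight() int32 // becomes the sweeping tx's nLockTime
}

// Bumper creates, broadcasts, and monitors the sweeping tx for a set,
// fee bumping it when a new block arrives and it's still unconfirmed.
type Bumper interface {
	Broadcast(set InputSet) error
}

// FeeFunction maps blocks-left-till-deadline to a fee rate. A linear
// version steps from a start rate toward the budget-capped end rate:
//
//	rate(step) = startRate + step * (endRate - startRate) / totalSteps
type FeeFunction interface {
	FeeRate() uint64  // current fee rate, e.g. in sat/kw
	Increment() error // advance one step toward the end rate
}
```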

RBF tx, not inputs

There's an alternative approach that was quickly ditched due to its high level of complexity. In this PR, the Bumper takes the input set packaged by UtxoAggregator and performs RBF on that set. Suppose a few blocks pass: the UtxoAggregator may choose to package the set differently based on the current mempool condition and the budgets supplied. For instance, if a set of three inputs was made, but later the UtxoAggregator decides to create three sets, each containing one of the inputs, it would then be very difficult for the Bumper to handle the RBF. However, I want to mention it here as this can be a future optimization task.

No more repeated RBF logic

Previously we implemented simple RBF logic and CPFP rules in lnd, while the actual rules can be very complicated. With the help of TestMempoolAccept, we no longer need to care about the implementation details of these rules, as we can now use this RPC as a validator to guide us in creating RBF-compliant txns. The only downside is that for neutrino backends this RPC isn't available. We shouldn't give up there, though: with the extra fee info stored in the sweeper, we can still perform a native RBF with limited efficacy, since without a mempool there's little we can do.
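A minimal sketch of that validator-style loop, assuming a backend exposing a testmempoolaccept-like call (the interface and helper names here are hypothetical, not the PR's actual code):

```go
package sweep

import (
	"errors"

	"github.com/btcsuite/btcd/wire"
)

// mempoolAcceptor is a hypothetical abstraction over bitcoind's
// testmempoolaccept RPC: nil means the tx would be accepted,
// otherwise the node's reject reason is returned.
type mempoolAcceptor interface {
	TestMempoolAccept(tx *wire.MsgTx) error
}

// createAndCheck keeps rebuilding the sweeping tx at ever higher fee
// rates until the backend says it's acceptable (i.e. RBF-compliant),
// capped by the budget-derived max fee rate. buildTx is a hypothetical
// callback that signs a sweeping tx at the given rate; step must be > 0.
func createAndCheck(m mempoolAcceptor,
	buildTx func(feeRate uint64) (*wire.MsgTx, error),
	feeRate, maxFeeRate, step uint64) (*wire.MsgTx, error) {

	for rate := feeRate; rate <= maxFeeRate; rate += step {
		tx, err := buildTx(rate)
		if err != nil {
			return nil, err
		}
		// Let the node validate the RBF rules for us instead of
		// re-implementing them.
		if err := m.TestMempoolAccept(tx); err != nil {
			continue // bump and retry
		}
		return tx, nil
	}
	return nil, errors.New("budget exhausted before an acceptable tx was made")
}
```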

TODO

  • add minimal RBF support for neutrino backend
  • fix BumpFee RPC
  • add unit tests
  • fix current itests
  • add new itests
  • add design diagram
  • add readme in sweeper




yyforyongyu commented Jan 25, 2024

Looking for concept ACK for the last 9 commits, which are the commits after temp: docs: update release docs. cc @Roasbeef @ziggie1984 @morehouse


saubyk commented Jan 25, 2024

cc: @Roasbeef for concept ack

@morehouse (Collaborator)

Fundamentally, how much do we save by doing batched sweeps? Other implementations don't do this at all.

I imagine we could make RBFs much simpler and less error-prone without the batching. And there are other reasons to do less batching:

  • Batching anchor spends can violate the CPFP carve-out requirements, thereby leaving us unprotected against pinning.
  • Proposed cluster mempool changes would prevent batching anchor spends completely.

@yyforyongyu (Collaborator, Author)

> Fundamentally, how much do we save by doing batched sweeps? Other implementations don't do this at all.

Yeah, I had the same question - I think it only helps with an FC that has many HTLCs, or multiple FCs happening at the same time. Need more empirical data though. If the scenario described in testMultiHopHtlcAggregation happens, then we can definitely save some fees. My anticipation is we'd see more spendable UTXOs fail to get into the mempool or a block due to rising fees, and aggregation is needed to help bring down the cost.

> I imagine we could make RBFs much simpler and less error-prone without the batching. And there are other reasons to do less batching:

I think the general idea here is we don't need to care about mempool rules anymore if TestMempoolAccept can be used. We just retry creating txns until TestMempoolAccept allows it, saving us from re-implementing the logic again here.

@morehouse (Collaborator)

> I think the general idea here is we don't need to care about mempool rules anymore if TestMempoolAccept can be used. We just retry creating txns until TestMempoolAccept allows it, saving us from re-implementing the logic again here.

But the reason that TestMempoolAccept fails is important. Retrying with a higher feerate might work most of the time, but would we be able to recognize if the failure was caused by RBF rule 2 or 5 being violated? In that case we would need to go back to the UTXO aggregation step and start breaking apart batches.

If we stop batch sweeping, these problems go away.

@yyforyongyu (Collaborator, Author)

> But the reason that TestMempoolAccept fails is important. Retrying with a higher feerate might work most of the time, but would we be able to recognize if the failure was caused by RBF rule 2 or 5 being violated? In that case we would need to go back to the UTXO aggregation step and start breaking apart batches.

Yeah that's the next step, to map all the errors. TestMempoolAccept gives a detailed RejectReason which can be used to decide the next step. Besides, how could rule 5 be violated?

> If we stop batch sweeping, these problems go away.

So does the benefit? I guess we can test it out - add a new implementation of UtxoAggregator, say NoAggregator, that doesn't do any aggregation; should be straightforward, and the rest can stay the same.

On the other hand, I don't think the problems would just go away. As @Crypt-iQ mentioned here, it seems only CPFP-purpose anchor sweeping would cause a violation of rule 2, if not aggregated right. Other than that, I think we face the same issues when RBFing a single input?

@morehouse (Collaborator)

> Yeah that's the next step, to map all the errors. TestMempoolAccept gives a detailed RejectReason which can be used to decide the next step. Besides, how could rule 5 be violated?

If the peer is pinning the commitment transaction, and LND is trying to batch the anchor spend with another anchor spend, CPFP carve-out will not work, and rule 5 can be violated.

> On the other hand, I don't think the problems would just go away. As @Crypt-iQ mentioned here, it seems only CPFP-purpose anchor sweeping would cause a violation of rule 2, if not aggregated right. Other than that, I think we face the same issues when RBFing a single input?

We wouldn't have to worry about RBF rules 2 or 5 if we stop batching. The other rules we wouldn't have to worry about either since we can blindly fee bump and try TestMempoolAccept.

@yyforyongyu (Collaborator, Author)

> If the peer is pinning the commitment transaction, and LND is trying to batch the anchor spend with another anchor spend, CPFP carve-out will not work, and rule 5 can be violated.

Just to be sure, I'm referring to the rule numbers here, and I think you mean rule 2, right?

It seems to me the CPFP-purpose anchor output is the most complicated one, and I think if we are concerned we can do the CPFP in contractcourt and never offer it to the sweeper via SweepInput, or just never aggregate it with other inputs.

@morehouse (Collaborator)

> Just to be sure, I'm referring to the rule numbers here, and I think you mean rule 2, right?

Ah, sorry, it's not an RBF rule at all! What I was referring to is the descendant limit (25 transactions). A CPFP will be rejected if it exceeds the descendant limit, unless CPFP carve-out is used.

> It seems to me the CPFP-purpose anchor output is the most complicated one, and I think if we are concerned we can do the CPFP in contractcourt and never offer it to the sweeper via SweepInput, or just never aggregate it with other inputs.

+1. I've created #8433 for further discussion about this.

@saubyk saubyk added this to the v0.18.0 milestone Jan 28, 2024
@yyforyongyu yyforyongyu self-assigned this Jan 30, 2024
@yyforyongyu yyforyongyu added the utxo sweeping and llm-review labels Jan 30, 2024
@ProofOfKeags ProofOfKeags self-requested a review February 8, 2024 22:24
@ProofOfKeags ProofOfKeags left a comment

Overall approach seems fine, though I have a number of comments on implementation details. Some larger than others.

// txns, the tx-level `nLockTime` must be the same in order for
// them to be aggregated into the same sweeping tx. This `nLockTime`
// value is implicitly expressed as the deadline height.
if b.deadlineHeight != pi.params.DeadlineHeight {
Collaborator

This should probably be > instead of !=; even if we choose not to cluster things as aggressively, we can safely add an input with a later deadline to a set with a specified earlier one.

Collaborator

Yes, if it's not a covenant where we already have the signature of the peer.

Collaborator

I suppose that includes the ss-htlc txs too... 😕

Collaborator Author

So I came up with a way to solve the issue with the help of fn.Option. It goes like this,

  • for ss-htlcs, they must specify their deadlines as fn.Some, with their nLocktime values.
  • for non-time-sensitive outputs, they specify their deadlines as fn.None.
  • when aggregating, all fn.None inputs will be put in one set, allowing us to sweep them in one tx.

This can be extended to allow adding a later deadline input, as long as it's fn.None.
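A rough sketch of the idea, using a minimal local Option type as a stand-in for lnd's fn.Option (all names here are hypothetical):

```go
package sweep

// deadlineOpt is a stand-in for fn.Option[int32]: set=false models
// fn.None (no time pressure), set=true models fn.Some(nLockTime).
type deadlineOpt struct {
	set   bool
	value int32
}

// canShareTx reports whether two inputs may land in one sweeping tx:
// two fn.None inputs group freely, while fn.Some inputs must carry
// the exact same nLockTime.
func canShareTx(a, b deadlineOpt) bool {
	if !a.set && !b.set {
		return true
	}
	return a.set && b.set && a.value == b.value
}
```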

// the event channel on the record. Any broadcast-related errors will not be
// returned here, instead, they will be put inside the `BumpResult` and
// returned to the caller.
func (t *TxPublisher) broadcast(requestID uint64) (*BumpResult, error) {
Collaborator

I'm noticing that this publisher doesn't really persist across restarts. Isn't that gonna be a problem? Or am I missing something?

Member

The sweeper doesn't really persist any state either. IIUC, the way this'll recover after a restart is by using testmempoolaccept to determine whether a new fee rate step actually needs to happen or not.

Collaborator Author

Yeah maybe we should reload the pending sweep txns on restart and re-offer them to the publisher?

Collaborator

I don't see the downside to doing so. Either they get rejected because they are already there and not sufficiently rbf'ed in which case we say "cool thanks 👍🏼" or it accepts it and we say "cool thanks 👍🏼".

Member

> Yeah maybe we should reload the pending sweep txns on restart and re-offer them to the publisher?

So today it's the job of the resolvers to detect after a restart that their inputs weren't swept, and to re-offer them.

I think the only gap in this would be user issued fee bumping requests.

@ziggie1984 ziggie1984 left a comment

Nice work 🫶

So I see we introduced the BudgetAggregator in this PR - are we planning to make it available to activate in LND 0.18? It feels to me that we are not quite there yet.

So I think without being able to identify the inputs which cause a potential broadcast to fail, we should not activate the new aggregator, wdyt? Moreover, it's still an open question how we specify the deadlines for the non-time-sensitive inputs.

Moreover, I wonder whether there is a reliable way to get information about why a transaction cannot be broadcast using bitcoind, or whether this new logic will need to be integrated into LND?

I think maybe we should first try to introduce the non-aggregator, which might not be super efficient but removes a lot of the side effects we must think about when aggregating inputs?

@yyforyongyu yyforyongyu changed the base branch from master to elle-sweeper-rbf-aware February 21, 2024 00:30
@yyforyongyu yyforyongyu force-pushed the elle-sweeper-rbf-aware branch 2 times, most recently from d44ac7b to a5af885, February 28, 2024 09:01
This commit adds more interface methods to `InputSet` to prepare for the addition of the budget-based aggregator.

This way it's easier to pass values to this method in various callsites.

This commit changes `markInputsPendingPublish` to take `InputSet` only. This is needed for the following commits, as we won't be able to know the tx being created beforehand, yet we still want to make sure these inputs won't be grouped into another input set, as that would complicate our RBF process.
This commit adds `BudgetInputSet`, which implements `InputSet`. It handles the pending inputs based on the supplied budgets and will be used in the following commit.

This commit adds `BudgetAggregator` as a new implementation of `UtxoAggregator`. This aggregator groups inputs by their deadline heights and creates input sets that can be used directly by the fee bumper for fee calculations.

This commit adds a new interface, `Bumper`, to handle RBF for a given input set. It's responsible for creating the sweeping tx using the input set, and monitors its confirmation status to decide whether an RBF should be attempted or not.

We leave implementation details to future commits, and focus on mounting this `Bumper` interface to our sweeper in this commit.
As there will be dedicated new tests for them.

As shown in the following commit, fee rate calculation will now be handled by the fee bumper, hence there's no need to expose this on the `InputSet` interface.

This commit adds a new interface, `FeeFunction`, to deal with calculating fee rates. In addition, a simple linear function, `LinearFeeFunction`, is implemented, which will be used to calculate fee rates when bumping fees. Check lightningnetwork#4215 for other types of fee functions that can be implemented.
This commit adds the method `MaxFeeRateAllowed` to calculate the max fee rate. The caller may specify a large MaxFeeRate value which cannot be covered by the budget. In that case, we default to using the max fee rate calculated via `budget/weight`.
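As a hedged illustration of that cap (hypothetical helper; fee rates in sat/kw, weight in weight units):

```go
package sweep

// maxFeeRateAllowed caps the user-provided max fee rate by what the
// budget can actually pay for this tx's weight: budget (sats) spread
// over txWeight (weight units), expressed in sat/kw.
func maxFeeRateAllowed(budget, txWeight int64, userMax uint64) uint64 {
	budgetRate := uint64(budget * 1000 / txWeight)
	if userMax > budgetRate {
		return budgetRate
	}
	return userMax
}
```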
This commit adds `TxPublisher`, which implements the `Bumper` interface. This is part one of the implementation and focuses on the `Broadcast` method, which guarantees a tx can be published in an RBF-compliant way. It does so by leveraging the `testmempoolaccept` API, increasing the fee rate until an RBF-compliant tx is made, and then broadcasts it.

This tx will then be monitored by the `TxPublisher`, and in the following commit the monitoring process will be added.
This commit finishes the implementation of `TxPublisher` by adding the monitor process. Whenever a new block arrives, the publisher will check all its monitored records and attempt to fee bump them if necessary.

This commit adds a private type `mSatPerKWeight` that expresses a given fee rate in millisatoshi per kw. This is needed to increase the precision of the fee function: when sweeping anchor inputs with a deadline delta of over 1000 blocks, the per-block fee rate increment would likely be 0 sat/kw due to rounding.
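To make the precision issue concrete with assumed numbers: stepping from 253 sat/kw to 1,000 sat/kw over 1,008 blocks means each block's increment is 747/1,008 ≈ 0.74 sat/kw, which truncates to 0 in integer sat/kw, so the rate would never move; the same step in msat/kw is 747,000/1,008 ≈ 741 msat/kw, which accumulates correctly.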
So these inputs can be retried by the sweeper.

Roasbeef commented Apr 3, 2024

> We should really have a mechanism to detect "bad apple" inputs and exclude them from batching with other inputs.
> Previously this was done via random exponential backoff, but we removed that a few PRs ago.

Detecting bad inputs was sort of a side effect of the old back-off process. With the new approach, we'll use testmempoolaccept, so we'll never even broadcast this bad set of inputs. We'll also eventually return those bad inputs (and the other inputs that were aggregated with them) back to the set of pending inputs to be swept, where we'll be able to re-cluster them vs just giving up on them altogether.

Agreed though that more precise bad input detection can be added in follow up PRs.

@Roasbeef Roasbeef left a comment

Reviewed 51 of 51 files at r8, 26 of 32 files at r10, 10 of 12 files at r11, 2 of 2 files at r12, 10 of 10 files at r13, 7 of 7 files at r14, all commit messages.
Reviewable status: all files reviewed, 138 unresolved discussions (waiting on @Crypt-iQ, @ProofOfKeags, @yyforyongyu, and @ziggie1984)

@Roasbeef Roasbeef merged this pull request into lightningnetwork:elle-new-sweeper Apr 3, 2024
@yyforyongyu yyforyongyu deleted the sweeper-fee-bump branch April 3, 2024 22:49
@morehouse (Collaborator)

> Detecting bad inputs was sort of a side effect of the old back-off process. With the new approach, we'll use testmempoolaccept, so we'll never even broadcast this bad set of inputs. We'll also eventually return those bad inputs (and the other inputs that were aggregated with them) back to the set of pending inputs to be swept, where we'll be able to re-cluster them vs just giving up on them altogether.

With the current set of PRs, re-clustering will generally group the same inputs together again (or add more inputs!), so the bad input will block the others forever.

If we never cluster bad inputs with other inputs, then there's no problem. But how confident are we that we can do that?


Roasbeef commented Apr 5, 2024

> If we never cluster bad inputs with other inputs, then there's no problem. But how confident are we that we can do that?

Here's my current understanding:

So if we don't bisect, then unless those bad inputs are actually swept by a 3rd party somehow, they'll keep on being churned through.

One alternative to bisecting is that we modify the clustering to put inputs that have failed in their own cluster. This has the downside of still consuming wallet inputs at times to help sweep those at a target fee rate. So perhaps bisection really is just the best way forward.

A sketch of bisection would look like this (a rough Go rendering follows the list):

  • After a failed publish, loop thru all the failed inputs:
    • Make a 1-in-1 out spend (output doesn't matter)
    • Sign that and try to do testmempoolaccept on it
    • Based on the return value (is it granular/precise enough?), decide to drop the inputs altogether (caller notified) or decide that it's safe to re-integrate them (eg: deep-ish re-org, locktime no longer valid)
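A rough Go rendering of that loop, under the same assumptions (all helper names hypothetical):

```go
package sweep

import "github.com/btcsuite/btcd/wire"

// triageFailedInputs probes each input of a failed batch on its own:
// build a 1-in-1-out spend per input and validate it with a
// testmempoolaccept-style callback. Inputs whose solo spend is
// rejected are dropped (caller notified); the rest are re-offered to
// the sweeper for re-clustering.
func triageFailedInputs(
	buildSoloSweep func(op wire.OutPoint) (*wire.MsgTx, error),
	testAccept func(*wire.MsgTx) error,
	failed []wire.OutPoint) (retry, dropped []wire.OutPoint) {

	for _, op := range failed {
		tx, err := buildSoloSweep(op)
		if err != nil {
			dropped = append(dropped, op)
			continue
		}
		if err := testAccept(tx); err != nil {
			// E.g. already spent by a third party, or locktime
			// no longer valid after a deep-ish re-org.
			dropped = append(dropped, op)
			continue
		}
		retry = append(retry, op)
	}
	return retry, dropped
}
```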

cc @ziggie1984 @yyforyongyu

Labels
llm-review, utxo sweeping
Projects
Status: Done
Development

Successfully merging this pull request may close these issues.

sweeper+contractcourt: deadline aware HTLC claims & commitment confirmation
7 participants