
Bug: Reusing TAP Address in Send Asset #775

Closed
CRex15 opened this issue Sep 7, 2023 · 7 comments
Labels
bug Something isn't working

Comments


CRex15 commented Sep 7, 2023

Describe the bug
Reusing a TAP Address when trying to "Send Asset" between tapd nodes fails in several ways. The first time the address is reused, a popup reports that the send succeeded, but the balances on the tapd nodes do not reflect a second send. The second time it is reused, the proper error message appears. The third time, the sending docker image disconnects and goes into a perpetual restart state.

To Reproduce
Steps to reproduce the behavior:

  1. Make 2 tapd nodes and generate and sync an asset.
  2. Go to the second tapd node and generate a TAP Address.
  3. Send Asset from node 1 to node 2 with the TAP Address.
  4. Click "Send Asset" and reuse the SAME TAP Address.
  5. Redo step 4 until all errors occur.

Expected behavior
The "2 UNKNOWN: transaction rejected: output already spent" popup should appear every time the address is reused. Currently it does not appear the first time a TAP Address is reused.

Desktop (please complete the following information):

  • OS: Ubuntu 22.04.3 LTS
  • Polar Version: [e.g. 2.0.0]
  • Docker Version: [e.g. 24.0.6]
  • Docker Compose Version: [e.g. 2.20.3]

Additional context
I was able to reproduce this behavior multiple times.

@CRex15 CRex15 added the bug Something isn't working label Sep 7, 2023

jamaljsr commented Sep 9, 2023

I was able to reproduce this issue. It actually appears to be a bug in tapd itself. I followed the same steps using just tapcli commands, and it resulted in the node shutting down.

Here are the relevant logs when sending to the same address twice.

2023-09-09 02:36:24.049 [INF] FRTR: Received to send request to 1 addrs: [TapAddr{id=302b1ad9de3e79a232e30f08910cca61c8325c0fcd662f1a1873ce856e391c0e, amount=100, script_key=02b9af41cb492092384bd46ae02af5a3299cbddec0e03e4e4211393a29fabfc49e}]
2023-09-09 02:36:24.049 [INF] FRTR: ChainPorter executing state: SendStateVirtualCommitmentSelect
2023-09-09 02:36:24.051 [INF] FRTR: Identified 1 eligible asset inputs for send of 100 to 302b1ad9de3e79a232e30f08910cca61c8325c0fcd662f1a1873ce856e391c0e
2023-09-09 02:36:24.051 [INF] FRTR: Selected 1 asset inputs for send of 100 to 302b1ad9de3e79a232e30f08910cca61c8325c0fcd662f1a1873ce856e391c0e
2023-09-09 02:36:24.068 [INF] FRTR: ChainPorter executing state: SendStateVirtualSign
2023-09-09 02:36:24.068 [INF] FRTR: Generating Taproot Asset witnesses for send to: 02b9af41cb492092384bd46ae02af5a3299cbddec0e03e4e4211393a29fabfc49e
2023-09-09 02:36:24.070 [INF] FRTR: ChainPorter executing state: SendStateAnchorSign
2023-09-09 02:36:24.071 [INF] FRTR: Constructing new Taproot Asset commitments for send to: 02b9af41cb492092384bd46ae02af5a3299cbddec0e03e4e4211393a29fabfc49e
2023-09-09 02:36:24.083 [INF] FRTR: Received funded PSBT packet
2023-09-09 02:36:24.083 [INF] FRTR: Adjusting send pkt by delta of 2825 from 9900 sats to 12725 sats
2023-09-09 02:36:24.086 [INF] FRTR: ChainPorter executing state: SendStateLogCommit
2023-09-09 02:36:24.094 [INF] FRTR: Committing pending parcel to disk
2023-09-09 02:36:24.113 [INF] FRTR: ChainPorter executing state: SendStateBroadcast
2023-09-09 02:36:24.116 [INF] FRTR: Broadcasting new transfer tx, txid=318aed1f6a8af649bbe5444192587ee6dcce13829acbd7aae07b64ad5f6effc9
2023-09-09 02:36:24.132 [INF] FRTR: Outbound parcel with txid 318aed1f6a8af649bbe5444192587ee6dcce13829acbd7aae07b64ad5f6effc9 now pending (num_inputs=1, num_outputs=2), delivering notification
2023-09-09 02:36:24.132 [INF] FRTR: ChainPorter executing state: SendStateWaitTxConf
2023-09-09 02:36:24.133 [INF] FRTR: Waiting for confirmation of transfer_txid=318aed1f6a8af649bbe5444192587ee6dcce13829acbd7aae07b64ad5f6effc9
2023-09-09 02:36:34.299 [INF] FRTR: ChainPorter executing state: SendStateStoreProofs
2023-09-09 02:36:34.299 [INF] FRTR: Importing 0 passive asset proofs into local Proof Archive
2023-09-09 02:36:34.302 [INF] FRTR: Importing proof for output 0 into local Proof Archive
2023-09-09 02:36:34.307 [INF] FRTR: Importing proof for output 1 into local Proof Archive
2023-09-09 02:36:34.309 [INF] FRTR: ChainPorter executing state: SendStateReceiverProofTransfer
2023-09-09 02:36:34.309 [INF] PROF: Attempting to deliver receiver proof for send of asset_id=33303262316164396465336537396132333265333066303839313063636136316338333235633066636436363266316131383733636538353665333931633065, amt=100
2023-09-09 02:36:34.310 [INF] PROF: Waiting 9m35.138374502s before attempting to deliver receiver proof to receiver using backoff procedure

You'll see that there is a delay in delivering a proof. If I immediately try to resend to the same address, it returns the error and kills the node.

2023-09-09 02:37:32.876 [INF] FRTR: Received to send request to 1 addrs: [TapAddr{id=302b1ad9de3e79a232e30f08910cca61c8325c0fcd662f1a1873ce856e391c0e, amount=100, script_key=02b9af41cb492092384bd46ae02af5a3299cbddec0e03e4e4211393a29fabfc49e}]
2023-09-09 02:37:32.876 [INF] FRTR: ChainPorter executing state: SendStateVirtualCommitmentSelect
2023-09-09 02:37:32.879 [INF] FRTR: Identified 1 eligible asset inputs for send of 100 to 302b1ad9de3e79a232e30f08910cca61c8325c0fcd662f1a1873ce856e391c0e
2023-09-09 02:37:32.879 [INF] FRTR: Selected 1 asset inputs for send of 100 to 302b1ad9de3e79a232e30f08910cca61c8325c0fcd662f1a1873ce856e391c0e
2023-09-09 02:37:32.895 [INF] FRTR: ChainPorter executing state: SendStateVirtualSign
2023-09-09 02:37:32.895 [INF] FRTR: Generating Taproot Asset witnesses for send to: 02b9af41cb492092384bd46ae02af5a3299cbddec0e03e4e4211393a29fabfc49e
2023-09-09 02:37:32.898 [INF] FRTR: ChainPorter executing state: SendStateAnchorSign
2023-09-09 02:37:32.898 [INF] FRTR: Constructing new Taproot Asset commitments for send to: 02b9af41cb492092384bd46ae02af5a3299cbddec0e03e4e4211393a29fabfc49e
2023-09-09 02:37:32.911 [INF] FRTR: Received funded PSBT packet
2023-09-09 02:37:32.912 [INF] FRTR: Adjusting send pkt by delta of 2825 from 9900 sats to 12725 sats
2023-09-09 02:37:32.914 [INF] FRTR: ChainPorter executing state: SendStateLogCommit
2023-09-09 02:37:32.922 [INF] FRTR: Committing pending parcel to disk
2023-09-09 02:37:32.941 [INF] FRTR: ChainPorter executing state: SendStateBroadcast
2023-09-09 02:37:32.945 [INF] FRTR: Broadcasting new transfer tx, txid=932e75a090aeefa961f0d0d330f0debadb0998ea113243e6ab1e4f5ef2295e34
2023-09-09 02:37:32.994 [ERR] FRTR: Error evaluating state (SendStateBroadcast): rpc error: code = Unknown desc = transaction rejected: output already spent
2023-09-09 02:37:32.994 [WRN] FRTR: Unable to advance state machine: rpc error: code = Unknown desc = transaction rejected: output already spent
2023-09-09 02:37:32.995 [ERR] RPCS: [/taprpc.TaprootAssets/SendAsset]: rpc error: code = Unknown desc = transaction rejected: output already spent
2023-09-09 02:37:32.994 [ERR] SRVR: Shutting down because error in main method: received critical error from subsystem: rpc error: code = Unknown desc = transaction rejected: output already spent
2023-09-09 02:37:32.997 [INF] RPCS: Stopping RPC Server
2023-09-09 02:37:32.998 [INF] SRVR: Shutdown complete
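
The long "Waiting 9m35s..." line in the first log is the proof-delivery backoff procedure at work: each failed delivery attempt schedules the next one further out. As a rough sketch of how such an exponential backoff schedule behaves (the base wait, multiplier, and cap below are illustrative assumptions, not tapd's actual values):

```python
# Illustrative sketch of an exponential backoff schedule, similar in spirit
# to the one tapd's proof courier uses before retrying receiver-proof
# delivery. The base wait, multiplier, and cap are assumptions for
# illustration only, not tapd's actual parameters.

def backoff_schedule(base_secs: float, multiplier: float,
                     max_wait: float, attempts: int) -> list[float]:
    """Return the wait (in seconds) before each delivery attempt."""
    waits = []
    wait = base_secs
    for _ in range(attempts):
        waits.append(min(wait, max_wait))  # cap each wait at max_wait
        wait *= multiplier                 # grow the next wait exponentially
    return waits

print(backoff_schedule(30.0, 2.0, 600.0, 5))
# → [30.0, 60.0, 120.0, 240.0, 480.0]
```

Because each retry waits longer than the last, a second send issued shortly after the first can end up queued behind a still-pending backoff timer, which matches the delayed-second-send behavior described below.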

I'll reach out to the tapd team to see if this is something they are already aware of.


ffranr commented Sep 11, 2023

We encountered an issue on the main branch recently where the second send event was delayed because it was subject to the continuation of the first send event's backoff procedure. It should have been fixed in this commit (and PR): lightninglabs/taproot-assets@bbd15e6

@jamaljsr

Thanks for the update @ffranr. I have tested this with tapd master branch lightninglabs/taproot-assets@c5bdbfe and confirmed that the node no longer shuts down.

Now when I try to send to the same address back to back, I get this error message the second time.

2023-09-12 03:32:42.465 [INF] FRTR: Received to send request to 1 addrs: [TapAddr{id=84e5335d5ecd25a9eb3801422aaca66b6c145c4fa15ab16894b7917a1ef86d99, amount=100, script_key=02ac11580f6155a70bc4608cef9ff13a15d82ae16b1e585b16c0c9fc947eece30a}]
2023-09-12 03:32:42.465 [INF] FRTR: ChainPorter executing state: SendStateVirtualCommitmentSelect
2023-09-12 03:32:42.467 [ERR] FRTR: Error evaluating state (SendStateVirtualCommitmentSelect): unable to fund address send: unable to list eligible coins: failed to find coin(s) that satisfy given constraints; if previous transfers are un-confirmed, wait for them to confirm before trying again
2023-09-12 03:32:42.467 [ERR] RPCS: [/taprpc.TaprootAssets/SendAsset]: unable to fund address send: unable to list eligible coins: failed to find coin(s) that satisfy given constraints; if previous transfers are un-confirmed, wait for them to confirm before trying again

Is it intended behavior to wait the 10 minutes for confirmations even on regtest, where blocks can be mined much faster than that?


ffranr commented Sep 12, 2023

Is this the intended behavior to wait the 10mins for confirmations even on regtest

Tapd locks coins for a certain duration (it defaults to 10 minutes before broadcast). As far as I can tell, tapd does not currently vary the lock time based on the network. This behaviour was intended for mainnet; I'm not sure we had regtest in mind.

I don't see why we couldn't support configuring the default coin lock time. Is that a feature that would be helpful to you?
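
For illustration, a configurable lock time with per-network defaults could look something like this sketch (all names, values, and structure here are hypothetical, not tapd's actual code or API):

```python
# Hypothetical sketch of a network-aware default coin lock duration.
# Everything here is an illustrative assumption, not tapd's implementation.
from datetime import timedelta
from typing import Optional

DEFAULT_LOCK_DURATIONS = {
    "mainnet": timedelta(minutes=10),
    "testnet": timedelta(minutes=10),
    "regtest": timedelta(seconds=30),  # blocks can be mined on demand
}

def coin_lock_duration(network: str,
                       override: Optional[timedelta] = None) -> timedelta:
    """Return a user-supplied override if given, else the per-network default."""
    if override is not None:
        return override
    return DEFAULT_LOCK_DURATIONS.get(network, timedelta(minutes=10))

print(coin_lock_duration("regtest"))   # → 0:00:30
print(coin_lock_duration("mainnet"))   # → 0:10:00
```

A user-facing override (e.g. a config option) layered on top of network defaults would keep mainnet safe while letting regtest setups avoid the 10-minute wait.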

@koirikivi

+1 on supporting a user-specified coin lock time, or a lock time expressed in blocks. A working regtest setup is a huge plus for local development and automated integration tests, and waiting 10 minutes there is not reasonable.


ffranr commented Sep 25, 2023

We're pretty sure that we've solved this bug. See here: lightninglabs/taproot-assets#508 (comment)

And related PR: lightninglabs/taproot-assets#529

Please let me know if this solves the issue that you guys saw.

@jamaljsr

Thanks @ffranr for the update. I have tested the latest tapd master branch and confirmed that assets can be sent to the same address repeatedly without error or delay.

I am going to close this issue now since no changes need to be made here. The bug will be fixed in Polar when the next tapd release is added.


4 participants