feat: add ldk gateway #5554
Conversation
```rust
/// The HTLC stream, until it is taken by calling
/// `ILnRpcClient::route_htlcs`.
htlc_stream_receiver_or:
    Option<tokio::sync::mpsc::Receiver<Result<InterceptHtlcRequest, Status>>>,
```
This seems really awkward to me. Does `GatewayLdkClient` need to own this? Can it be passed in as an argument to a function?
The gateway lightning client can be in two states: inactive (not intercepting HTLCs) or active (intercepting HTLCs), and calling `ILnRpcClient::route_htlcs` spins up the HTLC interception, switching from the former state to the latter. Currently these two states are internally represented in the following way:

- Inactive: `GatewayLdkClient.htlc_stream_seeder_task_handle` is already running and passing incoming HTLCs to the HTLC stream receiver. The HTLC stream receiver is held in `GatewayLdkClient.htlc_stream_receiver_or` until `route_htlcs` is called, at which point this receiver is taken and returned.
- Active: `GatewayLdkClient.htlc_stream_seeder_task_handle` is still running, but the HTLC stream receiver has been passed to the caller of `route_htlcs`, so `GatewayLdkClient.htlc_stream_receiver_or` is `None`.

What if we changed `GatewayLdkClient.htlc_stream_seeder_task_handle` to be an `Option<tokio::task::JoinHandle<()>>`, removed `GatewayLdkClient.htlc_stream_receiver_or` altogether, and represented the states like this:

- Inactive: `GatewayLdkClient.htlc_stream_seeder_task_handle_or` is `None` and `ldk-node` events are not being handled yet. The HTLC stream receiver doesn't exist yet.
- Active: The HTLC stream sender and receiver are created when `route_htlcs` is called. The sender is passed into the HTLC stream seeder task, which is started up and assigned to `GatewayLdkClient.htlc_stream_seeder_task_handle_or`. The receiver is returned by `route_htlcs` and never held by `GatewayLdkClient`.
WDYT?
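A minimal sketch of the proposed lazy-start design, using std primitives as stand-ins for tokio (`std::sync::mpsc` for `tokio::sync::mpsc`, `std::thread::JoinHandle` for `tokio::task::JoinHandle`); field and method names follow the review comment and are hypothetical, not taken from the actual codebase:

```rust
use std::sync::mpsc;
use std::thread;

struct InterceptHtlcRequest(u64); // placeholder for the real request type
struct Status;                    // placeholder for the real error type

struct GatewayLdkClient {
    // Inactive state: `None` — no seeder task running, no receiver exists yet.
    htlc_stream_seeder_task_handle_or: Option<thread::JoinHandle<()>>,
}

impl GatewayLdkClient {
    fn new() -> Self {
        Self { htlc_stream_seeder_task_handle_or: None }
    }

    // Activation: the sender/receiver pair is created here, the sender goes
    // into the seeder task, and the receiver is handed straight to the
    // caller — the client never stores it.
    fn route_htlcs(
        mut self,
    ) -> (mpsc::Receiver<Result<InterceptHtlcRequest, Status>>, Self) {
        let (tx, rx) = mpsc::channel();
        self.htlc_stream_seeder_task_handle_or = Some(thread::spawn(move || {
            // In the real gateway this loop would forward intercepted HTLCs.
            tx.send(Ok(InterceptHtlcRequest(0))).ok();
        }));
        (rx, self)
    }
}
```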
Let's chat about this offline
```rust
// TODO: Find a way to avoid looping/polling to find out when a payment is
// completed. `ldk-node` provides `PaymentSuccessful` and `PaymentFailed`
// events, but routing them to specific payment IDs isn't straightforward.
```
What does "routing them" mean here?
Sorry, that was a strange wording. What I mean is that we should only be handling events from `ldk-node`'s event queue in one place to avoid having two threads stomping on or skipping over events. Right now, event handling is done in `GatewayLdkClient::event_handler_task_handle`, so we'd need to add functionality there to handle `PaymentSuccessful` and `PaymentFailed` events and somehow be able to tell `GatewayLdkClient::pay` when a payment has succeeded or failed. This can definitely be done, but we'd need some sort of per-payment-hash queue or something.
I cleaned up the comment here
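One way the per-payment-hash idea could look — a hypothetical sketch, assuming a single event-handler thread; the `PaymentWaiters` type and its method names are illustrative, not from the PR:

```rust
use std::collections::HashMap;
use std::sync::mpsc;
use std::sync::{Arc, Mutex};

type PaymentHash = [u8; 32];

// Shared map from payment hash to a one-shot-style notifier. `pay` registers
// before initiating a payment; the single event-handler loop notifies on
// `PaymentSuccessful` / `PaymentFailed`.
#[derive(Default, Clone)]
struct PaymentWaiters {
    inner: Arc<Mutex<HashMap<PaymentHash, mpsc::Sender<bool>>>>,
}

impl PaymentWaiters {
    // Called by `pay` before dispatching the payment; the returned receiver
    // resolves once the event handler sees the outcome.
    fn register(&self, hash: PaymentHash) -> mpsc::Receiver<bool> {
        let (tx, rx) = mpsc::channel();
        self.inner.lock().unwrap().insert(hash, tx);
        rx
    }

    // Called by the event-handler loop; removes the waiter so each payment
    // hash is notified at most once.
    fn notify(&self, hash: PaymentHash, success: bool) {
        if let Some(tx) = self.inner.lock().unwrap().remove(&hash) {
            tx.send(success).ok();
        }
    }
}
```

This keeps all event-queue consumption in one thread while still letting `pay` block on (or await) its own payment's outcome.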
```rust
{
    description
} else {
    ""
```
Does ldk-node support setting the description hash in `receive_for_hash`?
I don't think so. Drilling down into the function calls here, we end up at this: `_create_invoice_from_channelmanager_and_duration_since_epoch()` accepts a `Bolt11InvoiceDescription` (which can be a direct description or a hash) but is hardcoded to always be a direct description.
This would probably be good to make an issue upstream
Created lightningdevkit/ldk-node#325
LGTM, but someone from @fedimint/lightning should approve.
```rust
// TODO: Find a way to avoid looping/polling to know when a payment is
// completed. `ldk-node` provides `PaymentSuccessful` and `PaymentFailed`
// events, but interacting with the node event queue here isn't
// straightforward.
```
If you want to implement this, you could potentially use a channel to communicate from the event queue thread to here. Can be done in a follow-up.
```rust
    }),
}?;

Ok((route_htlc_stream, Arc::new(*self)))
```
This will move `self` rather than clone it, which is what we do in the other implementations. As far as I can tell this won't cause any issues (whereas with LND we need to clone it). I assume this was intentional?
Yep! This was intentional. The reason that `ILnRpcClient::route_htlcs` consumes `self` and returns `Arc<Self>` (IIUC) is so that the type checker can enforce that it cannot be called twice. That should still be enforced here.
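A minimal sketch of that consume-`self` pattern, with illustrative type names: because `route_htlcs` takes `self` by value, the compiler itself rejects a second call on the same client.

```rust
use std::sync::Arc;

struct HtlcStream; // placeholder for the real stream type

struct Client;

impl Client {
    // Taking `self: Box<Self>` by value consumes the client; moving out of
    // the box and wrapping in `Arc` matches the `Arc::new(*self)` shape above.
    fn route_htlcs(self: Box<Self>) -> (HtlcStream, Arc<Self>) {
        (HtlcStream, Arc::new(*self))
    }
}
```

After `client.route_htlcs()`, a second `client.route_htlcs()` is a use-after-move compile error, which is exactly the "cannot be called twice" guarantee.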
Blocking on this: waiting until after 0.4 to merge, since we can't get this in this week and need to cut a release with the DNS fix. This will be the next big merge.
LNv2 requires an implementation of
```rust
// TODO: Respect `max_delay` and `max_fee` parameters.
async fn pay(
    &self,
    invoice: Bolt11Invoice,
    _max_delay: u64,
    _max_fee: Amount,
```
Respecting these parameters is security-critical!
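A hypothetical pre-flight check showing what enforcing these limits could look like; the function and its parameter names mirror `pay`'s `max_delay`/`max_fee` arguments but are illustrative only, not the actual fix:

```rust
// Reject a payment attempt up front if the quoted routing fee or the CLTV
// delay exceeds the caller-supplied limits. All names here are hypothetical.
fn check_payment_limits(
    fee_msat: u64,
    max_fee_msat: u64,
    cltv_delay: u64,
    max_delay: u64,
) -> Result<(), String> {
    if fee_msat > max_fee_msat {
        return Err(format!(
            "routing fee {fee_msat} msat exceeds limit {max_fee_msat} msat"
        ));
    }
    if cltv_delay > max_delay {
        return Err(format!(
            "CLTV delay {cltv_delay} exceeds limit {max_delay}"
        ));
    }
    Ok(())
}
```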
Add a third lightning node type to the gateway: a lightning node powered by `ldk-node` which runs in the same process as the gateway. The gateway can be run in ldk mode by setting the following environment variables:

- `FM_LDK_ESPLORA_SERVER_URL`: Esplora server where blockchain data is accessed
- `FM_LDK_NETWORK`: The bitcoin network being used (e.g. mainnet/testnet)
- `FM_PORT_LDK`: The port that the lightning server is running on, where other lightning nodes can access it

The node's data is stored in a subdirectory of the gateway's data folder.
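A hypothetical invocation: only the variable names come from the PR description; the values and the `gatewayd` binary name are illustrative.

```shell
# Configure the gateway's ldk mode (values are examples only).
export FM_LDK_ESPLORA_SERVER_URL="https://blockstream.info/api"
export FM_LDK_NETWORK="testnet"
export FM_PORT_LDK="9735"
# gatewayd   # then start the gateway (binary name is hypothetical)
```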
This gateway mode should still be considered experimental!!! None of the behavior is tested or has locked-in backwards compatibility yet!