Lightning Network Devs Discuss the Future of Sovereign Computing

Leading Bitcoin Lightning Network developers recently discussed the future of this Layer 2 protocol.


Lightning Network channel capacity is increasing rapidly, reaching a new all-time high of over 2,900 bitcoin recently. But there is a whole lot that goes into running a node on the Lightning Network, with many different techniques and ways to optimize.

In a recent Twitter Spaces conversation hosted by Bitcoin Magazine, Lightning Network developer Thomas Jestopher described how rebalancing is a big part of managing a Lightning node and routing payments. Specifically, he described his technique for circular rebalancing.

“I tend to describe circular rebalancing as choosing who you want to send through, which nodes you want to send through and which nodes you want to receive through,” he said. “Typically, I like to receive through my best, connected nodes. Those are the ones that have a whole bunch of channels and that are large channels. That makes it so that I could receive from a large portion of the network. Then, my sending capacity, I might choose to send through some smaller nodes. They usually appreciate the inbound liquidity that I would be providing to them from my well-connected node.”

The Spaces also covered some issues to watch out for when running a Lightning node. For instance, a potential issue with privacy on the Lightning Network could stem from irresponsible or excessive “probing,” a way to discover channel balances.

“You are getting the traffic from somewhere and you don’t know where it’s coming from,” said Alex Bosworth, the infrastructure lead at Lightning Network development firm Lightning Labs. “If you rate limit it, you’re just rate limiting everybody. That actually makes the problem worse, because now, you’ve just increased the bang for the buck of doing an abuse. You basically shut off the node. I think there are a lot of solutions for how this could be solved, but it does need to be prioritized. People need to be talking about this more, maybe than other things that they’re working on, adding to the spec that are not thinking about how to harden the network.”

The speakers also discussed new tools that are being developed in order to onboard millions and eventually billions of people onto Lightning, one of which is sidecar channels — a Lightning Pool feature that lets someone access Lightning without a commitment of funds.

“The way that I understand sidecar channels right now is that it just requires that the person purchasing the channel lease does not have to be the one who receives the inbound liquidity from the channel lease,” added Keagan McClellan, the cofounder of Start9, which offers servers for running self-hosted software and easy installation of the Lightning Network. “I think, that that’s the only difference. What that would mean is that it basically just functions as a normal channel, but it doesn’t require someone to have Bitcoin loaded up into a bunch of different wallets to begin with.”

The full recording of this Spaces conversation includes many more details and much more discussion. To read the whole conversation, check out the unedited transcript below:

[00:00:06] P: Why don’t we start off? Keagan, do you want to give a brief introduction to who you are and what you are working on?

[00:00:13] KM: Yeah. Yeah, my name’s Keagan, I am one of the co-founders of Start9 Labs. In the context of this discussion, we built something similar to Umbrel, where we are building a server product to make these various applications one-click installs. That’s what I do for work.

More broadly, I’ve been a Bitcoiner for, I don’t know, four-ish years now on the dev side, and then as a user for another two years beyond that. I actually got into Bitcoin dev, because I took “don’t trust, verify” a little too seriously. People would say things about how Bitcoin worked and I’d ask the question, “Is that actually how it works?” Most of the time, the answer was yes, but occasionally, the answer was no. I just kept doing that. Now, I do that for Lightning, instead of the layer one consensus stuff. Yeah, endlessly trying to dive deeper and learn new stuff all the time. It turns out, this stuff is enormously complicated.

[00:01:08] P: Fantastic. Yeah. I love it. Severin, you want to give us a brief intro and talk about the incredible website that you’ve created?

[00:01:16] SA: Yes. Hi. Very good morning. I’m Severin. I’m the creator of LnRouter. LnRouter is a tool to help routing nodes get insights into their node and into the whole network. That is the goal of LnRouter. I started to create LnRouter in January. Around January. Yeah. It was just created out of the need, because I wanted to start my own routing node. I had no idea what I was doing, because the Lightning Network is basically a black box if you start out with the Lightning Network. You have no idea where to connect to. You have no idea which metrics matter in the Lightning Network. Then you’re just there and you connect to a node and nothing happens. You don’t see any traffic. Yeah. LnRouter is a website that I created to solve this. It’s nowhere near where I want it to be, but I’m still working on it – I believe, there are a lot of cool things coming in the future.

[00:02:19] P: Yeah. Yeah, absolutely. LnRouter is an incredible tool. When did you release it, the original version?

[00:02:24] SA: I bought the domain in April. I just looked it up yesterday. The first version was probably up in May or so. Yeah, there are tools coming all the time, as long as I have time to program on it.

[00:02:34] P: Yeah. It’s definitely one of the newer tools that has fundamentally shifted my understanding of the network in a really positive way. Jestopher, you want to jump in and give us a brief intro to who you are and what you’ve built?

[00:02:44] TJ: Sure. Yeah. Thanks for having me on. Yeah, let’s see. I started off in Lightning just as a pleb, playing with a RaspiBlitz as a quarantine hobby. Really fell in love with it. Was trying out some of the new apps, including ThunderHub. What I’m working on now is called amboss.space. It is a Lightning Network explorer. I’m working with Tony IOI, or you might know him better as AP on Telegram. He’s the developer behind ThunderHub. We teamed up, using my knowledge as a routing node operator and Tony’s incredible work as a front-end developer, to create a Lightning Network explorer that’s built for routing nodes. We’re continuing to build out tools just to help out the Lightning ecosystem and provide good data and actionable insights for routing nodes.

[00:03:38] P: Yeah. Yeah. It’s a really exciting time to be in Lightning. I think, just for the audience, what we’re talking about here is Bitcoin is a layer one technology. It’s sound money. It’s incredibly important. On top of it, there are what are called layer two protocols. The Lightning Network is built on top of Bitcoin. It allows you to transmit Bitcoin, essentially instantaneously, and for very low fees.

When you hear people say, “Oh, you can’t buy a coffee with Bitcoin,” that’s totally not true. You can totally do it on the Lightning Network today. That’s what we’re talking about, just for a little more context. Everyone who’s a speaker, with the exception of me, has created really amazing tools that help us build up the network further by empowering people like me and others in PlebNet, which is this community that we got together and started to help us all understand Lightning and learn how to be effective routing nodes in the network. Because, very broadly speaking, there are three different types of users of the network. This is how I think about it.

There’s a person who basically wants to just whip open their phone and pay someone, essentially instantaneously, basically for free over the Lightning Network. You can do that. Anyone in the audience today, you can just go and download Wallet of Satoshi, or if you want a solution that lets you have full custody of your funds, you can use Breez Wallet. You can do that today and you don’t have to understand anything about the wiring of how it works.

Then there’s people who want to be merchants. They’re basically selling a service and they want to be able to accept Bitcoin over the Lightning Network for that reason. They can use things like Breez Wallet, which you can download and which has a point-of-sale feature, but ultimately, a lot of merchants end up running their own Lightning nodes.

Then, the third extreme is what all of us are doing, which is not only are we participating in Lightning Network, but we actually are running nodes that allow payments to be routed through them, because that’s the way the Lightning Network works. Just because I don’t have a connection directly to some person, whoever they are, I can bounce payments through other nodes in order to get to them and pay them, or receive from them.

When we’re talking about all this stuff, I like to be clear that you don’t need to be as obsessed with all the nitty-gritty details as we are in order to benefit from and participate in the Lightning Network. Yeah, there’s so much interesting stuff happening in it right now. I think everyone on the stage, or everyone that’s speaking, is part of PlebNet, which a bunch of us are deeply excited about. As of right now, I think we’re about 3% of the entire Lightning Network in terms of number of active nodes, which is really interesting. As we’ve been building that out and understanding how to be an effective routing node, the tools that y’all have built and contribute to have really helped do that.

I’m curious, Jestopher, what led to you building out amboss.space, because that’s an essential part of my workflow: whenever I’m evaluating potential peers, potential nodes to open channels to, I check out Amboss as part of that workflow. What drove you to that and how did that come about?

[00:06:36] TJ: Sure. Hopefully, we can get Severin back up on stage. Let’s see. We started off actually designing a node manager program. We were focusing it on some specialized tools, where people are managing multiple nodes. We ran into some, I guess, some issues with licensing. If we wanted to make this thing open source, it’s really hard to build a business around it. Things like Ride the Lightning and ThunderHub, they’re both struggling to build a sustainable business. These are critical tools. Now, unfortunately, they have to be open source. That’s a difficult thing to protect. I know there’s been lots of history with Umbrel following that story, not to get too deep into the weeds about it. In going through that process and forming a company, we recognized there’s a real need for good information about the Lightning Network.

I think, the tool up until this point has been a 1ML, and we saw a real need to bring all of that information that the Lightning Network provides and create a one-stop shop for people that want to find out the information, as far as routing fees and who these people are to start opening up those lines of communication, so we can coordinate this Lightning Network and this market around liquidity. A big part of that is just getting people to talk to each other. We made the login process very simple. We don’t need to require you to open a channel, or get any information from you really. All you would need to do is sign a message using your node and signing a message proves to us that you own that node.

Then, you would be able to customize your page and provide contact information. You’ll be able to start talking to other node operators and start coordinating liquidity and allocating it in good spots to give you a return on your investment, when you’re putting your savings out on the Lightning Network.

[00:08:36] P: Yeah, it makes sense. Makes sense. Yeah, it’s so interesting that there aren’t any other services, or aren’t any websites that aggregate that information in the same way that amboss.space does. Yeah, I guess I’m curious. I have started running a little routing node in the period after Amboss was created. I’m curious how people got a sense for the fees of all of the nodes without that tool. Obviously, you can – I just don’t think there is anything like that out there.

[00:09:02] KM: You can grind through it on 1ML, but it’s not as good.

[00:09:07] P: Yeah. Yeah. As a random aside, it’s hilarious to me that 1ML, their node, it’s a garbage node. Never valid. It doesn’t seem like, they ever balanced their channels.

[00:09:14] KM: Well, I don’t know that they can. Consider this, in many cases, a lot of these companies that have these massive nodes, at least ACINQ, “earned” one of their spots in the top. Things like 1ML, they basically took the popularity of 1ml.com to try to translate it into getting people to connect to their node. It was wildly successful in that regard. A lot of people didn’t know who to connect to, but they were like, “Hey, there’s this tool that I’m using to figure out how to connect to other people. Why don’t I just connect to their node?”

It turns out that if you have an enormous amount of inbound liquidity and comparatively little outbound liquidity, the odds that the route that you’re trying is going to succeed is astronomically low.

[00:09:55] P: Yeah. Yeah. That does make sense.

[00:09:57] TJ: Actually, to give them some credit and Keagan, you might be able to help fill in the gaps here. One thing that 1ML did that was really smart is actually require users to open channels, because they would get a better source of information about the network as a whole. For example, Amboss currently only has two channels. That affects our ability to see the entire graph.

Now, as our user base grows, I’m sure we’ll get more channels opened, and so then, we’ll have better visibility onto all of the nodes that are present. I’m sure everyone can go ham on the consequences of having that many channels open and having, yeah, essentially, that liquidity in those channels, in tiny channels.

[00:10:43] KM: The thing is, though, is you don’t need to have a lot of channels in order to have a complete view of the network graph. The gossip protocol is a peer-level thing and not a channel relationship thing. You can receive gossip messages from all sorts of peers and you don’t actually have to open a channel to a peer in order to have that peer persistently connected.

My recommendation is that you should see if just adding a whole bunch more peers, without adding a whole bunch more channel relationships to Amboss, fixes your problem of incomplete network graphs.

[00:11:13] P: Yeah. How would you know if you had an incomplete network graph?

[00:11:16] KM: You can’t ever know.

[00:11:19] P: Great. I just want to say, I’ve attempted to invite several of the people in the audience up on stage. I don’t know if you’ve received them, but NDK, openams, CJ, Walton, KP, Richard, if you want to come up, request to speak and we’d love to have you up here.

[00:11:32] SA: Just one input before we continue on here. Connecting to 1ML, connecting to a node that has a lot of channels and a lot of exhausted channels, it’s actually even counter-productive for you to some extent, because the pathfinding algorithm, when you send a payment, will take way longer than otherwise, because it needs to try out a lot of routes that are just not working. Connecting to such a node is really not that good of an idea. If you are only connected to one such node, it’s not that big of a problem, but if you’re connected to several of such nodes, then your pathfinding is getting slower, especially when this specific node has very low fees, so the pathfinding algorithm actually tries this specific node, or all paths through this specific node.

[00:12:28] P: Oh, interesting. Wait, so just to repeat back to make sure I’m tracking, you’re saying that by connecting, if you open a channel to 1ML, you actually decrease the efficiency of your node, because every time you try to find a path through the network, you’re going to basically be scanning that node’s gazillion connections, even though none of them will actually be able to route.

[00:12:46] KM: This only applies to you if you’re the sender, because all routes in the Lightning Network are source constructed. As a routing node, you actually have no impact on what route is chosen. If you’re just routing, it actually doesn’t matter as much, other than the fact that it’s just dead weight capital. It won’t really affect you as a router.

[00:13:04] SA: Yup, exactly. It’s when you send payments. It really depends on how the fees are constructed. If this specific node has only 1 PPM fees, then yeah, probably. It’s not like it suddenly takes 10 seconds. It takes a little bit more.

[00:13:22] P: Keagan, you said it was applied to people sending payments. Would it also apply to nodes that are trying to do rebalancing?

[00:13:29] KM: Circular. Yeah. Actually, all rebalancing pretty much, with the exception of perhaps a loop in, although I question the times that a loop in is ever viable. That’s just because if you are looping in to rebalance your channels, the sender in that regard is the loop server, or whoever your submarine swap provider is, and so you’re not exposed to it in that way.

It’s not like, it won’t have any impact, because if you’re connected to something like 1ML and someone’s trying to send something to you, it will still appear in the route backwards. Depending on how expensive the route is to 1ML from their point of view, they might still try it. Circular rebalances, you’re both the sender and the receiver, so that’s a definite yes on that front.

[00:14:10] P: Yeah. Just for everyone in the audience, when we’re talking about rebalancing, or balancing channels, what we’re talking about is, in the Lightning Network, you have a node that’s running one of the Lightning implementations. The most popular ones are LND, Éclair and C-Lightning. Basically, you create a channel between yourself and another node in the network. When you do that, what that actually is, is a 2-of-2 multisig contract. Well, it’s a smart contract. When people say, “Oh, there’s no such thing as smart contracts on Bitcoin,” they’re just factually incorrect.

Basically, that channel has a bunch of liquidity locked up in it. If Keagan and I open a 10 million SAT channel, and we do it in a balanced fashion, there’s 5 million SATS on his side, 5 million SATS on my side. Then basically, we can both send each other SATS. More importantly, payments can actually be routed through that channel over the network. When that happens, if you’re running a routing node, you collect a small fee for that service.

When we talk about circular rebalancing, it’s where you basically send payments out through one channel and then you receive them back in through another channel, so your net liquidity, your net balance, stays the same minus fees. What you do is you basically shift your channel balances back to being in the middle. The reason that’s important is because it allows you to route payments in both directions.
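To make that concrete, here is a minimal sketch in plain Python (illustrative only, not tied to any real Lightning library; all names are made up) of how a circular rebalance shifts liquidity between two of your own channels while your total local balance only drops by the fee paid:

```python
# Minimal illustration of a circular rebalance: send sats out through one of
# our channels and receive them back through another. Local balances shift,
# but the total only falls by the routing fee paid along the way.

def circular_rebalance(channels, send_via, receive_via, amount_sat, fee_sat):
    channels[send_via]["local"] -= amount_sat + fee_sat   # outgoing leg pays the fee
    channels[send_via]["remote"] += amount_sat + fee_sat
    channels[receive_via]["local"] += amount_sat          # incoming leg
    channels[receive_via]["remote"] -= amount_sat
    return channels

# A well-connected peer we want to receive through, and a small peer we send through.
channels = {
    "well_connected_peer": {"local": 1_000_000, "remote": 9_000_000},
    "small_peer":          {"local": 9_000_000, "remote": 1_000_000},
}

before = sum(c["local"] for c in channels.values())
circular_rebalance(channels, send_via="small_peer",
                   receive_via="well_connected_peer",
                   amount_sat=4_000_000, fee_sat=200)
after = sum(c["local"] for c in channels.values())

print(channels)
print(f"total local balance changed by {after - before} sats (just the fee)")
```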

[00:15:22] TJ: Yeah. I tend to describe circular rebalancing as choosing who you want to send through, which nodes you want to send through and which nodes you want to receive through. Typically, I like to receive through my best-connected nodes. Those are the ones that have a whole bunch of channels and that are large channels. That makes it so that I could receive from a large portion of the network. Then, my sending capacity, I might choose to send through some smaller nodes. They usually appreciate the inbound liquidity that I would be providing to them from my well-connected node.

[00:16:01] KM: Sorry. Did you say that you tried to send through the smaller nodes?

[00:16:04] TJ: Yeah, generally.

[00:16:05] KM: If you do that, you’re creating outbound liquidity for them to you. You’re creating inbound on the other side.

[00:16:12] TJ: Yeah. I’m creating inbound liquidity for those smaller nodes. Yeah, the terminology is a little confusing, right?

[00:16:22] KM: Okay. The inbound liquidity and outbound liquidity is conserved across payments, with an asterisk, right? Obviously, if you are charging any fees at all, you are earning slightly more in fees than you’re dispensing out the other side. Technically speaking, any payment through a node is going to turn a tiny amount of its inbound liquidity into outbound liquidity. You’re not actually creating net inbound liquidity for those nodes, but you are reducing the inbound liquidity they have from you and allocating it to wherever the exit point is through that node.

[00:16:55] TJ: That’s a good point. Yeah, because circular rebalances, they don’t create, or destroy any liquidity per se. It’s really just moving it around. It’s a question of, who do I want to receive from, and who do I want to send through? Yeah. Good point. I’m not creating any inbound liquidity for them. I’m really making myself the route through which they could receive some payments.

[00:17:16] KM: Just another nitpick, though. If you were sending through the small channels, that means that they used to have inbound liquidity from you. By sending through them, your channel from their perspective is filling up with outbound liquidity. It’s actually depleting their inbound liquidity from you when you send through them.
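As a rough illustration of the conservation point in this exchange, here are toy numbers (purely illustrative, not from the conversation) for a single forward through a routing node: inbound liquidity on the incoming channel becomes outbound, outbound liquidity on the outgoing channel is spent, and the difference is the fee the router keeps.

```python
# Toy numbers for one forwarded payment through our routing node.
amount_in  = 1_000_000          # HTLC offered to us on channel A (from upstream)
fee        = 1_000              # the routing fee we charge
amount_out = amount_in - fee    # HTLC we offer downstream on channel B

chan_a_local_delta = +amount_in     # inbound liquidity on A became our outbound
chan_b_local_delta = -amount_out    # outbound liquidity on B was spent

print("channel A local balance change:", chan_a_local_delta)
print("channel B local balance change:", chan_b_local_delta)
print("net change, i.e. the fee we earned:", chan_a_local_delta + chan_b_local_delta)
```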

[00:17:34] P: One, I think, is in a slightly different direction. That’s all right. Actually, do you want to respond to that?

[00:17:39] TJ: Yeah, I’m getting a little bit lost in the weeds on what you’re trying to get at Keagan. Yeah, as a routing node, you want to position yourself to be able to receive from lots of nodes on the network. If you are doing circular rebalancing, you are going to be shifting around other people’s liquidity on who they’re going to be receiving from, or sending through.

[00:17:59] KM: Yeah. This is also why if you’re not a routing node, you should prefer to open your channels private to whatever routing service providers you want to use, so that your liquidity isn’t reallocated without your knowledge based off of the needs of the routing network as a whole. Not only that, but it also improves pathfinding for everyone else. Unless you’re actually getting solid earnings, then it’s probably not going to be worth it to open public channels, when you would otherwise just be a user.

[00:18:27] P: Wait, can you repeat that?

[00:18:29] KM: Yeah. Okay. There are two types of channels in the Lightning graph. There’s the public ones, which are basically public infrastructure. That is, the routing nodes are all advertising their channels, so that you can route through them, so that you can get your payments to their destinations without you having to have direct channel relationships with everybody with whom you transact.

However, one of the consequences of that is that, by and large, unless you have specific tooling you’ve set up for this, any request to route a payment over your channels will be satisfied, or your node will acquiesce to that request. What that will do is it will shift the liquidity between your channels. If you have channels that you wanted to have good inbound from and good outbound to, because you’ve decided that’s what you want, but for whatever reason the routing nodes on the network have decided that they could benefit from reallocating the liquidity in the other direction, you will end up getting your liquidity moved around, and that won’t necessarily be a good thing for you. It’ll definitely be a good thing for whoever decided to do it, because that’s why they chose to do it.

I guess, the second point is that private channels are not put into the public network graph, which means that it reduces some of the compute cost of pathfinding, as well as increases the reliability of pathfinding, because a lot of private channels might not have 100% well-balanced liquidity on either side. If that’s the case, then because that information isn’t knowable before you send a payment, it causes more payments to fail. I strongly encourage anyone who is using Lightning, but not trying to basically up their routing game, to open private channels.

[00:20:08] P: Interesting.

[00:20:10] TJ: Yeah. Absolutely agree.

[00:20:11] P: Interesting. You would recommend that basically, people that are not trying to be – That makes sense, actually. You’re saying, people that are not trying to be routing nodes, they should have just only open private channels?

[00:20:21] KM: Yeah.

[00:20:22] P: Yeah, got it. Got it.

[00:20:23] SA: It also improves the pathfinding element that I mentioned before. What happens right now is, the default fee for LND, for example, is 1 PPM. When you just start a new node and open a channel, it’s 1 PPM. This leads to a lot of new users who have exhausted channels, because it’s so cheap, the liquidity is just gone instantly.

The more major thing that happens there is that people who don’t really care about routing and don’t really care about fees, they pollute the network with 1 PPM channels. Very low-fee channels that are exhausted. This creates this effect where, across the whole network, it’s really hard to find a path with the pathfinding algorithm, because the pathfinding algorithm tries low-fee channels first, if that makes sense.

[00:21:20] P: Yeah. Got it.

[00:21:20] TJ: Yeah. If you’re creating private channels, then other people won’t be able to route through those. You essentially get sats back when you’re trying to actually pay with Lightning, because if you’re paying in one direction, that’s an opposing flow, and routing nodes are able to charge routing fees to reset that flow of liquidity by providing an opposing flow. Yeah. Private channels, you’re totally right. Yeah, it would help you pay.

[00:21:47] KM: Someone just DM’d me a question from Twitter, asking if you would not be able to receive payments if you have private channels. That’s incorrect. That’s because in the invoicing spec, there is a method to embed the private channels in the invoice, such that the sender uses those as additions to the Lightning graph when they try to send the payment.

It’s usually very useful for last hops. It’s not tremendously well supported in all of the wallets. I actually tweeted about this not too long ago, basically, imploring every wallet dev out there to make sure that they support private channels, because of the benefits of A, protecting the liquidity of the end user, and B, not polluting the channel graph with a whole bunch of channels that are not routable.
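For reference, here is a sketch of how an LND operator might create an invoice with those route hints embedded. It assumes lncli’s addinvoice command and its --private flag behave as described here (include hints for private channels); verify the flags against your own LND version before relying on this.

```python
# Sketch: ask LND (via lncli) for an invoice that embeds routing hints for our
# private channels, so senders can reach us even though those channels are not
# in the public graph. Flag names assumed from lncli's addinvoice help output.
import json
import subprocess

def invoice_with_private_hints(amount_sat: int, memo: str) -> dict:
    out = subprocess.check_output([
        "lncli", "addinvoice",
        "--amt", str(amount_sat),
        "--memo", memo,
        "--private",          # include route hints for private channels
    ])
    return json.loads(out)

if __name__ == "__main__":
    invoice = invoice_with_private_hints(50_000, "coffee")
    print(invoice["payment_request"])
```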

[00:22:35] P: Yeah. Yeah. That is super important. It’s really interesting, as I’ve gotten further and further down the rabbit hole, understanding the information that is stored locally by a node as it tries to basically send payments through the various routes. I’m really fascinated by the ranking system, or the penalties that are applied for failed payments and how that affects the ability to accept routes in the future, or to receive routes in the future. Alex, I see you’re on the stage. Do you want to give us a brief introduction of who you are and all the cool shit that you’ve established? You can say no, of course.

[00:23:03] AB: Oh. Hi. I’m Alex Bosworth. I work at Lightning Labs. We work on LND, and some liquidity products for routing, or receiving payments, like Lightning Loop.

[00:23:13] P: You’re selling yourself short, my friend. Alex is the creator of the BOS score, which is, I think, the first system for basically trying to provide visibility into what makes a good routing node in the Lightning Network, versus a bad one. It’s super important, because by having these ranking systems that allow us to categorize our own nodes as effective, or ineffective routing nodes, it gives us more clarity around how to improve those metrics and those features, which is also something that Severin has put a lot of effort into. The Terminal Web debugger that he’s created is a huge step in that direction. It gives us a lot more visibility into how to improve our nodes.

[00:23:53] AB: Yeah, it was designed from the opposite perspective. The perspective of the person who’s trying to join the network, and they need the routing nodes. The idea is to decentralize the network. In order to decentralize the network, we need somebody who joins the network to have a bootstrap, like these nodes are worth your time to consider. Like how, when you join the Bitcoin network, you reach out to these DNS seeds, and the DNS seeds tell you about some reasonable Bitcoin nodes you can connect to, ones you can find. They’re going to give you addresses of other Bitcoin nodes. After a while, you’ll develop your own set of peers.

That was the idea is, we don’t want the Lightning Network to just be everybody connects to the 10 big routing nodes. We want this to be a decentralized network, where you have a bunch of choices. If one node goes offline, it’s fine. You have other peers. That’s the idea is establishing that seed list.

[00:24:40] P: Yeah. Got it. Did you create the BOS score as a way for you personally, initially, to evaluate what were good nodes, or was the intention basically to provide visibility for other people?

[00:24:49] AB: That project was done in the context of the Lightning Labs mobile app. The mobile app, we wanted to do it all, as far as making a super easy-to-use, accessible app for everybody to just join the Lightning Network. That was my high-level goal, which is, okay, you downloaded this app. How do we make you have a good experience without running our own routing node?

[00:25:12] P: Got it. One thing and that somebody else said a minute ago, that struck me is in one of our previous conversations, Alex, you’d mentioned – I believe it was you. The excitement around Rust Lightning, which is another implementation that I think is, I’m actually unclear on what stage of development it’s in. You were saying specifically, the ability to create a more nuanced, custom routing strategy was something that you were excited about.

Just a second ago, we were talking about the effect of connecting to 1ML, and how that might affect the way your own node calculates routes. How long before we’re able to implement those types of customized routing algorithms, so that we can, as an individual, basically say, okay, avoid these types of nodes in the future? Maybe that’s a good thing.

[00:25:55] AB: Yeah. I think, the more tooling we have, the more libraries we have, the easier it is to try out these different ideas and execute them. On my node, I already have custom strategies. I have a list of nodes that I blacklist from all my routes. I have tooling to help me develop what that list looks like. Right now, I pick all those nodes manually, but that could easily be done dynamically. Then, LND also has a new API in 0.13 that allows you to influence the mission control. The mission control is what does the pathfinding logic. That’s an area of just experimentation.

[00:26:27] KM: It’s also worth noting that LND and Rust Lightning will dump the entire channel graph to you, if you ask for it through one of the APIs, and then you can do your own pathfinding outside of the LND process. Rust Lightning is the library, not an actual node implementation. The point being that, if you dump the graph, you can write your own custom pathfinding logic, and then send directly to a route. LND has APIs for that, too.
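A rough sketch of that “dump the graph and do your own pathfinding” idea follows. The JSON field names are assumptions about what lncli describegraph returns and should be checked against your LND version, and any route found this way would still have to be handed back to the node (for example through its send-to-route APIs) to actually make a payment.

```python
# Sketch: load the channel graph dumped by `lncli describegraph` and run our own
# cheapest-fee pathfinding outside the node, skipping peers we have blacklisted.
import heapq
import json
import subprocess
from collections import defaultdict

def load_graph():
    graph = json.loads(subprocess.check_output(["lncli", "describegraph"]))
    adj = defaultdict(list)  # pubkey -> [(peer_pubkey, channel_id, policy), ...]
    for edge in graph["edges"]:
        n1, n2 = edge["node1_pub"], edge["node2_pub"]
        adj[n1].append((n2, edge["channel_id"], edge.get("node1_policy")))
        adj[n2].append((n1, edge["channel_id"], edge.get("node2_policy")))
    return adj

def fee_msat(policy, amt_msat):
    if not policy:  # missing policy: treat the edge as very expensive
        return 10**9
    return int(policy["fee_base_msat"]) + amt_msat * int(policy["fee_rate_milli_msat"]) // 1_000_000

def cheapest_route(adj, source, target, amt_msat, avoid=frozenset()):
    """Dijkstra over total fees; `avoid` holds pubkeys we never route through."""
    dist, prev = {source: 0}, {}
    pq = [(0, source)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == target:
            break
        if d > dist.get(node, float("inf")):
            continue
        for peer, chan_id, policy in adj[node]:
            if peer in avoid:
                continue
            nd = d + fee_msat(policy, amt_msat)
            if nd < dist.get(peer, float("inf")):
                dist[peer] = nd
                prev[peer] = (node, chan_id)
                heapq.heappush(pq, (nd, peer))
    if target not in prev:
        return None, None  # no route found
    hops, node = [], target
    while node != source:
        node, chan_id = prev[node]
        hops.append(chan_id)
    return list(reversed(hops)), dist[target]
```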

[00:26:54] P: Oh, interesting. Alex, is that what Balance of Satoshis does? Is it already implementing its own customized routing node? Oh, I can’t hear you. I don’t know if you’re speaking, Alex. Oh, man. Can anyone else? I don’t know if it’s my phone, or this is –

[00:27:08] KM: He’s back on as a listener now.

[00:27:10] P: Okay. Yeah. One of the issues of Twitter Spaces is quite interesting, and it tends to boot people and do weird shit. Let me find Alex again and bring it back up. Go ahead. Somebody was going to say something.

[00:27:21] KM: I think it was Alex, but I think, just what we were talking about is the ability to do pathfinding in a more custom way, rather than leaving it up to the various implementations. I think, you were asking about what is the exciting thing about Rust Lightning.

One of the things that Rust Lightning offers is an entire Lightning implementation in library form. Right now, if you want to get at some of the more raw functionality within these node implementations, you have a few options. LND has a gRPC API. That gRPC API is much richer than what lncli gives you and what the config allows you to specify, but it necessarily requires you to write software that is in another process.

There’s a similar dynamic in C-Lightning, where their plugin infrastructure, as opposed to having a gRPC API, the request responses happen over standard input, standard output, and so you can write your own plugins that can interact with C-Lightning. What’s interesting about Rust Lightning is that it’s all in the same process. You can get it down to a very low footprint. One of the consequences, and Matt Corallo was very stringent about the way that Rust Lightning was set up, is that it basically has no dependencies, which means that the actual binary footprint is actually fairly small.

I just heard of that project yesterday, that’s actually working on compiling Rust Lightning to WASM and embedding an entire Lightning node straight in the browser. We’ll see how that pans out in practice. I have numerous questions about how that’s actually not going to work, but it’s definitely one of the cooler aspects.

[00:28:54] R: Does this mean that the Docker container for Rust Lightning would be really compact?

[00:28:58] KM: I don’t actually know, because for the Docker container, again, Rust Lightning is a library primarily. They do have a tutorial, where you can basically build your own Lightning node in five or six lines of code using that library. If you’re talking about what it would take to build your own Lightning node using the Rust Lightning library and then Dockerizing that, in general, Rust binary sizes are pretty good, because there’s no runtime. There isn’t one like there is with Go. I don’t know how it would compare to something like C-Lightning.

[00:29:25] R: The reason I ask is because someone is inevitably going to roll out a Lightning node package which is containerized, like in Umbrel. If you install more than a couple apps and you have only 4 gigs of RAM, which most nodes do, they can start to crawl. I believe that it can actually lead to some failures, such as bitcoind failing health checks. If three of them fail in a row, it can do an emergency shutdown. I believe that’s what happened to my node a couple of days ago. I’m very concerned about the Docker container size of some of these apps and of Lightning redeeming itself.

[00:30:00] KM: Keep in mind that the container size is completely different than the in-memory occupancy. The container size doesn’t actually have to fit all the way in memory. Because what Docker is doing is it’s setting up a file system overlay. Obviously, any app that’s going to have a huge Docker image footprint size is likely to have a high memory footprint, but that’s purely based off of correlation of what I would describe as carelessness by the developers, and less so about some intrinsic link between the two.

[00:30:28] R: I had a question, if Alex is able to talk. I think he’s a speaker now. Hey, Alex. When can we have a truly pruned-node-based LND implementation?

[00:30:38] AB: If you update to 0.13.1, it should allow for [inaudible 00:30:41] bitcoind. It works by getting the blocks from the peers directly when they’re needed.

[00:30:47] KM: It does seem to still be buggy though, Alex. I was talking to Wilmer about this last week. We deployed 0.13 to the Embassy. 0.13.0. I don’t know if this was fixed in 0.13.1 that came out today. When we deployed 0.13.0 and used the LND native pruned node support, it caused nodes to periodically go offline and then not be able to come back. Then, when we switched back to our block patching proxy that we had been using prior to 0.13, it seems to fix it. Now, I don’t have any better evidence than that. I am working with Wilmer to try to nail it down, but we might want to be careful using LND’s pruned node support, watching it closely.

[00:31:26] AB: It is a brand-new feature, so your mileage may vary. The other thing to try to change is, there’s a caching system in it. It’s not going to get every block from every peer. It’s going to collect the blocks in the near timeframe that it might need. Then also, you can adjust your prune setting to say, “Oh, I want to prune everything, or I want to prune just the last two weeks, or the last month.” In some scenarios, it might be more reliable than others. Yeah, it’s a new feature that hasn’t been in the wild before. Yeah, there might be bugs.

[00:31:54] R: Thanks.

[SPONSOR MESSAGE]

[00:32:00] CK: All right, Bitcoiners. I want to tell you about our newest sponsor. This show is brought to you by ledn.io. I have been super, super impressed with the guys over at Ledn. I’ve actually known the co-founders, Adam and Mauricio for a very long time. I’ve had the pleasure to watch them build Ledn up from a tiny, tiny startup, to now a super impressive institutional grade, Bitcoin and crypto lender. Y’all, I’m so impressed with these guys. They are offering some of the best rates out there. I don’t think anyone even comes close to touching them.

You can get 6.1% APY on your first two Bitcoin that you deposit into Ledn interest accounts, and you can get 8.5% on USDC deposits. I mean, I know all the competitors. They’re not even close. If you’re going to put your crypto and your Bitcoin into an interest account, Ledn is by far the best. On top of that, like I said, these guys are hardcore Bitcoiners, and they know the products and the services that Bitcoiners want and appreciate. They came up with B2X. It allows you to put your Bitcoin in and a leverage it up, and you can with one click of a mouse, get twice the exposure to Bitcoin.

If you’re super bullish, Ledn has you covered with a super, super easy way to get leverage with B2X. Then on top of that, they know that Bitcoiners care about your reserves. They know that Bitcoiners don’t like under-reserved and not fully-reserved financial institutions. They are pushing the frontier in transparency in the digital asset lending space. They are the first digital asset lender to do a full proof of reserves and proof of attestation, through Armanino, a public accounting firm.

The Ledn guys, they know what Bitcoiners like. They are legit. I encourage you guys to check them out, do your own research and go to ledn.io. That is L-E-D-N.I-O and learn more.

[00:33:51] CK: Bitcoiners, I want to tell you about The Deep Dive. The Deep Dive is Bitcoin Magazine’s premium market intelligence newsletter. This is a no fluff, hard-hitting, incredible newsletter going deep into the market, helping you understand what’s happening with derivatives, what’s happening on-chain, what’s happening in macro, what’s happening with the narrative and what’s happening with the tech.

My man, Dylan LeClair is an absolute savant. He is making his name known in the Bitcoin community, getting shoutouts left and right, getting on podcast left and right, and him and his team are bringing you everything that you need to know about Bitcoin. You don’t even have to be on Bitcoin Twitter. You can ignore every other newsletter. This is the newsletter to rule them all. Go over to members.bitcoinmagazine.com. Sign up today. If you use promo code MACRO, you get a full month for free.

You have nothing to lose. What are you waiting for? Sign up, see the incredible work that Dylan and his team are putting out. If you don’t like it, just unsubscribe. You don’t pay a dime. If you do, it’s going to be well worth the SATs and investment and understanding Bitcoin, and gaining the confidence to continue to invest in Bitcoin and making the right moves around Bitcoin. It’s going to be well worth every single Satoshi. Again, can’t recommend it enough. That is members.bitcoinmagazine.com, promo code MACRO. Do it today.

[EPISODE CONTINUED]

[00:35:22] P: Just to be clear. Rust Lightning is the idea that you would run Bitcoin Core, for example, LND, and then you’d use Rust Lightning as the library?

[00:35:30] KM: Rust Lightning would substitute for LND in that particular case. The primary difference, and this is – I’m taking a lot of this from their documentation. I had a couple of conversations with Matt Corallo about it, but the thing that they’re going for is that the various node implementations make a lot of decisions with respect to how to store certain pieces of data and how the Lightning node fits into some broader architecture.

By busting up what makes up a Lightning node into its various subsystems and making those a bit – giving you the ability to control those from inside of another process, it just gives you a lot tighter control. By and large, it’s not as well developed from an end-user perspective as something like LND is, or even C-Lightning. As a developer, if you find that the other node implementations are not serving your needs, either due to being heavy, or awkward to deploy, or you need just lower level access to the actual individual protocol features, then Rust Lightning, I think, has an opportunity to serve your needs better in that way. It is a comparatively earlier –

[00:36:40] P: I’m so curious. For all of you, how long before you start really playing around with it? What needs to happen before you feel comfortable doing so and implementing it in your own tools?

[00:36:49] KM: Rust Lightning?

[00:36:50] P: Yeah.

[00:36:51] KM: I have to hate Rust less.

[00:36:55] P: That seems like a huge problem.

[00:36:57] KM: It is.

[00:36:58] P: Got it. Okay. Alex, how much have you built on top of balance – Do you use Balance of Satoshis as the intermediate layer between most of the stuff that you do on the Lightning Network?

[00:37:08] AB: I use that for my own nodes. I use that helping to manage the Lightning loop service and trying out different things. I have various test net nodes, test nodes. It’s both what I use to manage nodes. Then also, to prototype different ideas, try different things out. It’s built on top of my general Lightning library that I’ve been working with since I originally built yalls.org. It’s built on that code base.

[00:37:32] P: Got it. How often do you use lncli directly, versus the tools that you’ve built on top of it?

[00:37:38] AB: Generally, if lncli does what I want it to do, I’m not going to replace it with a new command. Although, Balance of Satoshis does have a command which will just call, basically, lncli, so you can use it that way. Generally, I build the commands more for automating common tasks. Whereas, lncli is a great way to access the API directly.

[00:37:59] P: Yep. I have one more question for you, then I want to open up a more open dialogue between everybody that’s currently a speaker, so we can just riff and go into whatever we want. One of the things that was really interesting to me is, I guess, the fact that C-Lightning, as I understand it, has made probing in the way that Terminal Web uses it no longer possible. How does that affect the network and things like Terminal Web and the tools that you’re working on? Is that a good or a bad thing in your mind?

[00:38:26] AB: I missed when it made probing impossible. What happened?

[00:38:29] P: Like ACINQ, right, on Terminal Web. Like, terminal.lightning.engineering. ACINQ was the number one node forever, basically. Then very recently, my understanding is that the newest implementation of C-Lightning made it so that probes can no longer be used to basically determine the channel balance.

[00:38:48] KM: Do you mean Éclair?

[00:38:50] P: I’m sorry. Is it Éclair? It’s not the C-Lightning?

[00:38:52] KM: ACINQ is almost certainly using Éclair, instead of C-Lightning.

[00:38:54] P: Okay. I apologize. Not C-Lightning. Eclair.

[00:38:57] SA: I’m going to jump in for a moment here, because I believe Alex had some connection problems. What Éclair did is basically, Éclair made the payment secret a requirement. This, as far as I know, disables keysend, and also disables probes. What happened with ACINQ, or I don’t know how to pronounce this node, is it fell completely off the Terminal score. That is because the Terminal score, to some extent, uses probing to determine the health of a node.

[00:39:28] AB: I’m not working on Terminal Web. I can’t get into exactly what happened there. I don’t know. I don’t think that you can necessarily make probing impossible, but you can cause problems for it, for sure. Also, Terminal Web, it’s not probing your balances or anything. That’s not part of how it works. I think, actually, ACINQ was deliberately removed from the original scoring list, because it was causing problems for probing. Maybe they don’t want to be probed, so they were rejecting it. It was removed, because it wasn’t working. I think you can make problems for people who want to run probes, but you can’t really categorically stop probing. You can just send a signal that you don’t want to be probed.

[00:40:07] P: Oh, wow. Wait, so Severin, in the other chat, my understanding was that we’d come to the conclusion that Eclair no longer provided that information, but it sounds like, that’s not the case.

[00:40:17] SA: I’m not sure if I understand your question correctly. Can you repeat that again?

[00:40:21] P: Yeah. I don’t know if it was in the beta group, or in the advanced group, but I thought we had come to the conclusion that the newest version of Éclair basically made it not possible to reliably probe channels, as a result. It sounds like what Alex was saying is that’s actually not the case.

[00:40:39] SA: If you probe according to the probing research paper that came out two years ago or so, if you do it like this, then it’s not possible anymore. They will return a different error message. Yeah, it doesn’t work. You can possibly get around it by making one or two adjustments to the probing algorithm. Then it should work again. The standard approach doesn’t work anymore with Éclair.

[00:41:09] P: Okay. Got it.

[00:41:10] AB: I don’t really think that’s the reason. Because they were actually upset that they weren’t on the list, and they asked to be included. They asked for the exemption to be removed. I think, probably the reason that they’re not on is unrelated to any probing changes.

[00:41:25] SA: Alex, what I saw on the Éclair GitHub is literally, they merged some code that makes the payment secret a requirement. It’s just coincidentally at the same time. Then ACINQ fell out of the Terminal score, but it doesn’t need to be related, I don’t know.

[00:41:44] AB: Yeah. I don’t know either.

[00:41:45] TJ: One question that’s coming up for me is, Severin, in our conversations, we’ve talked about a really responsible use of probing. I’m curious, as probing grows and as more tools are built around it, how do you folks feel about, or how will the network respond to, a whole bunch of probes happening across the network, or potentially irresponsible use of probing that might not protect privacy, or that might be abusing individual nodes’ resources?

[00:42:17] P: Good question.

[00:42:19] TJ: Alex, I know we’ve talked before about how the network is resilient. How do you see nodes responding to excessive probing?

[00:42:26] AB: Yeah. I wouldn’t necessarily even say probing. It’s just what happens if you make a lot of requests. Like, what if you go to a webpage and you hit it a billion times and you get everybody to hit it a billion times? There’s a level of abuse, even in regular things that people are expected to do. I think, that’s a super important question for how does the protocol deal with this scenario? It conflicts with the goal of also making it, so that you don’t know who’s responsible for the traffic. Because it’s not like, you can just put a rate limit on an IP.

You are getting the traffic from somewhere and you don’t know where it’s coming from. If you rate limit it, you’re just rate limiting everybody. That actually makes the problem worse, because now, you’ve just increased the bang for the buck of doing an abuse. You basically shut off the node. I think, there are a lot of solutions for how this could be solved, but it does need to be prioritized. People need to be talking about this more, maybe than other things that they’re working on, adding to the spec that are not thinking about how to harden the network.

[00:43:25] KM: Can you shed some light more on actually, how probes work? Is it done through the onion packet?

[00:43:31] AB: Probing is just a very generic way to describe doing a payment that maybe doesn’t succeed. The simple probe, if you use my tool for probing, all it’s going to do is send the payment to the destination, but instead of the hash, the H in HTLC, it’s going to send random data. The nodes along the path, they won’t know that’s not the correct hash, so they’ll still forward it. Then when it gets to the end, the end will reject it and say, “That didn’t work for me.” That’s one type of probe, and that’s the most simple type of probing.

It can be useful when you’re making a real payment. A lot of wallets actually do a probe before they pay, including the Lightning loop service. Before we actually do a swap, we do a probe just to test the route, to see is the route going to work for us? Once we know that the route is going to work for us, then we send along a real payment. It’s not like, it’s just information gathering for information gathering sake. It can be part of the regular payment flow.
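Here is a conceptual sketch of that simple probe. The send_payment_along_route helper is hypothetical, standing in for whatever send-to-route API your node implementation exposes; the point is only the logic Alex describes, paying with random data instead of a real hash and reading the failure that comes back.

```python
# Conceptual probe: offer an HTLC along a candidate route using random bytes in
# place of a real payment hash, then interpret the failure that is returned.
import os

def probe_route(route, send_payment_along_route):
    # No one knows a preimage for 32 random bytes, so the destination cannot
    # settle this HTLC and has to fail it back to us.
    fake_payment_hash = os.urandom(32)
    failure = send_payment_along_route(route, payment_hash=fake_payment_hash)

    if failure.source == route.destination and failure.code == "incorrect_or_unknown_payment_details":
        # Every intermediate hop forwarded the HTLC and only the destination
        # rejected it, so this route can carry a payment of this size right now.
        return True
    # Some hop along the way failed it instead (exhausted liquidity, offline, ...).
    return False
```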

[00:44:26] KM: Just to clarify here, so what happens is that the onion packet is sent with basically a full route, or a candidate route, to the destination. At the very end, the payment hash of the HTLC being offered to the final hop, the recipient, is not associated with a payment hash that they’ve generated. They reject the HTLC and then the HTLCs get rejected all the way back to the source.

[00:44:49] AB: You can see, okay, my payment made it along this path. If I want to use that path again, there’s a high chance it’s going to work. There’s also, like you were saying before, the payment nonce that’s included. When you generate a payment request, there’s a random number that is encrypted in that payment that you make. Actually, if you use my probing tool and you use it with a payment request, it will still include that nonce. It might even be compatible with the way that ACINQ is blocking probing, because it signals that you have knowledge of the payment request. That’s just one way to probe.

Another way to probe is, you can pay past the point that you want to pay. That makes it harder to block it. How do you know, if you’re a routing node, that the payment is a probe, versus just paying one of your peers? That’s how probing is a general concept of, I’m just gathering information that’s going to help me to do something.

[00:45:40] P: Okay. Got it. The statement that I had made, that basically Éclair is blocking probing, is totally incorrect. I’m still a little bit unclear on exactly –

[00:45:47] AB: Well, they were always causing problems for probing. That’s why they were not originally included in what I worked on. They weren’t sending back the error, which was, I don’t know about this payment. They always worked that way. Then, they did update their node and they also asked to be included in the scores. They were included for a while, but I don’t know why they fell out.

[00:46:06] P: Okay. Got it. Just to be clear, my assertion earlier that running the newest version of Eclair has anything to do with this was incorrect. Is that right? It’s a per node.

[00:46:15] AB: It might be correct. I really don’t know.

[00:46:16] P: Okay. Got it. Got it. Got it. Because one of the things that’s been interesting in PlebNet is that we’ve noticed that a ton of us have basically jumped hundreds and hundreds of spots up on Terminal Web, and I had thought that was because the newest version changed something, but –

[00:46:31] AB: It might change things. I just don’t know, because I’m not working on the current version.

[00:46:34] P: Yeah. That makes sense. Severin, anything else that you would add to that?

[00:46:38] SA: No. It’s actually a very good explanation from Alex on how probing works. There are ways around it, even if they make the payment secret mandatory, like Éclair did. I believe it has to do with a recent merge request. Alex, I sent you the merge request in the Lightning Labs [inaudible 00:46:58] group. You can have a look. They explicitly say in the merge request that you can safely make it mandatory, which closes probing attack vectors. It actually doesn’t prevent probing, if you can get around it.

[00:47:11] KM: Yeah. The routing one hop past basically kills it.

[00:47:15] P: Just to be clear, the routing one hop past is where you’re sending a probe one hop farther than the node you’re actually interested in, or is it that you’re sending a payment one hop farther than the node you’re actually interested in?

[00:47:25] KM: They’re not materially different, but yeah, it’s mostly – You’re offering an HTLC that never resolves.

[00:47:31] P: Got it. Oh, that’s so fascinating. Okay. One thing that you said, Alex, a second ago is that Terminal Web does not use probing to determine what constitutes a good peer?

[00:47:40] AB: It doesn’t use balance probing. It’s not, like, figuring out everybody’s balances. As far as I know, that’s not how it works at all.

[00:47:47] P: What do you think?

[00:47:47] SA: I’m not sure about this, because when you have a look at the JSON file that the Terminal Web score loads in the background, there is one field that clearly states that you need to have minimal routable tokens of 1 million satoshis. It clearly states minimal routable tokens. With my debugging effort on the Terminal score debugger on my website, lnrouter.app, there is a pattern: you must have 1 million routable tokens. But the pattern is not clear. There are some exceptions, and I cannot 100% say that they do probing. They do something in this direction, but I don’t know what exactly they do.

[00:48:32] AB: It does do probing. I’m not saying it doesn’t do probing. I’m saying, it doesn’t do the type of probing, where it narrows in on what your balance is from hour-to-hour, or day-to-day. As far as I know, it doesn’t do anything like that. It just does more of an information gathering probing.

[00:48:46] SA: Yeah, absolutely. That’s a big thing. Actually, a lot of people connect probing with being privacy invading. I disagree there, if you don’t really determine the balance of the channel. If you just check, “Hey, would this payment go through,” which happens all the time in the network by just trying to find a path, I don’t believe this is privacy invading, to be honest.

[00:49:14] TJ: What you could do for probing is just say, “Hey, can you route that 1 million sat payment? Oh, no, you can’t? How about a 500,000 satoshi payment? Oh, you can.” And just narrow in: how about 750,000 sats? You can bring down that resolution on exactly what someone’s balance is. Instead of doing the balance probing, you don’t need that type of resolution. You’re just curious whether you can route, generally, a large payment.
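Written out, the narrowing-in Jestopher describes is just a binary search over probe amounts. The can_route callback below is hypothetical (one probe attempt at a given amount, for example built on the probe sketch earlier); this is also exactly the kind of balance discovery that keeping channels private protects against.

```python
# Binary search over probe amounts to estimate how much a target can route.
def estimate_routable(can_route, lo_sat=0, hi_sat=16_000_000, tolerance_sat=10_000):
    while hi_sat - lo_sat > tolerance_sat:
        mid = (lo_sat + hi_sat) // 2
        if can_route(mid):
            lo_sat = mid   # mid sats got through, so at least this much is routable
        else:
            hi_sat = mid   # mid sats failed, so less than this is routable
    return lo_sat, hi_sat  # the routable amount lies somewhere in this range

# Usage (with hypothetical helpers):
# estimate_routable(lambda amt: probe_route(build_route(amt), send_fn))
```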

[00:49:43] AB: Yeah. Also, you can get the same information just by making regular payments on the network. Because every time you make a payment, you’re routing through lots and lots of different nodes. Even if you’re just making regular payments, you’re already gathering that data, like who can forward for you?

[00:49:57] KM: Yeah. This is another reason that you might want to make your channels private, if you’re not trying to be a router: you don’t want someone to be able to zero in on the balance of your channels through a binary search, probing whether or not you can route a payment.

[00:50:14] P: Yeah. Can’t you create the same effect though by, I guess, you could still force it. Basically, by setting the max HTLC size? What if you had a 16 million SAT channel and then you just set the maximum HTLC to 100?

[00:50:25] AB: They can also stack HTLCs.

[00:50:27] P: Yeah.

[00:50:28] KM: You can have up to 480 something HTLCs on a channel at once.

[00:50:34] P: Yep. Yep. Yep. Fair.

[00:50:36] TJ: One thing that we didn't talk about is private channels in parallel with public channels. I know Openoms and Alex have talked about this before. That's been fascinating, because what I was gathering was that you could actually use this private channel for routing in parallel with a public channel. The liquidity in the private channel could actually be used for routing, if you have them set up in parallel.

[00:51:03] AB: Yeah. Another thing that I know of, or have heard of people doing, and I played around with a little bit myself, is basically having public channels and then private channels for rebalancing, which I think is related. Or are you saying something different?

[00:51:14] TJ: Oh, I think earlier, we were saying that private channels couldn’t be used for routing, but I was adding a little bit of nuance into it, because I think it’s an exciting opportunity for people to maybe improve their privacy, or actually, yeah, make this probing question a little bit more difficult to get a handle on, and maybe clean up your offset a little bit.

[00:51:33] AB: Also, if you're a routing node, you might not want to advertise which nodes you're connected to, or how much you're connected, because you're leaking information to your competitors about how much you can send to a destination. I also think the private channel mix could be interesting. Right now, a channel and a UTXO are a one-to-one mapping. In the future, it could be that you could just have your channels be cold wallet UTXOs that are not actually used for the channel. They're just a marker, a placeholder that says, "I can route up to this amount." Keep them on your cold wallet. Then, you can make private channels to manage how much actual hot wallet liquidity you want to have on your node. You can tear that down and raise it up.

[00:52:15] P: Wait, Alex. Can you elaborate on that? I don’t quite understand. You’re saying you could use UTXOs that you couldn’t actually sign as you’d have it on the –

[00:52:23] AB: Right. From the perspective of the network, it doesn't know if the coin that you've referenced in your channel is actually being used for the channel at all. It's just a pointer. The cost of the pointer is just to sign a multisig with that UTXO. It's conceivable that you could just have that UTXO actually be living on your cold wallet. You don't actually have those funds on your node. The funds could actually not even be your own funds; you could pay somebody else to create that pointer for you. Once you have that, then you would be able to manage your actual liquidity totally privately by making private channels that just follow along the same path. Whenever you receive a new HTLC, you just send it along the private channel, instead of the public channel that the sender referred to.

[00:53:04] P: Oh. Wait. You’re blowing my mind. Is that something that people are doing today?

[00:53:07] AB: We would also have no way to know. I don’t know of an easy way to accomplish it, like using a current tool.

[00:53:13] KM: When you say that people might use these things as pointers, the thing that's jumping out in my mind right now is that it's not clear why someone would want to do this if the UTXOs are small. For instance, there's an idea that some people might want to do, which I think I've heard called shadow routing, where they might open a 10 million sat public channel and have a 100 million sat private channel. At least until AMPs are a little bit more widely used, that basically limits the amount that you can route over that link to 10 million at a given point, but you're hiding the lion's share of the liquidity in the private channel.

However, that doesn’t still change your hot wallet exposure as a result. It might not leak the information that you have that much available. If you have the reverse scenario, where the public channel appears, even though it might not belong to you, or something like that, appears much larger than a smaller private channel, if you look, that creates even more problems.

[00:54:11] AB: Yeah. This is a theoretical solution. I think that it addresses one of the issues with having shadow routing channels, which is that you limit yourself in what you can forward. You're turning away customers. If you have the public channel that's 10 million, but then you decide, "Oh, I want my shadow channel to have a 100 million," the people who are sending don't know that you have a 100 million, so those 100 million sends are going to go to somebody else and you're going to lose that revenue.

Whereas, if you had one of these pointer UTXOs, you could set that to be a 100 million, but then only commit 10 million. Then if you decide, you want to go up, then you could add more shadow channels and your pointer will still remain valid.

[00:54:47] KM: You probably have to splice them, because well, link level can’t –

[00:54:51] AB: No. Because it really doesn't matter. LND will already switch your forward to the channel that has liquidity, even if you specify a different channel. The sender doesn't need to know about it, because LND will just automatically switch it over to the one that does exist.

[00:55:04] KM: Will it do it over parallel channels as well?

[00:55:07] AB: Yeah. That's the only time it will do it. If you have multiple channels with your peer and one of them is depleted and the other one isn't, but the sender didn't know that, so they specified the one that was depleted, LND will automatically switch it over to the one that wasn't depleted.

[00:55:19] KM: Yeah. Sorry, what I meant is that if you advertise a 100 million, but you used to have 10 million and you said you wanted to up it, so you open up a second private channel with 20 million, you’re still limited to 20 million in a single shot. Until link level amps have been – are those standardized?

[00:55:37] AB: No. There's no link-level AMP implementation that I know of. Yeah, the problem is really that your peer isn't going to respect that you have this pointer. They're going to say, "I need to have the channel. I need to have those funds in the hot wallet." It just gives you the flexibility to grow if you want it to grow.

[00:55:51] TJ: This is fascinating dialogue. I’m also curious if I can ask another question, P. Stop me.

[00:55:57] P: No. Please. The goal of this is basically to have an interesting conversation. Anyone who's a speaker, please feel free to dive in and ask questions.

[00:56:05] TJ: Yeah. Another thing that comes up is how do you think the Lightning Network will change with taproot getting activated? Do you expect that it’ll be easier, or more difficult to find routes? Or how do you see it playing out as more tools become available with the soft fork?

[00:56:22] KM: I don’t actually anticipate it making anything more. I guess, I don’t know about. It’ll depend on whether the implementations can get an uptake of some channel point that is taproot enabled quickly, because it does require a spec change. Because in one of the BOLTs, I think, BOLT 3, it actually specifies the entire transaction and script formats. There’s all the implementations have to use that in order to be able to enforce the punishment schemes. In so far as it takes a long time to get that implemented and there’s going to be this heterogeneity between the network.

HTTP/2 came out forever ago and we still use HTTP/1 on half the Internet. It might take a while to be able to use taproot channels with most of the peers on the network. I don't think it should impact routing all that much, as in constructing a route to the destination.

[00:57:11] KM: Okay. Interesting.

[00:57:13] P: How do you think that sidecar channels will affect the topology, or the way that the Lightning Network is used?

[00:57:19] KM: I wish I understood sidecar channels better.

[00:57:21] TJ: That’s a pool product. Is that right, where you’re essentially providing inbound liquidity to a new entrant to the Lightning Network, for a fee and making that available to the pool auction. Is that right? Please correct me if I’m wrong.

[00:57:36] P: I’m not sure.

[00:57:36] KM: Does Elizabeth want to come up?

[00:57:38] P: I sent her an invite, but she's refusing, which I'm deeply offended by. No, I'm kidding. Elizabeth, do you want to come up and give us your thoughts? She may be otherwise occupied.

[00:57:48] KM: I'll try to take a stab at it. The way that I understand sidecar channels right now is that it just requires that the person purchasing the channel lease does not have to be the one who receives the inbound liquidity from the channel lease. I think that that's the only difference. What that would mean is that it basically just functions as a normal channel, but it doesn't require someone to have Bitcoin loaded up into a bunch of different wallets to begin with.

[00:58:14] TJ: Okay. Under that, then established nodes would be able to participate in Pool and help broker deals for liquidity for new nodes, because I think that's one of the biggest problems: when people start up a node, they're like, "How in the world will I get inbound liquidity, so that I can receive payments or become a routing node?" Beyond sidecar channels, it sounds like there's a whole bunch of tools that are emerging, like lightningnetwork.plus for these organized rings. I've been really impressed with it. You're able to construct these ring routes in a matter of hours, instead of trying to coordinate these liquidity rings just manually through messaging.

[00:58:55] P: Yeah. I can say that trying to participate in the rings of fire is a very onerous process. It just takes days and days, and then people change their fee structure, or they can't actually route. We've found it much more effective in PlebNet to basically just organize those directly between people. The problem, of course, is that it is very trusted. It requires trust. The reason that I got super interested in the Balance of Satoshis dual-funded channel option is because it is trustless, which is super interesting. I didn't realize that it was possible to implement that through keysend on LND, but I certainly use that a lot these days.

[00:59:29] AB: Yeah. I think that I’d be interested in making a group version of that.

[00:59:31] P: Oh, my gosh. You should do that.

[00:59:33] AB: I think there's a lot of interesting angles to approach it, like making it easy, making it so that you're not relying on somebody running some script, so that you just say, "I want to join this group," and then the group just happens. This is a new phenomenon. I never really thought about it before, but I've been thinking about expanding the way that the balanced channel works to make it amenable to groups. That was the impetus behind the balanced channels. I saw people who were opening a channel and then they were trusting the other person to send them half the money back.

I thought, “Oh, we have the technology here that you don’t have to do that.” I think, the same applies also to the group channels, but the group channels themselves have also been progressing. It’s not as bad as it was before with this trust model, but I think it could be better than what we have now.

[01:00:18] P: Oh, it absolutely could. I love the idea of being able to, as you said, have these group architectures. One of the things that I've been thinking a lot about is that in the last three months even, the tools that are available, that Severin has built out, that you've built out, have just exploded. As a person who has had a Lightning node for a long time, but has not actually been able to figure out how to participate effectively in the Lightning Network and how to basically make strategic decisions about which nodes I connect to, I just feel like we're in this magical time when the tooling is just being built out in front of us, and we're able to participate in that process.

One of the things that I have been really excited about is tools that allow one to do this effectively, like LNnodeInsight by smallworlnd. That's another tool whose author I was trying to get to join, but he's in a different country, and so the timing was off. Essentially, there is a channel simulator that he's built out that allows you to basically go in, put your node in, and then plug in any other node and simulate how it will affect your centrality, which of course is only one aspect of being an effective routing node.

There’s a real space. There’s a real need right now for tools that allow people who are non-software engineers to be able to intuitively understand, or build mental models around how routing works and how rebalancing works. I think, that’s the thing that is so desperately needed right now. As we all put effort into building up the number of high-quality nodes in the Lightning Network. For example, being able to visually have a tool that would display the entire Lightning Network and then basically, use a slightly different force directed graph that would show communities. Then basically, have you be able to visually see in real-time, or maybe after the fact how routes are being constructed, even just a graph that’s on lnrouter.app/graph, but then you could plug in and basically dump. You could see in after the fact, exactly the route that was taken through the network. That stuff is so valuable for people who are just trying to wrap their heads around how Lightning Network works.

[01:02:13] TJ: I love all the visualizers popping up, including the one on LnRouter, as well as Cheese Robot. I think one of my favorite things at Amboss is just watching the Loop node and watching all of the people compete on fee rates. Since they can see the actual fee rates that other people are charging, they're now actively undercutting each other. Then, they've taken to changing their aliases to send passive-aggressive notes or whatever, to say, "Oh, you undercut me. I'm undercutting you now," using that as a broadcast communication method. It's very entertaining to watch.

[01:02:53] AB: It’s great for the whole concept of Lightning, that the capital is going to be where you want it to be. I wouldn’t take that for granted. You’re a service. Then, the people who are just going to appear to offer you inbound liquidity when you need it. Loop is the proof point that that does work. That if there’s a demand, that’s a sustained demand, there’s going to be a marketplace for people to come in and supply that inbound liquidity. It’s going to be a very vibrant marketplace, where people are going to figure out how much is this costing me? How much can I earn? Can I do better than the other guy? If we scale this network up to a 100X, this is a market process that can just work.

[01:03:28] KM: Yeah. The hard part is actually just discovering where those reliable demand points are.

[01:03:33] AB: Yeah. It didn't happen overnight. The original Loop node, that was some of me just begging people, "Do you want to open channels?" Then, it takes time for people to find out about this. That's on both sides of the equation. If you're somebody who is starting a new node and you want inbound liquidity, and you were talking about starting with only private channels, that's one reason that you wouldn't want to do only private channels, because that sacrifices the organic inbound liquidity that comes from people knowing about your node. There's also a marketplace even within the peering. If I open a channel to Loop and it's at a low fee rate, but there's somebody else at a higher fee rate, the people at the higher fee rate can buy the liquidity from the people at the lower fee rate. That creates a marketplace just through rebalancing. You don't really get that unless you have public channels, and unless you have an established node in the network.

[01:04:22] TJ: Now with parallel channels, I guess the higher-priced node might think that they'd be able to rebalance and eliminate some underpriced, or lower-priced, nodes. They might find that there's actually a whole lot more liquidity than they were prepared for.

[01:04:41] AB: Also, you’re creating your own demise to some degree. Let’s say, you’re a high-fee node and you peer with loop. Then, you look at the low fee nodes and you say, “I’ll buy all of their liquidity out.” You can do that, but you’re also giving them an incentive to get new inbound liquidity, to create new channels. This is like a market in the sense that you’re predicting the future. What are they going to do? What’s going to be the demand in the future? Then that’s what’s determining the price of doing a loop in the routing sphere.

[01:05:05] TJ: Fascinating. I love how this is evolving so quickly.

[01:05:09] P: Yeah. This is an amazing time to be in Lightning. Yeah, I wonder, does anyone that is a speaker on stage have questions for anyone else on stage? What are the things that you’re currently thinking about that might be useful to get input on?

[01:05:21] R: I have a question for Alex, or everybody else. I was tweeting a lot about Tor recently. It seems like a lot of Tor nodes have trouble staying up, trouble having their channels be active and not disabled. I'm a little bit confused. I'm not sure now if Tor is the problem, Openoms also replied to my tweet there, or if it is actually an issue with LND at the moment that a lot of Tor nodes are having issues.

[01:05:53] AB: There is an LND issue that should be fixed in today's 0.13.1 release. I guess, it was yesterday. The problem is that if you're a Tor node and you're connected to a node on clearnet and the clearnet node changes its IP, the Tor node will not automatically reconnect to the clearnet node's new IP. It will just stay disconnected forever, and then the channel will be disabled. Unless you run a reconnect script periodically, it won't figure that out. That issue has been fixed today.

There is also a greater issue, which is that Tor itself, as a network, is not 100% reliable. There are a lot of problems with Tor. That manifests itself as you just losing the ability to forward to your peer.
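As a rough illustration of the kind of periodic reconnect script mentioned above (not the fix that ships in LND itself), here is a sketch that could run from cron: find channel peers that are currently inactive, look up their advertised addresses in the graph, and try to reconnect. It assumes `lncli` is on the PATH and pointed at your node; the JSON field names shown match LND's CLI output but may differ between versions.

```python
# Sketch: reconnect to inactive channel peers using lncli. Assumes lncli is on
# PATH and configured for your node; field names may vary by LND version.

import json
import subprocess

def lncli(*args):
    out = subprocess.run(["lncli", *args], capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

def reconnect_inactive_peers():
    for chan in lncli("listchannels")["channels"]:
        if chan["active"]:
            continue
        pubkey = chan["remote_pubkey"]
        try:
            info = lncli("getnodeinfo", pubkey)
        except subprocess.CalledProcessError:
            continue  # peer not found in the graph
        for addr in info.get("node", {}).get("addresses", []):
            try:
                subprocess.run(["lncli", "connect", f"{pubkey}@{addr['addr']}"],
                               capture_output=True, text=True, check=True)
                break  # connected on this address, stop trying others
            except subprocess.CalledProcessError:
                continue  # try the next advertised address

if __name__ == "__main__":
    reconnect_inactive_peers()
```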

[01:06:36] KM: Yeah. There were significant problems earlier this year with the consensus process and the Tor hidden service directories, which is how the .onions know where they're routing. First of all, V2 addresses on Tor have been deprecated. It's recommended that you use V3s to begin with. If you did use a V3, you were probably going to be affected by this. It happened sporadically. There was a patch that the Tor team released to deal with it, but it isn't widely available on a lot of the home node implementations, because the patch that they deployed was only available for ARM v8. It never actually got backported to ARM v7, and a significant number of the node implementations run off of operating systems that require 32-bit, or ARM v7.

[01:07:24] R: Great, thanks. This is insightful.

[SPONSOR MESSAGE]

[01:07:31] CK: Bitcoiners, I am so excited to tell you about the Bitcoin 2022 Conference. You guys, Bitcoin 2021 was absolutely a smash hit success. It was over 13,000 Bitcoiners coming together, breaking the barriers on who can come together and celebrate freedom, celebrate Bitcoin. The energy was absolutely electric.

Unfortunately, it was just oversubscribed. There’s just people flowing out everywhere. This year, we are learning, we are making the conference bigger and better. We are moving over to the Miami Beach Convention Center, and we are going to be throwing a massive four-day festival for Bitcoin, celebrating Bitcoin, bringing together the greatest minds in Bitcoin and the greatest businesses in Bitcoin and lastly, the culture of Bitcoin all together.

We have a four-day extravaganza planned for you guys for Bitcoin 2022. Day one is going to be industry day. It is a day where you can buy a special ticket in order to just mingle and make business deals happen. Days two and three are going to be the full-blown Bitcoin conference. Our main conference is going to be on April 7th and 8th. Then lastly, we have the sound money music festival on day four.

Imagine going to Coachella, but for Bitcoin. There’s going to be very few talks. It’s going to be all about the culture of Bitcoin. It’s going to be all about hanging with your fellow plebs. It’s going to be an absolutely amazing time. There’s going to be Bitcoin musicians, Bitcoin artists, and all your favorite Bitcoiners and just an amazing environment to party and just see it all, soak it all in, and to get people to realize that a Bitcoin world, a world filled with Bitcoin people doing Bitcoin things is the world that they want to live in. That’s what Bitcoin 2022 is all about. That is what the Bitcoin conference is all about. That’s what Bitcoin Magazine is all about.

It is going to be a celebration of Bitcoin, the Bitcoiners and this amazing movement that is going to make the world a better place. Go to b.tc/conference, learn more about the Bitcoin Conference, learn more about all the amazing things that are happening in Miami around the Bitcoin Conference and buy your tickets. Guess what? If you buy your tickets with Bitcoin, you save $100 on all the tickets and $1,000 on the whale pass. If you want the VIP pass, the Big Kahuna, and you buy with Bitcoin, you save $1,000. That's a lot of sats. Go and do it right now today. Don't wait. Prices are only going up. This is going to be a can't-miss event.

[01:09:59] CK: Bitcoiners, let's take a break from the content and I want to tell you about CoolBitX. CoolBitX is an awesome Bitcoin hardware wallet company. It's been around for a really long time. They are building an amazing Bitcoin wallet called the CoolWallet Pro. The CoolWallet Pro is state-of-the-art Bitcoin hardware wallet technology. Its form factor is like a credit card. You can put it into your wallet, and it is designed to go with you on the go. That way, even when you're on the go, you can have the benefit of a two-factor hardware wallet design when you're trying to spend your Bitcoin.

You have your Bitcoin wallet UX on your phone, making it really easy to scan and decide what you want to do. Then you sign with CoolBitX, which is in your back pocket. It is tamper-proof. It is waterproof. It is flexible. It has an awesome secure element in it. It is a really awesome way to have some more flexibility, yet security, when you're taking your Bitcoin on the go. I personally am a fan of this idea of making Bitcoin into a medium of exchange and making it into something that people use. I know it's going to take time, but they are working on the UX to make that possible in as secure a way as possible. Have some peace of mind. Check out the CoolWallet Pro from CoolBitX. Thank you to them for sponsoring this podcast.

[EPISODE CONTINUED]

[01:11:27] R: I am wondering how larger node operations with a heavier volume of transactions deal with the channel DB infinitely growing. Obviously, there’s compaction offline. In my case, I’m doing a lot of rebalancing pretty much constantly through the day. At this point, I have a significant number of settled invoices from that.

If you want to use the UIs, that means getting significantly worse performance. I see that in the next version of LND, they'll have some pagination enabled, which, once UI developers add that, should obviously help. I'm wondering if there are any other things maybe in the pipeline that anybody knows about.

[01:12:07] AB: The problem isn't the invoices side of rebalancing. The problem is on the payment side. LND keeps a history of every failure that you ever see, and it will keep it forever. Even if the payment fails, it will keep that payment and its data around, and it will also keep every attempt that led to that final failure. That will usually comprise the bulk of your database, if you're doing a significant number of payments.

The way that you can deal with that is number one, there’s always been this API call, where you can delete all your payments. You can dump all your payments out to a file or something, delete them all, run the compaction. You probably would see maybe even a 10 times decrease in the amount of database space used, depending on how many payments you’ve made.

Then in later versions of LND, there are other API calls that allow you to delete all the failed payments. Only the payments that succeeded will stay in your database. Or, there’s another flag to allow you to delete all of the attempts that failed. You were trying to make a payment and it failed this route, it failed this route, it failed this route. It will delete those attempts. On my nodes, maybe every week or two, I’d run a delete payments. I’d run a compaction. In addition to the space savings, your node performance can dramatically increase. It could be a 10X increase, depending on how fragmented your database is, depending on how much data you’ve got on there.
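For readers who want to try the cleanup Alex describes, here is a minimal sketch over LND's REST interface. It assumes your LND version maps the DeleteAllPayments call to `DELETE /v1/payments` with a `failed_payments_only` flag; check the API docs for the release you actually run, take a channel.db backup first, and note that compaction itself happens separately (for example via `db.bolt.auto-compact=true` and a restart).

```python
# Sketch: delete failed payments over LND's REST API, keeping succeeded ones.
# The endpoint and flag names are assumptions based on LND's DeleteAllPayments
# RPC; verify against your LND version's API documentation before running.

import codecs
import requests

REST_HOST = "https://localhost:8080"        # assumed default REST port
MACAROON_PATH = "/path/to/admin.macaroon"   # adjust for your setup
TLS_CERT_PATH = "/path/to/tls.cert"

macaroon = codecs.encode(open(MACAROON_PATH, "rb").read(), "hex").decode()
headers = {"Grpc-Metadata-macaroon": macaroon}

resp = requests.delete(
    f"{REST_HOST}/v1/payments",
    params={"failed_payments_only": "true"},  # keep the record of succeeded payments
    headers=headers,
    verify=TLS_CERT_PATH,
)
print(resp.status_code, resp.text)
```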

[01:13:29] R: That makes sense. I noticed that API today and figured I was going to play around with it, because, yeah, my rebalancing performance has dropped like a rock in the past two days.

[01:13:40] AB: Yeah. If you use my script, you could just do a delete payments history, or you can just hit that API call. There's no lncli command for it, so you do have to use some tool, or use the API directly.

[01:13:50] P: Wait what script is that?

[01:13:51] AB: The Balance of Satoshis. It just has a delete payments history command.

[01:13:55] P: Oh, no way. Okay. Man. Let me ask you something, Alex. For those of us who are running, or attempting to run, effective routing nodes, what are the things that you have in your cron jobs that you would recommend all of us are doing? I know there's bos reconnect, which you've been super helpful in explaining, and it sounds like bos delete payments. Are there any other things that you currently have running on a cycle?

[01:14:20] AB: I do dynamic fees. If there is a scenario where I've identified that I need my fees to change based on my inbound or outbound, or things like that, I have a cron job to execute this command, and it has a little bit of logic in it, which is: if inbound is greater than this, then do that. Then also, I run multiple nodes. One thing I've noticed sometimes with people who run multiple nodes is that they don't keep the channel between them balanced. That's something you could easily do with a cron job.

You just say, send the missing balance over to the other node. Then, you can have two nodes act as one node. A lot of people rolled their own custom scripts for this. Like, [inaudible 01:14:55] has this in their code base. I noticed that Bitfinex used to not do this and then they switched over to it. They said they had great results with it. I do like to have multiple nodes. I think multiple nodes is something that has a lot of advantages: you have two routing nodes, they work a little bit differently, and they have their own strengths.
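To make the "if inbound is greater than this, then do that" cron logic concrete, here is a small sketch. The two hooks (the channel list and the fee-setting callback) are hypothetical placeholders: wire them to whatever tooling you already use (lncli, bos, or your node's API), and treat the thresholds as purely illustrative.

```python
# Sketch of cron-driven dynamic fees: look at how a channel's liquidity is
# split and nudge the fee rate accordingly. Thresholds are illustrative only.

def choose_fee_ppm(local_sats: int, capacity_sats: int) -> int:
    """Simple threshold policy: charge more when outbound liquidity is scarce."""
    outbound_ratio = local_sats / capacity_sats
    if outbound_ratio < 0.2:    # mostly inbound: outbound is scarce, price it up
        return 900
    if outbound_ratio > 0.8:    # mostly outbound: encourage flow toward us
        return 50
    return 300                  # balanced: a middle-of-the-road rate

def run_once(channels, set_fee_rate):
    """channels: iterable of dicts; set_fee_rate: callable(chan_id, ppm)."""
    for chan in channels:
        ppm = choose_fee_ppm(chan["local_balance"], chan["capacity"])
        set_fee_rate(chan["chan_id"], ppm)

# Example with fake data, printing instead of touching a real node:
demo = [{"chan_id": "a", "local_balance": 100_000, "capacity": 1_000_000},
        {"chan_id": "b", "local_balance": 900_000, "capacity": 1_000_000}]
run_once(demo, lambda cid, ppm: print(f"channel {cid}: set fee to {ppm} ppm"))
```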

[01:15:12] P: Interesting. Also, just going to give you props, Alex. I don't know how you do it exactly. I feel like you must have the little time dilation device from Harry Potter, but you respond in approximately 15 seconds to any message that anyone posts in the Balance of Satoshis chat. It's quite remarkable.

[01:15:27] AB: People are pretty good about reporting issues. I think it's pretty useful if you have people testing things out. A lot of the things that I wouldn't have noticed myself, other people find first, like, if I run this command with this flag, it has an error or something. It's a community project, which is pretty cool.

[01:15:41] P: Yeah. I'm assuming the answer is absolutely, but in terms of improving the UX, or adding clarity for things that people are confused about, I'm assuming you appreciate pull requests to the Balance of Satoshis tool?

[01:15:52] AB: Yeah. Definitely, if people want to add things. Really, the tool itself is the command-line version of different libraries; I'm working on different libraries to help different use cases. If you look at the [inaudible 01:16:01] wallet, they power Bitcoin Beach, and they're using some of these libraries. They don't use the Balance of Satoshis tool; they use the different libraries that you then see on the command line. That's what I'm going for as well, to empower people to make their own stuff using these common libraries.

[01:16:16] P: Yeah. Yeah, Absolutely. Somebody came off mute. Hey, look, I know you had a question a second ago. Do you want to ask it?

[01:16:22] TJ: Yeah. I was curious if there was any observable difference in routing fees as the mempool has cleared and blocks aren't filling up. I know people FOMO'd into creating more channels, so I'm assuming that there's more competition and lower fees, and also, once again, competition with just doing an on-chain transaction. I was wondering if there was any noticeable effect.

[01:16:47] KM: I’ve observed it. It’s tough to say what the ultimate cause is, because the mempool clearing coincided roughly with the PlebNet taking off as well. Yeah, I’ve seen massive downward fee pressure over the last month, like four to six weeks in my corner of the world. Now, I don’t know if you’re more of established and things like that, you may have seen it less. It’s definitely something I’ve observed.

[01:17:10] TJ: It’s certainly crossed my mind. Of course, there are fixed minimum costs for maintaining channels in my view, because at minimum, it’s going to be a channel open and a channel close, which there’s a fee associated with that. If you’re both opening and closing a channel at one SAT per bite, that would be a minimum of 300 SATs, or just roughly, a minimum 300 SATs per million SATs of your channel.

If you’re only opening 1 million SAT channels just to cover your costs, those should be at 300 PPM, at one SAT per byte. I’d see a ton of channels that are lower than that, because they think that that they’re going to get bidirectional traffic, which in my view might be a poor assumption.

[01:17:58] AB: I think you can get bidirectional traffic, but it is good to start with that fundamental premise. My channels are set up to be long-lived; a 16-million channel can easily have a full Bitcoin worth of traffic, or even 10 Bitcoins worth of traffic, because it's been around for years and it's been used a million times.

The most basic strategy should definitely be coming at it from your cost perspective: how am I going to make my money back? On my node, I'm spending $200 a month on chain fees. I have to think, I don't want to just waste those Bitcoins. I want to make the $200 back plus, maybe, something for me. That's how I've always thought about it, even from the beginning. I set my fees at a pretty high rate compared to the rest of the network. My premise was always that this isn't going to scale as a charity, because we're going to talk about people putting in tens of millions, hundreds of millions of dollars. It's not going to work if everybody just gives away chain fees for free.

People were very critical of me at that time. They were saying, "Oh, why don't you put your fees to zero, like everybody else?" Now I see some of the bigger players, even Blockstream and ACINQ. ACINQ has fees of 30 basis points, which are higher than even mine are, and in my direction to popular destinations, they have higher fees, like 60 basis points. That's definitely something to think about: approaching it as a business, where you're going to have costs and you're going to try to get revenues.

[01:19:14] TJ: Yeah. The other thing is, I had an unexpected force close this month and it nearly wiped out all of my earnings for the month, just that one force close. I'm really struggling to find a good mental model on how to price in that risk of a force close.

[01:19:30] AB: There is the anchor channels update, which would mitigate that cost. Because instead of having a high commitment fee, you would have a minimum relay fee cost. Then, only if you need to, you increase the chain fee. In practice, if that works out like it's supposed to, you would see at least a 10X decrease in the amount that you would pay, maybe even a 100X.

[01:19:51] TJ: That’s fantastic. Yeah. Very excited about the anchor channel.

[01:19:54] AB: Anchor channels exist now; they're the default channel type. The optimizations to bring those fees down haven't been fully implemented. If you update to 0.13.1, there is an optimization now where instead of targeting a confirmation target of six, which was hardcoded, it'll target a confirmation target of 144, which is still hardcoded. That's going to save you a lot of money.

[01:20:16] TJ: With a force close on that anchor channel, is there a replace by fee option, so that it could be bumped in the future?

[01:20:22] AB: There’s no option, but that’s what it is doing. It’s doing that automatically. It’s saying, “I have a certain deadline that I need this to be confirmed within, and I’m going to start low. Then as time goes by, I’m going to keep pumping it up.”

[01:20:34] TJ: Oh, that’s fantastic.

[01:20:35] KM: Presumably, what you’re worried about, Jestopher, though, is that your remote party is force closing. Is that right?

[01:20:42] TJ: For this instance, it was my node that made the decision to force close. I haven't dove into the logs to figure out exactly what happened, but it's something that happens when you've got default settings.

[01:20:52] AB: As far as the cost go, it doesn’t matter who does the force closing. It’s the person who initiated the channel that always pays, even if it’s not your fault that you closed it.

[01:20:59] KM: You still have to pay the chain fees to claim the funds from the UTXO that’s created by the channel close transaction. Yeah, the commitment fees, you don’t have to pay, right?

[01:21:07] AB: Yeah. Although, the anchor channels also change that equation a bit, because now, it's whoever wants the channel to close the fastest that's responsible for the payment. It changes the calculus of accepting channels too. Now, when you accept a channel, the lion's share of the cost might not be on the person who initiated the channel with you. The lion's share of the cost might actually ultimately be on your side.

[01:21:28] P: Do you need to close and reopen channels to get anchor channels set up?

[01:21:31] AB: Technically, it might be possible in the future to upgrade them without that, because you still have a two-of-two. Right now, you need to open new channels if you want the anchor channels. There are also two versions of anchor channels. If you want the real version of anchor channels, you need to, yeah, open up new channels.

[01:21:47] KM: Interesting.

[01:21:48] R: Can you expand on the real version? If you had anchor channels from 0.12.1, would those be real, or an older version of the anchor channel?

[01:22:00] AB: I think 0.12.1 was on the spec version. There were two iterations. One is the proposal state of anchor channels, which was implemented in LND. Then, once there was a working implementation, there was back and forth on the mailing list and on the spec about how everybody would implement it. That's what's in the current formulation of anchor channels. I think it's probably unlikely that you even have any of the old ones.

[01:22:22] R: Yeah. I'm about half traditional channels and half anchor.

[01:22:27] AB: Yeah, if you made them in 0.12.1. Because in 0.12.1, the anchor channels were almost made default. It was only at the last minute that there were some more changes we thought should go in before making them default in 0.13.

[01:22:37] R: I assume, some of the payment issues in 0.13 have been resolved in 0.13.1?

[01:22:45] AB: Yeah. Yeah. There were problems with keysends in 0.13, and there were problems with payments that were made on Neutrino, maybe [inaudible 01:22:52]. I'm not sure. Then, even in 13.1, in the early revisions of it, there were problems with regular sends. I haven't heard of anybody reporting any issues since, and I've tested myself that the issues that were in 13.0 are resolved in 13.1. That should be all fixed up now.

[01:23:09] R: Good. I'm going to see if I can get BTCPay Server to move up to 13.1 here in the near future.

[01:23:15] P: Alex, I have a question. I love the run-lnd repo that you have, which walks you through basically setting up Bitcoin Core and LND with Tor, and then goes through all the specific lnd.conf configuration flags, for lack of a better word, that you've implemented. If someone is not yet at the level of being able, or feeling comfortable, to fully roll their own, is there a specific, not necessarily pre-built, but more pre-built implementation that you prefer in terms of security and usability? Again, this is for someone who is comfortable with the command line, but for whatever reason is not willing to run their own full node. I know Start9 has a great product, RaspiBlitz, Umbrel. Do you have a preferred implementation?

[01:23:55] AB: I’ve heard good things about RaspiBlitz. Also, the guide does include instructions for Neutrino, if you want to skip this step where you compile Bitcoin D, I’m going to skip this block sync. There’s instructions on how to use Neutrino, which I think is, can be good for a node, where you’re sending. You’re not receiving money. There’s more limited risks if you’re not running a routing node. Or if you have your own neutrino source that you can trust.

Yeah. I also think if you are putting a bunch of capital on there, and you're trying to run a serious node, it might just be worth investing some time in learning how to run bitcoind properly. Because you might run into a situation where you need to fix things, and it's going to be ideal if you know how things are working.

[01:24:34] KM: Yeah. I tend to view a lot of the node products (and obviously, I spend a lot of my day trying to improve them) not as serious router tools. I see them more as tools for individual users who want to get up and running quickly, in as trust-minimized a way as possible. I think they really do accomplish that well. I don't think that you're going to be able to be a serious routing node in two years' time without being able to roll your own tools, or do a lot of your own systems administration, at least a little bit.

There can be certain things that automate some of the services, getting them up and winding them down. I think being a Lightning routing node is this niche skill that requires technical know-how, as well as some financial acumen.

[01:25:20] AB: Yeah. I don't even think that the barrier is all that high. And I'm certainly not the world's best sysadmin. I think it's more like getting a handle on how to run commands. Sometimes, I look at people who are putting a bunch of money on one of these nodes that isn't really meant for it. I think you'd really be well-served if you just learned the basics about how to use a shell and how to set up things properly. Because it's not that hard if you just spend some time on it.

[01:25:44] R: Alex, are there any soft forks that you're particularly interested in? ANYPREVOUT, or CTV? Are there any swaps that would be enabled? I remember attending your original workshop, or whatever, where – or actually, it was just at the Bitcoin Devs meetup, where you talked about tit-for-tat swaps, HTLC dash swaps, PoW swaps. Is there anything new that would be, I guess, easier to do?

[01:26:07] AB: Of course, Schnorr. I'm excited about doing key aggregation. That'll be amazing. It looks like, knock on wood, that's a soft fork that will be activating. Beyond that, I don't know if people are talking about it so much, and I don't know if any of the existing soft fork proposals cover this, but I don't love the way that the current anchor channels work, or the current way that channel resolution happens, where you have to increase your fee.

People have written papers on this in hyperbolic terms, like the flood and loot paper. We have mitigations for that. I'd like a soft fork targeted at that, which gives us very high levels of predictability about what's going to happen: if I have this unsigned transaction, I'm not going to have to guess the fee correctly, I'm not going to have to compete with people on the fee, and it's always going to play out the way that I think it's going to play out. I'd like it if we could formulate a soft fork like that. All of the proposals circle around that issue.

I’m hoping, what happens is they coalesce those ideas. Just how taproot happened. The idea of having the mass the mass functionality was originally proposed as a separate soft fork, and people kicked around that idea for a long time. It finally coalesced over many years into the taproot, so I think the same thing. It would be great for channels and for any swap, any off-chain protocol. You need better finality than just, I’m going to guess a fee that’s going to work.

[01:27:24] KM: Can you explain what soft fork would actually – I'm struggling to figure out what consensus changes would be needed to be able to predict fees better.

[01:27:32] AB: You would want something that would make the fees irrelevant, basically. The fee would just be about the timing of when things would be executed. If you use covenants, some covenant soft fork, as an example, you would say, I know that even when this confirms, it can only go to these people. It's not a race, because the order of events of how things can play out is already set in stone, because we pre-committed to it. How that actually happens, there are a lot of different ways. It doesn't necessarily need to be covenants. That's a lot of complexity also, that I don't really spend a ton of time on, but it's more of what I want to see.

[01:28:06] PARTICIPANT: Hey, guys. I heard you talking about fees earlier. Obviously, we know the mempool is a ghost town right now. As a miner, I came up here to blame you guys for taking away all my fees. In all seriousness, I actually was wondering if you guys think that PlebNet and the rise of the Lightning Network is having a demonstrable, measurable effect on main chain fees, or if it's just a function of the lack of supply, or demand, that's out there for actual on-chain transactions? Because spot buying also seems non-existent right now. Also, sorry, it might have something to do with the fact that blockchain.com finally implemented SegWit.

[01:28:39] KM: I’m not sure that can be answered empirically.

[01:28:42] AB: If you’re talking about chain fees, since I started running my routing nodes, I spend way more on chain fees, than I ever have spent in the past, because there’s just so much activity happening that I’ve gone from occasionally, I’m going to spend a chain fee to try out some new service or something to chain fees are now part of my regular operating expenses. If I need to pay a chain fee, I’m going to have to pay to get in no matter what.

It’s a big change to go from paying a dollar a month, maybe to $200 a month. I think, that more use, it’s going to have that effect. If people make more services that use these micropayments, people are going to have more reason to open up channels, and we’re going to have more needs to move liquidity around and we’re going to see chain fees increased. I don’t think, also in the current mode that chain fees are materially changed by the traffic that the Lightning Network. I was looking at the submarine swaps. There’s a page that lists out every loop that happened. There’s a lot of loops. It’s over 10,000. There’s also 50,000 channels. Every block is having 2,000 transactions. I don’t think it’s making a big difference either way.

[01:29:42] TJ: Is that a public page, Alex?

[01:29:44] AB: Yeah. I think, it’s loop.lightningporter.net, or something like that. It just lists every single submarine swap that it could detect just by looking at the on-chain signature of those swaps.

[01:29:55] TJ: Fantastic. You really don't know what's happening on the Lightning Network. Even an individual routing node wouldn't be able to speak for the whole thing. Although, personally, I see plenty of routing activity and I'm looking forward to seeing more. If we're going to bring 6 million people onto the Lightning Network in the next couple of months, or the next year, it still needs to grow pretty significantly.

We’ve got lots of work ahead to build out all the infrastructure and get the tools ready to get folks to allocate their resources, or their SATs in a smart way, so that they’re efficient. Yeah, I think at this point, it’s a lot of trial and error. With all the tools coming out of the folks on stage, I’m very excited about the future of it.

[01:30:40] P: I have a quick question. There's a, I forget what the name of the website is, but, gosh, I can look it up. It's basically, it's TX something. TX insights or something like that. It basically scans the blockchain for Lightning channel opens that are public to determine the total liquidity in the Lightning Network. Isn't that also possible as of right now, before taproot [inaudible 01:30:59] implemented, to do for private channels? Couldn't you scan the network in the same way and get an accurate measure of the total liquidity that's locked up in the Lightning Network?

[01:31:07] KM: You’d be making some assumptions about some of the script types. You can try to run that analysis, but as of right now, channel types are paid a witness script hash, usually the [inaudible 01:31:17] 32 version. You can try to make the assumption that like, okay, any payment to a witness script hash has the possibility of being to the Lightning Network. Any advanced multisig setup is going to look the same, at least until it’s closed.

[01:31:31] AB: Yeah. People have run the analysis on closed channels to get an upper bound, and that's been done before. I think the analysis was actually that there are very few such channels comparatively, or at least few of them that are closing.

[01:31:42] P: Apologies, my Twitter app crashed. Can you just repeat the last sentence that you said?

[01:31:48] AB: The analysis of the two-of-two closes on-chain, or two-of-two spends on-chain, revealed that far and away the most common two-of-two closes are publicly identifiable channels, channels where you could see the outpoint listed in the graph. That's saying that not that many people are using a native SegWit two-of-two, and it's also saying that not that many people are using private channels.

[01:32:11] P: Got it.

[01:32:12] KM: I guess the question, because in the anonymity set you don't know that it's a two-of-two until the witness is revealed, the real question is what the broader use of pay-to-witness-script-hash is in general, separately from Lightning channels. Then, you can just count the number of outputs there and try to make heads or tails of it. Of course, with taproot, assuming things are cooperative, all of this goes away.

[01:32:38] P: Yeah, absolutely.

[01:32:39] AB: Yeah. You're really talking about the Schnorr key aggregation, because instead of having two-of-two keys, you'll just have one key. You'll still save money and it'll be more private. I think that will definitely help. Although, there will be another impact, which is how many people are using taproot to make spends, period. It could be that if Lightning is the main first adopter, you can just add up all of the taproot outputs, and now you almost have less privacy, because you can use that as an upper bound on how much money was sent to taproot outputs.

[01:33:08] R: Are taproot enabled channels something people are playing with in signet yet? Or is that still too early?

[01:33:16] AB: I don’t think any of those kinds of things exist. I haven’t even seen very many taproot demos, period.

[01:33:22] KM: There are some taproot outputs, I think, in signet, but they’re very few. In order to get them into a channel relationship, we would probably need at least a proposal for a spec change, because it meaningfully impacts the structure of the transactions themselves. Pretty much all of BOLT 3. I don’t know if the revocation mechanic will work the same as it does. There’s a lot of changes to be made to the Lightning protocol itself before it’s actually usable, even in a demo sense.

[01:33:52] P: Okay. There’s a question. What are each of you the most interested in on the timescale of let’s say, 30 to 60 days? What development that is related to the Lightning Network, or something that you are personally working on is the most exciting to you in terms of helping to improve the Lightning Network? Let’s start with, that would be Severin.

[01:34:10] SA: I’m currently working on a tool that tries to estimate the health of a node. The idea is still fluid, so it changes a lot. To really say, this is a good node to connect to, or this is a bad node to connect to. In the future, this might uptime, or whatever, but this is the general direction. but I’m still digging into data, how I can do that and stuff like this.

[01:34:39] P: Got it. All right. Hello, Jessica. Your turn. Speak.

[01:34:43] R: Personally, I’m interested in trying to automate the rebalancing process, because I’ve found success with active rebalancing, but it requires far too much labor at the moment. I’m actively working on trying to improve that. Also, potentially analyzing some of the HTLC event data to maybe see if I’m missing opportunities due to fee structure.

[01:35:06] P: Yeah. I got to say, that’s the thing for me right now that is the most interesting. It’s being able to determine which channels are receiving the most failed payments, so that I can change max HTLC sizes, close channels out, things like that. HTLC event stream. Alex, what about you?

[01:35:22] AB: I think the most dynamic thing in the 30-day, 60-day timeframe is the groups, like group channel opening. I want to explore that myself; I've been thinking a lot about it. I think it's a new use case for Lightning, because it isn't so focused on "I want to make payments cheaper." It isn't so focused on "I want to receive payments" or "I want to make a specific app." It's more a social experience. It's more, "I want to take part in this peer-to-peer network." I think it's been underserved because we've been focusing so much on the nuts and bolts of making things efficient and making things work for businesses that we haven't worked so much on the peer-to-peer side of things.

[01:35:55] P: Yeah. Oh, man. This is one of my other current passions is that, we’ve all been building out PlebNet for that reason. I’ve been working with Lamar who was in the audience. I’m not sure if he still is, who’s basically – he’s doing that for a – he runs the Black Bitcoin Billionaires Club on Clubhouse, and they’ve been building out a community of Lightning nodes that are in their community. I think that model, these small communities, groups of friends, large communities all getting onboarded and on-boarding themselves and each other to Lightning Network is going to be the future of Lightning adoption.

There aren’t any tools that I’m aware of that, what you just described in terms of tools that facilitate opening trustless balance channels among groups, but also that allow a group of people to strategically determine the best channels to open in order to both strengthen the routing within a group. Then also, to benefit the larger Lightning Network. I want to see something where I can take the output of cheese robot, which is an incredible tool, and it’s the background for all the stuff that’s happening within telegram, because it allows us to gamify and really have fun with the size of the graph.

I want to be able to take something like that and then plug that into a third-party tool, or a website, or just something I’m running that they cloned down from GitHub, and then get a dynamic readout of metrics that are for that entire group, rather than just me as an individual node. It’s going to be really powerful.

[01:37:18] KM: Yeah. I think that segues into what I'm working on right now, which is primarily doing more in-depth yield analysis on the different channels. Because Lightning channels, despite the fact that they give you revenues and stuff like that, are not fixed-income instruments. Your liquidity is being continuously reallocated from your destination to your source at any given moment. Understanding the actual time-based ROI of having channel allocations in any given place is, I think, going to be really important for making good decisions, especially if you're capital-bound, right?

If you can continuously add capital, maybe this doesn't matter as much, but you always want to be closing your least profitable channels to do your forward experiments with, as opposed to your most profitable ones. There are naive ways to understand that. I think, actually, human intuition, as long as your data set is small, is a pretty good approximation here. But especially as your channel counts grow and your payment flows are growing, you want tools to be able to say definitively, on a per-unit-time basis, this is your least profitable channel, close it, and then experiment with moving the capital there elsewhere.
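To make the per-unit-time yield idea concrete, here is a minimal sketch: for each channel, divide fee revenue over a window by the capital committed and the number of days, then sort. The input records are placeholders; in practice you would build them from your node's forwarding history and channel list.

```python
# Sketch: rank channels by daily yield on committed capital (least first).

from datetime import timedelta

def rank_channels_by_yield(channels, window: timedelta):
    """channels: list of dicts with chan_id, local_balance_sats, fees_earned_sats."""
    days = window.total_seconds() / 86_400
    scored = []
    for chan in channels:
        capital = max(chan["local_balance_sats"], 1)          # avoid divide-by-zero
        daily_yield_ppm = chan["fees_earned_sats"] / capital / days * 1_000_000
        scored.append((daily_yield_ppm, chan["chan_id"]))
    return sorted(scored)   # least profitable first: candidates for closing

demo = [
    {"chan_id": "alpha", "local_balance_sats": 2_000_000, "fees_earned_sats": 40},
    {"chan_id": "beta",  "local_balance_sats":   500_000, "fees_earned_sats": 900},
    {"chan_id": "gamma", "local_balance_sats": 5_000_000, "fees_earned_sats": 0},
]
for yield_ppm, chan_id in rank_channels_by_yield(demo, timedelta(days=30)):
    print(f"{chan_id}: {yield_ppm:.2f} ppm of committed capital per day")
```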

[01:38:30] P: Yes. Okay. That's a huge thing. Have you played around with the Python scripts from Gridflare? I'm sure you probably have more nuanced tools you're using yourself, but I found the Python scripts that Gridflare has built out incredibly helpful in that regard.

[01:38:41] KM: Yeah, I've played with them. I haven't really done it enough to have an assessment of whether or not they've materially helped me. I honestly haven't given all of them a fair shake. Not that I have anything against them. I just haven't had time. I've been working on some of my own stuff.

[01:38:55] P: Yeah, absolutely. I only mention them because, and of course, again, centrality is not the only factor. I think the type of analysis you're talking about, the more nuanced analysis, that's really the goal. For me to be able to basically run an analysis on all of my channels and then have it spit back: this channel has never been used, this channel is only routing a small amount of payments. Then basically have the metrics right next to it, like, here is how your centrality score will be affected if you remove this channel. Again, that's only one factor, but that strategic analysis, I think, is sorely needed and sounds amazing. If you want any beta testers, you know where to ask.

[01:39:29] KM: Yeah. One thing, for the benefit of the audience: what Phillip's talking about is that there's a person in the PlebNet community named Gridflare who put together some scripts that did an analysis of the channel graph to figure out what some of the best nodes to connect to were, to improve your betweenness, or centrality, score with respect to the graph topology. It's definitely an interesting thing. It's well-studied with respect to graph theory. I think one of the observations that was made by the Lightning Labs team, in the form of the Lightning Pool product, is that the graph is devoid of economic information.

You don’t really have a great idea of what the demand for payment flows are, just by looking at the channel graph. You set yourself up for having – get some of these scripts that improve your centrality, set you up from a topology perspective to have the opportunity to route certain payments, but it doesn’t necessarily mean that any of the payment demands for those routes exist. These tools all have to be used in conjunction with one another. Otherwise, you’re not going to get a complete view of what the right approach is.

[01:40:34] P: Beautifully put. Jestopher, what, on a 30- to 60-day timescale, are you most excited about?

[01:40:39] TJ: Yeah, this is probably going to be the last thing that I’ll say. I got to run after this. Thank you so much Bitcoin Magazine and P for having me on. I think as far as developments coming up, I’m most excited about the bottom-up growth, things like PlebNet popping up, because it’s been a real gift and excitement, as all these people are joining the Lightning Network. One thing that I’ve noticed is that it is a one-way trip. It’s like a second orange pill that you take to be on the Lightning Network, because once you start, you don’t really want to stop, because the incentives are aligned. It’s exciting. It’s a social network and it’s growing rapidly.

I think, I’m most excited about people discovering this technology as we are undercutting all the other payment rails out there. A stat that I like to consider is that at 300 PPM, you’re underpricing Visa by about 43 times. As people discover this technology, I think they’ll see some real opportunities and people will be inspired to build on it.

[01:41:41] P: Yeah. Thank you so much for joining. I’m going to be doing these twice a week, generally. I think we’re going to have another one on Thursday at the same time. Please feel free to jump back in. This has been awesome.

[01:41:50] TJ: Thanks so much.

[01:41:51] P: The last thing I'll say is, one of the things that has been so interesting to me is that I've had a Lightning node for a while, and it wasn't until we started building up PlebNet, and a bunch of us just decided to learn together, that it became really fun. Not only are the incentives really interesting, in that as you benefit yourself, you are benefiting the larger network.

Then, when you’re part of this community of friends, you get these group incentives, where previously, my experience in the Lightning Network has been that people discover these interesting ways to extract more economic value and they keep it to themselves, because that provides – there’s an edge that you get by doing that. When you’re in these communities of friends and it’s super fun you, you tend to be more willing to share some of this information, because you benefit. As you share things, like we’re seeing in a PlebNet advanced channel, you get input from people that have different perspectives than you do, and you learn more and more.

I think, that’s all just built in top on top of the incentive structures that are put in place by Bitcoin as a layer one, and then Lightning as a layer two. That to me is the most compelling thing about Bitcoin and Lightning. Then lastly, I just want to say, for people that are feeling like, “Oh, man. This seems like it’d be super boring.” I got to say, managing my routing node is more compelling and more fun than any real-time strategy game I’ve played, because there’s real money involved. There’s real SATs. The decisions you make affect your income in this way, and it’s very fun. It’s addicting.

Anything else anybody else wants to say, or rep before I close out the room?

[01:43:18] TJ: Just thanks for having us.

[01:43:20] KM: Yeah. Also from my side, just thanks for having us. It was a really good conversation we had here, with some really good people. I'm looking forward to continuing to be part of this community, to talk to a lot of people to improve the Lightning Network and the experience that we, and all the people, have of it.

[01:43:36] P: Absolutely.
