Performance improvement: decrease announce request response time #330
-
In my honest opinion, I think a cache for a torrent tracker shouldn't be happening.
-
Today @WarmBeer and I were discussing a feature to decrease response time.
Introduction
First, I will explain the current process for handling an `announce` request received by the HTTP tracker. The HTTP job handles an incoming HTTP request, either on the main job thread or on a newly spawned thread.
This is the controller (called a handler in Axum):
I've removed some parts I do not consider relevant to this explanation.
The handler invokes the app service:
And the app service invokes the domain tracker service:
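Since the snippets are elided here, a minimal sketch of that call chain may help. Everything below is hypothetical: `TrackerService`, `AppService`, and `Peer` are illustrative names, not the real Torrust types, and a plain `RwLock`-guarded `BTreeMap` stands in for the real repository:

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};

#[derive(Clone, Debug, PartialEq)]
pub struct Peer {
    pub id: u32,
}

// Domain tracker service: owns the in-memory torrent repository.
pub struct TrackerService {
    pub torrents: RwLock<BTreeMap<String, Vec<Peer>>>,
}

impl TrackerService {
    // Handles an announce: stores the peer and returns the other peers.
    pub fn announce(&self, info_hash: &str, peer: Peer) -> Vec<Peer> {
        let mut torrents = self.torrents.write().unwrap();
        let peers = torrents.entry(info_hash.to_string()).or_default();
        // The announcing peer is not included in the returned list.
        let others: Vec<Peer> =
            peers.iter().filter(|p| p.id != peer.id).cloned().collect();
        peers.retain(|p| p.id != peer.id); // upsert the announcing peer
        peers.push(peer);
        others
    }
}

// App service: the thin layer the HTTP handler calls into.
pub struct AppService {
    pub tracker: Arc<TrackerService>,
}

impl AppService {
    pub fn handle_announce(&self, info_hash: &str, peer: Peer) -> Vec<Peer> {
        self.tracker.announce(info_hash, peer)
    }
}

fn main() {
    // The Axum handler would deserialize the request and call the app service.
    let app = AppService {
        tracker: Arc::new(TrackerService { torrents: RwLock::new(BTreeMap::new()) }),
    };
    let first = app.handle_announce("abc", Peer { id: 1 });
    let second = app.handle_announce("abc", Peer { id: 2 });
    assert!(first.is_empty());
    assert_eq!(second, vec![Peer { id: 1 }]);
    println!("announce sketch ok");
}
```

Note that the whole announce is done under a single write lock here, which is exactly the sequential behaviour discussed below.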
The HTTP tracker protocol performs two operations with one request: a command and a query.
The `announce` response contains the torrent statistics and the list of peers. The peer that makes the announce request is not included in the list, but if the peer has just announced that it has completed downloading the torrent, then the `complete` attribute would be increased by one.
Alternative solution
It seems we could return a response immediately to the client even before updating the peer list because the peer is not included in the result.
When we receive the request, we could:
1. Build the response from the current peer list and stats, excluding the requesting peer.
2. Return the response to the client.
3. Update the peer list afterwards.
In fact, updating the peer list and returning the response data could be executed in parallel if we use a different thread.
The main problem with implementing this solution is that the response includes the `complete` statistic, which may change if the request event is `complete`. That problem has two solutions:
1. Check whether the `announce` request is for a `complete` event. In that case, we can process the request as usual. For other cases, we go with the new implementation.
2. Define that `complete` does not include the peer making the request. From my point of view, that solution would be less surprising. I would not expect the counter to change if I'm the first peer announcing that torrent. I would expect to get the list of completed peers who are not myself. From my previous research (ChatGPT :-)), it seems we should not count the peer in the seeders counter. So we could skip this problem entirely, just returning the stats we get, not including the peer making the request.
So basically, the proposal is: "query/command segregation".
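As a rough illustration of that segregation (all names hypothetical, not the real Torrust types): the query half snapshots the response data, never counting the announcing peer in `complete`, and the command half applies the update afterwards:

```rust
use std::collections::BTreeMap;

#[derive(Default, Clone, Debug, PartialEq)]
pub struct TorrentSnapshot {
    pub complete: u32,   // seeders, excluding the announcing peer
    pub peers: Vec<u32>, // peer ids, excluding the announcing peer
}

#[derive(Default)]
pub struct TorrentEntry {
    pub complete: u32,
    pub peers: Vec<u32>,
}

// Query/command segregation: answer from the current state first, then
// apply the update. The response never depends on the command half.
pub fn announce(
    torrents: &mut BTreeMap<String, TorrentEntry>,
    info_hash: &str,
    peer_id: u32,
    event_is_completed: bool,
) -> TorrentSnapshot {
    let entry = torrents.entry(info_hash.to_string()).or_default();

    // QUERY: build the response, excluding the announcing peer.
    let snapshot = TorrentSnapshot {
        complete: entry.complete,
        peers: entry.peers.iter().copied().filter(|&p| p != peer_id).collect(),
    };

    // COMMAND: update state; this part could run after the response is sent.
    if event_is_completed {
        entry.complete += 1;
    }
    if !entry.peers.contains(&peer_id) {
        entry.peers.push(peer_id);
    }

    snapshot
}

fn main() {
    let mut torrents = BTreeMap::new();
    // First seeder announces "completed": its own event is not reflected
    // in its response, so the counter it sees is still 0.
    let first = announce(&mut torrents, "abc", 1, true);
    assert_eq!(first.complete, 0);
    // The next peer sees the updated counter and the first peer.
    let second = announce(&mut torrents, "abc", 2, false);
    assert_eq!(second.complete, 1);
    assert_eq!(second.peers, vec![1]);
    println!("query/command sketch ok");
}
```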
Implementation proposal
I think the simplest solution would be changing the service to:
I think we need to spawn a new child task because we do not control the web framework, and I do not know how we could continue with the update after sending the response. Maybe there is a way to do that with Axum.
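A sketch of that spawned-child idea, using a plain OS thread for the "command" half (in the real app this would presumably be a tokio task; all names here are hypothetical):

```rust
use std::collections::BTreeMap;
use std::sync::{Arc, RwLock};
use std::thread;

type Torrents = Arc<RwLock<BTreeMap<String, Vec<u32>>>>;

// Build the response under a read lock, then spawn a child thread to do the
// write, so the caller can send the response without waiting for the update.
pub fn announce(
    torrents: Torrents,
    info_hash: String,
    peer_id: u32,
) -> (Vec<u32>, thread::JoinHandle<()>) {
    // QUERY under a read lock: the announcing peer is excluded anyway.
    let response: Vec<u32> = torrents
        .read()
        .unwrap()
        .get(&info_hash)
        .map(|peers| peers.iter().copied().filter(|&p| p != peer_id).collect())
        .unwrap_or_default();

    // COMMAND in a child thread: the response no longer depends on it.
    let handle = thread::spawn(move || {
        let mut map = torrents.write().unwrap();
        map.entry(info_hash).or_default().push(peer_id);
    });

    (response, handle)
}

fn main() {
    let torrents: Torrents = Arc::new(RwLock::new(BTreeMap::new()));
    let (response, handle) = announce(Arc::clone(&torrents), "abc".into(), 1);
    assert!(response.is_empty()); // we can reply before the update is visible
    handle.join().unwrap(); // the update still happens
    assert_eq!(torrents.read().unwrap()["abc"], vec![1]);
    println!("spawned update sketch ok");
}
```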
We could even improve the solution more if we introduce the second child thread from the beginning: the main thread handles the "query", and the child thread handles the "command". But I think that would not decrease the response time, because reads and writes have the same priority, I guess. We are using RwLocks. Read more about how the app uses locks here.
Clarifications
This theoretically would improve only the response time, but it's not clear that you can get that benefit with the current app architecture, because the torrent data lives in a `BTreeMap` behind locks, so all accesses are sequential.
Conclusions
It seems that to implement this solution, we should also split the models, having a "write" model and a "read" model.
So we could have a queue of pending updates and a queue of pending readings.
We were also talking about a possible implementation for that. We could have a "write" `BTreeMap` and a "read" `BTreeMap`. When you write on the write model, we trigger an event that updates the read model (cache). We would need at least two read models: the active one that requests are reading from, and a new one being built.
Every N seconds, we can switch the new cache to read-only and start building the new cache. The problem is the peer list would not be fresh. Clients can miss some peers. So they would get faster responses with fewer peers. The longer you keep the cache, the fewer new peers you get.
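A minimal sketch of that double-buffer idea (hypothetical names; the periodic rotation is invoked manually here instead of every N seconds):

```rust
use std::collections::BTreeMap;

// Writes always go to the write model; reads are served from a snapshot
// that is republished periodically, so they can be slightly stale.
pub struct DoubleBufferedTorrents {
    write_model: BTreeMap<String, Vec<u32>>,
    read_model: BTreeMap<String, Vec<u32>>,
}

impl DoubleBufferedTorrents {
    pub fn new() -> Self {
        Self { write_model: BTreeMap::new(), read_model: BTreeMap::new() }
    }

    // COMMAND: update the write model (always fresh).
    pub fn announce(&mut self, info_hash: &str, peer_id: u32) {
        self.write_model.entry(info_hash.to_string()).or_default().push(peer_id);
    }

    // QUERY: served from the (possibly stale) read model.
    pub fn peers(&self, info_hash: &str) -> &[u32] {
        self.read_model.get(info_hash).map(Vec::as_slice).unwrap_or(&[])
    }

    // Called every N seconds: publish a fresh snapshot for readers.
    pub fn rotate(&mut self) {
        self.read_model = self.write_model.clone();
    }
}

fn main() {
    let mut torrents = DoubleBufferedTorrents::new();
    torrents.announce("abc", 1);
    // Before rotation, readers miss the new peer (stale cache)...
    assert!(torrents.peers("abc").is_empty());
    torrents.rotate();
    // ...after rotation, they see it.
    assert_eq!(torrents.peers("abc").to_vec(), vec![1]);
    println!("double-buffer sketch ok");
}
```

This is exactly the trade-off described above: responses get faster, but clients can miss peers announced since the last rotation.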
Does it make sense?
cc @WarmBeer @da2ce7 @Power2All?