Limit server resources for the tracker #825
5 comments · 11 replies
-
1. MEM: Clean peerless torrents. This is already implemented to save memory. Peers that don't announce within 2 minutes are removed from the torrent entry in the torrent repository. When the last peer is removed from a torrent (a "peerless" torrent), the torrent itself is also removed from the repository.
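As a rough illustration, the cleanup above might look like the following sketch. All names here (`Repository`, `TorrentEntry`, the 20-byte aliases) are simplified stand-ins for illustration, not the tracker's real types:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical, simplified stand-ins for the tracker's real types.
type InfoHash = [u8; 20];
type PeerId = [u8; 20];

struct TorrentEntry {
    // Last announce time per peer.
    peers: HashMap<PeerId, Instant>,
}

struct Repository {
    torrents: HashMap<InfoHash, TorrentEntry>,
}

impl Repository {
    /// Drop peers that have not announced within `timeout`, then drop
    /// torrents left with no peers ("peerless" torrents).
    fn prune(&mut self, now: Instant, timeout: Duration) {
        for entry in self.torrents.values_mut() {
            entry.peers.retain(|_, last| now.duration_since(*last) < timeout);
        }
        self.torrents.retain(|_, entry| !entry.peers.is_empty());
    }
}
```

The two-step shape matters: removing stale peers first is what creates the peerless torrents that the second pass can then drop.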
-
2. MEM LIMIT: Set a memory limit and remove torrents that have not been updated the longest. We have already tried to implement this, but it was not merged because it was mixed with other features that were not ready to merge, and because it was not implemented for all the torrent repository implementations. The main idea is to keep the last "update" datetime for each torrent and estimate the memory used by the torrent repository. When we reach the limit, we remove the torrents that have not been updated the longest. Pros:
Cons:
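A minimal sketch of this eviction idea, assuming a crude per-entry memory estimate. The `TORRENT_COST`/`PEER_COST` constants and all type names are invented for illustration; real entries would need actual size accounting:

```rust
use std::collections::HashMap;

// Hypothetical, simplified types; not the tracker's real API.
type InfoHash = [u8; 20];

struct TorrentEntry {
    last_update: u64, // seconds since some epoch; refreshed on every announce
    peer_count: usize,
}

const TORRENT_COST: usize = 256; // assumed bytes per torrent entry
const PEER_COST: usize = 128;    // assumed bytes per stored peer

struct Repository {
    torrents: HashMap<InfoHash, TorrentEntry>,
}

impl Repository {
    /// Rough estimate of repository memory from entry and peer counts.
    fn estimated_memory(&self) -> usize {
        self.torrents
            .values()
            .map(|t| TORRENT_COST + t.peer_count * PEER_COST)
            .sum()
    }

    /// While over `limit`, evict the torrent whose `last_update` is oldest.
    fn enforce_limit(&mut self, limit: usize) {
        while self.estimated_memory() > limit {
            let oldest = self
                .torrents
                .iter()
                .min_by_key(|(_, t)| t.last_update)
                .map(|(ih, _)| *ih);
            match oldest {
                Some(ih) => { self.torrents.remove(&ih); }
                None => break,
            }
        }
    }
}
```

A production version would keep an ordered index (e.g. a BTreeMap keyed by `last_update`) instead of scanning for the minimum on every eviction.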
-
3. MEM LIMIT: Set a memory limit and move less active torrents to disk. This would require counting the number of requests per torrent. Popular torrents remain in memory, and torrents with few requests are moved to disk. Pros:
Cons:
This would work if we have a Pareto distribution or similar; otherwise we are just moving torrents from memory to disk.
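A sketch of the hot/cold split described above, with `DiskStore` standing in for whatever persistent backend would actually be used. All names here are hypothetical:

```rust
use std::collections::HashMap;

// Hypothetical types for illustration only.
type InfoHash = [u8; 20];

#[derive(Clone)]
struct TorrentEntry {
    request_count: u64, // incremented per announce/scrape request
}

#[derive(Default)]
struct DiskStore {
    // Stand-in for a persistent backend (e.g. a database table).
    torrents: HashMap<InfoHash, TorrentEntry>,
}

struct Repository {
    hot: HashMap<InfoHash, TorrentEntry>, // in-memory, popular torrents
    cold: DiskStore,                      // on-disk, rarely requested torrents
}

impl Repository {
    /// Keep at most `max_hot` torrents in memory; move the least
    /// requested ones to the disk store.
    fn offload_cold(&mut self, max_hot: usize) {
        while self.hot.len() > max_hot {
            let coldest = self
                .hot
                .iter()
                .min_by_key(|(_, t)| t.request_count)
                .map(|(ih, _)| *ih);
            match coldest {
                Some(ih) => {
                    if let Some(entry) = self.hot.remove(&ih) {
                        self.cold.torrents.insert(ih, entry);
                    }
                }
                None => break,
            }
        }
    }
}
```

The Pareto caveat above shows up directly here: the scheme only pays off if lookups overwhelmingly hit the `hot` map, so that the slower `cold` path is rare.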
-
Torrents and peers can be stored using very little memory. I’ve been told that the explodie.org tracker tends to use around 1.5 GB while tracking 4 million torrents and 10 million peers and serving 120k requests per second. For this reason, I think resource limiting is a bit of a dead end, and a good first step would instead be to use a memory profiler to see where the memory is consumed and try to reduce it. If you do implement memory-limit functionality or similar, be careful that it doesn't increase memory use in practice: overhead from extra metadata, duplicated data being shuffled around, and so on risks pushing peak usage into OOM territory.
-
Also, memory use should not increase over time if torrent/peer counts are stable. Have you checked for memory leaks?
-
We are running a tracker demo at:
We are using a DigitalOcean droplet:
We recently had to increase its size because the server was receiving more and more requests.
Currently, the server is processing approximately 500 req/sec, which is a very low load compared with other trackers, such as https://www.gbitt.info/.
The tracker was restarted on the 29th of April. The current stats:
If the tracker continues running, we will have problems again because it will consume more resources. In the past, we tried to limit memory consumption:
#789
However, we did not finish that implementation. Limiting resources is a tricky/complex feature because it can degrade the service (response time or accuracy).
We are considering different strategies for limiting resource consumption. Resources considered are:
We would like to know:
Does it make sense?
If we don't limit resources, the tracker will be restarted periodically depending on the server size, which means downtime anyway. A bigger machine can only prolong the period between restarts.
On the other hand, it seems it would make sense to allow people to run smaller servers depending on their budgets. Some people may want to contribute to the community by running a small $6 tracker server. We don't know if this makes sense since that would make the tracker less efficient, which is the main advantage compared to DHT. Many smaller trackers could lead to an efficiency similar to DHT. Maybe only big tracker servers make sense.
We will add some ideas we have been discussing below. For example, limiting the memory consumption of the torrent repository: we add an "update" datetime to each torrent, which is refreshed every time an `announce` request is received for that torrent. When we reach the limit, we remove the torrents that have not been updated the longest.
Type of solutions
There are two main solutions to this problem:
This discussion is only about the second type.
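The announce-path bookkeeping described above (refresh a per-torrent "update" timestamp on every announce, then evict the torrent updated the longest ago) could be sketched like this, again with invented, simplified types:

```rust
use std::collections::HashMap;
use std::time::{Duration, Instant};

// Hypothetical, simplified types for illustration only.
type InfoHash = [u8; 20];

struct Repository {
    last_update: HashMap<InfoHash, Instant>,
}

impl Repository {
    /// Called for every announce request; refreshes the torrent's
    /// "update" timestamp.
    fn on_announce(&mut self, info_hash: InfoHash, now: Instant) {
        self.last_update.insert(info_hash, now);
    }

    /// The torrent that has not been updated for the longest time,
    /// i.e. the first eviction candidate once the limit is reached.
    fn eviction_candidate(&self) -> Option<InfoHash> {
        self.last_update
            .iter()
            .min_by_key(|(_, t)| **t)
            .map(|(ih, _)| *ih)
    }
}
```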
cc @da2ce7 @Power2All @greatest-ape