Some potential bugs and improvements #17
I think I've already fixed the first two problems in the dev branch https://github.com/maypok86/otter/tree/dev
P.S. I tried debugging and the `if` is really not needed.
Awesome work! :)
@1a1a11a Speed is otter's best feature by a huge margin so far, so for me the higher priority for now is
Makes sense!
Although I doubt such DBs would have traffic similar to the DS1 or S3 traces...
I think I've addressed all the comments. It is better to discuss S3-FIFO in another issue.
About the expiration policy, I might have missed something. It looks like you would need to re-distribute all items (under a lock) every time the timing wheel turns. Is that correct?
Eh, I got too caught up in preparing the first otter release. A standard timer wheel implementation would require this, but there are variants that guarantee amortized O(1) time complexity. I plan to use a hashed hierarchical timing wheel. Original paper
Hi @maypok86, thank you for sharing these blogs! I am aware of hashed and hierarchical timing wheels, but they are two different timing wheels (the hashed timing wheel and the hierarchical timing wheel) with different tradeoffs. What do you mean by a hashed hierarchical timing wheel? My guess is that most people go for the hierarchical timing wheel because the per-tick time complexity of the hashed timing wheel is too high. But as I mentioned earlier, the problem with the hierarchical timing wheel is high tail latency: when an upper-level timing wheel moves by one tick, it requires locking. For example, when the day-level wheel ticks, all the entries in that slot have to be re-distributed to the lower wheels while the lock is held.
Oh @1a1a11a, I apologize for the confusion. I was of course referring to the hierarchical timer wheel; "hashed" just refers to the fact that this wheel doesn't need sorting and uses hashing instead of comparisons. The day wheel expiration problem is something to think about. It sounds like something really nasty, especially since I see a TTL of 1 day quite often in my work. 😞
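To make the "hashing instead of comparisons" point concrete, here is a minimal Go sketch of how a hierarchical wheel can place a timer by pure integer arithmetic on its deadline, with no sorted structure and no comparisons against other timers. This is not otter's code; the wheel sizes, tick resolution, and function names are made-up illustrative values.

```go
package main

import (
	"fmt"
	"time"
)

// Illustrative wheel sizes per level and the resolution of the lowest level.
// Real implementations differ in sizes, resolution, and cascading details.
var (
	spans = []int64{64, 64, 32, 4} // buckets per level (made-up values)
	tick  = int64(time.Second)     // duration of one lowest-level tick
)

// bucketFor maps a timer deadline to a (level, bucket) pair relative to now.
// Placement is plain arithmetic on the deadline, which is the "hashed" part:
// insertion is O(1) and no per-timer comparisons are needed.
func bucketFor(now, deadline int64) (level int, index int64) {
	span := tick // time covered by one bucket at the current level
	for lvl, n := range spans {
		levelSpan := span * n // time covered by the whole level
		if deadline-now < levelSpan {
			return lvl, (deadline / span) % n
		}
		span = levelSpan
	}
	// Deadline is beyond the top wheel: park it in the last bucket.
	last := len(spans) - 1
	return last, spans[last] - 1
}

func main() {
	now := time.Now().UnixNano()
	deadline := now + int64(90*time.Second)
	lvl, idx := bucketFor(now, deadline)
	fmt.Printf("timer lands on level %d, bucket %d\n", lvl, idx)
}
```

The downside discussed above remains: when the tick reaches a higher-level bucket, all timers in it have to be re-hashed into the lower wheels, which is where the tail-latency concern comes from.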
Maybe there are some approximations that can be done to address the problem. Or we can use Memcached's approach, which uses a dedicated thread to move 100 (number is made up) entries each time, then releases the lock and sleeps for some time before the next operation. But I think even the hourly timing wheel will be an issue. Assuming a 256 GB cache with a mean object size of 256 bytes, 256 GB / 256 B = 1 billion objects are stored. If we assume expiration times are evenly distributed over a week, then 1 billion / (7 * 24) ≈ 6 million objects expire each hour, so each time the hourly wheel ticks you need to process about 6 million objects, which would take tens to hundreds of milliseconds.
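For illustration only, here is a rough Go sketch of the batched approach described above: expire a bounded number of entries, release the lock, sleep, repeat, so no single step holds the lock while processing millions of objects. The types, names, and numbers are hypothetical; this is not Memcached's or otter's actual code.

```go
package expiry

import (
	"sync"
	"time"
)

// entry and cache are hypothetical types used only to illustrate the
// batched-expiration idea; they are not otter's API.
type entry struct {
	key      string
	expireAt time.Time
}

type cache struct {
	mu      sync.Mutex
	entries map[string]entry
}

// expireBatch examines at most batchSize entries and deletes the expired
// ones, holding the lock only for the duration of one small batch.
func (c *cache) expireBatch(batchSize int) {
	c.mu.Lock()
	defer c.mu.Unlock()

	now := time.Now()
	scanned := 0
	for k, e := range c.entries {
		if scanned >= batchSize {
			break
		}
		if e.expireAt.Before(now) {
			delete(c.entries, k)
		}
		scanned++
	}
}

// expireLoop spreads the work over time: one bounded batch, then the lock is
// released and the goroutine sleeps, so no tick ever has to process millions
// of entries at once.
func (c *cache) expireLoop(batchSize int, pause time.Duration, stop <-chan struct{}) {
	for {
		select {
		case <-stop:
			return
		default:
			c.expireBatch(batchSize)
			time.Sleep(pause)
		}
	}
}
```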
Hi @maypok86, it was great to see the Hacker News post attract a lot of attention!
Hi @1a1a11a, I wasn't even worried about that :). The main thing for me was that S3-FIFO showed a good result. Actually, there might be something wrong here, because I just took the zipf parameters from the ristretto benchmarks and didn't verify them in any way. Here are the parameters of the zipf distribution, here is the description of these parameters, and this run uses 1,000,000 requests. I tested otter along with a similar S3-FIFO implementation, https://github.com/scalalang2/golang-fifo, and they seem to produce about the same results.
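For anyone who wants to reproduce this kind of workload, here is a hedged Go sketch of generating a zipf-distributed key stream with math/rand, which is roughly how such benchmarks drive the cache. The skew, key-space size, and request count below are placeholders, not the exact parameters from the links above.

```go
package main

import (
	"fmt"
	"math/rand"
)

func main() {
	// Placeholder parameters: the exact values used in the ristretto-style
	// benchmark are in the links from the comment above, not reproduced here.
	const (
		s        = 1.01    // skew (must be > 1 for rand.Zipf)
		v        = 1.0     // value offset (must be >= 1)
		imax     = 1 << 20 // number of distinct keys minus 1
		requests = 1_000_000
	)

	r := rand.New(rand.NewSource(42))
	z := rand.NewZipf(r, s, v, imax)

	// Count how often the hottest key is requested, just to show the skew.
	hot := 0
	for i := 0; i < requests; i++ {
		if z.Uint64() == 0 {
			hot++
		}
	}
	fmt.Printf("hottest key received %d of %d requests\n", hot, requests)
}
```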
Hi @maypok86, thank you for the explanations! How was the miss ratio of ARC obtained?
Oh, my comment may be confusing. I was wondering where the ARC implementation comes from.
Oh, it uses the most well-known implementation, from hashicorp: https://github.com/hashicorp/golang-lru/blob/main/arc/arc.go
The LRU implementation is also taken from the same repository. |
Thank you! The implementation matches the libCacheSim results. :) |
If my understanding is correct, the current implementation has a bug that may evict more objects than needed. I believe it should `return` at this line instead of continuing to evict, because an object has already been evicted from the main queue: https://github.com/maypok86/otter/blob/main/internal/s3fifo/small.go#L62
As you have mentioned (Experiment with S3-FIFO eviction policy Yiling-J/theine-go#29), it is better to store a fingerprint instead of the full object in the ghost. Storing data in the ghost defeats the purpose of using a ghost queue... (a rough sketch of this idea follows below).
The size of the ghost queue does not need to be accurate, so you can estimate it as the total cache size divided by the mean object size. VMware found that the ghost queue size has little impact between 40% and 200% of the main queue size (in terms of the number of objects).
There are multiple if statements that seem unnecessary, e.g., https://github.com/maypok86/otter/blob/main/internal/s3fifo/small.go#L55
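As a rough illustration of the fingerprint suggestion (and of sizing the ghost queue as cache size divided by mean object size), here is a hedged Go sketch; it is not otter's implementation, and all names and numbers are made up.

```go
package ghostq

import "hash/maphash"

// ghost is an illustrative fixed-size FIFO of key fingerprints; it is not
// otter's implementation. Each slot costs 8 bytes regardless of object size,
// which is the point of storing a hash instead of the full entry.
type ghost struct {
	seed  maphash.Seed
	slots []uint64
	set   map[uint64]struct{}
	head  int
}

// newGhost sizes the queue with a rough estimate: total cache bytes divided
// by the mean object size. The size does not need to be exact.
func newGhost(cacheBytes, meanObjBytes int) *ghost {
	n := cacheBytes / meanObjBytes
	if n < 1 {
		n = 1
	}
	return &ghost{
		seed:  maphash.MakeSeed(),
		slots: make([]uint64, n),
		set:   make(map[uint64]struct{}, n),
	}
}

func (g *ghost) fingerprint(key string) uint64 {
	return maphash.String(g.seed, key)
}

// Add records a key's fingerprint, overwriting the oldest slot when full.
// A zero slot is treated as empty, which is good enough for a sketch.
func (g *ghost) Add(key string) {
	fp := g.fingerprint(key)
	if old := g.slots[g.head]; old != 0 {
		delete(g.set, old)
	}
	g.slots[g.head] = fp
	g.set[fp] = struct{}{}
	g.head = (g.head + 1) % len(g.slots)
}

// Contains reports whether the key was seen recently (modulo hash collisions).
func (g *ghost) Contains(key string) bool {
	_, ok := g.set[g.fingerprint(key)]
	return ok
}
```

At 8 bytes per slot (plus the lookup set), even a ghost sized at 100% of the main queue costs on the order of megabytes per million entries, which is why storing full objects there is unnecessary.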