Add cost func #3
base: main
Conversation
@Gobd I explained on Reddit why I think the "cost" factor adds unnecessary overhead to this LRU. If you can do a TTL-based version of "cost" instead, that would be great, and I'll accept that as a pull request. Can you check the Reddit thread and maybe explain why this cost factor is necessary? That said, I could do a branch with the cost factor, or separate it out for the special use case. |
I think your Reddit account and all the posts you made were removed :( I don't think this adds any overhead, and it starts an options pattern that could be used for other things too, like how many shards or whatever else (a sketch of the pattern follows). |
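A minimal sketch of how that functional-options pattern could look. Only WithCostFunc appears in this branch; the config fields and WithShards are illustrative assumptions, not the repo's actual API.

// Functional options: each Option mutates a private config, so new
// tunables (shard count, hash func, ...) can be added later without
// breaking the NewLRUCache signature.
type Option func(*config)

type config struct {
	costFunc func(key, value []byte) int64
	shards   int // hypothetical future tunable
}

// WithCostFunc lets the caller define how much capacity an entry consumes.
func WithCostFunc(f func(key, value []byte) int64) Option {
	return func(c *config) { c.costFunc = f }
}

// WithShards reuses the same pattern for another setting (illustrative only).
func WithShards(n int) Option {
	return func(c *config) { c.shards = n }
}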
@Gobd I have no idea why they deleted the post; it was my first time posting Go content on Reddit. I'll need to check out what this cost factor is about. The "cost" factor I will implement in the future will be based on TTL, so the greater the TTL, the more important the entry? Anyway, TTL will be added later. I still can't wrap my head around a "cost" factor on a TTL LRU for now; I think the extra processing for a "cost" adds overhead to the LRU.

This is what may make Accelru stand out while keeping the LRU comparison fair: when you limit by item capacity, you are not constraining the cache to the device's available resources. If you limit by memory instead, that's where this LRU should look "better"; after all, it's the memory limit and how it's used that matter for a cache.

P.S.: for testing against this LRU, let's use the lowest memory allocation reported by the other caches in each category (other than phuslu/lru, which doesn't report memory allocations) as the total cache capacity. How about that for a comparison? Let's see what happens.

One more thing: the lower bound is the ~1.3 MB of memory allocated by otter for 10,000 items, so the Accelru key/value store should fit within that, and I can take it as the memory budget; just use NewLRUCache(1300000, 1). I'm only curious about the hit ratio then, with Accelru's total memory confined to 1.3 MB for the 10k items. (Honestly, I think that's a bit much; I would use about 150 KB for a 10k-item LRU cache of integers, but that's just me.) I can even estimate the hit ratio without actual runs; I guess I'm quite a Pareto principle believer. I'll take the lowest "memory allocation used" among the caches you are comparing today as the capacity size for each of the sections you have done. I can live with that.

I won't speculate on why the post was deleted, but if you can, please post your findings on Reddit etc. Instead of just deleting the thread, they deleted the whole post and filtered content to prevent any mention of the repo. |
Here's what two hit-ratio benchmarks from there look like with the cost of an item fixed at 1, using my branch. You can compare with the results at https://github.com/Gobd/benchmarks for these two traces to see the difference when Accelru is treated more fairly relative to the other caches. |
@Gobd what NewLRUCache settings did you use? Can you show the chart for this? I only modified this: capacity * 40 bytes (see the sketch below).

I have an issue with the simulator's chart generation; what software do I need to install to see the chart for this? See the error message at the bottom. |
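One hypothetical reading of the "capacity * 40 bytes" change above (the 40-byte per-entry estimate and the call shape are assumptions, not a confirmed sizing rule of the repo):

// capacity is the item count the other caches use; scale it to a byte
// budget with an estimated ~40-byte per-entry footprint, per the comment.
cache := lru.NewLRUCache(int64(capacity)*40, 1)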
lru.NewLRUCache(int64(capacity), 1, lru.WithCostFunc(func(key, value []byte) int64 {
	return 1
}))

So same as the others: limit based on items, not memory. |
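For contrast, the same option could keep the original memory-capped behaviour. A sketch; the memoryBudget variable and the byte-size cost formula are assumptions, not code from this branch:

// Charge each entry its approximate byte size instead of a flat 1, so the
// capacity argument acts as a memory budget rather than an item count.
cache := lru.NewLRUCache(int64(memoryBudget), 1, lru.WithCostFunc(func(key, value []byte) int64 {
	return int64(len(key) + len(value))
}))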
@Gobd what about a memory-to-memory comparison? I'm curious whether any issues show up in the hit ratio when the same amount of memory is used as by the others. |
@Gobd I thought about it, and your initial capacity is still capped by memory used: e.g. a capacity of 10000 means a 10 KB memory size, so the lower hit ratio is expected. You have to increase the initial capacity to match the memory size too. |
@Gobd if you really want to test the capacity limit, then the way to do it should be to give the cache the same total memory budget as the others (one possible sketch follows this comment). Please check my remarks and publish the findings; total memory used is the best basis for judging the effectiveness of a cache for Go use. |
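A minimal sketch of that memory-to-memory comparison, assuming the 1.3 MB budget mentioned earlier in the thread; the Get/Put method names and the trace variable are illustrative assumptions, not this cache's confirmed API:

// Give Accelru the leanest competitor's byte budget (~1.3 MB for the
// 10k-item trace, per the thread) and measure only the hit ratio.
cache := lru.NewLRUCache(1_300_000, 1)

hits, total := 0, 0
for _, key := range trace { // trace: hypothetical replay of request keys ([][]byte)
	total++
	if _, ok := cache.Get(key); ok {
		hits++
	} else {
		cache.Put(key, key)
	}
}
fmt.Printf("hit ratio: %.2f%%\n", 100*float64(hits)/float64(total))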
@Gobd I've updated a faster, more memory-efficient cache for your test; you can define your own hashing algorithm in this one (sketch below). |
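A self-defined hash could plug into the same options pattern. A sketch only: WithHashFunc is a hypothetical option name, with FNV-1a shown as one hash a caller might supply:

// WithHashFunc is not a confirmed API; FNV-1a constants are standard.
cache := lru.NewLRUCache(1_300_000, 1, lru.WithHashFunc(func(key []byte) uint64 {
	const offset64, prime64 = 14695981039346656037, 1099511628211
	h := uint64(offset64)
	for _, b := range key {
		h ^= uint64(b)
		h *= prime64
	}
	return h
}))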
So much formatting, sorry :( Maybe this would make the benchmarks fairer: allow a user-supplied cost func so Accelru can be configured like the other caches for the benchmark.
Don't merge this since I changed the go.mod; it's just an example.