caching concurrently on high volume requests #3
That happens because there are many concurrent requests that MISS the first cache lookup, so they all execute your handler and then cache the response. Only subsequent requests HIT the cache lookup. To avoid that, you'll need a Mutex to lock execution to a single goroutine at a time: https://gobyexample.com/mutexes
I will try to make the change and post a gist for you to review, is that ok?
Following your suggestion, I've implemented the mutex in the middleware; please see it here: https://gist.github.com/mlevkov/41094e7d536c720e2b09f617347eb1d2 Here are the performance numbers
--> After:
Over 2x improvement, and it no longer reports multiple cache writes at the same time. Please let me know if I've implemented the change correctly. Any other suggestions?
That's a good implementation, but I'd suggest using RLock as well. It'd be like:
A read lock doesn't block other read locks, so you can have multiple read locks held at once. I'd expect this to also give you slightly higher throughput.
RLock in addition to the mutex, or in place of the mutex?
You can use the same mutex; it should have a RLock func there.
Oops, minor mistake: your mutex should be this one: https://golang.org/pkg/sync/#RWMutex
I made a quick change following your proposed fix and I'm testing it now.
I realized that was the case. Here is the gist:
The above implementation's performance is:
After that Lock, you'll need an additional lookup to the cache. The reason is that two or more goroutines might try to acquire that lock, but only the first should do the work. The subsequent goroutines, after acquiring the lock, should then hit the cache.
Something like this?
Yep, how does that perform? Also, that last write should be inside the brackets.
OK, let me make that change. Here is the performance from that one:
Here is the performance feedback from the last run, with the w.Write moved inside the brackets:
Perhaps I'm not implementing this correctly. Would you be open to writing a sample that you feel fits the intent?
The performance outcome of this code is:
I'm using bombardier for testing at 150 concurrent connections over a 90-second period, just for reference.
That's interesting; I'd expect it to be better than or equal in terms of perf. I'll try to have a look at this later on.
That would be great. Thank you! |
Hey @mlevkov, try this version. The difference is that I'm using RLock first and only then Lock (if the value is not found in the cache).
Hm, that is interesting indeed. Thank you so much! I'll try that shortly and report back with results.
@goenning I've tried exactly what you described here. Here are the reported results:
What was your performance like?
Hello,
I've implemented your code with some minor adjustments for my REST endpoint, which takes a URL with parameters and returns a JSON document. In that scenario, your code is a perfect fit for middleware []byte storage. However, I've run into a problem: when a stress-testing tool such as bombardier hits the cache-enabled endpoint with the parameters "/bombardier --method=GET --latencies --duration=10s --connections=150 127.0.0.1:3030/evs/1/at/153000,674000,1404000,1433926/", the endpoint, instead of caching only the first request, caches a few more past the first one, as depicted in the following log output:
Any thoughts?