/purge/* causes 100% disk I/O usage #32
Your configuration may be affected by other nginx settings, such as thread-based async I/O or the cache lock. What is the number of objects to be deleted (cache folder)?
My OS is FreeBSD 12.0, with no async I/O. Whether I set proxy_cache_lock on or off, the same failure occurs.
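For reference, the settings being discussed would typically appear roughly as in the sketch below; the zone name, paths, sizes, and upstream address are placeholders I am assuming, not taken from the reporter's configuration.

```nginx
# Sketch only: names, paths, and sizes are illustrative, not from this issue.
http {
    aio       threads;   # thread-based async I/O (requires nginx built --with-threads)
    sendfile  on;

    proxy_cache_path /var/cache/nginx keys_zone=cache_one:64m max_size=10g inactive=1d;

    server {
        listen 80;

        location / {
            proxy_pass       http://127.0.0.1:8080;
            proxy_cache      cache_one;
            proxy_cache_lock on;   # only one request per key populates the cache at a time
        }
    }
}
```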
I found that when I purge with a wildcard, it traverses all of my cache files. My cache is very large, so this causes 100% disk I/O usage.
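For context, a wildcard purge location with ngx_cache_purge usually looks something like the sketch below (the zone name, cache key, and allowed client address are assumptions on my part). A request such as `GET /purge/images/*` produces a key ending in `*`, which cannot be resolved with a single hash lookup; as described above, the module has to walk the cache entries and unlink every matching file from inside the worker process, which is what saturates the disk.

```nginx
# Sketch only: zone name, key layout, and allowed client are assumptions.
location ~ ^/purge(/.*) {
    allow 127.0.0.1;   # restrict purge requests to trusted clients
    deny  all;
    proxy_cache_purge cache_one $scheme$host$1;
}
```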
I'm having the same issue here. Is there any workaround for a big cache, e.g. > 300 GB?
Just about 10 GB in my case.
Earlier today, I explained why this happens, here:
Deleting the file from disk in a worker process is REALLY NOT a good idea.
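As a possible mitigation (my suggestion, not something this module provides), large wildcard purges can often be avoided by letting nginx's own cache manager expire entries instead; its deletion rate can be throttled with the manager_* parameters of proxy_cache_path (available since nginx 1.11.5), so eviction does not monopolize the disk:

```nginx
# Sketch: values are illustrative; tune them for your disk and cache size.
proxy_cache_path /var/cache/nginx
                 keys_zone=cache_one:64m
                 max_size=10g
                 inactive=10m              # unused entries become eligible for removal after 10 minutes
                 manager_files=100         # delete at most 100 files per iteration
                 manager_threshold=200ms   # cap the duration of one iteration
                 manager_sleep=500ms;      # pause between iterations
```

Note that this only throttles background eviction; a /purge/* request is still handled synchronously by the worker that receives it.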
I use an Azure VM with nginx as the reverse proxy, with a 64 GB Premium SSD added as the cache disk. When I use /purge/* to clear the cache, disk I/O goes to 100%.