Add a caching layer for NSSDB per-attribute keys #133
Conversation
NSS uses a particularly expensive PBKDF2 operation to derive per-attribute encryption/signature keys; a caching layer is therefore necessary to attain reasonable performance. Signed-off-by: Simo Sorce <[email protected]>
Signed-off-by: Simo Sorce <[email protected]>
This is a very simple way to avoid unbounded memory pressure; however, it has no smarts to keep the most used keys in the cache. It simply kicks out the last key, which is the key that happens to have the "highest" salt. This can cause performance degradation if a token has very many keys and keeps trying to access a rotation of those with a "higher" salt. Signed-off-by: Simo Sorce <[email protected]>
Added a patch to constrain the size of the cache, otherwise it could grow unbounded.
Signed-off-by: Simo Sorce <[email protected]>
Immortalizing an epic failure:
Will have to check next week why this test fails exclusively when dynamically built; it makes no sense to me.
Signed-off-by: Simo Sorce <[email protected]>
v3 is deprecated and will soon be non-functional. Signed-off-by: Simo Sorce <[email protected]>
```rust
match self.cache.write() {
    Ok(mut w) => {
        if w.len() > MAX_KEY_CACHE_SIZE {
            /* evicts the last entry, i.e. the one with the "highest" salt */
            let _ = w.pop_last();
```
This is basically dropping a random item, since the BTreeMap is ordered by key rather than by age or use. Is there some way to drop the oldest item without too much complication, or is this the best approximation we can get?
Dropping the oldest item is just as good as dropping a random one; we have no way to know whether the item we are dropping is in frequent use or was a one-off anyway.
We would have to store additional data to know which item is oldest, I believe, and that is more complicated and would also require more processing.
Ideally, normal usage of the token does not involve hundreds of encrypted attributes.
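For illustration, here is a minimal sketch of the extra bookkeeping that oldest-first eviction would require, assuming a per-entry insertion counter; all names here are hypothetical, and the RwLock wrapping used by the real cache is omitted:

```rust
use std::collections::BTreeMap;

const MAX_KEY_CACHE_SIZE: usize = 32;

/* Hypothetical: a cache that also records insertion order so the
 * oldest entry can be evicted, instead of popping the last
 * (highest-salt) entry from the BTreeMap. */
struct KeyCache {
    /* salt -> (insertion sequence number, derived key) */
    entries: BTreeMap<Vec<u8>, (u64, Vec<u8>)>,
    seq: u64,
}

impl KeyCache {
    fn insert(&mut self, salt: Vec<u8>, key: Vec<u8>) {
        if self.entries.len() >= MAX_KEY_CACHE_SIZE {
            /* O(n) scan for the smallest sequence number: this is
             * the additional data and processing mentioned above */
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, (seq, _))| *seq)
                .map(|(s, _)| s.clone())
            {
                self.entries.remove(&oldest);
            }
        }
        self.entries.insert(salt, (self.seq, key));
        self.seq += 1;
    }
}
```

The linear scan could be avoided with a second index or a full LRU structure, but only at the cost of yet more state, which is exactly the trade-off weighed above.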
```rust
/* now that the pin has changed all cached keys are invalid, replace the lot */
self.keys = newkeys;
/* changing the pin does not leave the token logged in */
self.keys.unset_key();
```
This sounds weird. We get the new keys, replace them in self,
and then throw them away as on logout. Wouldn't it be more straightforward to just throw both away, instead of indirectly removing the old keys by assigning the new ones and then flushing those too?
No, because I want to preserve the cache of the PIN, which is created when we store it.
If you look carefully at the code you'll see that caches are never dropped except when we wholesale replace all keys in this function, which makes sense because a change of PIN changes all keys, so all the caches are useless.
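As a rough, hypothetical paraphrase of the flow being defended here (the types and fields are stand-ins; only the replace-then-unset pattern mirrors the snippet quoted above):

```rust
/* Hypothetical stand-ins for the token's key state. */
struct Keys {
    pin_cache: Option<Vec<u8>>, /* cached PIN-derived key, kept */
    key: Option<Vec<u8>>,       /* active key material, flushed */
}

impl Keys {
    fn unset_key(&mut self) {
        /* logout-equivalent: drop only the active key material */
        self.key = None;
    }
}

struct Token {
    keys: Keys,
}

impl Token {
    fn change_pin(&mut self, newkeys: Keys) {
        /* the new PIN's cache entry (created when the PIN was
         * stored) travels inside `newkeys`; replacing the whole
         * struct discards every stale per-attribute key cache */
        self.keys = newkeys;
        /* a PIN change must not leave the token logged in */
        self.keys.unset_key();
    }
}
```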
lgtm
Adds a caching layer to NSSDB to reduce the performance impact of the key derivation operations.
NSSDB uses a particularly costly operation (pbkdf2) to derive a key for each of the attributes being encrypted.
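To illustrate the idea, here is a minimal, hypothetical sketch of a salt-keyed cache placed in front of the derivation. All names and types are illustrative rather than the PR's actual API, and the real code additionally guards the cache behind an RwLock, as the review snippet above shows:

```rust
use std::collections::BTreeMap;

/* Hypothetical sketch of the core idea: run PBKDF2 only on a cache
 * miss, keyed by the per-attribute salt. */
struct DerivedKeyCache {
    cache: BTreeMap<Vec<u8>, Vec<u8>>, /* salt -> derived key */
}

impl DerivedKeyCache {
    fn get_key(&mut self, pin: &[u8], salt: &[u8]) -> Vec<u8> {
        if let Some(key) = self.cache.get(salt) {
            /* cache hit: the expensive derivation is skipped */
            return key.clone();
        }
        let key = derive_pbkdf2(pin, salt); /* the costly step */
        self.cache.insert(salt.to_vec(), key.clone());
        key
    }
}

/* stand-in for the PBKDF2 derivation (many iterations by design);
 * a real implementation would call into a crypto library */
fn derive_pbkdf2(_pin: &[u8], _salt: &[u8]) -> Vec<u8> {
    unimplemented!()
}
```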
This PR also adds a test to demonstrate the issue.
This test takes a relatively long time to execute without the cache:
It is much faster with the cache:
Fixes #130