Add a caching layer for NSSDB per-attribute keys #133

Merged: simo5 merged 6 commits into latchset:main on Dec 16, 2024

Conversation

@simo5 (Member) commented Dec 13, 2024

Adds a caching layer to NSSDB to reduce the performance impact of the key derivation operations.
NSSDB uses a particularly costly operation (pbkdf2) to derive a key for each of the attributes being encrypted.
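For a sense of the mechanism, below is a minimal sketch of such a cache, assuming a map keyed by the per-attribute salt and guarded by an RwLock. The names (`KeyCache`, `get_or_derive`), the iteration count, and the use of the `pbkdf2`/`sha2` crates are illustrative assumptions, not the PR's actual API; kryoptic's real derivation goes through its own crypto backend.

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

use pbkdf2::pbkdf2_hmac;
use sha2::Sha256;

/// Illustrative cache: maps the per-attribute salt to the derived key.
struct KeyCache {
    cache: RwLock<BTreeMap<Vec<u8>, Vec<u8>>>,
}

impl KeyCache {
    fn new() -> Self {
        KeyCache { cache: RwLock::new(BTreeMap::new()) }
    }

    /// Return the cached key for this salt, or derive it with PBKDF2
    /// (the expensive step the cache avoids repeating) and store it.
    fn get_or_derive(&self, pin: &[u8], salt: &[u8]) -> Vec<u8> {
        if let Ok(r) = self.cache.read() {
            if let Some(key) = r.get(salt) {
                return key.clone();
            }
        }
        let mut key = vec![0u8; 32];
        // Hypothetical iteration count; NSSDB uses its own parameters.
        pbkdf2_hmac::<Sha256>(pin, salt, 10_000, &mut key);
        if let Ok(mut w) = self.cache.write() {
            w.insert(salt.to_vec(), key.clone());
        }
        key
    }
}
```

With a cache like this only the first access to a given attribute pays the PBKDF2 cost; subsequent accesses are a map lookup.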

This PR also adds a test to demonstrate the issue.
This test takes a relatively long time to execute without the cache:

$ time cargo test --features standard,nssdb test_nssdb_key_cache
   Compiling kryoptic v0.1.0 (/home/remote/ssorce/devel/git/kryoptic)
    Finished `test` profile [unoptimized + debuginfo] target(s) in 4.96s
     Running unittests src/lib.rs (target/debug/deps/kryoptic_pkcs11-0639a8933718ff9b)

running 1 test
test tests::nssdb::test_nssdb_key_cache has been running for over 60 seconds
test tests::nssdb::test_nssdb_key_cache ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 186 filtered out; finished in 110.29s


real	1m55.282s
user	1m57.233s
sys	0m0.649s

It is much faster with the cache:

$ time cargo test --features standard,nssdb test_nssdb_key_cache
    Finished `test` profile [unoptimized + debuginfo] target(s) in 0.03s
     Running unittests src/lib.rs (target/debug/deps/kryoptic_pkcs11-0639a8933718ff9b)

running 1 test
test tests::nssdb::test_nssdb_key_cache ... ok

test result: ok. 1 passed; 0 failed; 0 ignored; 0 measured; 186 filtered out; finished in 0.11s


real	0m0.166s
user	0m0.144s
sys	0m0.020s

Fixes #130

NSS uses a particularly expensive PBKDF2 operation to derive
per-attribute encryption/signature keys; a caching layer is therefore
necessary to attain reasonable performance.

Signed-off-by: Simo Sorce <[email protected]>
This is a very simple way to avoid unbounded memory pressure; however,
it has no smarts to keep the most used keys in the cache. It simply
evicts the last key, which is the key that happens to have the "highest"
salt. This can cause performance degradation if a token has a great many
keys and keeps accessing a rotation of those with "higher" salts.

Signed-off-by: Simo Sorce <[email protected]>
@simo5 (Member, Author) commented Dec 13, 2024

Added a patch to constrain the size of the cache, otherwise it could grow unbounded.
I chose an arbitrary limit of 128 keys. The logic can be improved later if it turns out there is some use case that consistently causes thrashing.
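In rough terms the cap works as sketched below; MAX_KEY_CACHE_SIZE matches the 128-key limit mentioned above, while the struct and method names are simplified stand-ins rather than the PR's actual types.

```rust
use std::collections::BTreeMap;
use std::sync::RwLock;

const MAX_KEY_CACHE_SIZE: usize = 128;

struct KeyCache {
    cache: RwLock<BTreeMap<Vec<u8>, Vec<u8>>>,
}

impl KeyCache {
    /// Insert a newly derived key; if the cache grew past the limit,
    /// evict one entry. pop_last() removes the entry with the
    /// lexicographically greatest salt, which is effectively arbitrary.
    fn insert(&self, salt: &[u8], key: Vec<u8>) {
        if let Ok(mut w) = self.cache.write() {
            w.insert(salt.to_vec(), key);
            if w.len() > MAX_KEY_CACHE_SIZE {
                let _ = w.pop_last();
            }
        }
    }
}
```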

@simo5 simo5 requested a review from Jakuje December 13, 2024 22:28
@simo5 (Member, Author) commented Dec 13, 2024

Immortalizing an epic failure:

---- tests::login::test_login stdout ----
thread 'tests::login::test_login' panicked at src/tests/login.rs:78:9:
assertion `left != right` failed
  left: 0
 right: 0

I will have to check next week why this test fails only when dynamically built; it makes no sense to me.

src/storage/nssdb/ci.rs (review comment marked outdated and resolved)
v3 is deprecated and will soon be non-functional

Signed-off-by: Simo Sorce <[email protected]>
@simo5 simo5 requested a review from Jakuje December 16, 2024 15:48
match self.cache.write() {
    Ok(mut w) => {
        if w.len() > MAX_KEY_CACHE_SIZE {
            let _ = w.pop_last();
Contributor:
This is basically dropping a random item from the BTreeMap, since it is ordered by key. Is there some way to drop the oldest item without too much complication, or is this the best approximation we can get?

Member (Author):
Dropping the oldest item is just as good as dropping a random one; we have no way to know whether the entry we are dropping is still useful or was a one-off anyway.
We would have to store additional data to know which entry is oldest, I believe, and that is more complicated and would also require more processing.
Ideally, normal usage of the token does not involve hundreds of encrypted attributes.
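Purely as an illustration of the trade-off discussed in this thread (not code from the PR), evicting the oldest entry would require extra per-entry bookkeeping, for example an insertion counter:

```rust
use std::collections::BTreeMap;

/// Illustrative only: tracking insertion order means storing an extra
/// counter per entry and scanning (or keeping a second index) to find
/// the oldest one on eviction.
struct OrderedCache {
    next_seq: u64,
    // salt -> (insertion sequence number, derived key)
    entries: BTreeMap<Vec<u8>, (u64, Vec<u8>)>,
}

impl OrderedCache {
    fn insert(&mut self, salt: Vec<u8>, key: Vec<u8>, max: usize) {
        self.entries.insert(salt, (self.next_seq, key));
        self.next_seq += 1;
        if self.entries.len() > max {
            // Linear scan for the smallest sequence number: the oldest entry.
            if let Some(oldest) = self
                .entries
                .iter()
                .min_by_key(|(_, (seq, _))| *seq)
                .map(|(salt, _)| salt.clone())
            {
                self.entries.remove(&oldest);
            }
        }
    }
}
```

That extra state, plus the scan (or a second index) on eviction, is the "additional data" and processing the reply above refers to.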

Comment on lines +1205 to +1208
/* now that the pin has changed all cached keys are invalid, replace the lot */
self.keys = newkeys;
/* changing the pin does not leave the token logged in */
self.keys.unset_key();
Contributor:
This sounds weird. We get the new keys, replace them in self, and then throw them away as with a logout. Wouldn't it be more straightforward to just throw both away, instead of indirectly removing the old keys by assigning the new ones and then flushing those too?

Member (Author):
No, because I want to preserve the cached key for the pin, which is created when we store it.

If you look carefully at the code you'll see the caches are never dropped except when we wholesale replace all keys in this function, which makes sense because a change of PIN changes all keys, so all cached keys become useless.

@Jakuje (Contributor) left a comment:
lgtm

@simo5 simo5 merged commit 6166f28 into latchset:main Dec 16, 2024
19 checks passed