
Changed local_ids for Atomic counter and removed key_slot_semaphore. #191


Conversation


@sbailey-arm sbailey-arm commented Jun 25, 2020

1. key_slot_semaphore removed (#187).

2. local_ids changed to an AtomicU32 that stores the current highest key ID (including destroyed keys). On load, it loops through all stored keys and sets the counter to the maximum ID. On creating a key ID, it adds 1 to the counter. If storing the key fails and no other thread has created a key in the meantime, the counter is decremented back to its value before the failed attempt. On key destruction, the counter is not altered, as the extra complication was deemed unnecessary (#187).
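The allocation scheme described above can be sketched roughly as follows. This is a minimal illustration, not the PR's actual code: `PSA_KEY_ID_USER_MAX` uses the value quoted later in this review, `allocate_key_id` is a hypothetical helper name, and `compare_exchange` stands in for the now-deprecated `compare_and_swap` used in the PR.

```rust
use std::sync::atomic::{AtomicU32, Ordering::Relaxed};

// Value quoted later in this review; the rest of the sketch is illustrative.
const PSA_KEY_ID_USER_MAX: u32 = 0x3FFF_FFFF;

/// Allocate the next key ID, or None if the ID space is exhausted.
fn allocate_key_id(max_current_id: &AtomicU32) -> Option<u32> {
    // fetch_add returns the *previous* value, so the new ID is old + 1.
    let new_key_id = max_current_id.fetch_add(1, Relaxed) + 1;
    if new_key_id > PSA_KEY_ID_USER_MAX {
        // If no other thread allocated in the meantime, undo the increment.
        // (The PR uses compare_and_swap; compare_exchange is the modern form.)
        let _ = max_current_id.compare_exchange(new_key_id, new_key_id - 1, Relaxed, Relaxed);
        return None;
    }
    Some(new_key_id)
}

fn main() {
    let counter = AtomicU32::new(0);
    assert_eq!(allocate_key_id(&counter), Some(1));
    assert_eq!(allocate_key_id(&counter), Some(2));

    // At the limit, allocation fails and the uncontended CAS undoes the bump.
    let full = AtomicU32::new(PSA_KEY_ID_USER_MAX);
    assert_eq!(allocate_key_id(&full), None);
    assert_eq!(full.load(Relaxed), PSA_KEY_ID_USER_MAX);
}
```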

Signed-off-by: Samuel Bailey [email protected]

// If storing key failed and no other keys were created in the mean time, it is safe to
// decrement the key counter.
let _ = max_current_id.compare_and_swap(new_key_id, new_key_id - 1, Relaxed);
return Err(ResponseStatus::PsaErrorBadState);
Contributor Author

Wasn't sure on the error to return if there are no more key IDs available.

Member

I think BadState is an error referring to multi-part operations. It might be better to use InsufficientMemory. Also, could you please add an error! log to note that we've reached the limit?

Member

Is it a good idea to return InsufficientMemory? I guess in conjunction with a log message it would be easy to understand what's going on, but based on the error code alone it might be confusing.

Contributor Author (@sbailey-arm, Jun 25, 2020)

I was thinking about adding a new error but didn't want to touch parsec-interface without discussing first.

Something like 'MaxKeyIdReached'.

Member

I think the problem with a new response code like that is we shouldn't be exposing internal issues (or, not ones that mean nothing to a client). InsufficientMemory is probably ok if we document what it means in our context - we just add an entry here

Member @hug-dev left a comment

Thanks!


if new_key_id > key::PSA_KEY_ID_USER_MAX {
// If storing key failed and no other keys were created in the mean time, it is safe to
// decrement the key counter.
let _ = max_current_id.compare_and_swap(new_key_id, new_key_id - 1, Relaxed);
Member

As you said in your issue comment, do you think this is always safe?

Contributor Author

There's one problem I've thought of (see second paragraph) but in general, yes. In that particular case, the new ID would be created with the value of PSA_KEY_ID_USER_MAX + 1, and max_current_id would be set to that. If no other thread tries to create a new ID, max_current_id will still be PSA_KEY_ID_USER_MAX + 1, which will then be decremented back down to PSA_KEY_ID_USER_MAX. If a new thread does try to create a new ID, it will increment it to PSA_KEY_ID_USER_MAX + 2. The original thread will then fail the compare and leave the counter untouched.

It could overflow though; theoretically there's a chance that max_current_id could keep going up if new threads stop any older threads from decrementing it. Maybe it'd be better to just hard reset it to PSA_KEY_ID_USER_MAX if it fails the check? No key will ever be created if max_current_id is at the ID limit, so we should be able to set it back.
Thread #1 comes in, increments it to MAX + 1 and realises it's too high. Then thread #2 comes in, increments it to MAX + 2 and also realises it's too high. Thread #1 runs again and resets to MAX. Then thread #2 does the same (though it's already at MAX).
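The interleaving described above can be replayed deterministically on a single thread. This is an illustrative sketch only; `PSA_KEY_ID_USER_MAX` uses the value quoted later in this review, and `compare_exchange` stands in for the PR's `compare_and_swap`.

```rust
use std::sync::atomic::{AtomicU32, Ordering::Relaxed};

const PSA_KEY_ID_USER_MAX: u32 = 0x3FFF_FFFF;

fn main() {
    let max_current_id = AtomicU32::new(PSA_KEY_ID_USER_MAX);

    // Thread #1 increments and gets MAX + 1 (too high).
    let id1 = max_current_id.fetch_add(1, Relaxed) + 1;
    // Thread #2 increments and gets MAX + 2 (also too high).
    let id2 = max_current_id.fetch_add(1, Relaxed) + 1;
    assert_eq!(id2, PSA_KEY_ID_USER_MAX + 2);

    // Thread #1's compare-and-swap decrement fails: the counter is no longer
    // id1, so nothing is undone and the value keeps creeping upwards.
    assert!(max_current_id
        .compare_exchange(id1, id1 - 1, Relaxed, Relaxed)
        .is_err());
    assert_eq!(max_current_id.load(Relaxed), PSA_KEY_ID_USER_MAX + 2);

    // A hard reset avoids the creep: no key is ever created once the counter
    // is past the limit, so storing MAX back is always safe.
    max_current_id.store(PSA_KEY_ID_USER_MAX, Relaxed);
    assert_eq!(max_current_id.load(Relaxed), PSA_KEY_ID_USER_MAX);
}
```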

Decrementing in the other case (line 74) should also be safe. If no threads have created key IDs in the meantime, the compare will match and the counter will be decremented correctly. If a thread has come in in the meantime, the compare will fail and the counter will not be decremented.

Am I missing something?

Member

Maybe it'd be better to just hard reset it to PSA_KEY_ID_USER_MAX if it fails the check? No key will ever be created if max_current_id is at the ID limit so we should be able to set it back.

Yeah, just commented the same thing in the issue, that sounds sensible!

As for the second decrement - it works, but is it needed? I'd say that having an operation that isn't deterministic isn't a good idea (regardless of correctness) and if you end up crashing through the handle limit then you have bigger issues on your system.

Contributor Author

Yeah, I saw 😋

Ok. Yeah if you're going that high, saving a key ID here and there isn't going to make a difference, I'll remove it.

while local_ids_handle.contains(&key_id) {
key_id = rng.gen_range(key::PSA_KEY_ID_USER_MIN, key::PSA_KEY_ID_USER_MAX + 1);
// fetch_add adds 1 to the old value and returns the old value, so add 1 for new ID
let new_key_id = max_current_id.fetch_add(1, Relaxed) + 1;
Member

I guess it is fine to not care about overflows as PSA_KEY_ID_USER_MAX is equal to the defined value of 0x3FFFFFFF.
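As a sanity check on the remark above (assuming the counter is a `u32`), the headroom between the limit and the wrap-around point is large:

```rust
fn main() {
    const PSA_KEY_ID_USER_MAX: u32 = 0x3FFF_FFFF;
    // The counter would need u32::MAX - PSA_KEY_ID_USER_MAX further failed
    // increments past the limit before wrapping: about three times the
    // entire user ID range.
    assert_eq!(u32::MAX - PSA_KEY_ID_USER_MAX, 0xC000_0000);
    assert!(u32::MAX - PSA_KEY_ID_USER_MAX > 3 * PSA_KEY_ID_USER_MAX);
}
```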

Contributor Author

Better safe than sorry or overkill? 😉

Comment on lines 100 to 103
if key_id > mbed_provider.id_counter.load(Relaxed) {
mbed_provider.id_counter.store(key_id, Relaxed);
}
Member

It might be more efficient to find the maximum locally, in a psa_key_id_t variable and then store it in the id_counter at the end. I believe the atomic operations impose restrictions on compiler optimizations.
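The suggested pattern could look something like the sketch below. All names here are illustrative (the PR's actual types and fields are not shown), and key IDs are assumed to be `u32` values:

```rust
use std::sync::atomic::{AtomicU32, Ordering::Relaxed};

// Find the maximum in a plain local variable while iterating, then store it
// into the atomic once at the end, instead of a load + conditional store per
// key. This keeps atomic operations out of the loop body.
fn init_counter(stored_key_ids: &[u32], id_counter: &AtomicU32) {
    let mut max_id = 0u32;
    for &key_id in stored_key_ids {
        if key_id > max_id {
            max_id = key_id;
        }
    }
    // A single atomic store once the maximum is known.
    id_counter.store(max_id, Relaxed);
}

fn main() {
    let counter = AtomicU32::new(0);
    init_counter(&[3, 42, 7], &counter);
    assert_eq!(counter.load(Relaxed), 42);
}
```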

Contributor Author

Ah, didn't realise there would be much overhead in just using the atomic variable. I will change it.

@hug-dev hug-dev requested a review from ionut-arm June 25, 2020 09:59
@hug-dev hug-dev added the enhancement New feature or request label Jun 25, 2020
@hug-dev hug-dev linked an issue Jun 25, 2020 that may be closed by this pull request
@sbailey-arm sbailey-arm force-pushed the further-simplify-mbedcrypto-provider branch from 0f5db9b to 014bd8a Compare June 25, 2020 11:16
@sbailey-arm sbailey-arm marked this pull request as ready for review June 25, 2020 11:36
Member @hug-dev left a comment

Looks good to me!

Member @ionut-arm left a comment

💯

@ionut-arm ionut-arm merged commit 5f96b21 into parallaxsecond:master Jun 25, 2020
Labels
enhancement New feature or request
Successfully merging this pull request may close these issues.

Further simplification of the Mbed Crypto provider
3 participants