Performance impact with libspdm #2544
Comments
You can use two buffers, one for send and one for receive; libspdm does not have such a restriction.
Waiting for the response to a request before sending the next request is required by the SPDM spec; that is why the limitation has been there from the beginning.
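For illustration, a minimal sketch of registering separate send and receive paths through libspdm's device I/O hooks, assuming an MCTP transport; the `mctp_write()`/`mctp_read()` helpers, the buffer size, and the header path are placeholders, not libspdm APIs:

```c
/* Sketch: distinct send and receive buffers behind libspdm's device I/O
 * callbacks. mctp_write()/mctp_read() are hypothetical platform helpers. */
#include <stdint.h>
#include <stddef.h>
#include "library/spdm_common_lib.h" /* header path may vary by integration */

#define MY_RECEIVER_BUFFER_SIZE 0x1100 /* illustrative size */

static uint8_t m_receiver_buffer[MY_RECEIVER_BUFFER_SIZE];

/* Platform transport helpers (not part of libspdm). */
extern int mctp_write(const void *buffer, size_t size, uint64_t timeout);
extern int mctp_read(void *buffer, size_t *size, uint64_t timeout);

static libspdm_return_t my_send_message(void *spdm_context, size_t message_size,
                                        const void *message, uint64_t timeout)
{
    /* Transmit the transport-encoded request; no buffer shared with receive. */
    if (mctp_write(message, message_size, timeout) != 0) {
        return LIBSPDM_STATUS_SEND_FAIL;
    }
    return LIBSPDM_STATUS_SUCCESS;
}

static libspdm_return_t my_receive_message(void *spdm_context, size_t *message_size,
                                           void **message, uint64_t timeout)
{
    /* Responses land in a buffer that is separate from the send path. */
    size_t size = sizeof(m_receiver_buffer);

    if (mctp_read(m_receiver_buffer, &size, timeout) != 0) {
        return LIBSPDM_STATUS_RECEIVE_FAIL;
    }
    *message = m_receiver_buffer;
    *message_size = size;
    return LIBSPDM_STATUS_SUCCESS;
}

void my_register_device_io(void *spdm_context)
{
    libspdm_register_device_io_func(spdm_context, my_send_message,
                                    my_receive_message);
}
```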
For sending app messages, do we also have the restriction of getting a response before we send another request?
It depends. I think SPDM VDM has such a requirement.
I am not sure about all other app messages; the relevant app spec needs to be reviewed.
Since the SPDM specification is very flexible on VDMs, I believe it's legal to specify a VDM that consists only of a request and has no corresponding response.
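To make the app-message path discussed above concrete, here is a minimal sketch that assumes a secured session has already been established; the session ID, payload bytes, and response buffer size are placeholders. The call itself is still synchronous (it waits for a response or a timeout):

```c
/* Sketch: sending an application/vendor-defined payload over an established
 * SPDM session. Payload contents and buffer sizes are illustrative only. */
#include <stdbool.h>
#include <stdint.h>
#include <stddef.h>
#include "library/spdm_requester_lib.h" /* header path may vary by integration */

libspdm_return_t send_app_request(void *spdm_context, uint32_t session_id)
{
    uint8_t request[] = {0x01, 0x02, 0x03, 0x04}; /* app-defined payload */
    uint8_t response[0x100];
    size_t response_size = sizeof(response);

    /* is_app_message = true: libspdm encrypts, transports, and decrypts the
     * payload but does not interpret it; the app/vendor spec decides whether
     * a response is semantically required. */
    return libspdm_send_receive_data(spdm_context, &session_id,
                                     true /* is_app_message */,
                                     request, sizeof(request),
                                     response, &response_size);
}
```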
Also, from https://github.com/DMTF/libspdm/blob/main/doc/design.md, the buffers have the following properties:
Okay, I will check this. Thanks a lot for the pointers.
For GPU confidential compute, we (NVIDIA) establish an initial SPDM session and then use the Export Master Secret to derive lots of new symmetric keys. We then have separate buffers for each key, and only SPDM messages go through libspdm.
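For readers unfamiliar with that pattern, the sketch below shows one way such a derivation could look, using plain HKDF via OpenSSL. It is illustrative only: the label, hash, and key length are assumptions, the Export Master Secret is assumed to have already been obtained from the established session, and this is not NVIDIA's actual scheme.

```c
/* Illustrative only: derive one application key from an SPDM Export Master
 * Secret using HKDF (OpenSSL 1.1.1+). Label and key length are made up. */
#include <string.h>
#include <stdint.h>
#include <openssl/evp.h>
#include <openssl/kdf.h>

int derive_app_key(const uint8_t *export_master_secret, size_t ems_size,
                   const char *label, uint8_t *key_out, size_t key_len)
{
    EVP_PKEY_CTX *ctx = EVP_PKEY_CTX_new_id(EVP_PKEY_HKDF, NULL);
    int ok = 0;

    if (ctx != NULL &&
        EVP_PKEY_derive_init(ctx) > 0 &&
        EVP_PKEY_CTX_set_hkdf_md(ctx, EVP_sha384()) > 0 &&
        EVP_PKEY_CTX_set1_hkdf_key(ctx, export_master_secret, (int)ems_size) > 0 &&
        EVP_PKEY_CTX_add1_hkdf_info(ctx, (const unsigned char *)label,
                                    (int)strlen(label)) > 0 &&
        EVP_PKEY_derive(ctx, key_out, &key_len) > 0) {
        ok = 1;
    }
    EVP_PKEY_CTX_free(ctx);
    return ok;
}
```

Each derived key then gets its own buffer and its own data path, so bulk traffic never has to funnel through libspdm's single request/response flow.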
@steven-bellock @jyao1 For my use case, as mentioned above, I changed the existing APIs to give me the encrypted/decrypted payload.
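A sketch of what such a modified boundary could look like; these prototypes are purely hypothetical (they are not upstream libspdm APIs) and are shown only to make the use case concrete:

```c
/* Hypothetical wrapper boundary, not upstream libspdm APIs: instead of
 * transmitting on the wire, hand the secured (encrypted) payload back to the
 * caller, and accept received ciphertext for decryption. */
#include <stdint.h>
#include <stddef.h>

/* Encrypt an application payload for an established session and return the
 * secured message, to be transmitted by the caller's own transport. */
int app_encrypt_for_session(void *spdm_context, uint32_t session_id,
                            const void *app_payload, size_t app_payload_size,
                            void *secured_msg, size_t *secured_msg_size);

/* Decrypt a secured message received by the caller's own transport and
 * return the plaintext application payload. */
int app_decrypt_from_session(void *spdm_context, uint32_t session_id,
                             const void *secured_msg, size_t secured_msg_size,
                             void *app_payload, size_t *app_payload_size);
```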
Questions:
Hi Team,
I ran into a scenario where I see a lot of performance impact when I try to send/receive data with libspdm. The transactions happen over MCTP.

Scenario:
a) We have one buffer for send/receive, which makes it difficult to send/receive parallel requests/responses.
b) libspdm requires a response for every request sent (it doesn't support an asynchronous way of sending and receiving).

Thanks,
Prithvi A Pai