Propose Enhancement 57: support_agent_behind_firewall.md #58
Conversation
This is definitely a really useful feature, but it requires fundamental changes to how Keylime currently works. In the environment where I work with Keylime, the planned workaround is to put all the devices into a VPN with OpenVPN. Two different types of interactions currently need a direct connection to the agent:

**Quotes**
First, the tenant gets an identity quote from the agent that is used to prove that the transport key actually belongs to the agent/TPM registered under that agent ID. In my opinion this quote can be eliminated by also using make/activate credential for that key during registration. Second, the verifier periodically connects to the agent to pull the data needed for attestation (TPM quote, IMA log, UEFI event log). Here we can change the model so that the agent pushes the data periodically. To make this possible, a few problems have to be solved, e.g. how the agent finds out which verifier(s) to push to and how the verifiers are load balanced.
**Payload Deployment**
This one is more tricky, because the tenant sends its payload directly to the agent. The V key also needs to be sent to the agent somehow, but this can be done as the response when the agent pushes new attestation data (see the sketch below).
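A minimal sketch of that push flow, assuming a hypothetical `/v2/agents/{id}/attestation` endpoint on the verifier and illustrative field names (this is not the current Keylime API). The point is that every connection is outbound from the agent, and the verifier piggybacks the payload / V key on its response:

```python
"""Rough sketch of the agent-push flow discussed above.

Endpoint path, JSON field names, and values are assumptions for
illustration; they are not the current Keylime API.
"""
import base64
import time

import requests  # generic HTTP client standing in for the agent's own stack

VERIFIER_URL = "https://verifier.example.com:8881"  # assumed verifier address
AGENT_ID = "d432fbb3-d2f1-4a97-9ef7-75bd81c00000"   # example agent UUID
PUSH_INTERVAL = 30                                  # seconds between pushes (assumed)


def collect_attestation(nonce: str, pcr_mask: str) -> dict:
    # Placeholders for the agent's existing TPM quote / IMA / UEFI log code.
    return {
        "nonce": nonce,
        "pcr_mask": pcr_mask,
        "quote": "<tpm-quote>",
        "ima_log": "<ima-measurement-list>",
        "uefi_event_log": "<uefi-event-log>",
    }


def push_loop(initial_nonce: str) -> None:
    nonce = initial_nonce
    while True:
        # The agent opens the connection, so it works behind NAT/firewalls.
        resp = requests.post(
            f"{VERIFIER_URL}/v2/agents/{AGENT_ID}/attestation",
            json=collect_attestation(nonce, pcr_mask="0x408400"),
            timeout=10,
        )
        body = resp.json()
        # The verifier can piggyback the V key / payload on its response,
        # so no inbound connection to the agent is ever required.
        if "payload" in body:
            encrypted_payload = base64.b64decode(body["payload"])
            # ... decrypt and run the payload here ...
        nonce = body["next_nonce"]  # nonce to use for the next push
        time.sleep(PUSH_INTERVAL)
```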
Agreed, it is a fundamental change to support the agent-push approach (probably I should rename the enhancement to that). Regarding how the agent finds out which verifier to push to, I would expect this to be part of a device onboarding process, when the agent is provisioned (e.g. using FDO). Regarding load balancing, I would expect the verifier to be able to scale out horizontally if needed. Regarding payload deployment, I would suggest waiting until the agent connects the next time (e.g. to push attestation data, as it should do periodically) and providing the payload then. Agreed, the payload needs to be stored somewhere on the registrar. In the k8s world, I would use a persistent volume claim with RWX storage for that, which is easy to access for all scaled-out instances.
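As an illustration of that payload staging, a small sketch assuming every scaled-out registrar replica mounts the same RWX volume (e.g. a Kubernetes PVC) at the same path; the path and function names are hypothetical:

```python
# Sketch of payload staging on the registrar, assuming a shared RWX volume.
# Path and function names are hypothetical.
from pathlib import Path
from typing import Optional

PAYLOAD_DIR = Path("/var/lib/keylime/payloads")  # assumed shared mount point


def store_payload(agent_id: str, encrypted_payload: bytes) -> None:
    """Called when the tenant hands over a payload for an agent it cannot reach."""
    PAYLOAD_DIR.mkdir(parents=True, exist_ok=True)
    (PAYLOAD_DIR / f"{agent_id}.bin").write_bytes(encrypted_payload)


def pop_payload(agent_id: str) -> Optional[bytes]:
    """Called when the agent connects the next time; any replica can serve it."""
    path = PAYLOAD_DIR / f"{agent_id}.bin"
    if not path.exists():
        return None
    data = path.read_bytes()
    path.unlink()  # deliver once, then drop it
    return data
```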
I would expect the verifier to be configurable, perhaps initially static but later via the data center admin GUI. I imagined that the agent could go to more than one verifier, each of which may have different requirements (e.g., rate of attestation). Can't this be done by just having several agents running as separate processes?
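Running several agent processes would work, but a single agent could also schedule its pushes per verifier. A rough sketch under that assumption, with an illustrative `Verifier` type and a placeholder `push_attestation()`:

```python
# Sketch: one agent process attesting to several verifiers, each with its own
# attestation interval, as an alternative to one agent process per verifier.
import threading
import time
from dataclasses import dataclass


@dataclass
class Verifier:
    url: str
    interval: int  # seconds between pushes for this verifier


def push_attestation(verifier: Verifier) -> None:
    print(f"pushing attestation data to {verifier.url}")  # placeholder


def serve(verifier: Verifier) -> None:
    while True:
        push_attestation(verifier)
        time.sleep(verifier.interval)


verifiers = [
    Verifier("https://verifier-a.example.com:8881", interval=30),
    Verifier("https://verifier-b.example.com:8881", interval=300),
]

for v in verifiers:
    threading.Thread(target=serve, args=(v,), daemon=True).start()

threading.Event().wait()  # keep the main thread alive
```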
How would incremental attestation work differently in a push model? The verifier says to the agent: here's the nonce, the PCR selection, and the starting event numbers for the pre-OS and IMA logs; give me the quote and the incremental logs. Hopefully, the message is exactly the same in both models.
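For illustration, that exchange could keep the same message shapes in both pull and push mode; only who opens the connection changes. The field names below are assumptions, not the existing Keylime wire format:

```python
# Illustrative shape of the incremental attestation exchange. The same
# structures work whether the verifier pulls (it opens the connection and
# sends the request) or the agent pushes (it receives the request as part of
# the verifier's previous response).
from dataclasses import dataclass


@dataclass
class AttestationRequest:
    nonce: str           # freshness nonce chosen by the verifier
    pcr_mask: str        # PCR selection, e.g. "0x408400"
    ima_offset: int      # first IMA log entry the verifier still needs
    mb_offset: int       # first pre-OS (measured boot) event still needed


@dataclass
class AttestationResponse:
    quote: str           # TPM quote over the selected PCRs and the nonce
    ima_log: str         # IMA entries from ima_offset onwards
    uefi_event_log: str  # pre-OS events from mb_offset onwards
```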
Yes, my idea is that the registrar just tells the agent which verifiers are interested in attestation data from that agent.
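For example, the registrar's answer to the agent could look roughly like this (purely illustrative field names):

```python
# Hypothetical registrar reply: it tells the agent which verifiers expect
# pushes from it, and how often. Field names are assumptions.
interested_verifiers = {
    "agent_id": "d432fbb3-d2f1-4a97-9ef7-75bd81c00000",
    "verifiers": [
        {"url": "https://verifier-a.example.com:8881", "interval": 30},
        {"url": "https://verifier-b.example.com:8881", "interval": 300},
    ],
}
```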
It doesn't, but we currently don't commit every state change to the DB, because the agent does not change its verifier during attestation.
@DanielFroehlich I've written an initial draft describing the problem and what needs to change in order to support it: https://gist.github.com/THS-on/aedfd139ac1cb012745abeb0276d5e5c
Superseded by #103
#57