Sibling communications over VSOCK #275

Open
Bert-Proesmans opened this issue Sep 15, 2024 · 1 comment

@Bert-Proesmans

I'd like to know if anyone else has experimented with, or even got working, sibling VSOCK communications.

This past week I looked into enabling communication between guest VM siblings over AF_VSOCK. I expected it to behave like AF_INET with local bridging, but it turned out to be more complex than anticipated.

Sibling communications on:

  • QEMU: does not work (tested)
  • crosvm: does not work (tested)
  • Cloud Hypervisor: does not work (untested, derived from documentation)

Virtual machine sockets (VSOCK) are provided in one of two ways: by the host kernel driver vhost_vsock, or by a userspace program that emulates the packet protocol (and acts as a VSOCK switch, see below). The virtual machine monitors QEMU and crosvm use the host driver (other source), but the expectation that this driver can switch packets between context identifiers (CIDs) does not hold. All authoritative documentation from these monitors on VSOCK explicitly describes guest-to-host communication and vice versa. Documentation on VSOCK itself is sparse, and there are blog posts about the technology, but some are partially outdated because the driver functionality has changed over the last ten years. I made the mistake of expecting undocumented features to "just work".
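
For reference, the supported guest-to-host model is easy to exercise directly from C. Below is a minimal sketch of a guest-side client: the host is always reachable at the well-known CID 2 (VMADDR_CID_HOST), while the port number and payload are arbitrary examples of mine, not anything standard.

```c
/* Minimal guest-side AF_VSOCK client: connects to a service on the host.
 * The port (1234) and payload are placeholder examples. */
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_vm addr = {
        .svm_family = AF_VSOCK,
        .svm_cid    = VMADDR_CID_HOST, /* CID 2: the hypervisor host */
        .svm_port   = 1234,            /* example port */
    };
    if (connect(fd, (struct sockaddr *)&addr, sizeof addr) < 0) {
        perror("connect");
        return 1;
    }

    const char msg[] = "hello from the guest\n";
    write(fd, msg, sizeof msg - 1);
    close(fd);
    return 0;
}
```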

For QEMU, the current implementation (QEMU device type vhost-vsock) does not support sibling communications. Implementing a CID forwarder device has been mentioned, but it hasn't been realised yet, with counterarguments around security. 1
The suggested 2, and already shown to work 3, approach is to extract the VSOCK device handling into a userspace process (QEMU device type vhost-user-vsock) that also handles the forwarding.
I haven't experimented with this approach yet (I will next week), but my understanding is that the daemon delivers VSOCK packets to a Unix socket on the host and holds a table mapping CIDs to local Unix sockets to provide the forwarding.
I'm confused by the complexity of this architecture and wonder why it is better than building on top of the host's VSOCK devices (the ones at /dev/{vhost-,}vsock) together with the connection flag VMADDR_FLAG_TO_HOST inside the guests. Everybody seems to want to be compatible with the Firecracker VSOCK design 4, which explicitly states the requirement to not use the host kernel drivers but lacks substantiation. The second reason given in 2 makes some sense to me: it increases the number of compatible host operating systems.
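
To make concrete what I mean by that flag, the sketch below shows how I imagined a guest-side sibling connection would look. The svm_flags field and VMADDR_FLAG_TO_HOST only exist on reasonably recent kernels (roughly 5.10 and later), the sibling CID and port are made-up examples, and the host side would still need something that actually switches packets between CIDs.

```c
/* Sketch: a guest asks its VSOCK transport to route a connection to a
 * sibling CID via the host instead of treating the CID as unreachable.
 * CID 42 and port 5000 are placeholders; this only helps if the host
 * actually forwards traffic between guests. */
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
    if (fd < 0) { perror("socket"); return 1; }

    struct sockaddr_vm sibling = {
        .svm_family = AF_VSOCK,
        .svm_cid    = 42,                  /* example sibling guest CID */
        .svm_port   = 5000,                /* example service port      */
        .svm_flags  = VMADDR_FLAG_TO_HOST, /* forward via the host      */
    };
    if (connect(fd, (struct sockaddr *)&sibling, sizeof sibling) < 0) {
        perror("connect");
        return 1;
    }
    close(fd);
    return 0;
}
```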

My context for doing this is setting up isolated guests, with a macvtap providing the services. A dedicated "bastion" guest proxies to the other guests. Using VSOCK here would guarantee authenticated connections, so I wouldn't have to encrypt the proxied communications. The services themselves bind directly to their VSOCK CID; strictly no inbound AF_INET allowed.
This setup otherwise works for me by fixing the port configuration and having socat perform hairpin proxying on the host, but it's not elegant.
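
For completeness, that host-side hairpin is roughly equivalent to the single-connection relay sketched below (what socat does for me). The CIDs and ports are placeholders from my setup, not anything standard.

```c
/* Sketch of the hairpin relay: accept a VSOCK connection from one guest on
 * the host and open a new VSOCK connection to a sibling guest, copying
 * bytes in both directions. Handles a single connection; CIDs and ports
 * are placeholders. */
#include <sys/socket.h>
#include <linux/vm_sockets.h>
#include <poll.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

#define LISTEN_PORT 5000 /* port the client guest connects to (at CID 2) */
#define TARGET_CID  42   /* CID of the sibling guest running the service */
#define TARGET_PORT 6000 /* port the sibling service listens on          */

static int vsock_listen(unsigned int port) {
    int s = socket(AF_VSOCK, SOCK_STREAM, 0);
    struct sockaddr_vm sa = { .svm_family = AF_VSOCK,
                              .svm_cid = VMADDR_CID_ANY,
                              .svm_port = port };
    if (s < 0 || bind(s, (struct sockaddr *)&sa, sizeof sa) < 0 ||
        listen(s, 1) < 0) {
        perror("vsock listen");
        exit(1);
    }
    return s;
}

static int vsock_connect(unsigned int cid, unsigned int port) {
    int s = socket(AF_VSOCK, SOCK_STREAM, 0);
    struct sockaddr_vm sa = { .svm_family = AF_VSOCK,
                              .svm_cid = cid,
                              .svm_port = port };
    if (s < 0 || connect(s, (struct sockaddr *)&sa, sizeof sa) < 0) {
        perror("vsock connect");
        exit(1);
    }
    return s;
}

int main(void) {
    int ls = vsock_listen(LISTEN_PORT);
    int a  = accept(ls, NULL, NULL); /* connection from guest A */
    if (a < 0) { perror("accept"); return 1; }
    int b  = vsock_connect(TARGET_CID, TARGET_PORT); /* connection to guest B */

    /* Shuttle bytes between the two sockets until either side closes. */
    struct pollfd pfd[2] = { { .fd = a, .events = POLLIN },
                             { .fd = b, .events = POLLIN } };
    char buf[4096];
    for (;;) {
        if (poll(pfd, 2, -1) < 0) break;
        for (int i = 0; i < 2; i++) {
            if (!(pfd[i].revents & (POLLIN | POLLHUP)))
                continue;
            ssize_t n = read(pfd[i].fd, buf, sizeof buf);
            if (n <= 0) return 0; /* EOF or error: stop relaying */
            if (write(pfd[1 - i].fd, buf, (size_t)n) != n) return 0;
        }
    }
    return 0;
}
```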

@brianmcgillion
Contributor

brianmcgillion commented Oct 8, 2024

@Bert-Proesmans https://github.com/search?q=repo%3Atiiuae%2Fghaf%20vsock&type=code you could have a look here for some ideas
