Transparent management interface #287

Merged (9 commits) on Dec 14, 2024

README.md: 11 changes (9 additions, 2 deletions)
@@ -5,6 +5,7 @@ This is a fork of the original [plajjan/vrnetlab](https://github.com/plajjan/vrnetlab)
The documentation provided in this fork only explains the parts that have been changed in any way from the upstream project. To get a general overview of the vrnetlab project itself, consider reading the docs of the upstream repo.

## What is this fork about?

At [containerlab](https://containerlab.srlinux.dev) we needed to have [a way to run virtual routers](https://containerlab.srlinux.dev/manual/vrnetlab/) alongside the containerized Network Operating Systems.

Vrnetlab provides perfect machinery for packaging the most common routing VMs into containers. What upstream vrnetlab doesn't do, though, is create a datapath between the VMs in a "container-native" way.
@@ -15,6 +16,7 @@ This fork adds additional option for `launch.py` script of the supported VMs cal
By adding a few options that the `connection-mode` value can be set to, we made it possible to run vrnetlab containers with networking that doesn't require a separate container and is native to tools like Docker.
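For illustration only, here is a minimal sketch of how a `connection-mode` option could be exposed on a launch script with argparse; the mode names and default shown are assumptions for the example, not the authoritative list from this fork's `launch.py`:

```python
# Hedged sketch: expose a --connection-mode flag similar in spirit to the one
# described above. Mode names and default here are illustrative assumptions.
import argparse

parser = argparse.ArgumentParser(description="vrnetlab-style VM launcher (sketch)")
parser.add_argument(
    "--connection-mode",
    choices=["tc", "bridge", "ovs", "macvtap"],  # assumed set of modes
    default="tc",
    help="how container interfaces are stitched to the qemu VM",
)
args = parser.parse_args()
print(f"selected datapath mode: {args.connection_mode}")
```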

### Container-native networking?

Yes, the term is bloated. What it actually means is that with the changes made in this fork it is possible to add interfaces to a container hosting a qemu VM, and vrnetlab will recognize those interfaces and stitch them with the VM's interfaces.

With this you can just add, say, veth pairs between containers as you normally would, and vrnetlab will make sure these ports get mapped to your router's ports. In essence, this lets you work with vrnetlab containers like normal containers and get the datapath working in the same "native" way.
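As a rough illustration of the idea (not this fork's actual code), a launch script could discover the container's `eth1+` data-plane interfaces like this before stitching each one to a VM port; the `tap` naming is an assumption for the example:

```python
# Hedged sketch, not this fork's actual code: discover the data-plane
# interfaces added to the container (eth1, eth2, ...) so each one can later
# be stitched to a corresponding qemu tap interface (tap naming assumed).
import os
import re

def data_interfaces(sys_class_net: str = "/sys/class/net") -> list[str]:
    """Return container interfaces named eth1..ethN, sorted by index."""
    pattern = re.compile(r"^eth(\d+)$")
    numbered = []
    for name in os.listdir(sys_class_net):
        match = pattern.match(name)
        if match and int(match.group(1)) >= 1:  # skip eth0, the management interface
            numbered.append((int(match.group(1)), name))
    return [name for _, name in sorted(numbered)]

if __name__ == "__main__":
    for index, iface in enumerate(data_interfaces(), start=1):
        print(f"{iface} -> tap{index}")  # e.g. eth1 -> tap1 (VM port 1)
```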
@@ -23,6 +25,7 @@ With this you can just add, say, veth pairs between the containers as you would
> With that said, we recommend readers start their journey with this [documentation entry](https://containerlab.srlinux.dev/manual/vrnetlab/), which shows how easy it is to run routers in a containerized setting.

## Connection modes

As mentioned above, the major change this fork brings is the ability to run vrnetlab containers without vr-xcon, using container-native networking instead.

The default option we use in containerlab for this setting is `connection-mode=tc`. In this mode we use tc-mirred redirects to stitch the container's `eth1+` interfaces with the ports of the qemu VM running inside.
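Conceptually, the tc-mirred stitching boils down to cross-redirecting frames between a container interface and the VM's tap interface. The sketch below shows the general shape of such rules (simplified and assumed; the exact qdisc and filter invocations used by this fork may differ):

```python
# Simplified illustration of tc-mirred stitching; the exact qdisc/filter
# invocations used by this fork may differ. Requires root and existing
# interfaces, e.g. stitch("eth1", "tap1").
import subprocess

def stitch(container_if: str, tap_if: str) -> None:
    """Cross-redirect all frames between a container interface and a VM tap interface."""
    for dev in (container_if, tap_if):
        subprocess.run(["tc", "qdisc", "add", "dev", dev, "clsact"], check=True)
    for src, dst in ((container_if, tap_if), (tap_if, container_if)):
        subprocess.run(
            ["tc", "filter", "add", "dev", src, "ingress",
             "protocol", "all", "matchall",
             "action", "mirred", "egress", "redirect", "dev", dst],
            check=True,
        )
```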
@@ -46,16 +49,19 @@ There are two types of management connectivity for NOS VMs: _pass-through_ and _host-forwarded_
_Pass-through_ management interfaces allow the use of the assigned management IP within the NOS VM: management traffic is transparently passed through to the VM, and the NOS configuration can accurately reflect the management IP. However, it is no longer possible to send or receive traffic directly in the vrnetlab container (e.g. for installing additional packages within the container), other than to pre-defined exceptions such as the QEMU serial port on TCP port 5000.

NOSes defaulting to _pass-through_ management interfaces are:

* All vJunos routers
Review comment: The recent changes made this no longer true -- everything is host-forwarded by default!

Reply (owner, author): hopefully done right in 8d977ba


With _host-forwarded_ management interfaces, certain ports are forwarded to the NOS VM IP, which is always 10.0.0.15/24. The management gateway in this case is 10.0.0.2/24, and outgoing traffic is NATed to the container management IP. This mode does not allow traffic such as LLDP to pass through the management interface.
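For illustration, the addressing above maps naturally onto QEMU's user-mode networking; the sketch below builds such a `-netdev` argument. The port list, `id`, and NIC model are assumptions for the example, not the exact arguments this fork generates:

```python
# Hedged sketch: build a QEMU user-mode management netdev matching the
# addressing described above (VM at 10.0.0.15/24, gateway 10.0.0.2) with a
# few host ports forwarded to the VM. Port list, id and NIC model are
# illustrative assumptions.
forwarded_ports = {2022: 22, 2830: 830}  # host_port: guest_port (example values)

hostfwd = ",".join(
    f"hostfwd=tcp::{host}-10.0.0.15:{guest}"
    for host, guest in forwarded_ports.items()
)

netdev = "user,id=mgmt,net=10.0.0.0/24,host=10.0.0.2,dhcpstart=10.0.0.15," + hostfwd
qemu_args = ["-netdev", netdev, "-device", "virtio-net-pci,netdev=mgmt"]
print(" ".join(qemu_args))
```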

NOSes defaulting to _host-forwarded_ management interfaces are:

* Every NOS not listed as pass-through management

We are gathering feedback on this and will update these lists as feedback is received. Please contact us in [Discord](https://discord.gg/vAyddtaEV9) or open up an issue here if you have found any issues when trying the pass-through mode.

It is possible to change from the default management interface mode by setting the `CLAB_MGMT_PASSTHROUGH` environment variable to `true` or `false`; however, it is left up to the user to provide a startup configuration compatible with the requested mode.
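A minimal sketch of how such a toggle might be evaluated by a launch script, assuming a hypothetical per-NOS `default_passthrough` flag (illustrative only, not this fork's actual implementation):

```python
# Hedged sketch: evaluate CLAB_MGMT_PASSTHROUGH, falling back to a
# hypothetical per-NOS default when the variable is unset.
import os

def mgmt_passthrough(default_passthrough: bool) -> bool:
    value = os.getenv("CLAB_MGMT_PASSTHROUGH")
    if value is None:
        return default_passthrough           # keep the NOS-specific default
    return value.strip().lower() == "true"   # anything else selects host-forwarded

# e.g. mgmt_passthrough(default_passthrough=True) for a NOS defaulting to pass-through
```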

## Which vrnetlab routers are supported?

Since the changes we made in this fork are VM-specific, we added a few popular routing products:

* Arista vEOS
@@ -75,4 +81,5 @@ Since the changes we made in this fork are VM specific, we added a few popular r
The rest are left untouched and can be contributed back by the community.

## Does the build process change?

No. You build the images exactly as before.