This is a fork of the original plajjan/vrnetlab project and was created specifically to make vrnetlab-based images runnable by containerlab.
The documentation provided in this fork only explains the parts that have been changed from the upstream project. To get a general overview of the vrnetlab project itself, consider reading the docs of the upstream repo.
At containerlab we needed a way to run virtual routers alongside containerized Network Operating Systems.
Vrnetlab provides perfect machinery for packaging the most common routing VMs into containers. What upstream vrnetlab doesn't do, though, is create datapaths between the VMs in a "container-native" way.
Vrnetlab relies on a separate container (vr-xcon) to stitch the sockets exposed by each container, and that doesn't play well with the regular ways of interconnecting container workloads.
This fork adds an additional option, `connection-mode`, to the `launch.py` script of supported VMs. The `connection-mode` option controls how vrnetlab creates datapaths for launched VMs. Its values make it possible to run vrnetlab containers with networking that doesn't require a separate container and is native to tools such as Docker.
Yes, the term is bloated. What it actually means is that this fork makes it possible to add interfaces to a container hosting a qemu VM, and vrnetlab will recognize those interfaces and stitch them with the VM's interfaces.
With this you can, for example, add veth pairs between containers as you normally would, and vrnetlab will make sure these ports get mapped to your routers' ports. In essence, this lets you work with vrnetlab containers like normal containers and get the datapath working in the same "native" way.
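As a sketch of that "normal" workflow (the container names `r1`/`r2` and the interface name `eth1` are hypothetical; root privileges and the docker CLI are assumed), a veth pair between two containers can be wired up with iproute2:

```shell
# Look up each container's network namespace via its PID.
pid_r1=$(docker inspect -f '{{.State.Pid}}' r1)
pid_r2=$(docker inspect -f '{{.State.Pid}}' r2)

# Create a veth pair in the host namespace.
ip link add tmp-r1 type veth peer name tmp-r2

# Move each end into its container's namespace, renaming it to eth1.
ip link set tmp-r1 netns "$pid_r1" name eth1
ip link set tmp-r2 netns "$pid_r2" name eth1

# Bring the new interfaces up inside each container.
nsenter -t "$pid_r1" -n ip link set eth1 up
nsenter -t "$pid_r2" -n ip link set eth1 up
```

Once `eth1` appears inside a vrnetlab container launched in `tc` mode, vrnetlab recognizes it and stitches it to the VM's first data port.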
Important
Although the changes we made here are general-purpose and you can run vrnetlab routers with the docker CLI or any other container runtime, the goal of this work was to couple vrnetlab with containerlab.
With that said, we recommend readers start their journey from this documentation entry, which shows how easy it is to run routers in a containerized setting.
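For orientation, a minimal containerlab topology using a vrnetlab-built image looks roughly like this (the kind, image name, and tag below are placeholders for illustration, not something this fork prescribes):

```yaml
# topo.clab.yml -- hypothetical node kind and image tag
name: vr-lab
topology:
  nodes:
    sr1:
      kind: nokia_sros
      image: vrnetlab/vr-sros:23.10.R1
    sr2:
      kind: nokia_sros
      image: vrnetlab/vr-sros:23.10.R1
  links:
    # containerlab creates a veth pair for this link; vrnetlab maps
    # each eth1 to the first data port of the VM inside the container.
    - endpoints: ["sr1:eth1", "sr2:eth1"]
```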
As mentioned above, the major change this fork brings is the ability to run vrnetlab containers without requiring vr-xcon, using container-native networking instead.
For containerlab the default connection mode is `connection-mode=tc`. In this mode, tc-mirred redirects are used to stitch a container's interfaces (`eth1` and up) with the ports of the qemu VM running inside.
Using tc redirection (tc-mirred) we get a transparent pipe between a container's interfaces and those of the VM running within. We evaluated many connection alternatives, which are described in this post, but tc redirect (tc-mirred) works best of all.
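The tc-mirred technique itself can be sketched with plain tc commands (interface names are illustrative and root is required; vrnetlab sets this up internally, so these are not commands a user runs):

```shell
# Attach classifier-action qdiscs to both interfaces.
tc qdisc add dev eth1 clsact
tc qdisc add dev tap0 clsact

# Redirect everything arriving on eth1 out of tap0 ...
tc filter add dev eth1 ingress matchall action mirred egress redirect dev tap0
# ... and everything arriving on tap0 out of eth1.
tc filter add dev tap0 ingress matchall action mirred egress redirect dev eth1
```

The two redirects together form the transparent bidirectional pipe between the container interface and the VM's tap interface.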
Full list of connection mode values:
Connection Mode | LACP Support | Description |
---|---|---|
`tc-mirred` | ✅ | Uses tc-mirred redirects to stitch eth and tap interfaces together. Cleanest solution for point-to-point links. |
`bridge` | 🌗 | Creates a linux bridge and attaches eth and tap interfaces to it. No additional kernel modules and has native qemu/libvirt support. Does not support passing STP. Requires restricting MAC_PAUSE frames in order to support LACP. |
`ovs-bridge` | ✅ | Same as a regular bridge, but uses OvS (Open vSwitch). |
`macvtap` | ❌ | Requires mounting the entire /dev to a container namespace. Needs file descriptor manipulation due to no native qemu support. |
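For comparison, the `bridge` mode wiring reduces to ordinary bridge commands like these (interface and bridge names are illustrative, root is required):

```shell
# Create a bridge and enslave the container-side and VM-side interfaces.
ip link add br-p1 type bridge
ip link set eth1 master br-p1
ip link set tap0 master br-p1
ip link set br-p1 up

# Let the bridge forward LLDP frames (group address 01-80-C2-00-00-0E).
# 16384 is 0x4000; LACP additionally needs the MAC_PAUSE handling
# noted in the table above.
echo 16384 > /sys/class/net/br-p1/bridge/group_fwd_mask
```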
Since the changes we made in this fork are VM-specific, we added support for a few popular products:
- Arista vEOS
- Cisco XRv9k
- Cisco XRv
- Cisco FTDv
- Juniper vMX
- Juniper vSRX
- Juniper vJunos-switch
- Juniper vJunos-router
- Juniper vJunosEvolved
- Nokia SR OS
- OpenBSD
- FreeBSD
- Ubuntu
The rest are left untouched and can be contributed back by the community.
No, the build process does not change. You build the images exactly as before.