Using CNI and docker - ns.GetNS - unknown FS magic on "/run/snap.docker/netns/{container_id}": 1021994 #24318

Closed
jocado opened this issue Oct 29, 2024 · 4 comments
@jocado

jocado commented Oct 29, 2024

Hi,

Currently it seems that CNI does not work with the docker snap, although in most other regards the docker snap works correctly.

I'm trying to get a bit more info on the failing mechanism to see if there is any way I can at least work around it, or better still contribute a fix somewhere; that could be in the snap, in CNI, or in Nomad.

When using CNI plugins with Nomad, with docker running from a snap, creating the container via Nomad fails with the following error pattern:

failed to setup alloc: pre-run hook "network" failed: failed to configure networking for alloc: failed to configure network: plugin type="bridge1" failed (add): failed to open netns "/run/snap.docker/netns/db65a7df22ec": unknown FS magic on "/run/snap.docker/netns/db65a7df22ec": 1021994
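As an aside (this decoding is my own, not from the original report): if I'm reading the CNI source right, ns.GetNS prints the unexpected filesystem magic with %x, so the "1021994" above is hexadecimal, and 0x1021994 matches TMPFS_MAGIC from linux/magic.h, while a real netns path should report the nsfs or proc filesystem. A minimal sketch:

```python
# Sketch: decode the magic number from the CNI error message.
# CNI prints the filesystem magic with %x, so "1021994" is hexadecimal.
# Constants below are from linux/magic.h.
TMPFS_MAGIC = 0x01021994       # what the path apparently reports to Nomad
NSFS_MAGIC = 0x6E736673        # what a bind-mounted netns file should report
PROC_SUPER_MAGIC = 0x9FA0      # what /proc/<pid>/ns/net reports

reported = int("1021994", 16)
print(reported == TMPFS_MAGIC)                      # True
print(reported in (NSFS_MAGIC, PROC_SUPER_MAGIC))   # False
```

If that's right, then from the caller's mount namespace the netns path looks like a plain tmpfs file: the nsfs bind mount made inside the docker snap's mount namespace isn't visible there, which would explain the error.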
The bridge1 plugin referred to here is identical to the one referenced in the workaround here: https://github.com/hashicorp/nomad/issues/11085
#!/bin/bash
# note: without export, CNI_IFNAME would not reach the exec'd plugin
export CNI_IFNAME=eth1
exec /opt/cni/bin/ipvlan

I've tried this with both CNI plugin bundle versions 1.3.0 and 1.6.0.

Nomad [ which is also running from a snap ] works fine for standard docker operations.

I have tried running the nomad process with the snap confinement disabled [ apparmor and seccomp ], but I get the same error. I'm wondering if there is some kind of mount namespace issue going on, but it's not clear, because the contents of /run/snap.docker/netns/ aren't themselves in a separate mount namespace.

Is anyone able to tell me where I should start looking? What is the likely cause of the unknown FS magic error?

It seems like there is some kind of mount info somewhere which can't be accessed from nomad. I'm just not sure where.
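One way to test that hunch (a sketch with assumed paths, not something I've verified here) is to compare which mount points show up under /run/snap.docker/netns in different processes' /proc/<pid>/mountinfo:

```python
# Sketch: list mount points under the snap's netns dir from a mountinfo file.
# In /proc/<pid>/mountinfo, the 5th field (index 4) is the mount point.
def netns_mounts(lines):
    return [line.split()[4] for line in lines if "snap.docker/netns" in line]

# Usage (run as root; the dockerd pid is an assumption for illustration):
#   with open("/proc/self/mountinfo") as f:
#       print(netns_mounts(f))            # likely empty from the host/nomad
#   with open("/proc/<dockerd pid>/mountinfo") as f:
#       print(netns_mounts(f))            # should list the nsfs bind mounts
```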

I have also raised this here in case anyone there can provide some insight: containernetworking/plugins#1110

Thanks very much!

Cheers,
Just

@gulducat
Member

Hello!

What a fascinating and strange issue. I'll say at the outset that this is not really a supported configuration. Not only do we not maintain a Nomad snap, I imagine there are about as many issues trying to do that as there are trying to run Nomad in docker (which is also not supported). Nomad client agents expect to do all kinds of things at the host level, the binary is distributed as a single executable to make it easy to run that way, and any sort of isolation can get things really mixed up.

In short, we do not recommend running Nomad as a snap. Then, Docker as a snap is its own little world, and I think that may be the source of your issue. Either way, our ability to help here is very limited.

The easiest solution to the problem is to not run these things as snaps.


That said, I looked into it a little bit, and here's some info that might help you narrow down the issue.

First, here's something that works as expected. To show something like what CNI expects to be able to do, I'll use the nsenter Linux command.

On a fresh Ubuntu 24.04 VM, running docker not as a snap (prefix all these commands with sudo, or otherwise be root):

# install docker (not a snap)
$ apt update && apt install -y docker.io
# make sure it worked
$ docker ps -a
# run a container
$ docker run --rm -it -d --name hello hashicorp/http-echo -text='hello' -listen=:8080
# get its namespace path
$ docker inspect hello | grep netns
            "SandboxKey": "/var/run/docker/netns/df705ae297fc",
# show that curl doesn't work from the host machine, because the port isn't exposed
$ curl localhost:8080
curl: (7) Failed to connect to localhost port 8080: Connection refused
# use nsenter to curl from within the network namespace
$ nsenter --net=/var/run/docker/netns/df705ae297fc curl localhost:8080
hello
# stop the container
$ docker stop hello

When I try to do the same thing with a docker snap, I get:

$ snap install docker
$ docker ps -a
$ docker run --rm -it -d --name hello hashicorp/http-echo -text='hello' -listen=:8080
$ docker inspect hello | grep netns
            "SandboxKey": "/run/snap.docker/netns/2267ca0b9f2b",
$ nsenter --net=/run/snap.docker/netns/2267ca0b9f2b curl localhost:8080
nsenter: reassociate to namespace 'ns/net' failed: Invalid argument

So something about the snap is not allowing this, and the CNI plugin probably expects to be able to do the same thing.

It still doesn't work after disabling AppArmor (aa-teardown && aa-status), and from here I think I'm getting too far into the weeds, so I'll stop my investigation here.


Notice that this example doesn't include Nomad, nor any CNI plugins. I'm just trying to interact directly with docker's network namespaces. If you can resolve that issue, then Nomad executing CNI plugins will have a chance of success, but even then they may be similarly restricted by being in a snap.

I'm going to close this issue as out of scope, but I do hope you find a way to get your environment to work! Feel free to leave a reply here if you solve it, in case it may help others in the future.

@github-project-automation github-project-automation bot moved this from Needs Triage to Done in Nomad - Community Issues Triage Oct 31, 2024
@jocado
Author

jocado commented Oct 31, 2024

Hi @gulducat

Just wanted to say that I really appreciate the pointers, along with the quite understandable disclaimer at the start 👍

For some added context: in our particular scenario we don't have the option to run it natively, and we are reasonably comfortable with its use in this way within a well-defined and tested set of scenarios. Actually, use of CNI is not 100% essential, but if we can make that work too it would be beneficial, so we're investigating whether it's possible.

I will try to come back and post any relevant info when we find it; as you say, it may be useful for others.

It was a bit of a long shot creating this issue, I knew that. So thank you for taking the time to help! 💯

@gulducat
Member

A colleague of mine found that snapd runs docker in its own mnt namespace, so e.g. this works:

$ sudo lsns --output-all | grep docker
4026532424 mnt    /proc/956/ns/mnt       3   956     1 dockerd --group docker --exec-root=/run/snap.docker --data-root=/var/snap/docker/common/var-lib-docker --pidfile=/run/snap.docker/docker.pid --config-file=/var/snap/docker/2932/config/daemon.json     0 root                        /run/snapd/ns/docker.mnt
$ sudo nsenter --mount=/run/snapd/ns/docker.mnt
# docker inspect hello | grep netns
            "SandboxKey": "/run/snap.docker/netns/58295ff8d04f",
# nsenter --net=/run/snap.docker/netns/58295ff8d04f bash
# nc -z localhost 8080
# echo $?
0

So I can get to the container's network namespace through docker's mnt ns. Once there, though, the mount namespace isn't the same as the host filesystem: there's no curl (which is why I used nc to check the port), and also no /opt/cni/bin/* to run the ipvlan CNI plugin.
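For what it's worth, the same two-step hop could be sketched in Python 3.12+ with os.setns (run as root; the namespace paths are the ones from this thread and will differ on another machine):

```python
import os

# Sketch of the two-step nsenter above: first join the docker snap's mount
# namespace so the nsfs bind mounts become visible, then join the container's
# network namespace. Requires root and Python 3.12+ (os.setns).
def join_ns(path: str, nstype: int) -> None:
    fd = os.open(path, os.O_RDONLY)
    try:
        os.setns(fd, nstype)
    finally:
        os.close(fd)

# join_ns("/run/snapd/ns/docker.mnt", os.CLONE_NEWNS)
# join_ns("/run/snap.docker/netns/58295ff8d04f", os.CLONE_NEWNET)
```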

I'm not sure what to suggest from here, but I hope this gets you closer!

@jocado
Author

jocado commented Nov 1, 2024

My initial conclusion is that this problem seems quite hard, potentially impossible, to overcome while running Nomad from within a snap.

One sticking point seems to be that it's not possible to enter a network namespace without a file descriptor, and each snap operates in its own mount namespace. Therefore, from within the snap that is running nomad, it's not possible to switch to the mount namespace of the docker snap [ the docker snap's mount namespace file descriptor isn't available ]. As already pointed out, this isn't an apparmor or seccomp issue.

So, unless there's another way to find and switch to the network namespace of the docker snap [ something which doesn't rely on a file descriptor ], I don't really see how it can be done.

BTW, snaps don't run in their own dedicated network namespace, only their own mount namespace.

There was one potential approach that worked outside of the snap confinement: it was possible to enter the mount namespace of the nomad snap using nsenter, find a pid running inside the target container, and use that to enter the network namespace:

# nsenter --mount=/run/snapd/ns/nomad.mnt /bin/bash
# 
# ps aux |grep hello
65532    3320647  0.0  0.0 1230200 1104 pts/0    Ssl+ 13:55   0:00 /http-echo -text=hello -listen=:8080
# 
# nsenter --net=/proc/3320647/ns/net
# 
# ip a s
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
5340: eth0@if5341: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
    link/ether 02:42:ac:11:00:03 brd ff:ff:ff:ff:ff:ff link-netnsid 0
    inet 172.17.0.3/16 brd 172.17.255.255 scope global eth0
       valid_lft forever preferred_lft forever
# 

The obvious drawback of this is that you have to reliably find the correct pid [ which doesn't seem nice or practical for any CNI plugin ]. But that aside, I carried on.
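Instead of grepping ps, the container's init pid can be asked of docker directly via its State.Pid inspect field (a sketch; it assumes the docker CLI is usable from wherever this runs, which inside the snap it may well not be):

```python
import subprocess

def ns_path(pid: str) -> str:
    # the netns path for a given pid, as used with nsenter --net=...
    return f"/proc/{pid}/ns/net"

def container_netns_path(name: str) -> str:
    # Ask dockerd for the container's init pid via `docker inspect`.
    pid = subprocess.run(
        ["docker", "inspect", "-f", "{{.State.Pid}}", name],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    return ns_path(pid)

# container_netns_path("hello") would return something like /proc/<pid>/ns/net
```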

When I tried the same thing from inside the snap confinement, it didn't work:

# snap run --shell nomad.service
# 
# nsenter --net=/proc/3320647/ns/net
nsenter: cannot open /proc/3320647/ns/net: Permission denied

That seemed like it could be a MAC or seccomp issue, but there are no obvious audit messages about it, and disabling the confinement [ reinstalling the snap in devmode ] didn't seem to help.

Anyhow, some small insights there, but nothing conclusive.

If I get time I may come back to this, but for now it seems like it's going to be a bit too much effort for the return.

Thanks again for the pointers above 👍
