
D-in-D with VSCode remote development is broken on 5.3.2 #25153

Closed
ankudinov opened this issue Jan 29, 2025 · 5 comments · Fixed by containers/podman-machine-os#71
Labels
jira, kind/bug, machine, remote

Comments

@ankudinov

Issue Description

We are using D-in-D with a rootful Podman Desktop machine. It worked great with podman 5.3.1, but it breaks on 5.3.2.
Running sudo modprobe ip_tables on the podman machine fixes the problem. I'd expect it is somehow related to a kernel change, but I can't find any clues in the 5.3.2 release notes.

The problem is easy to reproduce. First, start any devcontainer. Normally VSCode would be used for that, but it's easier to spot errors on the CLI.

podman run --rm -it --privileged \
  -v dind-var-lib-docker:/var/lib/docker \
  -w $(pwd) \
  -v $(pwd):$(pwd) \
  ghcr.io/aristanetworks/avd/universal:python3.11-avd-v4.10.2 zsh

Next, init docker inside the container using Microsoft script:

/usr/local/share/docker-init.sh

This will fail with an error, and docker info will complain that docker is not running.
As a workaround:

podman machine ssh
sudo modprobe ip_tables

This will fix the problem, although some iptables errors will still be reported in the logs.
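
To make the workaround survive a machine restart, a minimal sketch (assuming the machine image runs systemd-modules-load at boot; the drop-in file name is illustrative, and the file may not survive machine re-creation):

podman machine ssh
# illustrative file name; any *.conf under /etc/modules-load.d/ is read at boot
echo ip_tables | sudo tee /etc/modules-load.d/ip_tables.conf
sudo systemctl restart systemd-modules-load.service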

Here is the full script for convenience:

#!/bin/sh
#-------------------------------------------------------------------------------------------------------------
# Copyright (c) Microsoft Corporation. All rights reserved.
# Licensed under the MIT License. See https://go.microsoft.com/fwlink/?linkid=2090316 for license information.
#-------------------------------------------------------------------------------------------------------------

set -e

AZURE_DNS_AUTO_DETECTION=true
DOCKER_DEFAULT_ADDRESS_POOL=
dockerd_start="AZURE_DNS_AUTO_DETECTION=${AZURE_DNS_AUTO_DETECTION} DOCKER_DEFAULT_ADDRESS_POOL=${DOCKER_DEFAULT_ADDRESS_POOL} $(cat << 'INNEREOF'
    # explicitly remove dockerd and containerd PID file to ensure that it can start properly if it was stopped uncleanly
    find /run /var/run -iname 'docker*.pid' -delete || :
    find /run /var/run -iname 'container*.pid' -delete || :

    # -- Start: dind wrapper script --
    # Maintained: https://github.com/moby/moby/blob/master/hack/dind

    export container=docker

    if [ -d /sys/kernel/security ] && ! mountpoint -q /sys/kernel/security; then
        mount -t securityfs none /sys/kernel/security || {
            echo >&2 'Could not mount /sys/kernel/security.'
            echo >&2 'AppArmor detection and --privileged mode might break.'
        }
    fi

    # Mount /tmp (conditionally)
    if ! mountpoint -q /tmp; then
        mount -t tmpfs none /tmp
    fi

    set_cgroup_nesting()
    {
        # cgroup v2: enable nesting
        if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
            # move the processes from the root group to the /init group,
            # otherwise writing subtree_control fails with EBUSY.
            # An error during moving non-existent process (i.e., "cat") is ignored.
            mkdir -p /sys/fs/cgroup/init
            xargs -rn1 < /sys/fs/cgroup/cgroup.procs > /sys/fs/cgroup/init/cgroup.procs || :
            # enable controllers
            sed -e 's/ / +/g' -e 's/^/+/' < /sys/fs/cgroup/cgroup.controllers \
                > /sys/fs/cgroup/cgroup.subtree_control
        fi
    }

    # Set cgroup nesting, retrying if necessary
    retry_cgroup_nesting=0

    until [ "${retry_cgroup_nesting}" -eq "5" ];
    do
        set +e
            set_cgroup_nesting

            if [ $? -ne 0 ]; then
                echo "(*) cgroup v2: Failed to enable nesting, retrying..."
            else
                break
            fi

            retry_cgroup_nesting=`expr $retry_cgroup_nesting + 1`
        set -e
    done

    # -- End: dind wrapper script --

    # Handle DNS
    set +e
        cat /etc/resolv.conf | grep -i 'internal.cloudapp.net' > /dev/null 2>&1
        if [ $? -eq 0 ] && [ "${AZURE_DNS_AUTO_DETECTION}" = "true" ]
        then
            echo "Setting dockerd Azure DNS."
            CUSTOMDNS="--dns 168.63.129.16"
        else
            echo "Not setting dockerd DNS manually."
            CUSTOMDNS=""
        fi
    set -e

    if [ -z "$DOCKER_DEFAULT_ADDRESS_POOL" ]
    then
        DEFAULT_ADDRESS_POOL=""
    else
        DEFAULT_ADDRESS_POOL="--default-address-pool $DOCKER_DEFAULT_ADDRESS_POOL"
    fi

    # Start docker/moby engine
    ( dockerd $CUSTOMDNS $DEFAULT_ADDRESS_POOL > /tmp/dockerd.log 2>&1 ) &
INNEREOF
)"

sudo_if() {
    COMMAND="$*"

    if [ "$(id -u)" -ne 0 ]; then
        sudo $COMMAND
    else
        $COMMAND
    fi
}

retry_docker_start_count=0
docker_ok="false"

until [ "${docker_ok}" = "true"  ] || [ "${retry_docker_start_count}" -eq "1" ];
do
    # Start using sudo if not invoked as root
    if [ "$(id -u)" -ne 0 ]; then
        sudo /bin/sh -c "${dockerd_start}"
    else
        eval "${dockerd_start}"
    fi

    retry_count=0
    until [ "${docker_ok}" = "true"  ] || [ "${retry_count}" -eq "5" ];
    do
        sleep 1s
        set +e
            docker info > /dev/null 2>&1 && docker_ok="true"
        set -e

        retry_count=`expr $retry_count + 1`
    done

    if [ "${docker_ok}" != "true" ] && [ "${retry_docker_start_count}" != "4" ]; then
        echo "(*) Failed to start docker, retrying..."
        set +e
            sudo_if pkill dockerd
            sudo_if pkill containerd
        set -e
    fi

    retry_docker_start_count=`expr $retry_docker_start_count + 1`
done

# Execute whatever commands were passed in (if any). This allows us
# to set this script to ENTRYPOINT while still executing the default CMD.
exec "$@"

Steps to reproduce the issue


  1. Start the d-in-d container.
  2. Init docker using the shell script provided with the VSCode devcontainer.
  3. The script will fail, and docker info will fail as well.

Describe the results you received

D-in-D is failing on 5.3.2

Describe the results you expected

Expect D-in-D to work

podman info output

host:
  arch: arm64
  buildahVersion: 1.38.1
  cgroupControllers:
  - cpuset
  - cpu
  - io
  - memory
  - pids
  - rdma
  - misc
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.12-3.fc41.aarch64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.12, commit: '
  cpuUtilization:
    idlePercent: 99.67
    systemPercent: 0.17
    userPercent: 0.15
  cpus: 10
  databaseBackend: sqlite
  distribution:
    distribution: fedora
    variant: coreos
    version: "41"
  eventLogger: journald
  freeLocks: 2046
  hostname: localhost.localdomain
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 6.12.7-200.fc41.aarch64
  linkmode: dynamic
  logDriver: journald
  memFree: 23557054464
  memTotal: 25361227776
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.13.1-1.fc41.aarch64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.13.1
    package: netavark-1.13.1-1.fc41.aarch64
    path: /usr/libexec/podman/netavark
    version: netavark 1.13.1
  ociRuntime:
    name: crun
    package: crun-1.19.1-1.fc41.aarch64
    path: /usr/bin/crun
    version: |-
      crun version 1.19.1
      commit: 3e32a70c93f5aa5fea69b50256cca7fd4aa23c80
      rundir: /run/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /usr/bin/pasta
    package: passt-0^20241211.g09478d5-1.fc41.aarch64
    version: |
      pasta 0^20241211.g09478d5-1.fc41.aarch64-pasta
      Copyright Red Hat
      GNU General Public License, version 2 or later
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>
      This is free software: you are free to change and redistribute it.
      There is NO WARRANTY, to the extent permitted by law.
  remoteSocket:
    exists: true
    path: unix:///run/podman/podman.sock
  rootlessNetworkCmd: pasta
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.3.1-1.fc41.aarch64
    version: |-
      slirp4netns version 1.3.1
      commit: e5e368c4f5db6ae75c2fce786e31eef9da6bf236
      libslirp: 4.8.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 0h 22m 30.00s
  variant: v8
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /usr/share/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.imagestore: /usr/lib/containers/storage
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphRootAllocated: 198757789696
  graphRootUsed: 6988705792
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Supports shifting: "true"
    Supports volatile: "true"
    Using metacopy: "true"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 1
  runRoot: /run/containers/storage
  transientStore: false
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 5.3.2
  Built: 1737504000
  BuiltTime: Wed Jan 22 01:00:00 2025
  GitCommit: ""
  GoVersion: go1.23.4
  Os: linux
  OsArch: linux/arm64
  Version: 5.3.2

Podman in a container

No

Privileged Or Rootless

Privileged

Upstream Latest Release

Yes

Additional environment details


Additional information


@ankudinov ankudinov added the kind/bug Categorizes issue or PR as related to a bug. label Jan 29, 2025
@github-actions github-actions bot added the remote Problem is in podman-remote label Jan 29, 2025
@Luap99 Luap99 added the machine label Jan 29, 2025
@Luap99
Member

Luap99 commented Jan 29, 2025

The new machine VM images are based on fedora coreos. The latest update is based on fedora 41, where we switched podman over to use nftables by default: https://fedoraproject.org/wiki/Changes/NetavarkNftablesDefault.

As such we no longer use the old iptables modules, so they are no longer loaded on the host.
So I think it is a requirement now that you load the modules yourself if you need them.
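
This is easy to confirm from the host; a quick check (assuming the standard legacy module names ip_tables/ip6_tables):

podman machine ssh 'lsmod | grep -E "^(ip|ip6)_tables" || echo "legacy iptables modules not loaded"'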

@ankudinov
Author

@Luap99 Thank you for the explanation. It makes sense, and the trend towards nftables is well known. However, I'm not entirely certain that simply changing the kernel and suggesting users enable missing features is the right way, especially in the context of Podman Desktop, where user expertise varies widely.
This works for me, but we have a number of users who will hit the same issue, and not everyone will be able to troubleshoot it.
I can only guess at the scale of the problem outside of my use case. D-in-D is quite common for various dev use cases, and not every tool is up to date. The VSCode d-in-d script is maintained here and I don't see any related PRs.

Keeping legacy support is certainly not always the best approach, but I'd expect changes like this to be made between major releases, covered well in the release notes, and accompanied by a clear migration plan.
Honestly, one of the reasons we stopped using Docker Desktop was unpredictable kernel changes, where certain flags simply disappeared between releases. And building a custom kernel for Docker Desktop is close to a PhD project for most users.

Is there any clear guide explaining how to use D-in-D on Podman with the new kernel?
If yes, I'm happy to test that and potentially submit a bug or PR for the VSCode team.
If not, I'd say it's a bug to be fixed, and legacy iptables must be enabled by default.

I'm open to any other opinions.

@Luap99
Member

Luap99 commented Jan 29, 2025

Well, first of all, the machine os image is managed separately from podman. We do not have the resources to maintain an entire OS distribution, so we build on top of fedora coreos and just add our own up-to-date podman rpm on top, with some other customizations.

As such it is really hard for us to catch changes like this, and yes, I wrote the change for fedora and I also maintain part of the machine-os image. You cannot expect there to be no breaking changes in the image when it bumps to the next major fedora version. Sure, we can/should work on documenting such changes in the release notes when they happen.

Is there any clear guide explaining how to use D-in-D on Podman with the new kernel?
If yes, I'm happy to test that and potentially submit a bug or PR for the VSCode team.

Is there a guide from us on using podman with vscode to begin with? I am not sure anyone from us actually uses and tests that. If there are volunteers to write such a thing, I am sure they can write it.

You don't list the actual error, so I don't know what exactly the problem is. If it is iptables complaining about not being able to use/load the legacy module, then this is expected, as the container is not allowed to load kernel modules.
IMO the thing they can do is use iptables-nft instead, which uses the nftables kernel modules and should always work AFAICT. Even when those were not loaded beforehand, the kernel loaded them automatically on first use without needing extra privileges; at least that was the case when I tested on older fedora versions in the past.
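
For illustration, on Debian/Ubuntu based images the switch is typically a pair of update-alternatives calls (a sketch; it assumes the image ships the nft variants and manages iptables via update-alternatives, as Debian and Ubuntu do):

# run inside the devcontainer image (e.g. in its Dockerfile), before dockerd starts
update-alternatives --set iptables /usr/sbin/iptables-nft
update-alternatives --set ip6tables /usr/sbin/ip6tables-nft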

If not, I'd say it's a bug to be fixed and legacy iptables must be enabled by default.

We can certainly consider adding the modules-load file for the ip_tables module back into the machine os. podman/netavark can keep using nftables, and since docker runs inside its own container (netns) there should be no firewall conflicts in that case.
The module should not do any harm being loaded, and at least as long as fedora keeps including the legacy iptables modules, this is easy to do. So for now that certainly seems like the easiest fix.

@ankudinov
Author

Understood and agreed. Long term the issue must be fixed on the VSCode side to make sure that new kernels are respected.
I'd propose the following:

  1. As a short-term workaround, load legacy iptables by default if possible. I think it's the right thing to do to avoid a hard transition, but I'd avoid doing that forever: cut it on the next major release and announce it in the release notes. Alternatively, simply document the workaround somewhere; it's easy enough. The only disadvantage is that users have to know what is failing.
  2. Improve the Podman testing pipeline to cover more cases, including D-in-D. I totally understand if this request is parked for a better future when we are all retired and have nothing to do, as it's a lot of work. However, every investment in testing pays off long term. So if there is any slim chance to implement it, this will improve product quality and will be highly appreciated.
  3. I'd appreciate any hints on how to run D-in-D with nftables. If there are no hints, I'll try to solve the puzzle myself.
  4. Another rightful long-term option would be stating build your own image in the docs, with a simple example. That's what I'm currently considering doing to avoid uncontrolled kernel changes.

With all that said, I have to admit that this use case is a bit weird, as we are trying to start docker inside podman in the first place. But it could be quite common nevertheless, due to various dev tools and a growing podman community.

Regarding the errors:
The script is quite silent by default and only produces the following:

$ /usr/local/share/docker-init.sh
Not setting dockerd DNS manually.
(*) Failed to start docker, retrying...
Not setting dockerd DNS manually.
(*) Failed to start docker, retrying...
Not setting dockerd DNS manually.
(*) Failed to start docker, retrying...
Not setting dockerd DNS manually.
(*) Failed to start docker, retrying...
Not setting dockerd DNS manually.

The script relies on docker info to check whether docker started:

$ docker info
Client:
 Version:    27.0.3-1
 Context:    default
 Debug Mode: false
 Plugins:
  buildx: Docker Buildx (Docker Inc.)
    Version:  v0.18.0
    Path:     /usr/libexec/docker/cli-plugins/docker-buildx
  compose: Docker Compose (Docker Inc.)
    Version:  2.30.3-1
    Path:     /usr/libexec/docker/cli-plugins/docker-compose

Server:
ERROR: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
errors pretty printing info
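
Per the redirect in the init script above, dockerd writes its output to /tmp/dockerd.log, so the real error can be read from inside the container, for example:

# the init script redirects daemon output here
cat /tmp/dockerd.log
# or run the daemon in the foreground (with sudo if not root) to watch it live
dockerd --debug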

When unmuted, the script provides some clues:

WARN[2025-01-30T08:59:18.541791642Z] Running modprobe bridge br_netfilter failed with message: , error: exec: "modprobe": executable file not found in $PATH 
INFO[2025-01-30T08:59:18.542839923Z] unable to detect if iptables supports xlock: 'iptables --wait -L -n': `iptables v1.8.7 (legacy): can't initialize iptables table `filter': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.`  error="exit status 3"
INFO[2025-01-30T08:59:18.553539024Z] stopping event stream following graceful shutdown  error="<nil>" module=libcontainerd namespace=moby
INFO[2025-01-30T08:59:18.553813188Z] stopping healthcheck following graceful shutdown  module=libcontainerd
INFO[2025-01-30T08:59:18.553835938Z] stopping event stream following graceful shutdown  error="context canceled" module=libcontainerd namespace=plugins.moby
failed to start daemon: Error initializing network controller: error obtaining controller instance: failed to register "bridge" driver: failed to create NAT chain DOCKER: iptables failed: iptables -t nat -N DOCKER: iptables v1.8.7 (legacy): can't initialize iptables table `nat': Table does not exist (do you need to insmod?)
Perhaps iptables or your kernel needs to be upgraded.
 (exit status 3)

After loading ip_tables, docker starts successfully, although according to the warnings some nice-to-have modules are probably still missing:

WARN[2025-01-30T09:03:06.767988101Z] Running modprobe bridge br_netfilter failed with message: , error: exec: "modprobe": executable file not found in $PATH 
WARN[2025-01-30T09:03:06.796201011Z] ip6tables is enabled, but cannot set up ip6tables chains  error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.7 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
WARN[2025-01-30T09:03:06.797136227Z] Setting the default DROP policy on firewall reload failed, setting default policy to DROP in FORWARD chain failed:  (iptables failed: ip6tables --wait -t filter -P FORWARD DROP: ip6tables v1.8.7 (legacy): can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)
Perhaps ip6tables or your kernel needs to be upgraded.
 (exit status 3)) 
WARN[2025-01-30T09:03:06.801771059Z] Controller.NewNetwork none:                   error="failed to create DOCKER-USER IPV6 chain: iptables failed: ip6tables --wait -t filter -N DOCKER-USER: ip6tables v1.8.7 (legacy): can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
WARN[2025-01-30T09:03:06.805445799Z] Controller.NewNetwork host:                   error="failed to create DOCKER-USER IPV6 chain: iptables failed: ip6tables --wait -t filter -N DOCKER-USER: ip6tables v1.8.7 (legacy): can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
WARN[2025-01-30T09:03:06.825056884Z] Controller.NewNetwork bridge:                 error="failed to create DOCKER-USER IPV6 chain: iptables failed: ip6tables --wait -t filter -N DOCKER-USER: ip6tables v1.8.7 (legacy): can't initialize ip6tables table `filter': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
INFO[2025-01-30T09:03:06.825115426Z] Loading containers: done.                    
WARN[2025-01-30T09:03:06.830565223Z] WARNING: bridge-nf-call-iptables is disabled 
WARN[2025-01-30T09:03:06.830585932Z] WARNING: bridge-nf-call-ip6tables is disabled 
INFO[2025-01-30T09:03:06.830599932Z] Docker daemon                                 commit=662f78c0b1bb5114172427cfcb40491d73159be2 containerd-snapshotter=false storage-driver=overlay2 version=27.0.3-1
INFO[2025-01-30T09:03:06.830701349Z] Daemon has completed initialization          
INFO[2025-01-30T09:03:06.975201094Z] API listen on /var/run/docker.sock

PS: thx for maintaining a great product!

@Luap99
Member

Luap99 commented Jan 30, 2025

Understood and agreed. Long term the issue must be fixed on the VSCode side to make sure that new kernels are respected. I'd propose the following:

1. As a short-term workaround, load legacy iptables by default if possible. I think it's the right thing to do to avoid a hard transition, but I'd avoid doing that forever: cut it on the next major release and announce it in the release notes. Alternatively, simply document the workaround somewhere; it's easy enough. The only disadvantage is that users have to know what is failing.

containers/podman-machine-os#71

2. Improve the Podman testing pipeline to cover more cases, including D-in-D. I totally understand if this request is parked for a better future when we are all retired and have nothing to do, as it's a lot of work. However, every investment in testing pays off long term. So if there is any slim chance to implement it, this will improve product quality and will be highly appreciated.

That is far out of scope for us. Our CI pipeline is already gigantic, and we only test podman itself. Testing whether third-party applications work in our pipeline is simply not maintainable for us.
That said, anyone can set up such automated tests for themselves by building the latest podman from main and reporting issues right away, not after the release.

3. I'd appreciate any hints on how to run D-in-D with nftables. If there are no hints, I'll try to solve the puzzle myself.

In general there are two iptables packages on most distros, iptables-legacy and iptables-nft; the latter has the same CLI interface but works with the new nftables kernel modules AFAIK.

4. Another rightful long-term option would be stating `build your own image` in the docs, with a simple example. That's what I'm currently considering doing to avoid uncontrolled kernel changes.

I am not aware of any actual documentation around this, but our image build process is completely open, so anyone can have a look: https://github.com/containers/podman-machine-os

WARN[2025-01-30T09:03:06.796201011Z] ip6tables is enabled, but cannot set up ip6tables chains  error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.7 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"

ip6_tables is needed as well. Those were the only two modules that we (podman) dropped from being loaded by default.
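
So the complete manual workaround loads both legacy modules on the machine (persistence would need a modules-load drop-in as sketched earlier):

podman machine ssh
sudo modprobe ip_tables
sudo modprobe ip6_tables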
