
Enabling hostIPC does not have any effect #10757

Closed
remod opened this issue Aug 27, 2024 · 6 comments
@remod

remod commented Aug 27, 2024

Environmental Info:
K3s Version:

k3s version v1.30.1+k3s1 (80978b5b)
go version go1.22.2

Node(s) CPU architecture, OS, and Version:

Linux remod-p1 6.8.0-41-generic #41-Ubuntu SMP PREEMPT_DYNAMIC Fri Aug  2 20:41:06 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux

Cluster Configuration:

1 server, 1 agent

Describe the bug:

It seems that enabling hostIPC does not have any effect.

Steps To Reproduce:

  • Install K3s: curl -sfL https://get.k3s.io | sh -
  • Create a pod definition:
echo "apiVersion: v1
kind: Pod
metadata:
  name: test-ipc
spec:
  hostIPC: true
  containers:
    - name: test-ipc
      image: ubuntu:focal
      command: [\"sh\", \"-c\"]
      args: [\"while true; do echo 'foo'; sleep 1; done;\"]" > test-ipc-pod.yaml
  • Apply the pod: kubectl apply -f test-ipc-pod.yaml
  • Check IPC mode: docker inspect $(docker ps -q --filter "name=k8s_test-ipc") | grep IpcMode

Expected behavior:

The IpcMode is set as if you'd run a docker container with --ipc=host:

"IpcMode": "host",

Actual behavior:

"IpcMode": "container:e86cde4006dd4ebb82229db13e77b223e248b4969dc3738d58600971874ff372",

Additional context / logs:

I've noticed that the sibling k8s_POD (pause) container uses "IpcMode": "host", but that seems to be independent of whether I set hostIPC to true or false:

$ docker inspect $(docker ps -q --filter "name=k8s_POD_test-ipc") | grep IpcMode
            "IpcMode": "host",
@brandond
Member

It sounds like you're using cri-dockerd (--docker), although you didn't call this out?

I don't know if this is expected to work, but it is not managed by anything here in K3s. If you're using Docker as your container runtime, I would open an issue at https://github.com/Mirantis/cri-dockerd

@github-project-automation github-project-automation bot moved this from New to Done Issue in K3s Development Aug 27, 2024
@remod
Author

remod commented Sep 2, 2024

All clear, thanks for the pointer @brandond!

Created a new issue here: Mirantis/cri-dockerd#399

@joaogbcravo

@brandond

Still on this, but not 100% related to the title.

Using k3s with containerd, with the Pod spec described above.
When I run ctr c info CONTAINER_ID, I get something like this under the namespaces mapping:

{
    "type": "ipc",
    "path": "/proc/2058171/ns/ipc"
}

The PID in that path belongs to the pause container.

If I start a container using nerdctl (nerdctl run --ipc=private ubuntu:focal sh -c "while true; do echo 'foo'; sleep 1; done;") and then run ctr c info CONTAINER_ID, I get no IPC entry in the namespaces mapping.
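
For reference, one way to sanity-check that the path really points at the pause process (a sketch; the PID comes from the output above, and ctr is pointed at containerd's k8s.io namespace):

$ ps -p 2058171 -o pid,comm               # should show the pause process
$ sudo ctr -n k8s.io c ls | grep pause    # the sandbox (pause) containers on this node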

Now my questions are:

  • is this behaviour a bug or a feature?
  • if it is a feature, can we bypass the pause container and get behaviour similar to what nerdctl does?

@brandond
Member

brandond commented Sep 6, 2024

K3s (this project) doesn't do any of this. This is all containerd, Kubernetes, and CRI API behaviour. If you want things changed, that would need to happen elsewhere.

@brandond
Member

brandond commented Sep 6, 2024

I will say that the pause container IS the pod sandbox. The pause container's entire purpose is to sit there doing nothing except existing as a process, because namespaces are cleaned up by the kernel as soon as the last process in them exits. The pause container (pod sandbox) remains running as long as the pod exists, so that other containers can run within the same namespaces.

This is just how Kubernetes and the CRI work.
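
To make that concrete (a minimal sketch; the pod and container names below are made up for illustration, not from the issue above): because every container in a pod joins the sandbox's IPC namespace, a SysV shared memory segment created in one container is visible from the other, regardless of the hostIPC setting:

kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: ipc-demo
spec:
  containers:
    - name: writer
      image: ubuntu:focal
      command: ["sh", "-c", "ipcmk -M 1024 && sleep infinity"]  # create a shm segment, then idle
    - name: reader
      image: ubuntu:focal
      command: ["sleep", "infinity"]
EOF

kubectl exec ipc-demo -c reader -- ipcs -m   # lists the segment created by "writer"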

@joaogbcravo

Thanks for the reply! Yes, I got the same behaviour with other k8s engines (EKS and minikube). Thanks for confirming our suspicions.
