
crashplan stops working after update #407

Open
gkyriazis opened this issue Dec 31, 2022 · 18 comments
Comments

@gkyriazis

I've been running the crashplan Docker container for well over a year with no problems, inside an LXC container in Proxmox. After a recent Proxmox upgrade, I'm getting the following errors when starting the container:

[init ] container is starting...
[cont-env ] loading container environment variables...
[cont-env ] APP_NAME: loading...
[cont-env ] APP_VERSION: loading...
[cont-env ] DISPLAY: executing...
[cont-env ] DISPLAY: terminated successfully.
[cont-env ] DISPLAY: loading...
[cont-env ] DOCKER_IMAGE_PLATFORM: loading...
[cont-env ] DOCKER_IMAGE_VERSION: loading...
[cont-env ] GTK_THEME: executing...
[cont-env ] GTK_THEME: terminated successfully.
[cont-env ] GTK_THEME: loading...
[cont-env ] HOME: loading...
[cont-env ] QT_STYLE_OVERRIDE: executing...
[cont-env ] QT_STYLE_OVERRIDE: terminated successfully.
[cont-env ] QT_STYLE_OVERRIDE: loading...
[cont-env ] TAKE_CONFIG_OWNERSHIP: loading...
[cont-env ] XDG_CACHE_HOME: loading...
[cont-env ] XDG_CONFIG_HOME: loading...
[cont-env ] XDG_DATA_HOME: loading...
[cont-env ] XDG_RUNTIME_DIR: loading...
[cont-env ] container environment variables initialized.
[cont-secrets] loading container secrets...
[cont-secrets] container secrets loaded.
[cont-init ] executing container initialization scripts...
[cont-init ] 10-certs.sh: executing...
[cont-init ] 10-certs.sh: terminated successfully.
[cont-init ] 10-check-app-niceness.sh: executing...
[cont-init ] 10-check-app-niceness.sh: terminated successfully.
[cont-init ] 10-cjk-font.sh: executing...
[cont-init ] 10-cjk-font.sh: terminated successfully.
[cont-init ] 10-clean-logmonitor-states.sh: executing...
[cont-init ] 10-clean-logmonitor-states.sh: terminated successfully.
[cont-init ] 10-clean-tmp-dir.sh: executing...
[cont-init ] 10-clean-tmp-dir.sh: terminated successfully.
[cont-init ] 10-fontconfig-cache-dir.sh: executing...
[cont-init ] 10-fontconfig-cache-dir.sh: terminated successfully.
[cont-init ] 10-init-users.sh: executing...
[cont-init ] 10-init-users.sh: sed: can't move '/etc/group' to '/etc/group.bak': Invalid argument
[cont-init ] 10-init-users.sh: terminated with error 1.

Container image has not changed:

crashplan:~/bin# docker images
REPOSITORY              TAG      IMAGE ID       CREATED       SIZE
jlesage/crashplan-pro   latest   ffab6950e94f   2 weeks ago   473MB

Just to be on the safe side, I removed the image and re-pulled it, but I'm getting the same error.

Thank you!

George

@gkyriazis gkyriazis changed the title crashplan stops working after container update crashplan stops working after update Dec 31, 2022
@jlesage
Owner

jlesage commented Jan 3, 2023

How do you create the container?

@p3av3y

p3av3y commented Jan 24, 2023

I just started getting this error as well. I am also using Proxmox with a LXC container.

I am creating the container using compose (portainer stack to be exact). I tried from the command line as well, getting the same error.

@jlesage
Owner

jlesage commented Jan 24, 2023

So you see the same error when running docker run --rm jlesage/crashplan-pro?

@p3av3y

p3av3y commented Jan 24, 2023

Yes, running that yields the same error.

Starting to wonder if it is related to a privileged LXC in Proxmox. A bit weird, as it was working just the other week. I believe I updated the container last week and have not been able to get it to work since.

@gkyriazis
Author

In my case, I created the LXC container from the GUI as a privileged container with the following options (which are necessary to run Docker inside the container):

lxc.apparmor.profile: unconfined
lxc.cgroup2.devices.allow: a
lxc.cap.drop:

I was running an alpine LXC container with 32GB of RAM, 4 cores, and 50GB of disk space.

I decided to install crashplan directly in the LXC container (without using the Docker image), and it works fine.

Thank you

@jlesage
Owner

jlesage commented Jan 25, 2023

Ok, sorry, I didn't realize that you were running Docker inside an LXC container.

Looks like something is preventing changes to the content of the container itself. Can you try this test to confirm:

Run the following command:

docker run --rm -ti jlesage/crashplan-pro sh

Then, once inside the container, run:

mv /etc/group /etc/group.bak

@gkyriazis
Author

I'm getting:

mv: can't rename '/etc/group': Invalid argument.

Thanks!

@jlesage
Owner

jlesage commented Jan 27, 2023

So for some reason the LXC container prevents changes inside the Docker container... My guess is that you would have this problem with any Docker container.

On a normal Linux installation, a container's filesystem is actually stored somewhere under /var/lib/docker/. Maybe check for any restrictions on it?
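If the restriction comes from the storage backing /var/lib/docker rather than from the image, it should be reproducible on the host too. A minimal sketch of that check (assuming a standard Docker install; the fallback to a regular temp dir is only there so the script runs even where /var/lib/docker is absent):

```shell
#!/bin/sh
# Sketch: narrow down where the rename restriction comes from.

# 1. Which storage driver backs the containers? (overlay2, vfs, zfs, ...)
command -v docker >/dev/null 2>&1 && docker info --format 'storage driver: {{.Driver}}'

# 2. Does a plain rename work on the filesystem holding /var/lib/docker?
#    If this also fails, the problem is below Docker itself.
tmpdir=$(mktemp -d /var/lib/docker/renametest.XXXXXX 2>/dev/null || mktemp -d)
echo test > "$tmpdir/group"
mv "$tmpdir/group" "$tmpdir/group.bak" && echo "host rename OK"
rm -rf "$tmpdir"
```

If step 2 fails on the host as well, the overlay filesystem used by Docker is not the culprit and the LXC/Proxmox layer is the place to look.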

@p3av3y

p3av3y commented Jan 27, 2023

I would agree there seems to be something with the LXC. Something changed somewhere, as this was working for me with this exact same setup 3 weeks ago. A new image pull is when I noticed the issue, but I do believe I updated Proxmox a bit before that as well, so it's harder to isolate what may have been the catalyst.

I have not had an issue with any other docker image I am running.

@jlesage
Owner

jlesage commented Jan 27, 2023

> I would agree there seems to be something with the LXC. Something changed somewhere, as this was working for me with this exact same setup 3 weeks ago. A new image pull is when I noticed the issue, but I do believe I updated Proxmox a bit before that as well, so it's harder to isolate what may have been the catalyst.

You could try to use a previous version of the Docker image that you know was working.

> I have not had an issue with any other docker image I am running.

I guess it depends on what the container is doing. You could try the test I mentioned in previous comment, but using a different image. For example:

docker run --rm -ti alpine:3.17 sh
mv /etc/group /etc/group.bak
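The same check can be scripted non-interactively across several base images in one pass, which helps show whether the failure is specific to one image or general (the image list is just an illustrative sample; assumes Docker is available and prints a note otherwise):

```shell
#!/bin/sh
# Run the rename test in a few base images and report each result.
command -v docker >/dev/null 2>&1 || { echo "docker not installed"; exit 0; }

for img in alpine:3.17 ubuntu:22.04 debian:bullseye; do
  printf '%s: ' "$img"
  docker run --rm "$img" sh -c \
    'mv /etc/group /etc/group.bak 2>&1 && echo "rename OK"'
done
```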

@p3av3y

p3av3y commented Jan 27, 2023

I did try going back to a previous image version, and that didn't work.

I just pulled down alpine per your recommendation above and am getting an error running the mv. This confirms that it is related to Proxmox and LXC. Guess it is time to go down a rabbit hole with that now.

Thank you for the help.

@jlesage
Owner

jlesage commented Jan 27, 2023

Thanks for the test. Keep us updated if you find a solution; I'm sure other people will have the same issue.

@gkyriazis
Author

I can confirm that I see the same behavior.

The "invalid argument" message appears with an alpine image, though. If, instead, I run an ubuntu:22.04 image and run the same "mv" command, I get a different (and perplexing) error:

mv: cannot move '/etc/group' to a subdirectory of itself, '/etc/group.bak'
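For what it's worth (this is my reading, not confirmed in the thread), the two messages are consistent with the same underlying failure: rename(2) returning EINVAL. Busybox's mv (alpine) prints the raw strerror text "Invalid argument", while GNU coreutils' mv (ubuntu) maps EINVAL to its "subdirectory of itself" wording, since that is the only EINVAL case it normally expects. The GNU wording can be seen on any system via the one legitimate EINVAL case, no Docker needed:

```shell
#!/bin/sh
# GNU mv reports rename(2)'s EINVAL as "cannot move ... to a
# subdirectory of itself". Trigger the legitimate EINVAL case:
tmp=$(mktemp -d)
mkdir -p "$tmp/d/sub"
mv "$tmp/d" "$tmp/d/sub" 2>&1 || true
rm -rf "$tmp"
```

So the ubuntu error is most likely the same EINVAL seen on alpine, just worded differently by a different mv implementation.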

@jlesage
Owner

jlesage commented Jan 28, 2023

Hmm, interesting. Could you provide the output of the following commands (run inside the container):

  • mount
  • ls -la /etc

Also, are you able to move or create other files in the container? I would like to understand if the problem is only related to /etc/group.

@jlesage
Owner

jlesage commented Jan 28, 2023

I tried to reproduce on my side by doing the following steps:

  • Installed Proxmox 7.3 in a VM.
  • Created an LXC container based on the ubuntu 22 template.
  • Installed Docker inside the LXC container.
  • Executed docker run --rm jlesage/crashplan-pro inside the LXC container.

This worked fine for me. Is there anything else I need to do to reproduce the problem?

@p3av3y

p3av3y commented Jan 28, 2023

Unprivileged or privileged LXC? I am using a privileged LXC. I am using a Debian template; I don't know if that matters.

@jlesage
Owner

jlesage commented Jan 28, 2023

> Unprivileged or privileged LXC?

I tried both. Here is the config of the privileged LXC container:

arch: amd64
cores: 4
hostname: ubuntu22ctpriv
memory: 2048
net0: name=eth0,bridge=vmbr0,firewall=1,hwaddr=46:A2:57:92:CE:4C,ip=dhcp,ip6=dhcp,type=veth
ostype: ubuntu
rootfs: local-lvm:vm-102-disk-0,size=8G
swap: 2048
lxc.apparmor.profile: unconfined
lxc.cgroup.devices.allow: a
lxc.cap.drop: 

@jlesage
Owner

jlesage commented Jan 28, 2023

Also, did you try to reproduce in a new LXC container?
