
Big Sur showing list failed: cannot connect to the multipass socket Please ensure multipassd is running and '/var/run/multipass_socket' is accessible #1983

Closed
dmuiX opened this issue Feb 25, 2021 · 57 comments · Fixed by #1989

@dmuiX

dmuiX commented Feb 25, 2021

Describe the bug
I force-closed multipassd with Activity Monitor (I don't know what it's correctly called in English). Now multipass list is showing:

list failed: cannot connect to the multipass socket
Please ensure multipassd is running and '/var/run/multipass_socket' is accessible

So far the only way I've found to get it working again is to completely delete everything. Maybe there is another way?

My overall impression so far:
I have used it for a while now, and my impression is that it's not quite stable on macOS Big Sur. After digging around with it for a while, I am getting more annoyed, as these errors keep it from really providing value for me. Hopefully it gets better in the future, as I think it's quite good software.

@dmuiX dmuiX added the bug label Feb 25, 2021
@Saviq
Collaborator

Saviq commented Feb 25, 2021

@dmuiX if you forcefully shut it down, there's a chance our data storage got corrupted. Can you please share the contents of your multipassd.log? See accessing logs for instructions on where to find it.

@dmuiX
Author

dmuiX commented Feb 25, 2021

multipassd.log

@dmuiX
Author

dmuiX commented Feb 25, 2021

Thanks for responding so fast :)

@dmuiX
Author

dmuiX commented Feb 25, 2021

@dmuiX if you forcefully shut it down, there's a chance our data storage got corrupted. Can you please share the contents of your multipassd.log? See accessing logs for instructions on where to find it.

What is the correct way to shut multipass down if something is wrong or not responding?

@Saviq
Collaborator

Saviq commented Feb 25, 2021

This should do:

$ sudo launchctl unload /Library/LaunchDaemons/com.canonical.multipassd.plist
$ sudo launchctl load /Library/LaunchDaemons/com.canonical.multipassd.plist

But it may well result in the same problem, if multipassd got stuck somehow.

I can see in the log why it never came up again:

[error] [daemon] Caught an unhandled exception: Invalid MAC address

Can you please share the contents of /var/root/Library/Application Support/multipassd/multipassd-vm-instances.json?

Unfortunately I can't see what went wrong in the first place… Everything seems ok right up to the above errors, at which point I imagine you killed it.

@dmuiX
Author

dmuiX commented Feb 25, 2021

{
    "ubuntu": {
        "deleted": false,
        "disk_space": "0",
        "extra_interfaces": [
        ],
        "mac_addr": "",
        "mem_size": "0",
        "metadata": {
        },
        "mounts": [
        ],
        "num_cores": 0,
        "ssh_username": "",
        "state": 2
    },
    "ubuntuVM": {
        "deleted": false,
        "disk_space": "5368709120",
        "extra_interfaces": [
        ],
        "mac_addr": "52:54:00:19:23:bc",
        "mem_size": "1073741824",
        "metadata": {
        },
        "mounts": [
            {
                "gid_mappings": [
                    {
                        "host_gid": 20,
                        "instance_gid": -1
                    }
                ],
                "source_path": "/Volumes/Data/Computerspende/computerspende",
                "target_path": "~/computerspende",
                "uid_mappings": [
                    {
                        "host_uid": 501,
                        "instance_uid": -1
                    }
                ]
            },
            {
                "gid_mappings": [
                    {
                        "host_gid": 20,
                        "instance_gid": -1
                    }
                ],
                "source_path": "/Volumes/Data/Computerspende/computerspende",
                "target_path": "/Volumes/Data/Computerspende/computerspende",
                "uid_mappings": [
                    {
                        "host_uid": 501,
                        "instance_uid": -1
                    }
                ]
            }
        ],
        "num_cores": 1,
        "ssh_username": "ubuntu",
        "state": 4
    }
}
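For reference, the daemon's "Invalid MAC address" exception matches the empty `mac_addr` in the leftover `ubuntu` record above. A small detection sketch (hypothetical, not Multipass code) that flags such records:

```python
import json

def broken_records(instances: dict) -> list:
    """Return names of instance records that would trip the daemon's
    MAC-address validation (empty or missing mac_addr)."""
    return [name for name, rec in instances.items() if not rec.get("mac_addr")]

# The relevant part of the JSON above, abbreviated:
data = json.loads("""
{
    "ubuntu":   {"deleted": false, "mac_addr": "", "state": 2},
    "ubuntuVM": {"deleted": false, "mac_addr": "52:54:00:19:23:bc", "state": 4}
}
""")

print(broken_records(data))  # the leftover record with no MAC
```

Records reported here are the ones to remove before reloading the daemon.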

@dmuiX
Author

dmuiX commented Feb 25, 2021

Actually, I deleted ubuntu and ubuntuTest with multipass delete ubuntu and multipass delete ubuntuTest. It seems that's not showing here.

@dmuiX
Author

dmuiX commented Feb 25, 2021

Every time I have tried multipass purge so far, it ended up taking a very long time. I then stopped it with Ctrl+C, or last time with Activity Monitor, and when I next ran multipass list it showed the error above.

@dmuiX
Author

dmuiX commented Feb 25, 2021

I remember a detail now:
The two VMs I wanted to delete, ubuntu and ubuntuTest, did not start properly in the first place. They just sat at Starting the whole time, not showing anything.
I am trying to create VMs with a cloud-init file. Maybe it has something to do with these files?

@dmuiX
Author

dmuiX commented Feb 25, 2021

How can I get more output from these commands? If I try -vvvv, nothing changes.

@dmuiX
Author

dmuiX commented Feb 25, 2021

This should do:

$ sudo launchctl unload /Library/LaunchDaemons/com.canonical.multipassd.plist
$ sudo launchctl load /Library/LaunchDaemons/com.canonical.multipassd.plist

But it may well result in the same problem, if multipassd got stuck somehow.

[…]

About the restart: yeah, it ends in the same error again.

@Saviq
Collaborator

Saviq commented Feb 25, 2021

OK, to recover, unload, replace the contents of the json file with the below, and load again.

{
    "ubuntuVM": {
        "deleted": false,
        "disk_space": "5368709120",
        "extra_interfaces": [
        ],
        "mac_addr": "52:54:00:19:23:bc",
        "mem_size": "1073741824",
        "metadata": {
        },
        "mounts": [
            {
                "gid_mappings": [
                    {
                        "host_gid": 20,
                        "instance_gid": -1
                    }
                ],
                "source_path": "/Volumes/Data/Computerspende/computerspende",
                "target_path": "~/computerspende",
                "uid_mappings": [
                    {
                        "host_uid": 501,
                        "instance_uid": -1
                    }
                ]
            },
            {
                "gid_mappings": [
                    {
                        "host_gid": 20,
                        "instance_gid": -1
                    }
                ],
                "source_path": "/Volumes/Data/Computerspende/computerspende",
                "target_path": "/Volumes/Data/Computerspende/computerspende",
                "uid_mappings": [
                    {
                        "host_uid": 501,
                        "instance_uid": -1
                    }
                ]
            }
        ],
        "num_cores": 1,
        "ssh_username": "ubuntu",
        "state": 4
    }
}

That should bring ubuntuVM back.
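If hand-editing the file feels risky, the same repair can be sketched in a few lines of Python. This is a hypothetical helper, not part of Multipass, and it should only ever run while the daemon is unloaded; it demonstrates the pruning on an in-memory copy of the data:

```python
import json

def prune_empty_mac(instances: dict) -> dict:
    """Keep only records whose mac_addr would pass the daemon's validation."""
    return {name: rec for name, rec in instances.items() if rec.get("mac_addr")}

# Demo on an in-memory copy. Against the real file you would read
# /var/root/Library/Application Support/multipassd/multipassd-vm-instances.json
# (with multipassd unloaded), prune, and write it back.
instances = {
    "ubuntu":   {"deleted": False, "mac_addr": "", "state": 2},
    "ubuntuVM": {"deleted": False, "mac_addr": "52:54:00:19:23:bc", "state": 4},
}
cleaned = prune_empty_mac(instances)
print(json.dumps(cleaned, indent=4))  # only ubuntuVM survives
```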

@Saviq
Collaborator

Saviq commented Feb 25, 2021

I am trying to create vms with a cloud-init file. Maybe it has something to do with these files?

Yes, if it failed to boot properly (or just failed to get an IP), that's likely the result of that. If you can show me what the cloud-init was, I can maybe suggest what's wrong with it.

And yes, we're not exactly great at dealing with unresponsive VMs… there are a couple of issues around that, and we're planning a --force option for stop and delete that would not try to shut the VM down cleanly, but just kill it.

@dmuiX
Author

dmuiX commented Feb 25, 2021

I am trying to create VMs with a cloud-init file. Maybe it has something to do with these files?

Yes, if it failed to boot properly (or just failed to get an IP), that's likely the result of that. If you can show me what the cloud-init was, I can maybe suggest what's wrong with it.

And yes, we're not exactly great at dealing with unresponsive VMs… there are a couple of issues around that, and we're planning a --force option for stop and delete that would not try to shut the VM down cleanly, but just kill it.

So at the moment it's not really possible to shut down an unresponsive VM properly? Or do you know a way to do it?
Since I am experimenting with these cloud-init files, I don't want to reinstall Multipass whenever something goes wrong.
Oh, I just saw: if I first do stop and then delete, everything should be fine?

@dmuiX
Author

dmuiX commented Feb 25, 2021

OK, to recover, unload, replace the contents of the json file with the below, and load again.

[…]

That should bring ubuntuVM back.

And multipass list is working again?

@dmuiX
Author

dmuiX commented Feb 25, 2021

These are the two YAML files.

#cloud-config
groups:
  - docker
  
users:
  - default
  - name: ubuntu
    groups: docker
    sudo:  ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa <somekey>

package_upgrade: true

power_state:
  mode: reboot
  message: Restarting after installing docker & docker-compose

The only difference from the one before is the Docker install script.

#cloud-config
groups:
  - docker

users:
  - default
  - name: ubuntu
    groups: docker
    sudo:  ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa <somekey>

package_upgrade: true

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common

runcmd:
  # install docker following the guide: https://docs.docker.com/install/linux/docker-ce/ubuntu/
  - curl -sSL https://get.docker.com/ | sh
  # install docker-compose following the guide: https://docs.docker.com/compose/install/
  - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - sudo chmod +x /usr/local/bin/docker-compose

power_state:
  mode: reboot
  message: Restarting after installing docker & docker-compose

@dmuiX
Author

dmuiX commented Feb 25, 2021

Thanks a lot for helping out :)

@dmuiX
Author

dmuiX commented Feb 25, 2021

I don't know if you need it, but here is the output of

multipass info --all
Name:           ubuntuVM
State:          Running
IPv4:           192.168.236.11
                172.17.0.1
Release:        Ubuntu 20.04.2 LTS
Image hash:     c5f2f08c6a1a (Ubuntu 20.04 LTS)
Load:           1.21 0.29 0.10
Disk usage:     2.7G out of 4.7G
Memory usage:   195.4M out of 981.4M
Mounts:         /Volumes/Data/Computerspende/computerspende => /Volumes/Data/Computerspende/computerspende
                    UID map: 501:default
                    GID map: 20:default
                /Volumes/Data/Computerspende/computerspende => ~/computerspende
                    UID map: 501:default
                    GID map: 20:default

@Saviq
Collaborator

Saviq commented Feb 25, 2021

So this is at least part of the problem:

users:
  - default
  - name: ubuntu
    ssh_authorized_keys:
      - ssh-rsa <somekey>
...

We rely on the default user to manage the VM, and by doing this you're replacing Multipass's SSH key, so we lose connectivity to the instance. We plan to drop that requirement, but haven't gotten around to it just yet.

It'd be safest if you used a custom user instead.
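A sketch of what that could look like, keeping the `default` entry intact so Multipass's own SSH key survives. The `deploy` user name and the key are placeholders, not anything Multipass requires:

```yaml
#cloud-config
users:
  - default          # keep Multipass's management user and its SSH key
  - name: deploy     # placeholder: your own, separate user
    groups: docker
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa <yourkey>
```

Because the custom entry no longer reuses the default user's name, cloud-init creates it alongside rather than instead of the user Multipass manages.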

@dmuiX
Author

dmuiX commented Feb 25, 2021

That's weird.
I created the ubuntuVM above with a pretty similar cloud-init file, and it was running just fine.

#cloud-config
groups:
  - docker

users:
  - default
  - name: ubuntu
    groups: docker
    sudo:  ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC1+Ji93zXpPqYjFUVWmNWooqUBwAbc0zUefCZVzP012RXgDQAR2LZtr6t1Yx35/jr4E9oBFvCFHvMBmUpmQEIehLb7RR4ksSdmEEQB3QHqlS0fTmEdnrjg3pgVOuXYKVySGoyiUPaVo5wV/lcyLD2xZQaXWKtu25bn+EaE9Eo58TnvEHiVWyf0avgUXx6xoXpuy0n3VFZ3QXSq1ll7wmEzfxOEIBqDJVkfVGJA9bUdYY05kEq5IZhLMxyyKFgCdYwhDau7HCkBwkhuJoM2RZYNaCoiIU2+hmirYtkvXdz8agiXhzqThYYPJUE2+Ash2sUMoQtWxS7LxkW/7EXKtW/3

package_upgrade: true

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common

runcmd:
  # install docker following the guide: https://docs.docker.com/install/linux/docker-ce/ubuntu/
  - curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
  - sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
  - sudo apt-get -y update
  - sudo apt-get -y install docker-ce docker-ce-cli containerd.io
  - sudo systemctl enable docker
  # install docker-compose following the guide: https://docs.docker.com/compose/install/
  - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - sudo chmod +x /usr/local/bin/docker-compose

power_state:
  mode: reboot
  message: Restarting after installing docker & docker-compose

But I am running another launch with this config again at the moment. Maybe it was just luck :D.
Nope:

multipass launch -vvvvv -n ubuntu --cloud-init cloud-init_docker.yaml
Launched: ubuntu

Finished and working. I don't understand what's going on here…

@dmuiX
Author

dmuiX commented Feb 25, 2021

Maybe it has something to do with starting another instance of the same image while the other instance is still running?
I am now trying the first file from above and will report what happens.

@Saviq
Collaborator

Saviq commented Feb 25, 2021

Oh I just saw: Can I first do stop and then delete everything should be fine?

Killing the hyperkit process is safest. That would make Multipass realize it's gone.

@Saviq
Collaborator

Saviq commented Feb 25, 2021

I created the ubuntuVM above with a pretty similar cloud-init file, and it was running just fine.

Oh indeed, it concatenates the SSH keys :]

@Saviq
Collaborator

Saviq commented Feb 25, 2021

FWIW I just launched with your "full" cloud init, and it launched fine, so it must be some bad luck on those that failed to start…

@dmuiX
Author

dmuiX commented Feb 25, 2021

I created the ubuntuVM above with a pretty similar cloud-init file, and it was running just fine.

Oh indeed, it concatenates the SSH keys :]

So this is the correct way to add SSH keys?

@Saviq
Collaborator

Saviq commented Feb 25, 2021

So this is the correct way to add SSH keys?

Yes, I can't see anything wrong with your cloud-init after all.

Well, except for this, but that won't prevent Multipass from working :)

The following packages have unmet dependencies:
 containerd.io : Depends: libc6 (>= 2.32) but 2.31-0ubuntu9.2 is to be installed
 docker-ce : Depends: libc6 (>= 2.32) but 2.31-0ubuntu9.2 is to be installed
             Recommends: docker-ce-rootless-extras but it is not going to be installed
             Recommends: pigz but it is not going to be installed
 docker-ce-cli : Depends: libc6 (>= 2.32) but 2.31-0ubuntu9.2 is to be installed
E: Unable to correct problems, you have held broken packages.

@dmuiX
Author

dmuiX commented Feb 25, 2021

FWIW I just launched with your "full" cloud init, and it launched fine, so it must be some bad luck on those that failed to start…

I am not that lucky now.
I am running two VMs now, one with the first file and one with the second. Both VMs started, but they are both stuck at Starting…

@dmuiX
Author

dmuiX commented Feb 25, 2021

So this is the correct way to add SSH keys?

Yes, I can't see anything wrong with your cloud-init after all.

Well, except for this, but that won't prevent Multipass from working :)

The following packages have unmet dependencies:
 containerd.io : Depends: libc6 (>= 2.32) but 2.31-0ubuntu9.2 is to be installed
 docker-ce : Depends: libc6 (>= 2.32) but 2.31-0ubuntu9.2 is to be installed
             Recommends: docker-ce-rootless-extras but it is not going to be installed
             Recommends: pigz but it is not going to be installed
 docker-ce-cli : Depends: libc6 (>= 2.32) but 2.31-0ubuntu9.2 is to be installed
E: Unable to correct problems, you have held broken packages.

Is this output from the VM? How did you get this?

@Saviq
Collaborator

Saviq commented Feb 25, 2021

Is this output from the VM? How did you get this?

This is from cloud-init logs at /var/log/cloud-init-output.log inside the VM.

@Saviq
Collaborator

Saviq commented Feb 25, 2021

I just ran this and it worked fine:

$ multipass launch --cloud-init - <<EOF                             
#cloud-config
groups:
  - docker

users:
  - default
  - name: ubuntu
    groups: docker
    sudo:  ALL=(ALL) NOPASSWD:ALL
    ssh_authorized_keys:
      - ssh-rsa $( cat ~/.ssh/id_rsa.pub )

package_upgrade: true

packages:
  - apt-transport-https
  - ca-certificates
  - curl
  - gnupg-agent
  - software-properties-common

runcmd:
  # install docker following the guide: https://docs.docker.com/install/linux/docker-ce/ubuntu/
  - curl -sSL https://get.docker.com/ | sh
  # install docker-compose following the guide: https://docs.docker.com/compose/install/
  - sudo curl -L "https://github.com/docker/compose/releases/download/1.25.4/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
  - sudo chmod +x /usr/local/bin/docker-compose

power_state:
  mode: reboot
  message: Restarting after installing docker & docker-compose
EOF

@dmuiX
Author

dmuiX commented Feb 25, 2021

/var/root/Library/Application Support/multipassd/vault/instances/ubuntu/pty: Permission denied

If I try that, it says permission denied, even though I am root via sudo su? screen says the same?

@dmuiX
Author

dmuiX commented Feb 25, 2021

I have three VMs running now, each with one of the three different cloud-init files. None of them is starting correctly… it looks like none of them is getting an IP?

@Saviq
Collaborator

Saviq commented Feb 25, 2021

If I try that, it says permission denied, even though I am root via sudo su? screen says the same?

Because it's trying to execute the file rather than connect to it… I could've sworn that worked at some point, but can't get it to now… In any case, the logs in /Library/Logs/Multipass have the same contents, and you wouldn't be able to log in on the console anyway. It's more useful to read through the logs to see if they're getting IPs.

looks like none of them is getting an IP?

What about if you don't provide a --name? There's a chance the DHCP server (bootpd) got confused with those instances using the same name.

You could try clearing /var/db/dhcpd_leases, too.

I'm going EOD now, will pick it up in the morning if you don't resolve it yourself by then.

One last thing, I occasionally see instances as Starting here, too, but multipass shell still works with them, and then the correct state is shown.

@Saviq
Collaborator

Saviq commented Feb 25, 2021

I could've sworn that worked at some point, but can't get it to, now…

Ah, because suddenly they're not symlinks, but rather just contain the name of the TTY device:

$ sudo cat /var/root/Library/Application\ Support/multipassd/vault/instances/dignified-jackdaw/pty
/dev/ttys003
$ sudo screen /dev/ttys003
# got the console now

@dmuiX
Author

dmuiX commented Feb 25, 2021

Okay, thanks a lot so far :). I will try again and report what happens.

@dmuiX
Author

dmuiX commented Feb 26, 2021

[…]

One last thing, I occasionally see instances as Starting here, too, but multipass shell still works with them, and then the correct state is shown.

About that you are right: I can use the shell although it's still Starting. Once I am in the shell, it shows the IP address.

@dmuiX
Author

dmuiX commented Feb 26, 2021

multipass shell computerspende
start failed: The following errors occurred:
computerspende: timed out waiting for response

This error occurred once; I hadn't checked the log files at that moment.
I tried another time and it worked flawlessly.
Then another time it was not working again.
I have a theory about that: VMware Fusion was running a VM in the background, so it could have been that.
Yes, that was it; now everything is working. Thanks again for your help :).
I will report if there's another error.

@ChangheeOh

ChangheeOh commented Mar 2, 2021

I just installed Multipass via Homebrew as below, and an error is happening now.
I cannot do anything, because all the commands and the GUI fail with the same error.
I have read #1935, which has been closed. Does that mean this error is a new one with the same symptom?

$ sw_vers
ProductName:	macOS
ProductVersion:	11.2.2
BuildVersion:	20D80

$ multipass --version
multipass  1.6.2+Mac

$ multipass info --all
info failed: cannot connect to the multipass socket
Please ensure multipassd is running and '/var/run/multipass_socket' is accessible

$ sudo launchctl list | grep multi
PID	Status	Label
-	1	        com.canonical.multipassd

$ ls -la /var/run/multi*
zsh: no matches found: /var/run/multi*

$ tail /Library/Logs/Multipass/multipassd.log
[error] [daemon] Caught an unhandled exception: Invalid MAC address
[warning] [Qt] QMutex: destroying locked mutex
[error] [daemon] Caught an unhandled exception: Invalid MAC address
[warning] [Qt] QMutex: destroying locked mutex
[error] [daemon] Caught an unhandled exception: Invalid MAC address
[warning] [Qt] QMutex: destroying locked mutex
[error] [daemon] Caught an unhandled exception: Invalid MAC address
[warning] [Qt] QMutex: destroying locked mutex
[error] [daemon] Caught an unhandled exception: Invalid MAC address
[warning] [Qt] QMutex: destroying locked mutex

$ ifconfig -a
lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384
	options=1203<RXCSUM,TXCSUM,TXSTATUS,SW_TIMESTAMP>
	inet 127.0.0.1 netmask 0xff000000
	inet6 ::1 prefixlen 128
	inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1
	nd6 options=201<PERFORMNUD,DAD>
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
anpi0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 1e:00:8a:3e:c1:58
	inet6 fe80::1c00:8aff:fe3e:c158%anpi0 prefixlen 64 scopeid 0x4
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
anpi1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 1e:00:8a:3e:c1:59
	inet6 fe80::1c00:8aff:fe3e:c159%anpi1 prefixlen 64 scopeid 0x5
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
ap1: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 3a:3e:ef:e1:ba:83
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: inactive
en3: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 1e:00:8a:3e:c1:38
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
en4: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 1e:00:8a:3e:c1:39
	nd6 options=201<PERFORMNUD,DAD>
	media: none
	status: inactive
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 18:3e:ef:e1:ba:83
	inet6 fe80::10a6:8:4bea:c6f1%en0 prefixlen 64 secured scopeid 0x9
	inet 192.168.0.30 netmask 0xffffff00 broadcast 192.168.0.255
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
en1: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=460<TSO4,TSO6,CHANNEL_IO>
	ether 36:10:fc:1d:fe:00
	media: autoselect <full-duplex>
	status: inactive
en2: flags=8963<UP,BROADCAST,SMART,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=460<TSO4,TSO6,CHANNEL_IO>
	ether 36:10:fc:1d:fe:04
	media: autoselect <full-duplex>
	status: inactive
bridge0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=63<RXCSUM,TXCSUM,TSO4,TSO6>
	ether 36:10:fc:1d:fe:00
	Configuration:
		id 0:0:0:0:0:0 priority 0 hellotime 0 fwddelay 0
		maxage 0 holdcnt 0 proto stp maxaddr 100 timeout 1200
		root id 0:0:0:0:0:0 priority 0 ifcost 0 port 0
		ipfilter disabled flags 0x0
	member: en1 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 10 priority 0 path cost 0
	member: en2 flags=3<LEARNING,DISCOVER>
	        ifmaxaddr 0 port 11 priority 0 path cost 0
	nd6 options=201<PERFORMNUD,DAD>
	media: <unknown type>
	status: inactive
awdl0: flags=8943<UP,BROADCAST,RUNNING,PROMISC,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 9e:c7:fb:f6:be:bc
	inet6 fe80::9cc7:fbff:fef6:bebc%awdl0 prefixlen 64 scopeid 0xd
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
llw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
	options=400<CHANNEL_IO>
	ether 9e:c7:fb:f6:be:bc
	inet6 fe80::9cc7:fbff:fef6:bebc%llw0 prefixlen 64 scopeid 0xe
	nd6 options=201<PERFORMNUD,DAD>
	media: autoselect
	status: active
utun0: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::498a:fcd8:be29:7bb%utun0 prefixlen 64 scopeid 0xf
	nd6 options=201<PERFORMNUD,DAD>
utun1: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 2000
	inet6 fe80::1f99:d97c:8e01:8765%utun1 prefixlen 64 scopeid 0x10
	nd6 options=201<PERFORMNUD,DAD>
utun2: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::541f:d98e:2c46:8239%utun2 prefixlen 64 scopeid 0x11
	nd6 options=201<PERFORMNUD,DAD>
utun3: flags=8051<UP,POINTOPOINT,RUNNING,MULTICAST> mtu 1380
	inet6 fe80::eb00:27c6:478a:9238%utun3 prefixlen 64 scopeid 0x12
	nd6 options=201<PERFORMNUD,DAD>

@Saviq
Collaborator

Saviq commented Mar 2, 2021

Hi @ChangheeOh, can you please show your /var/root/Library/Application Support/multipassd/multipassd-vm-instances.json?

@ChangheeOh

ChangheeOh commented Mar 2, 2021

Hi @ChangheeOh, can you please show your /var/root/Library/Application Support/multipassd/multipassd-vm-instances.json?

@Saviq,

Here is the json file you want.

$ cat /var/root/Library/Application\ Support/multipassd/multipassd-vm-instances.json
{
    "valuable-goblin": {
        "deleted": false,
        "disk_space": "0",
        "extra_interfaces": [
        ],
        "mac_addr": "",
        "mem_size": "0",
        "metadata": {
        },
        "mounts": [
        ],
        "num_cores": 0,
        "ssh_username": "",
        "state": 2
    }
}

$ cat /var/root/Library/Application\ Support/multipassd/vault/multipassd-instance-image-records.json
{
}

$ ls -la /var/root/Library/Application\ Support/multipassd/vault/instances/
total 0
drwxr-xr-x  2 root  wheel   64  2 13 09:50 .
drwxr-xr-x  4 root  wheel  128  2 13 09:49 ..

@Saviq
Collaborator

Saviq commented Mar 2, 2021

@ChangheeOh and you're on Multipass 1.6.2? multipass --version please? You can remove those files and it should work again.

@ChangheeOh

ChangheeOh commented Mar 2, 2021

@ChangheeOh and you're on Multipass 1.6.2? multipass --version please? You can remove those files and it should work again.

@Saviq, this is the version of multipass I have installed. Do you mean that I can remove the above multipassd-vm-instances.json and retry?

$ multipass --version
multipass  1.6.2+Mac

@Saviq
Collaborator

Saviq commented Mar 2, 2021

@ChangheeOh yes

@Saviq Saviq linked a pull request Mar 2, 2021 that will close this issue
@ChangheeOh

ChangheeOh commented Mar 2, 2021

@ChangheeOh yes

@Saviq,

I removed multipassd-vm-instances.json and restarted multipassd via launchctl start multipassd, and now a different error is occurring.
Even though the initial launch failed, multipassd is still running.
I assume it's trying to start an amd64 Ubuntu VM rather than arm64.
Could you confirm, and advise me how to change the ISO image?

# launch failed: The following errors occurred:                                   
Instance stopped while starting
Saving session...completed.                                                     

[Process completed]

# tail -f /Library/Logs/Multipass/multipassd.log
[2021-03-03T00:18:20.720] [debug] [primary] process working dir ''
[2021-03-03T00:18:20.720] [info] [primary] process program '/Library/Application Support/com.canonical.multipass/bin/hyperkit'
[2021-03-03T00:18:20.720] [info] [primary] process arguments '-c, 1, -m, 1024M, -u, -A, -H, -U, 386bba5a-5dc4-3ac2-95c9-cf0b9a29b352, -s, 0:0,hostbridge, -s, 2:0,virtio-net, -s, 5,virtio-rnd, -s, 31,lpc, -l, com1,autopty=/var/root/Library/Application Support/multipassd/vault/instances/primary/pty,log=/Library/Logs/Multipass/primary-hyperkit.log, -s, 1:0,virtio-blk,file:///var/root/Library/Application Support/multipassd/vault/instances/primary/ubuntu-20.04-server-cloudimg-amd64.img?sync=os&buffered=1,format=qcow,qcow-config=discard=true;compact_after_unmaps=262144;keep_erased=262144;runtime_asserts=false, -s, 1:1,ahci-cd,/var/root/Library/Application Support/multipassd/vault/instances/primary/cloud-init-config.iso, -f, kexec,/var/root/Library/Application Support/multipassd/vault/instances/primary/ubuntu-20.04-server-cloudimg-amd64-vmlinuz-generic,/var/root/Library/Application Support/multipassd/vault/instances/primary/ubuntu-20.04-server-cloudimg-amd64-initrd-generic,earlyprintk=serial console=ttyS0 root=/dev/vda1 rw panic=1 no_timer_check'
[2021-03-03T00:18:20.720] [info] [primary] process state changed to Starting
[2021-03-03T00:18:20.739] [info] [primary] process state changed to Running
[2021-03-03T00:18:21.300] [error] [primary] Using fd 5 for I/O notifications
[2021-03-03T00:18:21.338] [error] [primary] hv_vm_create unknown error -85377023
[2021-03-03T00:18:21.347] [error] [primary] process error occurred Crashed
[2021-03-03T00:18:21.347] [info] [primary] process state changed to NotRunning
[2021-03-03T00:18:21.347] [info] [primary] process finished with exit code 6

# launchctl list | grep multipassd
10677	1	com.canonical.multipassd

# ps -ef | grep multipassd
    0 10677     1   0 12:14AM ??         0:17.80 /Library/Application Support/com.canonical.multipass/bin/multipassd --verbosity debug
    0 10818 10645   0 12:30AM ttys003    0:00.00 grep multipassd

# cat /var/root/Library/Application\ Support/multipassd/vault/multipassd-instance-image-records.json
{
    "primary": {
        "image": {
            "aliases": [
            ],
            "current_release": "",
            "id": "c5f2f08c6a1adee1f2f96d84856bf0162d33ea182dae0e8ed45768a86182d110",
            "initrd_path": "/var/root/Library/Application Support/multipassd/vault/instances/primary/ubuntu-20.04-server-cloudimg-amd64-initrd-generic",
            "kernel_path": "/var/root/Library/Application Support/multipassd/vault/instances/primary/ubuntu-20.04-server-cloudimg-amd64-vmlinuz-generic",
            "original_release": "20.04 LTS",
            "path": "/var/root/Library/Application Support/multipassd/vault/instances/primary/ubuntu-20.04-server-cloudimg-amd64.img",
            "release_date": "20210223"
        },
        "last_accessed": 1614698300636006,
        "query": {
            "persistent": false,
            "query_type": 0,
            "release": "default",
            "remote_name": ""
        }
    }
}

# ls -la /var/root/Library/Application\ Support/multipassd/vault/instances/primary/
total 2818536
drwxr-xr-x  6 root  wheel         192  3  3 00:18 .
drwxr-xr-x  3 root  wheel          96  3  3 00:18 ..
-rw-r--r--  1 root  wheel       53248  3  3 00:18 cloud-init-config.iso
-rw-r--r--  1 root  wheel    26971105  3  3 00:18 ubuntu-20.04-server-cloudimg-amd64-initrd-generic
-rw-r--r--  1 root  wheel    11690752  3  3 00:17 ubuntu-20.04-server-cloudimg-amd64-vmlinuz-generic
-rw-r--r--  1 root  wheel  1387790416  3  3 00:18 ubuntu-20.04-server-cloudimg-amd64.img

@Saviq
Collaborator

Saviq commented Mar 2, 2021

@ChangheeOh oh you're on Apple M1? We don't support it yet, sorry. It's on our list, but it will be some months still. Subscribe yourself to #1857 for news.

@ChangheeOh

@ChangheeOh oh you're on Apple M1? We don't support it yet, sorry. It's on our list, but it will be some months still. Subscribe yourself to #1857 for news.

@Saviq
Thank you for the prompt answer. I will wait for the next release.

@bors bors bot closed this as completed in #1989 Mar 2, 2021
@Saviq
Collaborator

Saviq commented Mar 3, 2021

To anyone else with this issue, the workaround is to clear any instances that look like this from /var/root/Library/Application\ Support/multipassd/multipassd-vm-instances.json:

{
    "deleted": false,
    "disk_space": "0",
    "extra_interfaces": [
    ],
    "mac_addr": "",
    "mem_size": "0",
    "metadata": {
    },
    "mounts": [
    ],
    "num_cores": 0,
    "ssh_username": "",
    "state": 2
}

We'll follow up with a bugfix release.
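For anyone scripting this workaround, here is a minimal, unofficial sketch in Python that backs up the file and drops records matching the zeroed-out pattern above. The corruption criteria and the "healthy-vm" record are illustrative assumptions based on this thread; run it against the real file as root and at your own risk.

```python
import json
import os
import shutil
import tempfile

def is_corrupt(record):
    """Matches the zeroed-out entries shown above: no MAC, no cores, no sizes."""
    return (record.get("mac_addr") == ""
            and record.get("num_cores") == 0
            and record.get("disk_space") == "0"
            and record.get("mem_size") == "0")

def prune_instances(path):
    """Back up the file, drop corrupt records, and return the removed names."""
    shutil.copy(path, path + ".bak")
    with open(path) as f:
        data = json.load(f)
    cleaned = {name: rec for name, rec in data.items() if not is_corrupt(rec)}
    with open(path, "w") as f:
        json.dump(cleaned, f, indent=4)
    return sorted(set(data) - set(cleaned))

# Demonstrate on a throwaway copy; the real file lives at
# /var/root/Library/Application Support/multipassd/multipassd-vm-instances.json
# and needs root to edit.
sample = {
    "valuable-goblin": {"deleted": False, "disk_space": "0", "extra_interfaces": [],
                        "mac_addr": "", "mem_size": "0", "metadata": {}, "mounts": [],
                        "num_cores": 0, "ssh_username": "", "state": 2},
    "healthy-vm": {"deleted": False, "disk_space": "5368709120", "extra_interfaces": [],
                   "mac_addr": "52:54:00:12:34:56", "mem_size": "1073741824",
                   "metadata": {}, "mounts": [], "num_cores": 1,
                   "ssh_username": "ubuntu", "state": 2},
}
tmp = os.path.join(tempfile.mkdtemp(), "multipassd-vm-instances.json")
with open(tmp, "w") as f:
    json.dump(sample, f)
print("removed:", prune_instances(tmp))  # → removed: ['valuable-goblin']
```

Restart the daemon afterwards with launchctl start com.canonical.multipassd (the service label seen earlier in this thread).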

@cyal1

cyal1 commented Apr 23, 2021

@dmuiX if you forcefully shut it down, there's a chance our data storage got corrupted. Can you please share the contents of your multipassd.log? See accessing logs for instructions on where to find it.

I also encountered this problem.

It happened on Multipass 1.6.2+mac.

@varunvd

varunvd commented May 1, 2021

The issue still persists on version 1.6.2.
OS: macOS Catalina (10.15.7).
The workaround suggested here works on this OS as well. Thanks for that.

@yuquansi

yuquansi commented Jul 3, 2021

@Saviq
Hi, I'm hitting the same issue, but on macOS Catalina 10.15.7.
The error from multipassd.log is:
[error] [daemon] Caught an unhandled exception: Invalid MAC address
[warning] [Qt] QMutex: destroying locked mutex

Could you please tell me how to fix it?

@SupianIDz

chmod a+w /var/run/multipass_socket

fixed my problem on macOS Monterey

@numbernumberone

[error] [daemon] Caught an unhandled exception: Internal error: qemu-img failed (Process returned exit code: 1) with output:
qemu-img: Could not open '/var/root/Library/Application Support/multipassd/qemu/vault/instances/bionic3/ubuntu-18.04-server-cloudimg-arm64.img': Too much extra metadata in snapshot table entry 0
You can force-remove this extra metadata with qemu-img check -r all

Please help, what should I do?
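Not a maintainer, but the error message itself names the repair. A small sketch that builds the suggested qemu-img invocation for that exact image path (stop multipassd first and back up the image; whether `check -r all` fully recovers it is not guaranteed):

```python
import shlex

# Image path taken verbatim from the error above.
image = ("/var/root/Library/Application Support/multipassd/qemu/vault/"
         "instances/bionic3/ubuntu-18.04-server-cloudimg-arm64.img")

# qemu-img's own suggestion: `check -r all` repairs all detected
# inconsistencies, including the snapshot-table metadata it complained about.
cmd = ["qemu-img", "check", "-r", "all", image]
print(" ".join(shlex.quote(c) for c in cmd))
```

Run the printed command as root after launchctl stop com.canonical.multipassd (the service label seen earlier in this thread), then start the daemon again.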

@ubuntuvim

Just run the command launchctl start multipassd! I don't know why, but that solved the problem.

@ml4

ml4 commented Jul 2, 2023

Hi, I uninstalled Multipass 1.11 and used brew install multipass to get 1.12 on Ventura. Now I have the same problem.
info failed: cannot connect to the multipass socket

Looking down this thread, I have tried:
launchctl start multipassd serves only to shorten the long wait at the terminal before I get the error.
chmod a+w /var/run/multipass_socket didn't work. I did note it was 0660 prior to setting it to 0666, so I left it like that.
find / -name multipass\* 2>/dev/null to reveal everything left after a brew remove multipass; there was a load of stuff, which I deleted, then I ran brew install multipass again, but no luck.

On the restart, I note that /Library/Logs/Multipass/multipassd.log says

[2023-07-02T23:22:34.839] [warning] [Qt] Empty filename passed to function
[2023-07-02T23:22:35.145] [debug] [update] Latest Multipass release available is version 1.12.0
[2023-07-02T23:22:37.303] [info] [VMImageHost] Did not find any supported products in "appliance"
[2023-07-02T23:22:37.308] [debug] [blueprint provider] Loading "anbox-cloud-appliance" v1
[2023-07-02T23:22:37.308] [debug] [blueprint provider] Loading "charm-dev" v1
[2023-07-02T23:22:37.309] [debug] [blueprint provider] Loading "docker" v1
[2023-07-02T23:22:37.309] [debug] [blueprint provider] Loading "jellyfin" v1
[2023-07-02T23:22:37.310] [debug] [blueprint provider] Loading "minikube" v1
[2023-07-02T23:22:37.310] [debug] [blueprint provider] Loading "ros-noetic" v1
[2023-07-02T23:22:37.311] [debug] [blueprint provider] Loading "ros2-humble" v1
[2023-07-02T23:22:37.329] [info] [rpc] gRPC listening on unix:/var/run/multipass_socket
[2023-07-02T23:22:37.329] [warning] [Qt] QIODevice::write (QFile, "/var/root/Library/Caches/multipassd/qemu/vault/multipassd-image-records.json"): device not open
[2023-07-02T23:22:37.329] [info] [daemon] Starting Multipass 1.12.1+mac
[2023-07-02T23:22:37.329] [info] [daemon] Daemon arguments: /Library/Application Support/com.canonical.multipass/bin/multipassd --verbosity debug
E0702 23:22:38.591480000 6178500608 tcp_server_posix.cc:245]           Failed getpeername: Invalid argument

To confirm:

$ ls -la /var/run/multipass_socket
srw-rw-rw- 1 root admin 0 Jul  2 23:22 /var/run/multipass_socket=
$ mps list
list failed: cannot connect to the multipass socket

I'm going to have to back out of 1.12 and try to download 1.11, as mps is on the critical workflow path for me.
Note I don't care about any machines, as I've scripted the deploy and will just rerun the script when I can get mps working.

I assumed that the lack of M1 support in 2021 was fixed by July 2023; do please correct me if I'm wrong. Thanks
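One way to narrow cases like this down is to check whether the socket is absent, malformed, or just refusing connections. A self-contained diagnostic sketch follows; it demos against a throwaway listener, and for a real check you would point diagnose_socket at /var/run/multipass_socket, which the log above shows is a Unix stream socket serving gRPC:

```python
import os
import socket
import stat
import tempfile

def diagnose_socket(path="/var/run/multipass_socket"):
    """Report whether a socket path exists, is a socket, and accepts connections."""
    if not os.path.exists(path):
        return "missing (is multipassd running?)"
    if not stat.S_ISSOCK(os.stat(path).st_mode):
        return "exists but is not a socket"
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        s.connect(path)
        return "connect ok (failures past this point are client-side)"
    except OSError as exc:
        return f"connect failed: {exc}"
    finally:
        s.close()

# Demo against a throwaway listener so the sketch runs anywhere.
demo = os.path.join(tempfile.mkdtemp(), "demo_socket")
server = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
server.bind(demo)
server.listen(1)
healthy = diagnose_socket(demo)
absent = diagnose_socket(demo + "x")
server.close()
print(healthy)  # a listening socket connects
print(absent)   # a missing path is reported as such
```

Against the real path, a permissions problem surfaces as "connect failed: [Errno 13] Permission denied", which matches the chmod workaround mentioned earlier in this thread.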

@ricab
Collaborator

ricab commented Jul 3, 2023

Hi @ml4, it looks like you're experiencing a regression caused by an update in one of our dependencies (gRPC). We are working on it and it's being tracked here.
