Incorrect ZFS command when making zpool(?) #820

Closed · vennsofar opened this issue Oct 10, 2024 · 9 comments
Labels: question (Not a bug or issue, but a question asking for help or information)

@vennsofar

Hi all, it seems that Disko calls a non-existent command when trying to create a zpool: it tries to call mkfs.zfs, which does not exist, instead of something like zpool create.

Originally I tried to use disko with this config. After applying the fixes from #765 it creates all the partitions successfully, up to and including the partition that ZFS would occupy, but it fails to actually create any zpool or datasets, and gives no mention of the aforementioned mkfs.zfs command (at least within the visible terminal history).

[screenshot: terminal output after running the first config]

I then tried a modified version of a ZFS template config, but had no luck with that either; this config does mention the mkfs.zfs command and reports that it does not exist.

[screenshot: terminal output showing that mkfs.zfs does not exist]

Please note that I ran sudo wipefs -fa /dev/sda between using each config file to make sure that they didn't interfere with each other.

@Enzime
Contributor

Enzime commented Oct 10, 2024

Can you post your disko configuration?

@iFreilicht
Contributor

@Enzime they linked the two configs they tried.

The second config is invalid, because you're using the filesystem type instead of the zfs type:

primary = {
  size = "100%";
  content = {
    type = "filesystem";
    format = "zfs";
  };
};

filesystem is a generic type that always uses mkfs.${format}. Your first config is correct:

primary = {
  size = "100%";
  content = {
    type = "zfs";
    pool = "sys";
  };
};

And as you correctly observe, it doesn't call the non-existent mkfs.zfs command. The question now is: does it even fail at all?

Your config looks correct, and I don't see any errors in the screenshots you posted. If you use legacy mountpoints, the datasets will not be mounted by default; you have to do it manually.
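
For illustration, a minimal sketch of mounting one such legacy dataset declaratively on NixOS (the dataset name sys/root and the mountpoint here are hypothetical, not taken from your config):

{
  # With a legacy mountpoint, ZFS itself won't mount the dataset;
  # NixOS mounts it like any other filesystem entry:
  fileSystems."/mnt" = {
    device = "sys/root";
    fsType = "zfs";
  };
}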

What do you see when running lsblk --fs after running the first config?

Also, you should be able to scroll up with the PgUp and PgDown keys.

@iFreilicht added the question label on Oct 10, 2024
@vennsofar
Author

vennsofar commented Oct 10, 2024

It doesn't fail in the sense that the program crashes/errors out, but fails in the sense that there is no sign of anything related to ZFS on the entire drive.
There are no ZPools, datasets, anything at all.
lsblk --fs returns the following:
[screenshot: lsblk --fs output]
sudo zpool status and zfs list return this:
[screenshot: zpool status / zfs list output]

As an aside, PgUp and PgDown don't scroll for me; they only move through my bash history.
Thank you for the clarification on the difference between type = "zfs" and type = "filesystem", though I find it strange that the template taken from this repo is incorrect. Is there a reason for that? Like I mentioned, the second config file was only slightly edited: I just removed the second disk, which isn't present in my system. Did I miss something?

* Edit: I am aware that I have to mount legacy datasets manually, but as I showed, they do not exist, so I can't mount anything.

@iFreilicht
Contributor

iFreilicht commented Oct 10, 2024

I find it strange that the template taken from this repo is incorrect.

Where did you find it, exactly? Searching the entire repo for format ?= ?"zfs" yields zero results, and all templates we have are tested in VMs by CI on every PR.


Ahhhhh haha, I figured it out! Shortened, your config looks like this:

{
  disko.devices.disk = {
    main = {
      ...
    };
  };
  zpool = {
    sys = {
      ...
    };
  };
}

So you're defining the disko.devices.disk and zpool options. But you need to define disko.devices.zpool. You can either just change the name like this:

{
  disko.devices.disk = {
    main = {
      ...
    };
  };
  disko.devices.zpool = {
    sys = {
      ...
    };
  };
}

Or nest everything under disko.devices:

{
  disko.devices = {
    disk = {
      main = {
        ...
      };
    };
    zpool = {
      sys = {
        ...
      };
    };
  };
}

This is the sort of configuration issue that's hard to detect with Nix itself, and one I hope to remedy with #789.

@vennsofar
Author

I pulled the example config from https://github.com/nix-community/disko/blob/master/example, which is linked under the Overview section of the README (see below).
[screenshot: README Overview section]

I see, and this does seem to be the case. Adding disko.devices to the zpool does create the zpool and all attributes correctly, barring some misconfiguration on my end: it doesn't seem to inherit options the way a normal zfs create would, unless I'm totally misreading.
[screenshot: the created zpool and datasets]

Side note: is there a more effective method to mount ZFS datasets than the legacy format? Is the typical ZFS automount reliable on NixOS at all?

In any case, my updated config works as expected now; I appreciate the help.

@iFreilicht
Contributor

I pulled the example config from https://github.com/nix-community/disko/blob/master/example, which is found under the Overview Section of the README (See below.)

As I said, none of those examples contains the string format = "zfs";, so I'm wondering where you got that from?

it doesn't seem to inherit options like the normal zfs create unless I'm totally misreading.

Yes, this is a known issue: #560

is there a more effective method to mount ZFS datasets instead of the legacy format?

You can set options.mountpoint to whatever mountpoint you want (let's say /mnt/mydataset), and set the option boot.zfs.extraPools to a list containing your pool names (in your case [ "sys" ]).

This is the setup I'm using and it works reliably. However, there is a small trade-off for things like /home as described in this comment.
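
A minimal sketch of that setup, assuming your pool sys and a hypothetical dataset named data (the dataset and mountpoint names are illustrative):

{
  disko.devices.zpool.sys.datasets.data = {
    type = "zfs_fs";
    # A native (non-legacy) mountpoint, managed by ZFS itself:
    options.mountpoint = "/mnt/mydataset";
  };

  # Import the pool at boot so its datasets get mounted automatically:
  boot.zfs.extraPools = [ "sys" ];
}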

@vennsofar
Author

Well, I did some looking, and it turns out that I edited more than I remembered, including the format = "zfs"; bit, so that's my fault.
I'll keep the options inheritance and the mounting stuff in mind, thanks for telling me.
I've done more testing and disko has been reliable since, so all is good here; thank you for the help.

@iFreilicht
Contributor

Great to hear!

@Lassulus
Collaborator

We could throw an error if we encounter something like

  content = {
    type = "filesystem";
    format = "zfs";
  };

inside a config, and tell people to use the zfs/zpool types instead.
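
A rough sketch of what that check could look like, assuming the filesystem type declares its format option with lib.mkOption (the option plumbing here is a guess, not disko's actual internals):

format = lib.mkOption {
  type = lib.types.str;
  # Reject formats that have no mkfs.<format> binary and point
  # people at the dedicated disko types instead:
  apply = format:
    if format == "zfs" then
      throw ''
        `type = "filesystem"` does not support `format = "zfs"` because
        mkfs.zfs does not exist; use `type = "zfs"` together with a
        `disko.devices.zpool.<name>` definition instead.
      ''
    else
      format;
};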
