Common ZFS on root error messages

Using the ZFS file system¹ ² for your antergos install may result in one or more of the following error messages at boot time. This article should help you understand and correctly treat these errors.
Most of the files that need to be edited and the commands that need to be run in this article require root privileges. You can edit the files and run the commands with sudo³.


ERROR: resume: no device specified for hibernation

Caused by hibernation support still missing in ZFS. Although by default antergos sets up a ZVOL as a virtual swap partition, it cannot be used for hibernation/resume. You can still get rid of the message by telling your bootloader where the resume device should be. To do so, pass the UUID of your root partition as a kernel parameter in your bootloader's configuration. On a default antergos install the root partition will most probably be /dev/sda3; if it's not, you presumably know which one it is. To get the UUIDs of your device partitions, run blkid. After getting the UUID of your ZFS on root partition, add the following kernel parameter:

resume=UUID=<the UUID you noted>

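To print the exact parameter to add, a quick sketch (assuming the root pool lives on /dev/sda3, as on a default install — adjust to your layout):

```shell
# Look up the UUID of the partition holding the root pool and
# print the kernel parameter to add (device path is an assumption)
uuid=$(blkid -s UUID -o value /dev/sda3)
echo "resume=UUID=$uuid"
```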
For GRUB (the default antergos bootloader) edit:

/etc/default/grub and add the parameter to GRUB_CMDLINE_LINUX_DEFAULT="quiet …" between the quotation marks. Run grub-mkconfig -o /boot/grub/grub.cfg after editing the GRUB config file.
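A sketch of the resulting line (the UUID is a placeholder — use the one blkid reported for your partition):

```shell
# In /etc/default/grub — placeholder UUID, substitute your own
GRUB_CMDLINE_LINUX_DEFAULT="quiet resume=UUID=123e4567-e89b-12d3-a456-426614174000"
```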

For systemd-boot edit:

/boot/loader/entries/your.conf and add the parameter to the options line.
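A hypothetical complete boot entry for orientation (file name, kernel image names, pool name and UUID are all placeholders):

```shell
# /boot/loader/entries/your.conf — all values below are placeholders
title   antergos
linux   /vmlinuz-linux
initrd  /initramfs-linux.img
options zfs=yourRootPoolName rw resume=UUID=123e4567-e89b-12d3-a456-426614174000
```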

Note! This will not enable hibernation; it will just get rid of the error message. If you search dmesg you will notice that the kernel now finds a hibernation partition, but it will claim that "PM: Hibernation image not present or could not be loaded". There is no workaround for this if you don't have a real separate swap partition (as you probably won't).



ash: 1: unknown operand
cannot open 'yourRootPoolName': no such pool

This is not your fault either; it happens because of formatting in the ZFS on Linux (ZoL) source code, which has actually already been ironed out on the master branch a while ago. If you (still) see this, you have two options:

    1. Wait for antergos to get rid of it in a future update
    2. Patch the ZFS hook used by mkinitcpio yourself


For the latter you need to edit /usr/lib/initcpio/hooks/zfs with your favourite editor. The following changes will have to be made:

# Inside of zfs_mount_handler ()
-    if ! "/usr/bin/zpool" list -H $pool 2>&1 > /dev/null ; then
+    if ! "/usr/bin/zpool" list -H $pool > /dev/null 2>&1 ; then

# The following all inside run_hook()
-    [[ $zfs_force == 1 ]] && ZPOOL_FORCE='-f'
-    [[ "$zfs_import_dir" != "" ]] ...
+    [[ "${zfs_force}" = 1 ]] && ZPOOL_FORCE='-f'
+    [[ "${zfs_import_dir}" != "" ]] ...
     # Double quotes and curly brackets !

-    if [ "$root" = 'zfs' ]; then
+    if [ "${root}" = 'zfs' ]; then

-    ZFS_DATASET=$zfs
+    ZFS_DATASET=${zfs}

You will need to rebuild your initramfs images with mkinitcpio -p linux after editing this file.
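The quoting in the patch matters because the initramfs shell is busybox ash: when zfs_force is empty, an unquoted test collapses to a comparison with a missing operand, which ash rejects with "unknown operand". A quick sketch with plain POSIX [ (assuming your /bin/sh behaves like ash here):

```shell
zfs_force=""   # empty, as when zfs_force is not given on the kernel cmdline
# Unquoted, the test would collapse to `[ == 1 ]` and error out.
# Quoted, it safely compares an empty string instead:
if [ "${zfs_force}" = 1 ]; then echo "force"; else echo "no force"; fi
# → no force
```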



ZFS: No hostid found on kernel command line or /etc/hostid. ZFS pools may not import correctly

ZFS does not pick up your hostid by default at boot. Again you have two options here:

    1. Pass your hostid as a kernel parameter
    2. Correctly generate your hostid file


In both cases you will want to issue hostid and copy/write down/memorize the output. Then for the first option simply pass spl.spl_hostid=YourHostid as a kernel parameter. See above for instructions on how to add a kernel parameter to your bootloader.
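A one-liner to print the parameter for your machine (the 0x prefix is an assumption about how the SPL module parses the value — double-check it against your ZFS documentation):

```shell
# Print the spl.spl_hostid kernel parameter for this machine
# (0x prefix assumed — verify before using)
echo "spl.spl_hostid=0x$(hostid)"
```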

For the second option you will need a little C program you can quickly write yourself. Please refer to the excellent Arch Wiki for detailed instructions¹⁰.
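If you'd rather avoid compiling anything: the file just holds the four hostid bytes, so a shell sketch can write it too. Assumptions here: a little-endian machine (x86/x86_64) and xxd being installed; newer ZFS releases also ship a zgenhostid utility for exactly this purpose.

```shell
# Write the current hostid to /etc/hostid as four raw bytes
# (little-endian byte order assumed — x86/x86_64 only).
# Pass a different output path as $1 if you want to inspect it first.
out=${1:-/etc/hostid}
hid=$(hostid)                                           # e.g. 007f0101
# reverse the four hex byte pairs, then turn them into raw binary
echo "$hid" | sed 's/\(..\)\(..\)\(..\)\(..\)/\4\3\2\1/' | xxd -r -p > "$out"
```

You can verify the result with od -An -tx1 /etc/hostid: the bytes should be the hostid's hex pairs in reverse order.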



[FAILED] Failed to start ZFS file system shares
See 'systemctl status zfs-share.service' for details

This should be rare to see if you used the antergos installer for your ZFS on root installation. It happens because the mountpoints of the failing datasets are not empty: at some point the system, or you, put files there while the dataset wasn't yet mounted by ZFS. An example could be a dataset pool/home which mounts to /home, and several datasets with a structure of pool/userdata/documents, pool/userdata/downloads etc., the latter mounted to /home/username/documents and /home/username/downloads respectively.

In this example the system wrote all the usual user files and directories like .bash_profile, .cache etc. to the user directory inside /home (naturally). At shutdown, though, when pool/home got exported, all of these files remained. The situation was remedied by exporting the mentioned pool, copying everything that remained to a safe location with cp -arv, making sure everything was backed up, and deleting the affected directory. Afterwards the dataset pool/userdata was set to mountpoint /home/username and the children simply mount below that.

This is very individual in every case and can happen whenever files and directories get written to directories that don't export with the respective dataset. In that case review your dataset structure, then export, import and review repeatedly until you see where you have to move/remove things so your datasets mount fine. You can refer to this comment on GitHub to get an idea and start reviewing your datasets in rescue mode.
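The underlying rule is simply that ZFS (without overlay mounting) refuses to mount a dataset onto a non-empty directory, so each review pass boils down to this check:

```shell
# Illustration only: any leftover file in a mountpoint directory
# blocks the dataset that wants to mount there.
dir=$(mktemp -d)
touch "$dir/leftover"
if [ -n "$(ls -A "$dir")" ]; then
    echo "not empty: a dataset mount here would fail"
fi
rm -rf "$dir"
```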


Have fun using ZFS on root!
