ZFS lz4 compression with Solaris 11.3

If you haven’t read Cindy Swearingen’s latest blog post yet, it is time for you to know that the Solaris 11.3 beta ships with zpool version 37, which brings lz4 compression to ZFS.
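If you want to check which pool version a system is on, the pool’s version property will tell you; something like this should do (pool name is just an example):

# zpool get version rpool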

My last post was about generating report files in HTML format and storing them on an Apache webserver.
Today I was wondering how much disk space the reporting would take, so I looked at the dataset.

# zfs get compression,compressratio,recordsize,referenced,used lofs/AP6A103/cve
NAME              PROPERTY       VALUE  SOURCE
lofs/AP6A103/cve  compression    on     inherited from lofs
lofs/AP6A103/cve  compressratio  2.45x  -
lofs/AP6A103/cve  recordsize     128K   default
lofs/AP6A103/cve  referenced     163M   -
lofs/AP6A103/cve  used           163M   -

# mv 2015 /var/tmp/
# ptime mv /var/tmp/2015 .

real        1.897081940
user        0.068310370
sys         1.828314030

Since this is not a Solaris 11.2 installation but an 11.3 one, I wanted to change compression to lz4 and see what difference this would make. So I moved the data to a different zfs dataset, set the compression value to lz4 and moved the data back again. Just as a note, in this case I set the compression value on the dataset of the whole zpool. I could have also done it only for the child dataset, as shown right below.
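For the child dataset only, that would have been something like this (dataset name taken from the outputs above):

# zfs set compression=lz4 lofs/AP6A103/cve

Here is what I actually did for the whole pool: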

# mv 2015 /var/tmp/
# zfs set compression=lz4 lofs
# ptime mv /var/tmp/2015 .

real        2.094843840
user        0.072113780
sys         2.022243500

The data took only a couple hundred milliseconds longer to move, so let’s see if lz4 is worth using.

# zfs get compression,compressratio,recordsize,referenced,used lofs/AP6A103/cve
NAME              PROPERTY       VALUE   SOURCE
lofs/AP6A103/cve  compression    lz4     inherited from lofs
lofs/AP6A103/cve  compressratio  16.26x  -
lofs/AP6A103/cve  recordsize     128K    default
lofs/AP6A103/cve  referenced     16.9M   -
lofs/AP6A103/cve  used           16.9M   -

As the above output shows, the compressratio is awesome. lz4 might be a wink of an eye slower on compressible data like this, but for general-purpose environments that are not 100% I/O critical this shouldn’t matter.
This use case here deals with compressible data only, but on a company’s storage you are not always able to separate compressible from incompressible data. Unlike other algorithms, if data is recognized as incompressible, lz4 won’t try hard to compress it anyway and instead moves on to the next data. lz4 has more to offer than just this, but this feature by itself qualifies it as the new default compression value for every zpool (besides the rpool, for now).
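By the way, to see how the compression setting propagates from the top-level dataset down to its children, a recursive zfs get does the trick (pool name as used above):

# zfs get -r compression lofs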

Solaris 11.3 brings ZFS task monitoring enhancement – zpool monitor

With the Solaris 11.3 beta release Oracle added a beautiful zpool subcommand. zpool monitor! And who would have figured, it does exactly that. It helps you keep track of send, receive, scrub and/or resilver actions.

Let’s say you run zpool scrub rpool and you are interested in the status/progress. So far zpool status was the command of choice to do so, usually used somehow like while :; do zpool status …|grep …; done. That works, but it is not really convenient. And just imagine having a few sends and/or receives running.
Well, it is all taken care of now. zpool monitor will do the job for you. A long needed ZFS feature. Thanks a lot, Solaris engineering. Very nice improvement.
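The old-style polling loop would have looked roughly like this (the grep pattern and the sleep interval are just placeholders, adjust to taste):

root@wacken:~# while :; do zpool status rpool | grep scrub; sleep 5; done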

So how does zpool monitor work? Let’s take a look:

root@wacken:~# zpool help monitor
usage:
        monitor -t provider [-T d|u] [[-p] -o field[,...] [pool] ... [interval [count]]
        Valid values for 'provider' are send, receive, scrub, and resilver
        Valid values for 'field' are done, other, pctdone, pool, provider, speed, starttime,
          tag, timeleft, timestmp, total

This is what the default looks like for monitoring scrub:

root@wacken:~# zpool monitor -t scrub 1
pool                            provider  pctdone  total speed timeleft
rpool                           scrub       0.0    64.8G 1.66M 11h08m
rpool                           scrub       0.0    64.8G 1.69M 10h55m
rpool                           scrub       0.0    64.8G 1.71M 10h45m
rpool                           scrub       0.0    64.8G 1.74M 10h36m
rpool                           scrub       0.0    64.8G 1.79M 10h19m
rpool                           scrub       0.0    64.8G 1.80M 10h15m
...
rpool                           scrub       0.8    64.8G 27.3M 40m09s
rpool                           scrub       0.8    64.8G 26.5M 41m23s
rpool                           scrub       1.0    64.8G 31.9M 34m23s
rpool                           scrub       1.1    64.8G 33.6M 32m34s
rpool                           scrub       1.1    64.8G 32.2M 33m58s
rpool                           scrub       1.1    64.8G 30.9M 35m22s
rpool                           scrub       1.1    64.8G 29.8M 36m44s

Let’s add a few more fields to it.

root@wacken:~# zpool monitor -t scrub -o pool,provider,pctdone,total,speed,timeleft,tag,starttime,done 1
pool                            provider  pctdone  total speed timeleft   tag                 starttime done
rpool                           scrub       3.5    64.8G 18.8M 56m51s     scrub-267           14:22:58  0
rpool                           scrub       3.5    64.8G 18.7M 57m05s     scrub-267           14:22:58  0
rpool                           scrub       3.5    64.8G 18.6M 57m29s     scrub-267           14:22:58  0
rpool                           scrub       3.5    64.8G 18.5M 57m52s     scrub-267           14:22:58  0

Ain’t this nice? If you have an environment that includes automated zfs snapshots together with send/receive, this will really make your day.
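Watching an incoming receive, for example, could look something like this (field list picked from the help output above):

root@wacken:~# zpool monitor -t receive -o pool,provider,pctdone,speed,timeleft 5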
This command will definitely get an alias in my zshrc ;-)
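Something along these lines, with the alias name and field list being a matter of taste:

alias zmon='zpool monitor -t scrub -o pool,provider,pctdone,total,speed,timeleft 5'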