ZFS Device Removal with Solaris 11.4 Beta

The previously mentioned Solaris 11.4 Beta build refresh is here.

Which means: ZFS DEVICE REMOVAL is available to all of you. :-)

Since I am short of time right now, here is a very quick and trivial proof:

root@wacken:~# zpool list
NAME    SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rpool  99.5G  55.5G  44.0G  55%  1.00x  ONLINE  -
root@wacken:~# for i in 1 2 3;do mkfile 1g diskfile$i;done
root@wacken:~# zpool create rempool /root/diskfile1 /root/diskfile2 /root/diskfile3
root@wacken:~# zpool list
NAME      SIZE  ALLOC   FREE  CAP  DEDUP  HEALTH  ALTROOT
rempool  2.98G   152K  2.98G   0%  1.00x  ONLINE  -
rpool    99.5G  55.5G  44.0G  55%  1.00x  ONLINE  -
root@wacken:~# zpool status rempool
  pool: rempool
 state: ONLINE
  scan: none requested
config:

        NAME               STATE      READ WRITE CKSUM
        rempool            ONLINE        0     0     0
          /root/diskfile1  ONLINE        0     0     0
          /root/diskfile2  ONLINE        0     0     0
          /root/diskfile3  ONLINE        0     0     0

errors: No known data errors
root@wacken:~# zpool remove rempool /root/diskfile2
root@wacken:~# zpool status rempool
  pool: rempool
 state: ONLINE
  scan: resilvered 1K in 1s with 0 errors on Fri Mar  9 13:08:34 2018

config:

        NAME                      STATE      READ WRITE CKSUM
        rempool                   ONLINE        0     0     0
          /root/diskfile1         ONLINE        0     0     0
          /root/diskfile3         ONLINE        0     0     0

errors: No known data errors

Works!

This is what it looks like when there is not enough space left to remove a device from a pool:

root@wacken:~# zpool status rempool
  pool: rempool
 state: ONLINE
  scan: none requested
config:

        NAME               STATE      READ WRITE CKSUM
        rempool            ONLINE        0     0     0
          /root/diskfile1  ONLINE        0     0     0
          /root/diskfile2  ONLINE        0     0     0
          /root/diskfile3  ONLINE        0     0     0

errors: No known data errors
root@wacken:~# zpool list rempool
NAME      SIZE  ALLOC  FREE  CAP  DEDUP  HEALTH  ALTROOT
rempool  2.98G  2.54G  443M  85%  1.00x  ONLINE  -
root@wacken:~# zpool remove rempool /root/diskfile2
cannot remove device(s): not enough space to migrate data

It actually doesn’t matter what your top-level vdev is. Here is an example with multiple mirrored vdevs:

root@wacken:~# zpool destroy rempool
root@wacken:~# zpool create rempool2 mirror /root/diskfile1 /root/diskfile2 mirror /root/diskfile3 /root/diskfile4 mirror /root/diskfile5 /root/diskfile6
root@wacken:~# zpool status
  pool: rempool2
 state: ONLINE
  scan: none requested
config:

        NAME                 STATE      READ WRITE CKSUM
        rempool2             ONLINE        0     0     0
          mirror-0           ONLINE        0     0     0
            /root/diskfile1  ONLINE        0     0     0
            /root/diskfile2  ONLINE        0     0     0
          mirror-1           ONLINE        0     0     0
            /root/diskfile3  ONLINE        0     0     0
            /root/diskfile4  ONLINE        0     0     0
          mirror-2           ONLINE        0     0     0
            /root/diskfile5  ONLINE        0     0     0
            /root/diskfile6  ONLINE        0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: resilvered 0 in 5s with 0 errors on Thu Mar  8 23:27:12 2018

config:

        NAME    STATE      READ WRITE CKSUM
        rpool   ONLINE        0     0     0
          c1d0  ONLINE        0     0     0

errors: No known data errors
root@wacken:~# zpool remove rempool2 mirror-1
root@wacken:~# zpool status
  pool: rempool2
 state: ONLINE
  scan: resilvered 1.50K in 1s with 0 errors on Tue Apr 10 05:25:51 2018

config:

        NAME                      STATE      READ WRITE CKSUM
        rempool2                  ONLINE        0     0     0
          mirror-0                ONLINE        0     0     0
            /root/diskfile1       ONLINE        0     0     0
            /root/diskfile2       ONLINE        0     0     0
          mirror-2                ONLINE        0     0     0
            /root/diskfile5       ONLINE        0     0     0
            /root/diskfile6       ONLINE        0     0     0

errors: No known data errors

  pool: rpool
 state: ONLINE
  scan: resilvered 0 in 5s with 0 errors on Thu Mar  8 23:27:12 2018

config:

        NAME    STATE      READ WRITE CKSUM
        rpool   ONLINE        0     0     0
          c1d0  ONLINE        0     0     0

errors: No known data errors

A very nice additional improvement to an already feature-rich and fantastic release!

ZFS lz4 compression with Solaris 11.3

If you haven’t read Cindy Swearingen’s latest blog post yet, it is time for you to know that the Solaris 11.3 beta ships with zpool version 37, which brings lz4 compression to ZFS.

My last post was about generating report files in HTML format and storing them on an Apache webserver.
Today I was wondering how much disk space the reporting would take, so I looked at the dataset.

# zfs get compression,compressratio,recordsize,referenced,used lofs/AP6A103/cve
NAME              PROPERTY       VALUE  SOURCE
lofs/AP6A103/cve  compression    on     inherited from lofs
lofs/AP6A103/cve  compressratio  2.45x  -
lofs/AP6A103/cve  recordsize     128K   default
lofs/AP6A103/cve  referenced     163M   -
lofs/AP6A103/cve  used           163M   -

# mv 2015 /var/tmp/
# ptime mv /var/tmp/2015 .

real        1.897081940
user        0.068310370
sys         1.828314030

Since this is not a Solaris 11.2 installation but Solaris 11.3, lz4 is available, so I wanted to change the compression to lz4 and see what difference that would make. I moved the data to a different ZFS dataset, set the compression value to lz4 and moved the data back again. Just as a note: I set the compression value on the top-level dataset of the zpool in this case; I could also have set it only on the child dataset.

# mv 2015 /var/tmp/
# zfs set compression=lz4 lofs
# ptime mv /var/tmp/2015 .

real        2.094843840
user        0.072113780
sys         2.022243500

The data was moved back in only a couple of hundred milliseconds more, so let’s see whether lz4 is worth using.

# zfs get compression,compressratio,recordsize,referenced,used lofs/AP6A103/cve
NAME              PROPERTY       VALUE   SOURCE
lofs/AP6A103/cve  compression    lz4     inherited from lofs
lofs/AP6A103/cve  compressratio  16.26x  -
lofs/AP6A103/cve  recordsize     128K    default
lofs/AP6A103/cve  referenced     16.9M   -
lofs/AP6A103/cve  used           16.9M   -

As the above output shows, the compressratio is awesome. Writes might be a blink of an eye slower on compressible data like this, but for general-purpose environments that are not 100% I/O critical this shouldn’t matter.
This use case deals with compressible data only, but on a company’s storage you are not always able to separate compressible from incompressible data. Unlike other algorithms, lz4 detects incompressible data early and, instead of trying hard to compress it anyway, simply moves on to the next block. lz4 has more to offer than just this, but this feature by itself qualifies it as my new default compression value for every zpool (except the rpool, for now).
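
If you want to verify the incompressible-data behaviour yourself, a simple test is to write the same chunk of random data into a gzip dataset and an lz4 dataset and compare the timings and ratios. The dataset names and mountpoints below are made up for this sketch:

# zfs create -o compression=gzip -o mountpoint=/test/gzip lofs/gziptest
# zfs create -o compression=lz4 -o mountpoint=/test/lz4 lofs/lz4test
# dd if=/dev/urandom of=/var/tmp/random.bin bs=1024k count=256
# ptime cp /var/tmp/random.bin /test/gzip/
# ptime cp /var/tmp/random.bin /test/lz4/
# zfs get compressratio lofs/gziptest lofs/lz4test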

Deploying automated CVE reporting for Solaris 11.3

With Solaris 11.2, Oracle started including quite a few new Solaris features for security and automated deployment. Besides immutable zones, which I have not gotten around to writing about yet (a shame, since they are wonderful), and compliance, Solaris IPS received a new package called pkg://solaris/support/critical-patch-update/solaris-11-cpu. This package lists the packages that are considered part of the Critical Patch Update. In addition to the package name and version, it lets you see which CVEs each of these packages addresses.
You can use the pkg command to do some basic searches; a quick example follows below. Immutable zones, compliance and CVE metadata are only three of the security features that were added with Solaris 11.2 and Solaris 11.3.
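
For instance, once the solaris-11-cpu metadata is available from your configured repository, looking up a single CVE can be done along these lines (the CVE ID is just an example and the exact output depends on your SRU level):

# pkg search -r CVE-2015-5477
# pkg contents -ro name,value solaris-11-cpu | grep CVE-2015-5477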

Most likely an admin will not want to log in to each of his hundreds, thousands or even more Solaris installations in order to install the needed packages and take care of a proper configuration. Can’t blame him. That is probably what the Solaris team thought when puppet became part of the IPS repository with Solaris 11.2. There is not that much to say about it for those who do not know it: it does what it is supposed to do and is a relief for every admin if used right. In case you are interested in some really great articles, go check out Manuel Zach’s blog. For automation in general you will also want to read Glynn Foster’s blog.
Now why am I writing about these “old” Solaris 11.2 features when the Solaris 11.3 beta was already released a few weeks ago? Because these are fundamental technologies for getting the most out of Solaris 11.3.

Bringing Solaris IPS and NIST together

Companies mostly use external software that alerts on and reports every single CVE out there and triggers a service request for the responsible team. The thing is, it is slow, it costs a lot, and you get service requests for software that is not even installed on any system. So what happens is that the admin ends up checking everything himself.

So I figured I would just do it Solaris style. By the time of writing this post, we have fully automated CVE reporting at work, at no extra cost.

Let’s start with IPS and CVEs. As mentioned before, you need to have a certain package installed:

# pkg install support/critical-patch-update/solaris-11-cpu

This package is updated with every SRU and includes every known CVE for Solaris 11. If you want to know a few basics about it, read Darren Moffat’s blog.
Use the following command to see all the CVE-to-package mappings it contains:

# pkg contents -ro name,value solaris-11-cpu|grep '^CVE.*'
CVE-1999-0103                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-0.175.1.10.0.3.2
CVE-2002-2443                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-0.175.1.10.0.3.2
CVE-2003-0001                 pkg://solaris/driver/network/ethernet/pcn@0.5.11,5.11-0.175.1.11.0.3.2
CVE-2004-0230                 pkg://solaris/system/kernel@0.5.11,5.11-0.175.1.15.0.4.2
CVE-2004-0452                 pkg://solaris/runtime/perl-584/extra@5.8.4,5.11-0.175.1.11.0.3.2
CVE-2004-0452                 pkg://solaris/runtime/perl-584@5.8.4,5.11-0.175.1.11.0.3.2
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-apc@3.0.19,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-idn@0.2.0,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-memcache@2.2.5,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-mysql@5.2.17,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-pear@5.2.17,5.11-0.175.2.8.0.3.0
...
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-tcpwrap@1.1.3,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-xdebug@2.2.0,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-zendopcache@7.0.2,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/php-53@5.3.29,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php52@5.2.17,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php53@5.3.29,5.11-0.175.2.13.0.4.0
CVE-2015-4770                 pkg://solaris/system/file-system/ufs@0.5.11,5.11-0.175.2.11.0.3.2
CVE-2015-4770                 pkg://solaris/system/kernel/platform@0.5.11,5.11-0.175.2.11.0.4.2
CVE-2015-5073                 pkg://solaris/library/pcre@8.37,5.11-0.175.2.13.0.3.0
CVE-2015-5477                 pkg://solaris/network/dns/bind@9.6.3.11.2,5.11-0.175.2.12.0.7.0
CVE-2015-5477                 pkg://solaris/network/dns/bind@9.6.3.11.2,5.11-0.175.2.13.0.5.0
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@9.6.3.11.2,5.11-0.175.2.12.0.7.0
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@9.6.3.11.2,5.11-0.175.2.13.0.5.0

Now we have the information about which Solaris IPS package belongs to which CVE ID. That’s nice, but how do we get all the other CVE information: base score, summary, access vector, and so on? In order to add these details, I imported the NIST NVD files into a sqlite3 database.
The files can be downloaded either as compressed gz files or as regular XML. For more information visit https://nvd.nist.gov/download.cfm.
If you don’t want to work your way through the XML structures yourself, use this Python program (nvd2sqlite3) that I came across while writing my own. I like the approach of just having to do:

# curl https://nvd.nist.gov/static/feeds/xml/cve/nvdcve-2.0-2015.xml | nvd2sqlite3 -d /wherever/you/like/to/keep/the/dbfile

In order to keep your NIST CVE database current, I put the commands into a script and created a crontab entry.

getNvdCve.sh:

#!/usr/bin/bash

curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2002.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2003.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2004.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2005.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2006.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2007.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2008.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2009.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2010.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2011.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2012.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2013.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2014.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-2015.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-modified.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-recent.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb

cron:

0 5 * * * /scripts/admin/getNvdCve.sh

Alright, this gives us a database with all the information we need. The schema of the sqlite3 database looks like this:

sqlite> .schema
CREATE TABLE nvd (access_vector varchar,
                  access_complexity varchar,
                  authentication varchar,
                  availability_impact varchar,
                  confidentiality_impact varchar,
                  cve_id text primary key,
                  integrity_impact varchar,
                  last_modified_datetime varchar,
                  published_datetime varchar,
                  score real,
                  summary varchar,
                  urls varchar,
                  vulnerable_software_list);
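
A quick sanity check straight from the sqlite3 shell shows the kind of detail we can now pull out per CVE (database path as used in the script above, CVE ID just an example):

# sqlite3 /data/shares/NIST/cvedb "select cve_id, score, access_vector, summary from nvd where cve_id = 'CVE-2015-5477';"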

The next step is to match the data in the database with the IPS information. When I started working on this I focused on console output only, but when I looked at our centralized compliance reports I wanted the same thing for CVEs: central CVE reporting. So I ended up writing the output to an HTML file on an Apache webserver.

Since the standard Perl in Solaris does not include DBD::SQLite, I switched to Python.
cveList.py does the following (a rough interactive equivalent of the lookup is sketched after the list):

  • get all the installed package information from IPS
  • get all the information from the solaris-11-cpu package
  • match the above data and determine which packages are installed and at which version (installed version lower than the fix version = unpatched CVE)
  • create an HTML report file with all the needed elements
  • connect to the sqlite3 db and get cve_id, access_vector, score and summary
  • write the query output to the file, sorted into unpatched and patched CVEs
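
The script itself is Python, but the core lookup it automates can be sketched on the command line like this; the CVE and package below are only examples, and the real script does a proper FMRI version comparison instead of eyeballing it:

# pkg contents -ro name,value solaris-11-cpu | grep CVE-2015-5477
# pkg list -Hv service/network/dns/bind

The first command tells you which package version delivers the fix, the second one which version is actually installed; the score, access vector and summary for the report come from the sqlite3 query shown earlier.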

The CVE report looks like this:
[screenshot: cveReport]

What we have now is a script that pulls all the NVD information from NIST and stores it in a sqlite3 database, and a script that matches this information against the installed IPS packages and generates a CVE report in HTML format.

Scheduled Services with Solaris 11.3

Next up is generating these reports automatically. With Solaris 11.2, cron would be the way to do it: a trivial entry in the crontab and done.

30 5 * * * /scripts/admin/cveList.py

With Solaris 11.3, cron is almost obsolete. Why? Because of SMF and the new scheduled and periodic services. I am not going to argue about whether SMF is great or not; to me it is great, and I have never run into any serious problem with it. On a Solaris 11.3 installation I move custom cron jobs to SMF and create scheduled services.
These do the same as cron, plus everything else SMF has to offer.
The scheduled service I use for the CVE reporting is the following:

<?xml version="1.0" ?>
<!DOCTYPE service_bundle
  SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
    Manifest created by svcbundle (2015-Sep-04 15:05:15+0200)
-->
<service_bundle type="manifest" name="site/cveList">
    <service version="1" type="service" name="site/cveList">
        <!--
            The following dependency keeps us from starting until the
            multi-user milestone is reached.
        -->
        <dependency restart_on="none" type="service"
            name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>
        <instance enabled="true" name="default" >
                <scheduled_method
                        interval='day'
                        hour='5'
                        minute='30'
                        exec='/lib/svc/method/cveList.py'
                        timeout_seconds='0'>
                                <method_context>
                                        <method_credential user='root' group='root' />
                                </method_context>
                </scheduled_method>
        </instance>
    </service>
</service_bundle>

Use svcbundle to generate your own manifest.

# svcbundle -o /var/tmp/cveList.xml -s service-name=site/cveList -s start-method=/lib/svc/method/cveList.py -s interval=day -s hour=5 -s minute=30
# svccfg validate /var/tmp/cveList.xml

It’s as easy as that. Add to it whatever you feel is needed; mail reporting in case of a status change, for example.
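
For a quick manual test, the validated manifest can also be imported and inspected by hand before it gets delivered properly by the IPS package built below:

# svccfg import /var/tmp/cveList.xml
# svcs -l site/cveList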

Well, now we have a service that generates a CVE report for a server every day at 5:30 am.
We need more, so let’s move on to the next piece.

Building a custom IPS package

The best way to deploy any piece of software on a Solaris 11.x server is with IPS.
IPS packages are very easy to use once they are built and published: list, install, info, uninstall, contents, search, freeze, unfreeze, and so on. It is always the same command pattern, and that is what makes it so easy. But how do you build your own packages? That is always a bit trickier than using them. Instead of explaining how it works I will just link to another article, written by Glynn Foster, which covers everything you need to know.
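
To give you an idea of what has to be wrapped, the manual packaging steps look roughly like this; the file and repository names are just examples, and Glynn’s article explains each step in detail:

# pkgsend generate PROTO.CVE | pkgfmt > custom-cveList.p5m.1
# pkgmogrify custom-cveList.p5m.1 custom-cveList.mog | pkgfmt > custom-cveList.p5m.2
# pkgdepend generate -md PROTO.CVE custom-cveList.p5m.2 | pkgfmt > custom-cveList.p5m.3
# pkgdepend resolve -m custom-cveList.p5m.3
# pkglint custom-cveList.p5m.3.res
# pkgsend publish -s file:///data/ips/custom -d PROTO.CVE custom-cveList.p5m.3.res
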
If you don’t want to type in every single step, this little script might help. Adjust your IPS repository and your paths, and all you need is a so-called mog file, which in this case could look like this:

set name=pkg.fmri value=pkg://custom/security/custom-cveList@1.0.2
set name=variant.arch value=sparc value=i386
set name=pkg.description value="custom CVE reporting"
set name=pkg.summary value="custom Solaris CVE reports"
<transform dir path=lib$ -> drop>
<transform dir path=lib/svc$ -> drop>
<transform dir path=lib/svc/manifest$ -> drop>
<transform dir path=lib/svc/manifest/site$ -> set owner root>
<transform dir path=lib/svc/manifest/site$ -> set group sys>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set owner root>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set group bin>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set mode 0444>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> default restart_fmri svc:/system/manifest-import:default>
<transform dir path=lib/svc/method$ -> drop>
<transform file path=lib/svc/method/cveList\.py$ -> set owner root>
<transform file path=lib/svc/method/cveList\.py$ -> set group bin>
<transform file path=lib/svc/method/cveList\.py$ -> set mode 0555>

Besides the mog file, you just enter the path to your proto directory containing the software that is supposed to be packaged up, and you are good to go. You will be asked to type in the name of the package and that’s it; the rest is done automatically. You might have to adjust the configuration inside your mog file, for example in case of unresolved dependencies. Should you be missing a custom IPS repo, create one real quick and then start packaging.

Creating a custom IPS repo and sharing it via NFS:

# zfs create -po mountpoint=/ips/custom rpool/ips/custom
# zfs list -r rpool/ips
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool/ips          62K  36.2G    31K  /rpool/ips
rpool/ips/custom   31K  36.2G    31K  /ips/custom
# pkgrepo create /ips/custom
# zfs set share=name=custom_ips,path=/ips/custom,prot=nfs rpool/ips/custom
name=custom_ips,path=/ips/custom,prot=nfs
# zfs set share.nfs=on rpool/ips/custom
# zfs get share
NAME                                                           PROPERTY  VALUE  SOURCE
rpool/ips/custom                                               share     name=custom_ips,path=/ips/custom,prot=nfs  local

Let’s actually build the cveList IPS pkg.

# /scripts/admin/buildIpsPkg.sh /scripts/admin/IPS/CVE/MOG/custom-cveList.mog /scripts/admin/IPS/CVE/PROTO.CVE

Need some information about the package. Answer the following questions to generate a mogrify-file (package_name.mog) or if you have a package_name.mog template execute this script with args:

 /scripts/admin/buildIpsPkg.sh [path_to_mog_file] [path_to_proto_dir]


Enter Package Name (eg. custom-compliance): custom-cveList

Ready! Generating the manifest.

pkgsend generate... OK
pkgmogrify... OK
pkgdepend generate... OK
pkgdepend resolve... OK
eliminating version numbers on required dependencies... OK
testing manifest against Solaris 11.2 repository, pkglint ... 
Lint engine setup...

Ignoring -r option, existing image found.
Starting lint run...

OK

Review the manifest file custom-cveList.p5m.4.res!


publish the ips package with:
pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res

check the package with:
pkg refresh
pkg info -r custom-cveList
pkg contents -m -r custom-cveList
pkg install -nv custom-cveList

remove it:
pkgrepo remove -s file:///data/ips/custom pkg://custom/security/custom-cveList@1.0.2

Et voilà, the package is ready to be published.

# pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res
# pkg refresh
# pkg info custom-cveList
             Name: security/custom-cveList
          Summary: custom Solaris CVE reports
      Description: custom CVE reporting
            State: Installed
        Publisher: custom
          Version: 1.0.2
           Branch: None
   Packaging Date: Tue Sep 08 16:48:50 2015
Last Install Time: Tue Sep 08 16:52:07 2015
             Size: 8.80 kB
             FMRI: pkg://custom/security/custom-cveList@1.0.2:20150908T164850Z

DONE! At least with getting CVEs, matching CVEs, scheduling reports and building a package out of all of this.

Let’s deploy.

Let puppet do your job

In this case I am already running a puppet master and several puppet agents. Since I talked about hundreds or thousands of Solaris installations, a master-agent setup is exactly what we want.
Nobody has the time and endurance to log in to each system and do a pkg install custom-cveList.
I figured a puppet module would be just what I want.
And to save you time, here it is:

# cat /etc/puppet/modules/cve/manifests/init.pp
class cve {
        if $::operatingsystemrelease == '11.3' or $::operatingsystemmajrelease == '12' {
                package { 'custom-cveList':
                        ensure => 'present',
                }
        } 
        if $::operatingsystemrelease == '11.2' {
                cron { 'cveList' :
                        ensure => 'present',
                        command => '/scripts/admin/cveList.py',
                        user => 'root',
                        hour => 5,
                        minute => 30,
                }
        }
}

The first if-statement installs the recently built and published IPS package. Since scheduled services are not available in Solaris 11.2, I had to add a crontab entry for that case, which is what the second if-statement above does.
Now just add it to your /etc/puppet/manifests/site.pp and you are all set.

node default {
        include nameservice
        include tsm
        include arc
        include compliance
        include mounts
        include users
        include cve
}

This is it. From now on, every single Solaris server that runs a puppet agent will have your custom CVE reporting deployed.
Reading all this actually takes longer than just doing it, and you only need to go through it once.
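
If you do not want to wait for the next scheduled agent run, you can trigger one manually on a node and watch the package (or the cron entry) show up:

# puppet agent --test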

I know this looks like a lot, but it really isn’t. If you want to leave out the IPS part, just add your scripts and the service/cron entry to your puppet configuration.
This is easier to handle than a third-party tool. If you want, you could also integrate it into your ITIL process and its tools to automate CVE service request handling.

Solaris 11.3 brings ZFS task monitoring enhancement – zpool monitor

With the Solaris 11.3 beta release, Oracle added a beautiful zpool subcommand: zpool monitor! And who would have figured, it does exactly that. It helps you keep track of send, receive, scrub and/or resilver operations.

Let’s say you run zpool scrub rpool and you are interested in the status/progress. So far zpool status was the command of choice, usually used something like while :; do zpool status …|grep …; done. It works, but it is not really convenient. And just imagine having a few sends and/or receives running at the same time.
Well, that is all taken care of now: zpool monitor will do the job for you. A long-needed ZFS feature. Thanks a lot, Solaris engineering. Very nice improvement.

So how does zpool monitor work? Let’s take a look:

root@wacken:~# zpool help monitor
usage:
        monitor -t provider [-T d|u] [[-p] -o field[,...] [pool] ... [interval [count]]
        Valid values for 'provider' are send, receive, scrub, and resilver
        Valid values for 'field' are done, other, pctdone, pool, provider, speed, starttime,
          tag, timeleft, timestmp, total

This is what the default looks like for monitoring scrub:

root@wacken:~# zpool monitor -t scrub 1
pool                            provider  pctdone  total speed timeleft
rpool                           scrub       0.0    64.8G 1.66M 11h08m
rpool                           scrub       0.0    64.8G 1.69M 10h55m
rpool                           scrub       0.0    64.8G 1.71M 10h45m
rpool                           scrub       0.0    64.8G 1.74M 10h36m
rpool                           scrub       0.0    64.8G 1.79M 10h19m
rpool                           scrub       0.0    64.8G 1.80M 10h15m
...
rpool                           scrub       0.8    64.8G 27.3M 40m09s
rpool                           scrub       0.8    64.8G 26.5M 41m23s
rpool                           scrub       1.0    64.8G 31.9M 34m23s
rpool                           scrub       1.1    64.8G 33.6M 32m34s
rpool                           scrub       1.1    64.8G 32.2M 33m58s
rpool                           scrub       1.1    64.8G 30.9M 35m22s
rpool                           scrub       1.1    64.8G 29.8M 36m44s

Let’s add a few more fields to it.

root@wacken:~# zpool monitor -t scrub -o pool,provider,pctdone,total,speed,timeleft,tag,starttime,done 1
pool                            provider  pctdone  total speed timeleft   tag                 starttime done
rpool                           scrub       3.5    64.8G 18.8M 56m51s     scrub-267           14:22:58  0
rpool                           scrub       3.5    64.8G 18.7M 57m05s     scrub-267           14:22:58  0
rpool                           scrub       3.5    64.8G 18.6M 57m29s     scrub-267           14:22:58  0
rpool                           scrub       3.5    64.8G 18.5M 57m52s     scrub-267           14:22:58  0

Ain’t this nice? If you have an environment that includes automated ZFS snapshots together with send/receive, this will really make your day.
This command will definitely get an alias in my zshrc ;-)