RAD – get ZFS properties

After starting out with the Python RAD zonemgr module, I thought it was time to write about another available Python RAD module. In the following I want to give you a short and simple look at how to get ZFS property information (name, type, atime, compression, compressratio, dedup, mounted and mountpoint). The purpose is to get people interested in Solaris going with RAD and to show how easily it can be used.

Let’s start with the usual imports, which are pretty self-explanatory. One is needed to open a connection and the other one depends on the purpose of the script. In this case we want to get ZFS properties, which means we will use zfsmgr.
In case it is not already installed, just run: pkg install rad-zfsmgr

import rad.connect as radc
import rad.bindings.com.oracle.solaris.rad.zfsmgr_1 as zfsmgr

Next we need to connect to a RAD instance. In this case it is a local one, so a Unix socket can be used.
If you would like to use a remote connection, use ssh://[hostname] instead.
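
A remote connection would look like this (the hostname is just a placeholder, and the rad:remote SMF service has to be online on the target):

uri = radc.RadURI("ssh://remotehost")
rc = uri.connect()

For this post we stick with the local Unix socket: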

uri = radc.RadURI("unix:///")
rc = uri.connect()

We now have an open connection (rc) and can move on to the ZFS part.
First we need to know what the existing datasets are. rc.list_objects(zfsmgr.ZfsDataset()) will do exactly this for us: it lists all the ZFS dataset objects (ZfsDataset) the RAD ZFS manager (zfsmgr) has to offer for the chosen connection (rc).

zfsDataSets = rc.list_objects(zfsmgr.ZfsDataset())

Well, the list of datasets is complete, but we will need more than just the objects of each dataset.
In order to get the information we are interested in, we need to define it first. ZfsPropRequest is used for that.

prop_names = ("name", "type", "atime", "compression", "compressratio",
              "dedup", "mounted", "mountpoint")
props = [zfsmgr.ZfsPropRequest(name=n) for n in prop_names]

After defining the properties we just loop through the dataset objects and request the values of the just-defined keys from the current object (zobj).

for dataset in zfsDataSets:
    zobj = rc.get_object(dataset)
    zvalues = zobj.get_props(props)
    print "%-40s%-14s%-8s%-13s%-15s%-7s%-9s%s" % tuple(v.value for v in zvalues)

Done. We have the information, so we can close the connection at this point.

rc.close()

The output will remind you of a regular zfs list -o … output. One may ask why to use RAD then, and the answer is quite simple: because this is just a trivial example of how you can make use of RAD’s zfsmgr to get dataset information. The next step would be to take the above and automate whatever comes to your mind. Juggle around with the objects, keys, values, etc. Add more functions to it and even combine it with more RAD modules (e.g. rad-zonemgr). That’s where you will benefit the most. But even small automation tasks are perfect for this.
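
To give you a taste of that, here is a small sketch that uses only the calls shown above (run it before closing the connection) and prints just the datasets with compression disabled:

name_req = zfsmgr.ZfsPropRequest(name="name")
comp_req = zfsmgr.ZfsPropRequest(name="compression")
for dataset in zfsDataSets:
    zobj = rc.get_object(dataset)
    # get_props returns the values in request order
    name_v, comp_v = zobj.get_props([name_req, comp_req])
    if comp_v.value == "off":
        print "compression disabled: %s" % name_v.value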

Last but not least, here is an example of what the output might look like. I had to take out a few lines because they included Solaris beta content.

[Screenshot: rad-zfs2]

Remember, the purpose was to take the very first step with RAD and ZFS together. Try it out and you will most probably like it and stick with it.

RAD – syncing Solaris zone configs

RAD

A lesser known jewel of Solaris 11 is RAD (Remote Administration Daemon). Since, as I just found out, I don’t have any RAD posts yet, let’s talk about what it is and what it offers before we go on.
What RAD does is provide programmatic interfaces to manage Solaris. Users, zones, ZFS and SMF are just a few examples. RAD offers APIs for C, Java, Python and REST. It can be used locally as well as remotely, and it can be used to read data but also to write it. As an example, you can get ZFS information as well as create new datasets or change current settings. There are a couple of great examples and posts out there from e.g. Glynn Foster and Robert Milkowski.

Why am I telling you this? Because you can PROGRAMMATICALLY manage your enterprise operating system now.

Use case

Imagine an environment of SPARC T4, T5, T7 or S7 servers running quite a few non-global zones, whether in LDOMs or not.
Over the last weeks I tested kernel zones pretty heavily. The chance of getting rid of LDOMs is just too good not to go for it. Don’t get me wrong, LDOMs work fine and are a key part of our current DR concept. But there are also things I really don’t like at all. I guess this will make a good post in the near future. ;)
As I said, I was using kernel zones, but for DR purposes (let’s say one data center dies) I need to be able to boot the kzones from the other data center. In order to do so, the zone configuration has to be available. Well, shared storage and so on too, but for the purpose of this post let’s say that’s all taken care of automatically (it really is ;) ).
At the moment zone configurations are saved via an SMF service on an NFS server. But in a disaster I don’t want to have to create zones first while the angry mob out there can’t work and tries to figure out where I am sitting.
When I used kernel zone live migration I started thinking about how I wanted to solve this issue. For those who haven’t used kzone live migration yet: it creates a zone configuration on the target side and leaves the old one in the configured state. Which means once you have live migrated a zone, the problem seems to be solved. But what if something changes? What if the zone runs on different server hardware by the time of a disaster?
These two facts, programmatic interfaces and LDOM replacement, led me to the idea of having a scheduled SMF service that takes care of zone configurations. For that I use RAD’s zone manager and Python.
Besides having the RAD IPS packages (rad and rad-zonemgr) installed, you will need a user with sufficient privileges for RAD and zone administration/management.

RAD Python

The script zones-sync is part of a larger RAD script that does all sorts of things.
Simply said, it checks which zone configurations are missing on a target server and then imports/creates them. Quite trivial.

Let’s start with the imports.

import rad.connect as radc
import rad.bindings.com.oracle.solaris.rad.zonemgr_1 as zonemgr
import string
import socket
import sys

The first import lets you connect to RAD, as the name already says, while the second one adds the ability to use zone management. Quite self-explanatory, I would say.

In order to know which global zones are supposed to be synced, I started with getting the source and target hostnames straight, depending on the arguments used. If only one hostname is given, the other one is considered to be localhost. For remote-to-remote syncs two hostnames need to be provided. For automation purposes I added [-service|-svc], which in this case maps a certain hostname pattern. The name pattern is used to find the corresponding global zone.
In the end, anything that helps to automate getting the hostnames should be put here.

def getHostnames():
    global source_hostname
    global target_hostname
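    # -service/-svc derives the target from the local hostname by toggling
    # the 4th character between A/a and S/s (our hostname pattern);
    # source_hostname is expected to hold the local hostname at this point
    # (set at module level in the full script)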
    
    if sys.argv[1] == "-service" or sys.argv[1] == "-svc":
        if source_hostname[3] == 'A':
            target_hostname = source_hostname[:3]+'S'+source_hostname[4:]
        elif source_hostname[3] == 'a':
            target_hostname = source_hostname[:3]+'s'+source_hostname[4:]
        elif source_hostname[3] == 'S':
            target_hostname = source_hostname[:3]+'A'+source_hostname[4:]
        elif source_hostname[3] == 's':
            target_hostname = source_hostname[:3]+'a'+source_hostname[4:]
    elif len(sys.argv) == 3:
        source_hostname = sys.argv[1]
        target_hostname = sys.argv[2]
    elif len(sys.argv) == 2:
        target_hostname = sys.argv[1]
    return (source_hostname,target_hostname)

Now that it is clarified which systems will be involved, the next step is to connect to RAD. As you can see below, I am using ssh:// to connect to remote systems and a Unix socket for local connections.
Again, the user that executes this script must have sufficient privileges on the involved systems. In addition to that, the rad:remote and/or rad:local service has to be enabled and online.

def connectRAD():
    global source_rc
    global target_rc
    
    if len(sys.argv) == 3:
        source_uri = radc.RadURI("ssh://"+source_hostname)
        source_rc = source_uri.connect()
    else:
        source_uri = radc.RadURI("unix:///")
        source_rc = source_uri.connect()
    target_uri = radc.RadURI("ssh://"+target_hostname)
    target_rc = target_uri.connect()

    return (source_rc,target_rc)

The next step is to get a list of zones from each global zone. What we actually get is a list of the objects of each zone.

def getZoneLists():
    global zones_s
    global zones_t

    zones_s = source_rc.list_objects(zonemgr.Zone())
    zones_t = target_rc.list_objects(zonemgr.Zone())

    return (zones_s,zones_t)

Each object includes the values of a zone, for example name, state, brand, etc.
This is done to get each ng/kzone’s state and thereby decide whether it gets synced or not. Incomplete zones, for example, are not worth synchronizing. A configured zone’s config will be replaced by the one of a running zone in order to have the most current version configured.

def getSourceZones():
    printHeader(source_hostname)
    for name_s in zones_s:
        zone_s = source_rc.get_object(name_s)
        print "\t%-16s %-11s %-6s" % (zone_s.name, zone_s.state,zone_s.brand)
        if zone_s.state != 'incomplete':
            source_zones.append(zone_s.name)
        if zone_s.state == 'configured':
            source_conf_zones.append(zone_s.name)
    return (source_zones,source_conf_zones) 

def getInstalledTargetZones():
    printHeader(target_hostname)
    for name_t in zones_t:
        zone_t = target_rc.get_object(name_t)
        print "\t%-16s %-11s %-6s" % (zone_t.name, zone_t.state,zone_t.brand)
        if zone_t.state != 'configured' and zone_t.state != 'incomplete':
            target_zones.append(zone_t.name)
        if zone_t.state == 'configured':
            target_conf_zones.append(zone_t.name)
    return (target_zones,target_conf_zones)

After comparing the states, the script deletes existing configurations that are about to be replaced.
Below you can see the preparation of connecting to the target machine’s RAD zonemgr (rad.bindings.com.oracle.solaris.rad.zonemgr_1). The class that is used here is ZoneManager(rad.client.RADInterface).

Quote from the python help for rad.bindings.com.oracle.solaris.rad.zonemgr_1.ZoneManager:


| Create and delete zones. Changes in the state of zones can be
| monitored through the StateChange event.

def deleteExistingConfiguredTargetZone():
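    # import_zones is assumed to be built earlier in the full script by
    # comparing the source and target zone states (see the sketch after
    # closeRc() below)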
    delete_zone = target_rc.get_object(zonemgr.ZoneManager())
    global zones_t

    for name_d in import_zones:
        for name_t in zones_t:
            zone_t = target_rc.get_object(name_t)
            if name_d.name == zone_t.name:
                delete_zone.delete(name_d.name)
                print "DELETED: %s" % name_d.name
                zones_t.remove(name_t)
                break 

As you can see above, the method used is simply called delete().

When this is done, it is time to export and import configurations.
To export a zone configuration via RAD, the proper class is Zone with its method exportConfig(*args, **kwargs). Imports are done using the class ZoneManager with importConfig(*args, **kwargs) as the method.
For reference, the script’s usage is: zones-sync.py [-h|help] [-service|-svc] [target hostname/ip]

def expImpConfig():
    mgr = target_rc.get_object(zonemgr.ZoneManager())
    for name_i in import_zones:
        zone_i = source_rc.get_object(name_i)
        z_config = zone_i.exportConfig()
        split_config = z_config.splitlines(True)
        mgr.importConfig(False,zone_i.name,split_config)
        print "IMPORTED: %s" % zone_i.name

Well, all that is left to do is to close the connections.

def closeRc():
    source_rc.close()
    target_rc.close()
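
The part that builds import_zones by comparing the gathered states belongs to the larger script and is not shown here. To give you an idea of how the pieces could be wired together, here is a simplified, illustrative sketch (buildImportList() and main() are my naming, and the real selection logic does more):

def buildImportList():
    # sync a source zone if its config is missing on the target, or if it
    # is active (installed/running) on the source while the target only
    # has it in the configured state
    global import_zones
    import_zones = []
    target_names = target_zones + target_conf_zones
    for name_s in zones_s:
        zone_s = source_rc.get_object(name_s)
        if zone_s.name not in source_zones:
            continue
        if zone_s.name not in target_names:
            import_zones.append(name_s)
        elif zone_s.name not in source_conf_zones and zone_s.name in target_conf_zones:
            import_zones.append(name_s)

def main():
    getHostnames()
    connectRAD()
    getZoneLists()
    getSourceZones()
    getInstalledTargetZones()
    buildImportList()
    deleteExistingConfiguredTargetZone()
    expImpConfig()
    closeRc()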

And to give you an idea of what this may look like and what it does, let’s check out the following outputs.

In the following, the script was used with -service. This is the way I use it for scheduled/periodic services. The hostname pattern is used to define the source and target hostnames.

u2034611@AC6A000:~$ /net/ap6shr1/data/shares/soladm/intern/tm/zones-sync.py -service 
AC6A000: 
        NAME             STATUS      BRAND 
        kzone1           configured  solaris-kz 
        zone2            configured  solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 
AC6S000: 
        NAME             STATUS      BRAND 
        kzone1           running     solaris-kz 
        zone2            installed   solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 

As you can see nothing was deleted or imported. This is what it looks like when everything is in sync.

Let’s tell the script which server is supposed to be the target in order to sync it with the localhost.

u2034611@AC6A000:~$ /net/ap6shr1/data/shares/soladm/intern/tm/zones-sync.py AC4S000 
AC6A000: 
        NAME             STATUS      BRAND 
        kzone1           configured  solaris-kz 
        zone2            configured  solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 
AC4S000: 
        NAME             STATUS      BRAND  

IMPORTED: kzone1 
IMPORTED: zone2 
IMPORTED: zone3 
IMPORTED: fuu1 
IMPORTED: fu1 
IMPORTED: oo1 
IMPORTED: kzone 
IMPORTED: zone1 

Above you can see that on one server (AC6A000, in this case the local machine) a bunch of different zones are configured while none exist on the target side. Therefore all of the zone configurations are imported.

Let’s say you have a central server or your local workstation from which you want to sync two global zones. In the following example, two hostnames are passed in.

G u2034611@r0065262 % /var/tmp/zones-sync.py ac6s000 10.1.30.107                         
ac6s000: 
        NAME             STATUS      BRAND 
        kzone1           running     solaris-kz 
        zone2            installed   solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris  
10.1.30.107: 
        NAME             STATUS      BRAND 
        kzone1           configured  solaris-kz 
        zone2            configured  solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 

DELETED: kzone1 
DELETED: zone2 
IMPORTED: kzone1 
IMPORTED: zone2 

This time both global zones have several zones in the configured state, and on one host one zone is running while another is in the installed state. That means two zone configurations on the other side are outdated (merely configured). The more current ones, which in this case means rather “runnable” or running, are synced, and the merely configured zones’ configurations were deleted first.

There are many more things to explore about RAD, so have fun and stay calm ;) !


Deploying automated CVE reporting for Solaris 11.3

With Solaris 11.2, Oracle started including quite a few new Solaris features for security and automated deployment. Besides bringing in immutable zones, which I didn’t get to write about yet (which is a shame since they are wonderful), and compliance, Solaris IPS received a new package called pkg://solaris/support/critical-patch-update/solaris-11-cpu. This package includes the packages that are considered to be part of the Critical Patch Update. In addition to the package name and version, it now enables you to see which CVE each of these packages belongs to.
You can use the pkg command to do some basic searches. Immutable zones, compliance and CVEs are only three of the security features that were added with Solaris 11.2 and Solaris 11.3.

Most likely an admin will not want to log in to each of his hundreds, thousands or even more Solaris installations in order to install needed packages and take care of proper configuration. Can’t blame him. That’s probably what the Solaris team thought when puppet became part of the IPS repository with Solaris 11.2. There is not that much to say about it for those who don’t know it: it does what it is supposed to do and is a relief for every admin if used right. In case you are interested in some really great articles, go check out Manuel Zach’s blog. For automation in general you will also want to read Glynn Foster’s blog.
Now why am I writing about these “old” Solaris 11.2 features when Solaris 11.3 beta was released a few weeks ago already? Well, these are fundamental technologies for getting the most out of Solaris 11.3.

Bringing Solaris IPS and NIST together

Companies mostly use external software that alerts on and reports every single CVE that is out there and triggers a service request for the responsible team. The thing is, it is slow, costs a lot, and you get service requests for software that is not installed on any system. So what happens is that the admin ends up checking it himself.

So I figured I would just do it Solaris style. By now, as I write this post, we have fully automated CVE reporting at work, at no extra cost.

Let’s start with IPS and CVEs. As mentioned before you will need to have a certain package installed.

# pkg install support/critical-patch-update/solaris-11-cpu

This package is updated with every SRU and will include every known CVE for Solaris 11. If you want to know a few basics about it, read Darren Moffat’s blog.
Use the following command to see all the included packages:

# pkg contents -ro name,value solaris-11-cpu|grep '^CVE.*'
CVE-1999-0103                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-0.175.1.10.0.3.2
CVE-2002-2443                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-0.175.1.10.0.3.2
CVE-2003-0001                 pkg://solaris/driver/network/ethernet/pcn@0.5.11,5.11-0.175.1.11.0.3.2
CVE-2004-0230                 pkg://solaris/system/kernel@0.5.11,5.11-0.175.1.15.0.4.2
CVE-2004-0452                 pkg://solaris/runtime/perl-584/extra@5.8.4,5.11-0.175.1.11.0.3.2
CVE-2004-0452                 pkg://solaris/runtime/perl-584@5.8.4,5.11-0.175.1.11.0.3.2
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-apc@3.0.19,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-idn@0.2.0,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-memcache@2.2.5,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-mysql@5.2.17,5.11-0.175.2.8.0.3.0
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-pear@5.2.17,5.11-0.175.2.8.0.3.0
...
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-tcpwrap@1.1.3,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-xdebug@2.2.0,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-zendopcache@7.0.2,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/php-53@5.3.29,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php52@5.2.17,5.11-0.175.2.13.0.4.0
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php53@5.3.29,5.11-0.175.2.13.0.4.0
CVE-2015-4770                 pkg://solaris/system/file-system/ufs@0.5.11,5.11-0.175.2.11.0.3.2
CVE-2015-4770                 pkg://solaris/system/kernel/platform@0.5.11,5.11-0.175.2.11.0.4.2
CVE-2015-5073                 pkg://solaris/library/pcre@8.37,5.11-0.175.2.13.0.3.0
CVE-2015-5477                 pkg://solaris/network/dns/bind@9.6.3.11.2,5.11-0.175.2.12.0.7.0
CVE-2015-5477                 pkg://solaris/network/dns/bind@9.6.3.11.2,5.11-0.175.2.13.0.5.0
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@9.6.3.11.2,5.11-0.175.2.12.0.7.0
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@9.6.3.11.2,5.11-0.175.2.13.0.5.0

Now we have the information of which Solaris IPS package belongs to which CVE-ID. That’s nice, but how do we get all the other CVE information: base score, summary, access vector, etc.? In order to add more details, I imported the NIST NVD files into a sqlite3 database.
The files can be downloaded either as compressed gz-files or as regular XML. For more information visit https://nvd.nist.gov/download.cfm.
I imported the XML files into a sqlite3 database for better performance. If you don’t want to work your way through the XML structures yourself, use the nvd2sqlite3 Python program that I came across while writing my own. I like the approach of just having to do:

# curl https://nvd.nist.gov/static/feeds/xml/cve/nvdcve-2.0-2015.xml | nvd2sqlite3 -d /wherever/you/like/to/keep/the/dbfile

In order to keep your NIST CVE database current I put the commands in a script and created a crontab entry.

getNvdCve.sh:

#!/usr/bin/bash

# pull the yearly NVD feeds plus the modified and recent feeds
for feed in {2002..2015} modified recent; do
    curl https://nvd.nist.gov/feeds/xml/cve/nvdcve-2.0-${feed}.xml | nvd2sqlite3 -d /data/shares/NIST/cvedb
done

cron:

0 5 * * * /scripts/admin/getNvdCve.sh

Alright, this gives us a database with all the information we need and want. The schema of the sqlite3 database looks like this:

sqlite> .schema
CREATE TABLE nvd (access_vector varchar,
                  access_complexity varchar,
                  authentication varchar,
                  availability_impact varchar,
                  confidentiality_impact varchar,
                  cve_id text primary key,
                  integrity_impact varchar,
                  last_modified_datetime varchar,
                  published_datetime varchar,
                  score real,
                  summary varchar,
                  urls varchar,
                  vulnerable_software_list);

Next up is matching the data in the db with the IPS information. When I started working on this I focused on console output only, but when I looked at our centralized compliance reports I wanted the same thing for CVEs: a central CVE reporting. So I ended up writing the output to an html file on an Apache webserver.

Since the standard Perl in Solaris does not contain DBD::SQLite, I switched to Python.
cveList.py does the following (a simplified sketch of the core steps follows the list):

  • get all the installed package information from IPS
  • get all the information from the solaris-11-cpu package
  • match the above data and determine which packages are installed and in which version (lower version = unpatched CVE)
  • create the html report file with all the needed elements
  • connect to the sqlite3 db and get cve_id, access_vector, score and summary
  • write the query output to the file, sorted by unpatched and patched CVEs
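
Here is a minimal, illustrative sketch of the mapping and lookup steps, assuming the database path from above; the installed-version comparison and the html generation are left out, and the helper names are mine:

import sqlite3
import subprocess

DB = '/data/shares/NIST/cvedb'

def cpu_cve_pairs():
    # (CVE-ID, package FMRI) pairs from the solaris-11-cpu package,
    # the same data the grep one-liner above produces
    out = subprocess.check_output(
        ['pkg', 'contents', '-ro', 'name,value', 'solaris-11-cpu'])
    pairs = []
    for line in out.splitlines():
        fields = line.split()
        if len(fields) == 2 and fields[0].startswith('CVE-'):
            pairs.append((fields[0], fields[1]))
    return pairs

def cve_details(cve_id):
    # access vector, score and summary from the NVD sqlite3 database
    conn = sqlite3.connect(DB)
    try:
        cur = conn.execute('SELECT access_vector, score, summary '
                           'FROM nvd WHERE cve_id = ?', (cve_id,))
        return cur.fetchone()
    finally:
        conn.close()

for cve, fmri in cpu_cve_pairs():
    row = cve_details(cve)
    if row:
        print "%-15s %-12s %-5s %s" % (cve, row[0], row[1], fmri)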

The CVE report looks like this:
[Screenshot: cveReport]

What we have now is a script that pulls in all the NVD information from NIST and stores it in a sqlite3 database, and a script that matches this information with the installed IPS packages and generates a CVE report in html format.

Scheduled Services with Solaris 11.3

Next up is to automatically generate these reports. With Solaris 11.2 cron would be the way to do it. Trivial entry in the crontab and done.

30 5 * * * /scripts/admin/cveList.py

With Solaris 11.3, cron is almost obsolete. Why? Because of SMF and the new scheduled and periodic services. I’m not gonna talk about why SMF is great or not; to me it is great and I never ran into any serious problem. On a Solaris 11.3 installation I will move custom cronjobs to SMF and create scheduled services.
These do the same as cron, plus everything else SMF has to offer.
The scheduled service I use for the CVE reporting is the following:

<?xml version="1.0" ?>
<!DOCTYPE service_bundle
  SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
    Manifest created by svcbundle (2015-Sep-04 15:05:15+0200)
-->
<service_bundle type="manifest" name="site/cveList">
    <service version="1" type="service" name="site/cveList">
        <!--
            The following dependency keeps us from starting until the
            multi-user milestone is reached.
        -->
        <dependency restart_on="none" type="service"
            name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>
        <instance enabled="true" name="default" >
                <scheduled_method
                        interval='day'
                        hour='5'
                        minute='30'
                        exec='/lib/svc/method/cveList.py'
                        timeout_seconds='0'>
                                <method_context>
                                        <method_credential user='root' group='root' />
                                </method_context>
                </scheduled_method>
        </instance>
    </service>
</service_bundle>

Use svcbundle to generate your own manifest.

# svcbundle -o /var/tmp/cveList.xml -s service-name=site/cveList -s start-method=/lib/svc/method/cveList.py -s interval=day -s hour=5 -s minute=30
# svccfg validate /var/tmp/cveList.xml

It’s as easy as that. Add to it whatever you feel is needed, mail reporting in case of a status change for example.

Well, now we have a service that generates a CVE report of a server every day at 5:30 am.
We need more, so let’s move on to the next piece.

Building a custom IPS package

The best way to deploy any piece of software on a Solaris 11.x server is with IPS.
IPS packages are very easy to use once they are built and published: list, install, info, uninstall, contents, search, freeze, unfreeze, etc. It is always the same command pattern that makes it that way. But how do you build your own packages? That is always a bit more tricky than using them. Instead of explaining how it works, I will just link to another article written by Glynn Foster which covers everything you need to know.
If you don’t want to type in every single step, this little script might help. Adjust your IPS repository and your paths, and all you need is a so-called mog file, which in this case could look like this:

set name=pkg.fmri value=pkg://custom/security/custom-cveList@1.0.2
set name=variant.arch value=sparc value=i386
set name=pkg.description value="custom CVE reporting"
set name=pkg.summary value="custom Solaris CVE reports"
<transform dir path=lib$ -> drop>
<transform dir path=lib/svc$ -> drop>
<transform dir path=lib/svc/manifest$ -> drop>
<transform dir path=lib/svc/manifest/site$ -> set owner root>
<transform dir path=lib/svc/manifest/site$ -> set group sys>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set owner root>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set group bin>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set mode 0444>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> default restart_fmri svc:/system/manifest-import:default>
<transform dir path=lib/svc/method$ -> drop>
<transform file path=lib/svc/method/cveList\.py$ -> set owner root>
<transform file path=lib/svc/method/cveList\.py$ -> set group bin>
<transform file path=lib/svc/method/cveList\.py$ -> set mode 0555>

Besides the mog file, you just enter the path to your proto directory that includes the software that is supposed to be packaged up, and you are good to go. You will be asked to type in the name of the package and that’s it; the rest is done automatically. You might have to adjust the configuration inside your mog file, in case of unresolved dependencies for example. Should you be missing a custom IPS repo, create one real quick and then start packaging.

Creating a custom IPS repo and sharing it via NFS:

# zfs create -po mountpoint=/ips/custom rpool/ips/custom
# zfs list -r rpool/ips
NAME              USED  AVAIL  REFER  MOUNTPOINT
rpool/ips          62K  36.2G    31K  /rpool/ips
rpool/ips/custom   31K  36.2G    31K  /ips/custom
# pkgrepo create /ips/custom
# zfs set share=name=custom_ips,path=/ips/custom,prot=nfs rpool/ips/custom
name=custom_ips,path=/ips/custom,prot=nfs
# zfs set share.nfs=on rpool/ips/custom
# zfs get share
NAME                                                           PROPERTY  VALUE  SOURCE
rpool/ips/custom                                               share     name=custom_ips,path=/ips/custom,prot=nfs  local

Let’s actually build the cveList IPS pkg.

# /scripts/admin/buildIpsPkg.sh /scripts/admin/IPS/CVE/MOG/custom-cveList.mog /scripts/admin/IPS/CVE/PROTO.CVE

Need some information about the package. Answer the following questions to generate a mogrify-file (package_name.mog) or if you have a package_name.mog template execute this script with args:

 /scripts/admin/buildIpsPkg.sh [path_to_mog_file] [path_to_proto_dir]


Enter Package Name (eg. custom-compliance): custom-cveList

Ready! Generating the manifest.

pkgsend generate... OK
pkgmogrify... OK
pkgdepend generate... OK
pkgdepend resolve... OK
eliminating version numbers on required dependencies... OK
testing manifest against Solaris 11.2 repository, pkglint ... 
Lint engine setup...

Ignoring -r option, existing image found.
Starting lint run...

OK

Review the manifest file custom-cveList.p5m.4.res!


publish the ips package with:
pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res

check the package with:
pkg refresh
pkg info -r custom-cveList
pkg contents -m -r custom-cveList
pkg install -nv custom-cveList

remove it:
pkgrepo remove -s file:///data/ips/custom pkg://custom/security/custom-cveList@1.0.2

Et voilà, the package is ready to be published.

# pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res
# pkg refresh
# pkg info custom-cveList
             Name: security/custom-cveList
          Summary: custom Solaris CVE reports
      Description: custom CVE reporting
            State: Installed
        Publisher: custom
          Version: 1.0.2
           Branch: None
   Packaging Date: Tue Sep 08 16:48:50 2015
Last Install Time: Tue Sep 08 16:52:07 2015
             Size: 8.80 kB
             FMRI: pkg://custom/security/custom-cveList@1.0.2:20150908T164850Z

DONE! At least with getting CVEs, matching CVEs, scheduling reports and building a package out of all of this.

Let’s deploy.

Let puppet do your job

In this case I am already running a puppet master and several puppet agents. Since we are talking about multiple hundreds or thousands of Solaris installations, a master-agent setup is exactly what we want.
Nobody has the time and endurance to log in to each system and do a pkg install custom-cveList.
I figured a puppet module would be just what I want.
And to save time, here it is:

# cat /etc/puppet/modules/cve/manifests/init.pp
class cve {
        if $::operatingsystemrelease == '11.3' or $::operatingsystemmajrelease == '12' {
                package { 'custom-cveList':
                        ensure => 'present',
                }
        } 
        if $::operatingsystemrelease == '11.2' {
                cron { 'cveList' :
                        ensure => 'present',
                        command => '/scripts/admin/cveList.py',
                        user => 'root',
                        hour => 5,
                        minute => 30,
                }
        }
}

The first if-statement will install the recently built and published IPS pkg. Since scheduled services are not available in Solaris 11.2, I had to add a crontab entry for that case, which is the second if-statement above.
Now just add it to your /etc/puppet/manifests/site.pp and you are all set up.

node default {
        include nameservice
        include tsm
        include arc
        include compliance
        include mounts
        include users
        include cve
}

That is it. From now on, every single Solaris server that runs a puppet agent will have your custom CVE reporting deployed.
Reading all this actually takes longer than just doing it, and you only need to go through all of it once.

I know this looks like a lot, but it really isn’t. If you want to leave out the IPS part, just add your scripts and service/cron to your puppet configuration.
This is easier to handle than a third-party tool. If you want, you could integrate it into your ITIL process and its tools to automate CVE service request handling.
