Privileged Command Execution History Reporting

How often have you been asked by management or auditors to show a list of administrative commands that were used on a system?
With a properly configured audit service (e.g. cusa) on Solaris 11.3 and earlier, you either already had a script or a pricey external tool that filtered the audit trail for you, or you had to do it manually (auditreduce/praudit) plus whatever post-processing it took to make the result worth showing anyone.
With Solaris 11.4, Oracle ships the admhist utility, which takes all the manual overhead away and adds great options to narrow down the results to a certain date, time, or type of event.

For a better understanding and overview up front, here is the help output of the admhist command:

root@wacken:~# admhist -h
admhist: illegal option -- h
usage:  admhist [-a date-time] [-b date-time] [-d date-time]
         [-t [tags-file:]tag[,tag,...]] [-z zonename] [-v] [audit-trail-file]...
        admhist [-a date-time] [-b date-time] [-d date-time]
         [-t [tags-file:]tag[,tag,...]] [-z zonename] [-v] -R pathname
        Valid date-time formats include:
                today, yesterday
                last week, last month
                last 3 days, last 8 hours

So let’s check what happened over the last 4 hours, for example. The -a option shows entries after the given date-time; in this case (-a “last 4 hours”) that is everything within the last 4 hours. If you want every privileged execution before a given date-time, just use -b instead of -a.

root@wacken:~# admhist -a "last 4 hours"
2018-02-27 09:59:16.190+01:00 /usr/sbin/zfs zfs help
2018-02-27 10:00:22.954+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:22.972+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:23.474+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:24.736+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:26.646+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:27.237+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:33.124+01:00 /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:01:25.822+01:00 /usr/sbin/zpool zpool list
2018-02-27 10:01:37.868+01:00 /usr/sbin/quota
2018-02-27 10:03:07.955+01:00 /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:03:17.057+01:00 /usr/sbin/zpool zpool status tpool
2018-02-27 10:03:20.037+01:00 /usr/sbin/zfs zfs
2018-02-27 10:03:22.404+01:00 /usr/sbin/zfs zfs help
2018-02-27 10:03:38.249+01:00 /usr/sbin/zpool zpool upgrade
2018-02-27 10:03:40.886+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:03:45.684+01:00 /usr/sbin/zpool zpool upgrade -a
2018-02-27 10:04:02.408+01:00 /usr/sbin/zfs zfs upgrade -v
2018-02-27 10:04:05.613+01:00 /usr/sbin/zfs zfs upgrade
2018-02-27 10:04:11.243+01:00 /usr/sbin/zpool zpool help
2018-02-27 10:04:17.903+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:04:21.769+01:00 /usr/sbin/zpool zpool XXX
2018-02-27 10:04:25.335+01:00 /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:31.436+01:00 /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:04:33.208+01:00 /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:36.321+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:06:02.968+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:06:23.058+01:00 /usr/sbin/zpool zpool XXX tpool /var/tmp/f2 /var/tmp/f3
2018-02-27 10:06:24.896+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:06:32.197+01:00 /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:06:33.828+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:11:18.879+01:00 /usr/sbin/zoneadm -R / list -cp
2018-02-27 10:11:18.962+01:00 /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg info entire

In order to see user and hostname, just use the -v option:

root@wacken:~# admhist -v -a "last 4 hours"
2018-02-27 09:59:16.190+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zfs zfs help
2018-02-27 10:00:22.954+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:22.972+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:23.474+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:24.736+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:26.646+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:27.237+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:33.124+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:01:25.822+01:00 muehle@wacken cwd=/var/tmp /usr/sbin/zpool zpool list
2018-02-27 10:01:37.868+01:00 muehle@wacken cwd=/root /usr/sbin/quota
2018-02-27 10:03:07.955+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:03:17.057+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status tpool
2018-02-27 10:03:20.037+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs
2018-02-27 10:03:22.404+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs help
2018-02-27 10:03:38.249+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool upgrade
2018-02-27 10:03:40.886+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:03:45.684+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool upgrade -a
2018-02-27 10:04:02.408+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs upgrade -v
2018-02-27 10:04:05.613+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs upgrade
2018-02-27 10:04:11.243+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool help
2018-02-27 10:04:17.903+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:04:21.769+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX
2018-02-27 10:04:25.335+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:31.436+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:04:33.208+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:36.321+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:06:02.968+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:06:23.058+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool add tpool /var/tmp/f2 /var/tmp/f3
2018-02-27 10:06:24.896+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:06:32.197+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:06:33.828+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:11:18.879+01:00 muehle@wacken cwd=/ /usr/sbin/zoneadm -R / list -cp
2018-02-27 10:11:18.962+01:00 muehle@wacken cwd=/root /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg info entire

With no further options given, admhist simply lists all the privileged commands that were executed.

root@wacken:~# admhist
2017-04-05 06:08:41.307+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 06:09:14.591+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 06:32:58.689+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:04:04.313+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:19:13.614+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:25:20.168+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:25:40.142+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:26:52.158+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:27:10.400+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:27:35.560+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:28:03.857+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:28:59.362+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:31:26.702+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:31:29.059+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:32:09.722+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:32:16.210+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:32:18.050+02:00 /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg exact-install --accept --be-name s12_b115 entire@5.12- solaris-small-server@5.12-
2017-04-05 07:32:18.051+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:37:52.352+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:51:31.862+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:52:11.834+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:55:48.995+02:00 /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg install docker
2017-04-05 07:55:48.997+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 08:15:30.826+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 08:15:52.467+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 08:23:38.643+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 09:11:41.226+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:09:59.772+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:10:02.842+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:10:17.952+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:10:18.553+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:17:04.912+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 11:25:39.775+02:00 /usr/lib/svcadm pfexec-auth /usr/sbin/svcadm svcadm disable ocm
2017-04-24 11:27:24.889+02:00 /usr/lib/zfs pfexec-auth /usr/sbin/zfs zfs list -r -o name,used,avail,refer,compressratio,quota,reserv,aclmode,aclinherit,compression,atime,dedup,mounted,mountpoint
2017-04-24 11:28:51.554+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 11:28:51.640+02:00 /usr/lib/pkg pfexec-auth /usr/bin/pkg pkg install docker
2017-04-24 11:29:26.933+02:00 /usr/lib/zfs pfexec-auth /usr/sbin/zfs zfs list -r -o name,used,avail,refer,compressratio,quota,reserv,aclmode,aclinherit,compression,atime,dedup,mounted,mountpoint
2017-04-24 11:31:15.562+02:00 /usr/sbin/zfs zfs create -o mountpoint=/var/lib/docker rpool/VARSHARE/docker
2017-04-24 11:31:24.490+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 11:33:00.123+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1

This is a very handy utility if you ask me. Nice and easy to use, especially since you don’t have to specify an exact time and date but can instead pass “last 2 days”, “last 48 hours”, “last month”, or so.

Maybe something like -u (to filter on a certain user/UID) would be a nice additional option, too.
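Until then, the regular shape of the -v output makes post-processing easy. Here is a minimal Python sketch, assuming the field layout shown above (date, time, user@host, cwd=..., executable, arguments); the function names are my own:

```python
from collections import Counter

def parse_admhist_line(line):
    """Split one `admhist -v` line into its fields.

    Assumed layout (as in the output above):
    <date> <time> <user>@<host> cwd=<dir> <executable> [args...]
    """
    parts = line.split()
    user, host = parts[2].split("@", 1)
    return {
        "timestamp": parts[0] + " " + parts[1],
        "user": user,
        "host": host,
        "cwd": parts[3].split("=", 1)[1],
        "executable": parts[4],
        "args": parts[5:],
    }

def summarize(lines):
    """Count (user, executable) pairs for a quick audit overview."""
    counts = Counter()
    for line in lines:
        rec = parse_admhist_line(line)
        counts[(rec["user"], rec["executable"])] += 1
    return counts
```

Feeding it the captured output of `admhist -v` would give a per-user tally of which privileged binaries were run, which is often exactly the summary an auditor asks for.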


Tailoring meets Solaris Compliance

With Solaris 11.3, Oracle added a new feature to compliance. It is called tailoring, and it does pretty much exactly that: instead of having to customize benchmark files manually, tailoring does the job for you. That is the trivial description of what tailoring does.
But under the hood, tailoring is capable of so much more. Used the right way, it takes the automation of compliance reporting to a more sophisticated level.

How to get started

Before talking about how tailoring can enhance the way you use and customize compliance in Solaris, let me quickly walk you through how it works.
Using tailoring is as simple and intuitive as running an assessment. All you need to do is type “compliance tailor -t <tailoring-name>”. The -t option declares which tailoring shall be loaded; in case none exists, it will be created. It is not a required option, but in order to store the tailoring you would have to set the name manually with “set tailoring=” later on anyway.

Example without the option:

ROOT@AP6S500 > compliance tailor

Documented commands (type help <topic>):
clear   delete   exit    include  list  pick  value 
commit  exclude  export  info     load  set   values

Miscellaneous help topics:

tailoring> info
        benchmark: not set
        profile: not set

Example with -t:

ROOT@AP6S500 > compliance tailor -t tailoring.tm2 
*** compliance tailor: Can't load tailoring 'tailoring.tm2': no existing tailoring: 'tailoring.tm2', initializing

tailoring:tailoring.tm2> info
        benchmark: not set
        profile: not set

As the examples already showed, the tailoring CLI command info shows which tailoring, benchmark and profile are set.
From this point on you could use set …=… all the way until your tailoring is done and you commit it. If you would rather save some time and typing, pick is the command of your choice.

tailoring:tailoring.tm2> pick

Use the arrow keys to navigate up and down and pick the benchmark and profile that you would like to use for your tailoring. This can be seen as a sort of template. When you have made your selection, press ESC. info will show what you selected.

tailoring:tailoring.tm2> info

Tailoring, benchmark and profile are now set, which means tests can be picked next.

tailoring:tailoring.tm2> pick

The pick screen now shows the tests of the previously chosen benchmark and profile. “x” stands for excluded, while “>” indicates an activated test. This is where you tailor your compliance check. As before, press ESC when you are done.
With the command export you can see what changes you have made. The output consists of the commands that could be used to manually include and exclude tests instead of using pick.

tailoring:tailoring.tm2> export
set tailoring=tailoring.tm2
# version=2016-02-26T16:44:36.000+00:00
set benchmark=tm
set profile=tm
tailoring:tailoring.tm2> pick
tailoring:tailoring.tm2> export
set tailoring=tailoring.tm2
# version=2016-02-26T17:02:10.000+00:00
set benchmark=tm
set profile=tm
# ivv-000: Compliance integrity is given
exclude ivv-000
# ivv-001: LDAP client configuration is ok
include ivv-001
# OSC-54005: Package integrity is verified
exclude OSC-54005
# OSC-53005: The OS version is current
exclude OSC-53005
# OSC-53505: Package signature checking is globally activated
exclude OSC-53505

Should you be interested in what the tailoring file itself looks like, simply use export with the -x option. This will give you the XML output.
All that is left to do is commit your changes, et voilà … exit and done!
In case you have been fiddling around and created a few tailorings already, list will show all the existing tailorings.

Tailoring vs. Benchmarks/Profiles only

Now that we have flown through the basics of Solaris compliance tailoring, we already know enough to talk about why EVERYONE should use tailoring.
Maybe you have read one or even all of my earlier Solaris compliance posts, or heard me talk about it; if so, you might remember me saying that it is really quite fast and simple to customize. Well, it just got way easier. Not all out of the box yet, but almost, and I am sure someone has already requested an enhancement. :-D
So what am I talking about?!
The files for Solaris compliance can be found under two paths. One is /usr/lib/compliance. This is probably the only one you might have been working in if you customized anything; adding benchmarks, adding tests or editing profiles was/is done here. Other than that, all the content here is pretty much static until a change comes with an update (SRU). With Solaris 11.3 and tailoring, the compliance benchmark directories received another subdirectory called tailorings, which is empty by default.
All the changes and information produced while using the compliance command live under /var/share/compliance. It is important to understand that this content should stay untouched; just leave this path to Solaris and engineering. But it is always nice and helpful to know where to look for changes.
Let’s take a look at /var/share/compliance/tailorings.

G muehle@AP6S500 % ls -l /var/share/compliance/tailorings 
total 60
-rw-r--r--   1 root     root         495 Feb 16 14:21 ivv-tailor.xccdf.xml
-rw-r--r--   1 root     root         964 Feb 16 14:05
-rw-r--r--   1 root     root         952 Feb 26 18:07 tailoring.tm2.xccdf.xml
-rw-r--r--   1 root     root         489 Feb 17 14:03 test.xccdf.xml
-rw-r--r--   1 root     root       24844 Feb 17 15:11 test123.xccdf.xml

This is the place where compliance tailor saves a tailoring after you commit it. The content of /var/share/compliance/tailorings/tailoring.tm2.xccdf.xml is exactly what export -x showed us earlier.
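Since the tailoring file is XCCDF XML, it can also be inspected programmatically. The sketch below uses a simplified, hypothetical sample (a real XCCDF tailoring carries namespaces and more metadata), but the select elements with idref/selected attributes are the core of the format:

```python
import xml.etree.ElementTree as ET

# Simplified, hypothetical tailoring snippet for illustration;
# real files produced by `compliance tailor` include XML namespaces.
SAMPLE = """\
<Tailoring id="tailoring.tm2">
  <Profile id="tm" extends="tm">
    <select idref="ivv-000" selected="false"/>
    <select idref="ivv-001" selected="true"/>
    <select idref="OSC-54005" selected="false"/>
  </Profile>
</Tailoring>
"""

def selections(xml_text):
    """Return {rule-id: included?} from the <select> elements."""
    root = ET.fromstring(xml_text)
    return {
        sel.get("idref"): sel.get("selected") == "true"
        for sel in root.iter("select")
    }
```

A quick report of which tests a committed tailoring includes or excludes is then a one-liner over the file content.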

Another very interesting directory is /var/share/compliance/assessments. I will write more about why this is hopefully soon. I am working on customizing Solaris compliance for a larger scale environment and this directory plays an important role for that.

But let’s get back on track and talk about how much of an enhancement tailoring is.
At the moment we have different IPS packages with different benchmarks, each with different profiles, just so that different scenarios are covered.
That means we spend time customizing large XML files, and we also have to spend time maintaining them.
Now, all we do is package up the tailoring file or a compliance tailor -f command file with includes and excludes in IPS. Less complexity and less maintenance! No more duplicating lines and lines of code only to get a different set of tests that is supposed to be used.
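Such a compliance tailor -f command file can be generated from a plain list of test IDs. Here is a hedged Python sketch that emits the same set/include/exclude commands that export displayed earlier; the function name and its interface are my own invention:

```python
def tailoring_commands(name, benchmark, profile, include=(), exclude=()):
    """Build the text of a command file for `compliance tailor -f`,
    mirroring the `export` output format shown earlier."""
    lines = [
        f"set tailoring={name}",
        f"set benchmark={benchmark}",
        f"set profile={profile}",
    ]
    lines += [f"include {t}" for t in include]
    lines += [f"exclude {t}" for t in exclude]
    return "\n".join(lines) + "\n"
```

Writing the returned text to a file and shipping that file in an IPS package keeps the whole delta under version control instead of duplicating benchmark XML.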
When you think about it, tailorings are the delta to a certain benchmark. So, what if you had one large benchmark that includes all the available tests and, let’s say, preconfigured profiles for solaris, pci-dss and a “complete” profile? To create your own profile, just place your tailoring in /usr/lib/compliance/benchmark/benchmark-name/tailorings/ and run the following:

# compliance assess -t tailoring-name

Using different tests depending on the application has become really simple and quick to prepare and run. Your tailoring works everywhere, no matter whether a benchmark has tests included or excluded. Really nice! Add IPS and Puppet to all of this and you can spend much more time on other topics.

Right now such a “complete” benchmark needs to be created by the customer. Not much of a problem if you have already taken care of that, but I would guess not too many have. And even if you have your own all-containing benchmark, with each update you might be missing something in it, tests or whatever, so you still have to maintain thousands of lines of XML content. :-(
So hopefully such a benchmark will make it into a future release of compliance.

Tailoring simplifies Solaris Compliance a lot and saves you a lot of time. It is great! Try it!

Benefits of Solaris Immutable Zones

Over the last couple of weeks, or actually months, I was lucky to talk to a lot of other Oracle Solaris customers and other Linux and Unix users/admins. One question I got asked most of the time is: why do I use immutable zones? Where is the benefit if your data can still be read and stolen?
Since I finally got some time for a blog post, I figured I would share the answer with whoever might be interested. Originally I had planned to write a HowTo post ever since this feature was released. But time has passed, and the more important question at the moment seems to be the WHY rather than the HOW.

The answer to why I use immutable (global) zones is security, simplicity, and speed. And all of this at no extra cost.
Let me explain this in more detail.


Security is often just seen as something that keeps your data safe and protects the IT from attackers/hackers. That is definitely part of what security is, but there is so much more to it. Why are mostly hackers, attackers, or let’s say external people, considered a threat to the system, but hardly ever the admins or users on the system itself? Why trust yourself? And what is it I want to protect or prevent?

Immutable zones aim at preventing data manipulation and protect you from headless admin mistakes like rm -r * in the wrong directory or the wrong terminal, from misconfiguration of the system, and therefore of course also from anyone (attackers or users) reconfiguring your system. Yes, the data can be read, but not changed. I am very sure most of you reading this have dealt with the sudden appearance of a fault where, when asked what happened and what was changed, nobody had done anything. Why bother with this question at all? I don’t want to think about what the application users might be doing in /etc, or hope that the new admin is not going to destroy datasets, clean out directories, or even misconfigure RBAC.
Immutable zones ensure that the system stays exactly the way it is until I change something intentionally.

That was the technical point of view. But a growing field of security is compliance. I wonder how much money and time is spent by companies just to somehow assure the auditor that the system configuration is compliant throughout the year, and even more how much is spent to make sure it actually stayed the same. Scripts were written, mistakes were corrected, you got more gray hairs over the last couple of months, and the meetings with the auditors are most probably not the highlight of the year. Save time and money and just tell/show the auditor that the system was and is immutable (read-only) and therefore nothing changed! The easiest way to do so, by the way, is to use the Solaris compliance framework.

Another very important fact is that this is not achieved by just mounting datasets read-only; it is deeply integrated into Solaris and based on privileges.


Chances are high that not everyone out there is working with Solaris, but rather with Windows or some Linux distro. So let me start with a short comment on what simplicity does not mean to me as a Solaris guy. Simplicity does not mean that I don’t have to install additional software or spend time integrating a feature; that is what I am used to and what is normal to me.
I just start using security features, or in the case of, for example, RBAC, “have” to use them. They are always there.

So what is it that makes immutable zones simple then? Well, let me just show you the steps it takes to turn a non-immutable zone into an immutable one.

root@GZ:~# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   2 ap1s001          running     /zones/ap1s001               solaris    excl

root@GZ:~# zonecfg -z ap1s001 set file-mac-profile=fixed-configuration

root@GZ:~# zlogin ap1s001 init 6

That’s it!
You enable immutable zones by simply changing the file-mac-profile value to either strict (be careful!), fixed-configuration or flexible-configuration and then rebooting the zone. For global zones, dynamic-zones is available as well. In case you want to go back to a regular type of zone, just use none as the value for file-mac-profile.
Here is a quote from the zonecfg man page:

none makes the zone exactly the same as a normal, r/w zone. strict
allows no exceptions to the read-only policy. fixed-configuration
allows the zone to write to files in and below /var, except
directories containing configuration files:

dynamic-zones is equal to fixed-configuration but allows creating
and destroying non-global zones and kernel zones. This profile is
only valid for global zones, including the global zone of a kernel
zone.

flexible-configuration is equal to dynamic-zones, but allows
writing to files in /etc in addition.

zoneadm list -p shows whether a zone is immutable or not.

root@GZ:~# zoneadm list -p

Listed are the fields zoneid:zonename:state:zonepath:uuid:brand:ip-type:r/w:file-mac-profile.
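Since that output is machine-parseable, checking the immutability status across many zones takes only a few lines. A Python sketch, assuming the nine colon-separated fields listed above (any sample values are made up):

```python
# Field names for `zoneadm list -p` output, as listed above
# ("r/w" shortened to "rw" so it works as a plain key).
FIELDS = ("zoneid", "zonename", "state", "zonepath", "uuid",
          "brand", "ip-type", "rw", "file-mac-profile")

def parse_zone_line(line):
    """Map one colon-separated `zoneadm list -p` line onto FIELDS."""
    return dict(zip(FIELDS, line.split(":")))

def immutable_zones(output):
    """Return the names of zones whose file-mac-profile is set
    (i.e. anything other than empty or 'none')."""
    zones = (parse_zone_line(l) for l in output.strip().splitlines())
    return [z["zonename"] for z in zones
            if z.get("file-mac-profile") not in (None, "", "none")]
```

Fed with collected `zoneadm list -p` output from each host, this gives a one-glance list of which zones are actually immutable.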

To add more simplicity, a trusted path (zlogin -T|-U) from the global zone / console can be used to make necessary changes, for example adding a directory or adding a user.
You also don’t have to do wild things when it comes to updating/patching. Just use the pkg update command as you always do.

As you can see it is just simple!


Thanks to the sophisticated integration of immutable zones there is no overhead. No software installed on top of the operating system. No daemon running and checking for or preventing system calls or whatever. Immutable zones run just as fast as non-immutable zones.
Changing back and forth is just a reboot, and I could imagine that this might not even be necessary anymore at some point.
Besides that, this feature will speed up the auditor meetings, as mentioned before.
And the process of setting it up is lightning fast compared to other tools out there.
Not having to worry about the configuration of your system anymore will also speed up the other projects/topics you are working on by saving you time and thoughts/distractions.

Short answer

To tell you the truth, this is what my very first answer to the why question always is, before getting into details:

Why not?!?! Why shouldn’t I use a security feature that is there for free, that works, and that is a no-brainer to use?!
I don’t want to worry about my own stupid mistakes or even those of others. I don’t trust application admins/users. Auditor meetings are over before they even begin. It’s just a great feature!

As I said at the beginning, I was lucky to be able to talk to quite a lot of different admins, engineers, managers, etc., and it was really nice to see how most of them started thinking “Why not, true!”. This feature might not fit every single environment. But does every machine have IPsec, IPfilter, … enabled? Probably not.

I hope this will encourage some of you to make your own experience with this great Solaris feature.

IBM GSKit takes advantage of SPARC M7 hardware encryption

As the Oracle post states: “This, in turn, means that several IBM software products can now make use of on-chip SPARC hardware encryption today, automatically, without significant performance impact.”

Deploying automated CVE reporting for Solaris 11.3

With Solaris 11.2, Oracle started including quite a few new Solaris features for security and automated deployment. Besides bringing in immutable zones, which I didn’t get to write about yet (a shame, since they are wonderful), and compliance, Solaris IPS received a new package called pkg://solaris/support/critical-patch-update/solaris-11-cpu. It lists the packages that are considered to be part of the Critical Patch Update. In addition to the package name and version, this package now enables you to see which CVE each of these packages belongs to.
You can use the pkg command to do some basic searches. Immutable zones, compliance and CVEs are only three of the security features that were added with Solaris 11.2 and Solaris 11.3.

Most likely an admin will not want to log in to each of his hundreds, thousands or even more Solaris installations in order to install needed packages and take care of a proper configuration. Can’t blame him. That is probably what the Solaris team thought when Puppet became part of the IPS repository with Solaris 11.2. There is not much to say about it for those who do not know it: it does what it is supposed to do and is a relief for every admin if used right. In case you are interested in some really great articles, go check out Manuel Zach’s blog. For automation in general you will also want to read Glynn Foster’s blog.
Now why am I writing about these “old” Solaris 11.2 features if Solaris 11.3 beta was released a few weeks ago already? Well, these are fundamental technologies for getting the most out of Solaris 11.3.

Bringing Solaris IPS and NIST together

Companies mostly use external software that alerts on and reports every single CVE that is out there and triggers a service request for the responsible team. The thing is, it is slow, it costs a lot, and you get service requests for software that is not even installed on any system. So what happens is that the admin ends up checking himself anyway.

So I figured I would just do it Solaris style. By now, as I write this post, we have fully automated CVE reporting at work, at no extra cost.

Let’s start with IPS and CVEs. As mentioned before you will need to have a certain package installed.

# pkg install support/critical-patch-update/solaris-11-cpu

This package is updated with every SRU and includes every known CVE for Solaris 11. If you want to know a few basics about it, read Darren Moffat’s blog.
Use the following command to see all the included packages and the CVEs they belong to:

# pkg contents -ro name,value solaris-11-cpu|grep '^CVE.*'
CVE-1999-0103                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-
CVE-2002-2443                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-
CVE-2003-0001                 pkg://solaris/driver/network/ethernet/pcn@0.5.11,5.11-
CVE-2004-0230                 pkg://solaris/system/kernel@0.5.11,5.11-
CVE-2004-0452                 pkg://solaris/runtime/perl-584/extra@5.8.4,5.11-
CVE-2004-0452                 pkg://solaris/runtime/perl-584@5.8.4,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-apc@3.0.19,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-idn@0.2.0,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-memcache@2.2.5,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-mysql@5.2.17,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-pear@5.2.17,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-tcpwrap@1.1.3,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-xdebug@2.2.0,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-zendopcache@7.0.2,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53@5.3.29,5.11-
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php52@5.2.17,5.11-
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php53@5.3.29,5.11-
CVE-2015-4770                 pkg://solaris/system/file-system/ufs@0.5.11,5.11-
CVE-2015-4770                 pkg://solaris/system/kernel/platform@0.5.11,5.11-
CVE-2015-5073                 pkg://solaris/library/pcre@8.37,5.11-
CVE-2015-5477                 pkg://solaris/network/dns/bind@,5.11-
CVE-2015-5477                 pkg://solaris/network/dns/bind@,5.11-
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@,5.11-
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@,5.11-

Now we have the information of which Solaris IPS package belongs to which CVE ID. That’s nice, but how do we get all the other CVE information: base score, summary, access vector, and so on? In order to add these details I imported the NIST NVD files into a sqlite3 database.
The files can be downloaded either as compressed gz-files or as regular xml; for more information visit the NVD download page.
I chose sqlite3 for better performance. If you don’t want to work your way through the xml structures yourself, use nvd2sqlite3, a Python program that I came across while writing my own. I like the approach of just having to do:

# curl | nvd2sqlite3 -d /wherever/you/like/to/keep/the/dbfile

In order to keep your NIST CVE database current I put the commands in a script and created a crontab entry.


curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
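The block of curl lines above (the exact feed URLs are omitted here) could equally be collapsed into a small Python helper. This is only a sketch: BASE_URL is a placeholder, the per-year filename pattern is an assumption, and it relies on nvd2sqlite3 reading the feed on stdin exactly as in the pipeline above.

```python
# update_nvd.py: refresh the local NVD sqlite3 database in one loop
# instead of one hand-written curl line per feed.
# BASE_URL is a placeholder, not the real NVD feed location.
import subprocess

BASE_URL = "https://example.invalid/nvd-feeds"   # placeholder, adjust
DB_FILE = "/data/shares/NIST/cvedb"

def feed_urls(first_year, last_year):
    """Build one feed URL per year (filename pattern is an assumption)."""
    return ["%s/nvdcve-%d.xml.gz" % (BASE_URL, y)
            for y in range(first_year, last_year + 1)]

def refresh(urls, db_file=DB_FILE):
    """Pipe each feed through nvd2sqlite3, like 'curl URL | nvd2sqlite3 -d DB'."""
    for url in urls:
        curl = subprocess.Popen(["curl", "-s", url], stdout=subprocess.PIPE)
        subprocess.check_call(["nvd2sqlite3", "-d", db_file],
                              stdin=curl.stdout)
        curl.stdout.close()
        curl.wait()
```

The cron job would then call refresh(feed_urls(...)) once instead of carrying one line per feed.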


0 5 * * * /scripts/admin/

Alright, this gives us a database with all the information we need. The schema of the sqlite3 database looks like this:

sqlite> .schema
CREATE TABLE nvd (access_vector varchar,
                  access_complexity varchar,
                  authentication varchar,
                  availability_impact varchar,
                  confidentiality_impact varchar,
                  cve_id text primary key,
                  integrity_impact varchar,
                  last_modified_datetime varchar,
                  published_datetime varchar,
                  score real,
                  summary varchar,
                  urls varchar);
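Pulling the extra details out of that database later takes one query per CVE ID. A minimal sketch (the function name is mine; only columns from the schema above are used):

```python
# query_cvedb.py: look up the NVD details for one CVE ID in the 'nvd' table.
import sqlite3

def cve_details(conn, cve_id):
    """Return (score, access_vector, summary) for a CVE ID, or None."""
    cur = conn.execute(
        "SELECT score, access_vector, summary FROM nvd WHERE cve_id = ?",
        (cve_id,))
    return cur.fetchone()
```

Open the database once with sqlite3.connect("/data/shares/NIST/cvedb") and call cve_details(conn, "CVE-2015-5477") per entry in the report.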

Next up is to match the data from the db with the IPS information. When I started working on this I focused on console output only, but when I looked at our centralized compliance reports I wanted the same thing for CVEs: a central CVE reporting. So I ended up writing the output to an html file on an Apache webserver.

Since the standard Perl in Solaris does not contain DBD::SQLite, I switched to Python. The script does the following:

  • get all the installed package information from IPS
  • get all the CVE information from the solaris-11-cpu package
  • match the above data and determine which packages are installed and at which version (lower version = unpatched CVE)
  • create the html report file with all the needed elements
  • connect to the sqlite3 db and get cve_id, access_vector, score and summary
  • write the select output to the file, sorted by unpatched and patched CVEs
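The matching step in the middle of that list can be sketched as follows. This is not the actual script, just a minimal hypothetical illustration: parse_version and classify are my own names, and version comparison is simplified to numeric tuple comparison.

```python
# cve_match.py: classify each CVE/package pair from the solaris-11-cpu
# metadata as patched or unpatched on this system.
# Hypothetical sketch, not the real cveList script.

def parse_version(vstring):
    """Turn '8.37,5.11-0.175.2' into a tuple of ints for comparison."""
    cleaned = vstring.replace(",", ".").replace("-", ".")
    return tuple(int(p) for p in cleaned.split(".") if p.isdigit())

def classify(cpu_lines, installed):
    """cpu_lines: 'CVE-ID  pkg://publisher/name@version' rows as above.
    installed: dict mapping package name to installed version string.
    Returns (unpatched, patched) lists of (cve_id, pkg_name) tuples."""
    unpatched, patched = [], []
    for line in cpu_lines:
        cve_id, fmri = line.split()
        name, fix_version = fmri.split("@", 1)
        name = name.split("/", 3)[-1]           # strip pkg://publisher/
        if name not in installed:
            continue                            # pkg not on this system
        if parse_version(installed[name]) < parse_version(fix_version):
            unpatched.append((cve_id, name))    # fix is newer than installed
        else:
            patched.append((cve_id, name))
    return unpatched, patched
```

The installed dict would be filled from the output of pkg list, and cpu_lines from the pkg contents command shown earlier.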

The CVE report looks like this:

What we have now is a script that pulls in all the NVD information from NIST and stores it in a sqlite3 database, and a script that matches this information with the installed IPS packages and generates a CVE report in html format.

Scheduled Services with Solaris 11.3

Next up is to generate these reports automatically. With Solaris 11.2, cron would be the way to do it: a trivial entry in the crontab and done.

30 5 * * * /scripts/admin/

With Solaris 11.3 cron is almost obsolete. Why? Because of SMF and the new scheduled and periodic services. I’m not gonna talk about whether SMF is great or not; to me it is great and I never ran into any serious problem. If it is a Solaris 11.3 installation I will move custom cronjobs to SMF and create scheduled services.
These do the same as cron, plus everything else SMF has to offer.
The scheduled service I use for the CVE reporting is the following:

<?xml version="1.0" ?>
<!DOCTYPE service_bundle
  SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!-- Manifest created by svcbundle (2015-Sep-04 15:05:15+0200) -->
<service_bundle type="manifest" name="site/cveList">
    <service version="1" type="service" name="site/cveList">
        <!--
            The following dependency keeps us from starting until the
            multi-user milestone is reached.
        -->
        <dependency restart_on="none" type="service"
            name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>
        <instance enabled="true" name="default">
            <!-- runs the report daily at 5:30; exec path assumed from
                 the mog file further down -->
            <scheduled_method interval="day" hour="5" minute="30"
                exec="/lib/svc/method/cveList.py" timeout_seconds="0">
                <method_context>
                    <method_credential user='root' group='root' />
                </method_context>
            </scheduled_method>
        </instance>
    </service>
</service_bundle>

Use svcbundle to generate your own manifest.

# svcbundle -o /var/tmp/cveList.xml -s service-name=site/cveList -s start-method=/lib/svc/method/ -s interval=day -s hour=5 -s minute=30
# svccfg validate /var/tmp/cveList.xml

It’s as easy as that. Add whatever you like or need, for example mail reporting in case of a status change.

Well, now we have a service that generates a CVE report for a server every day at 5:30 a.m.
We need more, so let’s move on to the next piece.

Building a custom IPS package

The best way to deploy any piece of software on a Solaris 11.x server is with IPS.
IPS packages are very easy to use once they are built and published: list, install, info, uninstall, contents, search, freeze, unfreeze, etc. It is always the same command pattern that makes it that way. But building your own packages is always a bit more tricky than using them. Instead of explaining how it works I will just link to another article written by Glynn Foster which covers everything you need to know.
If you don’t want to type in every single step, this little script might help. Adjust your IPS repository and your paths, and all you need is a so-called mog file, which in this case could look like this:

set name=pkg.fmri value=pkg://custom/security/custom-cveList@1.0.2
set name=variant.arch value=sparc value=i386
set name=pkg.description value="custom CVE reporting"
set name=pkg.summary value="custom Solaris CVE reports"
<transform dir path=lib$ -> drop>
<transform dir path=lib/svc$ -> drop>
<transform dir path=lib/svc/manifest$ -> drop>
<transform dir path=lib/svc/manifest/site$ -> set owner root>
<transform dir path=lib/svc/manifest/site$ -> set group sys>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set owner root>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set group bin>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set mode 0444>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> default restart_fmri svc:/system/manifest-import:default>
<transform dir path=lib/svc/method$ -> drop>
<transform file path=lib/svc/method/cveList\.py$ -> set owner root>
<transform file path=lib/svc/method/cveList\.py$ -> set group bin>
<transform file path=lib/svc/method/cveList\.py$ -> set mode 0555>

Besides the mog file you just enter the path to your proto directory that includes the software that is supposed to be packaged up and you are good to go. You will be asked to type in the name of the package and that’s it; the rest is done automatically. You might have to adjust the configuration inside your mog file, for example in case of unresolved dependencies. Should you be missing a custom IPS repo, create one quickly and then start packaging.

Creating a custom IPS repo and sharing it via NFS:

# zfs create -po mountpoint=/ips/custom rpool/ips/custom
# zfs list -r rpool/ips
rpool/ips          62K  36.2G    31K  /rpool/ips
rpool/ips/custom   31K  36.2G    31K  /ips/custom
# pkgrepo create /ips/custom
# zfs set share=name=custom_ips,path=/ips/custom,prot=nfs rpool/ips/custom
# zfs set share.nfs=on rpool/ips/custom
# zfs get share
NAME                                                           PROPERTY  VALUE  SOURCE
rpool/ips/custom                                               share     name=custom_ips,path=/ips/custom,prot=nfs  local

Let’s actually build the cveList IPS pkg.

# /scripts/admin/ /scripts/admin/IPS/CVE/MOG/custom-cveList.mog /scripts/admin/IPS/CVE/PROTO.CVE

Need some information about the package. Answer the following questions to generate a mogrify-file (package_name.mog) or if you have a package_name.mog template execute this script with args:

 /scripts/admin/ [path_to_mog_file] [path_to_proto_dir]

Enter Package Name (eg. custom-compliance): custom-cveList

Ready! Generating the manifest.

pkgsend generate... OK
pkgmogrify... OK
pkgdepend generate... OK
pkgdepend resolve... OK
eliminating version numbers on required dependencies... OK
testing manifest against Solaris 11.2 repository, pkglint ... 
Lint engine setup...

Ignoring -r option, existing image found.
Starting lint run...


Review the manifest file custom-cveList.p5m.4.res!

publish the ips package with:
pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res

check the package with:
pkg refresh
pkg info -r custom-cveList
pkg contents -m -r custom-cveList
pkg install -nv custom-cveList

remove it:
pkgrepo remove -s file:///data/ips/custom pkg://custom/security/custom-cveList@1.0.2

Et voilà, the package is ready to be published.

# pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res
# pkg refresh
# pkg info custom-cveList
             Name: security/custom-cveList
          Summary: custom Solaris CVE reports
      Description: custom CVE reporting
            State: Installed
        Publisher: custom
          Version: 1.0.2
           Branch: None
   Packaging Date: Tue Sep 08 16:48:50 2015
Last Install Time: Tue Sep 08 16:52:07 2015
             Size: 8.80 kB
             FMRI: pkg://custom/security/custom-cveList@1.0.2:20150908T164850Z

DONE! At least with getting CVEs, matching CVEs, scheduling reports and building a package out of all of this.

Let’s deploy.

Let puppet do your job

In this case I am already running a puppet master and several puppet agents. Since I talked about many hundreds or thousands of Solaris installations, a master-agent setup is exactly what we want.
Nobody has the time and endurance to log in to each system and do a pkg install custom-cveList.
I figured a puppet module would be just what I want.
And to save time, here it is:

# cat /etc/puppet/modules/cve/manifests/init.pp
class cve {
        if $::operatingsystemrelease == '11.3' or $::operatingsystemmajrelease == '12' {
                package { 'custom-cveList':
                        ensure => 'present',
                }
        }
        if $::operatingsystemrelease == '11.2' {
                cron { 'cveList' :
                        ensure  => 'present',
                        command => '/scripts/admin/',
                        user    => 'root',
                        hour    => 5,
                        minute  => 30,
                }
        }
}
The first if-statement will install the recently built and published IPS pkg. Since scheduled services are not available in Solaris 11.2, I had to add a crontab entry for that case, which is what the second if-statement above does.
Now just add it to your /etc/puppet/manifest/site.pp and you are all set.

node default {
        include nameservice
        include tsm
        include arc
        include compliance
        include mounts
        include users
        include cve
}

That’s it. From now on, every single Solaris server that runs a puppet agent will have your custom CVE reporting deployed.
Reading all this actually takes longer than just doing it, and you only need to go through it once.

I know this looks like a lot, but it really isn’t. If you want to leave out the IPS part, just add your scripts and service/cron entries to your puppet configuration.
This is easier to handle than a third party tool, and if you want, you could integrate it into your ITIL process and its tools to automate CVE service request handling.


New Compliance Report Design

Oracle just released the beta version of Solaris 11.3, which means we finally get to use new features and improvements.

If you already use Solaris compliance you will run right into one of the improvements that comes with the latest release: a new design for the html report. In case you haven’t used compliance yet (you should definitely take a look) or just don’t remember what an html compliance report looked like with Solaris 11.2, here is a small example.


Quite static, besides the links to the details of each check.
With Solaris 11.3, Bootstrap is used to give the html reports a new look and feel. And it is great! Here is an example of the new design.


Besides the fresh look, the major improvement that comes with Bootstrap is flexibility. Whenever you wanted to see only passed or failed checks you had to run “compliance report -s …” twice: one report for passed and one for failed checks. Now you just need one report that includes all the checks and you choose which kind of result is visible. Multiple selections are possible.


As you can see, you can also search for certain rules/checks, which is quite handy.
Moreover, enhanced grouping of the results is included now too. All in all the new design gives the user a better overview of the results and additional information.

Solaris 11.3 compliance has way more than just this to offer but it’s the small changes that matter too.

Script for CVE in Solaris 11 IPS – Part 1

Quite a while ago Darren Moffat posted some details on how CVEs in Solaris 11.2 work. A great feature that will make life so much easier.

Still, there was one thing that I felt was missing: how do you check which CVEs are patched by the currently installed/running Solaris version?
I’m sure this will be added sometime in the future, and until then I figured I would write a short and simple Perl script that does the work for you.
Here are a few lines of its output:

root@s11-2:~# /scripts/admin/

Installed Version: (Oracle Solaris
Latest    Version: (Oracle Solaris

|---- CVE ----|               |----- PKG @ version ---------------------------------------------------------------------|
 CVE-2012-3548                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.3,5.11-
 CVE-2012-5237                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.3,5.11-
 CVE-2012-5238                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.3,5.11-
 CVE-2012-5239                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.3,5.11-
 CVE-2012-5240                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.3,5.11-
 CVE-2012-5592                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.5,5.11-
 CVE-2012-5593                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.5,5.11-
 CVE-2012-5594                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.5,5.11-
 CVE-2012-5595                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.5,5.11-
 CVE-2012-5596                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.5,5.11-
 CVE-2012-5597                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.5,5.11-
 CVE-2013-3561                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.8,5.11-
 CVE-2013-3562                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.8,5.11-
 CVE-2013-4083                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.8,5.11-
 CVE-2013-4920                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.9,5.11-
 CVE-2013-4921                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.9,5.11-
 CVE-2013-4922                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.9,5.11-
 CVE-2013-4923                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.9,5.11-
 CVE-2013-4924                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.8.9,5.11-
 CVE-2014-5164                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.10.9,5.11-
 CVE-2014-5165                 pkg://solaris/diagnostic/wireshark/wireshark-common@1.10.9,5.11-
 CVE-2014-6529                 pkg://solaris/driver/infiniband/connectx@0.5.11,5.11-
 CVE-2012-4564                 pkg://solaris/image/library/libtiff@3.9.5,5.11-
 CVE-2012-5581                 pkg://solaris/image/library/libtiff@3.9.5,5.11-
 CVE-2013-1960                 pkg://solaris/image/library/libtiff@3.9.5,5.11-
 CVE-2013-1961                 pkg://solaris/image/library/libtiff@3.9.5,5.11-
 CVE-2013-4231                 pkg://solaris/image/library/libtiff@3.9.5,5.11-
 CVE-2013-4232                 pkg://solaris/image/library/libtiff@3.9.5,5.11-
 CVE-2013-1619                 pkg://solaris/library/gnutls@2.8.6,5.11-

As you can see, it will show you every single CVE and package that has been fixed in a previous and/or the currently installed Solaris 11 version.

Here is the script:

I will soon add some more information.
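The comparison at the heart of the script can be sketched like this. This is a hypothetical Python rendering of the idea, not the actual Perl code; the names and the simplified version comparison are mine.

```python
# fixed_cves.py: list CVE/package pairs whose fix landed at or before the
# installed 'entire' version, i.e. CVEs already patched on this box.
# Hypothetical sketch; version handling is simplified.

def version_key(v):
    """'5.11-0.175.2.0.0.26.0' -> tuple of ints for ordering."""
    cleaned = v.replace(",", ".").replace("-", ".")
    return tuple(int(p) for p in cleaned.split(".") if p.isdigit())

def fixed_cves(cpu_lines, entire_version):
    """cpu_lines: 'CVE-ID pkg://.../name@ver,branch' rows as shown above.
    entire_version: the installed 'entire' version string."""
    installed = version_key(entire_version.split(",")[-1])
    fixed = []
    for line in cpu_lines:
        cve_id, fmri = line.split()
        branch = fmri.split(",", 1)[1]       # Solaris branch of the fix
        if version_key(branch) <= installed:
            fixed.append((cve_id, fmri))
    return fixed
```

Anything whose branch version is newer than the installed entire version would belong in the "not yet patched" half of the report instead.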


Solaris 11.2 – Compliance Basics

With version 11.1 Oracle added OpenSCAP to its Solaris IPS repository.
OpenSCAP uses NIST standards to verify the compliance of a system, whether it is about installed packages or certain system configurations. This sounds really great, but it is not as easy to handle. There are a few tools out there to handle the different data exchange formats and help you create your own checks, which means you will end up with a handful of tools to manage the compliance topic. Still better than nothing, though, or doing it all by hand.

The Solaris engineers, though, seemed to feel for their users and used their Python expertise to simplify the user experience. With Solaris 11.2 there are only a few things to know to get started.
OpenSCAP is still installed, but the user doesn’t need to use its complex command structure. With Solaris 11.2 it is all about compliance! And that’s the command, too. Easy, right?
Let’s start with the compliance command.

# compliance
No command specified
        compliance list [-v] [-p]
        compliance list -b [-v] [-p] [benchmark ...]
        compliance list -a [-v] [assessment ...]
        compliance guide [-p profile] [-b benchmark] [-o file]
        compliance guide -a
        compliance assess [-p profile] [-b benchmark] [-a assessment]
        compliance report [-f format] [-s what] [-a assessment] [-o file]
        compliance delete assessment

As you can see, this is almost trivial to use; the commands speak for themselves. list will show you information about benchmarks, profiles and assessments. guide is great for people who like to read about a feature before using it ;). assess will get you really going and by default outputs everything on stdout. report lets you generate reports in three different formats (log, xccdf, and html).
After you installed compliance

# pkg install compliance

you are ready to run compliance checks. And as I said before, it is simple, with no additional configuration needed.

# compliance assess
Assessment will be named 'solaris.Baseline.2015-02-02,11:14'
        Package integrity is verified
        Check all default audit properties

Done. Actually, if you just want to get started with compliance and get the hang of it, this would be all you need. What this does is use the default benchmark and its default profile.
In this case that is solaris – Baseline. Instead of just using assess you could also say compliance assess -b solaris -p Baseline, but there is no need for all the extra typing unless you want to use a different benchmark and/or profile.

#  compliance list -p
pci-dss:        Solaris_PCI-DSS
solaris:        Baseline, Recommended

As you can see above, -p will not only list the available benchmarks but also their profiles.
The following will run the pci-dss benchmark.

# compliance assess -b pci-dss
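If you want to run such an assessment unattended, for instance from cron or a scheduled service, a thin wrapper around the commands above is enough. A sketch only; the helper names are mine, and the flags are the ones from the usage output above:

```python
# run_compliance.py: assemble and run 'compliance assess' and
# 'compliance report' command lines, so a scheduler can produce an
# html report unattended. Sketch; helper names are hypothetical.
import subprocess

def assess_cmd(benchmark=None, profile=None):
    """Omitted args fall back to the defaults (solaris / Baseline)."""
    cmd = ["compliance", "assess"]
    if benchmark:
        cmd += ["-b", benchmark]
    if profile:
        cmd += ["-p", profile]
    return cmd

def report_cmd(fmt="html", outfile=None):
    cmd = ["compliance", "report", "-f", fmt]
    if outfile:
        cmd += ["-o", outfile]
    return cmd

def run(benchmark=None, profile=None, outfile=None):
    subprocess.check_call(assess_cmd(benchmark, profile))
    subprocess.check_call(report_cmd("html", outfile))
```

run("pci-dss") would assess against the pci-dss benchmark and drop an html report, ready to be copied to a central webserver.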

Let’s check out the report command. As I mentioned earlier in this post, compliance in Solaris 11.2 is all about giving the user the opportunity to take care of compliance in a simple administrative way.
So this is how you generate an html report:

# compliance report

The header includes a handful of information like the hostname, date, profile, etc. The score indicates how many of the run tests failed or passed. For more details just look at the Rule Results Summary. As you can see, out of 200 rules/tests/checks, 125 passed, 18 failed, and 57 were not selected. If a rule fails, just click on the link and more information will be provided.

It can’t get easier than this. I am aware that there are other tools out there and that this is OpenSCAP in the background, but which other OS provides you with such a handy tool that skips the annoying usage of extremely long commands or the setup of third-party tools?
And remember, these were only the basics, which everyone can do right away after installation. Compliance has more to offer than just this.

As Darren Moffat already pointed out in his blog entry, so far this needs to be done on the server itself, but the engineers are working on a remote version of compliance.

One more small thing: don’t panic if you run into failed rules which in your eyes should pass. The compliance team is aware of this and will deliver the fixes within the upcoming SRUs. Most tests have already been fixed, so the best thing would be to use the latest version. Latest is greatest!

CA with openssl

Solaris is known for its Secure By Default (SBD) feature and offers a lot of different tools and mechanisms to secure your system and its data. But when it comes to secure communication, for example, you quickly get to the point where you want to use your own certificates and keys. To me this sounded very complex and time-consuming, but that was before I actually started to need and use it. For everyone who hesitates to take this step, here is one way of setting up a small CA environment which can be used to generate or sign keys and certificates.

Let’s start with creating a zone named zoneCA:

# zonecfg -z zoneCA 'create'
# zoneadm -z zoneCA install
# zoneadm -z zoneCA boot
# zlogin -e "+." -C zoneCA

Click your way through the configuration.

Once this is done and the zone is configured and ready, some preparation needs to be done:

# useradd -u 1580 -g 10 -d /data/apps/solCA/ -m -s /usr/bin/bash -c "solaris ca user" solcaadm

# echo "export OPENSSL_CONF=/data/apps/solCA/conf/openssl.cnf" >>/data/apps/solCA/.profile

# mkdir /data/apps/solCA/conf
# cp /etc/openssl/openssl.cnf /data/apps/solCA/conf/

# mkdir certs crl newcerts private pass

# touch index.txt
# echo "01" >serial
# echo "1000" >crlnumber
