Per File Auditing

Another cool improvement that engineering added to Solaris 11.4 is the ability to set auditing for single files.
chmod(1) received another ACE type called audit.
Use the chmod command just as you are used to, for example with ACLs: decide on the permissions you want to audit, use audit instead of e.g. allow, and voilà, per-file auditing is set.
In the following I created a file named pfa.file.
After I set the PFA via the chmod command, auditing is in place for everyone’s successful and failed reads and writes.
As a regular user, in this case muehle, I don’t have permission to write, whereas as root, the owner in this case, I can write to the file.
The corresponding entries in the audit trail are below that first example.

root@wacken:~# touch /var/tmp/pfa.file
root@wacken:~# ls -V /var/tmp/pfa.file
-rw-r--r--   1 root     root           0 Feb 27 12:49 /var/tmp/pfa.file
root@wacken:~# chmod A+everyone@:write_data/read_data:successful_access/failed_access:audit /var/tmp/pfa.file
root@wacken:~# ls -V /var/tmp/pfa.file
-rw-r--r--   1 root     root           0 Feb 27 12:49 /var/tmp/pfa.file
root@wacken:~# su - muehle
Oracle Corporation      SunOS 5.11      st_015.server   February 2018
You have new mail.
muehle@wacken % echo "TEST STRING" >> /var/tmp/pfa.file
zsh: permission denied: /var/tmp/pfa.file
muehle@wacken %
root@wacken:~# echo "TEST STRING" >> /var/tmp/pfa.file

Audit output:

root@wacken:~# tail -0f /var/share/audit/20180227084517.not_terminated.wacken|praudit -s
header,97,2,AUE_su,,wacken,2018-02-27 12:55:22.317+01:00
subject,muehle,muehle,staff,muehle,staff,1666,1608065368,151 1
header,122,2,AUE_OPEN_WC,ace:fp:fe,wacken,2018-02-27 12:55:24.518+01:00
subject,muehle,muehle,staff,muehle,staff,1667,1608065368,151 1
use of privilege,failed use of priv,ALL
return,failure: Permission denied,-1
header,97,2,AUE_su_logout,,wacken,2018-02-27 12:55:32.249+01:00
subject,muehle,muehle,staff,muehle,staff,1666,1608065368,151 1
header,153,2,AUE_CMD_PRIVS,,wacken,2018-02-27 12:55:32.250+01:00
use of privilege,successful use of priv,sys_res_config
subject,muehle,root,root,root,root,1666,1608065368,151 1
header,147,2,AUE_OPEN_W,ace,wacken,2018-02-27 12:55:35.449+01:00
subject,muehle,root,root,root,root,1303,1608065368,151 1
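The raw praudit stream is dense, so here is a small, hedged Python sketch of how header and return records pair up to reveal the failed write. The sample records are copied from the trail above; the parsing logic is my own illustration, not an official praudit interface:

```python
# Pair each praudit -s header with its return line and collect failures.
sample = """header,122,2,AUE_OPEN_WC,ace:fp:fe,wacken,2018-02-27 12:55:24.518+01:00
subject,muehle,muehle,staff,muehle,staff,1667,1608065368,151 1
use of privilege,failed use of priv,ALL
return,failure: Permission denied,-1
header,147,2,AUE_OPEN_W,ace,wacken,2018-02-27 12:55:35.449+01:00
subject,muehle,root,root,root,root,1303,1608065368,151 1"""

failures = []
event = None
for line in sample.splitlines():
    fields = line.split(",")
    if fields[0] == "header":
        event = fields[3]   # audit event name, e.g. AUE_OPEN_WC
    elif fields[0] == "return" and fields[1].startswith("failure"):
        failures.append((event, fields[1]))

print(failures)
```

Note that the successful root write (AUE_OPEN_W) produces no failure record, so only muehle’s denied open shows up.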

Have fun experimenting with it and enhancing your auditing.


Privileged Command Execution History Reporting

How often have you been asked by management or auditors to show a list of the administrative commands that were used on a system?
With a properly configured Solaris 11.3 or earlier (e.g. via cusa), you either already had some script or a pricey external tool that filtered the audit trails for you, or you had to do it manually (auditreduce/praudit) plus whatever else it takes to make the result worth showing anyone.
With Solaris 11.4 Oracle ships the admhist utility, which takes all the manual overhead away and adds great options to narrow down the results to a certain date, time, or type of event.

For a better understanding and overview, here is the help output from the admhist command upfront:

root@wacken:~# admhist -h
admhist: illegal option -- h
usage:  admhist [-a date-time] [-b date-time] [-d date-time]
         [-t [tags-file:]tag[,tag,...]] [-z zonename] [-v] [audit-trail-file]...
        admhist [-a date-time] [-b date-time] [-d date-time]
         [-t [tags-file:]tag[,tag,...]] [-z zonename] [-v] -R pathname
        Valid date-time formats include:
                today, yesterday
                last week, last month
                last 3 days, last 8 hours

So let’s check what was going on over the last 4 hours, for example. The -a option shows entries after the given date-time, in this case (-a “last 4 hours”) everything within the last 4 hours. In case you want every privileged execution before the last 4 hours, just use -b instead of -a.

root@wacken:~# admhist -a "last 4 hours"
2018-02-27 09:59:16.190+01:00 /usr/sbin/zfs zfs help
2018-02-27 10:00:22.954+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:22.972+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:23.474+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:24.736+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:26.646+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:27.237+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:33.124+01:00 /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:01:25.822+01:00 /usr/sbin/zpool zpool list
2018-02-27 10:01:37.868+01:00 /usr/sbin/quota
2018-02-27 10:03:07.955+01:00 /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:03:17.057+01:00 /usr/sbin/zpool zpool status tpool
2018-02-27 10:03:20.037+01:00 /usr/sbin/zfs zfs
2018-02-27 10:03:22.404+01:00 /usr/sbin/zfs zfs help
2018-02-27 10:03:38.249+01:00 /usr/sbin/zpool zpool upgrade
2018-02-27 10:03:40.886+01:00 /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:03:45.684+01:00 /usr/sbin/zpool zpool upgrade -a
2018-02-27 10:04:02.408+01:00 /usr/sbin/zfs zfs upgrade -v
2018-02-27 10:04:05.613+01:00 /usr/sbin/zfs zfs upgrade
2018-02-27 10:04:11.243+01:00 /usr/sbin/zpool zpool help
2018-02-27 10:04:17.903+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:04:21.769+01:00 /usr/sbin/zpool zpool XXX
2018-02-27 10:04:25.335+01:00 /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:31.436+01:00 /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:04:33.208+01:00 /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:36.321+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:06:02.968+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:06:23.058+01:00 /usr/sbin/zpool zpool XXX tpool /var/tmp/f2 /var/tmp/f3
2018-02-27 10:06:24.896+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:06:32.197+01:00 /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:06:33.828+01:00 /usr/sbin/zpool zpool status
2018-02-27 10:11:18.879+01:00 /usr/sbin/zoneadm -R / list -cp
2018-02-27 10:11:18.962+01:00 /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg info entire

In order to see user and hostname just use the option -v:

root@wacken:~# admhist -v -a "last 4 hours"
2018-02-27 09:59:16.190+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zfs zfs help
2018-02-27 10:00:22.954+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:22.972+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:23.474+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:24.736+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:26.646+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:27.237+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:00:33.124+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:01:25.822+01:00 muehle@wacken cwd=/var/tmp /usr/sbin/zpool zpool list
2018-02-27 10:01:37.868+01:00 muehle@wacken cwd=/root /usr/sbin/quota
2018-02-27 10:03:07.955+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool create tpool /var/tmp/f1 /var/tmp/f2
2018-02-27 10:03:17.057+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status tpool
2018-02-27 10:03:20.037+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs
2018-02-27 10:03:22.404+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs help
2018-02-27 10:03:38.249+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool upgrade
2018-02-27 10:03:40.886+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool upgrade -v
2018-02-27 10:03:45.684+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool upgrade -a
2018-02-27 10:04:02.408+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs upgrade -v
2018-02-27 10:04:05.613+01:00 muehle@wacken cwd=/root /usr/sbin/zfs zfs upgrade
2018-02-27 10:04:11.243+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool help
2018-02-27 10:04:17.903+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:04:21.769+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX
2018-02-27 10:04:25.335+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:31.436+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:04:33.208+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool
2018-02-27 10:04:36.321+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:06:02.968+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:06:23.058+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool add tpool /var/tmp/f2 /var/tmp/f3
2018-02-27 10:06:24.896+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:06:32.197+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool XXX tpool /var/tmp/f2
2018-02-27 10:06:33.828+01:00 muehle@wacken cwd=/root /usr/sbin/zpool zpool status
2018-02-27 10:11:18.879+01:00 muehle@wacken cwd=/ /usr/sbin/zoneadm -R / list -cp
2018-02-27 10:11:18.962+01:00 muehle@wacken cwd=/root /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg info entire

With no further options given, it will simply list all the privileged commands executed.

root@wacken:~# admhist
2017-04-05 06:08:41.307+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 06:09:14.591+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 06:32:58.689+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:04:04.313+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:19:13.614+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:25:20.168+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:25:40.142+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:26:52.158+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:27:10.400+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:27:35.560+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:28:03.857+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:28:59.362+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:31:26.702+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:31:29.059+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:32:09.722+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:32:16.210+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:32:18.050+02:00 /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg exact-install --accept --be-name s12_b115 entire@5.12- solaris-small-server@5.12-
2017-04-05 07:32:18.051+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:37:52.352+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:51:31.862+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:52:11.834+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 07:55:48.995+02:00 /usr/bin/amd64/pkg /usr/bin/64/python2.7 /usr/bin/pkg install docker
2017-04-05 07:55:48.997+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 08:15:30.826+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 08:15:52.467+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 08:23:38.643+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-05 09:11:41.226+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:09:59.772+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:10:02.842+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:10:17.952+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:10:18.553+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 10:17:04.912+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 11:25:39.775+02:00 /usr/lib/svcadm pfexec-auth /usr/sbin/svcadm svcadm disable ocm
2017-04-24 11:27:24.889+02:00 /usr/lib/zfs pfexec-auth /usr/sbin/zfs zfs list -r -o name,used,avail,refer,compressratio,quota,reserv,aclmode,aclinherit,compression,atime,dedup,mounted,mountpoint
2017-04-24 11:28:51.554+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 11:28:51.640+02:00 /usr/lib/pkg pfexec-auth /usr/bin/pkg pkg install docker
2017-04-24 11:29:26.933+02:00 /usr/lib/zfs pfexec-auth /usr/sbin/zfs zfs list -r -o name,used,avail,refer,compressratio,quota,reserv,aclmode,aclinherit,compression,atime,dedup,mounted,mountpoint
2017-04-24 11:31:15.562+02:00 /usr/sbin/zfs zfs create -o mountpoint=/var/lib/docker rpool/VARSHARE/docker
2017-04-24 11:31:24.490+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1
2017-04-24 11:33:00.123+02:00 /usr/lib/rad/rad -m /usr/lib/rad/transport -m /usr/lib/rad/protocol -m /usr/lib/rad/module -m /usr/lib/rad/site-modules -t pipe:fd=3,exit -e 180 -i 1

This is a very handy utility if you ask me. Nice and easy to use, especially since you don’t have to give an exact time and date but can instead pass “last 2 days”, “last 48 hours”, “last month”, or the like.

Maybe something like -u (for a certain user/UID) would be a nice additional option too.
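Until such an option exists, the -v output is regular enough to filter per user yourself. A minimal sketch (admhist only exists on Solaris, so the sample uses one line copied from the output above plus a hypothetical root entry for contrast):

```python
# Field 3 of admhist -v output is user@host; filter on it.
sample = [
    "2018-02-27 09:59:16.190+01:00 muehle@wacken cwd=/export/home/muehle /usr/sbin/zfs zfs help",
    "2018-02-27 10:12:01.000+01:00 root@wacken cwd=/ /usr/sbin/zoneadm -R / list -cp",
]

def by_user(lines, user):
    # keep only lines whose user@host field starts with the given user
    return [l for l in lines if l.split()[2].startswith(user + "@")]

for line in by_user(sample, "muehle"):
    print(line)
```

In practice you would feed it the real output, e.g. the result of admhist -v -a "last 1 days".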


Solaris is dead – long live Solaris

The year 2018 has just started and Oracle has just released the Solaris 11.4 beta.
Only a few months ago the media and social networks called Oracle Solaris dead, and if it had been up to them it would be buried and gone by now.
Oracle drew the curtain on what was formerly known as Solaris 12. Oracle Solaris 11.4 is, as quite a few people (Oracle and non-Oracle) have mentioned before, not just another Solaris 11 release. It is new! Go grab the public beta and find out for yourself. The list of changes, improvements and new features is long.
I am in the privileged position of having used it for a couple of years now. I saw it grow, shrink, mutate and evolve. Glad it is finally available publicly.

A lot of people have worked very hard on this over the last couple of years.
And no matter where all of you are now or what you work on now, CONGRATULATIONS! Well done!

Update: After a forced offline time I will check the blogs and finally get going with the cool features and enhancements.

Oracle Solaris Blogs

Enterprise is not enterprise

Over the last couple of weeks and months I have looked a bit more into Linux than before, especially Red Hat, just to check out what new features and technologies it offers and whether anything it already offered in the past has improved. The list of possible topics for posts comparing Linux and Unix, with pros and cons, grew rapidly and got too long. But one thing became clearer and clearer – Enterprise is not enterprise. And putting the word enterprise in your name doesn’t make it so either.

Calling yourself Enterprise is not the same as being called enterprise.

As I mentioned, the list of topics is long but there is a perfect example that I want to use in this post to emphasize my statement above.

One might get the feeling that I believe Linux, or in this case Red Hat, is total garbage. No, it’s not. There are things it fits better than Solaris. But this is about enterprise. The Red Hat Product Security Center, for example, is fantastic. I love it. Lots of information and tools to stay on top of security topics.
But at the same time the Red Hat Product Security Center makes me wonder why the name of the Linux distribution includes the word Enterprise.
As Red Hat shows on their website, they are able to provide data of great value in terms of compliance, CVEs, etc., but where is this value when it comes to the operating system? Gone!

OpenSCAP has been available in Red Hat Linux longer than in Oracle Solaris, but just making it available is nothing more than a nice offer that takes the work of downloading and installing away from the customer, and that’s it. That’s just as much enterprise as offering a shell with default settings or any other application/program.

I am well aware that the meaning of the word enterprise depends on quite a lot of factors and mostly just on the subjective point of view. It might be facts like scalability, stability, usability, different performance aspects or the rate of consolidation one can achieve by using the product. These and many others are important but often depend on what you need it for.
One thing, though, always matters, and that’s the rate and quality at which the product improves and matures. Wouldn’t it make you sad and mad at the same time when you have to put quite a lot of time, nerves and effort into something that is already there but not passed on to you as a customer? Have you ever used the oscap command? As great as the tool is, it is just as annoying to use and especially to get going with. As usual, once you get the hang of it, it’s OK. But really not more than that. What happened over the last couple of years while the security topic got pushed all the way to the front of IT? Well, let’s look at the facts. Red Hat created a nice, no, actually an extremely nice, website with a fantastic security section. You will get all the information on, for example, current CVEs that you want and need. Really enterprise-like if you ask me! Chapeau! Love it. I actually use it as an example of what I expect when I talk to the Oracle Solaris team. Yesterday was the last time, even. :-) Greetings, guys.
So obviously some work happened on the compliance topic. Here is what did not happen over the years: nothing close to what the Red Hat Product Security Center offers can be easily done when you are running a Red Hat Enterprise Linux server. Let me just show you the “simple” commands used to start a compliance run of a certain benchmark on RHEL and on Solaris.


RHEL 7:

# oscap xccdf eval --profile xccdf_org.ssgproject.content_profile_rht-ccp --results scan-xccdf-results.xml /usr/share/xml/scap/ssg/content/ssg-rhel7-ds.xml

Solaris 11:

# compliance assess -p pci-dss

In case you want to run the benchmark and profile that is set as the default you even only need the following:

# compliance assess

Can you tell the difference? This is a customer-oriented implementation. And again, it is just one small example.
Well, there really is not much more to say. I would just much rather spend money on enterprise features and implementations than on Enterprise names.
All the admins and engineers out there: go and check it out. And even if you don’t want to use Solaris, simply bother your OS vendor. Sorry that it is RHEL in this case, but that’s just because I am closer to RHEL than to other distributions.

All in all, I promise you, it is worth checking out! Have fun.

RAD – get ZFS properties

After starting out with the Python RAD zonemgr module I thought it was time to write about another available Python RAD module. In the following I want to give you a short and simple look and feel of how to get ZFS property information (name, type, atime, compression, compressratio, dedup, mounted and mountpoint). The purpose is to get people interested in Solaris going with RAD and to show how easily it can be used.

Let’s start with the usual imports, which are pretty much self-explanatory. One is needed to open a connection and the other one depends on the purpose of the script. In this case we want to get ZFS properties, which means we will use zfsmgr.
In case it is not already installed, just run: pkg install rad-zfsmgr

import rad.connect as radc
import rad.bindings.com.oracle.solaris.rad.zfsmgr_1 as zfsmgr

Next we need to connect to RAD. In this case it is a local connection, so a Unix socket can be used.
If you would like to use a remote connection, go ahead and use ssh://[hostname] instead.

uri = radc.RadURI("unix:///")
rc = uri.connect()

We now have an open connection (rc) and can move on to the ZFS part.
The first thing we need to know is which datasets exist. rc.list_objects(zfsmgr.ZfsDataset()) does exactly this for us. It lists all the ZFS dataset objects (ZfsDataset) the RAD ZFS manager (zfsmgr) has to offer for the chosen connection (rc).

zfsDataSets = rc.list_objects(zfsmgr.ZfsDataset())

Well, the list of datasets is complete. But we will need more than just the object of each dataset.
In order to get the information we are interested in, we need to define it first. For that, ZfsPropRequest is used.

prop0 = zfsmgr.ZfsPropRequest(name="name")
prop1 = zfsmgr.ZfsPropRequest(name="type")
prop2 = zfsmgr.ZfsPropRequest(name="atime")
prop3 = zfsmgr.ZfsPropRequest(name="compression")
prop4 = zfsmgr.ZfsPropRequest(name="compressratio")
prop5 = zfsmgr.ZfsPropRequest(name="dedup")
prop6 = zfsmgr.ZfsPropRequest(name="mounted")
prop7 = zfsmgr.ZfsPropRequest(name="mountpoint")

After defining the properties we just loop through the object list of the ZFS datasets and request the values for the just-defined keys from the current object (zobj).

for dataset in zfsDataSets:
    zobj = rc.get_object(dataset)
    zvalues = zobj.get_props([prop0, prop1, prop2, prop3, prop4, prop5, prop6, prop7])
    print "%-40s%-14s%-8s%-13s%-15s%-7s%-9s%s" % (zvalues[0].value, zvalues[1].value, zvalues[2].value, zvalues[3].value, zvalues[4].value, zvalues[5].value, zvalues[6].value, zvalues[7].value)

Done. We got the information and can therefore close the connection at this point.


The output will remind you of a regular zfs list -o … output. One may ask why one should use RAD then, and the answer is quite simple: because this is just a trivial example of how you can use RAD’s zfsmgr to get dataset information. The next step would be to take the above and automate whatever comes to your mind. Juggle around with the objects, keys, values, etc. Add more functions and even combine it with more RAD modules (e.g. rad-zonemgr). That’s where you will benefit the most. But even small automation tasks are perfect for this.

Last but not least, here is an example of what the output might look like. I had to take out a few lines because they included Solaris beta content.


Remember, the purpose was to take the very first step with RAD together with ZFS. Try it out and you will most probably like it and stick with it.

RAD – syncing Solaris zone configs


A lesser-known jewel of Solaris 11 is RAD (Remote Administration Daemon). Since, as I just found out, I don’t have any RAD posts yet, let’s talk about what it is and offers before we go on.
What RAD does is provide programmatic interfaces to manage Solaris. Users, zones, ZFS and SMF are just a few examples. RAD offers APIs for C, Java, Python and REST. It can be used locally as well as remotely, and it can be used to read data but also to write data. As an example, you can get ZFS information as well as create new datasets or change current settings. There are a couple of great examples and posts out there from e.g. Glynn Foster and Robert Milkowski.

Why am I telling you this? Because you can PROGRAMMATICALLY manage your enterprise operating system now.

Use case

Imagine an environment of SPARC T4, T5, T7 or S7 servers running quite a few non-global zones, whether in LDOMs or not.
Over the last weeks I tested kernel zones pretty heavily. The chance of getting rid of LDOMs is just too good not to go for it. Don’t get me wrong, LDOMs work fine and are a key part of our current DR concept. But there are also things I really don’t like at all. Guess this will make a good post in the near future. ;)
As I said, I was using kernel zones, but for DR purposes (let’s say one data center dies) I need to be able to boot the kzones from the other data center. In order to do so, the zone configuration has to be available. Well, shared storage and so on too, but for the purpose of this post let’s say that’s all taken care of automatically (it really is ;) ).
At the moment zone configurations are saved via an SMF service on an NFS server. But I don’t want to have to create zones first while the angry mob out there can’t work and tries to figure out where I am sitting.
When I used kernel zone live migration I started thinking about how I want to solve this issue. For those who haven’t used kzone live migration yet: it creates a zone configuration on the target side and leaves the old one in configured state. Which means that once you have live migrated a zone the problem seems to be solved. But what if something changes? What if it runs on a different server (hardware) by the time of a disaster?
These two facts, programmatic interfaces and LDOM replacement, led me to the idea of having a scheduled SMF service that takes care of zone configurations. For that I use RAD’s zonemgr and Python.
Besides the RAD IPS packages (rad and rad-zonemgr) being installed, you will need a user with sufficient privileges for RAD and zones administration/management.

RAD Python

The script zones-sync is part of a larger RAD script that does all sorts of things.
Simply said, it checks which zones’ configurations are missing on a target server and then imports/creates them. Quite trivial.

Let’s start with the imports.

import rad.connect as radc
import rad.bindings.com.oracle.solaris.rad.zonemgr_1 as zonemgr
import string
import socket
import sys

The first import lets you connect to RAD, as the name already says, while the second one adds the ability to use zone management. Quite self-explanatory, I would say.

In order to know which global zones are supposed to be synced, I started with getting the source and target hostname straight, depending on the arguments used. If only one hostname is given, the other one is considered to be localhost. For remote purposes two hostnames need to be provided. For the purpose of automation I added [-service|-svc], which in this case maps a certain pattern of hostnames. The name pattern is used to find the corresponding global zone.
In the end, anything that helps to automate getting the hostnames should be put here.

def getHostnames():
    global source_hostname
    global target_hostname
    if sys.argv[1] == "-service" or sys.argv[1] == "-svc":
        if source_hostname[3] == 'A':
            target_hostname = source_hostname[:3]+'S'+source_hostname[4:]
        elif source_hostname[3] == 'a':
            target_hostname = source_hostname[:3]+'s'+source_hostname[4:]
        elif source_hostname[3] == 'S':
            target_hostname = source_hostname[:3]+'A'+source_hostname[4:]
        elif source_hostname[3] == 's':
            target_hostname = source_hostname[:3]+'a'+source_hostname[4:]
    elif len(sys.argv) == 3:
        source_hostname = sys.argv[1]
        target_hostname = sys.argv[2]
    elif len(sys.argv) == 2:
        target_hostname = sys.argv[1]
    return (source_hostname,target_hostname)
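The pattern swap can be tried in isolation. A minimal sketch with a hypothetical hostname, where the 4th character encodes the site (A/S or a/s), mirroring the mapping in getHostnames above:

```python
def swap_site(hostname):
    # flip the site character at index 3, as getHostnames does
    mapping = {'A': 'S', 'S': 'A', 'a': 's', 's': 'a'}
    c = hostname[3]
    if c in mapping:
        return hostname[:3] + mapping[c] + hostname[4:]
    return hostname

print(swap_site("AC6A000"))  # -> AC6S000
```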

Now that it is clarified which systems are involved, the next step is to connect to RAD. As you can see below, I am using ssh to connect to remote systems and a Unix socket for local connections.
Again, the user that executes this script must have sufficient privileges on the involved systems. In addition, the rad:remote and/or rad:local service has to be enabled and online.

def connectRAD():
    global source_rc
    global target_rc
    if len(sys.argv) == 3:
        source_uri = radc.RadURI("ssh://"+source_hostname)
        source_rc = source_uri.connect()
    else:
        source_uri = radc.RadURI("unix:///")
        source_rc = source_uri.connect()
    target_uri = radc.RadURI("ssh://"+target_hostname)
    target_rc = target_uri.connect()

    return (source_rc,target_rc)

The next step is to get a list of zones from each global zone. What it actually is, is a list of the objects of each zone.

def getZoneLists():
    global zones_s
    global zones_t
    zones_s = source_rc.list_objects(zonemgr.Zone())
    zones_t = target_rc.list_objects(zonemgr.Zone())

    return (zones_s,zones_t)

Each object includes the values of a zone, for example name, state, brand, etc.
The previous step is done to get each ng/kzone’s state and thereby to decide whether it needs to be synced or not. Incomplete zones, for example, are not worth synchronizing. A configured zone’s config will be replaced by that of a running zone in order to have the most current version configured.
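The core of that decision boils down to a set comparison. A minimal, RAD-free sketch with hypothetical zone names (the replace-configured-with-running case aside):

```python
# Zone names as the two functions below collect them: everything
# non-incomplete on the source, everything already present on the target.
# Whatever is missing on the target gets imported.
source_zones = {"kzone1", "zone2", "zone3"}
target_zones = {"zone2"}
import_zones = sorted(source_zones - target_zones)
print(import_zones)  # -> ['kzone1', 'zone3']
```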

def getSourceZones():
    source_zones = []
    source_conf_zones = []
    for name_s in zones_s:
        zone_s = source_rc.get_object(name_s)
        print "\t%-16s %-11s %-6s" % (, zone_s.state,zone_s.brand)
        if zone_s.state != 'incomplete':
        if zone_s.state == 'configured':
    return (source_zones,source_conf_zones)

def getInstalledTargetZones():
    target_zones = []
    target_conf_zones = []
    for name_t in zones_t:
        zone_t = target_rc.get_object(name_t)
        print "\t%-16s %-11s %-6s" % (, zone_t.state,zone_t.brand)
        if zone_t.state != 'configured' and zone_t.state != 'incomplete':
        if zone_t.state == 'configured':
    return (target_zones,target_conf_zones)

After comparing the states, the script deletes existing configurations that are about to be replaced.
In line 152 you can see the preparation for connecting to the target machine’s RAD zonemgr. The class that is used here is ZoneManager(rad.client.RADInterface).

Quote from the Python help for ZoneManager:

| Create and delete zones. Changes in the state of zones can be
| monitored through the StateChange event.

def deleteExistingConfiguredTargetZone():
    delete_zone = target_rc.get_object(zonemgr.ZoneManager())
    global zones_t

    for name_d in import_zones:
        zone_d = source_rc.get_object(name_d)
        for name_t in zones_t:
            zone_t = target_rc.get_object(name_t)
            if ==
                print "DELETED: %s" %

As you can see above in line 159 the method used is called delete.

When this is done it is time to export and import configurations.
To export a zone configuration via RAD (line 168) the proper class is called Zone with its method exportConfig(*args, **kwargs). Imports (line 170) are done by using the class ZoneManager and importConfig(*args, **kwargs) as its method.

def expImpConfig():
    mgr = target_rc.get_object(zonemgr.ZoneManager())
    for name_i in import_zones:
        zone_i = source_rc.get_object(name_i)
        z_config = zone_i.exportConfig()
        split_config = z_config.splitlines(True)
        # importConfig takes a no-execute flag, the zone name and the
        # configuration as a list of lines (argument order assumed).
        mgr.importConfig(False,, split_config)
        print "IMPORTED: %s" %
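exportConfig() hands back the whole zone configuration as a single string; splitlines(True) keeps the trailing newlines, so joining the list reproduces the export byte for byte. The config text below is a made-up minimal example, not an actual export:

```python
# A made-up, minimal zonecfg export; real output has more lines.
z_config = "create -b\nset brand=solaris\nset zonepath=/zones/zone1\n"

# splitlines(True) preserves the newline character on each element,
# giving the list-of-lines shape the script passes on for the import.
split_config = z_config.splitlines(True)
```

Each element still ends in "\n", which is why "".join(split_config) gives back the original export string unchanged.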

Well, all that is left to do is to close the connections.

def closeRc():
    source_rc.close()
    target_rc.close()
And for you to have an idea what this may look like and what it does, let’s check out the following outputs.

In the following, the script was used with -service. This is the mode I use for scheduled/periodic runs. The hostname’s pattern is used to define the source and target hostnames.

u2034611@AC6A000:~$ /net/ap6shr1/data/shares/soladm/intern/tm/ -service 
        NAME             STATUS      BRAND 
        kzone1           configured  solaris-kz 
        zone2            configured  solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 
        NAME             STATUS      BRAND 
        kzone1           running     solaris-kz 
        zone2            installed   solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 

As you can see nothing was deleted or imported. This is what it looks like when everything is in sync.

Let’s tell the script which server is supposed to be the target in order to sync it with the localhost.

u2034611@AC6A000:~$ /net/ap6shr1/data/shares/soladm/intern/tm/ AC4S000 
        NAME             STATUS      BRAND 
        kzone1           configured  solaris-kz 
        zone2            configured  solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 
        NAME             STATUS      BRAND  

IMPORTED: kzone1 
IMPORTED: zone2 
IMPORTED: zone3 
IMPORTED: kzone 
IMPORTED: zone1 

Above you can see that on one server (AC6A000, in this case the local machine) a bunch of different zones are configured, while none exist on the target side. Therefore all of the zone configurations are imported.

Let’s say you have a central server or your local workstation from which you want to sync two global zones. In the following example two hostnames are passed on.

G u2034611@r0065262 % /var/tmp/ ac6s000                         
        NAME             STATUS      BRAND 
        kzone1           running     solaris-kz 
        zone2            installed   solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 
        NAME             STATUS      BRAND 
        kzone1           configured  solaris-kz 
        zone2            configured  solaris 
        zone3            configured  solaris 
        fuu1             configured  solaris 
        fu1              configured  solaris 
        oo1              configured  solaris 
        kzone            configured  solaris 
        zone1            configured  solaris 

DELETED: kzone1 
DELETED: zone2 
IMPORTED: kzone1 
IMPORTED: zone2 

This time both global zones have several zones in the configured state, and on one host one zone is running while another one is in the installed state. This means that two of the target’s zone configurations (in the configured state) are outdated. The more current ones, which in this case means running or at least “runnable”, are synced, and the merely configured zones’ configurations were deleted first.

There are many more things to explore about RAD, so have fun and stay calm ;) !


Tailoring meets Solaris Compliance

With Solaris 11.3 Oracle added a new feature to compliance. Tailoring it is called, and it pretty much does exactly that. Instead of having to manually customize benchmark files, tailoring will do the job for you. That’s the trivial description of what tailoring does.
But underneath the hood tailoring is capable of so much more. Used the right way it takes the automation of compliance reporting to a more sophisticated level.

How to get started

Before talking about how tailoring can enhance the way you use and customize compliance in Solaris let me quickly walk you through how it works.
Using tailoring is as simple and intuitive as running an assessment. All you need to do is type “compliance tailor -t <tailoring-name>”. The -t option declares which tailoring shall be loaded; in case none exists it will be created. It is not a required option, but in order to store the tailoring you will have to set the name manually by using “set tailoring=<name>” later on anyway.

Example without the option:

ROOT@AP6S500 > compliance tailor

Documented commands (type help <topic>):
clear   delete   exit    include  list  pick  value 
commit  exclude  export  info     load  set   values

Miscellaneous help topics:

tailoring> info
        benchmark: not set
        profile: not set
tailoring>

Example with -t:

ROOT@AP6S500 > compliance tailor -t tailoring.tm2 
*** compliance tailor: Can't load tailoring 'tailoring.tm2': no existing tailoring: 'tailoring.tm2', initializing

tailoring:tailoring.tm2> info
        benchmark: not set
        profile: not set

As the examples already showed, the tailoring CLI command info shows which tailoring, benchmark and profile are set.
From this point on you could use set …=… all the way until your tailoring is done and you commit it. If you would rather save some time and typing, pick will be the command of your choice.

tailoring:tailoring.tm2> pick

Use the arrow keys to navigate up and down and pick the benchmark and profile that you would like to use for your tailoring. This can be seen as a sort of template. When you have made your selection, press ESC. info will show what you selected.

tailoring:tailoring.tm2> info

tailoring, benchmark and profile are set, which means tests can be picked now.

tailoring:tailoring.tm2> pick

The picture above shows the tests of the previously chosen benchmark and profile. “x” stands for excluded, while “>” indicates an activated test. This is where you tailor your compliance check. As before, press “ESC” when you are done.
With the command export you can see what changes you have made. The output shown consists of the commands that can be used to manually include and exclude tests instead of using pick.

tailoring:tailoring.tm2> export
set tailoring=tailoring.tm2
# version=2016-02-26T16:44:36.000+00:00
set benchmark=tm
set profile=tm
tailoring:tailoring.tm2> pick
tailoring:tailoring.tm2> export
set tailoring=tailoring.tm2
# version=2016-02-26T17:02:10.000+00:00
set benchmark=tm
set profile=tm
# ivv-000: Compliance integrity is given
exclude ivv-000
# ivv-001: LDAP client configuration is ok
include ivv-001
# OSC-54005: Package integrity is verified
exclude OSC-54005
# OSC-53005: The OS version is current
exclude OSC-53005
# OSC-53505: Package signature checking is globally activated
exclude OSC-53505
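The export output above is itself a sequence of tailoring commands, which means such a command file can also be generated from data instead of being typed. A hypothetical generator (the function, its arguments and the trailing commit are my assumptions, not a documented Solaris interface) could look like this:

```python
def tailoring_commands(name, benchmark, profile, tests):
    """Build tailoring command lines from a {test_id: included?} map,
    mirroring the structure of the 'export' output shown above.
    Whether a trailing 'commit' is wanted depends on how the
    resulting file is used."""
    lines = ["set tailoring=%s" % name,
             "set benchmark=%s" % benchmark,
             "set profile=%s" % profile]
    for test_id, included in sorted(tests.items()):
        lines.append(("include %s" if included else "exclude %s") % test_id)
    lines.append("commit")
    return "\n".join(lines)
```

Feeding the result into a file keeps the include/exclude set in one small, versionable place rather than in hand-edited XML.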

Should you be interested in what the tailoring file itself will look like, simply use the option -x. This will give you the XML output.
All that is left to do is commit your changes et voilà … exit and done!
In case you have been fiddling around and created a few tailorings already, the list command will list all the existing tailorings.

Tailoring vs. Benchmarks/Profiles only

Now that we have flown through the basics of Solaris compliance tailoring, we already know enough to talk about why EVERYONE should use tailoring.
Maybe you have read one or even all of my earlier Solaris Compliance posts or heard me talk about it. If so, you might remember me saying it is really quite fast and simple to customize. Well, it just got way easier. Not all out of the box yet, but almost, and I am sure someone already requested an enhancement. :-D
So what am I talking about?!
The files for Solaris compliance can be found under two paths. One is /usr/lib/compliance. This is probably the only one you might have been working in, in case you customized anything. Adding benchmarks, adding tests or editing profiles was/is done here. Other than that, all the content here is pretty much static until a change might come with an update (SRU). With Solaris 11.3 and tailoring, the compliance benchmark directories received another subdirectory called tailorings. By default it is empty.
All the changes made while using the compliance command end up under /var/share/compliance. It is important to understand that this content should stay untouched. Just leave this path to Solaris and the engineering. But it is always nice and helpful to know where to look for changes.
Let’s take a look at /var/share/compliance/tailorings.

G muehle@AP6S500 % ls -l /var/share/compliance/tailorings 
total 60
-rw-r--r--   1 root     root         495 Feb 16 14:21 ivv-tailor.xccdf.xml
-rw-r--r--   1 root     root         964 Feb 16 14:05
-rw-r--r--   1 root     root         952 Feb 26 18:07 tailoring.tm2.xccdf.xml
-rw-r--r--   1 root     root         489 Feb 17 14:03 test.xccdf.xml
-rw-r--r--   1 root     root       24844 Feb 17 15:11 test123.xccdf.xml

This is the place where compliance tailor saves the tailorings after committing them. The content of /var/share/compliance/tailorings/tailoring.tm2.xccdf.xml is exactly what export -x showed us earlier.

Another very interesting directory is /var/share/compliance/assessments. I will hopefully write more about why soon. I am working on customizing Solaris compliance for a larger-scale environment, and this directory plays an important role in that.

But let’s get back on track and talk about how much of an enhancement tailoring is.
At the moment we have different IPS packages with different benchmarks, each with different profiles, just so different scenarios are covered.
This means we spend time customizing large XML files, and we also have to spend time maintaining them.
Now, all we do is package up your tailoring file, or a compliance tailor -f command file with includes and excludes, in IPS. Less complexity and less maintaining! No more duplicating lines and lines of code only to have a different set of tests that is supposed to be used.
When you think about it, tailorings are the delta to a certain benchmark. So, what if you had one large benchmark that includes all the available tests and, let’s say, preconfigured profiles for solaris, pci-dss and a “complete” profile? To create your own profile, just place your tailoring in /usr/lib/compliance/benchmark/benchmark-name/tailorings/ and run the following:

# compliance assess -t tailoring-name

Using different tests depending on the application has become really simple and quick to prepare and do. Your tailoring works everywhere, no matter whether a benchmark has tests included or excluded. Really nice! Add IPS and Puppet to all of this and you can spend much more time on other topics.

Right now this “complete” benchmark needs to be created by the customer. Not much of a problem if you already took care of that, but I would guess not too many have. And even if you have your own all-containing benchmark, with each update you might be missing something in it, tests or whatsoever. So you still have to maintain thousands of lines of XML content. :-(
So hopefully such a benchmark will make it into a future release of compliance.

Tailoring simplifies Solaris Compliance a lot and saves you a lot of time. It is great! Try it!

Benefits of Solaris Immutable Zones

Over the last couple of weeks, or actually months, I was lucky to talk to a lot of other Oracle Solaris customers and other Linux and Unix users/admins. One question that I got asked most of the time is: why do I use immutable zones? Where is the benefit if your data can still be read and stolen?
Since I finally got some time for a blog post, I figured I would share the answer with whoever might be interested. Originally I planned on writing a HowTo post ever since this feature was released. But time has passed, and the more important question at the moment seems to be the WHY rather than the HOW.

The answer to why I use immutable (global) zones is security, simplicity, and speed. And all of this at no extra cost.
Let me explain this in more detail.

Security

Security is often just looked at as something that keeps your data safe and protects the IT from attackers/hackers. That is definitely a part of what security is, but there is so much more to it. Why are mostly hackers, attackers, or let’s say external people, considered a threat to the system, but hardly ever the admins or users on the system itself? Why trust yourself? And what is it I want to protect or prevent?

Immutable zones aim at preventing data manipulation and protect you from careless admin mistakes like rm -r * in the wrong directory or wrong terminal, from misconfiguration of the system, and therefore of course also from anyone (attackers or users) reconfiguring your system. Yes, the data can be read, but not changed. I am very sure most of you who read this have been dealing with the sudden appearance of a fault, and when the question is asked what happened or what was changed, nobody has done anything. Why bother with this question? I don’t want to think about what the application users might be doing in /etc, or hope that the new admin is not going to destroy datasets, clean out directories or even misconfigure RBAC.
Immutable zones ensure that the system stays exactly the same until I change something intentionally.

That was the technical point of view. But a growing field of security is compliance. I wonder how much money and time is spent by companies just to be able to somehow assure the auditor that their system configuration is compliant throughout the year, and even more, how much was spent to make sure it did stay the same. Scripts were written, mistakes were corrected, you got more gray hairs over the last couple of months, and the meetings with the auditors will most probably not be the highlight of the year. Save time and money and just tell/show the auditor that the system was and is immutable (read-only) and therefore nothing changed! The easiest way to do so, by the way, is to use the Solaris compliance framework.

Another very important fact is that this is not achieved by just mounting datasets read-only. It is deeply integrated in Solaris and based on privileges.

Simplicity

The chances are high that not everyone out there is working with Solaris, but rather with Windows or some Linux distro. So let me start with a short comment on what simplicity does not mean to me as a Solaris guy. Simplicity does not mean I don’t have to install additional software or spend time on integrating a feature; that is what I am used to and what is normal to me.
I just start using security features or in case of for example RBAC “have” to use it. It is always there.

So what is it that makes immutable zones simple then? Well, let me just show you the steps it takes to turn a non-immutable zone into an immutable one.

root@GZ:~# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   2 ap1s001          running     /zones/ap1s001               solaris    excl

root@GZ:~# zonecfg -z ap1s001 set file-mac-profile=fixed-configuration

root@GZ:~# zlogin ap1s001 init 6

That’s it!
You enable immutable zones by simply changing the file-mac-profile value to either strict (be careful!), fixed-configuration or flexible-configuration and then rebooting the zone. For the global zone, dynamic-zones is available as well. In case you want to go back to a regular type of zone, just use none as the value for file-mac-profile.
Here is a quote of the zonecfg man page:

none makes the zone exactly the same as a normal, r/w zone. strict
allows no exceptions to the read-only policy. fixed-configuration
allows the zone to write to files in and below /var, except
directories containing configuration files.

dynamic-zones is equal to fixed-configuration but allows creating
and destroying non-global zones and kernel zones. This profile is
only valid for global zones, including the global zone of a kernel
zone.

flexible-configuration is equal to dynamic-zones, but allows
writing to files in /etc in addition.
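The allowed file-mac-profile values can be collected into a tiny validator that builds the zonecfg command shown earlier. This helper is purely illustrative and not part of Solaris:

```python
# Valid file-mac-profile values per the zonecfg man page excerpt above.
VALID_PROFILES = {"none", "strict", "fixed-configuration",
                  "dynamic-zones", "flexible-configuration"}

def immutable_cmd(zone, profile):
    """Build the zonecfg command line for the given profile,
    rejecting unknown values up front."""
    if profile not in VALID_PROFILES:
        raise ValueError("unknown file-mac-profile: %s" % profile)
    return "zonecfg -z %s set file-mac-profile=%s" % (zone, profile)
```

Catching a typo like "read-only" before running zonecfg beats debugging why a zone did not come up immutable after the reboot.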

zoneadm list -p shows whether a zone is immutable or not.

root@GZ:~# zoneadm list -p

Listed are the fields zoneid:zonename:state:zonepath:uuid:brand:ip-type:r/w:file-mac-profile.
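Since the -p output is machine-parsable, the field list above maps directly onto a small parser. The sample line in the test is fabricated, and this naive split assumes no escaped ':' characters inside the zonepath:

```python
def parse_zoneadm_p(line):
    """Turn one line of 'zoneadm list -p' output into a dict,
    using the field order documented above. Naive: does not
    handle ':' escaped inside field values."""
    fields = ("zoneid", "zonename", "state", "zonepath", "uuid",
              "brand", "ip-type", "rw", "file-mac-profile")
    return dict(zip(fields, line.rstrip("\n").split(":")))
```

This makes it easy to script checks such as "are all my zones running with a non-empty file-mac-profile?".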

To add more simplicity, a trusted path (zlogin -T|U) from the global zone/console can be used to make necessary changes, for example adding a directory or a user.
You also don’t have to do wild things when it comes to updating/patching. Just use the pkg update command as you always do.

As you can see it is just simple!

Speed

Thanks to the sophisticated integration of immutable zones there is no overhead. No software installed on top of the operating system, no daemon running and checking for/preventing open system calls or whatsoever. Immutable zones run just as fast as non-immutable zones.
Changing back and forth is just a reboot. I could imagine this might not even be necessary anymore at some point.
Besides that, this feature will speed up the auditor meetings, as mentioned before.
And the process of setting it up is lightning fast compared to other tools out there.
Not having to worry about the configuration of your system anymore will speed up other projects/topics you are working on, by saving time and thoughts/distractions.

Short answer

To tell you the truth this is what my very first answer to the why question always is before getting into details:

Why not?! Why shouldn’t I use a security feature that is there for free, that works, and that is a no-brainer to use?!
I don’t want to worry about my own stupid mistakes or even those of others. I don’t trust application admins/users. Auditor meetings are over before they even begin. It’s just a great feature!

As I said at the beginning, I was lucky to be able to talk to quite a lot of different admins, engineers, managers, etc., and it was really nice to see how most of them started thinking “Why not, true!”. This feature might not fit every single environment. But does every machine have IPsec, IPFilter, … enabled? Probably not.

I hope this will encourage some of you to make your own experience with this great Solaris feature.

More FOSS in Oracle Solaris

It is probably not a big surprise to any of you Solaris people when I tell you I keep hearing that Solaris is not open to other software vendors or even open-source products/projects.
Well, among the biggest features that came with Solaris 11 and its releases were OpenStack, OpenSCAP, and Puppet, to just name the most famous ones.

Now, Oracle Solaris comes with new FOSS evaluation packages. It is part of the default release repository.

The Oracle blog post FOSS Evaluation Packages for Solaris 11.3 lists the included components and points you to a guide on how to use/install the packages.