Benefits of Solaris Immutable Zones

Over the last couple of weeks, or actually months, I was lucky to talk to a lot of other Oracle Solaris customers as well as Linux and Unix users/admins. The question I got asked most of the time is: why do I use immutable zones? Where is the benefit if your data can still be read and stolen?
Since I finally got some time for a blog post, I figured I would share the answer with whoever might be interested. Originally I had planned to write a HowTo post ever since this feature was released. But time has passed, and the more important question at the moment seems to be the WHY rather than the HOW.

The answer to why I use immutable (global) zones is security, simplicity, and speed. And all of this at no extra cost.
Let me explain in more detail.


Security is often seen as just something that keeps your data safe and protects the IT from attackers/hackers. That is definitely part of what security is, but there is so much more to it. Why are mostly hackers, attackers, or let's say external people considered a threat to the system, but hardly ever the admins or users on the system itself? Why trust yourself? And what is it I want to protect or prevent?

Immutable zones aim to prevent data manipulation and protect you from careless admin mistakes like an rm -r * in the wrong directory or the wrong terminal, from misconfiguration of the system, and therefore of course also from anyone (attackers or users) reconfiguring your system. Yes, the data can be read, but not changed. I am very sure most of you reading this have dealt with the sudden appearance of a fault, and when the question is asked what happened and what was changed, nobody has done anything. Why bother with this question? I don't want to think about what the application users might be doing in /etc, or hope that the new admin is not going to destroy datasets, clean out directories, or even misconfigure RBAC.
Immutable zones ensure that the system stays exactly the same until I change something intentionally.

That was the technical point of view. But a growing field of security is compliance. I wonder how much money and time is spent by companies just to somehow assure the auditor that their system configuration is compliant throughout the year. And even more, how much was spent to make sure it stayed the same. Scripts were written, mistakes were corrected, you got more gray hairs over the last couple of months, and the meetings with the auditors will most probably not be the highlight of the year. Save time and money and just tell/show the auditor that the system was and is immutable (read-only) and that therefore nothing changed! The easiest way to do so, by the way, is to use the Solaris compliance framework.

Another very important fact is that this is not achieved by just mounting datasets read-only. It is deeply integrated in Solaris and based on privileges.


Chances are high that not everyone out there is working with Solaris, but rather with Windows or some Linux distro. So let me start with a short comment on what simplicity does not mean to me as a Solaris guy. Simplicity does not mean I don't have to install additional software or spend time integrating a feature. That is what I am used to and what is normal to me.
I just start using security features, or in the case of, for example, RBAC, "have" to use them. They are always there.

So what is it that makes immutable zones simple then? Well, let me just show you the steps it takes to turn a non-immutable zone into an immutable one.

root@GZ:~# zoneadm list -cv
  ID NAME             STATUS      PATH                         BRAND      IP
   0 global           running     /                            solaris    shared
   2 ap1s001          running     /zones/ap1s001               solaris    excl

root@GZ:~# zonecfg -z ap1s001 set file-mac-profile=fixed-configuration

root@GZ:~# zlogin ap1s001 init 6

That's it!
You enable immutable zones by simply changing the file-mac-profile value to either strict (be careful!), fixed-configuration, or flexible-configuration, and then rebooting the zone. For the global zone, dynamic-zones is available as well. In case you want to go back to a regular type of zone, just use none as the value for file-mac-profile.
Here is a quote from the zonecfg man page:

none makes the zone exactly the same as a normal, r/w zone. strict
allows no exceptions to the read-only policy. fixed-configuration
allows the zone to write to files in and below /var, except direc-
tories containing configuration files:


dynamic-zones is equal to fixed-configuration but allows creating
and destroying non-global zones and kernel zones. This profile is
only valid for global zones, including the global zone of a kernel
zone.

flexible-configuration is equal to dynamic-zones, but allows writ-
ing to files in /etc in addition.

zoneadm list -p shows whether a zone is immutable or not.

root@GZ:~# zoneadm list -p

The fields listed are zoneid:zonename:state:zonepath:uuid:brand:ip-type:r/w:file-mac-profile.
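For scripting, that colon-separated output is easy to work with. Here is a minimal sketch of parsing one line into named fields; the sample line (including the uuid) is made up for illustration:

```python
# Parse one line of `zoneadm list -p` output into named fields.
# Field order follows the list given above; the sample line is
# illustrative, not real output.
FIELDS = ["zoneid", "zonename", "state", "zonepath", "uuid",
          "brand", "ip-type", "rw", "file-mac-profile"]

def parse_zone_line(line):
    """Return a dict mapping field names to the values of one zone line."""
    return dict(zip(FIELDS, line.strip().split(":")))

sample = "2:ap1s001:running:/zones/ap1s001:abcd-1234:solaris:excl:r:fixed-configuration"
zone = parse_zone_line(sample)
print(zone["zonename"], zone["file-mac-profile"])
```

With that, a quick loop over all zones tells you immediately which ones are still mutable.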

To add more simplicity, a trusted path (zlogin -T|-U) from the global zone/console can be used to make necessary changes, for example adding a directory or a user.
You also don't have to do wild things when it comes to updating/patching. Just use the pkg update command as you always do.

As you can see, it is just that simple!


Thanks to the sophisticated integration of immutable zones there is no overhead. No software is installed on top of the operating system. No daemon is running and checking for/preventing open system calls or whatnot. Immutable zones run just as fast as non-immutable zones.
Changing back and forth is just a reboot. I could imagine this might not even be necessary anymore at some point.
Besides that, this feature will speed up the auditor meetings, as mentioned before.
And the process of setting it up is lightning fast compared to other tools out there.
Not worrying about the configuration of your system anymore will speed up other projects/topics you are working on by saving time and thoughts/distractions.

Short answer

To tell you the truth, this is what my very first answer to the why question always is before getting into the details:

Why not?! Why shouldn't I use a security feature that is there for free, that works, and that is a no-brainer to use?!
I don't want to worry about my own stupid mistakes or even those of others. I don't trust application admins/users. Auditor meetings are over before they even begin. It's just a great feature!

As I said at the beginning, I was lucky to be able to talk to quite a lot of different admins, engineers, managers, etc., and it was really nice to see how most of them started thinking "Why not, true!". This feature might not fit every single environment. But does every machine have IPsec, IPFilter, etc. enabled? Probably not.

I hope this encourages some of you to gain your own experience with this great Solaris feature.

Deploying automated CVE reporting for Solaris 11.3

With Solaris 11.2, Oracle started including quite a few new Solaris features for security and automated deployment. Besides bringing in immutable zones, which I didn't get to write about yet (which is a shame since these are wonderful), and compliance, Solaris IPS received a new package called pkg://solaris/support/critical-patch-update/solaris-11-cpu. This package lists the packages that are considered part of the Critical Patch Update. In addition to the package name and version, it now enables you to see which CVE each of these packages belongs to.
You can use the pkg command to do some basic searches. Immutable zones, compliance, and CVEs are only three of the security features that were added in Solaris 11.2 and Solaris 11.3.

Most likely an admin will not want to log in to each of his hundreds, thousands, or even more Solaris installations in order to install needed packages and take care of a proper configuration. Can't blame him. That's probably what the Solaris team thought when Puppet became part of the IPS repository with Solaris 11.2. There is not that much to say about it for those who do not know it. It does what it is supposed to do and is a relief for every admin if used right. In case you are interested in some really great articles, go check out Manuel Zach's blog. For automation in general you will also want to read Glynn Foster's blog.
Now why am I writing about these "old" Solaris 11.2 features if the Solaris 11.3 beta was released a few weeks ago already? Well, these are fundamental technologies for getting the most out of Solaris 11.3.

Bringing Solaris IPS and NIST together

Companies mostly use external software that alerts on and reports every single CVE out there and triggers a service request for the responsible team. The thing is, it is slow, it costs a lot, and you get service requests for software that is not installed on any system. So what happens is that the admin ends up checking it himself.

So I figured I would just do it Solaris style. By now, as I write this post, we have fully automated CVE reporting at work, at no extra cost.

Let's start with IPS and CVEs. As mentioned before, you will need to have a certain package installed.

# pkg install support/critical-patch-update/solaris-11-cpu

This package is updated with every SRU and includes every known CVE for Solaris 11. If you want to know a few basics about it, read Darren Moffat's blog.
Use the following command to see all the included packages:

# pkg contents -ro name,value solaris-11-cpu|grep '^CVE.*'
CVE-1999-0103                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-
CVE-2002-2443                 pkg://solaris/system/security/kerberos-5@0.5.11,5.11-
CVE-2003-0001                 pkg://solaris/driver/network/ethernet/pcn@0.5.11,5.11-
CVE-2004-0230                 pkg://solaris/system/kernel@0.5.11,5.11-
CVE-2004-0452                 pkg://solaris/runtime/perl-584/extra@5.8.4,5.11-
CVE-2004-0452                 pkg://solaris/runtime/perl-584@5.8.4,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-apc@3.0.19,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-idn@0.2.0,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-memcache@2.2.5,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-mysql@5.2.17,5.11-
CVE-2004-1019                 pkg://solaris/web/php-52/extension/php-pear@5.2.17,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-tcpwrap@1.1.3,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-xdebug@2.2.0,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53/extension/php-zendopcache@7.0.2,5.11-
CVE-2015-4024                 pkg://solaris/web/php-53@5.3.29,5.11-
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php52@5.2.17,5.11-
CVE-2015-4024                 pkg://solaris/web/server/apache-22/module/apache-php53@5.3.29,5.11-
CVE-2015-4770                 pkg://solaris/system/file-system/ufs@0.5.11,5.11-
CVE-2015-4770                 pkg://solaris/system/kernel/platform@0.5.11,5.11-
CVE-2015-5073                 pkg://solaris/library/pcre@8.37,5.11-
CVE-2015-5477                 pkg://solaris/network/dns/bind@,5.11-
CVE-2015-5477                 pkg://solaris/network/dns/bind@,5.11-
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@,5.11-
CVE-2015-5477                 pkg://solaris/service/network/dns/bind@,5.11-
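Turning that two-column output into a CVE-to-packages mapping is a one-liner territory in Python. A minimal sketch (the sample text mirrors a few lines of the listing above):

```python
# Build a CVE -> packages mapping from the two-column output of
# `pkg contents -ro name,value solaris-11-cpu`.
from collections import defaultdict

def cve_map(output):
    """Map each CVE ID to the list of package FMRIs it covers."""
    mapping = defaultdict(list)
    for line in output.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[0].startswith("CVE-"):
            mapping[parts[0]].append(parts[1])
    return dict(mapping)

sample = """\
CVE-2004-0452  pkg://solaris/runtime/perl-584/extra@5.8.4,5.11-
CVE-2004-0452  pkg://solaris/runtime/perl-584@5.8.4,5.11-
CVE-2015-5073  pkg://solaris/library/pcre@8.37,5.11-
"""
m = cve_map(sample)
print(m["CVE-2004-0452"])
```

Note that one CVE can map to several packages, as the perl-584 example shows, so the report script has to handle lists of FMRIs per CVE.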

Now we have the information about which Solaris IPS package belongs to which CVE ID. That's nice, but how do we get all the other CVE details: base score, summary, access vector, etc.? To add those, I imported the NIST NVD files into a sqlite3 database, which also performs better than working through the XML directly.
The files can be downloaded either as compressed gz files or as regular XML; for more information, visit the NIST NVD site.
If you don't want to work your way through the XML structures yourself, use the nvd2sqlite3 python program that I came across while writing one myself. I like the approach of just having to do:

# curl | nvd2sqlite3 -d /wherever/you/like/to/keep/the/dbfile

To keep your NIST CVE database current, put the commands (one per yearly feed file) in a script and create a crontab entry.


curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb
curl | nvd2sqlite3 -d /data/shares/NIST/cvedb


0 5 * * * /scripts/admin/
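In case you are curious what nvd2sqlite3 roughly does under the hood, here is a minimal sketch of the import step. The XML below is a simplified illustration only; the real NVD feed is namespaced and much richer, and nvd2sqlite3 handles all of that for you:

```python
# Minimal sketch of the import step: read CVE entries and upsert them
# into sqlite3. The XML structure here is simplified for illustration;
# the real NVD feed uses XML namespaces and many more fields.
import sqlite3
import xml.etree.ElementTree as ET

xml_data = """<nvd>
  <entry id="CVE-2015-5073">
    <score>5.0</score>
    <summary>Heap-based buffer overflow in PCRE ...</summary>
  </entry>
</nvd>"""

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE IF NOT EXISTS nvd "
           "(cve_id text primary key, score real, summary varchar)")
for entry in ET.fromstring(xml_data).findall("entry"):
    # INSERT OR REPLACE keeps re-imports of updated feeds idempotent.
    db.execute("INSERT OR REPLACE INTO nvd VALUES (?, ?, ?)",
               (entry.get("id"),
                float(entry.findtext("score")),
                entry.findtext("summary")))
db.commit()
print(db.execute("SELECT cve_id, score FROM nvd").fetchall())
```

The INSERT OR REPLACE is what makes the daily cron re-import safe: updated entries simply overwrite the old rows keyed by cve_id.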

Alright, this gives us a database with all the information we need and want. The schema of the sqlite3 database looks like this:

sqlite> .schema
CREATE TABLE nvd (access_vector varchar,
                                            access_complexity varchar,
                                            authentication varchar,
                                            availability_impact varchar,
                                            confidentiality_impact varchar,
                                            cve_id text primary key,
                                            integrity_impact varchar,
                                            last_modified_datetime varchar,
                                            published_datetime varchar,
                                            score real,
                                            summary varchar,
                                            urls varchar);

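With that schema in place, pulling the fields the report needs is a single SELECT. A sketch against an in-memory copy of the table (columns reduced to the ones used here, rows are illustrative sample data):

```python
# Query the nvd table for high-severity CVEs, similar to what the
# report script does. The table is reduced to the columns used here
# and the rows are illustrative sample data.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE nvd (cve_id text primary key, "
           "access_vector varchar, score real, summary varchar)")
db.executemany("INSERT INTO nvd VALUES (?, ?, ?, ?)", [
    ("CVE-2015-5477", "NETWORK", 7.8, "named in ISC BIND ..."),
    ("CVE-2004-0452", "LOCAL",   3.6, "Race condition in File::Path ..."),
])
# Severity threshold of 7.0 is an arbitrary example cutoff.
high = db.execute("SELECT cve_id, score FROM nvd "
                  "WHERE score >= 7.0 ORDER BY score DESC").fetchall()
print(high)
```

Sorting by score descending puts the worst findings at the top of the report, which is what you want the auditor (and yourself) to see first.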
Next is to match the data in the db with the IPS information. When I started working on this I focused on console output only, but when I looked at our centralized compliance reports I wanted the same thing for CVEs: central CVE reporting. So I ended up writing the output to an html file on an Apache webserver.

Since the standard Perl in Solaris does not contain DBD::SQLite, I switched to Python. The script does the following:

  • get all the installed package information from IPS
  • get all the information from the solaris-11-cpu package
  • match the above data and determine which packages are installed and at what version (lower version = unpatched CVE)
  • create the html report file with all the needed elements
  • connect to the sqlite3 db and get cve_id, access_vector, score and summary
  • write the select output to the file, sorted by unpatched and patched CVEs
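The "lower version = unpatched" check in the matching step boils down to comparing the installed version against the fixed version listed in the CPU package. A hedged sketch of that comparison (FMRI parsing simplified; real IPS versions also carry branch and timestamp components that a production script must handle):

```python
# Compare an installed package version against the fixed version from
# the solaris-11-cpu package. Simplified: real IPS FMRIs also carry
# branch/timestamp parts which are ignored here.
def version_tuple(v):
    """'0.175.2.12' -> (0, 175, 2, 12) for component-wise comparison."""
    return tuple(int(p) for p in v.split(".") if p.isdigit())

def is_unpatched(installed, fixed):
    """True if the installed version is lower than the fixed version."""
    return version_tuple(installed) < version_tuple(fixed)

print(is_unpatched("0.175.2.10", "0.175.2.12"))  # older than the fix
print(is_unpatched("0.175.2.12", "0.175.2.12"))  # exactly at the fix
```

Comparing tuples instead of strings avoids the classic "10" < "9" string-comparison trap.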

The CVE report looks like this:

What we have now is a script that pulls in all the NVD information from NIST and stores it in a sqlite3 database, and a script that matches this information with the installed IPS packages and generates a CVE report in html format.

Scheduled Services with Solaris 11.3

Next up is generating these reports automatically. With Solaris 11.2, cron would be the way to do it: a trivial entry in the crontab and done.

30 5 * * * /scripts/admin/

With Solaris 11.3, cron is almost obsolete. Why? Because of SMF and the new scheduled and periodic services. I'm not going to talk about why SMF is great or not. To me it is great and I never ran into any serious problem. On a Solaris 11.3 installation I will move custom cron jobs to SMF and create scheduled services.
These do the same as cron, plus everything else SMF has to offer.
The scheduled service I use for the CVE reporting is the following:

<?xml version="1.0" ?>
<!DOCTYPE service_bundle
  SYSTEM '/usr/share/lib/xml/dtd/service_bundle.dtd.1'>
<!--
    Manifest created by svcbundle (2015-Sep-04 15:05:15+0200)
-->
<service_bundle type="manifest" name="site/cveList">
    <service version="1" type="service" name="site/cveList">
        <!--
            The following dependency keeps us from starting until the
            multi-user milestone is reached.
        -->
        <dependency restart_on="none" type="service"
            name="multi_user_dependency" grouping="require_all">
            <service_fmri value="svc:/milestone/multi-user"/>
        </dependency>
        <instance enabled="true" name="default">
            <method_credential user='root' group='root'/>
        </instance>
    </service>
</service_bundle>

Use svcbundle to generate your own manifest.

# svcbundle -o /var/tmp/cveList.xml -s service-name=site/cveList -s start-method=/lib/svc/method/ -s interval=day -s hour=5 -s minute=30
# svccfg validate /var/tmp/cveList.xml

It's as easy as that. Add to it whatever you feel like and whatever is needed: mail reporting in case of a status change, for example.

Well, now we have a service that generates a CVE report for a server every day at 5:30am.
We need more, so let's move on to the next piece.

Building a custom IPS package

The best way to deploy any piece of software on a Solaris 11.x server is with IPS.
IPS packages are very easy to use once they are built and published. List, install, info, uninstall, contents, search, freeze, unfreeze, etc.: it is always the same command pattern, and that is what makes it easy. But how do you build your own packages? That is always a bit more tricky than using them. Instead of explaining how it works, I will just link to another article written by Glynn Foster which covers everything you need to know.
If you don't want to type in every single step, this little script might help. Adjust your IPS repository and your paths, and all you need is a so-called mog file, which in this case could look like this:

set name=pkg.fmri value=pkg://custom/security/custom-cveList@1.0.2
set name=variant.arch value=sparc value=i386
set name=pkg.description value="custom CVE reporting"
set name=pkg.summary value="custom Solaris CVE reports"
<transform dir path=lib$ -> drop>
<transform dir path=lib/svc$ -> drop>
<transform dir path=lib/svc/manifest$ -> drop>
<transform dir path=lib/svc/manifest/site$ -> set owner root>
<transform dir path=lib/svc/manifest/site$ -> set group sys>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set owner root>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set group bin>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> set mode 0444>
<transform file path=lib/svc/manifest/site/cveList\.xml$ -> default restart_fmri svc:/system/manifest-import:default>
<transform dir path=lib/svc/method$ -> drop>
<transform file path=lib/svc/method/cveList\.py$ -> set owner root>
<transform file path=lib/svc/method/cveList\.py$ -> set group bin>
<transform file path=lib/svc/method/cveList\.py$ -> set mode 0555>

Besides the mog file, you just enter the path to your proto directory containing the software that is supposed to be packaged up, and you are good to go. You will be asked to type in the name of the package, and that's it. The rest is done automatically. You might have to adjust the configuration inside your mog file, for example in case of unresolved dependencies. Should you be missing a custom IPS repo, create one real quick and then start packaging.

Creating a custom IPS repo and sharing it via NFS:

# zfs create -po mountpoint=/ips/custom rpool/ips/custom
# zfs list -r rpool/ips
rpool/ips          62K  36.2G    31K  /rpool/ips
rpool/ips/custom   31K  36.2G    31K  /ips/custom
# pkgrepo create /ips/custom
# zfs set share=name=custom_ips,path=/ips/custom,prot=nfs rpool/ips/custom
# zfs set share.nfs=on rpool/ips/custom
# zfs get share
NAME                                                           PROPERTY  VALUE  SOURCE
rpool/ips/custom                                               share     name=custom_ips,path=/ips/custom,prot=nfs  local

Let’s actually build the cveList IPS pkg.

# /scripts/admin/ /scripts/admin/IPS/CVE/MOG/custom-cveList.mog /scripts/admin/IPS/CVE/PROTO.CVE

Need some information about the package. Answer the following questions to generate a mogrify-file (package_name.mog) or if you have a package_name.mog template execute this script with args:

 /scripts/admin/ [path_to_mog_file] [path_to_proto_dir]

Enter Package Name (eg. custom-compliance): custom-cveList

Ready! Generating the manifest.

pkgsend generate... OK
pkgmogrify... OK
pkgdepend generate... OK
pkgdepend resolve... OK
eliminating version numbers on required dependencies... OK
testing manifest against Solaris 11.2 repository, pkglint ... 
Lint engine setup...

Ignoring -r option, existing image found.
Starting lint run...


Review the manifest file custom-cveList.p5m.4.res!

publish the ips package with:
pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res

check the package with:
pkg refresh
pkg info -r custom-cveList
pkg contents -m -r custom-cveList
pkg install -nv custom-cveList

remove it:
pkgrepo remove -s file:///data/ips/custom pkg://custom/security/custom-cveList@1.0.2

Et voilà, the package is ready to be published.

# pkgsend publish -s file:///data/ips/custom -d /scripts/admin/IPS/CVE/PROTO.CVE /scripts/admin/IPS/CVE/custom-cveList.p5m.4.res
# pkg refresh
# pkg info custom-cveList
             Name: security/custom-cveList
          Summary: custom Solaris CVE reports
      Description: custom CVE reporting
            State: Installed
        Publisher: custom
          Version: 1.0.2
           Branch: None
   Packaging Date: Tue Sep 08 16:48:50 2015
Last Install Time: Tue Sep 08 16:52:07 2015
             Size: 8.80 kB
             FMRI: pkg://custom/security/custom-cveList@1.0.2:20150908T164850Z

DONE! At least with getting CVEs, matching CVEs, scheduling reports, and building a package out of all of this.

Let’s deploy.

Let puppet do your job

In this case I am already running a Puppet master and several Puppet agents. Since I talked about many hundreds or thousands of Solaris installations, a master-agent setup is exactly what we want.
Nobody has the time and endurance to log in to each system and do a pkg install custom-cveList.
I figured a Puppet module would be just what I want.
And to save you time, here it is:

# cat /etc/puppet/modules/cve/manifests/init.pp
class cve {
        if $::operatingsystemrelease == '11.3' or $::operatingsystemmajrelease == '12' {
                package { 'custom-cveList':
                        ensure => 'present',
                }
        }
        if $::operatingsystemrelease == '11.2' {
                cron { 'cveList':
                        ensure  => 'present',
                        command => '/scripts/admin/',
                        user    => 'root',
                        hour    => 5,
                        minute  => 30,
                }
        }
}

The first if-statement installs the recently built and published IPS pkg. Since scheduled services are not available in Solaris 11.2, I had to add a crontab entry for that case, which is the second if-statement above.
Now just add it to your /etc/puppet/manifests/site.pp and you are all set.

node default {
        include nameservice
        include tsm
        include arc
        include compliance
        include mounts
        include users
        include cve
}

That's it. From now on, every single Solaris server that runs a puppet agent will have your custom CVE reporting deployed.
Reading all this actually takes longer than just doing it, and you only need to go through it once.

I know this looks like a lot, but it really isn't. If you want to leave out the IPS part, just add your scripts and the service/cron entry to your puppet configuration.
This is easier to handle than a third-party tool. If you want, you could implement it into your ITIL process and its tools to automate CVE service request handling.