After some back and forth about device issues with my domain's hosting provider, I'm at least online again. Hopefully support will get going and the performance issues will soon be a thing of the past.
… and here is one of the great teams behind Oracle Solaris.
It is Oracle Solaris (Product Management)
Wonderful idea. Good to see everyone only dropped their signs and not more. ;-)
You guys rock.
After starting out with the Python RAD zonemgr module, I thought it was time to write about another available Python RAD module. In the following I want to give you a short and simple look and feel of how to get ZFS property information (name, type, atime, compression, compressratio, dedup, mounted and mountpoint). The purpose is to get people interested in Solaris going with RAD and to show how easily it can be used.
Let's start with the usual imports, which are pretty self-explanatory. One is needed to open a connection and the other one depends on the purpose of the script. In this case we want to get ZFS properties, which means we will use zfsmgr.
In case it is not already installed just run: pkg install rad-zfsmgr
import rad.connect as radc
import rad.bindings.com.oracle.solaris.rad.zfsmgr_1 as zfsmgr
Next we need to connect to a RAD client. In this case it is a local one and a unix socket can be used.
If you like to use a remote connection go ahead and use ssh://[hostname] instead.
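The choice between the two connection schemes can be captured in a tiny helper. This is just an illustrative sketch of the URI convention, not part of the RAD API:

```python
def rad_uri_string(hostname=None):
    # Local connections go through the Unix socket,
    # remote ones through ssh.
    if hostname is None:
        return "unix:///"
    return "ssh://%s" % hostname
```

Passing the result to radc.RadURI() then works the same way in both cases.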
uri = radc.RadURI("unix:///")
rc = uri.connect()
We now have an open connection (rc) and we can move on to the ZFS part.
First we need to know what the existing datasets are. rc.list_objects(zfsmgr.ZfsDataset()) will do exactly this for us. It lists all the ZFS dataset objects (ZfsDataset) the RAD ZFS manager (zfsmgr) has to offer for the chosen connection (rc).
zfsDataSets = rc.list_objects(zfsmgr.ZfsDataset())
Well, the list of datasets is complete. But we will need more than just the objects of each dataset.
In order to get the information we are interested in we need to define it first. Therefore ZfsPropRequest is used.
prop0 = zfsmgr.ZfsPropRequest(name="name")
prop1 = zfsmgr.ZfsPropRequest(name="type")
prop2 = zfsmgr.ZfsPropRequest(name="atime")
prop3 = zfsmgr.ZfsPropRequest(name="compression")
prop4 = zfsmgr.ZfsPropRequest(name="compressratio")
prop5 = zfsmgr.ZfsPropRequest(name="dedup")
prop6 = zfsmgr.ZfsPropRequest(name="mounted")
prop7 = zfsmgr.ZfsPropRequest(name="mountpoint")
After defining the properties we just loop through the object list of the ZFS datasets and request the value for each of the just-defined keys of the current object (zobj).
for dataset in zfsDataSets:
    zobj = rc.get_object(dataset)
    zvalues = zobj.get_props([prop0, prop1, prop2, prop3, prop4, prop5, prop6, prop7])
    print "%-40s%-14s%-8s%-13s%-15s%-7s%-9s%s" % (zvalues[0].value, zvalues[1].value,
        zvalues[2].value, zvalues[3].value, zvalues[4].value, zvalues[5].value,
        zvalues[6].value, zvalues[7].value)
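If you want to play with the row formatting without a Solaris box at hand, the same layout can be exercised with stand-in objects (the class and the sample values below are invented for illustration):

```python
class FakeProp(object):
    # Mimics a property result object with a .value attribute,
    # like the elements returned by zobj.get_props().
    def __init__(self, value):
        self.value = value

def format_row(props):
    # Same column widths as in the RAD example above.
    return "%-40s%-14s%-8s%-13s%-15s%-7s%-9s%s" % tuple(p.value for p in props)

row = format_row([FakeProp(v) for v in
                  ("rpool/export", "filesystem", "on", "off",
                   "1.00x", "off", "yes", "/export")])
```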
Done. We have the information and can therefore close the connection at this point (rc.close()).
The output will remind you of a regular zfs list -o … output. One may ask why you should use RAD then, and the answer is quite simple: because this is just a trivial example of how you can make use of RAD's zfsmgr to get dataset information. The next step would be to take the above and automate whatever comes to your mind. Juggle around with the objects, keys, values, etc. Add more functions to it and even combine it with more RAD modules (e.g. rad-zonemgr). That's where you will benefit the most. But even small automation tasks are perfect for this.
Last but not least, here is an example of what it might look like. I had to take out a few lines because they included Solaris beta content.
Remember, the purpose was to make the very first step with RAD together with ZFS. Try it out and you will most probably like it and stick to it.
A lesser-known jewel of Solaris 11 is RAD (Remote Administration Daemon). Since, as I just found out, I don't have any RAD posts yet, let's talk about what it is and what it offers before we go on.
What RAD does is provide programmatic interfaces to manage Solaris. Users, zones, ZFS and SMF are just a few examples. RAD offers APIs for C, Java, Python and REST. It can be used locally as well as remotely, and it can be used to read data but also to write data. As an example, you can get ZFS information as well as create new datasets or change current settings. There are a couple of great examples and posts out there from e.g. Glynn Foster and Robert Milkowski.
Why am I telling you this? Because you can PROGRAMMATICALLY manage your enterprise operating system now.
Imagine an environment of SPARC T4, T5, T7 or S7 servers running quite a few non-global zones, whether in LDOMs or not.
Over the last weeks I tested kernel zones pretty heavily. The chance of getting rid of LDOMs is just too good not to go for it. Don't get me wrong, LDOMs work fine and are a key part of our current DR concept. But there are also things I really don't like at all. I guess this will make a good post in the near future. ;)
As I said, I was using kernel zones, but for DR purposes (let's say one data center dies) I need to be able to boot the kzones from the other data center. In order to do so, the zone configuration has to be available. Well, shared storage and so on too, but for the purpose of this post let's say that's all taken care of automatically (it really is ;) ).
At the moment zone configurations are saved via an SMF service on an NFS server. But in a disaster I don't want to have to create zones first while the angry mob out there can't work and tries to figure out where I am sitting.
When I used kernel zone live migration I started thinking about how I want to solve this issue. For those who haven't used kzone live migration yet: it creates a zone configuration on the target side and leaves the old one in the configured state. Which means once you have live migrated a zone, the problem seems to be solved. But what if something changes? What if it runs on a different server (hardware) by the time of a disaster?
These two facts, programmatic interfaces and LDOM replacement, led me to the idea of actually having a scheduled SMF service that takes care of zone configurations. For that I use RAD's zonemgr and Python.
Besides the RAD IPS packages (rad and rad-zonemgr) being installed you will need a user with sufficient privileges for rad and zones administration/management.
The script zones-sync is part of a larger RAD script that does all sorts of things.
Simply said, it checks which zone configurations are missing on a target server and then imports/creates them. Quite trivial.
Let's start with the imports.
import rad.connect as radc
import rad.bindings.com.oracle.solaris.rad.zonemgr_1 as zonemgr
import string
import socket
import sys
The first import lets you connect to RAD, as the name already says, while the second one adds the ability to use zone management. Quite self-explanatory, I would say.
In order to know which global zones are supposed to be synced, I started with getting the source and target hostnames straight, depending on the arguments used. So, if only one hostname is given, the other one will be considered to be localhost. For remote purposes two hostnames need to be provided. For the purpose of automation I added [-service|-svc], which in this case maps a certain pattern of hostnames. The name pattern is used to find the corresponding global zone.
In the end anything that helps to automate getting the hostnames should be put here.
def getHostnames():
    global source_hostname
    global target_hostname
    if sys.argv[1] == "-service" or sys.argv[1] == "-svc":
        source_hostname = socket.gethostname()
        if source_hostname[3] == 'A':
            target_hostname = source_hostname[:3] + 'S' + source_hostname[4:]
        elif source_hostname[3] == 'a':
            target_hostname = source_hostname[:3] + 's' + source_hostname[4:]
        elif source_hostname[3] == 'S':
            target_hostname = source_hostname[:3] + 'A' + source_hostname[4:]
        elif source_hostname[3] == 's':
            target_hostname = source_hostname[:3] + 'a' + source_hostname[4:]
    elif len(sys.argv) == 3:
        source_hostname = sys.argv[1]
        target_hostname = sys.argv[2]
    elif len(sys.argv) == 2:
        source_hostname = socket.gethostname()
        target_hostname = sys.argv[1]
    return (source_hostname, target_hostname)
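The interesting part, the fourth-character swap between the A and S data centers, can be pulled out into a standalone function and tested without RAD. The hostnames are of course just examples following my naming scheme:

```python
def map_partner_hostname(hostname):
    # The 4th character encodes the data center:
    # A/a on one side maps to S/s on the other and vice versa.
    swap = {'A': 'S', 'a': 's', 'S': 'A', 's': 'a'}
    c = hostname[3]
    if c not in swap:
        raise ValueError("unexpected hostname pattern: %s" % hostname)
    return hostname[:3] + swap[c] + hostname[4:]
```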
Now that it is clarified which systems will be involved, the next step is to connect to RAD. As you can see below, I am using ssh to connect to remote systems and a Unix socket for local connections.
Again, the user that executes this script must have sufficient privileges on the involved systems. In addition to that the rad:remote and/or rad:local service has to be enabled and online.
def connectRAD():
    global source_rc
    global target_rc
    if len(sys.argv) == 3:
        source_uri = radc.RadURI("ssh://" + source_hostname)
        source_rc = source_uri.connect()
    else:
        source_uri = radc.RadURI("unix:///")
        source_rc = source_uri.connect()
    target_uri = radc.RadURI("ssh://" + target_hostname)
    target_rc = target_uri.connect()
    return (source_rc, target_rc)
The next step is to get a list of zones from each global zone. What we actually get is a list of objects, one per zone.
def getZoneLists():
    global zones_s
    global zones_t
    zones_s = source_rc.list_objects(zonemgr.Zone())
    zones_t = target_rc.list_objects(zonemgr.Zone())
    return (zones_s, zones_t)
Each object includes the values of a zone, for example name, state, brand, etc.
The previous step is done to get each ng/kzone's state and therefore to decide whether it is synced or not. Incomplete zones, for example, are not worth synchronizing. A configured zone's config will be replaced by the one of a running zone in order to have the most current version configured.
def getSourceZones():
    printHeader(source_hostname)
    for name_s in zones_s:
        zone_s = source_rc.get_object(name_s)
        print "\t%-16s %-11s %-6s" % (zone_s.name, zone_s.state, zone_s.brand)
        if zone_s.state != 'incomplete':
            source_zones.append(zone_s.name)
        if zone_s.state == 'configured':
            source_conf_zones.append(zone_s.name)
    return (source_zones, source_conf_zones)

def getInstalledTargetZones():
    printHeader(target_hostname)
    for name_t in zones_t:
        zone_t = target_rc.get_object(name_t)
        print "\t%-16s %-11s %-6s" % (zone_t.name, zone_t.state, zone_t.brand)
        if zone_t.state != 'configured' and zone_t.state != 'incomplete':
            target_zones.append(zone_t.name)
        if zone_t.state == 'configured':
            target_conf_zones.append(zone_t.name)
    return (target_zones, target_conf_zones)
After comparing the states, the script deletes existing configurations that are about to be replaced.
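The decision rule itself boils down to comparing zone states. Here is a standalone sketch of that logic as I apply it, operating on plain dicts instead of RAD objects (the zone names and states in the test are invented):

```python
def plan_sync(source_states, target_states):
    # source_states/target_states: {zonename: state}
    # Returns (to_delete, to_import).
    to_delete = []
    to_import = []
    for name, state in source_states.items():
        if state == 'incomplete':
            # Incomplete zones are not worth synchronizing.
            continue
        tstate = target_states.get(name)
        if tstate is None:
            # Missing on the target: import the configuration.
            to_import.append(name)
        elif tstate == 'configured' and state != 'configured':
            # The target only has an outdated configured copy of a zone
            # that is further along on the source: replace it.
            to_delete.append(name)
            to_import.append(name)
    return (sorted(to_delete), sorted(to_import))
```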
Below you can see the preparation for connecting to the target machine's RAD zonemgr (rad.bindings.com.oracle.solaris.rad.zonemgr_1). The class that is used here is ZoneManager(rad.client.RADInterface).
Quote from the python help for rad.bindings.com.oracle.solaris.rad.zonemgr_1.ZoneManager:
| Create and delete zones. Changes in the state of zones can be
| monitored through the StateChange event.
def deleteExistingConfiguredTargetZone():
    delete_zone = target_rc.get_object(zonemgr.ZoneManager())
    global zones_t
    for name_d in import_zones:
        for name_t in zones_t:
            zone_t = target_rc.get_object(name_t)
            if name_d.name == zone_t.name:
                delete_zone.delete(name_d.name)
                print "DELETED: %s" % name_d.name
                zones_t.remove(name_t)
                break
As you can see above, the method used is simply called delete.
When this is done it is time to export and import configurations.
To export a zone configuration via RAD, the proper class is called Zone with its method exportConfig(*args, **kwargs). Imports are done by using the class ZoneManager and importConfig(*args, **kwargs) as the method.
zones-sync.py [-h|help] [-service|-svc] [target hostname/ip]
def expImpConfig():
    mgr = target_rc.get_object(zonemgr.ZoneManager())
    for name_i in import_zones:
        zone_i = source_rc.get_object(name_i)
        z_config = zone_i.exportConfig()
        split_config = z_config.splitlines(True)
        mgr.importConfig(False, zone_i.name, split_config)
        print "IMPORTED: %s" % zone_i.name
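One detail worth noting is the splitlines(True): importConfig takes the configuration as a list of lines, and keepends=True preserves the trailing newline of each line. A quick illustration with an invented config snippet:

```python
z_config = "create -b\nset brand=solaris\nset zonepath=/zones/zone1\n"
split_config = z_config.splitlines(True)
# Each element keeps its newline, unlike plain splitlines().
```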
Well, all that is left to do is to close the connections.
def closeRc():
    source_rc.close()
    target_rc.close()
And for you to have an idea what this may look like and what it does, let’s check out the following outputs.
In the following the script was used with -service. This is the way I use it for scheduled/periodic services. The hostname pattern is used to define the source and target hostnames.
u2034611@AC6A000:~$ /net/ap6shr1/data/shares/soladm/intern/tm/zones-sync.py -service
AC6A000:
        NAME             STATUS      BRAND
        kzone1           configured  solaris-kz
        zone2            configured  solaris
        zone3            configured  solaris
        fuu1             configured  solaris
        fu1              configured  solaris
        oo1              configured  solaris
        kzone            configured  solaris
        zone1            configured  solaris
AC6S000:
        NAME             STATUS      BRAND
        kzone1           running     solaris-kz
        zone2            installed   solaris
        zone3            configured  solaris
        fuu1             configured  solaris
        fu1              configured  solaris
        oo1              configured  solaris
        kzone            configured  solaris
        zone1            configured  solaris
As you can see nothing was deleted or imported. This is what it looks like when everything is in sync.
Let's tell the script which server is supposed to be the target in order to sync it with the localhost.
u2034611@AC6A000:~$ /net/ap6shr1/data/shares/soladm/intern/tm/zones-sync.py AC4S000
AC6A000:
        NAME             STATUS      BRAND
        kzone1           configured  solaris-kz
        zone2            configured  solaris
        zone3            configured  solaris
        fuu1             configured  solaris
        fu1              configured  solaris
        oo1              configured  solaris
        kzone            configured  solaris
        zone1            configured  solaris
AC4S000:
        NAME             STATUS      BRAND
IMPORTED: kzone1
IMPORTED: zone2
IMPORTED: zone3
IMPORTED: fuu1
IMPORTED: fu1
IMPORTED: oo1
IMPORTED: kzone
IMPORTED: zone1
Above you can see that on one server (AC6A000, in this case the local machine) a bunch of different zones are configured and none exist on the target side. Therefore all of the zone configurations are imported.
Let’s say you have a central server or your local workstation from which you want to sync two global zones. In the following example two hostnames are passed on.
G u2034611@r0065262 % /var/tmp/zones-sync.py ac6s000 10.1.30.107
ac6s000:
        NAME             STATUS      BRAND
        kzone1           running     solaris-kz
        zone2            installed   solaris
        zone3            configured  solaris
        fuu1             configured  solaris
        fu1              configured  solaris
        oo1              configured  solaris
        kzone            configured  solaris
        zone1            configured  solaris
10.1.30.107:
        NAME             STATUS      BRAND
        kzone1           configured  solaris-kz
        zone2            configured  solaris
        zone3            configured  solaris
        fuu1             configured  solaris
        fu1              configured  solaris
        oo1              configured  solaris
        kzone            configured  solaris
        zone1            configured  solaris
DELETED: kzone1
DELETED: zone2
IMPORTED: kzone1
IMPORTED: zone2
This time both global zones have several zones in the configured state, and on one host one zone is running and another one is in the installed state. This means that two zone configurations on the target (in configured state) are outdated. The more current ones, which in this case means rather "runnable" or running, are synced, and the merely configured zones' configurations were deleted first.
There are many more things to explore about RAD, so have fun and stay calm ;) !
With Solaris 11.3, Oracle added a new feature to compliance. It is called tailoring and pretty much does exactly that. Instead of having to manually customize benchmark files, tailoring will do the job for you. That's the trivial description of what tailoring does.
But under the hood tailoring is capable of so much more. Used the right way, it takes the automation of compliance reporting to a more sophisticated level.
How to get started
Before talking about how tailoring can enhance the way you use and customize compliance in Solaris let me quickly walk you through how it works.
Using tailoring is as simple and intuitive as running an assessment. All you need to do is type "compliance tailor -t <tailoring name>".
Example without the option:
ROOT@AP6S500 > compliance tailor
Documented commands (type help <topic>):
========================================
clear    delete   exit     include  list     pick     value
commit   exclude  export   info     load     set      values

Miscellaneous help topics:
==========================
tailoring

tailoring> set tailoring=tailoring.tm
tailoring:tailoring.tm> info
Properties:
        tailoring=tailoring.tm
        benchmark: not set
        profile: not set
tailoring:tailoring.tm>
Example with -t:
ROOT@AP6S500 > compliance tailor -t tailoring.tm2
*** compliance tailor: Can't load tailoring 'tailoring.tm2': no existing tailoring: 'tailoring.tm2', initializing
tailoring:tailoring.tm2> info
Properties:
        tailoring=tailoring.tm2
        benchmark: not set
        profile: not set
As the examples already showed, the tailoring CLI command info shows which tailoring, benchmark and profile are set.
From this point on you could use set …=… all the way until your tailoring is done and you commit it. If you would rather save some time and typing, pick will be the command of your choice.
Use the arrow keys to navigate up and down and pick the benchmark and profile that you would like to take for your tailoring. This can be seen as a sort of template. When you have made your selection, press ESC. info will show what you selected.
tailoring:tailoring.tm2> info
Properties:
        tailoring=tailoring.tm2
        benchmark=tm
        profile=tm
tailoring, benchmark and profile are set, which means tests can be picked now.
The picture above shows the tests of the previously chosen benchmark and profile. "x" stands for an excluded test, while ">" indicates an activated one. This is where you tailor your compliance check. As before, press "ESC" when you are done.
With the command export you can see what changes you have made. The output shown consists of the commands that can be used to manually include and exclude tests instead of using pick.
tailoring:tailoring.tm2> export
set tailoring=tailoring.tm2
# version=2016-02-26T16:44:36.000+00:00
set benchmark=tm
set profile=tm
tailoring:tailoring.tm2> pick
tailoring:tailoring.tm2> export
set tailoring=tailoring.tm2
# version=2016-02-26T17:02:10.000+00:00
set benchmark=tm
set profile=tm
# ivv-000: Compliance integrity is given
exclude ivv-000
# ivv-001: LDAP client configuration is ok
include ivv-001
# OSC-54005: Package integrity is verified
exclude OSC-54005
# OSC-53005: The OS version is current
exclude OSC-53005
# OSC-53505: Package signature checking is globally activated
exclude OSC-53505
Should you be interested in what the tailoring file itself looks like, simply use the option -x with export. This will give you the XML output.
All that is left to do is commit your changes et voilà … exit and done!
In case you have been fiddling around and created a few tailorings already, the list command will list all the existing tailorings.
Tailoring vs. Benchmarks/Profiles only
Now that we have flown through the basics of Solaris compliance tailoring, we already know enough to talk about why EVERYONE should use tailoring.
Maybe you have read one or even all of my earlier Solaris compliance posts or heard me talk about it. If so, you might remember me saying it is really quite fast and simple to customize. Well, it just got way easier. Not all out of the box yet, but almost, and I am sure someone has already requested an enhancement. :-D
So what am I talking about?!
The files for Solaris compliance can be found under two paths. One is /usr/lib/compliance. This is probably the only one you might have been working in, in case you customized anything. For adding benchmarks, adding tests or editing profiles, this was/is where you do it. Other than that, all the content here is pretty much static until a change might come with an update (SRU). With Solaris 11.3 and tailoring, the compliance benchmark directories received another directory called tailorings. By default it is empty.
All the changes and information created while using the compliance command end up under /var/share/compliance. It is important to understand that this content should stay untouched. Just leave this path to Solaris and engineering. But it is always nice and helpful to know where to look for changes.
Let’s take a look at /var/share/compliance/tailorings.
G muehle@AP6S500 % ls -l /var/share/compliance/tailorings
total 60
-rw-r--r--   1 root     root         495 Feb 16 14:21 ivv-tailor.xccdf.xml
-rw-r--r--   1 root     root         964 Feb 16 14:05 tailoring.tm.xccdf.xml
-rw-r--r--   1 root     root         952 Feb 26 18:07 tailoring.tm2.xccdf.xml
-rw-r--r--   1 root     root         489 Feb 17 14:03 test.xccdf.xml
-rw-r--r--   1 root     root       24844 Feb 17 15:11 test123.xccdf.xml
This is the place where compliance tailor saves the tailorings after committing. The content of /var/share/compliance/tailorings/tailoring.tm2.xccdf.xml is exactly what export -x showed us earlier.
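As far as I can tell, the naming convention is simply the tailoring name plus .xccdf.xml inside that directory; a trivial helper to build the path (purely illustrative, not part of any Solaris API):

```python
import os.path

def tailoring_path(name, base="/var/share/compliance/tailorings"):
    # compliance tailor stores committed tailorings as
    # <name>.xccdf.xml below the tailorings directory.
    return os.path.join(base, name + ".xccdf.xml")
```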
Another very interesting directory is /var/share/compliance/assessments. I will write more about why this is, hopefully soon. I am working on customizing Solaris compliance for a larger-scale environment, and this directory plays an important role in that.
But let’s get back on track and talk about how much of an enhancement tailoring is.
At the moment we have different IPS packages with different benchmarks, each with different profiles, just so different scenarios are covered.
This means we spend time customizing large XML files, and we also have to spend time maintaining them.
Now, all we do is package up your tailoring file or a compliance tailor -f command file with includes and excludes in IPS. Less complexity and less maintenance! No more duplicating lines and lines of code only to have a different set of tests that is supposed to be used.
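Such a command file is nothing more than the statements you would otherwise type interactively; a hypothetical example (the tailoring name, benchmark, profile and test IDs are placeholders):

```
# my-tailoring.cmd -- feed to: compliance tailor -f my-tailoring.cmd
set tailoring=my-tailoring
set benchmark=solaris
set profile=Baseline
exclude OSC-54005
include OSC-53005
commit
exit
```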
When you think about it, tailorings are the delta to a certain benchmark. So, what if you had one large benchmark that includes all the available tests and, let's say, preconfigured profiles for solaris, pci-dss and a "complete" profile? To create your own profile, just place your tailoring in /usr/lib/compliance/benchmark/benchmark-name/tailorings/ and run the following:
# compliance assess -t tailoring-name
Using different tests depending on the application has become really simple and quick to prepare and do. Your tailoring works everywhere, no matter whether a benchmark has tests included or excluded. Really nice! Add IPS and Puppet to all of this and you can spend much more time on other topics.
Right now this "complete" benchmark needs to be created by the customer. Not much of a problem if you have already taken care of that, but I would guess not too many have. And even if you have your own all-containing benchmark, with each update you might be missing something in it. Tests or whatsoever. So you still have to maintain thousands of lines of XML content. :-(
So hopefully such a benchmark will make it into a future release of compliance.
Tailoring simplifies Solaris Compliance a lot and saves you a lot of time. It is great! Try it!
Over the last couple of weeks, or actually months, I was lucky to talk to a lot of other Oracle Solaris customers and other Linux and Unix users/admins. One question that I got asked most of the time is: why do I use immutable zones? Where is the benefit if your data can still be read and stolen?
Since I finally got some time for a blog post, I figured I would share the answer with whoever might be interested. Originally I planned on writing a HowTo post ever since this feature was released. But time has passed, and the more important question at the moment seems to be the WHY rather than the HOW.
The answer to why I use immutable (global) zones is security, simplicity, and speed. And all of this at no extra cost.
Let me explain this in more detail.
Security is often looked at just as something that keeps your data safe and protects IT from attackers/hackers. That is definitely a part of what security is, but there is so much more to it. Why are mostly hackers, attackers, or let's say external people, considered a threat to the system, but hardly ever the admins or users on the system itself? Why trust yourself? And what is it I want to protect or prevent?
Immutable zones aim at preventing data manipulation. They protect you from careless admin mistakes like rm -r * in the wrong directory or wrong terminal, from misconfiguration of the system, and therefore of course also from anyone (attackers or users) reconfiguring your system. Yes, the data can be read, but not changed. I am very sure most of you who read this have been dealing with the sudden appearance of a fault, and when the question is asked what happened and what was changed, nobody has done anything. Why bother with this question at all? I don't want to think about what the application users might be doing in /etc, or hope that the new admin is not going to destroy datasets, clean out directories or even misconfigure RBAC.
Immutable zones ensure that the system stays exactly the same until I change something intentionally.
That was the technical point of view. But a growing field of security is compliance. I wonder how much money and time is spent by companies just to be able to somehow assure the auditor that their system configuration is compliant throughout the year. And even more, how much was spent to make sure it did stay the same. Scripts were written, mistakes were corrected, you got more gray hairs over the last couple of months, and the meetings with the auditors are most probably not the highlight of the year. Save time and money and just tell/show the auditor that the system was and is immutable (read-only) and therefore nothing changed! The easiest way to do so, by the way, is to use the Solaris compliance framework.
Another very important fact is that this is not achieved by just mounting datasets read-only. It is deeply integrated in Solaris and based on privileges.
The chances are high that not everyone out there is working with Solaris, but rather with Windows or some Linux distro. So let me start with a short comment on what simplicity does not mean to me as a Solaris guy. Simplicity does not mean I don't have to install additional software or spend time on integrating a feature. That is what I am used to and what is normal to me.
I just start using security features, or in the case of, for example, RBAC, "have" to use them. They are always there.
So what is it that makes immutable zones simple then? Well, let me just show you the steps it takes to turn a non-immutable zone into an immutable one.
root@GZ:~# zoneadm list -cv
  ID NAME      STATUS   PATH             BRAND    IP
   0 global    running  /                solaris  shared
   2 ap1s001   running  /zones/ap1s001   solaris  excl
root@GZ:~# zonecfg -z ap1s001 set file-mac-profile=fixed-configuration
root@GZ:~# zlogin ap1s001 init 6
You enable immutable zones by simply changing the file-mac-profile value to either strict (be careful!), fixed-configuration or flexible-configuration and then rebooting the zone. For the global zone, dynamic-zones is available as well. In case you want to go back to a regular type of zone, just use none as the value for file-mac-profile.
Here is a quote of the zonecfg man page:
none                    makes the zone exactly the same as a normal, r/w zone.

strict                  allows no exceptions to the read-only policy.

fixed-configuration     allows the zone to write to files in and below /var, except
                        directories containing configuration files.

dynamic-zones           is equal to fixed-configuration but allows creating and
                        destroying non-global zones and kernel zones. This profile
                        is only valid for global zones, including the global zone
                        of a kernel zone.

flexible-configuration  is equal to dynamic-zones, but allows writing to files in
                        /etc in addition.
zoneadm list -p shows whether a zone is immutable or not.
root@GZ:~# zoneadm list -p
0:global:running:/::solaris:shared:-:none:
2:ap1s001:running:/zones/ap1s001:98efffaa-2070-adc3-e29a-9d95e123c62e:solaris:excl:R:fixed-configuration:
Listed are the fields zoneid:zonename:state:zonepath:uuid:brand:ip-type:r/w:file-mac-profile.
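Since the fields are colon-separated, the output is easy to consume from a script; here is a small parser using the field names listed above (a sketch, assuming no field itself contains a colon):

```python
FIELDS = ("zoneid", "zonename", "state", "zonepath", "uuid",
          "brand", "ip-type", "rw", "file-mac-profile")

def parse_zoneadm_line(line):
    # zoneadm list -p lines end with a trailing ':';
    # zip() simply ignores the resulting empty last element.
    return dict(zip(FIELDS, line.rstrip().split(":")))

zone = parse_zoneadm_line(
    "2:ap1s001:running:/zones/ap1s001:"
    "98efffaa-2070-adc3-e29a-9d95e123c62e:solaris:excl:R:fixed-configuration:")
```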
To add more simplicity a trusted path (zlogin -T|U) from the global zone / console can be used to do necessary changes. For example adding a directory or adding a user.
You also don’t have to do wild things when it comes to updating/patching. Just use the pkg update command as you always do.
As you can see it is just simple!
Thanks to the sophisticated integration of immutable zones there is no overhead. No software installed on top of the operating system. No daemon running and checking for/preventing system calls or whatsoever. Immutable zones run just as fast as non-immutable zones.
Changing back and forth is just a reboot. I could imagine this might not even be necessary anymore at some point.
Besides that, this feature will speed up the auditor meetings, as mentioned before.
And the process of setting it up is lightning fast compared to other tools out there.
Not having to worry about the configuration of your system anymore will speed up other projects/topics you are working on by saving time and thoughts/distractions.
To tell you the truth this is what my very first answer to the why question always is before getting into details:
Why not?!?!?! Why shouldn’t I use a security feature that is there for free, that works and is a no-brainer to use?!
I don’t wanna worry about my own stupid mistakes or even the ones of others. I don’t trust application admins/users. Auditor meetings are over before they even begin. It’s just a great feature!
As I said at the beginning, I was lucky to be able to talk to quite a lot of different admins, engineers, managers, etc., and it was really nice to see how most of them started thinking "Why not, true!". This feature might not fit in every single environment. But does every machine have IPsec, IPfilter, … enabled? Probably not.
I hope this will encourage some of you to make your own experience with this great Solaris feature.
It is probably not a big surprise to any of you Solaris people when I tell you I keep hearing that Solaris is not open to other software vendors or even open-source products/projects.
Well, among the biggest features that came with Solaris 11 and its releases were OpenStack, OpenSCAP, and Puppet, to name just the most famous ones.
Now, Oracle Solaris comes with new FOSS evaluation packages. They are part of the default release repository.
Here is a very nice blog post by Mike Mulkey on what others than Oracle say about the SPARC M7. Well worth reading.
SPARC M7: Are You Kidding Me!?
As the Oracle post states: “This, in turn, means that several IBM software products can now make use of on-chip SPARC hardware encryption today, automatically, without significant performance impact.”
Start to tailor, throw cron out and get scheduled services in, and spice up your compression with lz4, because Oracle has just released the latest version of Solaris.
This is calling for maintenance and some updates!!!