Here’s a blog post on something I didn’t even know existed until recently: Informix simple password encryption. It does what it says on the tin and nothing else: it encrypts the password you use to authenticate to the database server, which is otherwise sent in the clear. If you weren’t aware of this, it’s certainly worth reading on!
I don’t know when this feature came into the product, but some of my IBM contacts didn’t seem to be aware that it exists either, and having tried to use it, it’s apparent that this is a cobweb area of the product, with silly niggles stopping it from working properly in a client/server environment.
However it is a documented feature and the documentation is here. I’ve linked to the 11.70 docs but, as far as I can tell, the 12.10 docs are identical.
So why am I not using the more advanced network data encryption, which came into the product more recently, encrypts all your network traffic using SSL and is probably better supported? There are two reasons: it does more than I need, and there will be an overhead to using SSL. This may be small but I have not had the chance to quantify it yet.
Below is what your auditor doesn’t want to see: a tcpdump of me connecting to an environment called ‘test 4’ with user test_user and password ABCDefgh1234, using a standard onsoctcp TCP connection.
To run this test I put the following in a file called connect.sql:
connect to 'sb_test4' user 'test_user' using 'ABCDefgh1234';
Then I connect using dbaccess:
$ dbaccess - connect.sql
At the server end I’m capturing traffic and (oh dear):
# tcpdump -nnvvXSs 1514 -i eth0 port 9099 and tcp
0x0000: 4500 01e5 1902 4000 4006 a0a9 ac10 936a E.....@.@......j
0x0010: ac10 93dc 80da 238b 42f5 709c 3748 53b9 ......#.B.p.7HS.
0x0020: 8018 0073 3af7 0000 0101 080a 43e7 699c ...s:.......C.i.
0x0030: f8a3 d51b 7371 4161 3042 5051 4141 7371 ....sqAa0BPQAAsq
0x0040: 6c65 7865 6320 7465 7374 5f75 7365 7220 lexec.test_user.
0x0050: 2d70 4142 4344 6566 6768 3132 3334 2039 -pABCDefgh1234.9
0x0060: 2e32 3430 2041 4141 2342 3030 3030 3030 .240.AAA#B000000
0x0070: 202d 6473 625f 7465 7374 3420 2d66 4945 .-dsb_test4.-fIE
0x0080: 4545 4920 4442 5041 5448 3d2f 2f74 6573 EEI.DBPATH=//tes
0x0090: 7434 5f74 6370 2043 4c49 454e 545f 4c4f t4_tcp.CLIENT_LO
0x00a0: 4341 4c45 3d65 6e5f 5553 2e38 3835 392d CALE=en_US.8859-
0x00b0: 3120 4e4f 4445 4644 4143 3d6e 6f20 434c 1.NODEFDAC=no.CL
0x00c0: 4e54 5f50 414d 5f43 4150 4142 4c45 3d31 NT_PAM_CAPABLE=1
0x00d0: 203a 4147 3041 4141 4139 6232 3441 4141 .:AG0AAAA9b24AAA
0x00e0: 4141 4141 4141 4141 4139 6332 396a 6447 AAAAAAAAA9c29jdG
0x00f0: 4e77 4141 4141 4141 4142 4141 4142 5041 NwAAAAAAABAAABPA
0x0100: 4141 4141 4141 4141 4141 6333 4673 5a58 AAAAAAAAAAc3FsZX
0x0110: 686c 5977 4141 4141 4141 4141 567a 6357 hlYwAAAAAAAAVzcW
0x0120: 7870 4141 414c 4141 4141 4177 414b 6447 xpAAALAAAAAwAKdG
0x0130: 567a 6444 5266 6447 4e77 4141 4272 4141 VzdDRfdGNwAABrAA
0x0140: 4141 4141 4141 6154 4141 4141 4141 4142 AAAAAAaTAAAAAAAB
0x0150: 746e 6457 5668 596d 5630 5958 4277 6448 tndWVhYmV0YXBwdH
0x0160: 4e30 4d44 6375 6332 7435 596d 5630 4c6d N0MDcuc2t5YmV0Lm
0x0170: 356c 6441 4141 4443 396b 5a58 5976 6348 5ldAAADC9kZXYvcH
0x0180: 527a 4c7a 4977 4141 4151 4c32 6876 6257 RzLzIwAAAQL2hvbW
0x0190: 5576 6447 6876 6258 427a 6232 3569 4141 UvdGhvbXBzb25iAA
0x01a0: 4275 4141 5141 4141 4153 4148 5141 4a51 BuAAQAAAASAHQAJQ
0x01b0: 4359 6c39 6b41 4143 6352 4142 7376 6233 CYl9kAACcRABsvb3
0x01c0: 4230 4c32 6c75 5a6d 3979 6257 6c34 4c32 B0L2luZm9ybWl4L2
0x01d0: 4a70 6269 396b 596d 466a 5932 567a 6377 Jpbi9kYmFjY2Vzcw
0x01e0: 4141 6677 00 AAfw.
You can clearly see both the user name (test_user) and the password (ABCDefgh1234) in the ASCII decode on the right.
Simple password encryption is easy to set up but, as we’ll see, the manual misses a step or two and a couple of hacks are needed to get this working. All the problems are at the client end and you won’t see them if you test the connectivity entirely within the server using a full engine install.
At the server end we need a concsm.cfg file, which by default lives at $INFORMIXDIR/etc/concsm.cfg, but you can override this with the easy-to-remember INFORMIXCONCSMCFG environment variable, which works in a similar fashion to INFORMIXSQLHOSTS.
My server concsm.cfg file contains the following:
SPWDCSM("/opt/informix_test4/lib/csm/libixspw.so", "", "p=1")
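The server-side setup can be sketched as a short shell session. The paths are the ones from my environment and will differ on yours, and I am assuming here that INFORMIXCONCSMCFG takes the full pathname of the file; check the behaviour on your own system.

```shell
# Sketch: create a concsm.cfg in a non-default location and point the engine
# at it. The library path is an example from this post, not a default.
mkdir -p ./informix_etc
cat > ./informix_etc/concsm.cfg <<'EOF'
SPWDCSM("/opt/informix_test4/lib/csm/libixspw.so", "", "p=1")
EOF

# Assumption: the variable takes the full pathname, analogous to INFORMIXSQLHOSTS.
export INFORMIXCONCSMCFG=$PWD/informix_etc/concsm.cfg

grep SPWDCSM "$INFORMIXCONCSMCFG"   # prints the SPWDCSM line if all is well
```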
I also set up an additional sqlhosts entry:
test4_tcp_secure onsoctcp myserver 9101 csm=(SPWDCSM)
Finally I make sure test4_tcp_secure is listed as one of my DBSERVERALIASES in my onconfig and bounce the instance. Unfortunately I don’t think this parameter is dynamically configurable.
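For completeness, the relevant onconfig line in my environment looks something like this; the first alias is hypothetical and simply stands in for whatever aliases you already have (the list is comma-separated):

```
DBSERVERALIASES test4_tcp,test4_tcp_secure
```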
We can of course now test this all within the confines of the server and it will work!
Let’s move onto the client side where things are not quite as straightforward.
One of my mottos is that every day is a school day, and on one day last week I learnt that when you install the Informix server the GSKit is installed for you. The GSKit is mentioned in the machine notes for the Linux x86_64 release, for example:
14. Secure Sockets Layer
IBM Informix Database Server uses the libraries and utilities provided by
the IBM Global Security Kit (GSKit) for Secure Sockets Layer (SSL)
a. Before uninstalling GSKit, verify that it is not needed on your
system. It is possible that software other than Informix Database
Server requires GSKit. Uninstall by identifying and removing GSKit
packages using the command-line interface:
Run rpm command with the -qa option to obtain a list of installed
GSKit packages with their exact names.
rpm -qa | grep gsk
As root user run the rpm command to remove each package as needed.
rpm -ev gskssl64-184.108.40.206 gskcrypt64-220.127.116.11
b. If you want to restore Secure Sockets Layer capability after you
have uninstalled GSKit, see the readme file in $INFORMIXDIR/gskit
for how to install GSKit.
15. Simple Password Communications Support Module
The name of the IBM Informix shared library for Simple Password CSM on
Linux is libixspw.so.
I’ve also drawn attention to point 15, in case this is different on your platform.
The machine notes led me to check this (still on the server):
> rpm -qa | grep gsk
At the client end I am using Client SDK 3.70.FC8DE. The setup steps are very similar to those on the server:
- Create a concsm.cfg file, optionally using the INFORMIXCONCSMCFG variable.
- Add a sqlhosts entry, similar to the server.
The concsm.cfg file on the client end is different to that on the server and reflects the different path to the libixspw.so file:
SPWDCSM("/opt/informix/lib/client/csm/libixspw.so", "", "p=1")
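The client’s sqlhosts entry can simply mirror the server’s; the host name and port below are the ones from my environment:

```
test4_tcp_secure onsoctcp myserver 9101 csm=(SPWDCSM)
```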
However on the client side it may be necessary to install the GSKit manually, as user root, from the RPM packages supplied with the Client SDK media.
This step isn’t obviously documented anywhere. I had to resort to strace on dbaccess to find what was wrong when my connection didn’t work:
open("/lib64/tls/libgsk8ssl_64.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/lib64/libgsk8ssl_64.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib64/tls/libgsk8ssl_64.so", O_RDONLY) = -1 ENOENT (No such file or directory)
open("/usr/lib64/libgsk8ssl_64.so", O_RDONLY) = -1 ENOENT (No such file or directory)
munmap(0x7fc5d5299000, 40473) = 0
--- SIGSEGV (Segmentation fault) @ 0 (0) ---
+++ killed by SIGSEGV (core dumped) +++
Segmentation fault (core dumped)
The above gave me enough clues to find references to the gskit in the manual and fix the problem.
There is another gremlin as well and without this fix you will see unhelpful error messages like this when using dbaccess:
14581: Cannot open file 'css.iem'.
It turns out it’s necessary to add links to some of the language files, which have different names when distributed with the Client SDK than with the server.
The above error message can be fixed on the client by doing the following:
ln -s ccss.iem css.iem
IBM Informix support also pointed out that this is needed too:
ln -s ccsm.iem csm.iem
It turns out these steps are necessary only for dbaccess and not for applications using IConnect. Hopefully these will be fixed in a future Client SDK version.
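The symlink workaround above can be scripted. The sketch below uses a scratch directory standing in for the real message-file directory, because the location of the .iem files varies between installs; locate ccss.iem in your own Client SDK tree first (for example with `find $INFORMIXDIR -name 'ccss.iem'`).

```shell
# Sketch of the dbaccess message-file workaround. MSGDIR is a stand-in for
# the real Client SDK message directory; the touch lines simulate the
# CSDK-named files that would already be there on a real install.
MSGDIR=./msgdir
mkdir -p "$MSGDIR"
touch "$MSGDIR/ccss.iem" "$MSGDIR/ccsm.iem"

# Create the server-style names that dbaccess expects:
ln -sf ccss.iem "$MSGDIR/css.iem"
ln -sf ccsm.iem "$MSGDIR/csm.iem"
ls -l "$MSGDIR"
```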
Lastly a repeat of my tcpdump test with an encrypted password:
0x0000: 4500 01e7 2471 4000 4006 9538 ac10 936a E...$q@.@..8...j
0x0010: ac10 93dc a457 238d 3bd6 ab8f e3a9 caed .....W#.;.......
0x0020: 8018 0073 1b2e 0000 0101 080a 4418 8fe2 ...s........D...
0x0030: f8d4 fb61 7371 4161 3842 5051 4141 7371 ...asqAa8BPQAAsq
0x0040: 6c65 7865 6320 7465 7374 5f75 7365 7220 lexec.test_user.
0x0050: 2039 2e32 3430 2041 4141 2342 3030 3030 .9.240.AAA#B0000
0x0060: 3030 202d 6473 625f 7465 7374 3420 2d66 00.-dsb_test4.-f
0x0070: 4945 4545 4920 4442 5041 5448 3d2f 2f74 IEEEI.DBPATH=//t
0x0080: 6573 7434 5f74 6370 5f73 6563 7572 6520 est4_tcp_secure.
0x0090: 434c 4945 4e54 5f4c 4f43 414c 453d 656e CLIENT_LOCALE=en
0x00a0: 5f55 532e 3838 3539 2d31 204e 4f44 4546 _US.8859-1.NODEF
0x00b0: 4441 433d 6e6f 2043 4c4e 545f 5041 4d5f DAC=no.CLNT_PAM_
0x00c0: 4341 5041 424c 453d 3120 3a41 4730 4141 CAPABLE=1.:AG0AA
0x00d0: 4141 3962 3234 4141 4141 4141 4141 4141 AA9b24AAAAAAAAAA
0x00e0: 4141 3963 3239 6a64 474e 7741 4141 4141 AA9c29jdGNwAAAAA
0x00f0: 4141 4241 4141 4250 4141 4141 4141 4141 AABAAABPAAAAAAAA
0x0100: 4141 4163 3346 735a 5868 6c59 7741 4141 AAAc3FsZXhlYwAAA
0x0110: 4141 4141 4156 7a63 5778 7041 4141 4c41 AAAAAVzcWxpAAALA
0x0120: 4141 4141 7741 5264 4756 7a64 4452 6664 AAAAwARdGVzdDRfd
0x0130: 474e 7758 334e 6c59 3356 795a 5141 4161 GNwX3NlY3VyZQAAa
0x0140: 7741 4141 4141 4141 4c43 6a41 4141 4141 wAAAAAAALCjAAAAA
0x0150: 4141 625a 3356 6c59 574a 6c64 4746 7763 AAbZ3VlYWJldGFwc
0x0160: 4852 7a64 4441 334c 6e4e 7265 574a 6c64 HRzdDA3LnNreWJld
0x0170: 4335 755a 5851 4141 4177 765a 4756 324c C5uZXQAAAwvZGV2L
0x0180: 3342 3063 7938 794d 4141 4145 4339 6f62 3B0cy8yMAAAEC9ob
0x0190: 3231 6c4c 3352 6f62 3231 7763 3239 7559 21lL3Rob21wc29uY
0x01a0: 6741 4162 6741 4541 4141 4145 6742 3041 gAAbgAEAAAAEgB0A
0x01b0: 4355 416d 4a66 5a41 4141 6e45 5141 624c CUAmJfZAAAnEQAbL
0x01c0: 3239 7764 4339 7062 6d5a 7663 6d31 7065 29wdC9pbmZvcm1pe
0x01d0: 4339 6961 5734 765a 474a 6859 324e 6c63 C9iaW4vZGJhY2Nlc
0x01e0: 334d 4141 4838 00 3MAAH8.
The password has gone. Where has it gone? That’s a bit hard to say given it’s now encrypted, but having checked the tcpdump for the entire session I am sure that:
- It is not sent in the clear.
- We are not authenticating by hosts.equiv or any other passwordless means.
- Sending the wrong password leads to a log on failure.
Compliance is one of those things you can hardly ignore as a DBA these days. Whether it’s a PCI DSS, financial or internal best-practice audit, at some point someone is going to ask you whether you are using database auditing. In my experience the auditors struggle to ask Informix-specific questions, but this is one that always comes up.
I guess there are three answers to this question:
- Yes, we use database auditing.
- No, we don’t use Informix auditing, but we have a third-party solution somewhere else in the stack, which means someone else worries about it.
- Can we have a compensating control, please?
I rarely find that auditors are too concerned about the detail of what you’re actually auditing. If you can log in, do some stuff and show them that this resulted in some messages in the audit log, they are usually happy. They are usually more concerned about where the logs go, who can read them and so on.
While the auditors are clearly not Informix DBAs familiar with all the auditing mnemonics, they are not daft: they know they can take for granted next year most of what they asked for this time, and ask more probing questions next time.
So should you look at onaudit for your requirements? It’s been around a long time but I expect it may see a pick-up in interest as more and more systems take payments in one way or another. In some ways it could do with some updates: integration with syslog, allowing easy upload to a centralised logging system, is needed. There is an RFE open for this (id 58678). It’s not mine but it had six votes when I last checked and it deserves more!
Positives about onaudit include:
- It’s free with all editions.
- Provided you stay away from selective row auditing (I don’t cover this in this blog post) and don’t try to audit much or any of what your application does, the overhead is negligible.
- It gives you as a DBA a clearer idea of what is happening on your system.
So I think it’s certainly worthy of consideration. I know some customers prefer security solutions external to the database like Guardium but these are costly. I don’t know much about them so I shall leave that thought there.
Auditing needs to be part of a more general secure framework. If everyone at your site logs in as user informix or any shared account, the worst case being the same account as your application, it’s not going to be as useful. Applying rules by user will be difficult or impossible.
Some sites I’ve seen let DBAs do all their work as user informix. It definitely saves developing a more secure framework for DBAs to work in (this is not a good thing!), but it has disadvantages. Even if you avoid shared passwords by using sudo to informix (on UNIX) having logged in as yourself, you then need to cross-check with the secure logs on your server to see who it was, and if two people have escalated privileges at the same time it can be tricky to distinguish their actions. Ideally you need DBAs and every other real person working under their own user ids as much as possible.
To work as a DBA without access to the informix account, you simply add yourself to the group owning the $INFORMIXDIR/etc folder and grant yourself the DBA privilege in any databases where you need to run DDL, plus sysmaster, sysadmin, sysutils, sysha and sysuser. This still presents the following challenges, which may require specific sudo-type solutions:
- Starting an instance; stopping one is no problem.
- Running xtrace and certain oncheck commands.
Additionally as a DBA you may need root access occasionally for installations, upgrades and to use debuggers.
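As an illustration of a sudo-type solution for the instance-start problem, a hypothetical sudoers fragment might look like the one below. The group name and install path are made up for the example; adjust them for your site.

```
# Hypothetical /etc/sudoers.d fragment (always edit with visudo).
# Members of group dba may start the engine as user informix, nothing more.
%dba ALL = (informix) NOPASSWD: /opt/informix/bin/oninit
```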
So before you even start there are some highly desirable prerequisites:
- Your applications use their own account or (ideally) accounts and real users cannot run ad-hoc sessions using these.
- Real users don’t use shared accounts (and especially not shared passwords). This means locking down the informix account.
- DBAs practise what they preach and administer the system under their own accounts as much as possible.
Getting this far can be a struggle but even if you’re only some of the way there, you can still proceed.
The next step is to consider whether to install Informix with role separation. I’m not going to discuss this at length so I’ll point to the documentation. There are no real gotchas here: it works pretty much as it says on the tin. The key idea is that it separates the DBAs from the people who decide what should be audited and who can see the audit trail. In practice I think total separation is impossible, because the people deciding what should be audited need to understand the impact on the system of what they audit and the volume of data this produces. It is certainly possible to slow a system down by auditing every update.
So you’re now ready to switch on auditing? Nearly. If you monitor your system via onstat or have scripts which call onmode (‘onmode -c [un]block’ being a specific example where care is required), you need to be aware that in all but the latest Informix releases, right up to and including 12.10.FC5W1, your onstat and onmode commands will run more slowly as soon as you switch on auditing. This can also affect admin API commands, not just the ones which are direct equivalents for onmode. The situation can get quite bad when lots of these commands run at the same time, leading to significant delays in their responses.
Fortunately there are some fixes for this:
- TURNING ON THE AUDITING LEVEL 1 ADDS AN UNNECESSARY DELAY TO ONSTAT AND ONMODE COMMANDS
This has been around for a while and appeared in 11.70.FC7W1. However it is not very effective and only eliminates the delay if the volume of onstat commands being run on your system is low.
- TURNING ON THE AUDITING LEVEL 1 ADDS AN UNNECESSARY DELAY TO ONSTAT & ONMODE COMMANDS
This is completely effective and means that onstat and onmode behave identically to when auditing is switched off but it only works if you do not have any audit masks which log the use of these commands.
There are workarounds for the auditing delay, such as using sysmaster equivalents for the onstat commands or running onstat commands inside an ‘onstat -i’ interactive session.
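As a sketch of the sysmaster workaround, the snippet below writes a rough stand-in for ‘onstat -g ses’ as SQL. The column names are from the sysmaster:syssessions table as I know it; verify them against the docs for your version. Running it needs a live instance, so the dbaccess call is shown commented out.

```shell
# Write a rough 'onstat -g ses' equivalent as SQL. Column names assumed from
# sysmaster:syssessions; check them against your version's documentation.
cat > sessions.sql <<'EOF'
-- One row per session, without the onstat/auditing interaction
SELECT sid, username, hostname FROM sysmaster:syssessions;
EOF
# dbaccess sysmaster sessions.sql   # run this on the database server
cat sessions.sql
```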
Finally you’ll want to consider setting up some audit masks. I take the following approach to this:
- _require mask
- This mask defines the minimum events to be audited for all users. I put everything that’s common in here.
- _default mask
- If an account is not assigned a specific mask, it will pick up all the events in this mask. To avoid having to assign masks to all real users, I don’t assign them any mask, so they automatically inherit this one (in addition to what is in the _require mask).
- Other masks
- For my applications and other accounts needing special treatment, I create a custom mask and assign it to the user.
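A hedged sketch of how those masks might be created with onaudit (-a adds a mask, -u names it, -e lists events). The event mnemonics and the ‘appuser’ account are examples only, not recommendations; check the audit event list for your version before running anything. The snippet writes the commands to a script for review rather than executing them, since they need to be run on the server by a user with audit-administration rights.

```shell
# Sketch only: write the mask-creation commands to a script for review.
# Event mnemonics (DRDB, DRTB, CRDB, CRTB, ACTB) and 'appuser' are examples.
cat > setup_audit_masks.sh <<'EOF'
onaudit -a -u _require -e +DRDB,DRTB   # audit database/table drops for everyone
onaudit -a -u _default -e +CRDB,CRTB   # plus creates, for users with no own mask
onaudit -a -u appuser  -e +ACTB        # hypothetical application account mask
EOF
cat setup_audit_masks.sh               # review before running as an audit admin
```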
Finally if you’re feeling brave switch auditing on with some commands like:
onaudit -p /path/to/audit/trail  # directory for the audit trail files
onaudit -s 1048576               # maximum audit file size: 1 Mb
onaudit -e 0                     # error mode 0: continue on audit write errors
onaudit -l 1                     # audit level 1: switches auditing on
Now there is just that application security model for you to tackle.
Good luck and may you sail through your audits!
Once in a while something comes along to make a DBA’s life easier.
The eagle-eyed amongst you will immediately spot the new line:
Maximum number of pages per index fragment: 2,147,483,647
This is a 128 times improvement on the previous limit of 16,775,134, the same as the data pages per fragment limit, which limited an index fragment to just shy of 256 Gb in a 16 kb dbspace. The new limit of 32 Tb (again with a 16 kb dbspace) is much easier to work with. It applies only to detached indices.
Maybe it’s not the sexiest improvement but it actually arrived at the same time as storage pools in version 11.70 and so has been with us for a while. In my own test I was able to build an unfragmented detached index of over 32 Gb in a 2 kb dbspace in 11.70.FC7.
Should you rely (in version 11.70) on undocumented functionality? Without your own testing, maybe not. However, it’s good that this extra breathing space for DBAs exists and – with 12.10 – is documented and fully supported.