In 2015 I wrote a blog post about using simple password encryption (SPW) and how – without it – your valuable passwords can be trivially sniffed on your network. If you look through the post it illustrates the vulnerability and shows just how easy it is to set your system up in a more secure way.
SPW only encrypts your password when you connect. Not everyone wants or needs full encryption of all their traffic but what reasons are there not to use SPW?
- It requires a small amount of extra set-up work, although this can (and arguably should) be automated.
- It means your database engine spawns some extra cssmbox_cn threads, although they are only used at connection time and the overhead is low.
- Consideration should be given to patching the IBM Global Security Kit (GSKit) separately from the server and client, both of which bundle it.
I don’t know of any other drawbacks. In my opinion these are nothing substantive when you consider the peace of mind you gain.
If you have Fix Central access you can always download the latest GSKit there. Although it’s used by many IBM products, it’s filed under Tivoli, which isn’t obvious at all.
Patching the GSKit separately isn’t necessarily something you need to do but it isn’t only used by SPW: if you’ve set ENCRYPT_HDR, ENCRYPT_SMX or ENCRYPT_CDR, for example, you are using it. The GSKit doesn’t get installed in INFORMIXDIR; it’s installed by RPM (on Linux) to /usr/local/ibm and only one version can exist on your server. So if you’re used to pre-installing a new version of Informix server or Client SDK in its own folder prior to an upgrade, be aware that you may just have unwittingly upgraded the GSKit.
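If you’re unsure whether any of these parameters are in effect on your instance, you can check from the sysmaster database rather than reading the onconfig file; a sketch (cf_effective shows the value the engine is actually using):

```sql
-- Which connection-encryption parameters are set in the running instance?
SELECT cf_name, cf_effective
FROM sysmaster:sysconfig
WHERE cf_name IN ('ENCRYPT_HDR', 'ENCRYPT_SMX', 'ENCRYPT_CDR');
```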
The feature has suffered a few issues lately and is currently broken when used with the Informix JDBC driver in 11.70.xC9; connections made via CSDK or IConnect work fine. I think the feature would be more dependable if more people used it (or if product testing stress-tested this area). Here are some relatively recent issues:
- All recent JDBC drivers earlier than 4.10.JC8 (including 3.70.JC8W1) suffer from an issue where a small proportion of connections will fail. You might not notice this if your application can capture logon failures and retry automatically. There is no APAR for this that I know of as 4.10.JC8 was extensively reworked for JDBC 4.0 support.
- Informix 11.70.xC9 contains fix IT10493 but this caused a high rate of logon failures with SPW and fix IT17087 is additionally needed but not included.
- If you’re using the 12.10 code line you need xC8 or later to get the same fix.
- CSDK 4.10.FC8 ships with a GSKit version older than the one it actually requires (APAR IT18763). You may not notice this, however, if your server software ships with a later version.
I hope this doesn’t come across as a moan, more a call to action.
For the sake of brevity in this article I am going to group procedures, functions and routines together as stored procedures and ignore any differences between them.
What does the SQL command UPDATE STATISTICS FOR PROCEDURE/FUNCTION/ROUTINE do and, perhaps more pertinently, as a DBA do I need to run it regularly to ensure my systems are working efficiently? For those wanting an immediate answer, I think it is “never” or “almost never”, for reasons I hope to explain clearly in this article.
The command itself is straightforward: calling it causes Informix to parse a stored procedure’s code and produce a query plan for all of it based on the current statistics and data distributions (if the procedure references any tables). It then writes the query plan to the sysprocplan table which is, unless you have an unlogged database, a logged operation written to the logical log. Used with no parameters it does this for all stored procedures in the system.
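For reference, the command can be run against everything or scoped to a single routine; a sketch, where myproc is a hypothetical procedure name:

```sql
-- Re-plan every stored procedure in the current database
UPDATE STATISTICS FOR PROCEDURE;

-- Re-plan a single named procedure only
UPDATE STATISTICS FOR PROCEDURE myproc;
```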
As long as the query plan in sysprocplan is reasonably efficient there is probably no need to (ever) proactively update it, but there may be cases when you’d want to, for example if a very small or empty table has grown into a large one. However, if you were to do this, your new plan would be based on the current table statistics and data distributions, and if these haven’t been updated yet you may get the same, now inefficient, plan.
The manual states:
The sysprocplan system catalog table stores execution plans for SPL routines. Two actions can update the sysprocplan system catalog table:
- Execution of an SPL routine that uses a modified table
- The UPDATE STATISTICS FOR ROUTINE, FUNCTION, or PROCEDURE statement.
There is a created column in the sysprocplan table but it’s a date and not a date/time which makes it much harder to match plan updates to other events.
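You can still get a rough picture of when plans were last written by joining sysprocplan to sysprocedures, bearing in mind that the created column only has date granularity. A sketch:

```sql
-- Which procedures had their stored plans written most recently?
SELECT p.procname, pl.created
FROM sysprocedures p, sysprocplan pl
WHERE pl.procid = p.procid
ORDER BY pl.created DESC;
```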
So what is a modified table? Quite simply it is one where the version number has been incremented. You can see the version number with an SQL query like:
select version from systables where owner='myowner' and tabname='mytable';
I think the only reference to this in the manual is in the section about the systables system catalog table, where it simply says:
Number that changes when table is altered
How the engine works out the dependencies a stored procedure has on different tables falls into the category of system internals, which IBM chooses not to publicly document, but I think it’s safe to say that if a table is referenced anywhere in a procedure, the procedure depends on it.
There are many ways a table can be “altered”, some more obvious than others:
| Method | Version number incremented by |
|---|---|
| GRANT SELECT, UPDATE | 1 |
| UPDATE STATISTICS HIGH | 2 |
| UPDATE STATISTICS MEDIUM | 2 |
| UPDATE STATISTICS [LOW] | 2 |
I am not sure why some operations increment the value by large numbers, all powers of 2, as any increment has a similar effect, at least as far as the scope of this article is concerned.
The table is not a complete list, because there are many possible DDL operations, but it already suggests that:
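You can observe the effect yourself by checking the version number before and after one of the operations in the table; a sketch using a hypothetical table mytable:

```sql
SELECT version FROM systables WHERE tabname = 'mytable';

UPDATE STATISTICS LOW FOR TABLE mytable;

-- Run the first query again: the version number should have increased by 2
SELECT version FROM systables WHERE tabname = 'mytable';
```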
- On most systems it’s likely that UPDATE STATISTICS commands will be the main trigger for stored query plans to be updated. If you run LOW, HIGH and MEDIUM modes for a table, as you will if you use AUS or dostats, you’ll trigger at least three updates for dependent stored procedures (if they are called).
- If we want to grant multiple privileges on the same table, it’s best to do it in a single statement: that way, even if an application calls a dependent stored procedure in between, its stored execution plan will be updated only once.
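To illustrate the second point, a sketch with hypothetical names:

```sql
-- Two statements: the table version is bumped twice, so a dependent
-- procedure called in between may have its plan rewritten twice
GRANT SELECT ON mytable TO appuser;
GRANT UPDATE ON mytable TO appuser;

-- One statement: a single version bump, so at most one plan rewrite
GRANT SELECT, UPDATE ON mytable TO appuser;
```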
- GRANT DBA is not a table-level operation, yet it has an effect.
Further testing shows that both the GRANT DBA and REVOKE DBA statements increment the version number on all tables in the same database where the tabid is 100 or greater, that is, all user tables. From the manual extract above it follows that the stored query plans for all stored procedures or functions dependent on any of those tables will be updated the next time they are executed.
On our systems we see a large number of writes to sysprocplan after granting or revoking the DBA privilege to or from anyone. When graphed we see a sharp peak and a long exponential tail-off as less commonly used procedures get called.
Therefore if you grant DBA to a user on a busy live system, it can affect concurrency more than you might expect. On an idle system you may want to run UPDATE STATISTICS FOR PROCEDURE immediately afterwards to update the stored query plans in an orderly way and save the first session to call any given procedure from this overhead.
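As a sketch of that sequence (names hypothetical):

```sql
-- On a quiet system, after the privilege change...
GRANT DBA TO newdba;

-- ...rebuild all stored plans in one pass, rather than leaving each
-- plan to be rewritten by the first session that calls the procedure
UPDATE STATISTICS FOR PROCEDURE;
```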
I think running the command offline to avoid the overhead for user or application sessions is possibly the only true use case for this command.