Passwords encrypted over the network: why is this feature not enabled by default?

In 2015 I wrote a blog post about using simple password encryption (SPW) and how – without it – your valuable passwords can be trivially sniffed on your network. If you look through the post it illustrates the vulnerability and shows just how easy it is to set your system up in a more secure way.

SPW only encrypts your password when you connect. Not everyone wants or needs full encryption of all their traffic but what reasons are there not to use SPW?

  • It requires a small amount of extra set-up work, although this can (should?) be automated.
  • It means your database engine spawns some extra cssmbox_cn threads, although they are only used at connection time and the overhead is low.
  • Consideration should be given to patching the IBM Global Security Kit (GSKit) separately from the server and client, both of which bundle it.

I don’t know of any other drawbacks. In my opinion these are nothing substantive when weighed against your peace of mind.

If you have Fix Central access you can always download the latest GSKit from here. Although it’s used by many IBM products it’s filed under Tivoli which isn’t obvious at all.

Patching the GSKit separately isn’t necessarily something you need to do but it isn’t only used by SPW: if you’ve set ENCRYPT_HDR, ENCRYPT_SMX or ENCRYPT_CDR, for example, you are using it. The GSKit doesn’t get installed in INFORMIXDIR; it’s installed by RPM (on Linux) to /usr/local/ibm and only one version can exist on your server. So if you’re used to pre-installing a new version of Informix server or Client SDK in its own folder prior to an upgrade, be aware that you may just have unwittingly upgraded the GSKit.
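
If you want to check which GSKit version is currently installed on a Linux server, the RPM database is the quickest place to look. A minimal sketch (the exact package names vary between GSKit releases, so treat the grep pattern as a starting point):

rpm -qa | grep -i gsk
# typically lists packages such as gskcrypt64 and gskssl64 along with their versions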

The feature has suffered a few issues lately and is currently broken when using the Informix JDBC driver against server version 11.70.xC9; connections made via CSDK or IConnect work fine. I think the feature would be more dependable if more people used it (or if product testing stress-tested this area). Here are some relatively recent issues:

  • All recent JDBC drivers earlier than 4.10.JC8 (including 3.70.JC8W1) suffer from an issue where a small proportion of connections will fail. You might not notice this if your application can capture logon failures and retry automatically. There is no APAR for this that I know of as 4.10.JC8 was extensively reworked for JDBC 4.0 support.
  • Informix 11.70.xC9 contains fix IT10493 but this caused a high rate of logon failures with SPW and fix IT17087 is additionally needed but not included.
  • If you’re using the 12.10 code line you need xC8 or later to get the same fix.
  • CSDK 4.10.FC8 ships with an incompatible GSKit version, 8.0.50.66, but actually requires 8.0.50.70+ (APAR IT18763). You may not notice this, however, if your server software ships with a later version.

I hope this doesn’t come across as a moan, more a call to action.


When do my stored procedure execution plans get updated?

For the sake of brevity in this article I am going to group procedures, functions and routines together as stored procedures and ignore any differences between them.

What does the SQL command UPDATE STATISTICS FOR PROCEDURE/FUNCTION/ROUTINE do and, perhaps more pertinently, as a DBA do I need to run it regularly to ensure my systems are working efficiently? For those wanting an immediate answer I think it is “never” or “almost never”, the reasons for which I hope to explain clearly in this article.

The command itself is straightforward: calling it causes Informix to parse a stored procedure’s code and produce a query plan for all of it based on the current statistics and data distributions (if the procedure references any tables). It then writes the query plan to the sysprocplan table which is, unless you have an unlogged database, a logged operation written to the logical log. Used with no parameters it does this for all stored procedures in the system.
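
For reference, the command can be run for a single routine or, with no name, for every routine in the database; a minimal dbaccess sketch (database and procedure names here are placeholders):

dbaccess mydb - <<'EOF'
-- rebuild the stored query plan for one routine
UPDATE STATISTICS FOR PROCEDURE myproc;
-- or for every routine in the database
UPDATE STATISTICS FOR PROCEDURE;
EOF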

As long as the query plan in sysprocplan is reasonably efficient there is probably no need to (ever) proactively update it but there may be cases when you’d want to do so, for example, if a very small or empty table has grown into a large one. However if you were to do this your new plan would be based on the current table statistics and data distributions and if these haven’t been updated yet you may get the same, now inefficient, plan.

The manual states:

The sysprocplan system catalog table stores execution plans for SPL routines. Two actions can update the sysprocplan system catalog table:

  • Execution of an SPL routine that uses a modified table
  • The UPDATE STATISTICS FOR ROUTINE, FUNCTION, or PROCEDURE statement.

There is a created column in the sysprocplan table but it’s a date and not a date/time which makes it much harder to match plan updates to other events.
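
Even so, it can give a rough idea of which routines have had their plans rewritten on a given day. A sketch, assuming the catalog columns are as I remember them (sysprocplan.procid and created; sysprocedures.procid and procname), with a placeholder database name:

dbaccess mydb - <<'EOF'
SELECT DISTINCT r.procname, p.created
FROM sysprocedures r, sysprocplan p
WHERE p.procid = r.procid
AND p.created = TODAY;
EOF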

So what is a modified table? Quite simply it is one where the version number has been incremented. You can see the version number with an SQL query like:

select version from systables where owner='myowner' and tabname='mytable';

I think the only reference to this in the manual is in the section about the systables view, where it simply says:

version
INTEGER
Number that changes when table is altered

How the engine works out the dependencies a stored procedure has on different tables falls into the category of system internals, which IBM chooses not to publicly document, but I think it’s safe to say that if a table is referenced anywhere in a procedure it is dependent on it.

There are many ways a table can be “altered”, some more obvious than others:

Method                     Version number incremented by
GRANT SELECT               1
GRANT UPDATE               1
GRANT SELECT               1
GRANT SELECT, UPDATE       1
UPDATE STATISTICS HIGH     2
UPDATE STATISTICS MEDIUM   2
UPDATE STATISTICS [LOW]    2
RENAME COLUMN              65536
CREATE INDEX               65536
ADD column                 131072
DROP column                131072
GRANT CONNECT              0
GRANT RESOURCE             0
GRANT DBA                  1

I am not sure why some operations increment the value by large numbers, all powers of 2, as any increment has a similar effect, at least as far as the scope of this article is concerned.

The table is not a complete list because there are many possible DDL operations but this does already illustrate or suggest that:

  • On most systems it’s likely that UPDATE STATISTICS commands will be the main trigger for stored query plans to be updated. If you run LOW, HIGH and MEDIUM modes for a table, as you will if you use AUS or dostats, you’ll trigger at least three updates for dependent stored procedures (if they are called).
  • If we want to grant multiple privileges on the same table, it’s best to do it in a single statement: each separate GRANT increments the table’s version number, so a dependent stored procedure called by the application in between the commands would have its stored execution plan rebuilt more than once, whereas with a single statement it is updated only once.
  • GRANT DBA is not a table level operation yet it has an effect.

Further testing shows that both the GRANT DBA and REVOKE DBA statements increment the version number on all tables in the same database where the tabid is 100 or greater, that is all user tables. From the manual above it follows that the stored query plans for all stored procedures or functions dependent on a table will be updated the next time they are executed.
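
This is easy to verify on a test database by capturing the version numbers before and after the grant and comparing; a quick sketch (database name is a placeholder):

dbaccess mydb - <<'EOF'
-- run before and after GRANT DBA and compare the output
SELECT tabid, tabname, version
FROM systables
WHERE tabid >= 100
ORDER BY tabid;
EOF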

On our systems we see a large amount of writes to sysprocplan after granting or revoking the DBA privilege to anyone. When graphed we see a sharp peak and a long exponential tail off as less commonly used procedures get called.

Therefore if you grant DBA to a user on a busy live system, it can affect concurrency more than you might expect. On an idle system you may want to run UPDATE STATISTICS FOR PROCEDURE immediately afterwards to update the stored query plans in an orderly way and save the first session to call any given procedure from this overhead.

I think running the command offline to avoid the overhead for user or application sessions is possibly the only true use case for this command.


Improving remote query performance by tuning FET_BUF_SIZE

I thought I’d write a blog post as a nice example of where tuning the client-side variable FET_BUF_SIZE really sped up a remote query.

FET_BUF_SIZE is documented by IBM in the context of a Java application using JDBC here and as a server environment variable here.

One thing the documentation warns about is that simply setting this to a high value may degrade performance, especially if you have a lot of connections. With that in mind here are some facts about the query I’m running and using as a basis for these tests:

  • I am just using a single connection to the database.
  • the query returns around 10000 rows and 60 Mb of data.
  • the client and the server are geographically separated from each other and Art Kagel’s dbping utility typically takes around 0.1 seconds to connect remotely; this compares with around 3 milliseconds locally.
  • crucially the query runs in seconds locally on the server but takes over three minutes when run remotely.
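
For reference, FET_BUF_SIZE can be set as a client-side environment variable for CSDK/ESQL-C tools or passed as a JDBC connection property; a minimal sketch with placeholder connection details:

# CSDK/ESQL-C clients such as dbaccess: set it in the environment before connecting
export FET_BUF_SIZE=1048576

# JDBC: append it to the connection URL as a property, for example
#   jdbc:informix-sqli://dbhost:9088/mydb:INFORMIXSERVER=my_server;FET_BUF_SIZE=1048576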

If I begin running the query with the default value of FET_BUF_SIZE and monitor waits on the server, I can see that reads only go up slowly and that my session is waiting on a condition (indicated by the Y in position one of column two) more or less all the time:

> while [ 1 ] ; do
> onstat -u | grep thompson
> sleep 1
> done
Userthreads
address flags sessid user tty wait tout locks nreads nwrites
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 552 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 552 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 560 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 560 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 568 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 576 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 592 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 624 0
26eb492d18 Y--P-R- 76228 thompson 0 26e67cd298 0 0 624 0

The sixth column (wait) shows the address of the condition my thread is waiting on. I can use onstat -g con (print conditions with waiters) to see that I’m waiting on the network:

> onstat -g con | grep -E '^cid|26e67cd298'
cid addr name waiter waittime
5789 26e67cd298 netnorm 84353 0

A quick check with onstat -g ses 76228 shows that thread id 84353 does indeed correspond to my session.

While the wait time shown above is not increasing it’s a different story when we look at netstat, again on the server:

> netstat -nc | grep '172.16.0.1'
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 1312 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1284 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1306 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1302 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1194 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1206 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1266 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1304 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1318 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED
tcp 0 1248 10.0.0.1:9088 172.16.0.1:37004 ESTABLISHED

What the above is showing us is that there are consistently around 1200 to 1300 bytes in the send queue (Send-Q). This is surely our bottleneck.

At this point when investigating the problem I considered modifying other parameters such as OPTOFC and Linux kernel parameters. However, a few moments’ thought made it clear these weren’t going to gain anything: OPTOFC optimises the open-fetch-close sequence and for a single long-running query this is not going to give us anything measurable; and an investigation into increasing the Linux kernel parameter related to the send queue size was dismissed when we found that 1300 bytes was well below the maximum allowed.

In Informix 11.50 the maximum value of FET_BUF_SIZE is 32767 (32 kb) but this is increased to 2147483648 (2 Gb), or as we’ll see actually 2147483647, in 11.70 and above. We can therefore move on to experiment with different values:

FET_BUF_SIZE                       Query run time (s)   Average Send-Q size over 10 samples   Maximum Send-Q size observed
Default                            221.2                1274                                  1332
1024                               221.1                1255                                  1326
2048                               221.1                1285                                  1338
4096                               221.2                1297                                  1360
6144                               102.1                2564                                  2676
8192                               56.6                 5031                                  5210
16384                              22.6                 12490                                 13054
32767 (max. 11.50 value)           11.5                 24665                                 29968
65536                              7.0                  62188                                 62612
131072                             4.9                  115793                                127826
262144                             4.0                  146686                                237568
524288                             3.5                  184320                                249856
1048576                            3.3                  245760                                473616
2097152                            3.2                  249856                                486352
2147483647 (max. value - 1)        3.0                  245760                                549352
2147483648 (supposed max. value)   221.3                1276                                  1366

As the run times get shorter it gets tricky to measure the Send-Q using netstat -nc, which only samples once per second. It can be sampled much more frequently using a command like:

while [ 1 ] ; do
netstat -n | grep '172.16.0.1'
done

This will produce many measurements per second and with this it’s possible to see it fill up and drain several times in the period while the statement is running.

It’s also interesting to play around with the boundaries. For example, with a FET_BUF_SIZE of around 5500 to 5600, maximum Send-Q sizes matching those consistently achieved with a FET_BUF_SIZE of 6144 begin to creep into the results, but many measurements remain around the values consistently measured with a FET_BUF_SIZE of 4096:

Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 1316 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1318 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1278 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1352 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1288 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 2546 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1278 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 2502 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1266 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1314 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 2506 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED
tcp 0 1292 10.0.0.1:9088 172.16.0.1:37488 ESTABLISHED

So what are the conclusions?

  • Increasing FET_BUF_SIZE at the client side can dramatically improve the speed of remote queries.
  • Maximum Send-Q sizes, as measured by netstat, increase in discrete steps as FET_BUF_SIZE is increased.
  • A larger Send-Q allows more data to be cached and reduces waits seen in Informix.
  • To see any improvement at all FET_BUF_SIZE must be increased to at least 6000 (approximate value).
  • Around boundaries between maximum Send-Q sizes there appears to be a cross-over region where maximum send queue sizes from two adjacent values are seen from one second to the next.
  • The maximum value allowed in 11.70 at least is 2147483647 and not 2147483648, as indicated in the documentation.
  • The maximum 11.50 value of 32767 produced a run time nearly 4x slower than an optimised value for 11.70+.
  • Other testing I did, not documented here, shows that the results are uniform across JDBC and ESQL/C applications.

Note: all user names, IP addresses and port numbers used in this post have been altered.


Informix or Client SDK install: No Java virtual machine could be found

This is something of a note to self. For some time it has been the case that you may see this message when attempting an Informix server or Client SDK install if there is a problem starting the installer’s Java runtime environment:

# LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64:/lib64 ./ids_install
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...
No Java virtual machine could be found from your PATH
environment variable. You must install a VM prior to
running this program.

To add insult to injury, when this condition occurs the installer exits with status code zero, suggesting all is ok.

Now the obvious thing to do seems to be to install a Java package, wondering whether OpenJDK will suffice or the official Oracle version is needed. This is never the answer! The Informix installer comes bundled with its own Java run time environment (JRE) which gets extracted into /tmp/install.dir.X and your challenge is in fact to find out why it isn’t working as it should.

You can see in my attempt at installing the product I have already prefaced the command with LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64:/lib64. This is already a known way of fixing some installation problems. (For a 32-bit version you’d simply use /usr/lib:/lib.)

Everyone’s friend, strace, is a great way to start investigating this problem. In amongst the output I find this:

faccessat(AT_FDCWD, "/tmp/install.dir.12813/Linux/resource/jre/jre/bin/java", X_OK) = -1 EACCES (Permission denied)

So why is this? I am logged in as root so I ought not be running into permission denied issues.

The core problem here is the way /tmp, which is a separate filesystem on my machine, is mounted. From the mount command output:

tmpfs on /tmp type tmpfs (rw,nosuid,nodev,noexec,relatime)

The key part here is the noexec flag which is a security feature preventing execution of binary files residing on this filesystem.

The best way to fix this is to set the environment variable IATEMPDIR to a directory on a filesystem where execution is allowed. I usually use /root for this purpose. And success:

# export IATEMPDIR=/root
# LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/lib64:/lib64 ./ids_install
Preparing to install...
Extracting the JRE from the installer archive...
Unpacking the JRE...
Extracting the installation resources from the installer archive...
Configuring the installer for this system's environment...

Launching installer...

Preparing CONSOLE Mode Installation...

While the above should be sufficient I have seen the server installer still fail to work even with this environment variable set as some files may still be placed in /tmp. In this situation you can temporarily remove the security restriction with:

mount -o remount,rw,nosuid,nodev,relatime,exec /tmp

and switch it back on again with:

mount -o remount,rw,nosuid,nodev,relatime,noexec /tmp

I suggest before running the above you check the existing mount options for your /tmp filesystem.
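
For example:

# show how /tmp is currently mounted
mount | grep ' /tmp '
# or, on most modern distributions
findmnt /tmp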


Intermittent “CSM: authentication error” with JDBC

This article will only concern you if:

  • you connect using JDBC.
  • you use simple password encryption.

Simple password encryption just does one thing: it encrypts the password sent to the database server in transit preventing it from being obtainable by network packet sniffing.

At the client end configuring simple password encryption can be done simply by adding
";SECURITY=PASSWORD"
to your JDBC connection string.

At the server end set up a DBSERVERALIAS and add
csm=(SPWDCSM)
to the fifth field in sqlhosts and set environment variable INFORMIXCONCSMCFG before starting the instance to point to a file containing something like:
SPWDCSM("/opt/informix/lib/csm/libixspw.so", "", "p=1")
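
Putting the server-side pieces together, a minimal sketch might look like the following; the alias name, host, port and file paths are placeholders and should be adapted to your environment:

# add an alias to sqlhosts whose fifth (options) field loads the SPWDCSM module
cat >> $INFORMIXDIR/etc/sqlhosts <<'EOF'
my_server_spw onsoctcp dbhost 9089 csm=(SPWDCSM)
EOF

# create the CSM configuration file
cat > /opt/informix/etc/concsm.cfg <<'EOF'
SPWDCSM("/opt/informix/lib/csm/libixspw.so", "", "p=1")
EOF

# point the engine at it before starting the instance
export INFORMIXCONCSMCFG=/opt/informix/etc/concsm.cfg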

This is covered in more detail elsewhere and I haven’t covered using CSDK, but for JDBC connections that’s all there is to know.

Unfortunately there is a bug in JDBC 3.70.JC8W1 and JDBC 4.10.JC7 and below where every 100th connection attempt or so will fail randomly with this stack:

java.sql.SQLException: CSM: authentication error.
at com.informix.jdbc.IfxSqliConnect.<init>(IfxSqliConnect.java:1337)
at sun.reflect.GeneratedConstructorAccessor3.newInstance(Unknown Source)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at com.informix.jdbc.IfxDriver.connect(IfxDriver.java:243)
at java.sql.DriverManager.getConnection(DriverManager.java:664)
at java.sql.DriverManager.getConnection(DriverManager.java:270)
at Connect.main(Connect.java:30)
Caused by: com.informix.asf.IfxASFRemoteException:
at com.informix.asf.Connection.recvBindResponse(Connection.java:1363)
at com.informix.asf.Connection.establishConnection(Connection.java:1619)
at com.informix.asf.Connection.<init>(Connection.java:392)
at com.informix.jdbc.IfxSqliConnect.<init>(IfxSqliConnect.java:1232)
... 7 more

You can see if you’re vulnerable by compiling this app and running it until it fails or you’re reasonably confident you don’t have a problem:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.SQLException;

public class Connect
{
    public static void main(String[] args)
    {
        Connection conn = null;
        String url = "jdbc:informix-sqli://hostname:port/dbname:INFORMIXSERVER=informixserver;user=user;password=password;SECURITY=PASSWORD";
        System.out.println(url);

        try
        {
            Class.forName("com.informix.jdbc.IfxDriver");
        }
        catch (Exception e)
        {
            System.out.println("FAILED to load Informix JDBC driver.");
            e.printStackTrace();
            return;
        }

        // Connect and disconnect in a loop until interrupted, counting attempts
        int i = 0;
        while (true) {
            i++;
            try
            {
                conn = DriverManager.getConnection(url);
                System.out.println("Connected " + i);
            }
            catch (SQLException e)
            {
                System.out.println("FAILED to connect! " + e);
                e.printStackTrace();
            }
            if (conn != null) {
                try {
                    conn.close();
                }
                catch (SQLException e) {
                    System.out.println("FAILED to disconnect! " + e);
                    e.printStackTrace();
                }
                conn = null;
            }
        }
    }
}

If your application handles failed connections and retries automatically you might not have noticed this error, or perhaps it was lost in the noise, but for simpler applications it can be a pain.

Fortunately this is fixed in JDBC 4.10.JC8 and the fix works with 11.70 and 12.10 versions of the servers.

Interestingly the JDBC release notes for 4.10.JC8 are coy about this, showing just one fix.


Buffer waits

Is it really a year since I last wrote a blog post? It does mean I have had time to think of a few topics to write about.

Recently a desk visitor came to me at work about a performance issue. From what he said I got the impression that he thought DBAs spend most of their time tuning SQL queries, something I spend little time on during a typical day, perhaps one of the advantages of working on a mature system.

Perhaps though he was kind of right. Many of the things I do are around making sure queries run reliably, consistently and in a scalable manner; I just don’t necessarily do this by looking at explain plans. Don’t get me wrong: these are very important; it’s just that once they are as good as they can be it doesn’t mean there aren’t other ways of finding bottlenecks or contention and tuning them out. For example, when I was a more junior DBA I used to concern myself with buffer cache hits and, once I knew a little more, buffer turnover ratios, reducing I/O by allocating more memory as Moore’s Law provided rapid improvements in server CPU speed and memory size.

In the Oracle world DBAs have moved away from this measure and use the Oracle wait interface, which allows you to see what operations the engine is waiting on. We can do this (slightly differently) in Informix too and it’s very useful. The simplest overview of waits is to look at position one of the flags column from “onstat -u”. The Informix documentation states:

flags
Provides the status of the session.
The flag codes for position 1:
B Waiting for a buffer
C Waiting for a checkpoint
G Waiting for a write of the logical-log buffer
L Waiting for a lock
S Waiting for mutex
T Waiting for a transaction
Y Waiting for condition
X Waiting for a transaction cleanup (rollback)

Most sessions will probably show Y which usually means they are waiting on TCP transit or are idle, waiting for the application or user’s session to do something.
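
If you want a quick impression of how waits are distributed across all sessions, a rough one-liner like this can help; it assumes the standard onstat -u layout, so adjust the filtering if your output differs:

# count sessions by the first character of the flags column
onstat -u | awk 'NF >= 10 && $2 ~ /-/ { print substr($2, 1, 1) }' | sort | uniq -c | sort -rn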

As the manual says we can get more detail for buffers through onstat -b, -B and -X; for latches (mutexes) through onstat -s (also -g lmx and -g wmx); and for locks with onstat -k.

onstat -X is the most useful way to examine buffer waits but it is not as user-friendly as it could be, which is a shame because it often offers many clues to where problems lie:

Buffers (Access)
address owner flags pagenum memaddr nslots pgflgs scount waiter

Buffer pool page size: 2048
14700 modified, 16777216 total, 16777216 hash buckets, 2048 buffer size

Buffer pool page size: 4096
2443cd7e8 ffffffffffffffff 80 25:1523486 cce17b000 101 2801 0 0
267ca4628 0 0 47:1570054 105c3c5000 122 2890 1 0
53980 modified, 8388608 total, 8388608 hash buckets, 4096 buffer size

Buffer pool page size: 8192
59577 modified, 8388608 total, 8388608 hash buckets, 8192 buffer size

Buffer pool page size: 16384
3784a8188 ffffffffffffffff 80 162:18904760 4baadf4000 248 890 0 0
37854d188 ffffffffffffffff 80 162:24581408 4baeff4000 248 890 0 0
378ead5e8 ffffffffffffffff 80 124:25597240 4beb010000 187 2801 0 0
378f781a8 ffffffffffffffff 80 124:25597376 4bf0128000 710 890 0 0
3798d3388 ffffffffffffffff 80 124:25597176 4c2bf34000 710 890 0 595236d428
3799321a8 ffffffffffffffff 80 162:13196672 4c2e528000 248 890 0 624d39d668
37a353128 ffffffffffffffff 80 124:25597840 4c6f258000 197 801 0 0
37a4cefe8 ffffffffffffffff 80 168:32233760 4c78a50000 399 890 0 0
37c485d28 ffffffffffffffff 80 264:13942672 4d439d8000 319 890 0 0
37c5b45c8 ffffffffffffffff 80 162:24158848 4d4b2dc000 193 2801 0 0
37c80f368 ffffffffffffffff 80 168:33303832 4d5a400000 303 890 0 0
37caf6ce8 0 10 124:25597160 4d6cd70000 710 890 1 0
37ceaab28 ffffffffffffffff 80 166:8227296 4d84898000 332 890 0 0
37ceba8e8 ffffffffffffffff 80 124:25597648 4d84ef0000 710 890 0 0
37d70f4a8 ffffffffffffffff 80 124:25597208 4dba408000 193 801 0 0
37d891088 ffffffffffffffff 80 162:26376432 4dc3e54000 248 890 0 0
37dc9abe8 58cc3c7eb8 80 144:18435488 4dddbd0000 193 2801 0 0
87962 modified, 13762560 total, 16777216 hash buckets, 16384 buffer size

The key problem here from a usability point of view is that it is tedious to convert the chunk:pagenum format into an actual database object.

There is a similar problem with part numbers when deciphering output from, for example, onstat -k; this can be solved by downloading and compiling the ESQL/C utility partn from the IIUG software repository.

Loosely based on that here is my Perl script, chunkpg, which can provide friendly names for objects in chunk:pagenum format:

#!/usr/bin/perl

# Decipher chunk and page numbers

use strict;
use warnings;

&main;

sub main {

my $key = &check_params;

my $chunkno = 0;
my $nextinfo = 0;
my $pagesize = 0;
my $count = 0;
my $lastdbs = '';
my $syspagesize = 2; # change to 4 kb if required

my (%objs, %start, %end, %c, %ps, %dbs);

open (ONCHECKPE, "oncheck -pe |");
while (<ONCHECKPE>) {
chomp;
if ($nextinfo == 1) {
$nextinfo = 0;
my ($blank, $path, $size, $used, $free);
($blank, $chunkno, $path, $pagesize, $size, $used, $free) = split / +/;
# print "$chunkno: $pagesize kb\n";
$count = 0;
$c{$chunkno} = 0;
$ps{$chunkno} = $pagesize;
}
elsif ($_ eq ' Chunk Pathname Pagesize(k) Size(p) Used(p) Free(p)') {
$nextinfo = 1;
}
elsif ($_ =~ /^DBspace Usage Report: /) {
my @parts = split / +/;
$lastdbs = $parts[3];
$dbs{$chunkno} = $lastdbs;
}
elsif ($_ =~ /^ \w+:\'\w+\'\.\w+ +/) {
$count++;
my ($blank, $obj, $offset, $size) = split / +/;
# printf ("%s: %d -> %d\n", $obj, $offset*2, $offset*2+$size*2);
$objs{$chunkno}{$count} = $obj;
$start{$chunkno}{$count} = $offset;
$end{$chunkno}{$count} = $size + $offset;
$c{$chunkno}++;
if (!$dbs{$chunkno}) {
$dbs{$chunkno} = $lastdbs;
}
}
}
close (ONCHECKPE);

while (<STDIN>) {
chomp;
print "$_ ";
$_ =~ s/^ +//;
my @vals = split / +/;
if ($vals[$key-1] && $vals[$key-1] =~ /\d+\:\d+/) {
my ($chunk, $page) = split /\:/, $vals[$key-1];
for (my $i = 1; $i <= $c{$chunk}; $i++) {
if ($page/($ps{$chunk}/$syspagesize) >= $start{$chunk}{$i} && $end{$chunk}{$i} && $page/($ps{$chunk}/$syspagesize) <= $end{$chunk}{$i}) {
print $objs{$chunk}{$i}.','.$dbs{$chunk};
last;
}
}
}
print "\n";
}
}

sub check_params () {

die print "INFORMIXDIR is not set.\n" unless $ENV{'INFORMIXDIR'};
die print "INFORMIXDIR is not set to a valid directory (\'$ENV{'INFORMIXDIR'}\').\n" unless -d $ENV{'INFORMIXDIR'};
die print "INFORMIXSERVER is not set.\n" unless $ENV{'INFORMIXSERVER'};

if ($ARGV[0] && $ARGV[0] eq '-k') {
if ($ARGV[1] && $ARGV[1] !~ /\D/) {
return ($ARGV[1]);
}
else {
print "Invalid key number: $ARGV[1]\n";
exit 1;
}
}
elsif ($ARGV[0]) {
print "Invalid parameter: $ARGV[1]\n";
exit 1;
}
return (1);
}

You can then run as follows: onstat -X | chunkpg -k 4

If you were to run onstat -X repeatedly, perhaps at different times of day, you would begin to build a picture of where buffer waits are occurring.

Having identified buffer waits, what can be done to reduce them?

Ultimately it is going to come back to your database design, including its storage schema, the queries you run and maybe onconfig parameters.

Buffer waits on tables can be reduced using partitioning. Fragmentation by round-robin is effective for improving the rate of concurrent inserts (although it doesn’t facilitate fragment elimination in queries). Very large tables may require some form of partitioning to avoid reaching the 16.7m (2^24) page limit but even smaller tables with a large number of inserts and/or deletes can benefit.

It’s worth noting here that buffer waits occur in server RAM and so it’s not obvious at all that the storage schema should come into it. After all, it only directly affects what happens on disk, right? Not so: I can show that different storage schemas perform differently by running a concurrent insert test on a machine tuned to only flush data to disk at checkpoints. If a table has 10-way round-robin partitioning then it has ten different part numbers and is for many purposes ten different tables. For inserts we are always appending to the end of the table and so there will be contention between sessions on the last buffered page. Spreading this across ten different partitions reduces it.
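
As a concrete illustration, converting an insert-heavy table to round-robin fragmentation might look something like this; the table and dbspace names are placeholders:

dbaccess mydb - <<'EOF'
ALTER FRAGMENT ON TABLE busytab INIT
FRAGMENT BY ROUND ROBIN IN dbs1, dbs2, dbs3, dbs4;
EOF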

Indices can also see a large number of buffer waits. In one stark example I found a large number of buffer waits were occurring on a large single-column index where every value in the column was null (nulls are indexed in Informix, unlike some other RDBMSs) and so there was a single leaf node pointing to all rows. Every insert and delete needed to modify this. A less extreme version of this might be seen where there are a limited number of values a field can take. In this case dropping the index or combining the index column with another to vastly increase the number of nodes would help throughput.

Would index fragmentation/partitioning help? It is probably less useful in general, because there is no 16.7m page limit for indices since version 11.70 and, in the case of fragmentation by expression, the same nodes could be equally congested.

In conclusion I think this method is a useful addition to your performance tuning armoury and by reducing contention you can improve the scalability of your system and increase throughput.


Zone reclaim mode

Non-uniform memory access or NUMA is not a new concept but high end multiprocessor Intel-based servers are increasingly configured with this architecture, bringing it more to the mainstream. Put simply NUMA means that instead of all processors accessing your main system memory through a common bus, each processor is allocated an even share of the memory that it can address directly. If a processor needs to access memory controlled by another processor it can do so through that other processor.

Linux kernels from v2.5 onwards are aware of any NUMA architecture and it can be displayed using numactl -H or numactl --hardware:

node distances:
node   0   1   2   3
  0:  10  21  21  21
  1:  21  10  21  21
  2:  21  21  10  21
  3:  21  21  21  10

The above is from a four socket server. It shows that fetching from local memory is weighted at ’10’ and from memory controlled by other processors ’21’. I strongly suspect these weightings are hard coded.

numactl -H also shows information about how the memory is split between processors. The term ‘node’ is used:

available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 32 33 34 35 36 37 38 39
node 0 size: 65418 MB
node 0 free: 310 MB
node 1 cpus: 8 9 10 11 12 13 14 15 40 41 42 43 44 45 46 47
node 1 size: 65536 MB
node 1 free: 41 MB
node 2 cpus: 16 17 18 19 20 21 22 23 48 49 50 51 52 53 54 55
node 2 size: 65536 MB
node 2 free: 82 MB
node 3 cpus: 24 25 26 27 28 29 30 31 56 57 58 59 60 61 62 63
node 3 size: 65536 MB
node 3 free: 43 MB

What the above shows is that the free memory available to each node varies. If a process running on node 3, in our example, needs to allocate memory and it needs more than 43 Mb, it can either:

  • Use memory assigned to another node, for example node 0. This means the memory access will not be local.
  • Reclaim memory from node 3’s local memory by evicting other pages from memory.

The kernel switch vm.zone_reclaim_mode controls which behaviour is used. If set to 1 it will prefer to evict other pages from memory.

This is explained in a great more detail in this article by Christoph Lameter.

How is this parameter set on your system? You can check by running cat /proc/sys/vm/zone_reclaim_mode

If it’s set to 1 on your Informix system you should definitely read on. You’ll be glad to hear this parameter can be changed dynamically.
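
For example:

# check the current value
cat /proc/sys/vm/zone_reclaim_mode

# switch zone reclaim off immediately
sysctl -w vm.zone_reclaim_mode=0

# to persist across reboots add this line to /etc/sysctl.conf (or a file under /etc/sysctl.d/):
#   vm.zone_reclaim_mode = 0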

In the latest kernels (2014 onwards) this commit means that the parameter will never be set on your system automatically, but if you’re running an enterprise Linux you could be on a kernel version like 2.6.32 (RHEL 6) where this can occur: although patched, the base version of this kernel dates from 2009.

I am not sure of the exact criteria that determine when older Linux kernels will switch on this feature at boot up. I think you need a modern four (or more) processor server with a NUMA architecture but there may be other requirements.

It’s interesting to read the slightly repetitious kernel commit log:

When it was introduced, zone_reclaim_mode made sense as NUMA distances punished and workloads were generally partitioned to fit into a NUMA node. NUMA machines are now common but few of the workloads are NUMA-aware and it’s routine to see major performance degradation due to zone_reclaim_mode being enabled but relatively few can identify the problem.

Those that require zone_reclaim_mode are likely to be able to detect when it needs to be enabled and tune appropriately so lets have a sensible default for the bulk of users.

This patch (of 2):

zone_reclaim_mode causes processes to prefer reclaiming memory from local node instead of spilling over to other nodes. This made sense initially when NUMA machines were almost exclusively HPC and the workload was partitioned into nodes. The NUMA penalties were sufficiently high to justify reclaiming the memory. On current machines and workloads it is often the case that zone_reclaim_mode destroys performance but not all users know how to detect this. Favour the common case and disable it by default. Users that are sophisticated enough to know they need zone_reclaim_mode will detect it.

Hopefully now the relevance to Informix is becoming a little clearer. Certainly there has been much complaining in the PostgreSQL community about this parameter. Another frustrated blog post describes some of the massive I/O latency problems it can cause on your system even when under no obvious memory pressure.

On our Informix system, which uses huge pages, we have experienced long disruptive checkpoints as a result of zone reclaiming. As huge pages are not swappable, it’s likely to be our monitoring and other non-Informix processes provoking the zone reclaims.

The long checkpoint behaviour can be summarised as:

  • A checkpoint is triggered by CKPTINTVL.
  • Informix instructs all threads to finish what they are doing and goes into state CKPT REQ.
  • One or more threads may be in critical section and must continue to the end of that section before they can stop.
  • A zone reclaim is occurring, I/O throughput dramatically decreases and such a thread takes many seconds to come out of its critical section.
  • All active threads wait (state C in the first column of onstat -u).
  • Eventually the operation completes, the checkpoint actually occurs very quickly and processing continues.

This behaviour can occur in later versions of the engine with non-blocking checkpoints.

If you have the mon_checkpoint sysadmin task enabled (I strongly recommend this), information about your checkpoints will be written to sysadmin:mon_checkpoint. (Otherwise you only retain information about the last twenty checkpoints visible through onstat -g ckp.) A tell tale sign is a large crit_time, nearly all of the checkpoint duration, and a much smaller flush_time.

You can get further evidence of whether a zone reclaim might be occurring at the same time by looking at the number of pages scanned per second in the output from sar -B. (sar is a very sophisticated monitoring tool these days with views into many aspects of the operating system.)

One test you can try (on a test server) is LinkedIn Engineering’s GraphDB simulator. It’s a C++ program that mimics the behaviour of GraphDB and is designed to provoke zone reclaim behaviour from the Linux kernel if it is switched on.

On our test system we can leave it running for hours without zone reclaim enabled and monitor it through sar -B.

10:30:55 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
10:31:00 AM 951.42 20993.52 8415.59 0.81 1351.62 0.00 0.00 0.00 0.00
10:31:05 AM 294.97 20930.38 8764.59 2.21 3286.92 0.00 0.00 0.00 0.00
10:31:10 AM 170.28 24627.31 4939.16 1.61 1859.64 32276.31 16282.73 565.06 1.16
10:31:15 AM 193.12 77519.03 5379.96 1.42 53762.75 4495.55 0.00 93.72 2.08
10:31:20 AM 240.24 88966.60 6875.45 1.81 1483.30 0.00 0.00 0.00 0.00
10:31:25 AM 183.50 277.67 8113.28 1.61 4045.47 0.00 0.00 0.00 0.00
10:31:30 AM 202.41 280.08 11409.46 2.82 3114.29 0.00 0.00 0.00 0.00
10:31:35 AM 243.37 255.42 8815.46 2.21 1905.62 0.00 0.00 0.00 0.00
10:31:40 AM 92.37 194.38 5890.96 1.00 1059.84 0.00 0.00 0.00 0.00
10:31:45 AM 283.70 313.08 12742.05 2.21 5263.38 0.00 0.00 0.00 0.00
10:31:50 AM 414.83 11179.96 7938.48 2.00 45495.59 39413.23 0.00 784.17 1.99
10:31:55 AM 198.79 31014.95 9007.47 2.63 2374.95 0.00 0.00 0.00 0.00
10:32:00 AM 235.74 25065.86 10159.84 2.61 1866.47 0.00 0.00 0.00 0.00
10:32:05 AM 202.01 37361.45 11010.24 2.01 3250.00 0.00 0.00 0.00 0.00
10:32:10 AM 256.91 5640.48 7596.59 3.01 3638.08 0.00 0.00 0.00 0.00
10:32:15 AM 246.89 20823.65 5411.42 1.80 1704.21 0.00 0.00 0.00 0.00
10:32:20 AM 114.46 41366.27 6625.30 0.80 1352.41 0.00 0.00 0.00 0.00
10:32:25 AM 188.76 20948.19 25422.09 1.81 8850.20 0.00 0.00 0.00 0.00
10:32:30 AM 177.15 29934.67 9358.52 1.60 54522.65 42292.59 4315.83 1071.14 2.30
10:32:35 AM 237.83 9914.69 9167.40 2.21 2483.50 0.00 0.00 0.00 0.00
10:32:40 AM 207.71 81296.55 8555.17 2.64 2631.85 0.00 0.00 0.00 0.00

The test itself reports latencies over 100 ms and in this mode we occasionally see I/O operations of around 200 ms reported.

We can change the kernel parameter dynamically while the test is running and see the behaviour change almost immediately:

10:35:15 AM pgpgin/s pgpgout/s fault/s majflt/s pgfree/s pgscank/s pgscand/s pgsteal/s %vmeff
10:35:20 AM 365.06 15634.14 6300.40 3.41 3841.57 0.00 15241.77 2644.18 17.35
10:35:25 AM 333.06 5519.35 9262.10 3.43 8639.31 0.00 92890.32 4528.63 4.88
10:35:30 AM 1158.15 20868.81 10292.96 10.06 12215.09 0.00 255137.22 7858.55 3.08
10:35:35 AM 781.12 41385.54 7742.77 5.02 5841.16 0.00 34506.02 3422.89 9.92
10:35:40 AM 518.10 8764.47 2524.85 3.25 2906.59 0.00 1703326.11 2016.93 0.12
10:35:52 AM 2576.57 39524.85 13449.49 11.31 10332.12 0.00 1153144.24 4256.77 0.37
10:35:57 AM 2707.22 40786.31 7962.55 8.17 9893.92 0.00 4246095.82 6729.66 0.16
10:36:02 AM 1600.75 1889.37 2551.12 4.34 629.04 0.00 3595585.63 253.52 0.01
10:36:16 AM 756.94 39362.58 2063.18 8.25 3785.71 0.00 4238635.01 1814.29 0.04
10:36:21 AM 990.94 9277.31 1584.26 6.24 1692.88 0.00 6222810.91 833.73 0.01
10:36:52 AM 69.73 0.00 116.91 0.96 271.29 0.00 2056531.75 7.20 0.00

The number of pages scanned per second escalates.

Meanwhile I/O latencies reported by the test program escalate up to 36000 ms. We actually have to kill the test program within 30 seconds of changing the kernel parameter to avoid the system becoming so unresponsive it cannot maintain sshd connections.

In our real world Informix example we are not using the page cache anything like as aggressively and when the problem occurs I/O demands reduce as we get down to a single thread in critical section. Thus we don’t see pages scanned at the rate in the test, just a clear increase.

It’s worth mentioning that new NUMA capabilities have been added to the Linux kernel in version 3.8 (and later in 3.13) so RHEL 7 users might see slightly different behaviour.