Monitoring virtual segment usage and the CPU VP caches

A little while ago I was forced to look in detail at the memory usage of one of the production instances I look after. Specifically, the problem was that the instance was allocating extra virtual segments to itself (in segments of SHMADD size), which came as a surprise because memory usage on this instance was being monitored and the monitoring clearly showed that usage was normal and well below the initial segment size.

Well, almost everything said it was normal. The exception was onstat -g seg, which was correctly reporting the memory usage. Without wishing to put in an early spoiler, this is the only way of seeing how much memory your system is really using and how close your instance is to allocating an extra segment.

With the help of a careful analysis by IBM Informix support, we looked into memory usage on our system using:

onstat -u
onstat -g mem
onstat -g mgm
onstat -g ses

We also drilled down into individual sessions using:

onstat -g afr
onstat -g ffr
onstat -g ufr

We also looked into memory usage by onbar and SQLTRACE.

The result was a massive discrepancy between the total memory in use, as reported by these tools, and what onstat -g seg was reporting. And onstat -g seg appeared to be right because when it said the memory was all used up, the engine would allocate another segment.

So where was the memory going? Was it a leak or a bug? Well no, it was a new feature or, as we later learned, a bug fix.

In response to a performance issue reported by another customer, IBM had redesigned the VP memory cache code in 11.70.FC7W1, and it turned out that this was responsible for the use of up to 70% of the virtual memory segment on our system. This is an Enterprise Edition only feature, so I guess if you run any other edition you can stop reading at this point and just note that monitoring the output from onstat -g seg is a great idea if you want to predict segment additions.

The CPU memory cache allocates a dedicated area of the virtual memory segment to each CPU virtual processor, which it can use exclusively without contending with other virtual processors.

So a new and undocumented feature in a W1 release? Isn’t this a bit irregular? Well, no, said IBM: it was a fix for a customer issue. But it does change the behaviour quite dramatically from what is documented.

In 11.70.FC7 and before, the CPU memory cache size is controlled by the VP_MEMORY_CACHE_KB parameter: if you have, say, 8 CPU VPs, a fixed area of memory of 8 × VP_MEMORY_CACHE_KB is allocated to the CPU memory caches, and this is still how the manual says it works.
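
A worked example may help (with assumed values, not figures from any real instance): with 8 CPU VPs and VP_MEMORY_CACHE_KB set to 4096, the old behaviour gives a fixed total of 8 × 4096 KB = 32 MB for the caches. In shell terms:

#!/bin/bash
# Worked example of the pre-11.70.FC7W1 fixed sizing, with assumed values:
# substitute your own VPCLASS cpu count and VP_MEMORY_CACHE_KB setting.
num_cpu_vps=8
vp_memory_cache_kb=4096
echo "Fixed VP cache total: $(( num_cpu_vps * vp_memory_cache_kb )) kB"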

In 11.70.FC7W1 this parameter merely controls the initial size of these caches, which are then free to grow (and, I think, shrink below the initial size) as they see fit. To improve performance, memory can be allocated to these caches without any having to be freed first, and a separate thread deals with garbage collection (or drainage). (I hope I have explained this properly as I am not a programmer.) What is certain is that if your system is very busy the caches grow faster than the garbage collection clears them down. If your system is very busy for a sustained period, they can grow and allocate memory until you hit SHMTOTAL, if you’ve set it. (I don’t think hitting this limit would be very pretty because the instance would kick out sessions to free up memory, but this is not where the problem lies. Anyway, it would need testing and I haven’t done so.)

So can you monitor it? Yes you can, and I’d recommend that you do if you’re running 11.70.FC7W1 or above and have the VP cache switched on. This little code snippet does the job of calculating the total size in MB of all the VP caches for an instance:

#!/bin/bash
# Sum the per-VP block counts reported by onstat -g vpcache and print the
# total size of all the VP private caches in MB (blocks are 4 kB, hence
# the division by 256).
vpcacheblks=0
vpcache=( $(onstat -g vpcache | grep -E '^    [0-9]' | awk '{ print $2 }') )
for ((i=0; i<${#vpcache[*]}; i++)); do
    vpcacheblks=$(expr $vpcacheblks + ${vpcache[$i]})
done
vpcachemb=$(echo "scale=1; ${vpcacheblks} / 256" | bc)
echo $vpcachemb

You can also use the output from onstat -g vpcache to work out the number of missed drains using the formula (free - alloc) - drains.

If you have a busy system and particularly one with heavy peak periods, graphing the size of this over time is very interesting. Equally if your system is not that busy, you may see flat memory usage. It’s worth knowing which applies to you.
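
If you want to do this, a minimal collection loop might look like the sketch below, assuming the snippet above has been saved as, say, vpcache_size.sh; the resulting CSV can then be graphed with whatever tool you prefer:

#!/bin/bash
# Minimal collection loop: append a timestamped VP cache total (in MB)
# to a CSV every five minutes. vpcache_size.sh is the snippet shown earlier.
while true; do
    echo "$(date '+%Y-%m-%d %H:%M:%S'),$(/path/to/vpcache_size.sh)" >> /tmp/vpcache_size.csv
    sleep 300
done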

So if you’re reading this article and, having done a bit of investigation on your own system, have found that it does affect you, what can you do to mitigate the problem? Here are some options:

  • Downgrade to 11.70.FC7 or earlier.
  • Set VP_MEMORY_CACHE_KB to 0. You can actually do this dynamically using onmode -wm to clear out the cache and then reset it to its original value immediately afterwards (see the sketch after this list).
  • Increase SHMVIRTSIZE to accommodate the growth. Of course you need spare memory in your server to do this.
  • Set SHMTOTAL to remove the possibility of your server swapping. If you do, also look at setting up the low memory manager.
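
For the VP_MEMORY_CACHE_KB option, the sequence is something like the sketch below (4096 simply stands in for whatever your original value is):

# Clear out the VP caches by dynamically setting the parameter to zero...
onmode -wm VP_MEMORY_CACHE_KB=0
# ...then restore the original value straight afterwards (4096 is just an example)
onmode -wm VP_MEMORY_CACHE_KB=4096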

So what are IBM doing about the situation? An APAR has been raised as follows:

IC95684 AFTER THE FIX FOR IC90645 (11.70.FC7W1 AND NEWER) THE VP PRIVATE CACHES CAN GROW UNCONTROLLABLY ON BUSY SYSTEMS

This should result (in 11.70.FC8) in the documentation being updated and a choice of STATIC or DYNAMIC modes becoming available for the VP caches. DYNAMIC will be the same as the new behaviour and STATIC will be more similar to how things worked previously, where the VP caches were a fixed size. Note I said more similar, not the same. It will be interesting to look at how this behaves when it’s available.

There’s another issue touched on here, which I’ve logged via the new request for enhancement (RFE) site: onstat -g mem does not include the VP cache sizes in its output and is therefore not a complete view of all the memory pools in your instance. The RFE requests that it should be.

Parsing the online log

At a recent IBM roadshow there was a brief discussion where someone mentioned that they monitor their instances using a script and regular expressions to parse the online log. Using such an approach is quite straightforward, although it is somewhat tedious to code for all the possible combinations.

You can actually find most, if not all, of the possible messages by running:

strings $INFORMIXDIR/msg/en_us/0333/olmsglog.iem

However, there is an easier way. Nearly always, when a message is written to the online log the alarmprogram is called. If you’re familiar with the alarmprogram, you’ll know that when it’s called a severity value is passed through, which you can use as the basis for whether you’re alerted by email or not. This severity value is never shown in the online log, so already you can see that other approaches have more potential.
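
If you want a feel for how that severity can drive alerting, a minimal wrapper might look like the sketch below. It assumes the standard positional parameters the engine passes to the ALARMPROGRAM script (severity, class ID, class message, specific message) and a hypothetical mail address:

#!/bin/bash
# Minimal alarmprogram-style sketch: mail out anything of severity 3 or above.
# Positional parameters as passed by the engine to ALARMPROGRAM.
severity=$1
class_id=$2
class_msg=$3
specific_msg=$4

if [ "$severity" -ge 3 ]; then
    echo "[$class_id] $class_msg: $specific_msg" |
        mailx -s "Informix alarm, severity $severity" dba@example.com
fi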

There is now a scheduler task called post_alarm_message, which writes online log messages to a table called ph_alert in the sysadmin database.

> dbaccess sysadmin -

Database selected.

> select tk_name, tk_description, tk_execute, tk_enable from ph_task where tk_name='post_alarm_message';

tk_name post_alarm_message
tk_description System function to post alerts
tk_execute ph_dbs_alert
tk_enable t

1 row(s) retrieved.

I guess this is there primarily for OAT but it’s extremely useful for system monitoring because it preserves some information about the severity of the alerts and also makes them easy to query via SQL.

> select * from ph_alert where alert_time > current - 30 units minute;

id 9349
alert_task_id 18
alert_task_seq 4970
alert_type INFO
alert_color YELLOW
alert_time 2013-10-02 19:30:06
alert_state NEW
alert_state_chang+ 2013-10-02 19:30:06
alert_object_type ALARM
alert_object_name 23
alert_message Logical Log 14294 Complete, timestamp: 0x7ab0345c.
alert_action_dbs sysadmin
alert_action
alert_object_info 23001

id 9350
alert_task_id 18
alert_task_seq 4971
alert_type INFO
alert_color YELLOW
alert_time 2013-10-02 19:49:25
alert_state NEW
alert_state_chang+ 2013-10-02 19:49:25
alert_object_type ALARM
alert_object_name 23
alert_message Logical Log 14295 Complete, timestamp: 0x7ad5b988.
alert_action_dbs sysadmin
alert_action
alert_object_info 23001

2 row(s) retrieved.

Note the alarm types and colours. Personally I don’t set much store by the alert colour; I prefer to go by the alarm type where the possibilities are INFO, WARNING and ERROR. These are not the same as the alarmprogram severities which range from 1 to 5.

For monitoring purposes, a sensible query to look for alerts might be something like:

select alert_time, alert_color, alert_type, alert_object_type, alert_message from ph_alert where alert_type!='INFO' and alert_state='NEW' and alert_time > current - 7 units day order by alert_time;

Using dbaccess or an Informix API for your favourite scripting language, you can monitor for alerts quite easily.
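
For example, a small shell wrapper around dbaccess, reusing the query above, might look like this sketch:

#!/bin/bash
# Minimal sketch: run the alert query from this post via dbaccess.
# Whatever calls this (cron, a monitoring agent) can then act on the rows returned.
dbaccess sysadmin - <<'EOF'
select alert_time, alert_color, alert_type, alert_object_type, alert_message
from ph_alert
where alert_type != 'INFO'
  and alert_state = 'NEW'
  and alert_time > current - 7 units day
order by alert_time;
EOF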

You can mark the alerts as acknowledged using a query like the below:

update ph_alert set alert_state='ACKNOWLEDGED' where id=? and alert_state='NEW' and alert_type in ('WARNING', 'ERROR');

Hopefully I’ve demonstrated that this approach is a lot easier and probably better than a complex regex script. Maybe you shouldn’t throw away that script just yet, though: run both in parallel until you’re satisfied with the reliability of this approach. To make sure that the post_alarm_message task is working, I also check on a regular basis that there are entries in the ph_alert table, although on a quiet system there may not be any new entries for some time.

There’s also the console log to consider: you may want to monitor it as well, but it cannot be handled in this way.