Monitoring Informix with the Elastic Stack
Posted: 8 December, 2018 | Filed under: Monitoring | Tags: elasticsearch, elk, informix, kibana, logstash
Introduction
If you’re not familiar with the Elastic Stack, it is a suite of products for ingesting data or logs, and for searching, analysing and visualising them. There is a good overview over at the Elastic web site of how it can be put together. I say “can” because the stack is very flexible and, for example, you can send JSON documents to Elasticsearch via a REST API, rather than use Filebeat or Logstash.
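For example, a single JSON document can be indexed with nothing more than curl; a minimal sketch, where the index name myindex is just a placeholder:
curl -X POST "http://localhost:9200/myindex/_doc" \
  -H 'Content-Type: application/json' \
  -d '{"message": "an example log line"}'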
This blog post is mostly concerned with ingesting the Informix online log with Filebeat, recognising certain types of log line that can occur and tagging the file using rules set up in Logstash, before sending it to Elasticsearch for storage and indexing. Finally Kibana can be used to visualise the data stored in Elasticsearch.
It’s easy to see how this could be scaled up to provide a single place to visualise logs from multiple instances and it would be fairly trivial to add in other logs too, like the Informix bar logs and logs from the operating system.
At IIUG 2018 in Arlington, VA I presented a talk entitled DevOps for DBAs, which demonstrated the Docker set up described below, but at the time I hadn’t documented the full set up. Here it is!
Practical demonstration with Docker containers
Overview
This demonstration sets up two containers: one running Informix Developer Edition and Filebeat to collect and ship logs:
- Informix 12.10.FC12W1DE, listening on port 9088/tcp for onsoctcp connections.
- Filebeat 6.5.2.
and the other running the Elastic Stack components as follows:
- Logstash 6.5.2, listening on port 5044/tcp.
- Elasticsearch 6.5.2, listening on port 9200/tcp.
- Kibana 6.5.2, listening on port 5601/tcp.
Access to Kibana is via your favourite web browser running on your desktop. Nginx will be listening on port 80 so you can simply access http://localhost/.
For a secure production implementation it’s recommended that you use Nginx with HTTPS as a reverse proxy for the Kibana web service. We’ll be using Nginx in this demonstration, rather than connecting to Kibana directly, but we won’t be configuring SSL; there are plenty of online guides on how to do this. Communication between Filebeat and Logstash should also be encrypted, but this blog post doesn’t cover that either.
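For reference, a minimal sketch of the SSL part of such a reverse proxy, with a placeholder server name and certificate paths (not used in this demonstration):
server {
    listen 443 ssl;
    server_name kibana.example.com;                  # placeholder
    ssl_certificate     /etc/nginx/ssl/kibana.crt;   # placeholder
    ssl_certificate_key /etc/nginx/ssl/kibana.key;   # placeholder
    location / {
        proxy_pass http://localhost:5601;
    }
}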
The above versions are current at the time of writing (December 2018). The Elastic Stack moves very quickly so these will probably not be the latest versions by the time you read this. The idea of this blog post is that you should be able to just copy and paste the commands and end up with a working system, but don’t be surprised if things don’t work perfectly if your versions don’t match the above. For example, between beginning this blog post and finishing it, version 6.5.x was released with improved default security settings: services now listen only on the loopback interface unless reconfigured.
Running the whole Elastic Stack in Docker plus Informix does require a reasonable amount of memory and I’d suggest allocating a minimum of 2.5 GB to the Docker Engine.
Docker network
To provide name resolution between the containers we are going to start by creating a docker network:
docker network create --driver bridge my_informix_elk_stack
Elastic Stack installation
Docker container
We’ll start by setting up the Elastic Stack Docker container, which will be based on a (minimal) Debian installation. In a terminal run:
docker pull debian
docker run -it --name elasticstack_monitoring -p 80:80 -p 5044:5044 --hostname elasticstack --net my_informix_elk_stack debian
Your terminal should now be inside the Docker container and logged in as root.
To avoid some issues with debconf when installing packages run:
echo 'debconf debconf/frontend select Noninteractive' | debconf-set-selections
Install Java
Run these commands to install the software-properties-common package and then OpenJDK 8, which is required by Elasticsearch and Logstash. Java 9 should be fine too.
The Debian Docker image does not come with many packages pre-installed so I am also going to install vim for editing files later plus a few other essentials; you may prefer nano or another editor.
apt-get update
apt-get install software-properties-common gnupg vim wget apt-transport-https openjdk-8-jre
The Debian Docker container is a basic installation, so this short list of packages has hundreds of dependencies.
Check the Java version:
# java -version
openjdk version "1.8.0_181"
OpenJDK Runtime Environment (build 1.8.0_181-8u181-b13-2~deb9u1-b13)
OpenJDK 64-Bit Server VM (build 25.181-b13, mixed mode)
Install Elasticsearch
The Elasticsearch installation is straightforward and follows standard Linux methods for setting up and installing from a third-party software repository.
First we install the repository’s key:
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | apt-key add -
Then add the repository:
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
The apt-transport-https package we installed earlier lets apt fetch from this HTTPS repository, so we can now update and install:
apt-get update
apt-get install elasticsearch
Elasticsearch will work right out of the box, which is fine for the purposes of this demonstration, and (after we start the service) will listen on localhost only: port 9200 for the REST API and port 9300 for node communication.
Install Kibana
This is installed from the Elastic Stack repository added above:
apt-get install kibana
Again Kibana doesn’t require any reconfiguration for the purposes of this demonstration and will listen on localhost only on port 5601.
Start Kibana by running:
service kibana start
Now that Kibana is running, start Elasticsearch by running:
service elasticsearch start
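Elasticsearch can take a little while to start accepting connections. You can check it is up with curl; the root endpoint returns the cluster name and version as JSON:
curl http://localhost:9200/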
It’s worth noting at this point that modern Debian distributions use systemd, but this doesn’t work in non-privileged Docker containers. For reference, the systemd equivalents are:
systemctl daemon-reload
systemctl enable kibana
systemctl enable elasticsearch
systemctl start kibana
systemctl start elasticsearch
These commands also ensure the services start on boot.
As Kibana is only listening on localhost, and is therefore unreachable from an external web browser, we will set up Nginx as a reverse proxy. This is a more secure configuration, recommended for any production implementation, because only Nginx is directly exposed to the internet and not Kibana.
Install Nginx as a reverse proxy
Start by installing Nginx:
apt-get install nginx
Edit the file /etc/nginx/sites-available/default and in the location / section add the line:
proxy_pass http://localhost:5601;
Comment out the line beginning with try_files.
It should look something like this:
location / {
        # First attempt to serve request as file, then
        # as directory, then fall back to displaying a 404.
        proxy_pass http://localhost:5601;
        #try_files $uri $uri/ =404;
}
Don’t forget the semi-colon!
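You can check the edited configuration for syntax errors before starting the service:
nginx -t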
Start nginx:
service nginx start
Install Logstash
An installation procedure that by now should look familiar:
apt-get install logstash
Our Logstash configuration will be in two parts:
- A standard out of the box configuration for receiving files from Filebeat and sending them to Elasticsearch: for this, copy /etc/logstash/logstash-sample.conf to /etc/logstash/conf.d/logstash.conf (see the command after this list).
- A custom config file, /etc/logstash/conf.d/informix.conf, for parsing the Informix online log.
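The first of these is a one-liner inside the Elastic Stack container:
cp /etc/logstash/logstash-sample.conf /etc/logstash/conf.d/logstash.conf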
I intend to update and improve the Logstash config for filtering the Informix online log; it’s available on my GitHub page at https://github.com/skybet/informix-helpers/blob/master/logstash/informix.conf. Download it locally and then copy it to your Docker container as follows:
docker cp informix.conf elasticstack_monitoring:/etc/logstash/conf.d/informix.conf
The Informix config file requires that Filebeat tags the file with [fields][informix] = true; this condition is trivial to remove if you wish.
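The full rules are in the GitHub file above; as a simplified sketch of the idea only, the filter gates on that field and then pattern-matches the log line (this grok pattern is illustrative, not the real config):
filter {
    if [fields][informix] == true {
        grok {
            # illustrative: extract the MSG_DATE timestamp prefix
            match => { "message" => "^%{DATE_US:ifx_date} %{TIME:ifx_time} %{GREEDYDATA:ifx_message}" }
        }
    }
}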
Check the Logstash configuration with:
/usr/share/logstash/bin/logstash -t --path.settings /etc/logstash
Finally start Logstash with:
/usr/share/logstash/bin/logstash --path.settings /etc/logstash
You could also use systemd to do this.
Informix installation
Informix Developer Edition Docker container
Now we are going to set up the Informix container to monitor. On your workstation in another terminal run:
docker pull ibmcom/informix-developer-database
docker run -it --name iif_developer_edition --privileged -p 9088:9088 -p 9089:9089 -p 27017:27017 -p 27018:27018 -p 27883:27883 --hostname ifxserver --net my_informix_elk_stack -e LICENSE=accept ibmcom/informix-developer-database:latest
It’s worth noting that if you exit the shell the Informix DE Docker container will stop. You can start it again with:
docker start iif_developer_edition -i
This latest version, containing 12.10.FC12W1, doesn’t return you to the prompt after the engine starts, so you’ll need to open an interactive shell in the container in another terminal window as follows:
docker exec -it iif_developer_edition /bin/bash
Now that both Docker containers are running, you should be able to test name resolution and connectivity both ways with ping.
From the Informix container:
informix@ifxserver:/$ ping elasticstack_monitoring
From the Elastic Stack container:
root@elasticstack:/# ping iif_developer_edition
These names belong to the containers and are not necessarily their host names.
While you’re still logged in as user informix, set MSG_DATE to 1, which is required for my Logstash configuration:
onmode -wf MSG_DATE=1
This means (nearly) all online log messages will be prefixed with the date (MM/DD/YY format) and time (HH:MM:SS format).
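A checkpoint message, for example, will now look something like this (an illustrative line, not taken from a real log):
12/08/18 17:15:02 Checkpoint Completed:  duration was 0 seconds.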
You’ll be logged in as user informix which can sudo to root as follows:
sudo -i
Install Filebeat
In the Informix container it’s more of the same to install Filebeat:
apt-get update
apt-get install vim wget apt-transport-https
wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | apt-key add -
echo "deb https://artifacts.elastic.co/packages/6.x/apt stable main" | tee -a /etc/apt/sources.list.d/elastic-6.x.list
apt-get update
apt-get install filebeat
Filebeat’s configuration file is /etc/filebeat/filebeat.yml. If you’re not familiar with YAML files, note that correct indentation of two spaces per level is absolutely essential; YAML relies on this for structure instead of braces or brackets. Lines beginning with a dash indicate an array element.
onstat -c | grep MSGPATH reveals that the Informix online log file resides at /opt/ibm/data/logs/online.log and we want Filebeat to ingest this and pass it to Logstash running in the other container for parsing.
The Informix online log quite often contains wrapped lines, and these generally don’t start with a timestamp or date of any kind; the multiline settings below join such continuation lines onto the preceding entry.
Edit the file and add the following configuration directly under filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /opt/ibm/data/logs/online.log
  fields:
    informix: true
  multiline.pattern: '^[0-9][0-9]'
  multiline.negate: true
  multiline.match: after
Finally set up the output by commenting out (prefix with ‘#’) all lines of the output.elasticsearch section and uncommenting the output.logstash section. Set hosts in this section to ["elasticstack_monitoring:5044"].
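The output part of the file should end up looking something like this:
#output.elasticsearch:
#  hosts: ["localhost:9200"]

output.logstash:
  hosts: ["elasticstack_monitoring:5044"]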
Start filebeat by running:
service filebeat start
You should see the message: Config OK.
Using Kibana
You should now be able to log into Kibana at http://localhost/ and add an index pattern on filebeat*.
Then in the Discover section you should be able to see Informix log lines coming into the system and how they have been tagged.
If you wish to improve and test the parsing of particular log entries it is simple enough to create ones yourself in the Informix container like this:
echo "12/08/18 17:00:00 My Informix log line" >> /opt/ibm/data/logs/online.log
This particular blog post is going to end shortly. Kibana is a big subject and at this stage of my Elastic Stack DevOps journey I don’t feel qualified to write a blog post about it. Once I’ve progressed further I may do a follow-on post.
The tagging of events in the online log, like the checkpoint duration and its assignment to the variable informix.ckpt_duration, should allow you to easily do searches based on this and visualise them in dashboards.
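For example, assuming the field name above, a Lucene query like this in the Discover search bar would find slow checkpoints:
informix.ckpt_duration:>5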
Good luck!