

Showing posts from 2016

Logstash Grok Filter Example For Jboss Server Access Logs

Logstash is a great tool for centralizing application server logs. Here is an excerpt from a JBoss application server's access logs and the corresponding grok filter for them.
Jboss Access Logs:

Converting some fields' data types to numbers (integer and float in the example) is useful for later statistical calculations. Logstash Filter (Logstash version 2.3.4):
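The filter itself was not preserved in this excerpt. A sketch for a typical access log in the combined Apache format (the field names and the assumption that the log matches COMBINEDAPACHELOG are mine) could look like:

```conf
filter {
  if [type] == "jboss_access_log" {
    grok {
      match => { "message" => "%{COMBINEDAPACHELOG}" }
    }
    mutate {
      # store numeric fields as numbers for later statistics
      convert => { "response" => "integer"
                   "bytes"    => "integer" }
    }
  }
}
```

A response-time field, if your pattern captures one, would be converted to "float" the same way.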

When logs are sent to Elasticsearch, string fields are stored as analyzed fields by default.

Listing Volume Mount Information For Each Container in Docker

The docker inspect command returns container information in JSON format. When you want to extract specific objects from the returned JSON, the --format (-f) option formats the output using Go's text/template package. Sometimes I just want to get the source and destination folders of the volume mounts for every container. I have written the following bash script to achieve this:
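The script was not preserved in this excerpt; a minimal sketch of the same idea, iterating over running containers and printing each mount with a Go template (requires the Docker CLI; output layout is my choice):

```shell
#!/bin/bash
# Print "Container: <name>" followed by "<source> -> <destination>"
# for every volume mount of every running container.
list_mounts() {
  local c name
  for c in $(docker ps -q); do
    name=$(docker inspect -f '{{.Name}}' "$c")
    echo "Container: ${name#/}"
    docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{printf "\n"}}{{end}}' "$c"
  done
}
```

Calling list_mounts prints one Container line per container, followed by its mounts.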

Keeping TableSpace Statistics with Graphite

Most of the time, monitoring the usable free size of Oracle tablespaces is helpful, especially for production systems. Keeping that statistical data for some time is also meaningful, so as to see how much new data enters the database.
With the following bash script, each tablespace's free size can be sent to the Graphite database: query the Oracle data dictionary views (dba_tablespaces, dba_data_files, dba_free_space), then send each value to Graphite using netcat.
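The script itself was not preserved here. A sketch along the same lines (the Graphite host, connecting as sysdba, and querying only dba_free_space are my assumptions; the metric prefix matches the storage-schema below):

```shell
#!/bin/bash
GRAPHITE_HOST=graphite.example.com   # assumption: your graphite server
GRAPHITE_PORT=2003                   # carbon plaintext listener

push_tablespace_metrics() {
  local out tbls mb now
  # free MB per tablespace from the data dictionary
  out=$(sqlplus -s / as sysdba <<'EOF'
set pagesize 0 feedback off heading off
select tablespace_name || ' ' || round(sum(bytes)/1024/1024)
  from dba_free_space
 group by tablespace_name;
exit
EOF
)
  now=$(date +%s)
  # graphite plaintext protocol: "<metric> <value> <timestamp>"
  while read -r tbls mb; do
    [ -n "$tbls" ] || continue
    printf 'ora_tbls.%s.free_space_mb %s %s\n' "$tbls" "$mb" "$now" |
      nc -w1 "$GRAPHITE_HOST" "$GRAPHITE_PORT"
  done <<< "$out"
}
```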

On the Graphite/Carbon side, the whisper file schema definitions in storage-schemas.conf must be updated like the following example. This file is scanned for changes every 60 seconds, so there is no need to reload any service.
[oracle_tablespace_free_space]
pattern = ^ora_tbls.*.free_space_mb$
retentions = 10m:90d
I am using grafana to visualize the metrics. It looks like:

Nagios Plugin Return Codes

Nagios plugin scripts have to return two things:

1. An exit code (the return value)
2. A line of text output to STDOUT

Possible plugin return values are:
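The standard Nagios return codes are 0 = OK, 1 = WARNING, 2 = CRITICAL and 3 = UNKNOWN. A minimal sketch of threshold handling for a hypothetical "usage percentage" check (function name and thresholds are examples):

```shell
#!/bin/bash
# Nagios return codes: 0=OK, 1=WARNING, 2=CRITICAL, 3=UNKNOWN
check_usage() {   # usage: check_usage VALUE WARN CRIT
  local v=$1 warn=$2 crit=$3
  if [ "$v" -ge "$crit" ]; then
    echo "CRITICAL - usage at ${v}%"
    return 2
  elif [ "$v" -ge "$warn" ]; then
    echo "WARNING - usage at ${v}%"
    return 1
  fi
  echo "OK - usage at ${v}%"
  return 0
}
```

A plugin would end with something like `check_usage "$value" 80 90; exit $?` so Nagios sees both the text output and the exit code.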


Managing Cisco Network Devices using Bash and Expect

Most of the time, managing lots of network devices is troublesome if you do not have proper management software. In this post I will go through an example. The task I want to achieve is getting the existing tunnel configuration of Cisco network devices, then creating a new tunnel configuration using it.

First, install the expect package. In my case I use Ubuntu:
# sudo apt-get install expect
Make a directory for logs:
# mkdir /tmp/expect_logs
There are some text files and bash/expect scripts:
1. devices_list : IP list of the Cisco network devices.
2. : Main bash script.
3. expect_get.exp : Expect script for getting the existing device config.
4. expect_put.exp : Expect script for creating a new device config.
Contents of the scripts accordingly:
Running it in a while loop should do the trick.
# while read -r line; do ./ $line; done < devices_list
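The expect scripts themselves are not reproduced in this excerpt. As a sketch of the "create a new tunnel from the existing config" step, a helper like this (the function name and the assumption that the saved output is an IOS running-config are mine) can pick the next free Tunnel interface number from a fetched config:

```shell
#!/bin/bash
# Print the next unused Tunnel interface number found in a saved config file.
next_tunnel_id() {
  local max
  # highest existing "interface TunnelN" number, if any
  max=$(grep -oE '^interface Tunnel[0-9]+' "$1" | grep -oE '[0-9]+' | sort -n | tail -1)
  echo $(( ${max:--1} + 1 ))   # 0 when no tunnels exist yet
}
```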

Converting Virtualization Image Formats with qemu-img convert

qemu-img is a practical tool for converting between multiple virtual disk image formats. As of my qemu-img version, the supported formats are in the following list.
raw: Raw disk image format
qcow2: QEMU image format (copy-on-write)
qcow: Old QEMU image format
cow: User Mode Linux copy-on-write image format
vdi: VirtualBox 1.1 compatible image format
vmdk: VMware 3 and 4 compatible image format
vpc: VirtualPC compatible image format (VHD)
vhdx: Hyper-V compatible image format (VHDX)
cloop: Linux Compressed Loop image
A few examples:
kvm raw image to qcow2:
$ qemu-img convert -f raw -O qcow2 raw-image.img qcow2-image.qcow2
kvm raw image to vmdk:
$ qemu-img convert -f raw -O vmdk raw-image.img vmware-image.vmdk
vmdk to raw image:
$ qemu-img convert -f vmdk -O raw vmware-image.vmdk raw-image.img
vmdk to qcow2:
$ qemu-img convert -f vmdk -O qcow2 vmware-image.vmdk qcow2-image.qcow2
vdi to qcow2:
$ qemu-img convert -f vdi -O qcow2 vbox-image.vdi qcow2-image.qcow2

Migrating a Linux KVM machine to VMWare ESX

There are several steps to move your Linux KVM virtual machine to a VMware ESX cluster.
The first step, after shutting down the KVM instance, is to convert its disk to the vmdk format. The qemu-img tool makes this step easy. On the KVM host:
# qemu-img convert -O vmdk kvm-image.img esx-image.vmdk -p
(100.00/100%)
Now the vmdk file must be uploaded to the ESX host. In order to scp to the ESX host, the sshd daemon must be started.
On the ESX host:
# service sshd restart
On the KVM host:
# scp esx-image.vmdk user@esx-host:/path/to/datastore
It is time to create the ESX virtual machine using the vmdk file: in vSphere Client, right-click on the desired host and select New Virtual Machine. In the Create New Virtual Machine wizard, select custom configuration. Provide the options that suit you best in the later screens. Then, in the Create a Disk screen, select Use an Existing Virtual Disk. Click Browse and navigate to the location of your existing vmdk file. Review the summary, then finish.
After ru…

KVM VCpu Pinning With Virsh

CPU allocation to virtual guests with the KVM hypervisor can be done in different ways. In my example, vcpu placement is static but no cpuset is specified, so the domain process is pinned to all available physical CPUs. I want to change this and pin the vcpus to specific physical CPUs.
Note: The following are only examples, so give them a try in a test environment, substituting your own values.
# virsh dumpxml test_kvm01
<domain type='kvm' id='18'>
  <name>test_kvm01</name>
  ..
  ..
  <vcpu placement='static'>4</vcpu>
  ..
  ..
</domain>

vcpu info can be listed using the virsh vcpuinfo command:
# virsh vcpuinfo test_kvm01
VCPU:     0
CPU:      11
State:    running
CPU time: 247454.2s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:     1
CPU:      11
State:    running
CPU time: 845257.6s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:     2
CPU:      53
State:    running
CPU time: 237962.2s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyy…
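The pinning itself is done with virsh vcpupin. A sketch that pins each vCPU of a domain to consecutive physical CPUs (the domain name, starting CPU and count are examples):

```shell
#!/bin/bash
# Pin vCPUs 0..n-1 of DOMAIN to physical CPUs base..base+n-1.
pin_vcpus() {   # usage: pin_vcpus DOMAIN FIRST_PCPU VCPU_COUNT
  local dom=$1 base=$2 n=$3 i
  for ((i = 0; i < n; i++)); do
    virsh vcpupin "$dom" "$i" "$((base + i))"
  done
}
```

For example, `pin_vcpus test_kvm01 8 4` pins vCPUs 0-3 to pCPUs 8-11; adding --config to the virsh vcpupin call makes the pinning persistent across restarts.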

Mounting ACL enabled File Systems

To support ACLs on the files and directories it contains, a partition must be mounted with the acl option.
# mount -t ext4 -o acl /dev/test_vg0/test_lv0 /test
If you want to make it persistent, use /etc/fstab:
/dev/test_vg0/test_lv0 /test   ext4   acl   1 2
The /test directory can be accessed via SMB or NFS; both file services support access control lists. When mounting /test from an NFS client, the noacl option must be used if you want to disable ACLs.

Increasing KVM Guest CPU by using virsh

First, edit the configuration XML using virsh edit, which opens an editor.

# virsh edit test_vm

When the editor opens the configuration XML, find the vcpu placement element, change it according to your needs, and save it.

<vcpu placement='static'>8</vcpu>

Then shutdown and start your vm:

# virsh shutdown test_vm
# virsh start test_vm

Making KVM Image File Sparse with virt-sparsify

Sparse files use disk space more efficiently: only metadata is written to disk instead of the empty blocks, so less space is used. The term sparse corresponds to a thin-provisioned image in VMware jargon. To sparsify a VM guest image, first shut the guest down. On my KVM host there is a non-sparse KVM image file, identified like this:
# ls -lsh
61G -rw-r--r-- 1 qemu qemu 60G Jun 22 08:15 SRVTEST01.img
As seen from the output of the ls command, the SRVTEST01.img guest image occupies its full size on disk. To make it sparse:
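The typical virt-sparsify invocation, plus a small helper to verify the result (the output file name is an example; the helper is my addition):

```shell
#!/bin/bash
# Sparsify the image (the guest must be shut down first):
#   virt-sparsify SRVTEST01.img SRVTEST01-sparse.img
# Compare a file's apparent size with the blocks actually allocated:
check_sparse() {   # prints "<apparent_bytes> <allocated_KiB>"
  echo "$(stat -c %s "$1") $(du -k "$1" | cut -f1)"
}
```

After sparsifying, the allocated size reported by du should be well below the apparent size reported by ls or stat.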

Sending Jboss Server Logs to Logstash Using Filebeat with Multiline Support

In addition to sending system logs to Logstash, it is possible to add a prospector section to filebeat.yml for JBoss server logs. Sometimes the JBoss server.log has single events made up of several lines of messages. In such cases, Filebeat should be configured with a multiline prospector.
Filebeat takes lines that do not start with a date pattern (see the pattern in the multiline section, "^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}", with negate set to true) and combines each of them with the previous line that starts with a date pattern.
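A prospector section along those lines (the log path and document_type are my assumptions; syntax follows Filebeat 1.x):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /opt/jboss/standalone/log/server.log
      document_type: jboss_server_log
      multiline:
        pattern: "^[[:digit:]]{4}-[[:digit:]]{2}-[[:digit:]]{2}"
        negate: true
        match: after
```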

server.log file excerpt where DatePattern: yyyy-MM-dd-HH and ConversionPattern: %d %-5p [%c] %m%n
Logstash filter:
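The filter was not preserved in this excerpt. For the log4j ConversionPattern above (%d %-5p [%c] %m%n), a grok along these lines would fit (the field names and the type conditional are my choices):

```conf
filter {
  if [type] == "jboss_server_log" {
    grok {
      match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{LOGLEVEL:level}\s+\[%{JAVACLASS:category}\] %{GREEDYDATA:log_message}" }
    }
  }
}
```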

An Experiment with Filebeat and ELK Stack

The ELK Stack is one of the best distributed systems for centralizing many servers' logs. Filebeat is a log shipper that keeps track of the given logs and pushes them to Logstash; Logstash then outputs these logs to Elasticsearch. I am not going to explain how to install the ELK Stack, but rather experiment with sending multiple log types (document_type) from the Filebeat log shipper to the Logstash server.
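As an illustration, two prospectors with different document_type values could look like this (paths and type names are my assumptions; Filebeat 1.x syntax):

```yaml
filebeat:
  prospectors:
    -
      paths:
        - /var/log/syslog
      document_type: syslog
    -
      paths:
        - /opt/jboss/standalone/log/server.log
      document_type: jboss_server_log
```

On the Logstash side, filters and outputs can then branch on the [type] field.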

Linux Foundation Zephyr Project

On February 17, 2016, the Linux Foundation launched a project named Zephyr, a new player in the Internet of Things field, to encourage an open source real-time operating system for IoT devices. The project supports multiple system architectures and is available under the Apache 2.0 license. Because it is an open source project, the community can improve it to support new hardware, tools, sensors and device drivers.

The project is modular. Many IoT devices need hardware with the smallest possible memory footprint. Linux has already proven to be very good at running with limited resources; the Zephyr kernel can run in as little as 8 kB of memory. One can disable or enable as many modules as needed.

It will also be secure. Security is very important for all IoT devices. The Linux Foundation is putting together a group dedicated to maintaining and improving the project's security.

At first, Zephyr will support:


Parsing HTML Output of the Nagios Service Availability Report

Nagios has a Service Availability Report feature, usable from the Reports section of its web interface. But this feature is not designed as a web service architecture like a RESTful system, so in order to get these reports from an application we must make an HTTP request and parse the results. I just want to give an example of a Python script that does this kind of automation.

The Nagios version used in this example is 3.5.0. Since Nagios serves over HTTPS and requires basic authentication, we have to use an SSL context and an authentication header. The Python version is 2.7. The beautifulsoup4 library is used for parsing the HTML output.
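The script itself was not preserved in this excerpt. A sketch of the two pieces it needs, modernized to Python 3 for illustration and using the standard-library HTMLParser instead of beautifulsoup4 for self-containment (the URL, credentials, and table structure are assumptions):

```python
import base64
import urllib.request
from html.parser import HTMLParser

def build_request(url, user, password):
    """Request for the avail.cgi report with a basic-auth header attached."""
    req = urllib.request.Request(url)
    token = base64.b64encode(("%s:%s" % (user, password)).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    return req

class CellCollector(HTMLParser):
    """Collect the text of every <td> cell in the report's HTML tables."""
    def __init__(self):
        super().__init__()
        self.in_td = False
        self.cells = []
    def handle_starttag(self, tag, attrs):
        if tag == "td":
            self.in_td = True
            self.cells.append("")
    def handle_endtag(self, tag):
        if tag == "td":
            self.in_td = False
    def handle_data(self, data):
        if self.in_td:
            self.cells[-1] += data.strip()

def parse_cells(html):
    """Return the non-empty table cells, e.g. host names and percentages."""
    p = CellCollector()
    p.feed(html)
    return [c for c in p.cells if c]
```

With a self-signed certificate, the actual fetch would pass an unverified SSL context, e.g. `urllib.request.urlopen(req, context=ssl._create_unverified_context())`, before feeding the response body to parse_cells.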