Listing Volume Mount Information For Each Container in Docker

The docker inspect command returns container information in JSON format. When you want to extract specific objects from the returned JSON, the --format (-f) option formats the output using Go's text/template package. Sometimes I just want to get the source and destination folders of the volume mounts for every container. I have written the following bash script to achieve this:
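The original script is not preserved in this copy, so here is a minimal sketch of the idea. It iterates over the running containers and prints each mount's source and destination; the .Mounts field assumes a Docker version that reports mounts there (1.8+):

#!/bin/bash
# Print "source -> destination" for every volume mount of each running container.
for id in $(docker ps -q); do
    name=$(docker inspect --format '{{.Name}}' "$id")
    echo "Container: ${name#/}"
    docker inspect --format \
        '{{range .Mounts}}{{println .Source "->" .Destination}}{{end}}' "$id"
done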


Keeping TableSpace Statistics with Graphite

Monitoring the usable free space of Oracle tablespaces is helpful most of the time, especially for production systems. Keeping that statistical data for some time is also meaningful, so as to see how much new data enters the database.

With the following bash script, each tablespace's free size can be sent to the Graphite database. Just query the Oracle data dictionary views (dba_tablespaces, dba_data_files, dba_free_space), then send each value to Graphite using netcat.
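The script itself is not preserved in this copy; the following is a minimal sketch under a few assumptions: sqlplus is on the PATH, the monitor/secret account is a placeholder that can read dba_free_space, and the Graphite host name is hypothetical:

#!/bin/bash
# Send the free space (MB) of each tablespace to Graphite's plaintext listener.
GRAPHITE_HOST="graphite.example.com"   # placeholder
GRAPHITE_PORT=2003                     # Carbon plaintext port
NOW=$(date +%s)

query_free_space() {
    sqlplus -s monitor/secret <<'SQL'
set pagesize 0 linesize 200 feedback off heading off echo off
SELECT tablespace_name || ' ' || ROUND(SUM(bytes) / 1024 / 1024)
FROM dba_free_space
GROUP BY tablespace_name;
SQL
}

query_free_space | while read -r tbls free_mb; do
    [ -z "$tbls" ] && continue
    echo "ora_tbls.${tbls}.free_space_mb ${free_mb} ${NOW}" \
        | nc "$GRAPHITE_HOST" "$GRAPHITE_PORT"
done

Each line sent to port 2003 follows Graphite's plaintext protocol: metric path, value, and a Unix timestamp.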



On the Graphite side, the Whisper schema definitions in Carbon's storage-schemas.conf must be updated as in the following example. This file is scanned for changes every 60 seconds, so there is no need to reload any service.

[oracle_tablespace_free_space]
pattern = ^ora_tbls\..*\.free_space_mb$
retentions = 10m:90d

I am using Grafana to visualize the metrics.

Nagios Plugin for Checking Netapp 8020 Storage Disk Errors

Using paramiko, SSH connections can be implemented in Python. I have written a simple Nagios plugin to check the status of the disks on a NetApp 8020 storage system running Clustered Data ONTAP 8.3. The Python version is 2.6.6.
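The plugin is not reproduced in this copy; below is a minimal sketch of the approach for Python 2.6. The host name and credentials are placeholders, and the exact ONTAP command and its output text are assumptions for illustration:

#!/usr/bin/env python
# Minimal sketch of a Nagios disk check over SSH with paramiko (Python 2.6).
import sys
import paramiko

OK, WARNING, CRITICAL, UNKNOWN = 0, 1, 2, 3

HOST = "netapp-cluster"   # placeholder
USER = "monitor"          # placeholder
PASSWORD = "secret"       # placeholder

def main():
    try:
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(HOST, username=USER, password=PASSWORD)
        # cDOT command that lists failed disks; assumed for illustration
        stdin, stdout, stderr = client.exec_command("storage disk show -state broken")
        output = stdout.read()
        client.close()
    except Exception, e:
        print "UNKNOWN - ssh connection failed: %s" % e
        sys.exit(UNKNOWN)

    if "no entries matching your query" in output:
        print "OK - no broken disks"
        sys.exit(OK)
    print "CRITICAL - broken disks found:\n%s" % output
    sys.exit(CRITICAL)

if __name__ == "__main__":
    main()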

Nagios Plugin Return Codes

Nagios plugin scripts have to return two things:

1. An exit code (the return value)
2. A line of text output on STDOUT

Possible plugin return values are:


Plugin Return Code   Service State   Host State
0                    OK              UP
1                    WARNING         UP or DOWN/UNREACHABLE
2                    CRITICAL        DOWN/UNREACHABLE
3                    UNKNOWN         DOWN/UNREACHABLE
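
As an illustration, here is a minimal plugin skeleton in bash; the disk-usage check and the thresholds are arbitrary examples:

#!/bin/bash
# Minimal Nagios plugin skeleton showing the return-code contract.
OK=0; WARNING=1; CRITICAL=2; UNKNOWN=3

# Example check: percentage of space used on the root filesystem.
used=$(df -P / | awk 'NR==2 {sub(/%/, "", $5); print $5}')

if [ -z "$used" ]; then
    echo "UNKNOWN - could not read disk usage"
    exit $UNKNOWN
elif [ "$used" -ge 95 ]; then
    echo "CRITICAL - / is ${used}% full"
    exit $CRITICAL
elif [ "$used" -ge 85 ]; then
    echo "WARNING - / is ${used}% full"
    exit $WARNING
else
    echo "OK - / is ${used}% full"
    exit $OK
fi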

Managing Cisco Network Devices using Bash and Expect

Most of the time, managing lots of network devices is troublesome if you do not have proper management software. In this post I will go through an example. The task I want to achieve is getting the existing tunnel configuration of Cisco network devices, then creating a new tunnel configuration based on it.

First, install the expect package. In my case I use Ubuntu:
$ sudo apt-get install expect
Make a directory for logs:
# mkdir /tmp/expect_logs

There are some text files and bash/expect scripts:
1. devices_list : IP list of the Cisco network devices.
2. cisco.sh : the main bash script.
3. expect_get.exp : expect script for getting the existing device config.
4. expect_put.exp : expect script for creating a new device config.

The contents of the scripts are as follows:
cisco.sh
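The original content was not preserved in this copy; this is a hypothetical sketch. The credentials and the tunnel parameters passed to expect_put.exp are placeholders:

#!/bin/bash
# Called once per device: ./cisco.sh <ip>
IP=$1
USER="admin"    # placeholder
PASS="secret"   # placeholder
LOGDIR=/tmp/expect_logs

# Fetch the existing tunnel configuration and keep a copy per device.
./expect_get.exp "$IP" "$USER" "$PASS" > "$LOGDIR/${IP}_tunnels.txt"

# Derive the new tunnel's parameters from the saved config (site-specific),
# then push the new configuration. Id, source and destination are examples.
./expect_put.exp "$IP" "$USER" "$PASS" 100 Loopback0 203.0.113.10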

expect_get.exp
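Also a hypothetical sketch; it assumes the login lands on a privileged prompt ending in #, which may differ per device:

#!/usr/bin/expect -f
# Usage: expect_get.exp <ip> <user> <password>
set timeout 20
set ip   [lindex $argv 0]
set user [lindex $argv 1]
set pass [lindex $argv 2]
log_file -a /tmp/expect_logs/$ip.log

spawn ssh -o StrictHostKeyChecking=no $user@$ip
expect "assword:"
send "$pass\r"
expect "#"
send "terminal length 0\r"
expect "#"
send "show running-config | section interface Tunnel\r"
expect "#"
send "exit\r"
expect eof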

expect_put.exp
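A hypothetical sketch as well; the tunnel commands are a minimal example, and a real setup would derive the values from the configuration saved by expect_get.exp:

#!/usr/bin/expect -f
# Usage: expect_put.exp <ip> <user> <password> <tunnel-id> <source> <destination>
set timeout 20
set ip   [lindex $argv 0]
set user [lindex $argv 1]
set pass [lindex $argv 2]
set tid  [lindex $argv 3]
set src  [lindex $argv 4]
set dst  [lindex $argv 5]
log_file -a /tmp/expect_logs/$ip.log

spawn ssh -o StrictHostKeyChecking=no $user@$ip
expect "assword:"
send "$pass\r"
expect "#"
send "configure terminal\r"
expect "(config)#"
send "interface Tunnel$tid\r"
expect "(config-if)#"
send "tunnel source $src\r"
expect "(config-if)#"
send "tunnel destination $dst\r"
expect "(config-if)#"
send "end\r"
expect "#"
send "write memory\r"
expect "#"
send "exit\r"
expect eof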

Running cisco.sh in a while loop over devices_list should do the trick:
# while read -r line; do ./cisco.sh "$line"; done < devices_list

Converting Virtualization Image Formats with qemu-img convert

qemu-img is a practical tool for converting between multiple virtual disk image formats. As of qemu-img-0.12.1.2-2.479.el6.x86_64, the supported formats are in the following list.

raw: Raw disk image format
qcow2: QEMU image format (copy-on-write)
qcow: Old QEMU image format
cow: User Mode Linux copy-on-write image format
vdi: VirtualBox 1.1 compatible image format
vmdk: VMware 3 and 4 compatible image format
vpc: VirtualPC compatible image format (VHD)
vhdx: Hyper-V compatible image format (VHDX)
cloop: Linux Compressed Loop image

A few examples:

kvm raw image to qcow2
$ qemu-img convert -f raw -O qcow2 raw-image.img qcow2-image.qcow2

kvm raw image to vmdk
$ qemu-img convert -f raw -O vmdk raw-image.img vmware-image.vmdk

vmdk to raw image
$ qemu-img convert -f vmdk -O raw vmware-image.vmdk raw-image.img

vmdk to qcow2
$ qemu-img convert -f vmdk -O qcow2 vmware-image.vmdk qcow2-image.qcow2

vdi to qcow2
$ qemu-img convert -f vdi -O qcow2 vbox-image.vdi qcow2-image.qcow2

Migrating a Linux KVM machine to VMWare ESX

There are several steps to move your Linux KVM virtual machine to a VMware ESX cluster.

The first step, after shutting down the KVM instance, is to convert its disk to the vmdk format. The qemu-img tool makes this step easy.
On the KVM host:
# qemu-img convert -p -O vmdk kvm-image.img esx-image.vmdk
(100.00/100%)

Now the vmdk file must be uploaded to the ESX host. In order to scp to the ESX host, the sshd daemon must be running.
On the ESX host:
# service sshd restart
On the KVM host:
# scp esx-image.vmdk user@esx-host:/path/to/datastore

It is time to create the ESX virtual machine using the vmdk file:
Using the vSphere Client, right-click on the desired host and select New Virtual Machine. In the Create New Virtual Machine wizard, select custom configuration. Provide the detailed options in the later screens that suit you best. Later, in the Create a Disk screen, select Use an Existing Virtual Disk. Click Browse and navigate to the location of your existing vmdk file. Review the summary, then finish.

After powering on the newly created ESX virtual machine, it is possible to encounter errors when starting it or creating snapshots, e.g.:
2016-08-18T06:12:50.598Z| vcpu-0| I120: DISKLIB-CHAINESX : ChainESXOpenSubChainNode: can't create multiextent node 7eea123d-esx-image-00001-s001.vmdk failed with error The system cannot find the file specified (0xbad0003, Not found)
2016-08-18T06:12:50.599Z| vcpu-0| I120: DISKLIB-CHAIN : "/vmfs/volumes/54525cc9-bd43329e8-3a47-b2ba3eef754a/esx-image/esx-image-00001.vmdk" : failed to open (The system cannot find the file specified).

That is because your virtual machine disks are of a hosted type. To resolve this issue, you should convert the virtual disks from the hosted format to one of the VMFS formats. Hosted disks end with the -s00x.vmdk extension.

Open an SSH console to the ESXi host. Run this command to load the multiextent module:
# vmkload_mod multiextent
To convert virtual disks in hosted format to the VMFS format:
For a thick disk:
# vmkfstools -i esx-image.vmdk esx-image-new.vmdk -d zeroedthick
Or for a thin disk:
# vmkfstools -i esx-image.vmdk esx-image-new.vmdk -d thin
If the conversion is successful, delete the hosted disk:
# vmkfstools -U esx-image.vmdk
Rename the new VMFS disk to the original name:
# vmkfstools -E esx-image-new.vmdk esx-image.vmdk
Unload the multiextent module:
# vmkload_mod -u multiextent