Managing Cisco Network Devices using Bash and Expect

Most of the time, managing a lot of network devices is troublesome if you do not have proper management software. In this post I will go through an example. The task I want to achieve is getting the existing tunnel configuration of Cisco network devices and then creating a new tunnel configuration from it.

First, install the expect package. In my case I use Ubuntu:
# sudo apt-get install expect
Make a directory for logs:
# mkdir /tmp/expect_logs

There are a couple of text files plus Bash and Expect scripts:
1. devices_list : IP list of the Cisco network devices.
2. cisco.sh : Main Bash script.
3. expect_get.exp : Expect script for getting the existing device configuration.
4. expect_put.exp : Expect script for creating a new device configuration.

The contents of the scripts are as follows:
cisco.sh
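The original cisco.sh is not reproduced here; below is a minimal sketch of what it might look like, assuming it takes a single device IP as its argument and calls the two Expect scripts (log file names and the grep pattern are assumptions):

#!/bin/bash
# cisco.sh - minimal sketch: take a device IP, capture its tunnel
# configuration with expect_get.exp, then push a new tunnel block
# with expect_put.exp. Paths, patterns and log names are assumptions.
DEVICE="$1"
LOGDIR=/tmp/expect_logs

[ -z "$DEVICE" ] && { echo "usage: $0 <device-ip>"; exit 1; }

# Capture the current configuration and keep a per-device log.
./expect_get.exp "$DEVICE" > "$LOGDIR/${DEVICE}_get.log"

# Extract the existing tunnel interfaces from the captured output
# (adapt the pattern to your own tunnel numbering scheme).
grep "^interface Tunnel" "$LOGDIR/${DEVICE}_get.log" > "$LOGDIR/${DEVICE}_tunnels.txt"

# Push the new tunnel configuration and log the session.
./expect_put.exp "$DEVICE" > "$LOGDIR/${DEVICE}_put.log"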

expect_get.exp
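Likewise, only a sketch of expect_get.exp, assuming SSH access straight to a privileged prompt; the username, password and prompt patterns are placeholders:

#!/usr/bin/expect -f
# expect_get.exp - minimal sketch: log in to the device given as the
# first argument and dump the tunnel sections of the running config.
# Username, password and prompt patterns are placeholders.
set device [lindex $argv 0]
set username "admin"
set password "secret"
set timeout 20

spawn ssh -o StrictHostKeyChecking=no $username@$device
expect "assword:"
send "$password\r"
expect "#"
send "terminal length 0\r"
expect "#"
# Show only the tunnel interfaces of the running configuration.
send "show running-config | section interface Tunnel\r"
expect "#"
send "exit\r"
expect eof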

expect_put.exp
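And a sketch of expect_put.exp; the tunnel interface and its parameters are purely illustrative placeholders, to be replaced with values derived from the captured configuration:

#!/usr/bin/expect -f
# expect_put.exp - minimal sketch: log in to the device given as the
# first argument and create a new tunnel interface. All tunnel
# parameters below are placeholders.
set device [lindex $argv 0]
set username "admin"
set password "secret"
set timeout 20

spawn ssh -o StrictHostKeyChecking=no $username@$device
expect "assword:"
send "$password\r"
expect "#"
send "configure terminal\r"
expect "(config)#"
send "interface Tunnel100\r"
expect "(config-if)#"
send "ip unnumbered Loopback0\r"
expect "(config-if)#"
send "tunnel source GigabitEthernet0/0\r"
expect "(config-if)#"
send "tunnel destination 192.0.2.1\r"
expect "(config-if)#"
send "end\r"
expect "#"
send "write memory\r"
expect "#"
send "exit\r"
expect eof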

Running cisco.sh in a while loop over the device list should do the trick.
# while read -r line; do ./cisco.sh "$line"; done < devices_list

Converting Virtualization Image Formats with qemu-img convert

qemu-img is a practical tool for converting between multiple virtual disk image formats. As of qemu-img-0.12.1.2-2.479.el6.x86_64, the supported formats are the following.

raw: Raw disk image format
qcow2: QEMU image format (copy-on-write)
qcow: Old QEMU image format
cow: User Mode Linux copy-on-write image format
vdi: VirtualBox 1.1 compatible image format
vmdk: VMware 3 and 4 compatible image format
vpc: VirtualPC compatible image format (VHD)
vhdx: Hyper-V compatible image format (VHDX)
cloop: Linux Compressed Loop image

A few examples:

kvm raw image to qcow2
$ qemu-img convert -f raw -O qcow2 raw-image.img qcow2-image.qcow2

kvm raw image to vmdk
$ qemu-img convert -f raw -O vmdk raw-image.img vmware-image.vmdk

vmdk to raw image
$ qemu-img convert -f vmdk -O raw vmware-image.vmdk raw-image.img

vmdk to qcow2
$ qemu-img convert -f vmdk -O qcow2 vmware-image.vmdk qcow2-image.qcow2

vdi to qcow2
$ qemu-img convert -f vdi -O qcow2 vbox-image.vdi qcow2-image.qcow2
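
To verify the result, qemu-img info can be used to show the format and virtual size of the converted image, for example:
$ qemu-img info qcow2-image.qcow2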

Migrating a Linux KVM machine to VMWare ESX

There are several steps to move your Linux KVM virtual machine to a VMware ESX cluster.

The first step, after shutting down the KVM instance, is to convert its disk image to the VMDK format. The qemu-img tool makes this step easy.
On the kvm host:
# qemu-img convert -O vmdk kvm-image.img esx-image.vmdk -p
(100.00/100%)

Now the VMDK file must be uploaded to the ESX host. In order to scp to the ESX host, the sshd daemon must be running.
On esx host:
# service sshd restart
On kvm host:
# scp esx-image.vmdk user@esx-host:/path/to/datastore

It is time to create the ESX virtual machine using the VMDK file:
Using the vSphere Client, right-click the desired host and select New Virtual Machine. In the Create New Virtual Machine wizard, select a custom configuration and provide the options that suit you best in the following screens. Later, in the Create a Disk screen, select Use an Existing Virtual Disk, click Browse and navigate to the location of your existing VMDK file. Review the summary, then finish.

After powering on the newly created ESX virtual machine, you may encounter errors when starting it or creating snapshots, e.g.:
2016-08-18T06:12:50.598Z| vcpu-0| I120: DISKLIB-CHAINESX : ChainESXOpenSubChainNode: can't create multiextent node 7eea123d-esx-image-00001-s001.vmdk failed with error The system cannot find the file specified (0xbad0003, Not found)
2016-08-18T06:12:50.599Z| vcpu-0| I120: DISKLIB-CHAIN : "/vmfs/volumes/54525cc9-bd43329e8-3a47-b2ba3eef754a/esx-image/esx-image-00001.vmdk" : failed to open (The system cannot find the file specified).

That is because the virtual machine disk is in a hosted format; hosted disks end with the -s00x.vmdk extension. To resolve this issue, convert the hosted-format virtual disks to one of the VMFS formats.

Open an SSH console to the ESXi host. Run this command to load the multiextent module:
# vmkload_mod multiextent
To convert virtual disks in hosted format to the VMFS format:
For a thick disk:
# vmkfstools -i esx-image.vmdk esx-image-new.vmdk -d zeroedthick
Or for a thin disk:
# vmkfstools -i esx-image.vmdk esx-image-new.vmdk -d thin
If the conversion is successful, delete the hosted disk:
# vmkfstools -U esx-image.vmdk
Rename the new VMFS disk to the original name:
# vmkfstools -E esx-image-new.vmdk esx-image.vmdk
Unload the multiextent module:
# vmkload_mod -u multiextent

Script for Reporting vCpu Count Currently Running on a KVM Host

This script is useful for listing the total vCPUs assigned to each domain currently running on a KVM host.
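
The original script is not reproduced here; the following is a minimal sketch that sums the vCPUs of all running domains as reported by virsh (the output layout is an assumption):

#!/bin/bash
# List each running domain with its vCPU count and print the total.
total=0
for dom in $(virsh list --name); do
    vcpus=$(virsh dominfo "$dom" | awk '/^CPU\(s\)/ {print $2}')
    printf "%-30s %s\n" "$dom" "$vcpus"
    total=$((total + vcpus))
done
echo "Total vCPUs: $total"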

KVM VCpu Pinning With Virsh

CPU allocation to virtual guests under the KVM hypervisor can be done in different ways. In my example the vCPU placement is static but no cpuset is specified, so the domain process can run on all available physical CPUs. I want to change this and pin the vCPUs to specific physical CPUs.

Note: The following are only examples, so try them in a test environment and substitute your own values.

# virsh dumpxml test_kvm01
<domain type='kvm' id='18'>
<name>test_kvm01</name>
..
..
<vcpu placement='static'>4</vcpu>
..
..
</domain>


vCPU information can be listed using the virsh vcpuinfo command:
# virsh vcpuinfo test_kvm01
VCPU: 0
CPU: 11
State: running
CPU time: 247454.2s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU: 1
CPU: 11
State: running
CPU time: 845257.6s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU: 2
CPU: 53
State: running
CPU time: 237962.2s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU: 3
CPU: 51
State: running
CPU time: 226221.4s
CPU Affinity: yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy


The current vCPU pinning configuration can be obtained with the following command:
# virsh vcpupin test_kvm01
VCPU: CPU Affinity
----------------------------------
0: 0-79
1: 0-79
2: 0-79
3: 0-79


With the same command, each vCPU can be pinned to the corresponding physical CPU:
# virsh vcpupin test_kvm01 0 10
# virsh vcpupin test_kvm01 1 11
# virsh vcpupin test_kvm01 2 12
# virsh vcpupin test_kvm01 3 13
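
By default these changes affect only the live domain; to make the pinning persist across a restart, virsh vcpupin also accepts a --config flag (check that the flag is available in your libvirt version), for example:
# virsh vcpupin test_kvm01 0 10 --config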


Mounting ACL enabled File Systems

To support ACLs on files or directories, the partition that contains them must be mounted with the acl option.

# mount -t ext4 -o acl /dev/test_vg0/test_lv0 /test
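
Once the file system is mounted with acl, entries can be set and inspected with setfacl and getfacl; the user name and file below are only illustrative:

# touch /test/report.txt
# setfacl -m u:alice:rw /test/report.txt
# getfacl /test/report.txt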

If you want to make it persistent, add an entry to /etc/fstab:

/dev/test_vg0/test_lv0 /test   ext4   acl   1 2

The /test directory can be shared via SMB or NFS; both file services support access control lists. When mounting /test from an NFS client, the noacl option must be used in order to disable ACLs.

Increasing KVM Guest CPU by using virsh

First, edit the configuration XML using virsh edit, which opens it in an editor.

# virsh edit test_vm

When the editor opens the configuration XML, find the vcpu element with the placement attribute, change its value according to your needs, and save it.

<vcpu placement='static'>8</vcpu>

Then shutdown and start your vm:

# virsh shutdown test_vm
# virsh start test_vm
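
To confirm that the new count is active, the vCPU count can be checked, for example with:
# virsh vcpucount test_vm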