

Showing posts from August, 2016

Migrating a Linux KVM machine to VMWare ESX

There are several steps to move a Linux KVM virtual machine to a VMware ESX cluster.
The first step, after shutting down the KVM instance, is to convert its disk to VMDK format. The qemu-img tool makes this step easy. On the KVM host:

# qemu-img convert -O vmdk kvm-image.img esx-image.vmdk -p
    (100.00/100%)
Now the VMDK file must be uploaded to the ESX host. In order to scp to the ESX host, its sshd daemon must be running. On the ESX host:

# service sshd restart

On the KVM host:

# scp esx-image.vmdk user@esx-host:/path/to/datastore
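The two command-line steps above can be wrapped in a small script. This is only a sketch: the image names, user, host and datastore path are placeholders carried over from the examples, and by default the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch: convert a KVM disk image to VMDK and copy it to the ESX datastore.
# Image names, user, host and datastore path below are placeholders.
set -u

SRC="kvm-image.img"
DST="esx-image.vmdk"
REMOTE="user@esx-host:/path/to/datastore"

# DRY_RUN=1 (the default here) prints each command instead of executing it;
# set DRY_RUN=0 to actually run the conversion and the upload.
run() {
    if [ "${DRY_RUN:-1}" = "1" ]; then echo "$@"; else "$@"; fi
}

run qemu-img convert -p -O vmdk "$SRC" "$DST"
run scp "$DST" "$REMOTE"
```

With DRY_RUN left at 1 this only echoes the two commands, which is a cheap way to review them before running the real conversion and upload.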
Now it is time to create the ESX virtual machine from the VMDK file: in vSphere Client, right-click the desired host and select New Virtual Machine. In the Create New Virtual Machine wizard, select a custom configuration, then fill in the later screens with the options that suit you best. In the Create a Disk screen, select Use an Existing Virtual Disk, click Browse, and navigate to the location of your existing VMDK file. Review the summary, then finish.
After ru…

KVM VCpu Pinning With Virsh

CPU allocation to virtual guests on the KVM hypervisor can be done in different ways. In my example, vcpu placement is static but no cpuset is specified, so the domain process is pinned to all available physical CPUs. I want to change this and pin the vCPUs to specific physical CPUs.
Note: The following are only examples, so try them in a test environment, substituting your own values.
# virsh dumpxml test_kvm01
<domain type='kvm' id='18'>
  <name>test_kvm01</name>
  ..
  ..
  <vcpu placement='static'>4</vcpu>
  ..
  ..
</domain>
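The current vCPU count can also be pulled out of that XML directly. A small sketch, parsing a trimmed sample of the dumpxml output shown above; on a live host you would pipe `virsh dumpxml test_kvm01` in instead of the here-string.

```shell
#!/bin/sh
# Sketch: extract the vCPU count from domain XML.
# A trimmed sample of the virsh dumpxml output above:
xml="<domain type='kvm' id='18'>
  <name>test_kvm01</name>
  <vcpu placement='static'>4</vcpu>
</domain>"

# sed keeps only the number between the <vcpu ...> tags.
count=$(printf '%s\n' "$xml" | sed -n 's/.*<vcpu[^>]*>\([0-9]*\)<\/vcpu>.*/\1/p')
echo "vCPU count: $count"
```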

vCPU info can be listed using the virsh vcpuinfo command:

# virsh vcpuinfo test_kvm01
VCPU:           0
CPU:            11
State:          running
CPU time:       247454.2s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           1
CPU:            11
State:          running
CPU time:       845257.6s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyyy

VCPU:           2
CPU:            53
State:          running
CPU time:       237962.2s
CPU Affinity:   yyyyyyyyyyyyyyyyyyyyyyy…
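To actually pin each vCPU, `virsh vcpupin` can be used. A minimal sketch, assuming the test_kvm01 domain from above and a purely hypothetical mapping of vCPU n to physical CPU n+8; echo keeps this a dry run, so the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch: pin each vCPU of test_kvm01 to a distinct physical CPU.
# The mapping vCPU n -> physical CPU n+8 is only an example; choose CPUs
# that fit your host's topology (e.g. keep a guest inside one NUMA node).
DOMAIN="test_kvm01"
VCPUS=4

# Prints one virsh vcpupin command per vCPU; remove the echo to apply them.
pin_all() {
    i=0
    while [ "$i" -lt "$VCPUS" ]; do
        echo virsh vcpupin "$DOMAIN" "$i" $((i + 8))
        i=$((i + 1))
    done
}

pin_all
```

Note that virsh vcpupin changes the live domain; add --config as well if the pinning should persist across a shutdown and start.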

Mounting ACL enabled File Systems

To support ACLs on files and directories, the partition that contains them must be mounted with the acl option.
# mount -t ext4 -o acl /dev/test_vg0/test_lv0 /test
If you want to make it persistent, add an entry to /etc/fstab:
/dev/test_vg0/test_lv0 /test   ext4   acl   1 2
The /test directory can then be shared via SMB or NFS; both file services support access control lists. When mounting /test from an NFS client, use the noacl option if you want to disable ACLs.
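To verify that the acl option actually took effect, the mount table can be checked. A small sketch, assuming the /test mount point from above; a sample mount-table line is parsed here so the logic is self-contained, but on a live system you would feed in the real line, e.g. from grep ' /test ' /proc/mounts.

```shell
#!/bin/sh
# Sketch: check whether a mount point carries the acl option.
# has_acl parses a mount-table line of the form:
#   <device> <mountpoint> <fstype> <options> <dump> <pass>
has_acl() {
    opts=$(printf '%s\n' "$1" | awk '{print $4}')
    case ",$opts," in
        *,acl,*) return 0 ;;
        *)       return 1 ;;
    esac
}

# Example line as it might appear in the mount table for the entry above:
line='/dev/mapper/test_vg0-test_lv0 /test ext4 rw,relatime,acl 0 0'
if has_acl "$line"; then
    echo "/test is mounted with acl"
fi
```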

Increasing KVM Guest CPU by using virsh

First, edit the configuration XML using virsh edit, which opens the domain XML in an editor.

# virsh edit test_vm

When the editor opens the configuration XML, find the vcpu element, change its value to suit your needs, and save the file.

<vcpu placement='static'>8</vcpu>

Then shut down and start your VM:

# virsh shutdown test_vm
# virsh start test_vm
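The same change can also be made without hand-editing the XML: virsh setvcpus can write the new count into the persistent configuration. A sketch of the full sequence, assuming the test_vm domain from above; echo keeps this a dry run, so the commands are printed rather than executed.

```shell
#!/bin/sh
# Sketch: raise test_vm's vCPU count to 8 with virsh setvcpus instead of
# editing the XML by hand. echo keeps this a dry run; remove it to apply.
DOMAIN="test_vm"
NEW_VCPUS=8

resize_vcpus() {
    echo virsh shutdown "$DOMAIN"
    # --config changes the persistent definition, picked up on the next start.
    # If the new count exceeds the domain's maximum, raise the maximum first:
    #   virsh setvcpus <domain> <count> --maximum --config
    echo virsh setvcpus "$DOMAIN" "$NEW_VCPUS" --config
    echo virsh start "$DOMAIN"
}

resize_vcpus
```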