# KVM virtual machine tips
## Migrating an existing VM to another host/hypervisor
### Process overview
Let's assume that you have a simple KVM guest that you need to migrate to a new hypervisor.
While normally we have everything under Ansible control, and so we could rebuild a new VM from scratch, apply the role, and restore a backup to migrate the node, it's sometimes easier and even faster to just move the VM (and, if needed, reconfigure the network).
!!! note
    The following snippets are only shown to help you with this, so don't copy/paste them blindly. As always, "Some Thinking Required [TM]" applies, and a basic knowledge of how KVM/libvirt works is needed.
We can start by opening an ssh session (agent-forwarded, of course) to the new hypervisor (with `ansible-role-kvm-host` applied and bridge[s]/network/storage configured, which is out of scope for this doc).
We can then define some variables:

* the KVM guest VM to migrate
* the KVM hypervisor on which it's currently running
* (optional) the ssh bastion host (check in the ansible inventory which bastion/jump hosts are relevant for the ansible env), through which we'll temporarily open the tcp/22 firewall port (normally closed on the whole server fleet)
```
# Defining some variables first
VM="kvm-guest1.dev.centos.org"
hypervisor="kvm-hypervisor1.dev.centos.org"
bastion_host="bastion.dev.centos.org"
# Finding our local public IP to temporarily allow it to push over ssh for rsync
# Only needed for public hypervisors, not needed at all for internal hypervisors in the same DC or reachable through vpn
ip_addr=$(curl --silent -4 http://icanhazip.com)
ssh -J ${bastion_host} ${hypervisor} "sudo iptables -I INPUT -p tcp --dport 22 --source ${ip_addr} -j ACCEPT"
# Ensuring our current user can read the .qcow2 file on the initial hypervisor
# (the sleep gives the guest some time to shut down cleanly; adjust if needed)
ssh $hypervisor "sudo setfacl -m u:$USER:rwx /var/lib/libvirt/images ; sudo virsh shutdown $VM ; sleep 15 ; sudo chown $USER /var/lib/libvirt/images/${VM}.qcow2"
# Ensuring now that we can also write on the local/target hypervisor and starting to rsync -S (sparsify) the VM disk image
sudo setfacl -m u:${USER}:rwx /var/lib/libvirt/images
rsync -avS -4 --progress ${hypervisor}:/var/lib/libvirt/images/${VM}.qcow2 /var/lib/libvirt/images/${VM}.qcow2
ssh $hypervisor "sudo virsh dumpxml $VM" > /var/lib/libvirt/images/${VM}.xml
```
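Once the copy is done (and ideally once the VM is confirmed to boot fine on the new hypervisor), you may want to revert the temporary changes made on the source hypervisor. A minimal sketch, assuming the same variables as above; the `qemu:qemu` ownership is an assumption, check what your distro/libvirt setup actually uses:

```
# Remove the temporary tcp/22 rule we inserted earlier for our ${ip_addr}
ssh -J ${bastion_host} ${hypervisor} "sudo iptables -D INPUT -p tcp --dport 22 --source ${ip_addr} -j ACCEPT"
# Drop the ACL we added and give the image back to its usual owner
# (qemu:qemu is an assumption here; verify first on your hypervisor)
ssh ${hypervisor} "sudo setfacl -x u:$USER /var/lib/libvirt/images ; sudo chown qemu:qemu /var/lib/libvirt/images/${VM}.qcow2"
```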
!!! info
    Now that we have both the .qcow2 and .xml files, we can define the VM, but *first* we have to verify that the .xml matches the local hypervisor. For example, you may run into issues with the defined cpu model and also with network settings, like the bridge name, etc., so feel free to review/adapt (see some tips below).
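A quick way to spot the settings that usually need adapting is to compare the dumped .xml with what the target hypervisor offers. A minimal sketch (the grep patterns are just examples, not an exhaustive checklist):

```
# Show the cpu, machine type and network source settings coming from the old hypervisor
grep -E "<cpu|machine=|source (bridge|network)=" /var/lib/libvirt/images/${VM}.xml
# And compare with what exists locally: available bridges and host cpu/arch capabilities
ip -brief link show type bridge
sudo virsh capabilities | grep -E "<model>|<arch"
```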
Once you have edited the .xml file to adapt the settings that needed to be modified, you can define the VM and start it:
```
# Then define and start the VM, and mark it as autostarted
sudo virsh define /var/lib/libvirt/images/${VM}.xml
sudo virsh start ${VM}
sudo virsh console ${VM}
sudo virsh autostart --domain $VM
```
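Once the VM is up, you can quickly verify that it's running and that autostart is enabled. A small sketch; note that `virsh domifaddr` works out of the box only for guests on the default NAT network, so the `--source arp` option below is an assumption to adapt (use `--source agent` if the qemu guest agent is installed):

```
# Confirm the domain state and that autostart is now enabled
sudo virsh list --all
sudo virsh dominfo ${VM} | grep -i autostart
# Try to discover the guest IP (for bridged guests, --source arp or --source agent)
sudo virsh domifaddr ${VM} --source arp
```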
### Migration tips (also for .xml)
Here are some tips to think about before migrating:

* if you'll have to modify network settings, ensure that you have the root password available to connect through the console with `virsh console`, or modify the network settings before shutting the VM down on the origin side.
* if libvirt complains about a cpu mismatch, it can be that the VM was created with something other than cpu host-passthrough; if that's the case, you can always `virsh edit $VM`, replace the cpu section with `<cpu mode='host-passthrough' check='partial'/>`, save, and try to boot the VM again (see the sketch after this list)
* for virtual network nics, you may have to change the bridge name or even switch back to the default network (if suddenly not using a bridge, etc.); see the sketch after this list
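For the last two tips, a minimal sketch of the corresponding .xml edits; `br0`/`br1` are assumed bridge names (source/target), so adapt them to what you actually found on both hypervisors:

```
# cpu mismatch: edit the domain xml (virsh edit validates the xml on save)
sudo virsh edit ${VM}
# ... and replace the whole <cpu ...>...</cpu> section with:
# <cpu mode='host-passthrough' check='partial'/>

# bridge rename: fix the dumped .xml *before* defining the VM
# (br0/br1 are hypothetical names; check the file and `ip link` output first)
sed -i "s/source bridge='br0'/source bridge='br1'/" /var/lib/libvirt/images/${VM}.xml
```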