Running XenServer... without a server
With the exciting release of the latest XenServer Dundee beta, the immediate reaction is to download it and give it a whirl to see all the shiny new features (and maybe to find out whether your favourite bug has been fixed!). Unfortunately, it's not something that can just be installed, tested and uninstalled like a normal application - you'll need to find yourself a server you're willing to sacrifice in order to try it out. Unless, of course, you decide to use the power of virtualisation!
XenServer as a VM
Nested virtualisation - running a VM inside another VM - is not something that anyone recommends for production use, and in some cases it doesn't work at all. However, since Xen has its origins way back before hardware virtualisation became ubiquitous in Intel processors, running pure PV guests (which don't require any hardware extensions) works very well indeed when XenServer is itself running as a VM. So for the purposes of evaluating a new release of XenServer it's actually a really good solution. It's also ideal for trying out many of the unikernel implementations, such as Mirage or Rump kernels, as these are pure PV guests too.
XenServer works very nicely when run on another XenServer, and indeed this is what we use extensively to develop and test our own software. But once again, not everyone has spare capacity to do this. So let's look to some other virtualisation solutions that aren't quite so server focused and that you might well have installed on your own laptop. Enter Oracle's VirtualBox.
VirtualBox, while not as performant a virtualisation solution as Xen, is a very capable platform that runs XenServer without any problems. It also has the advantage of being easy to install on your own desktop or laptop, which makes it an ideal way to try out XenServer betas quickly and conveniently. Some very useful tools have also been built around it, one of which is Vagrant.
Vagrant
Vagrant is a tool for provisioning and managing virtual machines. It targets several virtualisation platforms, including VirtualBox, which is what we'll use now to install our XenServer VM. The model is that it takes a pre-installed VM image - what Vagrant calls a 'box' - and some provisioning scripts (written as shell scripts, Salt, Chef, Ansible or others), and sets up the VM in a reproducible way. One of its key benefits is that it simplifies the management of these boxes, and HashiCorp run a service called Atlas that will host your boxes and the metadata associated with them. We have used this service to publish a Vagrant box for the Dundee beta.
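To make that model concrete, here's what a hand-written Vagrantfile pairing a box with a provisioning step looks like. This is purely an illustrative sketch - the box name is the one we publish, but the inline provisioner is a stand-in for whatever setup you'd actually want (and 'vagrant init', used below, will generate a starting Vagrantfile for you anyway):
# A minimal, hypothetical Vagrantfile pairing a box with a shell provisioner
cat > Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.box = "xenserver/dundee-beta"
  # The inline script is a placeholder for real provisioning steps
  config.vm.provision "shell", inline: "echo provisioned"
end
EOF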
Try the Dundee Beta
Once you have Vagrant installed, trying the Dundee beta is as simple as:
vagrant init xenserver/dundee-beta
vagrant up
This will download the box image (about 1GB) and create a new VM from it. As the VM boots, Vagrant will ask which host network to bridge onto; if you want your nested VMs to be reachable from the network, this should be a wired interface rather than a wireless one.
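If you'd rather not answer that prompt every time, Vagrant lets you name the bridge explicitly in the Vagrantfile. A sketch, with the caveats that the interface name is host-specific (this one is a typical macOS wired NIC) and that exactly how it merges with the box's own network definition can vary:
# Pin the bridged network to a specific host interface (name is an assumption -
# substitute whatever 'vagrant up' lists for your machine)
cat >> Vagrantfile <<'EOF'
Vagrant.configure("2") do |config|
  config.vm.network "public_network", bridge: "en0: Ethernet"
end
EOF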
The XenServer image is tweaked a little to make it easier to access - for example, by default it uses DHCP on all of its interfaces, which is useful for testing XenServer but wouldn't be advisable for a real deployment. To connect to your XenServer we first need to find its IP address, and the simplest way to do that is to ssh in and ask:
Mac-mini:xenserver jon$ vagrant ssh -c "sudo xe pif-list params=IP,device"
device ( RO)      : eth1
        IP ( RO): 192.168.1.102
device ( RO)      : eth2
        IP ( RO): 172.28.128.5
device ( RO)      : eth0
        IP ( RO): 10.0.2.15
So you should be able to connect to one of those IPs using XenCenter, or via a browser to download XenCenter (or via any other interface to XenServer).
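If you have the xe CLI available on your own machine, you can also drive the server remotely. A sketch, assuming 192.168.1.102 is the bridged address from above and that you know the root password configured in the box:
# Run any xe command against the XenServer VM over the network
xe -s 192.168.1.102 -u root -pw <password> vm-list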
Going Deeper
Let's now go all Inception and install a VM within our XenServer VM. Let's assume, for the sake of argument (and because, as I write this, it's quite true), that we're not running on a Windows machine, nor do we have one handy to run XenCenter on. We'll therefore restrict ourselves to using the CLI.
As mentioned before, HVM guests are out, so we're limited to pure PV guests. Debian Wheezy is a good example of one of these. First, we need to ssh in and become root:
Mac-mini:xenserver jon$ vagrant ssh
Last login: Thu Mar 31 00:10:29 2016 from 10.0.2.2
[vagrant@localhost ~]$ sudo bash
[root@localhost vagrant]#
Now we need to find the right template:
[root@localhost vagrant]# xe template-list name-label="Debian Wheezy 7.0 (64-bit)"
uuid ( RO) : 429c75ea-a183-a0c0-fc70-810f28b05b5a
name-label ( RW): Debian Wheezy 7.0 (64-bit)
name-description ( RW): Template that allows VM installation from Xen-aware Debian-based distros. To use this template from the CLI, install your VM using vm-install, then set other-config-install-repository to the path to your network repository, e.g. http://<server>/<path>
Now, as the description says, we use 'vm-install' and set the mirror:
[root@localhost vagrant]# xe vm-install template-uuid=429c75ea-a183-a0c0-fc70-810f28b05b5a new-name-label=wheezy
479f228b-c502-a791-85f2-a89a9f58e17f
[root@localhost vagrant]# xe vm-param-set uuid=479f228b-c502-a791-85f2-a89a9f58e17f other-config:install-repository=http://ftp.uk.debian.org/debian
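Since vm-install prints the new VM's uuid on stdout (as you can see above), this is easy to script by capturing the uuid in a shell variable instead of copy-pasting it:
# Capture the uuid that vm-install prints, then reuse it
VM=$(xe vm-install template-uuid=429c75ea-a183-a0c0-fc70-810f28b05b5a new-name-label=wheezy)
xe vm-param-set uuid=$VM other-config:install-repository=http://ftp.uk.debian.org/debian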
The VM doesn't have any network connection yet, so we'll need to add a VIF. We saw the IP addresses of the network interfaces above, and in my case eth1 corresponds to the bridged network I selected when starting the XenServer VM with Vagrant. To put a VIF on that network I need the network's uuid, so let's list the networks:
[root@localhost vagrant]# xe network-list
uuid ( RO)                : c7ba748c-298b-20dc-6922-62e6a6645648
          name-label ( RW): Pool-wide network associated with eth2
    name-description ( RW):
              bridge ( RO): xenbr2

uuid ( RO)                : f260c169-20c3-2e20-d70c-40991d57e9fb
          name-label ( RW): Pool-wide network associated with eth1
    name-description ( RW):
              bridge ( RO): xenbr1

uuid ( RO)                : 8d57e2f3-08aa-408f-caf4-699b18a15532
          name-label ( RW): Host internal management network
    name-description ( RW): Network on which guests will be assigned a private link-local IP address which can be used to talk XenAPI
              bridge ( RO): xenapi

uuid ( RO)                : 681a1dc8-f726-258a-eb42-e1728c44df30
          name-label ( RW): Pool-wide network associated with eth0
    name-description ( RW):
              bridge ( RO): xenbr0
So I need a VIF on the network with uuid f260c...
[root@localhost vagrant]# xe vif-create vm-uuid=479f228b-c502-a791-85f2-a89a9f58e17f network-uuid=f260c169-20c3-2e20-d70c-40991d57e9fb device=0
e96b794e-fef3-5c2b-8803-2860d8c2c858
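This step scripts nicely too: xe's --minimal flag prints bare comma-separated values, so you can look up the network by its bridge name rather than reading the list by eye. A sketch, reusing the $VM variable from the earlier snippet and assuming the bridged network is the one on xenbr1, as above:
# Find the uuid of the pool-wide network on bridge xenbr1, then attach a VIF
NET=$(xe network-list bridge=xenbr1 params=uuid --minimal)
xe vif-create vm-uuid=$VM network-uuid=$NET device=0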
All set! Let's start the VM and connect to the console:
[root@localhost vagrant]# xe vm-start uuid=479f228b-c502-a791-85f2-a89a9f58e17f
[root@localhost vagrant]# xe console uuid=479f228b-c502-a791-85f2-a89a9f58e17f
This should drop us into the Debian installer.
A few keystrokes later and we've got ourselves a nice new VM all set up and ready to go.
All of the usual operations will work: start, shutdown, reboot, suspend, checkpoint and even, if you set up two XenServer VMs, migration and storage migration. You can experiment with bonding, try multipathed iSCSI, check that alerts are generated, and almost anything else (with the exception of HVM guests and anything hardware-specific such as vGPUs, of course!). It's also an ideal companion to the Docker build environment I blogged about previously, as anything new you're experimenting with can easily be built using Docker and tested using Vagrant. If anything goes wrong, a 'vagrant destroy' followed by a 'vagrant up' gives you a completely fresh XenServer install to try again in less than a minute.
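That reset cycle is literally two commands (-f skips the confirmation prompt):
# Throw away the current XenServer VM and build a fresh one from the box
vagrant destroy -f
vagrant up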
The Vagrant box is itself built using Packer, a HashiCorp tool for creating machine images such as Vagrant boxes. The configuration for this is available on GitHub, and feedback on the box is very welcome!
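If you want to rebuild or customise the box yourself, the workflow is the standard Packer one. The file names below are placeholders rather than what the repository actually uses - check its README for the real ones:
# Build the box from the repo's Packer template, then register it locally
packer build template.json                     # template name is a placeholder
vagrant box add --name my/dundee-beta out.box  # output path is a placeholder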
In a future blog post, I'll be discussing how to use Vagrant to manage XenServer VMs.