Using Xen Project on OpenStack "Juno" via Libvirt
From the Xen Project Blog

By Xing Lin

This document describes the steps I took to set up a compute node based on Ubuntu 14.04 for OpenStack "Juno", using the Xen Project via libvirt approach. OpenStack does not support this approach well, as it sits in Group C of the hypervisor support matrix for OpenStack. You can hardly find any tutorial online describing this approach, and this might be the first. Let's get started!

Prerequisites

Follow "OpenStack Installation Guide for Ubuntu 14.04″ to setup the control node and network node, following the three-node architecture with OpenStack Networking (neutron). This involves lots of configuration and could take a day or two. Check that the control node and network node is working.

Steps

NOTE: Steps 3, 4, and 5 are workarounds for bugs present in Ubuntu 14.04 (and probably in other Debian derivatives of that era). Future releases of Ubuntu may not require these workarounds. See the References section below to review the actual bug reports that have been filed.

1. Add OpenStack juno to the repository:

apt-get update
apt-get install software-properties-common
add-apt-repository cloud-archive:juno
apt-get update

2. Install nova-compute-xen, sysfsutils and python-novaclient:

apt-get install nova-compute-xen sysfsutils python-novaclient

3. Install qemu-2.0.2 with a patch fixing the unmapping of persistent grants. Current qemu releases (including 2.0.2, 2.1.2 and 2.2.0-rc1) do not include this patch, and this will result in Dom0 kernel crashes when creating a Xen Project DomU from the OpenStack GUI (horizon). I have applied this patch and made the modified qemu available on GitHub:

wget https://github.com/xinglin/qemu-2.0.2/archive/master.zip
unzip master.zip
cd qemu-2.0.2-master/
apt-get build-dep qemu
./configure
make -j16
make install
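
To make sure the patched build is the one that will be used (make install places it under /usr/local by default), a check along these lines should do:

which qemu-system-i386                      # should resolve to /usr/local/bin/qemu-system-i386
/usr/local/bin/qemu-system-i386 --version   # should report QEMU emulator version 2.0.2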

4. Add a patch to /etc/init.d/xen to start the qemu process during startup:

--- /etc/init.d/xen	2014-11-18 20:54:10.788457049 -0700
+++ /etc/init.d/xen.bak	2014-11-18 20:53:14.804107463 -0700
@@ -228,6 +228,9 @@ case "$1" in
 		*) log_end_msg 1; exit ;;
 	esac
 	log_end_msg 0
+	/usr/local/bin/qemu-system-i386 -xen-domid 0 -xen-attach -name dom0 -nographic -M xenpv -daemonize \
+	    -monitor /dev/null -serial /dev/null -parallel /dev/null \
+	    -pidfile /var/run/qemu-xen-dom0.pid
 	;;
   stop)
 	capability_check
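
After the reboot in step 6, you can check that this change took effect by looking for the dom0 qemu process via the pidfile named above:

cat /var/run/qemu-xen-dom0.pid            # pidfile written by the qemu invocation added above
ps -p $(cat /var/run/qemu-xen-dom0.pid)   # the qemu-system-i386 dom0 process should be running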

5. Create a link at /usr/bin for pygrub:

 ln -s /usr/lib/xen-4.4/bin/pygrub /usr/bin/pygrub
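
You can verify the link (the exact source path depends on the Xen package version, 4.4 here):

ls -l /usr/bin/pygrub   # should point at /usr/lib/xen-4.4/bin/pygrub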

6. Reboot the machine and boot into Xen Project Dom0.
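
To confirm the machine actually came up under the Xen Project hypervisor rather than plain Linux, the xl toolstack should respond and show Domain-0, for example:

xl info | grep xen_version   # should report the Xen version (4.4.x on Ubuntu 14.04)
xl list                      # should show only Domain-0 at this point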

7. Edit /etc/nova/nova.conf to configure the nova service. You can also follow the steps in the OpenStack installation guide to configure nova as a compute node.

  • In the [DEFAULT] section, add the following:

    [DEFAULT]
    ...
    rpc_backend = rabbit
    rabbit_host = controller
    rabbit_password = RABBIT_PASS
    auth_strategy = keystone
    my_ip = MANAGEMENT_INTERFACE_IP_ADDRESS
    vnc_enabled = True
    vncserver_listen = 0.0.0.0
    vncserver_proxyclient_address = MANAGEMENT_INTERFACE_IP_ADDRESS
    novncproxy_base_url = http://controller:6080/vnc_auto.html
    verbose = True

MANAGEMENT_INTERFACE_IP_ADDRESS is the IP address of the management network interface for this compute node, typically 10.0.0.31 for the first compute node.

  • Add keystone_authtoken section:
    [keystone_authtoken]
    auth_uri = http://controller:5000/v2.0
    identity_uri = http://controller:35357
    admin_tenant_name = service
    admin_user = nova
    admin_password = NOVA_PASS
  • Add glance section:
    [glance]
    host=controller

8. Verify the content of /etc/nova/nova-compute.conf is as follows:

[DEFAULT]
compute_driver=libvirt.LibvirtDriver

[libvirt]
virt_type=xen
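
Once the nova configuration files are in place, restart the compute service and check from the controller that the new host registers (service and command names as packaged on Ubuntu 14.04):

service nova-compute restart
# then, on the controller:
nova service-list   # the new compute host should appear with a :-) state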

9. Install and configure the Networking (neutron) component on the compute node. Follow the steps outlined in the OpenStack install guide for neutron compute nodes. Note that in /etc/neutron/neutron.conf I did not set "allow_overlapping_ips = True" in the [DEFAULT] section, because the documentation says to leave this property at False when Neutron and the nova security groups are used together.
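
For reference, a minimal sketch of the [DEFAULT] section of /etc/neutron/neutron.conf on the compute node, assuming the same ML2 setup as the install guide (allow_overlapping_ips is simply left at its default rather than enabled):

[DEFAULT]
rpc_backend = rabbit
rabbit_host = controller
rabbit_password = RABBIT_PASS
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
# allow_overlapping_ips is not enabled here, since nova security groups are also in use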

10. Final step: now you should be able to launch an instance from horizon. In my case, I launched an instance running cirros-0.3.3-x86_64. When I log in to the compute node, I can see this instance running with virsh:

# virsh --connect=xen:///
Welcome to virsh, the virtualization interactive terminal.

Type:  'help' for help with commands
       'quit' to quit

virsh # list
 Id    Name                           State
----------------------------------------------------
 1     instance-0000003b              running
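
The same guest should also be visible to the Xen toolstack directly, for example:

xl list   # should show Domain-0 plus the instance, e.g. instance-0000003b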

References

About the author:
Xing is a PhD student in the School of Computing at the University of Utah. His primary interests are in file and storage systems. He has built an NFS client driver for Hadoop (to be open-sourced soon) and proposed a new data transformation for improving compression, called "migratory compression" (check out our USENIX FAST '14 paper).

