Patching XenServer at Scale

In January, I posted a how-to guide covering the installation of XenServer in a large-scale environment, and this month we're going to talk about patching XenServer in a similar environment. Patching any operating environment is an important aspect of running a production installation, and XenServer is no different. Patching XenServer manually can be done in one of two ways: either through XenCenter and its rolling pool upgrade option, or via the CLI. The rolling pool upgrade wizard has been available since XenServer 6.0, and it not only applies hotfixes to all the servers in a pool in the correct order, but also ensures any running VMs are migrated if reboots are required. If you prefer to apply the patches using the CLI, it becomes your responsibility to perform the VM migration, but the process is quite simple. XenServer customers with a Citrix support contract can use the rolling pool upgrade wizard, while free users have the option of manually patching using the CLI. Of course, both options can be used in a large-scale environment, but generally the requirement is to script everything, and that's where this blog comes in.
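
For context, the CLI approach that the script below automates boils down to an upload-then-apply pattern. Here is a minimal sketch for a single standalone host, assuming the hotfix file sits in the current directory (the file name is just an example):

    # upload the hotfix; xe prints the patch UUID on success
    PATCHUUID=$(xe patch-upload file-name=XS62E001.xsupdate)

    # find the UUID of the host being patched
    HOSTUUID=$(xe host-list name-label=$(hostname) --minimal)

    # apply the hotfix to this host only
    xe patch-apply uuid=$PATCHUUID host-uuid=$HOSTUUID

    # check whether a reboot or toolstack restart is needed afterwards
    xe patch-list uuid=$PATCHUUID params=after-apply-guidance

If the guidance calls for a host reboot, any running VMs need to be migrated or shut down first, which is exactly the bookkeeping the rolling pool upgrade wizard would otherwise handle for you.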

Assumptions

The core assumption made by the script in this blog is that the XenServer hosts are not in a pool. If the hosts are in a pool, you should apply patches to the pool master first and then to any slaves. Since we're building on the environment from my previous blog, which used standalone hosts, this assumption is valid.
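
If your hosts were pooled instead, the same tools still apply: run the xe commands on the pool master and let it coordinate the slaves. A minimal sketch, assuming a single pool and an already downloaded hotfix file (the file name is illustrative):

    # upload once; the patch is stored at the pool level
    PATCHUUID=$(xe patch-upload file-name=XS62E001.xsupdate)

    # apply to every host in the pool, master first
    xe patch-pool-apply uuid=$PATCHUUID

The script below deliberately skips this case and works host by host.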

Preparation Steps

  1. Download the desired hotfixes, patches, and service packs from either citrix.com (http://support.citrix.com/product/xens/v6.2.0/) or xenserver.org (http://xenserver.org/overview-xenserver-open-source-virtualization/download.html).
  2. Extract the xsupdate file of each patch into a directory on an NFS share (a short sketch of this step appears after this list).
  3. Test each patch to verify it works in your environment. While not required, I always like to do this because QA can't know every possible configuration and bugs do happen.
  4. Create a file named manifest and place it in the same directory as the xsupdate files. The manifest file will contain a single line for each patch, and those patches will be processed in order. An example manifest file is provided below, and any given line can be commented out using the hash (#) character.
    XS62E001.xsupdate
    XS62E002.xsupdate
    XS62E004.xsupdate
    XS62E005.xsupdate
    XS62E009.xsupdate
    XS62E010.xsupdate
    XS62E011.xsupdate
    XS62E012.xsupdate
    XS62ESP1.xsupdate
  5. Create a script file named apply-patches.sh and place it in a known location. The contents of the script are:
    #!/bin/sh
    # apply all XenServer patches which have been approved in our manifest

    mkdir -p /mnt/xshotfixes
    mount 192.168.98.3:/vol/exports/isolibrary/xs-hotfixes /mnt/xshotfixes

    HOSTNAME=$(hostname)
    HOSTUUID=$(xe host-list name-label=$HOSTNAME --minimal)

    while read PATCH
    do
        # skip any manifest line that has been commented out
        if [ "$(echo "$PATCH" | head -c 1)" != '#' ]
        then
            # strip the .xsupdate extension to get the patch name-label
            PATCHNAME=$(echo "$PATCH" | awk -F: '{ split($1,a,"."); printf ("%s\n", a[1]); }')
            echo "Processing $PATCHNAME"
            PATCHUUID=$(xe patch-list name-label=$PATCHNAME hosts=$HOSTUUID --minimal)
            if [ -z "$PATCHUUID" ]
            then
                echo "Patch not yet applied; applying .."
                PATCHUUID=$(xe patch-upload file-name=/mnt/xshotfixes/$PATCH)
                if [ -z "$PATCHUUID" ] # empty uuid means patch uploaded, but not applied to this host
                then
                    PATCHUUID=$(xe patch-list name-label=$PATCHNAME --minimal)
                fi

                # apply the patch to *this* host only
                xe patch-apply uuid=$PATCHUUID host-uuid=$HOSTUUID

                # remove the patch files to avoid running out of disk space in the future
                xe patch-clean uuid=$PATCHUUID

                # figure out what the patch needs to be fully applied and then do it
                PATCHACTIVITY=$(xe patch-list name-label=$PATCHNAME params=after-apply-guidance | sed -n 's/.*: \(.*\)/\1/p')
                if [ "$PATCHACTIVITY" == 'restartXAPI' ]
                then
                    xe-toolstack-restart
                    # give time for the toolstack to restart before processing any more patches
                    sleep 60
                elif [ "$PATCHACTIVITY" == 'restartHost' ]
                then
                    # we need to reboot, but we may not be done, so create a link to our script

                    # first find out if we're already being run from a reboot
                    MYNAME=$(basename "$0")
                    if [ "$MYNAME" == 'apply-patches.sh' ]
                    then
                        # I'm the base script, so copy myself to the correct location
                        cp "$0" /etc/rc3.d/S99zzzzapplypatches
                    fi

                    reboot
                    exit
                fi
            else
                echo "$PATCHNAME already applied"
            fi
        fi
    done < /mnt/xshotfixes/manifest

    echo "done"
    umount /mnt/xshotfixes
    rmdir /mnt/xshotfixes

    # lastly, if I'm running as part of a reboot, kill the link
    rm -f /etc/rc3.d/S99zzzzapplypatches
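
As promised in step 2, here is a minimal sketch of staging the patches on the NFS share. It assumes the hotfix zip archives were downloaded to /tmp/hotfix-downloads and that the export path matches the one used in apply-patches.sh; both paths are assumptions you should adjust:

    # mount the NFS export that apply-patches.sh expects
    mkdir -p /mnt/xshotfixes
    mount 192.168.98.3:/vol/exports/isolibrary/xs-hotfixes /mnt/xshotfixes

    # pull just the xsupdate file out of each downloaded hotfix archive
    for ZIP in /tmp/hotfix-downloads/*.zip
    do
        unzip -o "$ZIP" '*.xsupdate' -d /mnt/xshotfixes
    done

    umount /mnt/xshotfixes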

Applying Patches

Applying patches is as simple as running the script file and letting it do what it needs to do. Here's how it works...

  1. We need to find out if the patch has already been applied.
  2. If the patch hasn't been applied, we upload it and then apply it. Since any given patch might require the toolstack to be restarted, we check for that and restart the toolstack when needed. We also need to handle the case where the patch requires a reboot. In that case we want to reboot, but we may still have additional patches to process, so we insert ourselves into the reboot sequence and keep processing patches until we've reached the end of the manifest.
  3. Since we want to be sensitive to disk space usage, we clean up the patch files once each patch has been applied.
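
To run the script across a fleet of standalone hosts, you can push it over SSH and execute it remotely. A minimal sketch, assuming key-based root SSH access and a hosts.txt file listing one host per line (both assumptions):

    # copy the patch script to each host and kick it off
    # -n stops ssh from swallowing the rest of hosts.txt on stdin
    while read HOST
    do
        scp apply-patches.sh root@$HOST:/root/apply-patches.sh
        ssh -n root@$HOST 'chmod +x /root/apply-patches.sh && /root/apply-patches.sh'
    done < hosts.txt

Because the script reboots a host whenever a patch requires it, expect the SSH session to drop; the rc3.d link the script creates takes over and keeps processing the remaining patches after the reboot.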

 

This script becomes quite valuable when used in conjunction with the provisioning script in my blog on installing XenServer at scale. Simply copy the patch script to /etc/rc3.d/S99zzzzapplypatches and add that command to first-boot-script.sh prior to the final reboot. With the combination of these two scripts, you can now install XenServer at scale and ensure those newly installed XenServer hosts are fully patched from the beginning.
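
As a sketch of that integration, the lines below could go into first-boot-script.sh just before its final reboot. The NFS path comes from apply-patches.sh above; keeping apply-patches.sh on that same share, and the layout of first-boot-script.sh itself, are assumptions carried over from the earlier post:

    # stage the patch script so it runs automatically after the final reboot
    mkdir -p /mnt/xshotfixes
    mount 192.168.98.3:/vol/exports/isolibrary/xs-hotfixes /mnt/xshotfixes
    cp /mnt/xshotfixes/apply-patches.sh /etc/rc3.d/S99zzzzapplypatches
    chmod +x /etc/rc3.d/S99zzzzapplypatches
    umount /mnt/xshotfixes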


