Thursday, March 21, 2013

Some technical bits of my CloudStack testing

http://kirkjantzer.blogspot.com/2013/03/some-technical-bits-of-my-cloudstack.html


Since I mentioned a few bits of my testing in a previous post, I'll try not to repeat myself too much here.

After making my decision to use CloudStack, I had to decide how I wanted to deploy it. The most amazing thing about CloudStack is its flexibility in deployment models. Since this deployment was just a test and I had a limited amount of time, I wanted to simplify things. 
  • I had eight servers - all Dell PowerEdge machines left over from a recently abandoned project
  • The master was an R610 with two 73GB drives in a RAID-1 for the OS, two 250GB drives in a RAID-1 for the secondary storage of the deployment, and 32GB of RAM
  • The slaves were all R415. Each node had: Dual AMD 4280 8-core processors, 128GB of RAM, four 2TB SATA 7200RPM drives (no RAID since the hypervisor uses all available drives), and Intel 10GbE cards. (Side note: If you use Dell and haven't checked out this server, you need to. Great compute power at a killer price)

Even though this deployment was a very small fraction of our current production infrastructure, I feel that what I was able to accomplish in my testing of CloudStack will provide the starting framework for a substantial private cloud deployment in our infrastructure. 

Sticking with simplicity, I used CentOS 6.3 for the master node, since that's what we already use for all our other Linux servers. I also chose XenServer 6.0.1 for the hypervisor on the compute nodes (the current version is 6.1, but it isn't supported by CloudStack 4.0.1). I chose XenServer for the following reasons:
  • FREE
  • VERY simple install (<30min from mount ISO to online)
  • No need for a central server (vCenter) to act as the control plane between the CloudStack master and the nodes - everything goes through the management server's own API (see the sketch after this list)
  • Supports live migration of instances when using shared storage
  • It's supported by Dell as an OS on their hardware, both for drivers as well as OMSA (Open Manage Server Administrator) - this is VERY important to us
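
A nice consequence of that no-vCenter design is that everything the UI does goes through the management server's HTTP API, so the whole deployment is scriptable. Below is a minimal Python sketch of the request signing CloudStack 4.x expects (HMAC-SHA1 over the sorted, lowercased query string). The endpoint and keys are placeholders, not values from my setup.

    import base64
    import hashlib
    import hmac
    import urllib.parse
    import urllib.request

    # Placeholder endpoint and credentials -- substitute your own.
    API_URL = "http://management-server:8080/client/api"
    API_KEY = "your-api-key"
    SECRET_KEY = "your-secret-key"

    def cloudstack_request(command, **params):
        """Sign and send one CloudStack API call (the HMAC-SHA1 scheme CS 4.x uses)."""
        params.update({"command": command, "apikey": API_KEY, "response": "json"})
        # Canonical query string: keys sorted, values URL-encoded,
        # and the whole thing lowercased before signing.
        query = "&".join(
            f"{k}={urllib.parse.quote(str(v), safe='')}"
            for k, v in sorted(params.items())
        )
        digest = hmac.new(SECRET_KEY.encode(), query.lower().encode(), hashlib.sha1).digest()
        params["signature"] = base64.b64encode(digest).decode()
        with urllib.request.urlopen(API_URL + "?" + urllib.parse.urlencode(params)) as resp:
            return resp.read().decode()

I'll reuse this helper in the sketches below; once you have it, any of the list*/create*/deploy* commands is one function call away.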

Since all the hosts had the same hardware and hypervisor - and sticking with simplicity - I chose to put all the hosts into one cluster, in one pod, in one zone, all using local storage. If I had more time, I would have liked to test setting up an open-source shared-storage cluster on a few of the nodes, since I didn't really need seven compute nodes, but that wasn't in scope for this test. 
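
To make that concrete, here's a hypothetical sketch of building that one-zone/one-pod/one-cluster layout through the API, using the cloudstack_request helper above. Every name, address, and response-unwrapping detail below is illustrative, not lifted from my test:

    import json

    # Basic zone with security groups; DNS/gateway values are made up.
    zone = json.loads(cloudstack_request(
        "createZone",
        name="test-zone", networktype="Basic", securitygroupenabled="true",
        dns1="8.8.8.8", internaldns1="10.20.0.2",
    ))["createzoneresponse"]["zone"]

    pod = json.loads(cloudstack_request(
        "createPod",
        zoneid=zone["id"], name="test-pod",
        gateway="10.20.0.1", netmask="255.255.0.0",
        startip="10.20.0.100", endip="10.20.0.199",
    ))["createpodresponse"]["pod"]

    cluster = json.loads(cloudstack_request(
        "addCluster",
        zoneid=zone["id"], podid=pod["id"],
        clustername="xen-cluster-1", clustertype="CloudManaged",
        hypervisor="XenServer",
    ))["addclusterresponse"]["cluster"][0]

    # Register each freshly installed XenServer node into the cluster.
    for ip in ("10.20.0.11", "10.20.0.12"):
        cloudstack_request(
            "addHost",
            zoneid=zone["id"], podid=pod["id"], clusterid=cluster["id"],
            hypervisor="XenServer", url="http://" + ip,
            username="root", password="xenserver-root-password",
        )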

That's the beauty of CloudStack: zones, pods, and clusters can all differ in their resources - one can have shared storage while another has local - yet everything is managed from the same interface, and most of it is transparent to users deploying instances from the templates and offerings you opt to present to them or your customers.  
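
From the user's side, that transparency means a deployment boils down to a single asynchronous API call once an offering and a template are picked. A simplified sketch, with the same helper and caveats as above (real code should handle paging and poll the returned job):

    import json

    # Grab the first offering, a featured template, and the zone (simplified).
    offering = json.loads(cloudstack_request("listServiceOfferings"))[
        "listserviceofferingsresponse"]["serviceoffering"][0]
    template = json.loads(cloudstack_request(
        "listTemplates", templatefilter="featured"))[
        "listtemplatesresponse"]["template"][0]
    zone = json.loads(cloudstack_request("listZones"))[
        "listzonesresponse"]["zone"][0]

    # deployVirtualMachine is asynchronous; it returns a job id to poll.
    job = json.loads(cloudstack_request(
        "deployVirtualMachine",
        serviceofferingid=offering["id"],
        templateid=template["id"],
        zoneid=zone["id"],
    ))["deployvirtualmachineresponse"]
    print("async job:", job["jobid"])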

For networking, I chose basic networking. I didn't feel I could give the networking my full attention in the time allotted and didn't want to add any additional issues that would need debugging. I didn't use VLANs, and I had my own /16 subnet to play with so as not to interfere with anything in production. Instances used DHCP, and the security groups were left open. All this for - sorry to say it again - the sake of simplicity. 
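
In API terms, "security groups left open" amounts to a catch-all ingress rule on the default group, roughly like this sketch (fine for an isolated test /16, not a production recommendation):

    # Allow all inbound TCP from anywhere to the default group -- lab only.
    cloudstack_request(
        "authorizeSecurityGroupIngress",
        securitygroupname="default",
        protocol="TCP", startport=1, endport=65535,
        cidrlist="0.0.0.0/0",
    )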

If/when this expands into other pools/functions/datacenters, we will certainly change a lot of things, but as you can see, with a little bit of time and effort, your business can have its own private cloud. Feel free to reach out to me or the CloudStack community if you have any questions or comments. 
