Friday, March 8, 2013

Introducing the Latest VMware and Puppet Labs Integration

Nan Liu is a Senior Systems Engineer at VMware.
Disclaimer: This is a repost from Nan’s personal blog. The opinions expressed herein are Nan’s personal opinions and do not necessarily represent those of VMware.
Last week, we released a set of open source Puppet modules for managing VMware cloud environments, specifically VMware vCenter Server Appliance 5.1 and VMware vCloud Network and Security 5.1 (vCNS, previously known as vShield). They provide a framework for managing resources within vCenter and vCNS via Puppet (read Nick Weaver’s blog for more information).

The modules can be obtained from Puppet Forge:

$ puppet module install vmware/vcsa
$ puppet module install vmware/vcenter
$ puppet module install vmware/vshield


For development, use the GitHub repos, which can be installed via the following librarian-puppet Puppetfile:

mod "puppetlabs/stdlib"
mod "nanliu/staging"
mod "vmware_lib", :git => "git://github.com/vmware/vmware-vmware_lib.git"
mod "vcsa",       :git => "git://github.com/vmware/vmware-vcsa.git"
mod "vcenter",    :git => "git://github.com/vmware/vmware-vcenter.git"
mod "vshield",    :git => "git://github.com/vmware/vmware-vshield.git"


The Puppet management host needs connectivity to the vCenter and vCNS appliances. We are currently using a custom version of RbVmomi, which is included in the module. The management host should install all dependent software packages before managing any vCenter/vCNS resources:

node 'management_server' {
  include 'vcenter::package'
}


One of the gems in the package requires nokogiri. If you use Puppet Enterprise, install the pe-rubygem-nokogiri package on the management host (it is not typically installed for agents). For open source Puppet agents, see the Nokogiri documentation for installation instructions.
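For example, on an open source management host you could pull nokogiri in through Puppet's gem provider. This is a hypothetical sketch; nokogiri builds native extensions, so a compiler and the libxml2/libxslt development headers must already be present, and the exact package names vary by platform:

# Assumption: ruby and rubygems are already installed on the management host.
package { 'nokogiri':
  ensure   => installed,
  provider => 'gem',
}
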
In February’s sneak preview, I showed the debugging output for the SSH transport. Observant readers may have noticed that those commands were the steps to initialize the vCenter Server Appliance:
[Screenshot: SSH transport debugging output]
Here is the corresponding Puppet manifest. (Note: in the module test manifests, import 'data.pp' is a pattern to simplify testing for developers in different environments; please do not use the import function in your production Puppet manifests):

vcsa { 'demo':
  username => 'root',
  password => 'vmware',
  server   => '192.168.1.10',
  db_type  => 'embedded',
  capacity => 'm',
}


If we dig into the defined resource type, it simply passes the user account to the SSH transport and initializes the device in the appropriate sequence:

define vcsa (
...
) {
  transport { $name:
    username => $username,
    password => $password,
    server   => $server,
  }
 
  vcsa_eula { $name:
    ensure    => accept,
    transport => Transport[$name],
  } ->
 
  vcsa_db { $name:
    ensure    => present,
    type      => $db_type
  ...
}


Once the vCenter Server Appliance is initialized, we can manage vCenter resources using the vSphere API. The example below specifies a vSphere API transport, along with a datacenter, a cluster, and an ESX host (the resources should also work against a vCenter installation on Windows; however, that has not been tested):


transport { 'vcenter':
  username => 'root',
  password => 'vmware',
  server   => '192.168.1.10',
  # see rbvmomi documentation for available options:
  options  => { 'insecure' => true },
}
 
vc_datacenter { 'dc1':
  ensure    => present,
  path      => '/dc1',
  transport => Transport['vcenter'],
}
 
vc_cluster { '/dc1/clu1':
  ensure    => present,
  transport => Transport['vcenter'],
}
 
vc_cluster_drs { '/dc1/clu1':
  require   => Vc_cluster['/dc1/clu1'],
  before    => Anchor['/dc1/clu1'],
  transport => Transport['vcenter'],
}
 
vc_cluster_evc { '/dc1/clu1':
  require   => [
    Vc_cluster['/dc1/clu1'],
    Vc_cluster_drs['/dc1/clu1'],
  ],
  before    => Anchor['/dc1/clu1'],
  transport => Transport['vcenter'],
}
 
anchor { '/dc1/clu1': }
 
vcenter::host { 'esx1':
  path      => '/dc1/clu1',
  username  => 'root',
  password  => 'esx_password',
  dateTimeConfig => {
    'ntpConfig' => {
      'server' => 'us.pool.ntp.org',
    },
    'timeZone' => {
      'key' => 'UTC',
    },
  },
  transport => Transport['vcenter'],
}


The next task is to connect the vCloud Network and Security appliance to the vCenter appliance to form a cell:

transport { 'vshield':
  username => 'admin',
  password => 'default',
  server   => '192.168.1.11',
}
 
vshield_global_config { '192.168.1.11':
  # This is the vcenter connectivity info. See vShield API doc:
  vc_info   => {
    ip_address => '192.168.1.10',
    user_name  => 'root',
    password   => 'vmware',
  },
  time_info => { 'ntp_server' => 'us.pool.ntp.org' },
  dns_info  => { 'primary_dns' => '8.8.8.8' },
  transport => Transport['vshield'],
}


In the vShield API, all vCenter resources are referred to by their vSphere Managed Object Reference (MoRef). 'esx-13' might be understandable to a computer, but for configuration purposes the name of the ESX host makes much more sense to an admin. For this reason, we developed the transport resource to support multiple connections during a single Puppet run:

transport { 'vcenter':
  username => 'root',
  password => 'vmware',
  server   => '192.168.1.10',
  options  => { 'insecure' => true },
}
 
transport { 'vshield':
  username => 'admin',
  password => 'default',
  server   => '192.168.1.11',
}
 
vshield_edge { '192.168.1.11:dmz':
  ensure             => present,
  datacenter_name    => 'dc1',
  resource_pool_name => 'clu1',
  enable_aesni       => false,
  enable_fips        => false,
  enable_tcp_loose   => false,
  vse_log_level      => 'info',
  fqdn               => 'dmz.vm',
  vnics              => [
    { name          => 'uplink-test',
      portgroupName => 'VM Network',
      type          => "Uplink",
      isConnected   => "true",
      addressGroups => {
        "addressGroup" => {
          "primaryAddress" => "192.168.2.1",
          "subnetMask"     => "255.255.255.128",
        },
      },
    },
  ],
  transport  => Transport['vshield'],
}


This should provide a general overview of the modules' capabilities. Additional resources are available beyond what's covered in this post; however, some of them, such as vc_vm, are not operational yet, and the modules do not currently offer comprehensive coverage of the vSphere and vShield APIs. I hope you find these modules useful for your environment.
Thanks again for the support from the R&D team at VMware, especially Randy Brown and Shawn Holland for contributing to the vCenter and vShield modules. Also, thanks to Rich Lane for releasing RbVmomi, and to Christian Dickmann for helping resolve an issue in that library.
