Friday, May 31, 2013

Video: Continuous Integration for Your Puppet Code

https://puppetlabs.com/blog/release-management-blog/video-continuous-integration-for-your-puppet-code/

By Mike Hall, Puppet Labs

Matthew Barr of Snap Interactive gave a great talk at Puppet Camp New York about bringing continuous integration (CI) tools and practices to Puppet code. It's a good introduction to the ways you can manage the automation of your infrastructure in a safe, reliable manner.


Matthew started his talk with a reference to our own Adrien Thebo's blog post about using Git with Puppet environments. Environments provide a way to break up your Puppet installation into multiple configurations. With Puppet environments, it's easier to create development, testing and production configurations, or isolate configurations by roles, such as database servers or web servers.

Since Git branches are cheap, Matthew noted, they're a natural way to store Puppet environments in a CI environment: One branch per development stage.
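To make that concrete, here's a minimal sketch of the branch-per-environment idea, assuming a Git remote named origin and a master set up for dynamic environments along the lines Adrien describes (the branch names are just examples):

    # One Git branch per Puppet environment
    git checkout -b development    # becomes the "development" environment
    git checkout -b testing        # becomes the "testing" environment
    git push origin development testing

    # An agent can then be pointed at whichever environment you want to exercise
    puppet agent --test --environment=testing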

Alongside Git, the core of the toolchain Matthew discussed is Jenkins, an open source CI server that makes it easy to run automated tests against a codebase.

Getting Jenkins set up is made easier with R. Tyler Croy's Jenkins Puppet module, which automates Jenkins' installation and configuration. You can also use the module to install Jenkins plugins, including a plugin that builds pull requests on GitHub and reports the results.
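As a rough sketch, getting the module and a basic Jenkins instance in place from the command line might look like the following (module and plugin names are as of the time of the talk; check the Forge for current details):

    # Install R. Tyler Croy's Jenkins module from the Puppet Forge
    puppet module install rtyler-jenkins

    # Apply a minimal manifest: install Jenkins, then one plugin as an example
    puppet apply -e 'class { "jenkins": } jenkins::plugin { "git": }'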

Static Analysis

The first stage of Matthew's Puppet/CI workflow involves performing static analysis against incoming Git commits.

The static analysis tools validate the basic syntax of your Puppet manifests and ERB templates, then make sure they're in line with recommended style by catching whitespace errors and other issues that might not cause Puppet code to fail, but could make it harder to maintain.
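Matthew's exact tool list isn't reproduced here, but a typical Jenkins shell step for this stage (file paths are hypothetical) looks something like:

    #!/bin/bash -e
    # Syntax-check every Puppet manifest in the repository
    find . -name '*.pp' -print0 | xargs -0 puppet parser validate

    # Syntax-check ERB templates without rendering them
    find . -name '*.erb' -print0 | xargs -0 -n1 sh -c 'erb -P -x -T "-" "$0" | ruby -c'

    # Enforce the Puppet style guide
    puppet-lint --fail-on-warnings .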

Module Testing

The next stage in Matthew's workflow involves module testing. With the rspec-puppet tool, Puppet code can be tested in greater depth: issues with more complex logic or Hiera data sources are surfaced during this stage.
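For a module that already has an rspec-puppet layout (a spec/ directory with a spec_helper and the puppetlabs_spec_helper Rakefile), the Jenkins step can be a short sketch like this:

    #!/bin/bash -e
    # Pull in rspec-puppet and the shared Rake tasks
    gem install rspec-puppet puppetlabs_spec_helper

    # Scaffold the spec/ directories for a module (only needed once)
    rspec-puppet-init

    # Run the module's unit tests
    rake spec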

Catalog Testing

Once the Puppet code is validated and tested, it moves on to catalog testing. At this stage, Jenkins jobs run shell scripts that update the repository under test from GitHub, then attempt to compile a Puppet catalog against a set of Facter facts.

Where previous stages in the CI process can catch more formal issues with logic or style, catalog testing can catch dependency loops or missing modules, variables or facts.
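One way to script that stage, assuming a Puppet 3-era master and using placeholder host, path and branch names:

    #!/bin/bash -e
    # Refresh the environment under test from GitHub
    cd /etc/puppet/environments/testing
    git fetch origin && git reset --hard origin/testing

    # Compile a catalog for a representative node against its cached facts
    puppet master --compile web01.example.com --environment=testing > /dev/null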

Dynamic Analysis

Having moved through a number of Jenkins jobs that test your Puppet code in isolation, Matthew's workflow moves on to dynamic analysis. This is a stage where virtualization helps, because the Puppet code must be deployed to running systems.

During dynamic analysis, testing centers on making sure the target systems themselves will work with your Puppet code: missing packages or problematic configuration files that might break a service in production are caught at this stage.
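A sketch of this stage, assuming throwaway VMs (Vagrant is one option) and an agent pointed at a test environment:

    #!/bin/bash
    # Bring up a throwaway VM that mirrors production
    vagrant up || exit 1

    # --test implies --detailed-exitcodes, so 0 (no changes) and
    # 2 (changes applied) both count as a successful run
    vagrant ssh -c 'sudo puppet agent --test --environment=testing'
    status=$?

    vagrant destroy -f
    [ "$status" -eq 0 ] || [ "$status" -eq 2 ]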

Integration Testing

Once your Puppet code has been validated in isolation and in virtual environments that match what it will encounter in production, the next step in the workflow involves integration testing.

Matthew said he uses integration testing to check critical functionality on each system to make sure necessary services are running, applications are responding as expected, and that key resources such as databases are present on the system.

He said tests don't have to be complex. Jenkins can work with anything that can return a "0" for success or a non-zero value for failure. A simple script that uses curl to pull down a web page, for instance, can serve as a Jenkins integration test.
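For example, a minimal integration test along those lines (the URL is a placeholder) could be:

    #!/bin/bash
    # curl --fail exits non-zero on HTTP errors, which Jenkins treats as a failed build
    curl --fail --silent --max-time 10 http://web01.example.com/ > /dev/null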

Nightly Rebuilds

Once everything makes its way through testing, Matthew said it's important to perform nightly rebuilds in order to have confidence that you can redeploy your Puppet-managed infrastructure in an emergency.

Nightly rebuilds check for dynamic factors outside your Puppet code: Updated package repositories, changed or updated packages, altered kickstart files, or other elements Puppet isn't directly controlling.
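A nightly Jenkins job for this can simply re-run the earlier stages from a clean slate on a cron-style schedule; a sketch, reusing the hypothetical Vagrant setup from above:

    #!/bin/bash -e
    # Rebuild the VM from scratch so package repos, kickstart files and other
    # external factors get exercised, not just the Puppet code itself
    vagrant destroy -f
    vagrant up

    # Exit code 2 just means changes were applied, which is expected on a fresh box
    vagrant ssh -c 'sudo puppet agent --test --environment=production' || [ $? -eq 2 ]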

Deployment

And that's the purpose of the whole workflow: deployment to production. Matthew's talk covered the use of canary servers doing "noop" and live Puppet runs before merging the Puppet code and doing a full deployment.
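A sketch of that canary step, with placeholder host and branch names:

    #!/bin/bash -e
    # Dry-run the candidate code on a canary server first
    ssh canary01.example.com 'sudo puppet agent --test --noop --environment=testing' || [ $? -eq 2 ]

    # If the noop output looks sane, do a live run on the canary
    ssh canary01.example.com 'sudo puppet agent --test --environment=testing' || [ $? -eq 2 ]

    # Only then merge to the production branch for the rest of the fleet
    git checkout production && git merge testing && git push origin production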

If you'd like a look at some tooling Matthew himself has produced, he's published his puppet-ci module on the Puppet Forge.



