Tuesday, March 19, 2013

Secure Puppet Code Collaboration

https://puppetlabs.com/blog/secure-puppet-code-collaboration/

By Chris Spence, Puppet Labs

Did you know that you can use workflow and testing to ensure you get the configurations you asked for in a secure manner? In this post, we discuss how.

Puppet compiles a catalog using Puppet DSL code and data that's usually at rest on the local filesystem of the puppet master. An agent requests a catalog from a master, and subsequently enforces the catalog locally. It's common for multiple teams to collaborate on the configuration code and data, with different teams being responsible for different modules. Often, this means that the Puppet module path is actually made up of two or more entries, where each modulepath entry is a different version control system repository managed by a different team. For example:


  modulepath = /data/puppet/core/modules:/data/puppet/webapps/modules  

I will have a manifest (maybe an ENC too, but let's just talk about manifests here):

  manifest = /data/puppet/manifests/site.pp  

I am more likely to have my core security modules in the first part of my module path. Potentially I *could* have a class security {..} in both halves of my module path; however, the module found first by the autoloader will take precedence, and the module in the web apps part of the path won't be evaluated. It's important to get the configurations that you want in first, especially because there's no hard and fast rule that will restrict resource definitions for security configurations to that security module. Any resource declaration might be in any module, subject to uniqueness constraints, and so forth.
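To sketch that shadowing (a hypothetical layout in which both teams happen to ship a module named security):

  # Hypothetical: both halves of the modulepath define 'security'
  /data/puppet/core/modules/security/manifests/init.pp      # found first by the autoloader; this definition wins
  /data/puppet/webapps/modules/security/manifests/init.pp   # never evaluated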

Thus, my security class manages a resource:

  user { 'root':
    password => 'myspecialpasswordhash',
  }

If another class attempts to manage the same resource, Puppet will return a catalog compilation error, and no configurations will be managed. This is good. My root user won't get hosed by a rogue configuration. However, there's a twist to this story. What if my manifest (site.pp) declares another class, 'configurethewebserver'? That class is defined in the second half of my module path (which is where the modules that deploy my applications live, as opposed to the core standard operating environment), and gets declared, quite legitimately, in a node definition in the site manifest.


The downside is that someone (a rogue or misguided sysadmin or developer who can contribute to my Puppet code) has remembered that you can use class inheritance to override parameters and, in order to achieve their configuration aims, has inserted something like the following in a Puppet class:

  class configurethewebserver inherits security {
    User['root'] {
      password => 'wejustpwnedyourwebinfrastructure',
    }
  }

Unfortunately, the next time I do a Puppet run on my production boxes, my root user no longer has the password hash 'myspecialpasswordhash', but the overridden value 'wejustpwnedyourwebinfrastructure' inherited from the rogue class in the second half of the module path. Sigh.

The clear security moral here is that if you give someone the ability to put code on your puppet master, you've given them root on all the nodes that you're managing on that puppet master. Don't give people root if you don't have to.

Security implications of delegating access to specific portions of the Puppet module path aside, the issue we are really trying to fix here is making sure that the configurations we think we are defining actually take effect and work. So what can we do with processes and workflow to help ensure consistency and make sure we don't fall victim to accidental or deliberate misconfiguration?

First, make sure that all the configurations you regard as core for security are actually defined in your code and data, so that you can positively validate them. If you don't define something, you're not managing it, and you can't test it; testing things you haven't attempted to manage just means you're relying on the defaults. Even if you only want the default value, enforce it explicitly, because that is your desired configuration.
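A minimal sketch of that idea (the file path, ownership and mode here are illustrative, not taken from the original post): even if these values happen to match the packaged defaults, declaring them makes them managed and testable.

  file { '/etc/ssh/sshd_config':
    ensure => file,
    owner  => 'root',
    group  => 'root',
    mode   => '0600',
  }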

Second, from a workflow perspective, you might want a gateway between commits to your version control system and the updates that follow on your puppet masters. That gateway is likely to be a person tasked with merging edits into the branch of the repository that the puppet master uses for catalog compiles. That way, you stand a chance that a real human being will review incoming changes. It turns out that humans are really good at heuristics; spotting subtle vulnerabilities tends to be a real person's job. When it comes down to it, computers essentially say 'yes' or 'no', and a computer that says 'maybe' is probably screwing with you. At the end of the day, you've got to fundamentally understand the codebase; if you allow rogue code on your puppet master, you're in trouble.

Third, you should certainly run automated testing against your module code using a continuous integration framework. Make sure you use appropriate data when you do so! Using unit testing toolsets like rspec and rspec-puppet to positively validate the functionality of your code and modules is valuable.
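As a minimal rspec-puppet sketch (reusing the security class and password hash from the example above; the spec path follows rspec-puppet convention):

  # spec/classes/security_spec.rb
  require 'spec_helper'

  describe 'security' do
    # Positive validation: the class must manage root's password hash.
    it { should contain_user('root').with_password('myspecialpasswordhash') }
  end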

Fourth, please consider testing full catalog compiles prior to deployment. This would involve compiling a catalog for each node in your infrastructure and having integration tests that validate each security-related resource configuration based on your data.
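One way to sketch this is an rspec-puppet host spec, which compiles the catalog for a named node via your site manifest and lets you assert on the resources it contains (the hostname below is hypothetical):

  # spec/hosts/web01.example.com_spec.rb
  require 'spec_helper'

  describe 'web01.example.com' do
    # Compiles the full catalog for this node from site.pp and checks
    # the security-critical resources it ends up containing.
    it { should contain_class('security') }
    it { should contain_user('root').with_password('myspecialpasswordhash') }
  end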

Finally, as well as unit testing code, cheap virtualization technologies make it possible to stand up the entire application stack in an automated fashion and then test it. If you use continuous integration tooling to automatically spin up your environment, you can test and validate your entire build before you deploy to production, since the tooling will use your Puppet code and data for configuration. By running the same monitoring tools you use in production against this ephemeral application stack, you effectively get a full integration test. It's important to be confident that when you move to production, you will get the configurations you want.
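A minimal sketch of that approach with Vagrant (assuming a Vagrant 1.x-style Puppet provisioner; the box name and paths are illustrative, loosely mirroring the modulepath above):

  # Vagrantfile
  Vagrant.configure('2') do |config|
    config.vm.box = 'precise64'                  # hypothetical base box
    config.vm.provision :puppet do |puppet|
      puppet.manifests_path = 'manifests'        # directory containing site.pp
      puppet.manifest_file  = 'site.pp'
      puppet.module_path    = ['core/modules', 'webapps/modules']
    end
  end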

Learn More:

  • Validate your code with rspec-puppet
  • Vagrant is another great tool for continuous integration. You can check out Vagrant founder Mitchell Hashimoto's PuppetConf 2012 talk on advanced Vagrant usage.




