An adventure with Chef

30Apr13

I hear a lot of people talking about automated deployment with Chef (and its competitor Puppet, which I haven’t had the chance to try yet), so I thought I’d spend some time seeing how it would fit in with our image management platform Server3.

Don’t stray from the PATH

To get familiar with Chef, I dove straight into the online quick start guide, and ended up making a couple of 'trying too hard' errors:

  1. I began by installing Chef onto an Ubuntu VM, but when it came to the bits of the quick start that used Vagrant it became clear that I needed something that could poke and prod VirtualBox from its host environment (Windows). I went back to square one and installed Chef (and Vagrant) onto the base OS.
  2. My second mistake was installing into non-default directories. I find it pretty offensive when stuff wants to go into c:\, and the tidy freak in me likes to put stuff that doesn’t go straight into c:\program files (or c:\program files (x86)) into subdirectories like c:\dev or c:\temp (depending on how long I expect to keep it). Chef did not like being in c:\dev\chef – none of the scripts worked. When I looked closely, all of the scripts were hard-coded to c:\chef. An automated installation system that can’t even install itself cleanly is hardly confidence inspiring. I ended up switching to a different machine, starting from scratch, and accepting the defaults.

When I kept to the PATH the quick start worked. I had a VM that had been summoned up out of nowhere that would converge onto some recipes. The time had come to get under the hood, figure out how this thing really worked, and apply it to something of my own creation.

Architecture expectations – ruined

The published architecture for Chef is pretty much what I expected, and consists of three core components:

  1. Server – a place where configuration (expressed as recipes and roles) is stored.
  2. Nodes – machines that connect to a server to retrieve a configuration.
  3. Workstation – a machine that’s used to create and manage configurations kept on the server (using a tool called Knife).

Conceptually this is all well and good. Some configuration is created from a workstation, placed on a server, and nodes come along later and converge on their given config. Unfortunately there are a couple of holes in the story:

  1. Chef installation on a node. The best documented method for doing this is using the Knife tool (which essentially logs into the node via SSH and runs some install scripts), in which case we have (real-time) connectivity between the workstation and node before the node ever connects to a server.
    It is possible to do things differently – there are downloadable install scripts, plus descriptions of using OS packaging mechanisms and alternative approaches to getting the Chef Ruby Gem installed – but it all feels off the well-trodden path.
  2. Bootstrapping a node. A node needs more than just the base Chef client install. It needs to know which Chef server to connect to, have some credentials to authenticate itself, and know which role(s) and/or recipe(s) to configure against. Once again the usual approach seems to be to use Knife from a workstation to create the appropriate files and SCP them into place on the node. It is of course possible to get the files in place by other means.
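For concreteness, the usual workstation-driven bootstrap is a single Knife command along these lines (the hostname, SSH user, and role name here are placeholders, not values from my setup):

```shell
# Run from the workstation: knife logs into the node over SSH,
# installs the Chef client, registers the node with the server,
# and hands it a run list – all in one real-time session.
knife bootstrap node1.example.com \
  --ssh-user ubuntu \
  --sudo \
  --run-list 'role[webserver]'
```

This is exactly the real-time workstation-to-node connectivity described above: the command only works if the workstation can SSH into the node at bootstrap time.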

The dependence on a real-time relationship between the Chef workstation and a node before it even connects to a server leads me to believe that Chef is mostly being used in dev/test environments that are being driven by humans. If that’s what DevOps is then it seems like we need another name for fully automated deployment and configuration management in a production setting.

Bundling into Server3

Firstly I should explain what I was trying to achieve here… The idea was to create a VM image that would converge on a Chef role as soon as it was launched. There’s a longer term goal here of doing convergence within the Server3 image factory (but we’re not quite ready yet with the underlying metavirtualisation required for that).

I ended up creating two packages:

  1. An archive to go into /etc/chef containing my validation.pem, a client.rb pointing to my Chef server, and a first_boot.json with a run_list pointing at a role configured on the Chef server.
  2. A run-on-first-boot script consisting of the Chef install.sh with the line chef-client -j /etc/chef/first_boot.json appended to the end, so that Chef would run once installed and converge onto the defined role.
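As a sketch, the first package boils down to dropping two small files into /etc/chef alongside validation.pem. The server URL, validator name, and role below are placeholders for whatever your Chef server actually uses (and the script writes to ./chef rather than /etc/chef so it can run unprivileged as a demo):

```shell
# On a real node this would be /etc/chef; ./chef keeps the sketch unprivileged.
CHEF_DIR="${CHEF_DIR:-./chef}"
mkdir -p "$CHEF_DIR"

# client.rb: points the node at the Chef server and its validation key.
cat > "$CHEF_DIR/client.rb" <<'EOF'
chef_server_url        'https://chef.example.com'
validation_client_name 'chef-validator'
validation_key         '/etc/chef/validation.pem'
EOF

# first_boot.json: the run_list the node should converge on.
cat > "$CHEF_DIR/first_boot.json" <<'EOF'
{ "run_list": [ "role[webserver]" ] }
EOF
```

With these files in place, the second package's appended chef-client -j /etc/chef/first_boot.json invocation has everything it needs to register with the server and converge, no workstation involved.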

With those two packages in a bundle I was able to add that to a VM recipe and deploy straight to a cloud (AWS) for testing. It was nice to be able to connect to a VM after launch and find it converged and ready to run.

Next – Inception

Convergence on launch is nice, but it would be better still to launch a pre-converged machine – after all, if you’re adding machines to grow capacity then that capacity is probably needed now rather than in however long it takes to install Chef and converge on a role or recipes. This capability should be coming soon to Server3, and we’re using the label ‘inception’ to describe what happens inside the image factory – planting dreams inside the VM’s head.

Conclusion

Chef can be made to work like I expected it to, which makes it possible to have an image that converges when first launched without any human intervention. Going by the weight of documentation, this doesn’t seem to be how most people use Chef though – DevOps appears to involve having an actual developer involved in operations. We need another name for fully automated production operations.

This is cross-posted from the original on the CohesiveFT blog.


