Over the past week or so my automated build engine for OpenELEC on the Raspberry Pi hasn’t been working. XBMC has grown to a point where it will no longer build on a machine with 1GB RAM.

Normal service has now been resumed, as the good people at GreenQloud kindly increased my VM from t1.milli (1 CPU 1GB RAM) to m1.small (2 CPUs 2GB RAM). In fact I’m hoping that the extra CPU might even make the build process quicker. I have to congratulate the GreenQloud team for how easy the upgrade process was – about 3 clicks followed by a reboot of the VM. Not only are they the most environmentally friendly Infrastructure as a Service (IaaS) provider, but also one of the easiest to use – thanks guys.


Magic teabags

23May13

I’m a creature of habit, and like a cup of green and Earl Grey[1] to start my day and a Red Bush (aka Rooibos) mid afternoon. Approximately nowhere that I go has the tea that I like to drink, so I take along my own stash. This means that I often find myself asking for a cup of hot water when those around me are ordering their teas and coffees, and 99% of the time that isn’t a problem. I sometimes feel like a bit of a cheapskate in high street coffee shops, but then I think of Starbucks and their taxes and the guilt subsides.

My teabags are tasty but they’re not magic – they simply infuse hot water with a flavour I like.

easyJet sell magic teabags. For £2.50.


Here’s the magic tea bag that easyJet sold me for £2.50.

I have no idea what easyJet’s magic teabags taste like (and let’s face it, £2.50 is a lot for a cup of tea – they should taste great), but the magic isn’t in the taste. It’s in their safety properties.

easyJet teabags turn otherwise dangerous cups of scalding water into perfectly safe cups of tea.

I know this because easyJet cabin crew aren’t allowed to give me a cup of hot water any more for ‘health and safety reasons’, but they are allowed to sell me a cup of tea for £2.50. Since I asked really nicely they even sold me a cup of tea without putting the magic teabag in. I’ll assume that the magic works at a (short) distance – so it’s OK for me to have the teabag on the tray table in front of me, and not OK for it still to be on the cart making its way down the aisle.

I could accuse easyJet of perverting the cause of ‘health and safety’ to benefit their greed. In fact I did in a web survey I completed following a recent trip:

All I wanted was a cup of hot water (I carry my own tea bags as I prefer a type that is never available anywhere I go). This has never been a problem in the past on Easyjet flights, but this time the crew told me that they’re no longer allowed to serve hot water for ‘health and safety reasons’. Apparently a £2.50 teabag has the magical property of turning a cup of scalding water into something safe. The crew very kindly obliged my request to sell a cup of tea without the teabag being dunked. I got ripped off and nobody was made any safer. Blaming your corporate greed on health and safety isn’t a way to impress your customers.

I should point out that easyJet aren’t alone in this shameful practice; they’re just the first airline I’ve found doing it. I’ve also come across it at conference centres where rip-off prices are charged for beverages – ExCeL, Olympia and Earls Court, I’m looking at you.

Maybe if I keep my magic teabag I can use it again on another flight. Or does it have some sort of charge that runs out?

Notes

[1] This is as good a place as any for me to say how disappointed I am that Twinings have discontinued their superb Green and Earl Grey blend. It still gets a mention on their web site, but they stopped selling it a year or so ago. Had I known I’d have bought more than a year’s supply when I last did a bulk order – of course (like most companies) they didn’t bother to tell me (their previously loyal customer) that they were going to stop making and selling something that I’d been buying regularly for years. I have yet to perfect my own blend of Green and Earl Grey.


This post first appeared on the CohesiveFT blog.

One of the announcements that seemed to get lost in the noise at this week’s I/O conference was that Google Compute Engine (GCE) is now available for everyone.

I took it for a quick test drive yesterday, and here are some of my thoughts about what I found.

Web interface

The web UI is less bad than most of the other public clouds I’ve tried of late, but it’s nowhere near as good as AWS. I see a number of places where I think ‘that works fine now whilst I’m just playing, but I’m not going to like that when I’m using this in anger and I’ve got LOTS of stuff to manage’.
One thing I like a lot about the web interface is how well it has been connected to the REST API and gcutil command line tool. The overall effect is to give the impression ‘this is just for when you’re running with training wheels, if you’re serious about using this platform then you’ll use (or build) some grown up tools elsewhere’.

gcutil

Google have gone with their own API, which means you can’t use third party tools adapted to AWS and other popular APIs. If (as most pundits predict) Google grows to be the #2 public IaaS this won’t be a big deal as an ecosystem will grow around them. For the time being I expect the main way that people will use the API is through the gcutil command line tool. It’s very easy to get going with gcutil due to the integration with the web interface, though I do wish there were direct links from the tool guide rather than links to links (a trap for those like me that just copy links and paste into wget commands).

Access control

GCE uses OAUTH2 for access control. This is both a very clever use of standards, and a Lovecraftian horror to use.

Beware, Fluffy Cthulhu will eat your brains if you think you can just source different creds to switch between accounts

This manifests itself when you first use gcutil, where the initial invocation creates a challenge/response – paste URL into browser, authenticate, approve, paste token back into gcutil. A ~/.gcutil_auth file is then written to save you jumping through the same hoops every time. It’s possible to make the tool look elsewhere for the credentials stored in that file (and I guess equally possible to write a script to move files into and out of the default location), but the net effect is to bind a user on a local machine to an account in the cloud, which I think will be jarring to many people who are used to just sourcing creds files into environment variables as they hop between accounts (and providers).
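That script is easy enough to knock together. Here’s a minimal sketch (the ~/.gcutil_auth location comes from the tool itself; the per-account stash naming is entirely my own invention):

import os
import shutil
import sys

# Hypothetical helper: swap saved gcutil credential files in and out of the
# default location so that gcutil picks up a different account.
# Assumes creds have previously been stashed as ~/.gcutil_auth.<account>
HOME = os.path.expanduser("~")
DEFAULT = os.path.join(HOME, ".gcutil_auth")

def switch_account(account):
    stash = os.path.join(HOME, ".gcutil_auth." + account)
    if not os.path.exists(stash):
        sys.exit("No stashed credentials for '%s'" % account)
    if os.path.exists(DEFAULT):
        shutil.copy2(DEFAULT, DEFAULT + ".previous")  # keep whatever was active
    shutil.copy2(stash, DEFAULT)
    print("gcutil will now use the '%s' credentials" % account)

if __name__ == "__main__":
    switch_account(sys.argv[1])

It’s a workaround rather than a fix, because the tool still only ever sees one identity at a time.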

SSH

Google also breaks with convention over how it manages SSH keys. Most other clouds either force you to create a key pair before launching an instance, or allow the upload of the public key from a keypair you made yourself.
GCE creates a keypair for you (under its own name) the first time that you try to access an instance using SSH:
  • gcutil creates a keypair and copies the private key to ~/.ssh/google_compute_engine
  • the public key is uploaded to your project metadata as name:key_string
  • new users of ‘name’ are created on instances in the project
    • and the key_string is copied into ~/.ssh/authorized_keys on those instances
  • meanwhile gcutil sits there for 5 minutes waiting for all that to finish
    • I’ve found that the whole process is much faster than that, and in the time it takes me to convert a key to PuTTY format everything is ready for me to log into an instance (whilst gcutil is still sat there waiting).
The whole process is a little creepy, as you end up signing into cloud machines with the same local username as you’re using on whatever machine you have running gcutil. This also feels like another way that gcutil ends up binding a little too hard to a single local account.

Access control redux – multi accounts

The OAUTH2 system for creating gcutil tokens does support Google’s multiple account sign-on – allowing me to choose between my personal and work accounts.
The web interface doesn’t.
If I want to use the web interface with my work account then I have to use my browser in incognito mode (and jump through the 2FA auth every time, which is a pain).
At this stage I’m glad I’m only wrangling two GCE accounts. Any more and I’d be quickly running out of browsers (and out of luck if I was using my Chromebook).

Image management

The entire GCE image library presently fits onto a single browser page, and half of that is deprecated or deleted, so the choice of base OS is limited to Debian (6 or 7) and CentOS 6.
There are no choices for anything more than a base OS (though there are instructions for creating your own images once you’ve added stuff to a base OS).
There is no (documented) way to import an image that didn’t start out from one of the official base images.
There is no image sharing mechanism.
There is no image marketplace (or any means to protect IP within images).

Network

This is an area where it seems Google have learned from Amazon how to do things more intelligently. The network functionality is more like an Amazon Virtual Private Cloud (VPC) than the regular EC2 network. By default you get a 10.x.x.x/16 network with a gateway to the outside world and firewall rules that let instances talk to each other on that network, and SSH in from the outside.
Firewall rules apply to the network (like VPC security groups) rather than the instance (like EC2 security groups), and there’s a very flexible source/target tagging system there that can be used to describe interconnectivity.
The launch announcement says that ‘Advanced Routing features help you create gateways and VPN servers, and enable you to build applications that span your local network and Google’s cloud’, but if those features exist in the API I don’t (yet) see them exposed anywhere in the web UI.

Disks

The approach to disks is much more like Azure’s IaaS than AWS, at least in terms of default behaviour. Terminating an instance doesn’t destroy the disk underneath it, and it’s possible to leave that disk hanging around (and the meter running) then go back and attach another instance to it later. If you don’t want the disks to be persistent then that needs to be specified at launch time (or you have to delete the disk after deleting the instance).
There’s no real difference in capability here, it’s just a difference in default behaviour.

Speed

GCE feels fast compared to AWS and very fast compared to most of the other public clouds I’ve used. Launches and other actions happen quickly, and the entire environment feels responsive. I hope this isn’t a honeymoon period (like Azure IaaS storage) where everything is fine for the first few days and crumbles under load once people have the time to get onto the service (given how Google have handled the launch of GCE I’m fairly confident they won’t repeat Microsoft’s mistakes here).
I haven’t benchmarked any instances to see if machine performance is roughly equivalent to AWS instances, but I’ve heard on the grapevine that GCE has more robust performance.

Price

Pricing seems to be set at about the same level as the AWS benchmarks across instances, storage and network. GCE doesn’t seem to be competing on price (yet), but it might be offering better quality (albeit for fewer services) at the same price.
One thing that has caught people’s attention is the move to per minute billing (with a 10 minute minimum).

I’m not so sure:

Paying for a whole hour when you only tried something for a few minutes (and it didn’t work, so you start again) might be a big deal for people tinkering with the cloud. It might also matter for bursty workloads, but I think for most users the integral of their minute-hour overrun is a small number (and Google will no doubt have run the numbers to know exactly).
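A quick back-of-envelope sketch (with a made-up hourly rate, not a real GCE or AWS price) shows how small that overrun usually is:

import math

RATE_PER_HOUR = 0.10  # illustrative only, not a real price

def hourly_bill(minutes):
    # whole hours, rounded up (the AWS style model)
    return math.ceil(minutes / 60.0) * RATE_PER_HOUR

def per_minute_bill(minutes):
    # per minute with a 10 minute minimum (the GCE style model)
    return max(minutes, 10) * (RATE_PER_HOUR / 60.0)

for minutes in (7, 25, 61, 200):
    print("%4d min: hourly $%.3f, per minute $%.3f"
          % (minutes, hourly_bill(minutes), per_minute_bill(minutes)))

Unless you’re constantly starting short-lived instances the difference is pennies.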

In effect per minute billing means GCE runs at a small discount to AWS for superficially similar price cards, but I don’t see this being a major differentiator. It’s also something that AWS (and other clouds) can easily replicate.

Conclusion

There’s a lot to like about GCE. It gets the basics right, and no doubt more functionality will come with time.
I see room for improvement in the identity management pieces, but the underlying security bits are well thought out and executed.
Image management is the area most in need of attention. People are religious about their OS choices, and having one flavour from each of the big Linux camps is enough for a start but not enough for the long term. Google’s next major area for improvement has to be getting the right stuff in place for a storefront to compete with AWS Marketplace. Some people might even want to run Windows :-0

Authorization

17May13

In which I examine why XACML has failed to live up to my expectations, even if it isn’t dead (whether or not it is has been the topic of a massive blogosphere battle in recent weeks).

Some background

I was working with the IT R&D team at Credit Suisse when we provided seed funding[1] for Securent, which was one of the first major XACML implementations. My colleague Mark Luppi and I had come across Rajiv Gupta and Sekhar Sarukkai when we’d been looking at Web Services Management platforms, as their company Confluent had made our short list. The double acquisition of Confluent, by Oblix and then Oracle, set Rajiv free, and he came to us saying ‘I’m going to do a new startup but I don’t know what it is yet’. Mark had come to the realisation that authorization was an ongoing problem for enterprise applications, and we suggested to Rajiv that he build an entitlements service, with us providing a large application as the proof point.

My expectations

Back in March 2008 (when I was still at Credit Suisse) I wrote ‘Directories 2.0’, in which I laid out my hopes that XACML based authorization services would become as ubiquitous as LDAP directories (particularly Active Directory).

I also at that time highlighted an issue – XACML was ‘like LDIF without LDAP’ – that it was an interchange format without an interface. It was going to be hard for people to universally adopt XACML based systems unless there was a standard way to plug into them. Luckily this was fixed the following year by the release of an open XACML API (which I wrote about in ‘A good week for identity management’).

I’ll reflect on why my expectations were ruined toward the end of the post.

Best practices for access control

Anil Saldhana has stepped up out of the identity community’s internecine warfare about XACML and written an excellent post, ‘Authorization (Access Control) Best Practices’. I’d like to go through his points in turn and offer my own perspective:

  1. Know that you will need access control/authorization
    The issue that drove us back at Credit Suisse was that we saw far too many apps where access control was an afterthought. A small part of the larger problem of security being a non functional requirement that’s easy to push down the priority list whilst ‘making the application work’. Time and time again we saw development teams getting stuck with audit points (a couple of years after going into production) because authorization was inadequate. We needed a systematic approach, an enterprise scale service, and that’s why we worked with the Securent guys.
  2. Externalize the access control policy processing
    The normal run of things was for apps to have authorization as a table in their database, and this usually ran into trouble around segregation of duties (and was often an administrative nightmare).
  3. Understand the difference between coarse grained and fine grained authorization
    This is why I’m a big fan of threat modeling at the design stage for an application, as it makes people think about the roles of users and the access that those roles will have. If you have a threat model then it’s usually pretty obvious what granularity you’re dealing with.
  4. Design for coarse grained authorization but keep the design flexible for fine grained authorization
    This particularly makes sense when the design is iterative (because you’re using agile methodologies). It may not be clear at the start that fine grained authorization is needed, but pretty much every app will need something coarse grained.
  5. Know the difference between Access Control Lists and Access Control standards
    We’re generally trying not to reinvent wheels, but this point is about using new well finished wheels rather than old wobbly ones. I think this point also tends to relate more to the management of unstructured data, where underlying systems offer a cornucopia of ACL systems that could be used.
  6. Adopt Rule Based Access Control : view Access Control as Rules and Attributes
    This relates back to the threat model I touched upon earlier. Roles are often the wrong unit of currency because they’re an arbitrary abstraction. Attributes are something you can be more definite about, as they can be measured or assigned (there’s a toy sketch of this idea after the list).
  7. Adopt REST Style Architecture when your situation demands scale and thus REST authorization standards
    This is firstly a statement that REST has won out over SOAP in the battle of WS-(Death)Star, but is more broadly about being service oriented. The underbelly of this point is that authorization services become a dependency, often a critical one, so they need to be robust, and there needs to be a coherent plan to deal with failure.
  8. Understand the difference between Enforcement versus Entitlement model
    This relates very closely to my last point about dependency, and whether the entitlements system is an inline dependency or out of band.
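To make the rules-and-attributes idea in point 6 concrete, here’s a toy sketch (deliberately not XACML, with every attribute and threshold invented purely for illustration) of a decision made on attributes rather than on a role name:

# Toy attribute based rule: the decision is driven by measurable attributes,
# not by membership of an arbitrarily named role.
def can_approve_trade(attributes):
    return (attributes.get("department") == "fixed-income"
            and attributes.get("certified") is True
            and attributes.get("approval_limit", 0) >= attributes.get("trade_value", 0))

caller = {
    "department": "fixed-income",
    "certified": True,
    "approval_limit": 500000,
    "trade_value": 250000,
}
print(can_approve_trade(caller))  # True - decided on attributes, not a role

In a real system those rules would of course live in an externalised policy service (point 2) rather than next to the application code.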

So what went wrong?

It’s now over 5 years since I laid out my expectations, and it’s safe to say that my expectations haven’t been met. I think there are a few reasons why that happened:

  • Loss of momentum
    Prior to the Cisco acquisition Securent was one of a handful of startups making the running in the authorization space. After the acquisition Securent stopped moving forward, and the competition didn’t have to keep running to keep up. The entire segment lost momentum.
  • My app, your apps
    Entitlements is more of a problem for the enterprise with thousands of apps than it is for the packaged software vendor that may only have one. We ended up with a chicken and egg situation where enterprises didn’t have the service for off the shelf packages to integrate into, and since the off the shelf packages didn’t support entitlements services there was less incentive to buy in. Active Directory had its own killer app – the Windows Desktop, which (approximately) everybody needed anyway, and once AD was there it was natural to (re)use it for other things. Fine grained services never had their killer app – adoption always had to be predicated on in house apps.
  • Fine grained is a minority sport
    Many apps can get by with coarse grained authorization (point 4 above) so fine grained services find it harder to build a business case for adoption.
  • In house can be good enough
    When the commercial services aren’t delivering on feature requests (because the industry lost momentum), and the problem is mostly in house apps (because off the shelf stuff is going its own way) then an in house service (that isn’t standards based) might well take hold. Once there’s a good enough in house approach the business case of a 3rd party platform becomes harder than ever to make.

Conclusion

It’s been something like 9 years since I started out on my authorization journey, and whilst the state of the art has advanced substantially, the destination I envisaged still seems almost as distant as it was at the start. XACML and systems based upon it have failed to live up to my expectations, but that doesn’t mean that they’ve failed to deliver any value. I think at this point it’s probably fair to say that the original destination will never be reached, but as with many things the journey has borne many of its own rewards.

Notes

[1] Stupidly we didn’t take any equity – the whole thing was structured as paying for a prototype


I’ve been very happy with the results from my Raspberry Pi controlled water bath for sous vide cooking, but I knew that the control loop could be improved. Past runs show fairly continuous oscillation:


Roast beef run at 60C

I’ve been keeping track of the average power for my control loop, which has been coming out at 22%. So I modified the code to have a bias of 22%, and here’s the result:


Test run at 55C

Overall much more stable. The occasional hiccups are probably caused by the remote socket failing to receive off commands. There’s a 3C overshoot at the start, which I hope to have fixed by entering the control loop from initial warm up 3C earlier. Here’s the new code (also available at GitHub):

import os
from subprocess import Popen, PIPE, call
from optparse import OptionParser
from time import sleep

def tempdata():
    # Replace 28-000003ea0350 with the address of your DS18B20
    pipe = Popen(["cat","/sys/bus/w1/devices/w1_bus_master1/28-000003ea0350/w1_slave"], stdout=PIPE)
    result = pipe.communicate()[0]
    result_list = result.split("=")
    temp_mC = int(result_list[-1]) # temp in milliCelsius
    return temp_mC

def setup_1wire():
    os.system("sudo modprobe w1-gpio && sudo modprobe w1-therm")

def turn_on():
    os.system("sudo ./strogonanoff_sender.py --channel 4 --button 1 --gpio 0 on")

def turn_off():
    os.system("sudo ./strogonanoff_sender.py --channel 4 --button 1 --gpio 0 off")

# Get command line options
parser = OptionParser()
parser.add_option("-t", "--target", type = int, default = 55)
parser.add_option("-p", "--prop", type = int, default = 6)
parser.add_option("-i", "--integral", type = int, default = 2)
parser.add_option("-b", "--bias", type = int, default = 22)
(options, args) = parser.parse_args()
target = options.target * 1000
print ('Target temp is %d' % (options.target))
P = options.prop
I = options.integral
B = options.bias
# Initialise some variables for the control loop
interror = 0
pwr_cnt=1
pwr_tot=0
# Setup 1Wire for DS18B20
setup_1wire()
# Turn on for initial ramp up
state="on"
turn_on()

temperature=tempdata()
print("Initial temperature ramp up")
while (target - temperature > 6000):
    sleep(15)
    temperature=tempdata()
    print(temperature)

print("Entering control loop")
while True:
    temperature=tempdata()
    print(temperature)
    error = target - temperature
    interror = interror + error
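    # bias B (feed-forward) plus scaled proportional and integral terms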
    power = B + ((P * error) + ((I * interror)/100))/100
    print(power)
    # Make sure that if we should be off then we are
    if (state=="off"):
        turn_off()
    # Long duration pulse width modulation
    for x in range (1, 100):
        if (power > x):
            if (state=="off"):
                state="on"
                print("On")
                turn_on()
        else:
            if (state=="on"):
                state="off"
                print("Off")
                turn_off()
        sleep(1)

Sunday Roast

05May13

After some success with low temperature cooking sous vide style with my Raspberry Pi controlled water bath I had a go today at doing slow roasted pork belly[1], which came out well. After some questions on Twitter I thought I’d go through the standard recipes and techniques I use for the rest of the meal.


Some of the crackling might make it to a sandwich tomorrow

My cooking style is very scientific – keep to the right weights, measures, timings and temperatures and things will come out fine every time. Here’s how I do my roasties (roast potatoes), yorkies (Yorkshire puddings), stuffing[2] and veggies:

t-70m peel and chop potatoes – I usually allow for 6 roasties per person.

pour oil into a roasting tin for potatoes, and a little into each cup of a 12 cup yorkie tray, grease a small roasting tray for the stuffing

t-60m turn fan oven onto full power (~240C[3]), put potatoes into a pan of hot tap water and place to boil on a large gas ring

prepare batter for yorkies: 125g strong white bread flour, 235g[4] milk, 3 eggs; whisk until smooth
prepare stuffing – half a pack of paxo (~90g) with a bit less than the specified amount of water (~240g)

t-48m put roasting tin for potatoes into oven to warm

t-45m drain potatoes, shake together to fluff surface, put into roasting tin, make sure potatoes are covered in oil then place in oven

prepare veggies (usually carrots and broccoli) and place in pan of hot tap water

t-33m put yorkie tin into oven

t-30m pour yorkie batter into yorkie tin and place in oven, turn down to 180C

place stuffing onto small roasting tin

t-25m put stuffing into oven

t-20m turn on large ring gas under veggies

t-15m turn oven up to 200C

put plates and serving bowls into microwave to warm up (~5m)

t-2m make gravy using Bisto granules and water from veggies[5]

t-0 service!

Notes

[1] After scoring and rubbing salt into the skin I put a 1.2kg on-the-bone pork belly into a non-fan oven at 130C starting at t-4hrs. At t-60m it went into the hot fan oven to crisp the crackling, and at t-30m it came out to rest and make space for the yorkies. I took the crackling off and put it back in the oven with everything else.
[2] Traditionally yorkies are only served with roast beef, and stuffing is only served with poultry, but I love both (and so do the rest of the family) so we have them every time. $daughter0 insists on having mint sauce every time too.
[3] If using a regular oven then add around 20C throughout
[4] I don’t bother with liquid measures in jugs or whatever, everything gets measured by mass on a set of electronic scales
[5] Or use left over (frozen) gravy from previous pot roast


Two years ago I took my son along to Maker Faire UK in Newcastle (which is where I grew up). This year the whole family came along.

Whilst I queued with the kids for the ‘laser caper’[1] my wife went along to a talk by Clive from the Raspberry Pi Team. I can’t blame her for wanting to find out about stuff I’m so keen on from an independent (though not impartial) source. She came back very enthused, particularly about Rob Bishop’s singing jelly baby project.

The setup

The project looked ideal for a couple of reasons:

  1. It’s short and simple – something that my son could ostensibly tackle on his own
  2. Jelly babies – yummy

I set up my son’s Raspberry Pi so that it was working on a newly acquired Edimax EW-7811Un WiFi adaptor[2] in order to minimise clutter (no need for keyboard, screen etc.). I also made sure that his Raspbian was up to date (sudo apt-get update && sudo apt-get upgrade). Once jelly babies, paper clips and jumper wires were sourced from around the house he was almost ready to go. I opened up a PuTTY session to the Pi and left him to it.

Hacking like it’s 1983

The hardware bits were simple – as expected. The software was more troublesome.

I cut my teeth programming on the ZX81 and Dragon32, typing in games from magazines. These would invariably not run due to a multitude of typos causing BASIC syntax errors. As many others found out at the time, learning to debug programs is functionally equivalent to learning to program.

Not a lot has changed (and maybe that’s the point).

It’s possible to cut and paste from Rob’s PDF into Nano[3], but that doesn’t give you a working Python program. All the indentation (which Python relies on) gets stripped out, and characters like straight quotes can mysteriously change into their curly typographic equivalents.
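For example, a loop as trivial as this one only runs if the leading whitespace survives the paste; flatten it and Python throws an IndentationError before anything executes:

# runs as written; with the indentation stripped Python refuses to start
for pin in [22, 23, 24]:
    print("toggling pin %d" % pin)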

Sorting out the copy/paste errors was a good refresher on Python loops and conditionals, and highlighted key aspects of the code flow.

No sound

Before getting too seriously into hammering the Python into shape I suggested testing that the ‘song’ would play.

I ran ‘mpg321 la.mp3’, but heard nothing. Aha I thought – the Pi is trying to play to HDMI rather than the 3.5mm jack (something I’d seen before with MAME). I ran:

sudo modprobe snd_bcm2835
sudo amixer cset numid=3 1

I ran ‘mpg321 la.mp3’ again, but still heard nothing. I tried ‘aplay /usr/share/sounds/alsa/Front_Center.wav’ – that worked – so the problem was with mp3 playback.

I took a hard look at the la.mp3 file. Its file size seemed wrong. I downloaded it again – different size, but still wrong. I downloaded it with raw forced:

wget https://github.com/Rob-Bishop/RaspberryPiRecipes/blob/master/la.mp3?raw=true

I now had a file called ‘la.mp3?raw=true’, but at least it was the right size (and hence the right file). Git is an awful place to keep binaries (as I found out when trying to use it for OpenELEC builds).

mv la.mp3?raw=true la.mp3
mpg321 la.mp3

Still nothing. I conceded defeat and rebooted as suggested by the Adafruit guide ‘playing sounds and using buttons with Raspberry Pi’. It worked when the Pi came back up.

More hacking

Everything was ready now. This time ‘sudo python SingingJellyBaby.py’ would work.

It didn’t. GPIO.cleanup() was throwing errors (because it didn’t exist in earlier versions of the library).

sudo apt-get install -y python-rpi.gpio

GPIO.cleanup() still throwing errors :(

sudo apt-get install -y python-dev
wget https://pypi.python.org/packages/source/R/RPi.GPIO/RPi.GPIO-0.5.2a.tar.gz
tar -xvf RPi.GPIO-0.5.2a.tar.gz
cd RPi.GPIO-0.5.2a/
sudo python setup.py install

And now, at last, the Jelly Baby sings.
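For anyone retracing the steps, this is roughly the shape of the finished thing. It’s a minimal sketch of my own rather than Rob’s actual code, and the pin number and pull-up wiring are assumptions: squeezing the jelly baby closes the circuit on a GPIO pin, and the Pi then plays the mp3.

import subprocess
import time
import RPi.GPIO as GPIO

BUTTON_PIN = 23  # BCM numbering, chosen arbitrarily for this sketch

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

try:
    while True:
        # circuit closed through the (slightly squashed) jelly baby
        if GPIO.input(BUTTON_PIN) == GPIO.LOW:
            subprocess.call(["mpg321", "la.mp3"])
        time.sleep(0.1)
finally:
    GPIO.cleanup()  # the call that needed the newer RPi.GPIO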

To be fair…

In the course of retracing my steps to write this up it’s looking like the latest Raspbian doesn’t suffer from the GPIO.cleanup() issue. Why an updated/upgraded Raspbian tripped on this will remain a mystery (though I’m starting to suspect that I did apt-get upgrade without an apt-get update).

Conclusions

Until people get much better at putting code into PDF (the Puppet guys seem to have this one covered) kids who cut'n'paste their Python will still have a load of debugging to do.

Github is great for source, not so good for binary objects.

Jelly Babies can be needy – they have dependencies – especially operatic ones.

Notes

[1] I ended up doing my run in 9.45s, which I was told was the fastest of the weekend (as of Sunday morning), though I did just trip the last beam and lost a ‘life’.
[2] I keep forgetting the config steps for WiFi on the Pi. So here’s what I put into /etc/network/interfaces (as I can never be bothered to mess around with wpa_supplicant) – you’ll need to replace MyWiFi with your SSID and Pa55word with your WPA shared key:

auto wlan0
iface wlan0 inet dhcp
wpa-ssid "MyWiFi"
wpa-psk "Pa55word"

[3] The truly lazy can of course:

wget https://github.com/Rob-Bishop/RaspberryPiRecipes/blob/master/SingingJellyBaby.py


I hear a lot of people talking about automated deployment with Chef (and its competitor Puppet, which I haven’t had the chance to try yet), so I thought I’d spend some time seeing how it would fit in with our image management platform Server3.

Don’t stray from the PATH

To get familiar with Chef, I dove straight into the online quick start guide, and ended up making a couple of trying too hard errors:

  1. I began by installing Chef onto an Ubuntu VM, but when it came to the bits of the quick start that used Vagrant it became clear that I needed something that could poke and prod VirtualBox from its host environment (Windows). I went back to square one and installed Chef (and Vagrant) onto the base OS.
  2. My second mistake was installing onto non-default directories. I find it pretty offensive when stuff wants to go into c:\, and the tidy freak in me likes to put stuff that doesn’t go straight into c:\program files (or c:\program files (x86)) into subdirectories like c:\dev or c:\temp (depending on how long I expect to keep stuff). Chef did not like being in c:\dev\chef – none of the scripts worked. When I looked closely all of the scripts were hard coded to c:\chef – an automated installation system that can’t even install itself cleanly – hardly confidence inspiring. I ended up switching to a different machine, started from scratch, and accepted the defaults.

When I kept to the PATH the quick start worked. I had a VM that had been summoned up out of nowhere that would converge onto some recipes. The time had come to get under the hood, figure out how this thing really worked, and apply it to something of my own creation.

Architecture expectations – ruined

The published architecture for Chef is pretty much what I expected, and consists of three core components:

  1. Server – a place where configuration (expressed as recipes and roles) is stored.
  2. Nodes – machines that connect to a server to retrieve a configuration.
  3. Workstation – a machine that’s used to create and manage configurations kept on the server (using a tool called Knife).

Conceptually this is all well and good. Some configuration is created from a workstation, placed on a server, and nodes come along later and converge on their given config. Unfortunately there are a couple of holes in the story:

  1. Chef installation on a node. The best documented method for doing this is using the Knife tool (which essentially logs into the node via SSH and runs some install scripts), in which case we have (real time) connectivity between the workstation and node before the node ever connects to a server.
    It is possible to do things differently, and there are downloadable install scripts plus descriptions of using OS packaging mechanisms and alternative approaches to getting the Chef Ruby Gem installed, but it feels like this is all off the well trodden path.
  2. Bootstrapping a node. A node needs more than just the base Chef client install. It needs to know which Chef server to connect to, have some credentials to authenticate itself and know which role(s) and/or recipe(s) to configure against. Once again the usual approach seems to be to use Knife from a workstation to create the appropriate files and SCP them into place on the node. It is of course possible to get the files in place by other means.

The dependence on a real time relationship between the Chef workstation and a node before it even connects to a server leads me to believe that Chef is mostly being used in dev/test environments that are being driven by humans. If that’s what DevOps is then it seems like we need another name for fully automated deployment and configuration management in a production setting.

Bundling into Server3

Firstly I should explain what I was trying to achieve here… The idea was to create a VM image that would converge on a Chef role as soon as it was launched. There’s a longer term goal here of doing convergence within the Server3 image factory (but we’re not quite ready yet with the underlying metavirtualisation required for that).

I ended up creating two packages:

  1. An archive to go into /etc/chef containing my validation.pem, a client.rb pointing to my Chef server and a first_boot.json with a run_list pointing at a role configured on the Chef server.
  2. A run on first boot script consisting of the Chef install.sh with the line chef-client -j /etc/chef/first_boot.json appended to the end, so that Chef would run once installed and converge onto the defined role (both pieces are sketched below).
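Here’s roughly what those two pieces amount to, expressed as a little Python sketch for convenience. The server URL, validator name and role are placeholders rather than my real configuration:

import json
import os

CHEF_DIR = "/etc/chef"

# client.rb pointing at the Chef server (values are placeholders)
client_rb = """log_level        :info
log_location     STDOUT
chef_server_url  "https://chef.example.com"
validation_client_name "example-validator"
validation_key   "/etc/chef/validation.pem"
"""

# first_boot.json with the run_list naming a role on the Chef server
first_boot = {"run_list": ["role[example_role]"]}

if not os.path.isdir(CHEF_DIR):
    os.makedirs(CHEF_DIR)

with open(os.path.join(CHEF_DIR, "client.rb"), "w") as f:
    f.write(client_rb)

with open(os.path.join(CHEF_DIR, "first_boot.json"), "w") as f:
    json.dump(first_boot, f, indent=2)

# validation.pem still has to be copied into /etc/chef from the Chef server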

With those two packages in a bundle I was able to add that to a VM recipe and deploy straight to a cloud (AWS) for testing. It was nice to be able to connect to a VM after launch and find it converged and ready to run.

Next – Inception

Convergence on launch is nice, but it would be better still to launch a pre converged machine – after all if you’re adding machines to grow capacity then it’s probably needed now rather than in however long it takes to install Chef and converge on a role or recipes. This capability should be coming soon to Server 3, and we’re using the label ‘inception’ to define what happens inside the image factory – planting dreams inside the VM’s head.

Conclusion

Chef can be made to work like I expected it to, which makes it possible to have an image that converges when first launched without any human intervention. Going by the weight of documentation this doesn’t seem to be how most people use Chef though – DevOps appears to involve having an actual developer involved in their operations. We need another name for fully automated production operations.

This is cross posted from the original on the CohesiveFT blog.


There’s been a lot of Monday morning quarterbacking over last week’s shutdown of Boston during the pursuit of terror suspects. I have my own opinions about what went on, but don’t feel this is the time or place to get into that.

The point of this post is to examine whether many (or even any) of the people involved really had that much discretion. If there was a common sense path to be taken, then was it even permissible to take that path? I’ll illustrate with a personal anecdote of a terrorist attack that never was.

My last appointment in the Royal Navy was as a section officer at HMS Collingwood, which at the time was the RN school of Communications and Weapon Engineering (it is now much more besides). My day job was to manage all of the training relating to Type 22 Frigates, but one of the delights of military service is additional ‘duties’. The main duty I was expected to perform at Collingwood was Officer of the Day (OOD) – the officer responsible for the safety and discipline of everybody at the base (in the absence of the usual chain of command when everybody else packed up for the day at 4pm and headed home).

A typical day for an OOD was mostly ceremonial – Rounds (making sure that the new trainees were keeping good order), Sunset (saluting whilst somebody pulled a flag down), and Colours (saluting some more whilst somebody pulled the flag back up). The safety and security stuff was mostly taken care of by a sizeable contingent of (armed) Ministry of Defence (MoD) guards and MoD policemen who would patrol the perimeter and interior of the base and inspect the identities and vehicles of those passing in and out.

Late one night I got a call. A routine perimeter patrol had turned up something the policemen weren’t happy with. In a field next to the base (not too far from the perimeter fence) was a large flat bed truck with a huge electrical transformer on it. My presence was demanded to evaluate what was going on.

The truck was clearly out of place, off a proper road. It also had foreign plates. Attempts were made to contact the trucker who might be sleeping inside, no answer[1]. Attempts were made to call the trucking company, no answer (though it was no surprise that nobody was answering the phone somewhere in Spain during the early hours of their morning). The civilian police were contacted to see if they could scare up any information about the truck, but that was going to take hours. It soon became clear that the truck wasn’t going anywhere quick, and we weren’t learning any more about it any time soon.

The policemen were concerned that the transformer could be a disguised mortar launcher. I was certain it wasn’t. My analysis was thus:

  • As a frontline engineering officer my team was responsible for all manner of electrical conversion equipment. I knew what the real thing looked like.
  • The tubes at the centre of the transformer (that the policemen were most bothered about) were pointed straight up. Mortars have to be pointed at something.

The transformer was much larger than this, but this cutaway gives a good illustration of how a big transformer might have large pipes running through the middle

I had a decision to make. I could either:

  • Play it ‘safe’, treat this situation as a terrorist attack in progress, and evacuate the base[2].
    or
  • Use judgement based on over half a decade of training (and latterly teaching) in areas of weapon system design (manufactured and improvised), explosives and (battle) damage control to determine that I wasn’t looking at a weapon system for some terrorist plot, tell everybody to stand down, and go back to bed.

I chose to stand down. The MoD policemen kept a close eye on the truck through the night.

Nothing happened.

I think the trucker might have had a talking to when he finally emerged at first light from the back of his cab and tried to figure out where he was supposed to be going[3].

I got more than a talking to when the Base Security Officer (BSO) grabbed me after the morning Colours (pulling up the flag) ceremony. The message was clear – how dare I use my discretion and professional judgement when there was an element of risk involved. I should have evacuated the base and kicked off a major police and military operation. The cost and inconvenience were as irrelevant as my opinions. I should have erred on the side of caution.

The BSO then let me into a little secret. He was privy to some intelligence reports from a few years back (which had never been used to brief front line officers like myself) relating to an IRA plot to bomb electricity substations around London in an attempt to disrupt the capital. Aha. It was all clear then – if a terror group once tried to blow up some transformers, then we should be extra scared of transformers blowing up. How could I argue with logic like that? I thanked him for his trust and insight and returned (bewildered and late) to my desk and business as usual. A few thousand other people were already getting on with their day as usual because I’d made the wrong call.

Conclusion

I used my common sense, my judgement, my discretion and I got in trouble for it[4] – even though that type of decision was exactly why we had an Officer of the Day in the first place. I’m sure many of my colleagues would have called it differently – particularly those lacking front line experience or easily bullied by the gun toting security types. I’m also sure I made the right call based on what I saw in front of me, and what I knew. The trouble is that every incident like this becomes a lesson not just for one person on duty, but whole groups of them – discretion will get you into trouble, don’t take any risks, play it safe; and progressively we eliminate discretion – even from the hands of those that do know better.

Notes

[1] If I was a lost trucker parked in the middle of nowhere I’d probably ignore any commotion outside and hope it went away.
[2] It should be noted that this was back in the early days of the Northern Ireland peace process. After almost a decade in service I was finally allowed to wear my uniform in public without fear of being bombed or shot at, but the old habits and attitudes died hard (on both sides), and dissident terror groups were still active. Mortar launched bombs had been a frequent tool of choice for attacks against military and police bases in Northern Ireland, but had never been used on the mainland.
[3] One of my other considerations was my own recollection of my first visit to Collingwood – driving around in the dark along unlit peninsular roads with farmland on either side (in the days before mobile phones and consumer GPS) – it was all too easy to go from motorway to completely lost.
[4] I’ll never know if this episode was a ‘career limiting move’. I was already serving out my (substantial) notice period with an eye on a future life in IT management. It was therefore hard to get too bothered about a chewing out from a retired officer who chose to still wear his old uniform to work each day (even if he did happen to send his concerns up the chain of command).


Since I was so happy with the HP 650 business laptop that I got for my wife, my father in law decided to get one too.

I was surprised to find that the Windows Experience Index (WEI) was so much lower than I’d seen on my wife’s machine:

It’s no surprise that the memory benchmark is down, as I expect that a second DIMM improves overall system bandwidth, but it seems that 2D graphics are also affected. I suggested that he got another 4GB PC12800 SODIMM. It worked:

So it looks like even if you’re not using memory intensive apps, overall system performance can be improved by adding extra RAM.

I wonder if performance would be the same with 2x2GB, and whether this applies to all systems with Intel integrated graphics?