Shiva Iyer at Packt Publishing kindly sent me a review copy of Instant OpenELEC Starter. It’s an ebook with a list price of £5.99, and I was able to download .pdf and .mobi versions (with an .epub option too). It’s also available from Amazon as a paperback (£12.99) and for Kindle (£6.17).

The book is pretty short – the table of contents shows it running to just 35 pages – and it’s set at an introductory level that seems intended for new users of OpenELEC and XBMC.

It breaks down roughly into thirds:

  1. Installation (with instructions for PC and Raspberry Pi).
  2. Managing XBMC – the basics of creating content libraries for various media.
  3. Top 10 features – some slightly more advanced customisations.

If you’re after a detailed explanation of what OpenELEC is and how it’s put together, then you’ll need to look elsewhere.

I could pick holes in some of the details of the Raspberry Pi install guide, but the information is accurate enough. Overall the author, Mikkel Viager, has done a good job of explaining what’s required and how to do things.


Our elected (and unelected) officials keep getting caught with their hands in the till by investigative journalists.

The proposed remedy for this is to establish a register of lobbyists – a plan that the Chartered Institute of Public Relations (CIPR) seems to be eagerly embracing (when it’s not saying that the plan needs to be even more encompassing). I smell a rat. It’s just not normal for people to ask for more regulation of their own industry unless they have (or are trying to establish) regulatory capture.

Why politicians like the idea

A register of lobbyists will make it easy for politicians to check the credentials of those they’re speaking to. This will make it harder (more expensive and time consuming) for investigative journalists to pose as lobbyists. Newspapers are now going to have to set up and run cut-out lobby organisations (on a variety of issues, to suit the needs of future stings). This will likely preclude public interest broadcasters like the BBC from participation – building fake lobby organisations won’t be seen as a good use of TV licence payers’ money.

So this is all about stopping politicians from getting caught, and does nothing to stop politicians from being corrupt (and of course even the non-corrupt politicians don’t like people getting caught, because it makes their parties and the entire political establishment look bad).

Why the professional lobbyists like the idea

A register will be a barrier to entry. Their job is to gain access to people with limited time and bandwidth, so anything that cuts down the size of the field helps.

Why ordinary citizens should not like the idea

If the only lobbyists are professional lobbyists then our political system becomes entirely bought and paid for[1]. Amateur lobbyists and pressure groups are an essential part of the democratic process. As Tim Wu pointed out in his ORGCon keynote at the weekend – movements start with the amateurs and enthusiasts.

I was personally involved in the creation of The Coalition For a Digital Economy (Coadec) at a time when the Digital Economy Bill (now Act) was threatening to undermine the use of the Internet by many small businesses. That organisation is now well enough established that I’m sure it could step in line with any regulation of lobbyists. It’s hard to see how we’d have got from a bunch of geeks in a Holborn pub to what’s there today without the support of friendly politicians. We needed access, and regulation would be just another barrier to that access.

Conclusion

Regulating lobbyists will not prevent corruption in politics. Quite the opposite – it will make it more challenging for individual corruption to be found out, and strengthen the systemic corruption of corporate interests in politics. We all ought to get lobbying about this while we still can.

Notes

[1] Rather than mostly bought and paid for as it is today.


I came across this tweet yesterday:

[embedded tweet about fraud]

It was timely, as I was in the midst of sorting out a foreign exchange transaction that had gone wrong. I’d sent $250 to a recipient in the US, and only $230 had shown up in their account (and then their bank had charged them $12 for the privilege of receiving it). Somehow $20 had gone missing along the way.

The payments company had this to say on the matter:

I would also not be happy if $20 was missing from a transfer and I apologise for the situation.

This does, unfortunately, happen from time to time. Normally it is a corresponding bank charge charged en route, which we will refund.

I responded:

Whilst your explanation might fit something off the beaten path there’s no good reason for $20 to vanish into the ether on a well worn road like GBP/USD. My first guess would be somebody fat fingered this at some manual data entry stage (I’d like to hope that you have a straight through process, but I expect it isn’t), my second guess would be fraud.

and they’d said in return:

I assure you that our instruction was for the full amount and there is no fraudulent activity.

At the moment our payments to the USA are via the SWIFT network and as they are international cross border payments there can be correspondent banks involved that we have no control over.

So there we have it – some random correspondent bank along the payment chain treating itself to $20 is completely fine – that’s not fraud. Or maybe two banks helped themselves to $10 each? Nobody seems to know, and nobody seems to care – cost of doing business.

I wouldn’t call out international payments and foreign exchange (FX) as being ‘advanced financial instruments’, but I do know that it’s mostly a disgraceful shambles. If I add up the total fees, charges and spreads associated with this simple transaction then it comes out at almost $50, or around 20% of my transaction. That’s just utterly ridiculous for squirting a few bits from a computer in the UK to a computer in the US. It makes what the telcos charge for SMS seem reasonable (which it is not).

It’s no wonder that developing economies, and particularly small firms within developing economies, are struggling to engage in international commerce. If it’s this hard and expensive to move money along what should be the trunk road of UK/US then I dread to think what it’s like trying to do business off the beaten path (such as to or from Sub-Saharan Africa). I’m pleased to see that the World Bank is doing something about this by investing in payments companies that route around some of the greedy mouths to feed by taking advantage of low cost national payments networks (like ACH in the US, Faster Payments in the UK and corresponding systems elsewhere). Of course SWIFT still gets their pound of flesh (for the time being), but perhaps as we get better netting over that network the toll will be minimised.


Over the past week or so my automated build engine for OpenELEC on the Raspberry Pi hasn’t been working. XBMC has grown to a point where it will no longer build on a machine with 1GB RAM.

Normal service has now been resumed, as the good people at GreenQloud kindly upgraded my VM from t1.milli (1 CPU, 1GB RAM) to m1.small (2 CPUs, 2GB RAM). In fact I’m hoping that the extra CPU might even make the build process quicker. I have to congratulate the GreenQloud team for how easy the upgrade process was – about 3 clicks followed by a reboot of the VM. Not only are they the most environmentally friendly Infrastructure as a Service (IaaS) provider, but also one of the easiest to use – thanks guys.


Magic teabags

23May13

I’m a creature of habit, and like a cup of green and Earl Grey[1] to start my day and a Red Bush (aka Rooibos) mid afternoon. Approximately nowhere that I go has the tea that I like to drink, so I take along my own stash. This means that I often find myself asking for a cup of hot water when those around me are ordering their teas and coffees, and 99% of the time that isn’t a problem. I sometimes feel like a bit of a cheapskate in high street coffee shops, but then I think of Starbucks and their taxes and the guilt subsides.

My teabags are tasty but they’re not magic – they simply infuse hot water with a flavour I like.

easyJet sell magic teabags. For £2.50.


Here’s the magic tea bag that easyJet sold me for £2.50.

I have no idea what easyJet’s magic teabags taste like (and let’s face it, £2.50 is a lot for a cup of tea – they should taste great), but the magic isn’t in the taste. It’s in their safety properties.

easyJet teabags turn otherwise dangerous cups of scalding water into perfectly safe cups of tea.

I know this because easyJet cabin crew aren’t allowed to give me a cup of hot water any more for ‘health and safety reasons’, but they are allowed to sell me a cup of tea for £2.50. Since I asked really nicely they even sold me a cup of tea without putting the magic teabag in. I’ll assume that the magic works at a (short) distance – so it’s OK for me to have the teabag on the tray table in front of me, and not OK for it still to be on the cart making its way down the aisle.

I could accuse easyJet of perverting the cause of ‘health and safety’ to benefit their greed. In fact I did in a web survey I completed following a recent trip:

All I wanted was a cup of hot water (I carry my own tea bags as I prefer a type that is never available anywhere I go). This has never been a problem in the past on easyJet flights, but this time the crew told me that they’re no longer allowed to serve hot water for ‘health and safety reasons’. Apparently a £2.50 teabag has the magical property of turning a cup of scalding water into something safe. The crew very kindly obliged my request to sell a cup of tea without the teabag being dunked. I got ripped off and nobody was made any safer. Blaming your corporate greed on health and safety isn’t a way to impress your customers.

I should point out that easyJet aren’t alone in this shameful practice; they’re just the first airline I’ve found doing it. I’ve also come across it at conference centres where ripoff prices are charged for beverages – ExCeL, Olympia and Earls Court, I’m looking at you.

Maybe if I keep my magic teabag I can use it again on another flight. Or does it have some sort of charge that runs out?

Notes

[1] This is as good a place as any for me to say how disappointed I am that Twinings have discontinued their superb Green and Earl Grey blend. It still gets a mention on their web site, but they stopped selling it a year or so ago. Had I known I’d have bought more than a year’s supply when I last did a bulk order – of course (like most companies) they didn’t bother to tell me (their previously loyal customer) that they were going to stop making and selling something that I’d been buying regularly for years. I have yet to perfect my own blend of Green and Earl Grey.


This post first appeared on the CohesiveFT blog.

One of the announcements that seemed to get lost in the noise at this week’s I/O conference was that Google Compute Engine (GCE) is now available for everyone.

I took it for a quick test drive yesterday, and here are some of my thoughts about what I found.

Web interface

The web UI is less bad than most of the other public clouds I’ve tried of late, but it’s nowhere near as good as AWS. I see a number of places where I think ‘that works fine now whilst I’m just playing, but I’m not going to like that when I’m using this in anger and I’ve got LOTS of stuff to manage’.
One thing I like a lot about the web interface is how well it has been connected to the REST API and gcutil command line tool. The overall effect is to give the impression ‘this is just for when you’re running with training wheels, if you’re serious about using this platform then you’ll use (or build) some grown up tools elsewhere’.

gcutil

Google have gone with their own API, which means you can’t use third party tools adapted to AWS and other popular APIs. If (as most pundits predict) Google grows to be the #2 public IaaS this won’t be a big deal as an ecosystem will grow around them. For the time being I expect the main way that people will use the API is through the gcutil command line tool. It’s very easy to get going with gcutil due to the integration with the web interface, though I do wish there were direct links from the tool guide rather than links to links (a trap for those like me that just copy links and paste into wget commands).

Access control

GCE uses OAUTH2 for access control. This is both a very clever use of standards, and a Lovecraftian horror to use.

Beware, Fluffy Cthulhu will eat your brains if you think you can just source different creds to switch between accounts

This manifests itself when you first use gcutil, where the first invocation triggers a challenge/response – paste a URL into the browser, authenticate, approve, paste the token back into gcutil. A ~/.gcutil_auth file is then written to save you jumping through the same hoops every time. It’s possible to make the tool look elsewhere for the credentials stored in that file (and I guess equally possible to write a script to move files into and out of the default location), but the net effect is to bind a user on a local machine to an account in the cloud, which I think will be jarring to many people who are used to just sourcing creds files into environment variables as they hop between accounts (and providers).
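
If you do find yourself hopping between accounts, a small script to shuffle those files around would probably do the job. Here’s a minimal sketch of what I mean – entirely untested, the stash/switch naming is my own invention, and ~/.gcutil_auth is the only path gcutil itself actually cares about:

import os
import shutil

GCUTIL_AUTH = os.path.expanduser("~/.gcutil_auth")

def stash(name):
    # Keep a copy of the current credentials under a per-account suffix,
    # e.g. ~/.gcutil_auth.work or ~/.gcutil_auth.personal
    shutil.copyfile(GCUTIL_AUTH, GCUTIL_AUTH + "." + name)

def switch(name):
    # Drop a previously stashed per-account copy back into the default location
    stashed = GCUTIL_AUTH + "." + name
    if not os.path.exists(stashed):
        raise IOError("no stashed credentials for account '%s'" % name)
    shutil.copyfile(stashed, GCUTIL_AUTH)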

SSH

Google also breaks with convention over how it manages SSH keys. Most other clouds either force you to create a key pair before launching an instance, or allow the upload of the public key from a keypair you made yourself.
GCE creates a keypair for you (under a name of its own) the first time that you try to access an instance using SSH:
  • gcutil creates a keypair and copies the private key to ~/.ssh/google_compute_engine
  • the public key is uploaded to your project metadata as name:key_string
  • new users of ‘name’ are created on instances in the project
    • and the key_string is copied into ~/.ssh/authorized_keys on those instances
  • meanwhile gcutil sits there for 5 minutes waiting for all that to finish
    • I’ve found that the whole process is much faster than that, and in the time it takes me to convert a key to PuTTY format everything is ready for me to log into an instance (whilst gcutil is still sat there waiting).
The whole process is a little creepy, as you end up signing into cloud machines with the same local username as you’re using on whatever machine you have running gcutil. This also feels like another way that gcutil ends up binding a little too hard to a single local account.
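
That binding is easy to see if you connect to an instance programmatically rather than via gcutil – here’s a rough sketch using paramiko (which isn’t part of gcutil, you’d install it separately; the instance address below is made up):

import getpass
import os
import paramiko

# gcutil leaves the private key here, and creates a matching user on the instance
key_file = os.path.expanduser("~/.ssh/google_compute_engine")
username = getpass.getuser()  # the same local username ends up on the cloud machine

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect("198.51.100.10", username=username, key_filename=key_file)
stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read())
client.close()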

Access control redux – multi accounts

The OAUTH2 system for creating gcutil tokens does support Google’s multiple account sign-on – allowing me to choose between my personal and work accounts.
The web interface doesn’t.
If I want to use the web interface with my work account then I have to use my browser in incognito mode (and jump through the 2FA auth every time, which is a pain).
At this stage I’m glad I’m only wrangling two GCE accounts. Any more and I’d be quickly running out of browsers (and out of luck if I was using my Chromebook).

Image management

The entire GCE image library presently fits onto a single browser page, and half of that is deprecated or deleted, so the choice of base OS is limited to Debian (6 or 7) and CentOS 6.
There are no choices for anything more than a base OS (though there are instructions for creating your own images once you’ve added stuff to a base OS).
There is no (documented) way to import an image that didn’t start out from one of the official base images.
There is no image sharing mechanism.
There is no image marketplace (or any means to protect IP within images).

Network

This is an area where it seems Google have learned from Amazon how to do things more intelligently. The network functionality is more like an Amazon Virtual Private Cloud (VPC) than the regular EC2 network. By default you get a 10.x.x.x/16 network with a gateway to the outside world and firewall rules that let instances talk to each other on that network, and SSH in from the outside.
Firewall rules apply to the network (like VPC security groups) rather than the instance (like EC2 security groups), and there’s a very flexible source/target tagging system there that can be used to describe interconnectivity.
The launch announcement talks about ‘Advanced Routing features help you create gateways and VPN servers, and enable you to build applications that span your local network and Google’s cloud’, but if those features exist in the API I don’t (yet) see them exposed anywhere in the web UI.

Disks

The approach to disks is much more like Azure’s IaaS than AWS, at least in terms of default behaviour: terminating an instance doesn’t destroy the disk underneath it, and it’s possible to leave that disk hanging around (with the meter running) and then go back and attach another instance to it later. If you don’t want the disks to be persistent then that needs to be specified at launch time (or you have to delete the disk after deleting the instance).
There’s no real difference in capability here, it’s just a difference in default behaviour.

Speed

GCE feels fast compared to AWS and very fast compared to most of the other public clouds I’ve used. Launches and other actions happen quickly, and the entire environment feels responsive. I hope this isn’t a honeymoon period (like Azure IaaS storage) where everything is fine for the first few days and crumbles under load once people have the time to get onto the service (given how Google have handled the launch of GCE I’m fairly confident they won’t repeat Microsoft’s mistakes here).
I haven’t benchmarked any instances to see if machine performance is roughly equivalent to AWS instances, but I’ve heard on the grapevine that GCE has more robust performance.

Price

Pricing seems to be set at about the same level as the AWS benchmarks across instances, storage and network. GCE doesn’t seem to be competing on price (yet), but it might be offering better quality (albeit for fewer services) at the same price.
One thing that has caught people’s attention is the move to per minute billing (with 10m minimum):

I’m not so sure:

Paying for a whole hour when you tried something for a few minutes (and it didn’t work, so you start again) might be a big deal for people tinkering with the cloud. It might also be a thing for those bursty workloads, but I think for most users the integral of their minute-hour overrun is a small number (and Google will no doubt have run the numbers to know that exactly).

In effect per minute billing means GCE runs at a small discount to AWS for superficially similar price cards, but I don’t see this being a major differentiator. It’s also something that AWS (and other clouds) can easily replicate.
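
A quick back-of-the-envelope illustration of what I mean by the overrun being a small number (the hourly rate here is invented purely for illustration):

# Compare per-hour billing with per-minute billing (10 minute minimum)
# for a short-lived instance. The $0.10/hour rate is made up.
rate_per_hour = 0.10
minutes_used = 23

hours_billed = (minutes_used + 59) // 60          # rounded up to whole hours
per_hour_cost = rate_per_hour * hours_billed
billable_minutes = max(minutes_used, 10)          # GCE's 10 minute minimum
per_minute_cost = rate_per_hour * billable_minutes / 60.0

print("per-hour billing:   $%.4f" % per_hour_cost)    # $0.1000
print("per-minute billing: $%.4f" % per_minute_cost)  # $0.0383
# Real money if you churn through thousands of short-lived instances,
# a rounding error for a steady-state fleet.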

Conclusion

There’s a lot to like about GCE. It gets the basics right, and no doubt more functionality will come with time.
I see room for improvement in the identity management pieces, but the underlying security bits are well thought out and executed.
Image management is the area most in need of attention. People are religious about their OS choices, and having one flavour from each of the big Linux camps is enough for a start but not enough for the long term. Google’s next major area for improvement has to be getting the right stuff in place for a storefront to compete with AWS Marketplace. Some people might even want to run Windows :-0

Authorization

17May13

In which I examine why XACML has failed to live up to my expectations, even if it isn’t dead – whether or not it is dead being the topic of a massive blogosphere battle in recent weeks.

Some background

I was working with the IT R&D team at Credit Suisse when we provided seed funding[1] for Securent, which was one of the first major XACML implementations. My colleague Mark Luppi and I had come across Rajiv Gupta and Sekhar Sarukkai when we’d been looking at Web Services Management platforms as their company Confluent had made our short list. The double acquisition of Confluent, by Oblix and then Oracle, set Rajiv free, and he came to us saying ‘I’m going to do a new startup but I don’t know what it is yet’. Mark had come to the realisation that authorization was an ongoing problem for enterprise applications, and we suggested to Rajiv that he build an entitlements service, with us providing a large application as the proof point.

My expectations

Back in March 2008 (when I was still at Credit Suisse) I wrote ‘Directories 2.0’, in which I laid out my hopes that XACML based authorization services would become as ubiquitous as LDAP directories (particularly Active Directory).

I also at that time highlighted an issue – XACML was ‘like LDIF without LDAP’ – that it was an interchange format without an interface. It was going to be hard for people to universally adopt XACML based systems unless there was a standard way to plug into them. Luckily this was fixed the following year by the release of an open XACML API (which I wrote about in ‘A good week for identity management’).

I’ll reflect on why my expectations were ruined toward the end of the post.

Best practices for access control

Anil Saldhana has stepped up out of the identity community’s internecine warfare about XACML and written an excellent post, ‘Authorization (Access Control) Best Practices’. I’d like to go through his points in turn and offer my own perspective:

  1. Know that you will need access control/authorization
    The issue that drove us back at Credit Suisse was that we saw far too many apps where access control was an afterthought. A small part of the larger problem of security being a non-functional requirement that’s easy to push down the priority list whilst ‘making the application work’. Time and time again we saw development teams getting stuck with audit points (a couple of years after going into production) because authorization was inadequate. We needed a systematic approach, an enterprise scale service, and that’s why we worked with the Securent guys.
  2. Externalize the access control policy processing
    The normal run of things was for apps to have authorization as a table in their database, and this usually ran into trouble around segregation of duties (and was often an administrative nightmare).
  3. Understand the difference between coarse grained and fine grained authorization
    This is why I’m a big fan of threat modeling at the design stage for an application, as it makes people think about the roles of users and the access that those roles will have. If you have a threat model then it’s usually pretty obvious what granularity you’re dealing with.
  4. Design for coarse grained authorization but keep the design flexible for fine grained authorization
    This particularly makes sense when the design is iterative (because you’re using agile methodologies). It may not be clear at the start that fine grained authorization is needed, but pretty much every app will need something coarse grained.
  5. Know the difference between Access Control Lists and Access Control standards
    We’re generally trying not to reinvent wheels, but this point is about using new well finished wheels rather than old wobbly ones. I think this point also tends to relate more to the management of unstructured data, where underlying systems offer a cornucopia of ACL systems that could be used.
  6. Adopt Rule Based Access Control: view Access Control as Rules and Attributes
    This relates back to the threat model I touched upon earlier. Roles are often the wrong unit of currency because they’re an arbitrary abstraction. Attributes are something you can be more definite about, as they can be measured or assigned (there’s a toy sketch of this style just after this list).
  7. Adopt REST Style Architecture when your situation demands scale and thus REST authorization standards
    This is firstly a statement that REST has won out over SOAP in the battle of WS-(Death)Star, but is more broadly about being service oriented. The underbelly of this point is that authorization services become a dependency, often a critical one, so they need to be robust, and there needs to be a coherent plan to deal with failure.
  8. Understand the difference between Enforcement versus Entitlement model
    This relates very closely to my last point about dependency, and whether the entitlements system is an inline dependency or out of band.
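
To make point 6 a little more concrete, here’s a toy sketch of the rules-and-attributes style. It has nothing to do with any particular XACML engine – the attribute names and the rule itself are invented – but it shows why attributes beat arbitrary roles:

# Toy attribute-based rule: a trade can only be approved by someone on the
# same desk, and only up to their individual approval limit. No roles involved.
def can_approve_trade(subject, resource):
    return (subject.get("department") == "trading"
            and subject.get("desk") == resource.get("desk")
            and resource.get("notional", 0) <= subject.get("approval_limit", 0))

alice = {"department": "trading", "desk": "fx", "approval_limit": 1000000}

print(can_approve_trade(alice, {"desk": "fx", "notional": 250000}))     # True - attributes line up
print(can_approve_trade(alice, {"desk": "rates", "notional": 250000}))  # False - wrong desk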

So what went wrong?

It’s now over 5 years since I laid out my expectations, and it’s safe to say that my expectations haven’t been met. I think there are a few reasons why that happened:

  • Loss of momentum
    Prior to the Cisco acquisition Securent was one of a handful of startups making the running in the authorization space. After the acquisition Securent stopped moving forward, and the competition didn’t have to keep running to keep up. The entire segment lost momentum.
  • My app, your apps
    Entitlements is more of a problem for the enterprise with thousands of apps than it is for the packaged software vendor that may only have one. We ended up with a chicken and egg situation where enterprises didn’t have the service for off the shelf packages to integrate into, and since the off the shelf packages didn’t support entitlements services there was less incentive to buy in. Active Directory had its own killer app – the Windows Desktop, which (approximately) everybody needed anyway, and once AD was there it was natural to (re)use it for other things. Fine grained services never had their killer app – adoption always had to be predicated on in house apps.
  • Fine grained is a minority sport
    Many apps can get by with coarse grained authorization (point 4 above) so fine grained services find it harder to build a business case for adoption.
  • In house can be good enough
    When the commercial services aren’t delivering on feature requests (because the industry lost momentum), and the problem is mostly in house apps (because off the shelf stuff is going its own way) then an in house service (that isn’t standards based) might well take hold. Once there’s a good enough in house approach the business case of a 3rd party platform becomes harder than ever to make.

Conclusion

It’s been something like 9 years since I started out on my authorization journey, and whilst the state of the art has advanced substantially, the destination I envisaged still seems almost as distant as it was at the start. XACML and systems based upon it have failed to live up to my expectations, but that doesn’t mean that they’ve failed to deliver any value. I think at this point it’s probably fair to say that the original destination will never be reached, but as with many things the journey has borne many of its own rewards.

Notes

[1] Stupidly we didn’t take any equity – the whole thing was structured as paying for a prototype


I’ve been very happy with the results from my Raspberry Pi controlled water bath for sous vide cooking, but I knew that the control loop could be improved. Past runs show fairly continuous oscillation:


Roast beef run at 60C

I’ve been keeping track of the average power for my control loop, which has been coming out at 22%. So I modified the code to have a bias of 22%, and here’s the result:


Test run at 55C

Overall it’s much more stable. The occasional hiccups are probably caused by the remote socket failing to receive off commands. There’s a 3C overshoot at the start, which I hope to have fixed by switching from the initial warm-up to the control loop 3C earlier. Here’s the new code (also available at GitHub):

import os
from subprocess import Popen, PIPE, call
from optparse import OptionParser
from time import sleep

def tempdata():
    # Replace 28-000003ea0350 with the address of your own DS18B20 sensor
    pipe = Popen(["cat","/sys/bus/w1/devices/w1_bus_master1/28-000003ea0350/w1_slave"], stdout=PIPE)
    result = pipe.communicate()[0]
    result_list = result.split("=")
    temp_mC = int(result_list[-1]) # temperature in millidegrees Celsius
    return temp_mC

def setup_1wire():
    # Load the 1-Wire kernel modules for the DS18B20 temperature sensor
    os.system("sudo modprobe w1-gpio && sudo modprobe w1-therm")

def turn_on():
    # Switch the remote controlled socket (and hence the water bath heater) on
    os.system("sudo ./strogonanoff_sender.py --channel 4 --button 1 --gpio 0 on")

def turn_off():
    # Switch the remote controlled socket (and hence the water bath heater) off
    os.system("sudo ./strogonanoff_sender.py --channel 4 --button 1 --gpio 0 off")

#Get command line options
parser = OptionParser()
parser.add_option("-t", "--target", type = int, default = 55)
parser.add_option("-p", "--prop", type = int, default = 6)
parser.add_option("-i", "--integral", type = int, default = 2)
parser.add_option("-b", "--bias", type = int, default = 22)
(options, args) = parser.parse_args()
target = options.target * 1000
print ('Target temp is %d' % (options.target))
P = options.prop
I = options.integral
B = options.bias
# Initialise some variables for the control loop
interror = 0
pwr_cnt = 1
pwr_tot = 0  # pwr_cnt/pwr_tot support the average power tracking mentioned above (not used below)
# Setup 1Wire for DS18B20
setup_1wire()
# Turn on for initial ramp up
state="on"
turn_on()

temperature=tempdata()
print("Initial temperature ramp up")
while (target - temperature > 6000):
    sleep(15)
    temperature=tempdata()
    print(temperature)

print("Entering control loop")
while True:
    temperature=tempdata()
    print(temperature)
    error = target - temperature
    interror = interror + error
    # duty cycle (%) = bias + scaled proportional and integral terms
    power = B + ((P * error) + ((I * interror)/100))/100
    print(power)
    # Make sure that if we should be off then we are
    if (state=="off"):
        turn_off()
    # Long duration pulse width modulation
    for x in range (1, 100):
        if (power > x):
            if (state=="off"):
                state="on"
                print("On")
                turn_on()
        else:
            if (state=="on"):
                state="off"
                print("Off")
                turn_off()
        sleep(1)

Sunday Roast

05May13

After some success with low temperature cooking sous vide style with my Raspberry Pi controlled water bath I had a go today at doing slow roasted pork belly[1], which came out well. After some questions on Twitter I thought I’d go through the standard recipes and techniques I use for the rest of the meal.


Some of the crackling might make it to a sandwich tomorrow

My cooking style is very scientific – keep to the right weights, measures, timings and temperatures and things will come out fine every time. Here’s how I do my roasties (roast potatoes), yorkies (Yorkshire puddings), stuffing[2] and veggies:

t-70m peel and chop potatoes – I usually allow for 6 roasties per person.

pour oil into a roasting tin for potatoes, and a little into each cup of a 12 cup yorkie tray, grease a small roasting tray for the stuffing

t-60m turn fan oven onto full power (~240C[3]), put potatoes into a pan of hot tap water and place to boil on a large gas ring

prepare batter for yorkies: 125g strong white bread flour, 235g[4] milk, 3 eggs; whisk until smooth
prepare stuffing – half a pack of Paxo (~90g) with a bit less than the specified amount of water (~240g)

t-48m put roasting tin for potatoes into oven to warm

t-45m drain potatoes, shake together to fluff surface, put into roasting tin, make sure potatoes are covered in oil then place in oven

prepare veggies (usually carrots and broccoli) and place in pan of hot tap water

t-33m put yorkie tin into oven

t-30m pour yorkie batter into yorkie tin and place in oven, turn down to 180C

place stuffing onto small roasting tin

t-25m put stuffing into oven

t-20m turn on large ring gas under veggies

t-15m turn oven up to 200C

put plates and serving bowls into microwave to warm up (~5m)

t-2m make gravy using Bisto granules and water from veggies[5]

t-0 service!

Notes

[1] After scoring and rubbing salt into the skin I put a 1.2kg on-the-bone pork belly into a non-fan oven at 130C starting at t-4hrs. At t-60m it went into the hot fan oven to crisp the crackling, and at t-30m it came out to rest and make space for the yorkies. I took the crackling off and put it back in the oven with everything else.
[2] Traditionally yorkies are only served with roast beef, and stuffing is only served with poultry, but I love both (and so do the rest of the family) so we have them every time. $daughter0 insists on having mint sauce every time too.
[3] If using a regular oven then add around 20C throughout
[4] I don’t bother with liquid measures in jugs or whatever, everything gets measured by mass on a set of electronic scales
[5] Or use left over (frozen) gravy from previous pot roast


Two years ago I took my son along to Maker Faire UK in Newcastle (which is where I grew up). This year the whole family came along.

Whilst I queued with the kids for the ‘laser caper’[1] my wife went along to a talk by Clive from the Raspberry Pi Team. I can’t blame her for wanting to find out about stuff I’m so keen on from an independent (though not impartial) source. She came back very enthused, particularly about Rob Bishop’s singing jelly baby project.

The setup

The project looked ideal for a couple of reasons:

  1. It’s short and simple – something that my son could ostensibly tackle on his own
  2. Jelly babies – yummy

I set up my son’s Raspberry Pi so that it was working on a newly acquired Edimax EW-7811Un WiFi adaptor[2] in order to minimise clutter (no need for keyboard, screen etc.). I also made sure that his Raspbian was up to date (sudo apt-get update && sudo apt-get upgrade). Once jelly babies, paper clips and jumper wires were sourced from around the house he was almost ready to go. I opened up a PuTTY session to the Pi and left him to it.

Hacking like it’s 1983

The hardware bits were simple – as expected. The software was more troublesome.

I cut my teeth programming on the ZX81 and Dragon32, typing in games from magazines. These would invariably not run due to a multitude of typos causing BASIC syntax errors. As many others found out at the time, learning to debug programs is functionally equivalent to learning to program.

Not a lot has changed (and maybe that’s the point).

It’s possible to cut and paste from Rob’s PDF into Nano[3], but that doesn’t give you a working Python program. All the indentation (which Python relies on) gets stripped out, and straight quote characters can mysteriously change into their curly typographic equivalents.

Sorting out the copy/paste errors was a good refresher on Python loops and conditionals, and highlighted key aspects of the code flow.
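
The overall flow is roughly this (not Rob’s actual code – the pin number is one I’ve picked arbitrarily – but it captures the shape of the thing):

import os
import time
import RPi.GPIO as GPIO

BUTTON = 23  # BCM pin wired through the paper clips and jelly baby - pick your own

GPIO.setmode(GPIO.BCM)
GPIO.setup(BUTTON, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # pressed pulls the pin low

try:
    while True:
        if GPIO.input(BUTTON) == GPIO.LOW:   # squashing the jelly baby closes the circuit
            os.system("mpg321 la.mp3")        # play the 'song'
        time.sleep(0.1)
finally:
    GPIO.cleanup()  # tidy up the GPIO state on exit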

No sound

Before getting too seriously into hammering the Python into shape I suggested testing that the ‘song’ would play.

I ran ‘mpg321 la.mp3’, but heard nothing. Aha I thought – the Pi is trying to play to HDMI rather than the 3.5mm jack (something I’d seen before with MAME). I ran:

sudo modprobe snd_bcm2835
sudo amixer cset numid=3 1

I ran ‘mpg321 la.mp3’ again, but still heard nothing. I tried ‘aplay /usr/share/sounds/alsa/Front_Center.wav’ – that worked – so the problem was with mp3 playback.

I took a hard look at the la.mp3 file. Its file size seemed wrong. I downloaded it again – different size, but still wrong. I downloaded it with forced raw:

wget https://github.com/Rob-Bishop/RaspberryPiRecipes/blob/master/la.mp3?raw=true

I now had a file called ‘la.mp3?raw=true’, but at least it was the right size (and hence the right file). Git is an awful place to keep binaries (as I found out when trying to use it for OpenELEC builds).

mv la.mp3?raw=true la.mp3
mpg321 la.mp3

Still nothing. I conceded defeat and rebooted, as suggested by the Adafruit guide ‘playing sounds and using buttons with Raspberry Pi’. It worked when the Pi came back up.

More hacking

Everything was ready now. This time ‘sudo python SingingJellyBaby.py’ would work.

It didn’t. GPIO.cleanup() was throwing errors (because it didn’t exist in earlier versions of the library).

sudo apt-get install -y python-rpi.gpio

GPIO.cleanup() still throwing errors :(

sudo apt-get install -y python-dev
wget https://pypi.python.org/packages/source/R/RPi.GPIO/RPi.GPIO-0.5.2a.tar.gz
tar -xvf RPi.GPIO-0.5.2a.tar.gz
cd RPi.GPIO-0.5.2a/
sudo python setup.py install

And now, at last, the Jelly Baby sings.

To be fair…

In the course of retracing my steps to write this up it’s looking like the latest Raspbian doesn’t suffer from the GPIO.cleanup() issue. Why an updated/upgraded Raspbian tripped on this will remain a mystery (though I’m starting to suspect that I did apt-get upgrade without an apt-get update).

Conclusions

Until people get much better at putting code into PDF (the Puppet guys seem to have this one covered) kids who cut’n’paste their Python will still have a load of debugging to do.

Github is great for source, not so good for binary objects.

Jelly Babies can be needy – they have dependencies – especially operatic ones.

Notes

[1] I ended up doing my run in 9.45s, which I was told was the fastest of the weekend (as of Sunday morning), though I did just trip the last beam and lost a ‘life’.
[2] I keep forgetting the config steps for WiFi on the Pi. So here’s what I put into /etc/network/interfaces (as I can never be bothered to mess around with wpa_supplicant) – you’ll need to replace MyWiFi with your SSID and Pa55word with your WPA shared key:

auto wlan0
iface wlan0 inet dhcp
wpa-ssid "MyWiFi"
wpa-psk "Pa55word"

[3] The truly lazy can of course:

wget -O SingingJellyBaby.py https://github.com/Rob-Bishop/RaspberryPiRecipes/blob/master/SingingJellyBaby.py?raw=true