The proposed remedy for this is to establish a register for lobbyists – a plan that the Chartered Institute of Public Relations (CIPR) seems to be eagerly embracing (when it’s not saying that the plan needs to be even more encompassing). I smell a rat. It’s just not normal for people to ask for more regulation of their own industry, unless they have (or are trying to establish) regulatory capture.
Why politicians like the idea
A register of lobbyists will make it easy for politicians to check the credentials of those they’re speaking to. This will make it harder (more expensive and time consuming) for investigative journalists to pose as lobbyists. Newspapers will now have to set up cut-out lobby organisations (on a variety of issues to suit the needs of future stings). This will likely preclude public interest broadcasters like the BBC from participation – building fake lobby organisations won’t be seen as a good use of TV licence payers’ money.
So this is all about stopping politicians from getting caught; it does nothing to stop politicians from being corrupt (and of course even the non-corrupt politicians don’t like people getting caught, because it makes their parties and the entire political establishment look bad).
Why the professional lobbyists like the idea
A register will be a barrier to entry. Their job is to gain access to people with limited time and bandwidth, so anything that cuts down the size of the field helps.
Why ordinary citizens should not like the idea
If the only lobbyists are professional lobbyists then our political system becomes entirely bought and paid for. Amateur lobbyists and pressure groups are an essential part of the democratic process. As Tim Wu pointed out in his ORGCon keynote at the weekend – movements start with the amateurs and enthusiasts.
I was personally involved in the creation of The Coalition For a Digital Economy (Coadec) at a time when the Digital Economy Bill (now Act) was threatening to undermine the use of the Internet by many small businesses. That organisation is now well enough established that I’m sure it could step in line with any regulation of lobbyists. It’s hard to see how we’d have got from a bunch of geeks in a Holborn pub to what’s there today without the support of friendly politicians. We needed access, and regulation would be just another barrier to that access.
Regulating lobbyists will not prevent corruption in politics. Quite the opposite – it will make it more challenging for individual corruption to be found out, and strengthen the systemic corruption of corporate interests in politics. We all ought to get lobbying about this while we still can.
 Rather than mostly bought and paid for as it is today.
Filed under: politics | Leave a Comment
Tags: citizen, corruption, lobby, politician, politics, regulation
I came across this tweet yesterday:
It was timely, as I was in the midst of sorting out a foreign exchange transaction that had gone wrong. I’d sent $250 to a recipient in the US, and only $230 had shown up in their account (and then their bank had charged them $12 for the privilege of receiving it). Somehow $20 had gone missing along the way.
The payments company had this to say on the matter:
I would also not be happy if $20 was missing from a transfer and I apologise for the situation.
This does, unfortunately, happen from time to time. Normally it is a corresponding bank charge charged en route, which we will refund.
Whilst your explanation might fit something off the beaten path, there’s no good reason for $20 to vanish into the ether on a well-worn road like GBP/USD. My first guess would be that somebody fat-fingered this at some manual data entry stage (I’d like to hope that you have a straight-through process, but I expect you don’t), my second guess would be fraud.
and they’d said in return:
I assure you that our instruction was for the full amount and there is no fraudulent activity.
At the moment our payments to the USA are via the SWIFT network and as they are international cross border payments there can be correspondent banks involved that we have no control over.
So there we have it – some random correspondent bank along the payment chain treating itself to $20 is completely fine – that’s not fraud. Or maybe two banks helped themselves to $10 each? Nobody seems to know, and nobody seems to care – cost of doing business.
I wouldn’t call out international payments and foreign exchange (FX) as being ‘advanced financial instruments’, but I do know that it’s mostly a disgraceful shambles. If I add up the total fees, charges and spreads associated with this simple transaction then it comes out at almost $50, or around 20% of my transaction. That’s just utterly ridiculous for squirting a few bits from a computer in the UK to a computer in the US. It makes what the telcos charge for SMS seem reasonable (which it is not).
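For the record, the arithmetic behind that 20% comes out something like this. Only the missing $20 and the $12 receiving charge are known from the transaction itself; the split of the remaining sender-side fees and FX spread is my assumption to round out the stated total:

```python
# Rough cost model for the transfer described above. The $20 that went
# missing and the $12 receiving charge are known; the remaining
# sender-side fees and FX spread are assumed to make up the total.
sent = 250.00
arrived = 230.00
receiving_fee = 12.00            # charged by the recipient's bank
sender_fees_and_spread = 18.00   # assumed: sending fee plus FX spread

total_cost = (sent - arrived) + receiving_fee + sender_fees_and_spread
print("total cost: $%.2f" % total_cost)                  # almost $50
print("as a share of the transfer: %d%%" % round(100 * total_cost / sent))
```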
It’s no wonder that developing economies, and particularly small firms within developing economies, are struggling to engage in international commerce. If it’s this hard and expensive to move money along what should be the trunk road of UK/US then I dread to think what it’s like trying to do business off the beaten path (such as to or from Sub-Saharan Africa). I’m pleased to see that the World Bank is doing something about this by investing in payments companies that route around some of the greedy mouths to feed by taking advantage of low cost national payments networks (like ACH in the US, Faster Payments in the UK and corresponding systems elsewhere). Of course SWIFT still gets their pound of flesh (for the time being), but perhaps as we get better netting over that network the toll will be minimised.
Filed under: could_do_better, grumble | 3 Comments
Tags: banking, charges, fees, financial services, fraud, FX, international, payments, spreads
Over the past week or so my automated build engine for OpenELEC on the Raspberry Pi hasn’t been working. XBMC has grown to a point where it will no longer build on a machine with 1GB RAM.
Normal service has now been resumed, as the good people at GreenQloud kindly increased my VM from t1.milli (1 CPU 1GB RAM) to m1.small (2 CPUs 2GB RAM). In fact I’m hoping that the extra CPU might even make the build process quicker. I have to congratulate the GreenQloud team for how easy the upgrade process was – about 3 clicks followed by a reboot of the VM. Not only are they the most environmentally friendly Infrastructure as a Service (IaaS), but also one of the easiest to use – thanks guys.
Filed under: cloud, Raspberry Pi | Leave a Comment
Tags: cloud, GreenQloud, iaas, openelec, Raspberry Pi, Raspi, RPi, XBMC
I’m a creature of habit, and like a cup of green and Earl Grey to start my day and a Red Bush (aka Rooibos) mid afternoon. Approximately nowhere that I go has the tea that I like to drink, so I take along my own stash. This means that I often find myself asking for a cup of hot water when those around me are ordering their teas and coffees, and 99% of the time that isn’t a problem. I sometimes feel like a bit of a cheapskate in high street coffee shops, but then I think of Starbucks and their taxes and the guilt subsides.
My teabags are tasty but they’re not magic – they simply infuse hot water with a flavour I like.
easyJet sell magic teabags. For £2.50.
I have no idea what easyJet’s magic teabags taste like (and let’s face it, £2.50 is a lot for a cup of tea – they should taste great), but the magic isn’t in the taste. It’s in their safety properties.
easyJet teabags turn otherwise dangerous cups of scalding water into perfectly safe cups of tea.
I know this because easyJet cabin crew aren’t allowed to give me a cup of hot water any more for ‘health and safety reasons’, but they are allowed to sell me a cup of tea for £2.50. Since I asked really nicely they even sold me a cup of tea without putting the magic teabag in. I’ll assume that the magic works at a (short) distance – so it’s OK for me to have the teabag on the tray table in front of me, but not OK for it still to be on the cart making its way down the aisle.
I could accuse easyJet of perverting the cause of ‘health and safety’ to benefit their greed. In fact I did in a web survey I completed following a recent trip:
All I wanted was a cup of hot water (I carry my own tea bags as I prefer a type that is never available anywhere I go). This has never been a problem in the past on easyJet flights, but this time the crew told me that they’re no longer allowed to serve hot water for ‘health and safety reasons’. Apparently a £2.50 teabag has the magical property of turning a cup of scalding water into something safe. The crew very kindly obliged my request to sell a cup of tea without the teabag being dunked. I got ripped off and nobody was made any safer. Blaming your corporate greed on health and safety isn’t a way to impress your customers.
I should point out that easyJet aren’t alone in this shameful practice, they’re just the first airline I’ve found doing it. I’ve also come across it at conference centres where ripoff prices are charged for beverages – Excel, Olympia and Earls Court I’m looking at you.
Maybe if I keep my magic teabag I can use it again on another flight. Or does it have some sort of charge that runs out?
 This is as good a place as any for me to say how disappointed I am that Twinings have discontinued their superb Green and Earl Grey blend. It still gets a mention on their web site, but they stopped selling it a year or so ago. Had I known I’d have bought more than a year’s supply when I last did a bulk order – of course (like most companies) they didn’t bother to tell me (their previously loyal customer) that they were going to stop making and selling something that I’d been buying regularly for years. I have yet to perfect my own blend of Green and Earl Grey.
Filed under: could_do_better, grumble | 3 Comments
Tags: EasyJet, health and safety, hot water, magic, tea, tea bag, teabag
This post first appeared on the CohesiveFT blog.
I took Google Compute Engine (GCE) for a quick test drive yesterday, and here are some of my thoughts about what I found.
- gcutil creates a keypair and copies the private key to ~/.ssh/google_compute_engine
- the public key is uploaded to your project metadata as name:key_string
- new users of ‘name’ are created on instances in the project
- and the key_string is copied into ~/.ssh/authorized_keys on those instances
- meanwhile gcutil sits there for 5 minutes waiting for all that to finish
- I’ve found that the whole process is much faster than that, and in the time it takes me to convert a key to PuTTY format everything is ready for me to log into an instance (whilst gcutil is still sat there waiting).
Access control redux – multi accounts
I’m not so sure:
Paying for a whole hour when you only tried something for a few minutes (and it didn’t work, so you start again) might be a big deal for people tinkering with the cloud. It might also matter for bursty workloads, but I think for most users the integral of their minute–hour overrun is a small number (and Google will no doubt have run the numbers to know that exactly).
In effect per minute billing means GCE runs at a small discount to AWS for superficially similar price cards, but I don’t see this being a major differentiator. It’s also something that AWS (and other clouds) can easily replicate.
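To put a number on that minute–hour overrun, here’s a toy comparison. The $0.60/hour rate is made up, and real per-minute billing schemes often carry a minimum charge, which this sketch ignores:

```python
import math

def per_hour_cost(minutes_used, rate_per_hour):
    # hour-granularity billing: usage is rounded up to whole hours
    return math.ceil(minutes_used / 60.0) * rate_per_hour

def per_minute_cost(minutes_used, rate_per_hour):
    # minute-granularity billing: pay only for the minutes actually used
    return (minutes_used / 60.0) * rate_per_hour

# A five minute experiment that didn't work and got thrown away
print(per_hour_cost(5, 0.60))    # a full hour's charge
print(per_minute_cost(5, 0.60))  # a twelfth of that

# A steady 8h01m workload barely overruns - the difference stays small
print(per_hour_cost(8 * 60 + 1, 0.60))
print(per_minute_cost(8 * 60 + 1, 0.60))
```

For the tinkerer the hourly scheme costs twelve times as much; for the steady workload the gap is a rounding error, which is why per-minute billing reads as a small discount rather than a game changer.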
Filed under: cloud, CohesiveFT, review | Leave a Comment
Tags: access control, cloud, GCE, gcutil, google, iaas, identity, image management, network, performance, price, SSH, storage, UI, web
I’ve been very happy with the results from my Raspberry Pi controlled water bath for sous vide cooking, but I knew that the control loop could be improved. Past runs show fairly continued oscillation:
I’ve been keeping track of the average power for my control loop, which has been coming out at 22%. So I modified the code to have a bias of 22%, and here’s the result:
Overall much more stable. The occasional hiccups are probably caused by the remote socket failing to receive off commands. There’s a 3C overshoot at the start, which I hope to have fixed by entering the control loop from initial warm up 3C earlier. Here’s the new code (also available at GitHub):
import os
from subprocess import Popen, PIPE
from optparse import OptionParser
from time import sleep

def tempdata():
    # Replace 28-000003ea0350 with the address of your DS18B20
    pipe = Popen(["cat", "/sys/bus/w1/devices/w1_bus_master1/28-000003ea0350/w1_slave"], stdout=PIPE)
    result = pipe.communicate()[0]
    result_list = result.split("=")
    temp_mC = int(result_list[-1])  # temp in milliCelsius
    return temp_mC

def setup_1wire():
    os.system("sudo modprobe w1-gpio && sudo modprobe w1-therm")

def turn_on():
    os.system("sudo ./strogonanoff_sender.py --channel 4 --button 1 --gpio 0 on")

def turn_off():
    os.system("sudo ./strogonanoff_sender.py --channel 4 --button 1 --gpio 0 off")

# Get command line options
parser = OptionParser()
parser.add_option("-t", "--target", type=int, default=55)
parser.add_option("-p", "--prop", type=int, default=6)
parser.add_option("-i", "--integral", type=int, default=2)
parser.add_option("-b", "--bias", type=int, default=22)
(options, args) = parser.parse_args()
target = options.target * 1000
print('Target temp is %d' % (options.target))
P = options.prop
I = options.integral
B = options.bias

# Initialise some variables for the control loop
interror = 0
pwr_cnt = 1
pwr_tot = 0

# Setup 1Wire for DS18B20
setup_1wire()

# Turn on for initial ramp up
state = "on"
turn_on()
temperature = tempdata()
print("Initial temperature ramp up")
while (target - temperature > 6000):
    sleep(15)
    temperature = tempdata()
    print(temperature)

print("Entering control loop")
while True:
    temperature = tempdata()
    print(temperature)
    error = target - temperature
    interror = interror + error
    power = B + ((P * error) + ((I * interror) / 100)) / 100
    print(power)
    # Make sure that if we should be off then we are
    if (state == "off"):
        turn_off()
    # Long duration pulse width modulation
    for x in range(1, 100):
        if (power > x):
            if (state == "off"):
                state = "on"
                print("On")
                turn_on()
        else:
            if (state == "on"):
                state = "off"
                print("Off")
                turn_off()
        sleep(1)
Filed under: code, cooking, Raspberry Pi | 2 Comments
Tags: 434MHz, bias, control system, DS18B20, mains, PI, PID, python, Raspberry Pi, remote control, RPi, Sous vide, water bath
After some success with low temperature cooking sous vide style with my Raspberry Pi controlled water bath I had a go today at doing slow roasted pork belly, which came out well. After some questions on Twitter I thought I’d go through the standard recipes and techniques I use for the rest of the meal.
My cooking style is very scientific – keep to the right weights, measures, timings and temperatures and things will come out fine every time. Here’s how I do my roasties (roast potatoes), yorkies (Yorkshire puddings), stuffing and veggies:
t-70m peel and chop potatoes – I usually allow for 6 roasties per person.
pour oil into a roasting tin for potatoes, and a little into each cup of a 12 cup yorkie tray, grease a small roasting tray for the stuffing
t-60m turn fan oven onto full power (~240C), put potatoes into a pan of hot tap water and place to boil on a large gas ring
prepare batter for yorkies: 125g strong white bread flour, 235g milk, 3 eggs; whisk until smooth
prepare stuffing – half a pack of paxo (~90g) with a bit less than the specified amount of water (~240g)
t-48m put roasting tin for potatoes into oven to warm
t-45m drain potatoes, shake together to fluff surface, put into roasting tin, make sure potatoes are covered in oil then place in oven
prepare veggies (usually carrots and broccoli) and place in pan of hot tap water
t-33m put yorkie tin into oven
t-30m pour yorkie batter into yorkie tin and place in oven, turn down to 180C
place stuffing onto small roasting tin
t-25m put stuffing into oven
t-20m turn on large ring gas under veggies
t-15m turn oven up to 200C
put plates and serving bowls into microwave to warm up (~5m)
t-2m make gravy using Bisto granules and water from veggies
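Since the whole thing runs off a countdown, the timeline above can be captured as data – a throwaway sketch (task names abbreviated from the list above) that prints the schedule in order:

```python
# The countdown above, as (minutes before serving, task) pairs
schedule = [
    (70, "peel and chop potatoes"),
    (60, "oven to full power; potatoes on to par-boil"),
    (48, "roasting tin into oven to warm"),
    (45, "drain, fluff and oil potatoes; into oven"),
    (33, "yorkie tin into oven"),
    (30, "batter into yorkie tin; oven down to 180C"),
    (25, "stuffing into oven"),
    (20, "gas on under veggies"),
    (15, "oven up to 200C"),
    (2, "make gravy"),
]

# Print the timeline from furthest out to closest to serving
for minutes, task in sorted(schedule, reverse=True):
    print("t-%dm %s" % (minutes, task))
```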
 After scoring and rubbing salt into the skin I put a 1.2kg on-the-bone pork belly into a non-fan oven at 130C starting at t-4hrs. At t-60m it went into the hot fan oven to crisp the crackling, and at t-30m it came out to rest and make space for the yorkies. I took the crackling off and put it back in the oven with everything else.
 Traditionally yorkies are only served with roast beef, and stuffing is only served with poultry, but I love both (and so do the rest of the family) so we have them every time. $daughter0 insists on having mint sauce every time too.
 If using a regular oven then add around 20C throughout
 I don’t bother with liquid measures in jugs or whatever, everything gets measured by mass on a set of electronic scales
 Or use left over (frozen) gravy from previous pot roast
Filed under: cooking | Leave a Comment
Tags: crackling, pork belly, roast, roast potatoes, roasties, stuffing, Sunday, veggies, yorkies, Yorkshire pudding
Two years ago I took my son along to Maker Faire UK in Newcastle (which is where I grew up). This year the whole family came along.
Whilst I queued with the kids for the ‘laser caper’ my wife went along to a talk by Clive from the Raspberry Pi Team. I can’t blame her for wanting to find out about stuff I’m so keen on from an independent (though not impartial) source. She came back very enthused, particularly about Rob Bishop’s singing jelly baby project.
The project looked ideal for a couple of reasons:
- It’s short and simple – something that my son could ostensibly tackle on his own
- Jelly babies – yummy
I set up my son’s Raspberry Pi so that it was working on a newly acquired Edimax EW-7811Un WiFi adaptor in order to minimise clutter (no need for keyboard, screen etc.). I also made sure that his Raspbian was up to date (sudo apt-get update && sudo apt-get upgrade). Once jelly babies, paper clips and jumper wires were sourced from around the house he was almost ready to go. I opened up a PuTTY session to the Pi and left him to it.
Hacking like it’s 1983
The hardware bits were simple – as expected. The software was more troublesome.
I cut my teeth programming on the ZX81 and Dragon32, typing in games from magazines. These would invariably not run due to a multitude of typos causing BASIC syntax errors. As many others found out at the time, learning to debug programs is functionally equivalent to learning to program.
Not a lot has changed (and maybe that’s the point).
It’s possible to cut and paste from Rob’s PDF into Nano, but that doesn’t give you a working Python program. All the indentation (which Python relies on) gets stripped out, and characters like ‘ can mysteriously change into something else entirely.
Sorting out the copy/paste errors was a good refresher on Python loops and conditionals, and highlighted key aspects of the code flow.
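As a contrived illustration of the failure modes (this is not Rob’s actual code): both stripped indentation and smart quotes leave the Python parser with a SyntaxError, which is exactly what a novice then gets to debug:

```python
import ast

def is_valid_python(src):
    # Try to parse a source string; any syntax problem means False
    try:
        ast.parse(src)
        return True
    except SyntaxError:  # IndentationError is a subclass
        return False

good = "if True:\n    print('la')"   # as printed in the PDF
stripped = "if True:\nprint('la')"   # indentation lost in the paste
curly = "print(\u2018la\u2019)"      # quotes silently turned smart

print(is_valid_python(good))      # parses fine
print(is_valid_python(stripped))  # IndentationError
print(is_valid_python(curly))     # invalid character
```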
Before getting too seriously into hammering the Python into shape I suggested a test that the ‘song’ would play.
I ran ‘mpg321 la.mp3’, but heard nothing. Aha, I thought – the Pi is trying to play to HDMI rather than the 3.5mm jack (something I’d seen before with MAME). I ran:
sudo modprobe snd_bcm2835
sudo amixer cset numid=3 1
I ran ‘mpg321 la.mp3’ again, but still heard nothing. I tried ‘aplay /usr/share/sounds/alsa/Front_Center.wav’ – that worked – so the problem was with mp3 playback.
I took a hard look at the la.mp3 file. Its file size seemed wrong. I downloaded it again – different size, but still wrong. I downloaded it with forced raw:
I now had a file called ‘la.mp3?raw=true’, but at least it was the right size (and hence the right file). Git is an awful place to keep binaries (as I found out when trying to use it for OpenELEC builds).
mv la.mp3?raw=true la.mp3
Still nothing. I conceded defeat and rebooted as suggested by the Adafruit guide ‘playing sounds and using buttons with Raspberry Pi’. It worked when the Pi came back up.
Everything was ready now. This time ‘sudo python SingingJellyBaby.py’ would work.
It didn’t. GPIO.cleanup() was throwing errors (because it didn’t exist in earlier versions of the library).
sudo apt-get install -y python-rpi.gpio
GPIO.cleanup() still throwing errors :(
sudo apt-get install -y python-dev
tar -xvf RPi.GPIO-0.5.2a.tar.gz
sudo python setup.py install
And now, at last, the Jelly Baby sings.
To be fair…
In the course of retracing my steps to write this up it’s looking like the latest Raspbian doesn’t suffer from the GPIO.cleanup() issue. Why an updated/upgraded Raspbian tripped on this will remain a mystery (though I’m starting to suspect that I did apt-get upgrade without an apt-get update).
Until people get much better at putting code into PDF (the Puppet guys seem to have this one covered) kids who cut’n’paste their Python will still have a load of debugging to do.
Github is great for source, not so good for binary objects.
Jelly Babies can be needy – they have dependencies – especially operatic ones.
 I ended up doing my run in 9.45s, which I was told was the fastest of the weekend (as of Sunday morning), though I did just trip the last beam and lost a ‘life’.
 I keep forgetting the config steps for WiFi on the Pi. So here’s what I put into /etc/network/interfaces (as I can never be bothered to mess around with wpa_supplicant directly) – you’ll need to replace MyWiFi with your SSID and Pa55word with your WPA shared key:
auto wlan0
iface wlan0 inet dhcp
    wpa-ssid MyWiFi
    wpa-psk Pa55word
 The truly lazy can of course:
Filed under: Raspberry Pi | Leave a Comment
Tags: audio, dependency, github, GPIO, jelly baby, mpg321, Raspberry Pi, Raspi, Rob Bishop, RPi, wifi
I hear a lot of people talking about automated deployment with Chef (and its competitor Puppet, which I haven’t had the chance to try yet), so I thought I’d spend some time seeing how it would fit in with our image management platform Server3.
Don’t stray from the PATH
To get familiar with Chef, I dove straight into the online quick start guide, and ended up making a couple of trying-too-hard errors:
- I began by installing Chef onto an Ubuntu VM, but when it came to the bits of the quick start that used Vagrant it became clear that I needed something that could poke and prod VirtualBox from its host environment (Windows). I went back to square one and installed Chef (and Vagrant) onto the base OS.
- My second mistake was installing into non-default directories. I find it pretty offensive when stuff wants to go into c:\, and the tidy freak in me likes to put stuff that doesn’t go straight into c:\program files (or c:\program files (x86)) into subdirectories like c:\dev or c:\temp (depending on how long I expect to keep it). Chef did not like being in c:\dev\chef – none of the scripts worked. When I looked closely, all of the scripts were hard coded to c:\chef – an automated installation system that can’t even install itself cleanly is hardly confidence inspiring. I ended up switching to a different machine, starting from scratch, and accepting the defaults.
When I kept to the PATH the quick start worked. I had a VM that had been summoned up out of nowhere that would converge onto some recipes. The time had come to get under the hood, figure out how this thing really worked, and apply it to something of my own creation.
Architecture expectations – ruined
- Server – a place where configuration (expressed as recipes and roles) is stored.
- Nodes – machines that connect to a server to retrieve a configuration.
- Workstation – a machine that’s used to create and manage configurations kept on the server (using a tool called Knife).
Conceptually this is all well and good. Some configuration is created from a workstation, placed on a server, and nodes come along later and converge on their given config. Unfortunately there are a couple of holes in the story:
- Chef installation on a node. The best documented method for doing this is using the Knife tool (which essentially logs into the node via SSH and runs some install scripts), in which case we have (real time) connectivity between the workstation and node before the node ever connects to a server.
It is possible to do things differently, and there are downloadable install scripts plus descriptions of using OS packaging mechanisms and alternative approaches to getting the Chef Ruby Gem installed, but it feels like this is all off the well trodden path.
- Bootstrapping a node. A node needs more than just the base Chef client install. It needs to know which Chef server to connect to, have some credentials to authenticate itself, and know which role(s) and/or recipe(s) to configure against. Once again the usual approach seems to be to use Knife from a workstation to create the appropriate files and SCP them into place on the node. It is of course possible to get the files in place by other means.
The dependence on a real time relationship between the Chef workstation and a node before it even connects to a server leads me to believe that Chef is mostly being used in dev/test environments that are being driven by humans. If that’s what DevOps is then it seems like we need another name for fully automated deployment and configuration management in a production setting.
Bundling into Server3
Firstly I should explain what I was trying to achieve here… The idea was to create a VM image that would converge on a Chef role as soon as it was launched. There’s a longer term goal here of doing convergence within the Server3 image factory (but we’re not quite ready yet with the underlying metavirtualisation required for that).
I ended up creating two packages:
- An archive to go into /etc/chef containing my validation.pem, a client.rb pointing to my Chef server and a first_boot.json with a run_list pointing at a role configured on the Chef server.
- A run-on-first-boot script containing the Chef install.sh with the line chef-client -j /etc/chef/first_boot.json appended to the end, so that Chef would run once installed and converge onto the defined role.
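For anyone wanting to replicate this, the two small config files looked roughly like the following sketch. The server URL, validator name and role are placeholders – your Chef server dictates the real values, and validation.pem comes from the server itself:

```
# /etc/chef/client.rb - points the node at the Chef server
log_level        :info
chef_server_url  "https://chef.example.com"
validation_client_name "chef-validator"

# /etc/chef/first_boot.json - the run list to converge on
{ "run_list": [ "role[my_role]" ] }
```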
With those two packages in a bundle I was able to add that to a VM recipe and deploy straight to a cloud (AWS) for testing. It was nice to be able to connect to a VM after launch and find it converged and ready to run.
Next – Inception
Convergence on launch is nice, but it would be better still to launch a pre-converged machine – after all, if you’re adding machines to grow capacity then it’s probably needed now rather than in however long it takes to install Chef and converge on a role or recipes. This capability should be coming soon to Server3, and we’re using the label ‘inception’ to define what happens inside the image factory – planting dreams inside the VM’s head.
Chef can be made to work like I expected it to, which makes it possible to have an image that converges when first launched without any human intervention. Going by the weight of documentation this doesn’t seem to be how most people use Chef though – DevOps appears to involve having an actual developer involved in their operations. We need another name for fully automated production operations.
This is cross posted from the original on the CohesiveFT blog.
Filed under: cloud, CohesiveFT, technology | Leave a Comment
Tags: aws, Chef, development, DevOps, image automation, image management, Inception, Puppet, Server3