DockerCon #2 is underway, version 1.7.0 of Docker was released at the end of last week, and lots of other new toys are being launched. Time for some upgrades.

I got used to Docker always restarting containers when the daemon restarted (which included upgrades), but that behaviour went away around version 1.3.0 with the introduction of the new --restart policies.

Here’s a little script to automate upgrading and restarting the containers that were running:

# Snapshot the running containers before upgrading
datenow=$(date +%s)
sudo docker ps > /tmp/docker."$datenow"
# Upgrade Docker (this restarts the daemon, stopping the containers)
sudo apt-get update && sudo apt-get install -y lxc-docker
# Restart the containers that were running (skip the header row, take the short IDs)
sudo docker start $(tail -n +2 /tmp/docker."$datenow" | cut -c1-12)
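The restart line leans on the column layout of `docker ps` output, so here's how that pipeline behaves on some sample output (the container IDs and images below are invented for illustration):

```shell
# Simulate saved `docker ps` output with invented container IDs and images
cat > /tmp/docker.sample <<'EOF'
CONTAINER ID   IMAGE          COMMAND       CREATED       STATUS       PORTS      NAMES
d6c45ffb811a   nginx:latest   "nginx -g"    2 hours ago   Up 2 hours   80/tcp     web
9f2e01ab34cd   redis:latest   "redis-serv"  3 hours ago   Up 3 hours   6379/tcp   cache
EOF

# tail -n +2 drops the header row; cut -c1-12 keeps the 12-character short ID
tail -n +2 /tmp/docker.sample | cut -c1-12
```

If you're on a recent version you can sidestep the column arithmetic entirely with `docker ps -q`, which prints just the short IDs.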

I also ran into some problems with Ubuntu VMs where I'd installed from the old repos, which have now moved to get.docker.com.

I needed to change /etc/apt/sources.list.d/docker.list from:

deb http://get.docker.io/ubuntu docker main

to:

deb https://get.docker.com/ubuntu docker main

The switch to HTTPS also meant I needed to:

sudo apt-get install apt-transport-https

BanyanOps have published a report stating that ‘Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities’, which include some of the sensational 2014 issues such as Shellshock and Heartbleed. The analysis also looks at user-generated ‘general’ repositories and finds an even greater level of vulnerability. Their conclusion is that images should be actively screened for security issues and patched accordingly.

continue reading the full story at InfoQ

Official Images with Vulnerabilities

Yesterday I delivered a tutorial as part of the Open Network Users Group (ONUG) Academy:

To go through the tutorial yourself you’ll need an AWS account and an SSH client (and the Internet access and browser you’re using to read this).

To complement the slides there’s a wiki on GitHub with all of the relevant command line snippets and explanations of what’s going on. The materials are licensed under Creative Commons Attribution-ShareAlike 4.0, so please feel free to make derivatives, just please give me some attribution for creating this (it took a lot of work).

If you like this then you might also want to take a look at the Cloud Networking Workshop I did for the Open Data Center Alliance (ODCA) Forecast event last year.

At last week’s Ignite conference Microsoft announced a set of new networking capabilities for its Azure cloud described as being ‘for a consistent, connected and hybrid cloud’. The new capabilities include improvements to ExpressRoute, Azure’s Internet bypass offering, availability of ExpressRoute for SaaS offerings such as Office 365 and Skype for Business, additional VPN capabilities and enhancement of virtual networks in Azure’s IaaS.

continue reading the full story at InfoQ


Docker Inc have worked with the Center for Internet Security (CIS) to produce a benchmark document [pdf] containing numerous recommendations for the security of Docker deployments. The benchmark was announced in a blog post ‘Understanding Docker Security and Best Practices’ by Diogo Mónica, who was recently hired along with Nathan McCauley to lead the Docker Security team. The team have also released an ‘Introduction to Container Security’ [pdf] white paper.

continue reading the full story at InfoQ

For the last few years the fantastic chaps at GreenQloud have been hosting my automated builds for OpenELEC. Sadly (for me) their business is shifting from running a cloud to selling their ‘QStack’ cloud platform to others, so GreenQloud are shutting down their IaaS (so that they’re not competing with their customers).

I’m pleased to say that the guys at Bytemark’s BigV cloud have stepped in to offer hosting, so the PiChimney site is now proudly wearing their banner:

NAT in the hat



Whilst on vacation in Spain I’ve found networks that seem to be like something out of a Cory Doctorow novel – domestic WiFi routers hanging off domestic WiFi routers hanging off domestic WiFi routers. At first I thought it was my Airbnb host being cheap and having a cosy arrangement with a neighbour to provide Internet, but it’s much more systematic than that.


Six routers deep

Here’s a traceroute from my laptop:

Tracing route to [] over a maximum of 30 hops:

1 3 ms 1 ms 1 ms
2 6 ms 7 ms 5 ms . []
3 9 ms 8 ms 7 ms
4 963 ms 940 ms 697 ms
5 368 ms 464 ms 159 ms homestation.Home []
6 685 ms 728 ms 769 ms
7 * * * Request timed out.
8 1580 ms 658 ms 588 ms []
9 * * * Request timed out.
10 3538 ms 2147 ms 1566 ms []
11 723 ms 397 ms 877 ms
12 1975 ms 1198 ms 1047 ms
13 865 ms 431 ms 425 ms []

Trace complete.

That’s six different routers on RFC1918 class C private networks (and a lot of latency) before I hit the Internet proper. It’s also a whole ton of NATing and way too much potential flakiness. At its best I’ve seen 2Mb/s, but in reality it seems lucky when packets get through at all, and amazing when Skype works[1].
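Counting the private hops is easy to script. Here's a quick sketch that tallies RFC1918 addresses in traceroute output; the addresses below are invented for illustration, since the real ones aren't shown above:

```shell
# Invented traceroute output for illustration
trace='1    3 ms   192.168.1.1
2    6 ms   192.168.0.1
3    9 ms   10.0.0.1
4  963 ms   192.168.2.1
5  368 ms   172.16.0.1
6  685 ms   192.168.100.1
8  658 ms   83.56.10.99'

# Count hops whose address falls in an RFC1918 private range
# (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16)
private=$(printf '%s\n' "$trace" \
  | grep -Ec '(^|[^0-9.])(10\.|192\.168\.|172\.(1[6-9]|2[0-9]|3[01])\.)')
echo "$private"   # prints 6
```

The leading `(^|[^0-9.])` stops the pattern matching `10.` in the middle of a public address like 83.56.10.99.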

I’ll try to unpick what’s going on at each router in turn…

Router 1

The router in the house I’m renting is my old friend the TP-Link TL-WR841N. I have one of these at home running OpenWRT, but the one here has the (awful) stock firmware on it. Luckily the admin password hasn’t been changed, which came in handy when it needed some help reconnecting after a long power cut.


The WAN link of the router is connected to a TP-Link powerline adaptor


At first I thought this was connected through to a neighbour, but that was because I was looking downstairs for an ADSL modem or similar that wasn’t there. When I looked upstairs (in the laundry room) I found its twin attached to a Ubiquiti Power over Ethernet coupler:


and that was for a WiFi antenna mounted on the roof:



A quick detour to Solyaires Internet

I didn’t set the system up, and I don’t pay the bill, but my research would suggest that it’s connected to Solyaires Internet or some similar system for distributing Internet via 5GHz WiFi connections. So instead of a community effort where people create a mesh network to share, this seems to be a commercial endeavour (and it’s not a mesh – more like a spider’s web).

One amusing thing I’ve noticed is that my in-laws’ apartment building (which is miles away from the house I’m renting) has exactly the same egress IP onto the Internet. Here’s their tracert (oddly despite fewer layers of router they get much worse bandwidth):

Tracing route to [] over a maximum of 30 hops:

1 <1 ms <1 ms 1 ms
2 1 ms 1 ms <1 ms homestation []
3 107 ms 56 ms 42 ms
4 * * * Request timed out.
5 62 ms 72 ms 102 ms []
6 * * * Request timed out.
7 160 ms 142 ms 136 ms []
8 67 ms 59 ms 58 ms
9 58 ms 65 ms 60 ms
10 57 ms 59 ms 59 ms google []

Trace complete.

Router 2

The second router along is a Belkin F7D1301, which judging by the Amazon reviews is a very ordinary router indeed. It has no password set, so the admin interface is wide open, which is obviously a terrible idea from a security perspective. My best guess as to what’s going on here is that the WiFi distribution outfit use some of their customers as Internet mules, acting as a relay from one point to the next. It’s pretty shocking how amateurish the setup is though.

Router 3

The third router doesn’t have an open admin interface. Looking at its response headers I see a Boa 0.93.15 web server, which could suggest a Zyxel/Edimax piece of kit (which might be a full router, or might be some sort of ‘range extender’). That web server is susceptible to a basic authentication bypass exploit, but I wasn’t feeling nefarious enough to pwn it (this was a look but don’t touch exercise). The basic auth prompt was ‘Graham-New’ so I suspect it’s a wise home user (another relay mule?) rather than something professionally configured.

Router 4

This one has an airOS admin screen implying something from Ubiquiti networks, and likely kit that’s run by an actual service provider rather than sat in somebody’s home.

Routers 5&6

Neither of these had web admin screens on ports 80 or 443 so I have less to go on (but at least they’re somewhat secure)[2].

The homestation.Home entry implies that we’re back to consumer ADSL gear, and my best guess is that the WiFi connections are being back-hauled by a bunch of consumer grade ADSL links.

The final 192.168.x.y router might just be the local telco being awful and aggregating many ADSL connections onto one public IP.

Part of a broader broadband problem?

I asked a friend who lives and works in Spain about her experiences, and she said ‘it’s unreliable, it’s slow, and the telephone companies are from the last century’. Flicking through local papers I also see that WiFi delivery is a pretty normal offering, and priced in line with ADSL services at around €24/month.

Whilst here I’ve been lucky enough to see Spain included in Three’s ‘Feel at Home’ roaming deal, which means I’ve also been able to check out 3G service. The 3G I’m getting is pretty typical of a mobile service – when it’s good it’s OK (~1Mb/s), when it’s bad it’s not there at all.

In general I’d say that the house WiFi and 3G are about on par in terms of bandwidth and reliability – good enough for keeping up with what’s going on in the world beyond, but not so good that I’d want to depend on it for any kind of business use.


Something must be very wrong with the Internet connectivity market in the Costa Tropical (and perhaps Spain more generally) for this type of arrangement to be tolerable (never mind commonplace).  I’ve been visiting Almuñécar for many years now, and back in the early days the ADSL provision seemed to be much the same as back home in the UK. I get the feeling that the FTTC connection I have at home now would be considered enough to serve hundreds of properties. It’s been great to see investment in infrastructure like roads over the past decade, but it’s a shame that the technology infrastructure hasn’t had the same attention.


[1] I got so sick of large downloads from my home network failing that I’ve lashed up a combination of autossh and bittorrent so that it will download things eventually, and I don’t have to burden the network (and my mouse finger) with redoing the same failed file time and time again.
[2] I’m guessing that the homestation in the path from my in-laws’ was a different one, as in addition to being ‘homestation’ rather than ‘homestation.Home’ in the traceroute it also serves up an admin GUI over the web.


Apple and Google have both launched laptops in the past few days that are both amazing and seriously flawed. If only somebody could make a machine that has the best of both worlds.



The leaks were pretty much spot on, so in the end the new MacBook brought few surprises. I really want a small, light, robust laptop with a decent battery life, so it looks almost ideal.

Why the MacBook is wrong for me

8GB max RAM – it’s barely enough to run a busy browser, and certainly doesn’t have the headroom for running a few VMs for test/demo purposes. I’ve had a laptop with 16GB RAM for two years now, and I’m really not willing to downsize.

I could live with the small(ish) SSD, the low powered processor and the lack of ports, but the lack of RAM is the deal breaker for me. I know that the mainboard is smaller than a Raspberry Pi, but RAM doesn’t take that much space.

Can it be fixed?

No – not unless Apple decide to squeeze in the extra memory, and I rate the chances of that happening within the product life-cycle at approximately zero.



The original Pixel was an enigma to me – too high end for the ChromeOS that it runs, but not high end enough to really distinguish itself. The Pixel2 seems different – it’s so high end that it stands out on the merits of the hardware. i7 processor, 16GB RAM, 12″ screen (I really don’t care that it’s a touchscreen) – we’re certainly headed in the right direction here.

Why the Pixel2 is wrong for me

ChromeOS – I may joke that any desktop OS is just a bootloader for Chrome, and that’s almost true, but not true enough. Even though this machine has the memory to run local VMs it doesn’t have the OS to do that. Not having Skype is also a major issue for me.

Puny SSD – cloud services are great when you have connectivity, which rules out a lot of the time when I actually want a small and light laptop – like when I’m on planes, trains etc. Of course even if the OS problem can be solved, 64GB doesn’t leave much space for VM images. When it’s possible to get (reasonably priced) tiny 1TB SSDs it’s such a shame that they’re not an option.

Can it be fixed?

Possibly – I’ve not seen a detailed tear down yet to establish how SSD is done in the Pixel2, and whether the tiny original one can be upgraded to something more suitable. I have greater confidence in the OS side of things, as I’ve seen the Linux community do a good job of porting things onto previous Chromebooks.

Update [19 Mar 2015] – David Radkowski let me know that the SSD is soldered onto the motherboard, so although I’d expect the OS piece to be fixable the lack of storage is pretty much a show stopper. Whilst it’s possible to get huge capacity SD cards these days for add on storage, I wouldn’t want to be running VMs off them.

A quick diversion to USB-C

It’s interesting to note that both of these laptops use USB-C for power and other purposes.

Many Mac fanboys seem to be disgusted at the decision to replace magsafe with USB-C – just think of all that shiny new stuff that’s going to fall victim to clumsy idiots tripping over power cables. There’s also a loud conspiracy theory that it’s all about selling lots of expensive proprietary dongles.

Google is doing a much better job of talking calmly about USB-C being a new industry standard.

With the ability to carry 100W of power it seems that USB-C will soon be pretty much everywhere, and I like the idea of commodity chargers, video adaptors etc. I also like the idea that I can top up my laptop from the same battery pack I might use for my phone or tablet.

If it was just Apple going down the USB-C road then that would be a problem, but the fact that these two new laptops from such different stables were released in the same week and headed in the same direction gives me some confidence that USB-C is here to stay. Far from being a scam, it’s something with real potential to deliver better value and convenience. Just don’t trip over the cable.

Google have done a better job here by having USB-C on both sides to allow charging and monitor attachment at the same time, and it also helps that they have some conventional USB3 ports, but then they did have more volume to play with. I’d note that when I last bought a laptop the MacBook Air lost points on the number of bits and bobs I’d need to carry around to support it – I was thinking about total travel volume and weight, not just the machine.

What would work for me

A MacBook with 1TB SSD and 16GB RAM – just take my money.

An i7 16GB Pixel2 with 1TB SSD and Ubuntu – likewise.

A Canonical badged i7 16GB Pixel2 clone with 1TB SSD – YES PLEASE.

Both of these machines are tantalisingly close to being perfect – just a couple of spec tweaks and I’d be ready to buy. So who’s going to exploit the me shaped gap they’ve left in the market? Lenovo, HP, Dell and Toshiba might all have been contenders in earlier days, but I feel it’s more likely to be Samsung, Acer or Asus, perhaps even Xiaomi that will get the joke this time around.

Or maybe I’m just part of some pinnacle IT clique that’s too small to be worth marketing to, and I’ll be stuck with my 16GB Lenovo X230 (with its 1TB SSD) for the rest of eternity?

Last week Jérôme Petazzoni did an excellent (abbreviated) version of his ‘Deep dive into Docker storage drivers‘ at the London Docker Meetup. If I wasn’t convinced enough by that, Jessie Frazelle hammered home the same point in her QCon Presentation – AUFS is where it used to be at, and the new king is Overlay. I set about trying it out for myself during Jessie’s presentation, and as I couldn’t find a simple guide I’m writing one here.

3.18 Kernel

OverlayFS has been in Ubuntu kernels for some time, but that’s not what we want. Overlay (without the FS) is a different kernel module, so you’ll need to install the 3.18 (or later) kernel:

cd /tmp/
# First download the 3.18 headers and image .debs for your architecture
# from Ubuntu's mainline kernel builds, then install them:
sudo dpkg -i linux-headers-3.18.0-*.deb linux-image-3.18.0-*.deb

I’ve tested this myself on Ubuntu 14.04 and 12.04.

Docker bits

You’ll need Docker 1.4 or later (I tested on 1.5), which can be installed using the usual Ubuntu instructions if you don’t already have it.

After rebooting to get the new kernel it’s now necessary to set ‘-s overlay’ in the DOCKER_OPTS within /etc/default/docker:

# Use DOCKER_OPTS to modify the daemon startup options.
DOCKER_OPTS="-s overlay"
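If you're scripting the change, a sed one-liner does the job. This sketch operates on a temp copy rather than the real file; point it at /etc/default/docker (with sudo) on an actual host:

```shell
# Work on a temp copy of the defaults file for this sketch
conf=$(mktemp)
printf '# Docker defaults\nDOCKER_OPTS=""\n' > "$conf"

# Replace any existing DOCKER_OPTS line with the overlay setting
sed -i 's/^DOCKER_OPTS=.*/DOCKER_OPTS="-s overlay"/' "$conf"
grep '^DOCKER_OPTS' "$conf"
```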

Restart the Docker service, and if all is well you should get the following output from ‘docker info’:

$ sudo docker info
Containers: 0
Images: 0
Storage Driver: overlay
 Backing Filesystem: extfs
Execution Driver: native-0.2
Kernel Version: 3.18.0-031800-generic
Operating System: Ubuntu 14.04.1 LTS

I seem to recall needing ‘modprobe overlay’ on 12.04 to get things working. I’d also note the bad news that Docker falls back to DeviceMapper (rather than AUFS) if it can’t make Overlay work.
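As a sanity check before restarting the daemon, something like this will tell you whether the overlay module is there at all (a sketch; modprobe itself still needs root):

```shell
# Check (without root) whether the overlay module is available to this kernel.
# /proc/filesystems only lists overlay once the module has been loaded,
# so modinfo is tried first to cover the not-yet-loaded case.
if modinfo overlay >/dev/null 2>&1 || grep -qw overlay /proc/filesystems; then
  overlay_status="available"
else
  overlay_status="missing"
fi
echo "overlay module: $overlay_status"
```

If it reports missing, run ‘sudo modprobe overlay’ and check again; if that fails you’re probably not on a 3.18+ kernel.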


[1] Justin Cormack pointed me at this article ‘Another union filesystem approach’.

I’ve modified my automated build system for OpenELEC so that it now creates RPi2 builds in addition to regular old RPi builds.

