Back in March I wrote about Using Overlay file system with Docker on Ubuntu – those instructions applied to Ubuntu releases before the switch to systemd, e.g. 14.04 and earlier.

The move to systemd means that changes to /etc/default/docker don’t have any effect any more.

Getting systemd to dance along to our tune needs a file like this:

/etc/systemd/system/docker.service.d/overlay.conf

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --storage-driver=overlay

To make this work use the following script (or get it from the gist to avoid silly copy/paste replacement of < with &lt;):


sudo mkdir /etc/systemd/system/docker.service.d
sudo bash -c 'cat <<EOF > /etc/systemd/system/docker.service.d/overlay.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --storage-driver=overlay
EOF'
sudo systemctl daemon-reload
sudo systemctl restart docker
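If the daemon fails to come back after the restart, one quick sanity check (my addition, not part of the original gist) is whether the running kernel has overlay support registered:

```shell
# Check whether the overlay filesystem is registered with the kernel;
# if it isn't, 'sudo modprobe overlay' should load the module
if grep -q overlay /proc/filesystems; then
  echo "overlay supported"
else
  echo "overlay not registered yet"
fi
```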

Now when you run ‘sudo docker info’ you should see something like:


...
Storage Driver: overlay
 Backing Filesystem: extfs
...

At least you didn’t need to upgrade the kernel this time – small mercies.

NB this is somewhat (and currently inaccurately) documented in Control and configure Docker with systemd – I can feel a PR coming on.


“Installation is a Software Hate Crime” – Pat Kerpan, then CTO of Borland circa 2004.

Today’s hate – meta installers

For some time I’ve noticed a trend where a downloaded installer, rather than being the 20MB or so for the actual thing I’m installing, is just a 100KB bootstrap that then goes off and downloads the real installer from the web.

I’ve always hated this, because if I’m installing an app on multiple computers then it means waiting around for multiple downloads (rather than just running the same thing off a share on my NAS). This is why most good stuff comes with an option for an ‘offline’ installer intended for use behind enterprise firewalls.

One possibly good reason for this approach is that you always get the latest version. That however is a double-edged sword – what if you don’t actually want the latest version? What if version 10.5 is an adware-laden, antivirus-triggering turd (I’m looking at you DivX) and you want to go back to version 10.2.1 that actually worked? Bad luck – you don’t actually have the installer for 10.2.1.

Yesterday’s great hope – virtual appliances

When I talked to Pat a few years later about the concept of virtual appliances he figured out straight away that they would ‘solve the installation problem’. Furthermore he went and built Elastic Server on Demand (ESoD) to help people make and deploy virtual appliances.

Virtual appliances are still hugely useful, as anybody launching an Amazon Machine Image (AMI) out of the AWS Marketplace or using Vagrant will tell you. Actually those people won’t tell you – using virtual appliances has become so much a part of regular workflows that people don’t even think of using ‘virtual appliances’ and the terminology has all but disappeared from regular techie conversation.

Virtual appliances had a problem though – there were just so many different target environments – every different cloud, every different server virtualisation platform, every different desktop virtualisation platform, every different flavour of VMware. Pat and the team at Cohesive bravely tried to fight that battle with brute force, but thing sprawl eventually got the better of them.

Today’s salvation – Docker

Build, Ship, Run is the Docker mantra.

  • Build – an image from a simple script (a Dockerfile)
  • Ship – the image (or the Dockerfile that made it) anywhere that you can copy a file
  • Run – anywhere with a reasonably up to date Linux Kernel (and subsequently Windows and SmartOS)
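The three steps can be sketched end to end with a trivial image (a hypothetical example of my own, not from the post – the base image and names are assumptions):

```shell
# Build – a Dockerfile is just a simple script describing the image
# (hypothetical example; base image and command are my own choices)
cat > Dockerfile <<'EOF'
FROM debian:jessie
CMD ["echo", "hello from a container"]
EOF
```

With Docker installed it’s then `docker build -t hello .` to build, `docker push` (or just copying the Dockerfile) to ship, and `docker run --rm hello` to run it anywhere with a suitable kernel.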

A Docker image is a virtual appliance in all but name[1], but the key is that there’s only one target – the Linux Kernel – so no thing sprawl to deal with this time[2].

The most important part of this is Docker Hub – a place where people can share their Docker images. This saves me (and everybody else using it) from having to install software.

If I want to learn Golang then I don’t have to install Go, I just run the Go image on Docker Hub.

If I want to use Chef then I don’t have to install it, I just run the Chef image on Docker Hub.

If I want to run the AWS CLI, or Ansible, or build a kernel for WRTNode[3] then I don’t have to Yak shave an installation, I can just use one that’s already been done by somebody else.

Docker gives you software installation superpowers

Docker gives you software installation superpowers, because the only thing you need to install is Docker itself, and then it’s just ‘docker run whatever’.



This is why I’ll probably not ever get myself another PC running Windows, because I’ve done enough installing for this lifetime.

Notes

[1] Docker images and Xen images are basically just file system snapshots without kernels, so there’s precious little to tell between them.
[2] This isn’t strictly true now that Windows is on the scene – so 2 targets; and then there’s the ARM/x86 split – so at least 4 targets. Oh dear – this binary explosion could soon get quite hard to manage… the point is that there’s not a gazillion different vendor flavours. Oh look – CoreOS Rocket :0 Ah… now we have RunC to bring sense back to the world. Maybe the pain will stop now?
[3] These are all terrible examples, because they’re things where I made my own Docker images (so they’re in my page on Docker Hub), but even then I was able to stand on the shoulders of giants, and use well constructed base images whilst relying on Dockerfile to do the hard work for me.


On my first day with Bryan Cantrill he did a wonderful (and very amusing) presentation on Debugging Microservices in Production on the containers track at QCon SF.

On my second day with Bryan Cantrill we talked about Containers, Unikernels, Linux, Triton, Illumos, Virtualization and Node.js – it was something of a geekfest[1].

On my third day with Bryan Cantrill I said, ‘if I see you tomorrow something’s gone terribly wrong for both of us’.

[1] If you enjoy that interview then you’ll probably like the one I did with John Graham-Cumming too (and maybe all of the others).



I’m writing this on my last day as CTO for Cohesive Networks, and by the time it’s published I’ll have moved on to a new role as CTO for Global Infrastructure Services at CSC.

Looking Back

It’s been a pretty incredible (almost) three years at Cohesive.

Year 1 – focus on networking. When I joined Cohesive in March 2013 we had a broad product portfolio covering image management, deployment automation and networking. It was clear however that most of our customers were driven by networking, and hence that was what we should concentrate our engineering resources and brand on. We finished 2013 with a strategic commitment to put all of our energy into VNS3, and in many ways that transition ended with renaming the company to Cohesive Networks at the beginning of this year.

Year 2 – containers everywhere. By the summer of 2013 we had a number of customers clamouring for additional functions in VNS3. This was generally for things they were running in ancillary VMs such as load balancing, TLS termination and caching. Putting all of those things into the core product (and allowing them to be customised to suit every need) was an impossible task. Luckily we didn’t need to do that; the arrival of Docker provided a way to plug in additional functions as containers, giving a clean separation between core VNS3 and user customisable add ons. The Docker subsystem went into VNS3 3.5, which we released in April 2014. This turned out to be a very strategic move, as not only did it shift VNS3 from being a closed platform to an open platform, but it also allowed us and our customers to embrace the rapidly growing Docker ecosystem.

Year 3 – security. By the end of 2014 some customers were coming to us with the NIST Cyber Security Framework and trying to figure out how to deal with it. Section PR.AC-5 ‘Network integrity is protected, incorporating network segregation where appropriate’ was of particular concern as organisations realised that the hard firewall around the soft intranet no longer provided effective defence. It was time for the cloud network security model to return home to the enterprise network, and VNS3:turret was born to provide curated security services along with the core encrypted overlay technology.

What I’ve learned

The power of ‘no’ – what something isn’t. Products can be defined as much by what they don’t do as what they do. Many products that I see in the marketplace today are the integral of every (stupid) feature request that’s ever been made, and they end up being lousy to work with because of that. VNS3 doesn’t fall into that trap because we spent a lot of time figuring out what it wasn’t going to be (as well as plenty of time working on what it is and will be).

Write less software. This is a natural follow-on from deciding what something isn’t, as you don’t have to code features that you decide not to implement. But even when you do decide to do something, writing code isn’t necessarily the best way to get things done. The world of open source provides a cornucopia of amazing stuff, and it’s often much better to collaborate and integrate rather than cutting new code (and creating new bugs) from scratch.

CTOs are part of the marketing team. I’d previously observed CTOs elsewhere that seemed to spend most of their visible time on marketing, and I think that’s become ever more prevalent over the past few years. There’s little point to a commercial product that nobody knows about, and successful communities require engagement. It’s been fantastic to work with our customers, partners, prospects, industry associations, and open source communities over the past few years.

What’s next

I’m going to one of the world’s biggest IT outsourcing companies just as three of the largest transitions our industry has ever seen start to hit the mainstream:

  • Cloud Computing – is now consuming around half of new server CPU shipments, marking a tilt in the balance away from traditional enterprise data centres.
  • Infrastructure as code – is just a part of the shift towards design for operations and the phenomenon that’s been labelled ‘DevOps’, bringing changes and challenges across people, process, and technology.
  • Open source – not only as a replacement for proprietary technologies or the basis of software as a service, but as a key part of some business strategies.

I hope that my time at Cohesive has prepared me well for helping the new customers I’ll meet deal with those transitions.

This also isn’t the end of my involvement with Cohesive, as I’ll stay involved via the Board of Advisors.

This post originally appeared on the Cohesive Networks blog titled CTO Chris Swan Moving On


Southern Railway recently upgraded their train ticket buying website. The new user interface (UI) is very pretty, and I would guess it’s an easier place to buy train tickets online if you’ve never done that before.

If you buy tickets frequently, and particularly if you need receipts for expenses then it’s a user experience (UX) disaster. Here’s how…

If you know the short code for your station

I live in Haywards Heath, which has a short code of HHE. In the old setup (still used by South Eastern trains) this is what happens:

[Screenshot: South Eastern journey planner resolving ‘HHE’ straight to Haywards Heath]

Southern’s new site also tries to offer me Garelochhead (some hundreds of miles outside their franchise) because it has the letters ‘hhe’ in it.

[Screenshot: Southern’s new site offering Garelochhead as a match for ‘hhe’]

On my first try using the new site I managed to accidentally select Garelochhead, and boy are those trains to London expensive, slow and infrequent.

One way

The default used to be a return journey. The new one is a single. I find it hard to believe that’s a decision supported by data – people generally want to go and come back.

Too specific

The new site makes you choose a train for each leg of the journey, which was only required if making seat reservations on the old site. Not only is this unnecessary and potentially confusing when buying a ticket or travelcard that offers some choice over when to travel, but it leads to…

The fraudulent receipt

Having made you specify a train for each leg of the journey, that information then makes it onto the order confirmation, which I expect many people (like me) use as a receipt[1]. That’s fine if you actually end up taking those exact trains, but what if you don’t? I can see a situation arising where a boss approving expenses knows that a particular train wasn’t taken, so the receipt turns into a lie.

The receipt should be for the ticket bought, not the journey (forcibly) planned.

Just take my money

Southern used to store my credit card information as part of my account, but now they make me type it in every time.

and do you really need a phone number?

I can’t think of a time that I’d ever want an online ticket provider to call me.

A booking reference AND a collection reference

and neither of them in the email subject.

When picking up tickets bought online the machine needs a collection reference to be typed in (on the not awfully responsive touch screen keyboards)[2]. This used to be presented at the top of the email:

[Screenshot: the old Southern email with the collection reference at the top]

Now the first thing presented is some totally irrelevant ‘booking reference’ and the vital ‘collection reference’ is further down.

[Screenshot: the new Thameslink email leading with the booking reference]

I’d also note that the email comes from Thameslink – not Southern, whose website I bought the ticket at. Some extra confusion for those unfamiliar with exactly how private enterprise has taken over our railway networks.

For the record the right thing to do here is put the number I need for the machine into the email subject – so I don’t have to actually open the email to find it when I need it.

Conclusion

The new Southern Railway ticketing website might look prettier than the old one, it might even be friendlier for occasional users, but it’s a disaster for frequent users like me and a terrible example of how too much attention to user interface can ruin user experience.

Notes

[1] Arguably the ticket itself is a receipt, but that’s not much help when it’s been swallowed by a ticket machine after a return journey.
[2] I have in the past been able to collect tickets just by presenting my credit card, which seems to me how things should work; and I’m struggling with what the threat model looks like for people picking up tickets with a card that ties to an order but that’s somehow illegitimate without the 8 character code presented at the time of the order (which a fraudster with card details would see anyway).


Dell has been in trouble for the last few days for shipping a self-signed CA ‘eDellRoot’[1] in the trusted root store on their Windows laptops. From a public relations perspective they’ve done the right thing by saying sorry and providing a fix.

This post isn’t going to pick apart the rights and wrongs – that’s being done to death elsewhere. What I want to do instead is examine what this means from the perspective of trust boundaries.

Fine On Your Own Turf

It’s completely acceptable to use private CAs (which might be self-signed) within a constrained security scope. This is regular practice within many enterprises. Things can get a little underhand if those CAs are being used to break TLS at corporate proxies, but usually there’s a reason for this (e.g. ‘data leakage prevention’) and it’s flagged up front as part of employee contracts etc.

Where Dell went wrong here was doing something that had a limited scope to them ‘it’s just to help support our customers’, but global scope in implementation[2].

Who Says?

When a company adds a CA certificate to its corporate desktop image then not only is the scope limited to users within that company, but the decision making process and engineering to put it there falls onto a relatively small number of people. That small group will be able to reason about which certificates to add in (and maybe also which to pull out).

It’s completely opaque who at Dell (or Lenovo etc.) got to make the call on adding in their CA to their OEM build, but I’m guessing that this wasn’t a decision that got run by a central security team (otherwise I expect this would never have happened).

The Lesson

Something that can impact many people (perhaps even a global population) should not be subject to the whims of individual product managers. Mechanisms need to be set up to identify security/privacy sensitive areas, and provide governance over changes to them.

Note

[1] It now seems that there’s also a second certificate, ‘DSDTestProvider’.
[2] That mistake was further compounded by making the private key available, which thus amounted to a compromise toolkit for anybody to stage man in the middle attacks against Dell customers, a pattern seen previously with Lenovo’s Superfish adventure. Dell’s motivations may have been purer, but the outcome was the same.



After all of the noise surrounding Apple’s special relationship with Intel when it first launched the Macbook Air the IT press have been strangely quiet about it ending[1].

Intel’s 6th generation ‘Skylake’ Core CPUs have been out for a few weeks now, and it seems like the only machines you can buy them in come from Microsoft and Dell.

This is a big deal for me, as it means that it’s possible once again to get travel size/weight laptops with 16GB RAM and decent size SSDs, so I can finally replace my almost 3 year old Lenovo X230. If I had to buy a laptop today I’d be torn between Dell’s XPS13 and the Microsoft Surface Book (and MS would probably get my money as they offer 1TB SSD for the Surface Book and Microsoft Surface Pro 4, even if it is a ridiculously pricey upgrade versus the cost of mSATA drives).

If the roll out of the new chips follows the usual pattern of other OEMs getting parts 3-4 months later this means that we can expect to see new Macbooks, Macbook Airs, Macbook Pros, ThinkPads etc. some time in the new year (perhaps with announcements at CES and shipping a little while later).

There’s some speculation that Apple looked at x86 chips for the iPad Pro, and the decision to go with ARM might have soured the Intel relationship, so it’s easy to see why the ‘WinTel’ relationship might grow stronger with Microsoft – especially now that they’re making such great devices (having escaped from Intel’s ‘Ultrabook’ branding monoculture). It’s less clear how Dell got one over on Lenovo and HP (and Acer, Asus, Toshiba and Sony etc.).

So the good news is that it’s finally possible to buy a decent laptop again. The bad news is that any chance of a 16GB Macbook is deferred, and I’ll also have to wait for something with a trackpoint (which I’ve always preferred over trackpads).

Note

[1] It seems that the tech investor community have noticed.


Rising from the ashes of GigaOm the tribal gathering of cloud elders that is Structure has returned, and got off to a strong start with Battery Ventures’ Adrian Cockcroft presenting on the State of the Cloud and Container Ecosystems. Cockcroft paid particular attention to the impact of containers, which wasn’t even a major discussion topic at the last Structure conference in 2013. The event’s opening also had a significant contribution from Intel, with particular focus on healthcare applications for cloud.

continue reading the full story at InfoQ


I had many issues with the kitchen I bought from Wren Living. One of the most troublesome came from the wooden worktops, and in hindsight I should never have bought my worktop from Wren.

Sizing

Wren don’t supply a full range of sizes. So they tried to cover a 2605x945mm island with two pieces of 2400x900mm worktop (actually the original order had one piece of 2400x900mm and another of 3000x610mm, but that would still have meant multiple joins).

I ended up getting my worktop from Worktop Express, who supply 3000x960mm (amongst other sizes) and do next day delivery.

Cost

Wren charged (and later refunded) me £997.98 for the two pieces of worktop that didn’t fit.

Worktop Express charged me £350 (plus £25 delivery) for the one piece that did fit.

Don’t ignore the extras

Wren added a £50 ‘worktop care kit’ to my kitchen order, which included oil, cleaner and joiners. Some of the stuff I just didn’t need, and the rest could have been sourced much more cheaply elsewhere.


I had many issues with the kitchen I bought from Wren Living. Some of the most troublesome came from appliances, and in hindsight I should never have bought my appliances from Wren.

Everything at once

Buying my kitchen and appliances at the same time from the same supplier seemed like a good idea. I’d have one company to deal with, and everything would come at once. If only that were true.

Two of the appliances came three weeks late, and an appliance that was damaged on delivery took nine weeks to be replaced.

Cost is just one consideration

When I shopped around for the appliances I’d bought with my kitchen at Wren I found that I could get them cheaper, though the difference wasn’t generally too bad – tens of pounds not hundreds.

Delivery is the key

The appliances I ordered were generally available online with next day delivery from a range of UK sellers.

Wren don’t do next day delivery. Wren use Swiftcare to do their delivery (when things aren’t coming in their own vans). Swiftcare run a 10 working day service. Within those 10 days you’ll get a call 48hrs before a planned delivery to see if you’ll be home to take it; if not, it’s back to square one. On the day of delivery you’ll get a call about an hour before the (huge) lorry comes.

So Wren saving a few quid on shipping costs means that you can be left waiting weeks for appliances.

Availability is important too

I’ll repeat the point – the appliances I ordered were generally available online with next day delivery from a range of UK sellers. So it was annoying to be left waiting for weeks for Wren to get stock and Swiftcare to come along.

The substitutes

Wren didn’t leave me without. When they couldn’t deliver they sent me CDA brand substitute appliances. In some cases (like the hob) it wasn’t worth the bother to fit another thing, but in others (like the microwave) it was genuinely useful to have the substitute.

When I got substitutes I was told that I could keep them afterwards ‘for the inconvenience’. How it makes business sense to give away hobs, extractors and microwaves when the items the customer wants are available elsewhere for next day delivery remains a mystery to me.