A colleague asked me the other day how to get started with GitHub on a Windows machine, and I ended up doing a quick screen share to show him my usual setup. Thinking that it’s likely a common question, I’ve put together a quick screencast of installing Git Bash and Atom on Windows and using them with GitHub.


I’m not much of a podcast fan, but I came across Sam Harris interviewing Michael Hayden and set aside some time to listen to it. I wasn’t expecting much common ground between the interviewer and interviewee, but a mutual dislike of Glenn Greenwald seemed to click their rapport into place, and I very much enjoyed their discussion.

The purpose of this post is to point out a massive logical discrepancy in what Hayden said. If what he said is true, then the (US) Intelligence Agencies are being wilfully ignorant, something that I find unlikely.

The President’s BlackBerry

In one part of the interview Hayden talks about how Obama had to be weaned off his BlackBerry after taking office at the White House.

The assertion: Every embassy in Washington would be able to tap into the President’s email traffic.

Why this seems unlikely: BlackBerries had a reputation for strong end-to-end encryption, so much so that there has been controversy over whether they could be permitted in India. For sure, any (foreign) agency with proximity to the cell tower near the White House would be able to intercept the encrypted traffic, but that should be pretty much useless to them. At worst, traffic pattern analysis would reveal when the device was sending and receiving.

Tipping their hand: This could reveal that the NSA already knew how to intercept BlackBerry traffic, and that it was simple enough that they expected less friendly agencies to be able to do the same; and/or that there was a TEMPEST issue with the handsets that presented a vulnerability to anybody with sufficient proximity (‘anybody on Pennsylvania Avenue’).

Trump Tower Wire Taps

Elsewhere in the interview Hayden marvelled at how the discourse around the Trump Tower wiretapping allegations had taken place so publicly, with key agency representatives going on the record to say that Obama had not authorised spying on the Trump campaign/President-elect.

The assertion: Trump Tower wasn’t under surveillance.

Why this seems unlikely: Hayden had talked earlier in the interview about domestic metadata collection for back tracing, and how even his own call records would be amongst that hoard. So it would be exceptional for Trump Tower not to be included. Perhaps the point here is the oft-argued distinction between content and metadata; but in a world of ‘we kill people based on metadata’ that matters little, and the metadata would be plenty to build the graph of Trump’s relationships and influence.

Putting it together

Trump is well known for using an Android phone, which is likely a lot less secure than Obama’s BlackBerry, so if the earlier assertions are true, then any (foreign) agency in New York would be all over his comms and collecting their ‘Kompromat‘. If the latter assertion is also true then the US is basically saying that it left the field open for foreigners, but didn’t take a look itself; and that would be wilful ignorance. At the very least I’d expect a ‘this is what we (and everybody else) got on you before you were sworn in, and that’s why things need to change’ type conversation, and the business-as-usual activities behind it. Maybe I’m wrong; maybe wilful ignorance is part of how the game is played.


Lovely


My wife took me out to the ‘Supper Club‘ at the newly opened Waitrose in Haywards Heath earlier this week. I wouldn’t be writing about it here if the Daily Mail hadn’t done a hatchet job review that totally misaligns with my own experience.

That’s me in the blue shirt with my back to the camera

The title of this post comes from the head chef’s introduction to the menu. He must have said ‘lovely’ over a dozen times.

Let’s examine some of the headline quotes…

‘I could make a better dessert at home’

Well, that was kind of the point. Diners were given a pack as they left with recipe cards, so that they could use Waitrose-sourced ingredients to make everything at home.

‘1970s style’ food

This seems to be a standard attack on anywhere with a prawn cocktail on the menu, since that’s probably the pinnacle of 70s retro chic; though I expect the same rock could be thrown at the chicken fricassee. I’m a child of the 70s, and I happen to like a good prawn cocktail. As fate would have it this was my second prawn cocktail of the week, and the second time the menu had offered ‘brown prawns’. It wasn’t as good as the ‘half pint of shell-on brown prawns with Marie Rose sauce’ I had on Monday at the Lockhart Tavern (many of those prawns had roe attached, and it was the best prawn cocktail I’ve ever had), but the Waitrose one was a lot less fiddly and time consuming.

Small portions and long waits

I’m not a small chap, and the quantity of food was more than enough for me (and I didn’t take any potatoes or eat half of the biscuits that came with my cheese). They were also pretty generous with the wine, with bottles left on the table for us to help ourselves. Mineral water was also free flowing at no extra expense for those choosing not to drink.

I never felt like I was waiting, though that could just have been because the wine and conversation were flowing. There was also no wait to pay at the end (my most hated part of the usual restaurant experience) – just say goodbyes and walk away.

My own view

The communal seating wouldn’t have been my choice, but I’m an introvert at heart so I wouldn’t generally sit and chat to strangers. But these were nice, middle-class, Waitrose-buying strangers, and the conversation was in the end what made it an entertaining night out.

£70 for dinner for two would also have bought us dinner and a few craft beers up the road at the recently opened Lockhart Tavern, my new favourite in Haywards Heath. It wouldn’t have bought us three courses with bubbly, still wine, mineral water etc.

I enjoyed the food, I enjoyed the format, and I’ll be back again for a new menu if they keep doing it. I might even try the chicken fricassee recipe at home (and if I have one disappointed expectation it’s that I’d expected more emphasis on how to cook the meals). It was lovely.

It’s also worth noting that the #waitrosesupperclub Tweets belie the Mail’s version of events, and I’m told it’s the same over on Facebook.


Intel recently launched their 3D XPoint non-volatile memory (NVM) under the brand name of Optane. The SSD label in some of the branding might imply that it’s just a different type of durable storage, but the technology is aimed at applications that would normally use RAM. This marks the beginning of the end for the compromise between in-memory and persistent storage, as Optane is touted as offering the best of both worlds – DRAM performance and SSD durability.

Continue reading the full story at InfoQ.


TL;DR

I started a new job yesterday that boils down to applying data-driven (re)design for operations (DevOps) to what’s now one of the largest global IT services companies. After leaving the startup world for ‘a bigger train set’ the scale just got bigger still. The size of the task ahead seems initially daunting, but much of what needs to be done has already been figured out, and there’s a great team around me.

Background

It seems that I only write about my work when I change jobs. This time it’s a bit different, as the company I worked for changed around me, and I ended up with (sort of) the job I was hired to do.

Yesterday CSC merged with HPE Enterprise Services (ES) to create DXC.technology. The Global Infrastructure Services (GIS) organisation that I worked for ceased to exist, and I moved to the new Global Delivery Organisation.

The story so far

I started at CSC at the beginning of December 2015 as the CTO for GIS, which with revenues in the region of $4Bn was about half the company following the spin out of the US Public Sector business to CSRA. In May 2016 I was tasked with setting up the Operations Engineering (OE) group, which I handed over to Greg Dietrich in August; and then I was asked to be general manager for the x86 and Distributed Compute business. Quite a ride over the course of 16 months.

Not quite as expected

When CSC first approached me in July 2015 it was for the job of GIS CTO, but as the hiring process was nearing the end I had an in-person meeting with Steve Hilton, who told me that there were plans to restructure the company along the lines of Build, Sell and Deliver. He expected to be running the Deliver piece, and he wanted me to be Deliver CTO. That was a moment where my mouth ended up saying ‘yes’ whilst the back of my head was saying ‘you need to go away and think about this, there’s still time to escape, you’ve not signed anything yet’. After some consideration, I took the job anyway; Deliver sounded a bit boring and back-officey, but there was clearly lots of interesting work to be done.

I next got to see Steve a few months later when he got his leadership team together at the (then) CSC HQ in Falls Church. It was clear that not only had the organisational change not happened, but there was no sign of it starting. ‘I spoke to Mike about that last week, and he’s gone off the idea’ seemed pretty inadequate. A CEO nearing the end of a (successful) major turnaround surely couldn’t be so capricious.

It was an almost four-month wait for a proper explanation. The public announcement of our impending merger with HPE ES made it quite clear why we were holding back on restructuring – why change CSC when it would just make integration with a similarly organised ES more difficult?

Got there in the end

So yesterday I became the Deliver CTO – the job I’d signed up for, just with a much bigger Deliver organisation.

Both sides happy

I’ve seen a lot of mergers and acquisitions during my time in the IT industry, and in almost every case there’s one side thinking they’re winning, and consequently another side that comes away as the losers. As we’ve brought DXC Technology together it’s not felt like that. The CSC people have been positive about the move as it’s brought us from near the back of the IT services field to near the front. At the same time the (mostly ex-EDS) ES people have been positive about moving away from a single vendor and becoming an independent services pure play (again).

That’s not to say that every single person is happy. Like most mergers there are ‘synergy’ savings that have been promised to Wall St, so that means a bunch of people won’t have a seat when the music stops. Bringing together two similar organisations inevitably means that you end up with (more than) two of everything when you actually need one; and that applies particularly to senior management.

So what does a Deliver CTO actually do?

Firstly it’s worth unpacking the three letters of CTO:

  • Chief – a leader, who points the way for the rest of the organisation.
  • Technology – understanding the constantly changing environment, continuously learning and teaching others.
  • Officer – making decisions within a delegation framework, and helping others to make decisions.

The Global Delivery Organisation encompasses everybody involved in making stuff actually happen for customers, so it covers a lot of ground; but if I grossly oversimplify it turns into two different takes on DevOps:

Internal: All in on Operational Data Mining

Operational Data Mining (ODM) is the way that we’ve been improving operations at CSC, and is a discipline that will be central to how we make DXC Technology work. It goes like this:

  • Collect data exhaust from service management and ancillary systems
  • Analyse to identify operational constraints
  • Model the system being worked on
  • Hypothesise a way to improve the system
  • Experiment to see if that works
  • Rinse and repeat

In terms of the ‘three DevOps ways’ of flow, feedback and continuous learning through experimentation this is clearly the third way, but it applies to the first and second too. In every case we’re trying to improve the flow of work (first way), and we’re generally trying to build systems that provide faster feedback (second way). This is quite simply the implementation of what’s in the DevOps Handbook, which is in turn everything that the manufacturing industry learned about process improvement post WW2 (Deming etc.). Science not magic, and it works.

The great thing about doing this in a company the size of DXC Technology is that the experiments can be parallelised across multiple regions and accounts. Furthermore we can identify repeat patterns and apply them with less setup cost and time.
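To make that a little more concrete, here’s a minimal sketch of the ‘analyse to identify constraints’ step in Python. It’s purely illustrative: the file name and column names are hypothetical stand-ins, not what our service management tooling actually exports.

```python
# A minimal, illustrative sketch of the 'analyse' step in ODM.
# Assumes a hypothetical CSV export from a service management tool with
# columns: ticket_id, assignment_group, opened_at, resolved_at.
import pandas as pd

tickets = pd.read_csv("ticket_export.csv", parse_dates=["opened_at", "resolved_at"])

# Dwell time per ticket, in hours
tickets["dwell_hours"] = (
    tickets["resolved_at"] - tickets["opened_at"]
).dt.total_seconds() / 3600

# Where does work spend the most time? The group with the highest median
# dwell is a candidate constraint to model, hypothesise about and experiment on.
by_group = (
    tickets.groupby("assignment_group")["dwell_hours"]
    .agg(["count", "median"])
    .sort_values("median", ascending=False)
)
print(by_group.head(10))
```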

External: Design for Operations

Build, Sell, Deliver can be conceived as a triangle, with a bidirectional relationship along each edge:

  • Sell what we have Built
  • Build what we can Sell
  • Sell what we can Deliver
  • Deliver what we have Sold
  • Deliver what we have Built
  • Build what we can Deliver

Here I’ll concentrate on the final point – Build what we can Deliver.

This is a point about design. We should think about design for operations when an offering is put together. It’s not enough to just build what the customer says they want (Build what we can Sell); we need to consider the delivery experience (and associated costs) for turn-up on day 1 and ongoing operations from day 2 to infinity.

What we call ‘DevOps’ is just a bunch of artefacts that come from organisations that have designed for operations (which generally tend to be software-as-a-service or software-based services). This isn’t even a software thing – just about every industry evolves from design for purpose, through design for manufacture, to design for operations (sometimes called design for maintenance).

Design for operations isn’t an enormously complex thing to do; it’s just about ensuring that operational considerations get included alongside other functional and non-functional requirements. Once again data can be brought to bear, because we can use data about the cost of operations to drive investment decisions around the design of offerings.
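To illustrate the arithmetic (with entirely made-up numbers), a design that costs more to build can still be the cheaper offering once day-2 operations are priced in:

```python
# Illustrative only: comparing two hypothetical offering designs on
# lifetime cost rather than build cost alone. All numbers are made up.
def lifetime_cost(build_cost, annual_ops_cost, years=5):
    """Total cost of an offering from day 1 to the end of its life."""
    return build_cost + annual_ops_cost * years

# Design A: cheaper to build, expensive to run (lots of manual operations)
# Design B: more up-front engineering, cheaper to run (automated operations)
design_a = lifetime_cost(build_cost=100_000, annual_ops_cost=80_000)
design_b = lifetime_cost(build_cost=250_000, annual_ops_cost=30_000)

print(f"Design A over 5 years: £{design_a:,}")
print(f"Design B over 5 years: £{design_b:,}")  # B wins despite costing more to build
```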

An example: Modern Platform

Modern Platform is a virtual infrastructure offering that’s been designed for turnkey operation on day 1 – plug it into ‘power and ping’ and it’s ready to go after minimal configuration. This is largely achieved by using a partner-engineered system, but that’s irrelevant to design for operations – it’s just pushing work to the place within the value chain that has the best scope and scale.

Modern Platform provides an offering in its own right (for customers that just want [more or newer] virtual infrastructure), but also a foundation for higher order offerings like workplace and private cloud. Here the design principles can be pushed higher up the stack. If I can get a turnkey virtual infrastructure then why not a turnkey virtual desktop infrastructure? (and so on).

The challenges ahead

Top of my list is restoring/engendering psychological safety. Google’s research into differentiators for high-performing teams identified it as the primary determining factor, but both organisations have been battered by years of cuts and the machinations of the merger. Now that we’re together as one we need to make as many people as possible feel as safe as possible, as quickly as possible; we need to create belief for individuals in their place within DXC Technology.

Beyond that lies the need to reskill for a cloudier future. Ops people who are used to eyes on glass and hands on keyboard and mouse need to get familiar with the disciplines of software engineering as infrastructure becomes code. CSC came some way on that journey with Infrastructure as Code Boot Camps, but scaling that to a much larger delivery organisation will take a different approach, and there’s also a need to go broader and deeper.

To earn our share of the shared responsibility model that comes with cloud will require greater application intimacy, which needs to come both top down (business needs to app) and bottom up (infrastructure services to app). The concepts from site reliability engineering (SRE) that were used in founding the OE team will be essential, and Ian Miell shows just what it takes to achieve application intimacy in ‘Things I Learned Managing Site Reliability for Some of the World’s Busiest Gambling Sites‘.

In good company

I was attracted to CSC by Simon Wardley, Glen Robinson, Sam Johnston and JP Morgenthal joining ahead of me (and that’s before I got to properly know Dan Hushon). Those hires were clearly about growing a team that understands next generation IT. It’s a group of people who understand the need for situational awareness and mapping, who get OODA, who adapt and evolve but also learn and mentor. Since joining CSC I’ve been impressed by almost everybody I’ve met there, by their technical skill, willingness to learn, and resilience. As I’ve come to know my new ex ES colleagues they’ve impressed me too, for there’s a great deal of shared experience and shared culture in both firms. It will no doubt be a tough road ahead, but the journey will be easier with such good company.


The Raspberry Pi Foundation has launched the Pi Zero W, a variant of the Pi Zero that comes with onboard WiFi and Bluetooth. At $10 it’s double the price of the Pi Zero, but still substantially cheaper than the $35 of the original Raspberry Pi Model B, which launched five years ago. By having connectivity onboard, the Pi Zero W will make a much better starting point for Internet of Things (IoT) projects.

Continue reading the full story at InfoQ.


I’ve been working as a CTO in some shape or form for almost 8 years now. Many people tell me that they want to be a CTO, and then moments later ask what’s involved.

Wednesday wasn’t a typical day – they’re (thankfully) not all that full on – but it serves as a good example of the cross-section of CTO activities.

An early start

A colleague’s in town from New York, and wants to catch up over breakfast. Since both of us have early meetings it means an even earlier breakfast, so my alarm is set for 0510 so that I can make the 0551 train up to London. Thankfully I get to the station early enough to get the 0547, as the train I’d planned to get is cancelled.

On the way in I do my usual ‘daily download’ on Feedly and Twitter to keep up with what’s going on in the industry. The previous evening’s AWS S3 outage dominates things.

I get to the breakfast venue with time to spare, which gives me the chance to finish off some stuff I was doing with the Serverless Framework that wasn’t working due to the S3 problems. After an excellent Eggs Benedict and a nice cup of tea (my usual morning green and earl grey) it’s time to head off to the next event.

Windows 10 round table

The marketing team have asked me to chair a round table event on Windows 10 migration. We have the top man in the region from Microsoft, the technical lead from our workplace offering group, and a good selection of customers from a variety of industries. My role is to kick off the discussion, and make sure it keeps moving along (and that we don’t leave people out). The conversation is lively, everybody gets a chance to dig into their particular challenges, and the time passes very quickly.

When I first met software-defined networking (SDN) inventor Martin Casado he asked me ‘what sort of CTO are you?’, and my answer was ‘a marketing CTO’, to which he replied ‘that’s the right sort’. I only spend a fraction of my time on marketing, but projecting technical leadership into the marketplace is super important.

As the customers disperse I have a quick debrief with the marketing lead, and he seems happy that we have some strong follow ups to pursue.

A brief pause in the action

It’s 11am and I don’t have anything in my calendar until 1245. I jump on the tube to the office and begin catching up on emails. By the time I get to my desk, after a quick catch up with my assistant, it’s almost lunch time, and I see a tweet from The Poutinerie saying they’re not far away, so I head over to them in the hope that I’ll beat the lunchtime rush.

My (very tasty) poutine is almost finished by the time I make it back to the cafe in the office, and I bump into a colleague who tells me he’s leaving the firm in a few days’ time. It’s a shame – somebody that I like and respect will be working for the competition in a few weeks’ time – but it’s also an inevitable consequence of the constant change in our industry. While we’re chatting I spot somebody else that I want to catch up with, and once we’re done I do a lap of the office to find which desk she’s at. After some general chit chat about what’s going on I give her a quick demo of Katacoda, which I’m planning on using for infrastructure as code training.

There’s just about time for a quick triage of my inbox before I head out for the next thing. I spot a text from a friend that will need to wait for now.

The bid review

Next up I’m joining one of the bid teams for the review meeting with the customer. It’s an account that I’ve been involved with for some time, so I’ve got to know many of the players on both sides of the table. They spent the whole of the preceding day in rehearsals, which included a few hours of my time. Despite all the preparation we go off track, with the opening section taking much longer than planned, and everything after that is rushed as we try to make up time.

The meeting seems to go well though, and we leave with everybody feeling pretty positive. Out of 2.5 hours of scheduled time I spoke for maybe 20 minutes, but that’s not really the point – just by having me there we’re showing our commitment.

I’m not off the hook yet though – it’s back to the office for a debrief. By the time we’re done with that it’s 5pm and people are heading home for the day.

Joining the dots

I finally get to reply to the text I’d received after lunch. I’d introduced our cloud general manager to a consultancy company specialising in virtualisation and cloud, and it turns out that we’ve taken a 20% stake in their company. Their office is on the way to where I need to be next, so I quickly shed the suit I’ve been wearing in favour of jeans and T-shirt for the evening activities, and head over for a quick celebratory drink (and the chance to meet whoever’s still around at 5.30).

We talk a little about the opportunities to work together, but as they’re getting in the second round I realise that I’m not going to be able to stay. I finish my pint, make my apologies and head off to the next thing.

Serverless Meetup

Serverless is one of the hottest trends in IT at the moment, and despite the fact that I have very little experience of using it myself I’ve been asked to do a talk on operational considerations (because despite the hype, Serverless doesn’t mean No Ops). My talk is largely based on the ‘Serverless Operations is Not a Solved Problem’ piece I did for InfoQ after last year’s Serverlessconf London. Before we get started I bump into some of the usual suspects from the leading edge of London’s tech scene, but there are plenty of new faces. After my talk I get into some great discussions with people I hadn’t met before, which is the whole point of such events, but sadly the pizza is gone before I get anywhere near it. The other two talks are very illuminating in terms of what can be done with serverless and some of the challenges, so combined with the various discussions it’s a very educational event.

Sadly I don’t have time to carry on the conversation in the pub. I want to be home before everybody goes to bed, so I head for the 2130 train home, grabbing a spicy mini chirashi from the Wasabi in the station. The journey home gives me time to watch an episode of a TV series (that my wife doesn’t like) on one screen whilst I catch up on emails and Twitter on another.

I get through the door at home just as the news is turning to the weather. At least the next day isn’t such an early start.

Conclusion

My company is organised along the lines of ‘build, sell, deliver’, and I often describe my job as a three legged stool with activities aligned with each. This particular day was more ‘sell’ biased, but that perhaps highlights the difference between a CTO role and a senior architect – the need to get out and be the public face of the organisation (with people who expect a technical answer to their questions).

I’m glad that not every day is so full (and so long) – that would be too exhausting; but this particular day was quite fun and rewarding – and worth writing about.


A buffer overflow bug has caused a small number of requests to Cloudflare proxies to leak data from unrelated requests, including potentially sensitive data such as passwords and other secrets. The issue, which has been named ‘Cloudbleed’, was discovered and documented by Google Project Zero vulnerability researcher Tavis Ormandy. After applying fixes and attempting to clean search engine caches Cloudflare’s John Graham-Cumming provided a detailed explanatory blog post. Despite some sensitive data being leaked Cloudflare’s Founder and CEO Matthew Prince tweeted ‘I think we largely dodged a bullet on the actual impact’.

Continue reading the full story at InfoQ.


My Asus Tinker Board arrived yesterday from CPC, and I did a quick tweet with unboxing photos. Having taken it for a quick test drive here are my first impressions based on running up their Debian image[1] (I’ve not had the time to try Kodi yet).

Tinker Board

Tinker Board before mounting heatsink on SOC

Reassuringly expensive

The Tinker Board is £55, which is a good chunk more than an RPi3 at £32 – that’s quite a premium for a bit more CPU performance and RAM. I like the annotated PCB, and it’s also good to have a clicky MicroSD slot (like the RPi2 had, rather than the cheaper-feeling one on the RPi3).

Desktop

It boots straight into a GUI desktop. Chromium is there, and seems fast enough to be used as a desktop machine (if you can live with a 1080p screen). I guess if I can get by with 2GB RAM on my Chromebook then I can get by with 2GB RAM on this.

I’ve not yet figured out which window manager it’s using (likely whatever the Debian default is).

Network

Connecting to WiFi from the desktop was easy – click the button, select the network, enter password.

Getting the gigabit wired network working was not so easy/obvious (for something that should ‘just work’). I could see from my switch that the link was up (and connected at GigE), but the interface didn’t pull a DHCP address, and the usual command line invocations like ‘ifdown eth0 && ifup eth0’ weren’t working. Eventually it seems that I clicked something in the desktop UI that provoked action, and at least once it was up, it stayed up across power cycles.

The OS image

It’s pretty obvious that somebody at Asus cloned the OS image from their working Tinker Board – I can even see their command history for the bits and bobs that they installed by hand. This is not how professionals build and release an image, and I’m guessing my network issues might be related to the hardware MACs on my Tinker Board being different from those on the board the snapshot came from. At least the base is relatively stock Debian Jessie.

Security

When the board boots into a desktop it’s with the user ‘linaro’, which happens to have the password ‘linaro’; that user is part of the sudo group, and so can jump straight into doing stuff as root. So we have a hard-coded username and password for a user who can get to root.

SSH is listening by default, making it possible to log in remotely (with the hard-coded username and password).

The Raspberry Pi foundation did a better job with this stuff, and Asus clearly haven’t learned those lessons, which is a shame.

How could this be better?

If the supplied image booted into a late-stage customisation script with the following few options, that would be much better:

  • Desktop or CLI?
  • Username and password?
  • SSH (and other exposed services) on or off?

If it was possible to provide a cloud-init-like way of supplying that customisation without human touch, then even better.
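To give a flavour of what I mean, here’s a purely hypothetical sketch (nothing like this ships with the board) of a first-boot script covering the three options above, assuming a Debian-style image:

```python
# Hypothetical sketch of a late-stage (first boot) customisation script.
# Nothing like this is supplied by Asus; it assumes a Debian-style image
# and would need to be run once as root.
import subprocess
from getpass import getpass

username = input("Username for the new account: ")
password = getpass("Password for the new account: ")
want_desktop = input("Boot to desktop rather than CLI? [y/N]: ").lower().startswith("y")
want_ssh = input("Leave SSH enabled? [y/N]: ").lower().startswith("y")

# Create the new user with sudo rights, then set its password
subprocess.run(["useradd", "-m", "-s", "/bin/bash", "-G", "sudo", username], check=True)
subprocess.run(["chpasswd"], input=f"{username}:{password}", text=True, check=True)

# Remove the baked-in 'linaro' account and its well-known password
subprocess.run(["userdel", "-r", "linaro"], check=False)

# Only expose services the owner has asked for
if not want_ssh:
    subprocess.run(["systemctl", "disable", "--now", "ssh"], check=True)
if not want_desktop:
    subprocess.run(["systemctl", "set-default", "multi-user.target"], check=True)
```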

It runs hot

I found the included SOC heatsink after I’d done the unboxing photos, and popped it on. It gets pretty hot, so my guess is that it’s needed (or the SOC would be frying). The quick start guide specifies a 2A USB power supply (so that’s 10W). I tried to measure current draw with my PLX Legion Meter, but I couldn’t get it to boot as it seems to try to draw more current than the meter can supply.

That’s it for now

I’ve not had the chance to do anything meaningful with the board yet (let alone build a project around it). Next up I’ll try the Kodi build and see if the H.265 hardware decoding can be used there.

Note

[1] The quick start guide doesn’t have download links (I’m guessing they weren’t ready at the time of printing), and they’re not that easy to find with search. Here’s the download site (though it wasn’t working at the time of writing – sigh). Updated 18 Feb 2017 – There’s a new download site, but it doesn’t seem to have a Kodi image.


TL;DR

Organisations of all types are increasingly making decisions based on data and its analysis, but the rigour involved in this hasn’t yet entered our broader social discourse. I’m hopeful that we all start getting better access to data, and better understanding of the analysis and modelling process so that decisions can be made for the right reasons.

All models are wrong, some are merely useful — Simon Wardley channelling George Box

Background

I spend my days encouraging people to make better decisions based on scientific method and data — collect, analyse, model, hypothesise, experiment — rinse and repeat[1]. My work is just a minuscule part of the overall trend towards running companies on data rather than opinion, and the march towards machine learning[2] and artificial intelligence it brings with it. This makes me very critical of data when it’s put in front of me, and how it gets analysed. I’m going to use a news article I read this morning as an example of bad practice in order to illustrate how things can (and probably will) change for the better.

The News

I’m going to pick apart a no-byline piece from the BBC, ‘Four-year MOT exemption for new cars proposed’. It’s full of facts and figures, but also has all the hallmarks of a rushed-together content farm piece as described in ‘the rest is advertising’.

The proposal

The UK Ministry of Transport (MOT) is proposing that new cars be allowed to go an extra year (4 instead of 3) before their first MOT test. This is almost certainly a decision that’s been made in light of the data. The crucial question here, and one that’s not answered by the article, is ‘how many cars fail their MOT test when first presented at 3 years old?’. The MOT people surely know the answer to that question, and that answer no doubt informs the statement that “new vehicles are much safer than they were 50 years ago”.
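For illustration, this is the sort of calculation I’d want to see published alongside the proposal. The numbers and column names below are made up; the real figures would come from the MOT test records.

```python
# Illustrative only: hypothetical data standing in for real MOT test records.
import pandas as pd

# Pretend export of first-presentation MOT test results
tests = pd.DataFrame({
    "vehicle_age_years": [3, 3, 3, 3, 3, 4, 4, 4],
    "result": ["pass", "pass", "pass", "pass", "fail", "pass", "fail", "pass"],
})

pass_rate = (
    tests.assign(passed=tests["result"].eq("pass"))
         .groupby("vehicle_age_years")["passed"]
         .mean()
)
print(pass_rate)  # pass rate by age at first test - the fact that should drive the decision
```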

The irrelevant opinion

The article goes on to present data from an Automobile Association (AA) member poll. Apparently 44% were in favour of the change to 4 years, with 26% against.

It’s pretty clear that those AA members weren’t presented with the data that the MOT has, otherwise I’d expect a very different outcome.

A question asked with facts presented:

The Ministry of Transport has found that 99.9% of cars presented for their MOT test at 3 years old pass the test, and they’re proposing that new cars now start taking the test after 4 years — does that sound reasonable to you?

Gets a very different answer from:

The Ministry of Transport says that new cars are safer than they were in the past. Do you think the MOT should start at 4 years instead of 3 years like it is now?

My bottom line here is: who gives a rat’s ass what a bunch of ill-informed drivers think — where are the facts driving this decision?

This is not (entirely) the writer’s fault

For sure the writer could have gone back to the Ministry and asked for the fail rate data for cars at 3 years old (and 4 years old etc.), and I’m sure a better article would have resulted. But that’s too much to ask in a world of churning out content and reacting to the next press release or politician’s tweet.

If the Ministry was doing a good job of communicating its proposal, perhaps it could also have explained its reasoning and spoon-fed the data with the press release.

What’s this got to do with politics?

Everything is politics — Thomas Mann

With Brexit and Trump’s election, 2016 brought a moral panic around ‘fake news’ and the whole concept that one person’s opinion can be more valuable than another person’s fact.

Anti-intellectualism has been a constant thread winding its way through our political and cultural life, nurtured by the false notion that democracy means that ‘my ignorance is just as good as your knowledge.’ — Isaac Asimov

Facts come from data, but it’s easy for the causal link between collected data and presented ‘fact’ to become stretched, especially when statistical methods are being used (which is pretty much all of ‘data science’). It’s this bending of fact, particularly in social sciences such as economics, that opened the door to statements like this:

Britain has had enough of experts — Michael Gove

It’s interesting to note that the Brexit Leave campaign made extensive use of data science, along with other modern strategic tools like OODA, as described by Dominic Cummings in his ‘how the Brexit referendum was won’. It also seems that we’re dealing with the deliberate introduction of noise into Western political discourse, per ‘Putin’s Real Long Game’ and ‘Playtime is Over’.

There is a more hopeful angle though. Peter Leyden argues for a positive refrain in his ‘Why Trump’s Inauguration is Not the Beginning of an Era — but the End’, noting that California might (once again) be ahead of the pack in moving on from celebrity politicians to a more data-driven and scientific approach.

From Global Politics to Office Politics

The section above touched on major political events, but it’s worth looking more closely at what happens with data based decision making within organisations. Leaning on my own experience it seems to eliminate lots of office politics.

Don’t bring an opinion to a data fight — Kent Beck

Decisions have traditionally been made based on the Highest Paid Person’s Opinion (HiPPO), and perhaps the heart of office politics has been saying and doing what’s thought to keep the HiPPOs happy. As Andrew McAfee observed in ‘The Diminishment of Don Draper’ the HiPPO is being displaced by data and analytics. This can be very empowering to front line people, and in turn displaces traditional political structures. I think this is for the good, as it seems to make workplaces more pleasant and predictable (rather than confrontational and capricious).

Conclusion

In a world where it seems harder than ever to distinguish fact from fiction it’s on all of us to bring our data and clearly explain our analysis, because that provides facts with provenance, facts that can be understood, facts that can be trusted, facts that can triumph over opinion; and there’s nothing more political than that.

I look forward to better data based journalism in our broader social and political discourse, but I also look forward to what data and data science does to the workplace, because I think less political workplaces are nicer workplaces.

Updated 23 Jan 2017 — I meant to add a link to the London School of Economics series The Politics of Data

This post by Chris Swan was originally published on Medium

Notes

[1] For some insight into the work I’ve been contributing to take a look at my GOTO:London 2016 presentation.
[2] One of the ways I like to think of recent advances in machine learning is that computers are finally doing what we might reasonably expect of them — which mainly boils down to not asking a human a question that the machine can reasonably answer for itself.