Laser Printers

16Jun18

My family prints a lot[1] – about 1200 pages/year, which is why I made the decision almost a decade ago to switch from inkjet to laser. Inkjets weren’t just costing me a fortune in ink; they were also costing me a fortune in printers because they kept clogging up and failing in various ways. I worked my way through a variety of Epsons and Canons before giving up on the genre[2].

Black and White

My first buy back at the end of 2008 was an HP LaserJet 2420DN (the Duplex, Networked version) that was made around 2006 and that I picked up on eBay for £75. It was barely run in, with a page count of just under 150,000 – just two months’ usage at its advertised duty cycle. The toner that came with it had a little life left, but I lucked into a brand new HP toner on eBay for £6.26 that I’ve been using ever since – 7683 pages printed so far, with a forecast of over 2000 still to come. Over the years it’s needed some new rollers (£10.40) and a new fuser sleeve (£3.54), but it’s otherwise been a trouble-free workhorse.

The ratio of simplex:duplex has worked out at around 2:3, leading to an average page cost (including paper, and the printer itself amortised over usage so far) of 1.66p/page.

Colour

For a while I hung on to an inkjet just for colour printing, but inkjets hate infrequent use, and so reliability and print quality worsened. When a deal came along in 2010 for Dell’s 1320CN with extra toners for £133.90 I grabbed it.

Colour printing is a less frugal endeavour altogether, but at least the 1320CN is a popular model with a plentiful supply of cheap(er) generic toners. Sadly it only does one-sided printing, and it has come out at 5.5p/page over the 4000 or so pages printed so far.

If I was starting over

I’d probably go for a Color LaserJet with Duplex and Network so that I could get everything from one unit rather than running two printers. Something like the 3600dn[3] seems to fit the bill as it uses decent capacity toners.

Update 18 Jun 2018

I spent a bit more time modelling costs over the weekend. As things stand the cost per page breaks down to:

  • B&W – 75% Hardware, 4% Toner, 21% Paper – I’m obviously benefiting from ridiculously cheap toner here, but the ‘right’ printer is one with cheap consumables, and there seems to be no better way of getting that than using older laser printers that are (or have been) popular. A quick look at eBay shows that I could easily get another 6000 page toner for about £10.
  • Colour – 61% Hardware, 28% Toner, 11% Paper – once again cheap toner makes a huge difference. When I first bought a replacement toner multi-pack (CMYK) in 2012 it was £18.94, but I’ve since got them as cheaply as £9.99.

If I project usage out a bit further (2x,3x,4x) I quickly get below 1p/page for B&W and 4p/page for Colour as the hardware is amortised and the costs become dominated by toner (more so for colour) and paper (more so for B&W).
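
For anyone who wants to play with the numbers, here’s a rough sketch of that projection (shell and awk, using only the cost-per-page figures and hardware shares quoted above – everything else is lumped in with toner and paper):

# Hardware cost per page shrinks as lifetime usage grows; toner and paper don't.
for mult in 1 2 3 4; do
  awk -v m="$mult" 'BEGIN {
    bw = 1.66;  bw_hw = 0.75;   # B&W: p/page so far, hardware share
    col = 5.5;  col_hw = 0.61;  # Colour: p/page so far, hardware share
    printf "%dx usage: B&W %.2fp/page, Colour %.2fp/page\n", m,
           bw*bw_hw/m + bw*(1-bw_hw), col*col_hw/m + col*(1-col_hw)
  }'
done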

I also analysed toner usage… I seem to be getting about 10,000 pages from an HP toner rated at 6,000, which is great (though not uncommon from what I’ve seen in forums). On the other hand I’m getting more like 666 pages for Dell toners rated at 2,000, which is pretty miserable (but probably a reflection of the fact that the colour printer gets used a fair bit for photos, which obviously use tons more toner than a normal page of text with a few words in colour).

Update 9 Jul 2018

Chris Neale pointed me to this Twitter thread from Paul Balze about inkjets.

Notes

[1] Hardly surprising given that my wife is a school teacher and both of my kids are still at school; though I think the bulk of the printing in the household comes from my wife.
[2] I’ve never owned an HP DeskJet myself due to the cost of consumables. Something I’d note from family members running these things is that they last pretty well, but ultimately fall victim to drivers not being available for newer versions of Windows, which has never been an issue for workhorse HP LaserJets.
[3] Or the newer 3800dn or CP3505.


I’m starting to see companies abandon Pivotal Cloud Foundry (PCF) in favour of Kubernetes distributions such as Red Hat’s OpenShift; and it’s almost certainly just a matter of time before we see traffic in the opposite direction.

My suspicion is that this is nothing to do with the technology itself[1], but rather that early implementations have failed to turn out as hoped, and people are blaming the platform rather than their inability to change the culture[2]. So they wheel in an alternative platform (and some fresh faces) and have another go.

We’ve seen this movie before with mobile development[3]. The native developers switched to cross platform frameworks just as the cross platform framework folk switched to native. It wasn’t that one approach was better or worse; as ever with these things there are trade offs that need to be balanced. It was just that v1 sucked, because the organisation that had built v1 hadn’t completed its cultural transformation; so the people making v2 wanted to change things up a bit.

Notes

[1] I could (and may) write an entirely separate post on the pros and cons of PCF and K8s, but the most important point is that they’re both platforms inspired by Google’s Borg that people can run outside of Google (or even on Google Cloud). Meanwhile this post ‘Comparing Kubernetes to Pivotal Cloud Foundry—A Developer’s Perspective‘ by Oded Shopen covers most of the key points.
[2] I’ll use ‘the way we do things around here’ as my definition for culture
[3] and NOSQL


The Spectre and Meltdown bugs have been billed as a ‘failure of imagination’, where the hardware designers simply didn’t conceive of the possibility that a performance optimisation might lead to a security vulnerability.

I personally find this a little hard to swallow. When I first came across side-channel attacks, the first thing I thought of was CPU caches. I just naively assumed that the folk at Intel etc. were smart enough to have figured out the potential problems and already designed in the countermeasures.

Regardless of whether Spectre and Meltdown genuinely were caused by failure of imagination (and I have my doubts about ARM here given that the CSDB instruction was already in the silicon of their licensees) it’s a class of problem we collectively need to think harder about. There seem to be a few valid approaches here:

  1. Adopting a more adversarial mindset – think about how an attacker might try to exploit a new feature or performance optimisation – the ‘red team‘ approach.
  2. ‘Chicken bits'[1] to allow features/optimisations to be disabled if they’re discovered to be vulnerable.
  3. Use of artificial intelligence (AI) to imagine harder/differently. When Google’s DeepMind team created AlphaGo it played Go like a human but a bit better; when they created AlphaGo Zero it came up with entirely different plays. I’d therefore expect that similar approaches could be applied to security validation.

Note

[1] Hat tip to Moritz Lipp for this term from the Q&A section of his QCon London presentation ‘How Performance Optimisations Shatter Security Boundaries’.


500

01Jun18

I was just about to put virtual pen to paper when I noticed that it would be my 500th post.

Not all of those posts have made it to being published; a few linger in my drafts.

It’s been a little over a decade since my Hello world!, and I’m still tremendously grateful to JP for giving me the push to start blogging, and to you all for coming along and reading.


I spent the last couple of days at the Agile Enterprise conference in Rome organised by New York Java Special Interest Group (NYJavaSIG) founder Frank Greco. It was a much more intimate event than I’m generally used to, with only thirty-something attendees.

The best part was the ‘ask me anything’ panel of all the speakers at the end, and the best question boiled down to:

I’ve spent the last couple of days learning about microservices, serverless, Kubernetes, service meshes and blockchain – how do I possibly pull all this stuff together into a coherent solution?

My answer was to repeat the advice from Accelerate and suggest that the issue at hand was one of strategy, best approached by first achieving situational awareness using Wardley Mapping. Looking back I should have also referenced Amazon’s practice of ‘working backwards’, where they start with a press release and FAQ as a way to crystallise what it is they’re planning to do that customers would actually care about.

Frank closed the panel by asking for thoughts on what might be most important over the next 2-3 years, which meant that the last words were gifted to me. Obviously everybody was there to learn about technology, and how new trends are emerging, but my suggestion was to follow the ‘3rd DevOps way’ of ‘continuous learning by experimentation’ by going and trying stuff out. The Istio Katacoda covers a lot of ground for a 10 minute beginner level course – so that was my suggested place to start.

The slides from my presentation ‘Ops and Security in a PaaS and Serverless world’ are on SlideShare (though they probably make little sense without the narrative). It’s the first time that I’ve done such a long presentation (70m), which had the advantage of allowing time for storytelling, though I fear that it asks a lot of an audience to pay attention for so long (especially when most are listening to a live translation).

This was my second trip to Rome, and for the second time I barely saw the place (not helped by some pretty atrocious weather that didn’t really encourage exploration). It’s a city I’d like to see more of, and I was also impressed by the airport (FCO), which rivals Zurich for being clean and efficient.


TL;DR

Norwegian seem pretty decent for leisure travel, but I won’t be choosing them for business travel again unless things change substantially.

Background

Norwegian have been flying long haul from Gatwick (my nearest airport) for a little while now. At first it was to New York, and more recently they’ve added routes to the Bay Area (Oakland) and Florida (Orlando and Ft Lauderdale). I started hearing good reports about their fleet of 787s, especially the premium economy offering, which was described to me as being ‘like business class was on American carriers a decade ago’.

Booking Blockers

Based on the good reports I chose Norwegian for the biennial family trip to Florida for some theme park fun.

Booking was a bit of an adventure. When I went to book the flights I could get outbound (before the end of March) but not return. As the only outbound fare on offer was a flexible one, I figured that at worst I could cancel if I couldn’t find return flights that worked. A few weeks later the block of tickets from April onwards was released, and I was able to complete my booking.

Y to JFK

Before making the trip with the family I got to fly solo on a business trip to New York. I thought leaving from Gatwick and being able to get a day flight home would be good, but I was wrong on both counts.

The outbound flight wasn’t too bad; though the trip got off to a poor start on two counts:

  1. No online check in, making me fear the dreaded SSSS, though I was able to check in at a machine on arrival at Gatwick and proceed with my carry on straight through security.
  2. My Priority Pass being turned away at the No.1 Lounge in Gatwick South Terminal (again).

We were a little late getting away, but made up some of the time en route. Norwegian’s 787 seats are a LOT more comfortable than Virgin’s (for which I’ve bought an inflatable cushion) and legroom is OK even in regular economy. My ticket included food; if I recall correctly the options literally were ‘chicken or beef’, though the beef was pretty nice.

I missed out on a £250 upgrade that was offered at the departure gate, partly because the sale process wasn’t well managed. It was also entirely unclear to me how I might have paid for an upgrade (or even a seat change) after my ticket had been bought through my work’s travel agent.

Arrivals at JFK T1 was fine, though once out of the airport I was quickly reminded of why I prefer flying into EWR when I visit New York.

Grrr back to Gatwick

My return trip was a catalogue of awful:

  • Once again no online/app check in
  • Go to machine at airport, then another, then another – machine #3 actually works and gives me a boarding pass
  • Security won’t let me through, apparently my boarding pass needs to be stamped
  • Stand in line for check in
  • Yes, I did pack my own things, No, I’ve not left my bags unattended
  • Check in chap tells me that flight is delayed 3hrs
  • Check in chap insists on weighing my bag, tells me I’m 4kg over the 10kg carry on allowance for economy, so I have to check it. There’s no fee for the bag, but my late arrival has just been made even later, and I have to faff around splitting the contents of my bag into what I’ll actually carry on and what’s going in the hold (lucky that I had a bag inside the bag)
  • No TSA Pre – so it’s back to the slow line, and shoes off and all that malarkey
  • Both Priority Pass lounges in T1 are closed – one for refurbishment and the other because of standard time limits (that mean it would never be useful for Norwegian’s day flight)

As I waited to board there were clearly people with roller luggage (and often another bag) that blew past the 10kg restriction. At least when we did arrive at Gatwick they were quick with the bags.

My plan had been to get home before midnight; the reality was that I got home just before 4am – so the ‘night in my own bed’ rationale for taking a day flight was utterly undermined.

Did I mention that Norwegian don’t have WiFi?

Premium to Orlando

A few weeks later I was back at Gatwick with the family for our holiday flight.

It didn’t matter that we couldn’t check in online, as we had three bags to take with us. The premium check in line was very friendly, quick and efficient, though they did insist on weighing all of our hand luggage, which easily came below the 15kg limit for that cabin.

It didn’t matter that Priority Pass never works at No.1 Lounge Gatwick South (unless you play along with their scam of paying a £5 pre booking fee), as lounge access was included.

The onboard experience was pretty much as it had been described to me – big comfortable seats with tons of legroom, and despite a slightly late departure (waiting for freight – again) it’s easily the best flight I’ve ever taken to Florida.

Arrivals at MCO was less awful than it used to be, and the immigration delay was just fractionally shorter than the luggage delay (there was no point in me using Global Entry when with the family, especially given our rental car pickup was off site).

Delayed back to Gatwick

The return trip wasn’t quite so good.

Check in was fine.

There is no accounting for the awfulness of TSA at MCO (and even if I’d had a Pre stamp I wouldn’t have abandoned the family).

No lounge access on the tickets this time, but Priority Pass worked at The Club MCO – in fact we were pretty much the only people in there… because it was ridiculously early.

When I booked the flights the return trip was a little after 10pm, a bit on the late side, but probably OK for getting some sleep. Before we took the flights the slot changed to 4.10pm, arriving 5.25am, which is too early in the day to get much sleep on board, and too early in the morning to arrive.

We left the lounge to board for an on time departure as that’s what the screens were showing – I should have checked the inbound flight on my phone, as we spent an extra hour sat at the gate that could have been more comfortably spent in the lounge. Norwegian knew they were woefully late on the inbound flight, but apparently just didn’t bother to tell anybody at MCO that this would cause a delayed departure.

Whilst waiting at the gate I looked at Norwegian’s punctuality record for the preceding flights that week – they’d all been delayed – some pretty horrendously.

Would I fly with them again?

As a leisure traveller yes – if you have family and hold luggage then you’re in the slow lane anyway, and there’s nothing that Norwegian will do to make things worse. The premium cabin is very nice, and the extra for it isn’t too steep, but the main cabin is perfectly adequate and in some ways a cut above lots of the mainstream economy offerings.

As a business traveller definitely not – their carry on policy might keep things sane in the overheads, but it’s actively hostile to people who pack a single bag for the week. No online checkin, no TSA Pre, no WiFi – these things all cost time or hurt productivity; as does working from terminals where Priority Pass doesn’t work. Norwegian might expect business travellers to pay for Premium, and I wish my company’s policy allowed for that, but that would only (just) resolve the carry on issue – everything else would still be a mess of petty inconvenience.

Will Norwegian even be around next year or the year after?

They’ve been losing a ton of money, and the vultures are circling.

I kind of hope that they do pull out of their financial nosedive and make the model work, and invest in dealing with some of the issues noted above. The alternative of IAG taking over would, I’m pretty sure, result in a worst-of-both-worlds outcome – I wouldn’t expect service to improve at all, but prices would surely go up. Let’s not forget that BA don’t have TSA Pre, which must be a massive source of frustration for their frequent flyers on US routes[1].

Conclusion

If Norwegian are still around in 2020 they’ll likely be my first pick for the next family trip to Florida, but until they change a few things I’ll be looking wistfully at Gatwick as my taxi speeds past to deliver me to Virgin or United at Heathrow.

Note

[1] My working theory on this is that BA don’t want to pay for an extra Pre line at JFK where they have their own terminal, and so they’re willing to compromise the experience of flying with them from every other airport in the US.


TL;DR

If you happen to be in Zurich, and you want to get to Munich (or vice versa), then it turns out that the Inter City Bus (ICB) is probably the least worst option.

Background

I was in Zurich last week speaking at the Open Cloud Forum hosted by UBS, where it was my great honour to share a stage with Mark Shuttleworth. I’d originally planned to fly back to London City straight after the event, but a customer meeting in Munich cropped up.

Train

The first thing I checked was the train schedule, and what it revealed was not what I expected. Rather than some shiny high speed service that would whisk me from city to city in no time I was presented with two awful choices:

  1. Four different trains (== 3 changes) shuffling from one place to another taking a grand total of around 4h45 (assuming everything worked, which is usually a safe assumption for Swiss trains).
  2. Bouncing in and out of Stuttgart on high speed trains for a journey taking 5h40.

Plane

There are fairly frequent direct flights from Zurich to Munich, which take about an hour; but they’re not cheap (~£200), and then there’s the whole palaver of getting to the airport, security, lounge, gate, boarding, arrivals, and getting into the city. A one hour flight was going to be a 4-5 hour journey, and that time wasn’t going to be at all productive.

Automobile

It crossed my mind that I could probably just get an Uber or something like that, but the complexities of crossing borders put me off.

Bus

The train schedule actually showed the bus as an option. I’m guessing this particular route is something of a bandaid over what would otherwise be a clockwork machine for getting people around the continent.

The fare was advertised as ‘from €19.90’, which is in fact what I ended up paying. The following day a colleague commented that he’d paid more for a taxi from another hotel in the city.

Getting to the bus was easy, as the bus station is right by Zurich HB. I expected a bustling place, but for the whole time I was there the only bus present was the one I was getting.

The seats were spacious and comfortable, offering about as much space as extra legroom seats on flights.


Although the booking process offered me a seat reservation it wasn’t shown anywhere on my ticket. The first seat I chose turned out to be reserved by somebody who could see their number, and it seems I wasn’t the only one playing musical chairs.

Free WiFi was offered, but didn’t work for me, which didn’t matter as my various mobile devices all worked perfectly.

Overhead space for carry on was minimal, and I regretted handing over my bag to go in the hold given that once boarding was complete the bus was less than half full – so I could easily have just dropped it on the empty seat beside me; though retrieving it on arrival cost me mere seconds.

Looking at the other passengers my read was that there were a fair number of business travellers (~60%), some tourists (~30%) and backpackers (~10%).

Punctuality was about what I expected – the trip scheduled for 3h45 took 4h30. We hit bad traffic on the motorway outside of Zurich, and there were lots of roadworks along the way.

The terminus was right by the S-Bahn station, so getting to my hotel was a breeze – I think 3 trains that I could have caught whisked through the station in the time it took me to figure out what sort of ticket I needed from the machine.

Conclusion

Bus wouldn’t be my first choice for this sort of travel, but it was comfortable, inexpensive and productive, and punctuality was no worse than I expected.


Accelerate

23Apr18

TL;DR

Accelerate is now my top book recommendation for people looking for practical guidance on how to do DevOps. It’s a quick read, actionable, and data driven.

Background

I’ve previously recommended the following books for DevOps:

  • The Phoenix Project – Gene Kim’s respin of The Goal is an approachable tale of how manufacturing practices can be applied to IT to get the ‘three DevOps ways’ of flow, feedback and continuous learning by experimentation. It’s very accessible, but also leaves much to the reader’s interpretation.
  • The DevOps Handbook – is much more of a practitioner’s guide: what to do (and what not to do), with copious case studies to illustrate things that have worked well for others.
  • The SRE Book – explains how Google do DevOps, which they call Site Reliability Engineering (SRE). The SRE prescription seems to have worked in many places beyond Google (usually at the hands of Xooglers), so if you’re happy to follow a strict prescription this is a known working approach[1].

I’ve also been a big fan of the annual State of DevOps Reports emerging from DevOps Research and Assessment (DORA) that have been sponsored by Puppet Labs, as they took a very data rich approach to the impact and potential of DevOps practices.

Bringing it all together

Accelerate is another practitioner’s guide like The DevOps Handbook, but it’s much shorter, and replaces case studies with analysis of the data that fuelled the State of DevOps Reports. It’s more suitable as a senior leader’s guide – explaining the why and what – whilst the DevOps Handbook is better suited to mid-level managers who want to know how.

If you’ve been fortunate enough to hear Nicole, Jez and Gene speak at conferences over the past few years you won’t find anything groundbreaking in Accelerate – it’s very much the almanac of what they’ve been saying for some time; but it spares you having to synthesise the guidance for yourself as it’s all clear, concise and consistent in one place.

Roughly half of the book is spent explaining their homework on the data that drove the process. Showing their working is, I think, necessary to the credibility of what’s being presented, but it need not be read and understood in detail if you just want to get on with it.

Conclusion

If you’re going to read one book about DevOps this is the one.

Update

Jez and Nicole were interviewed on the a16z Podcast: Feedback Loops — Company Culture, Change, and DevOps, which provides a great overview of the Accelerate material.

Note

[1] It’s also worth noting that Google have worked with some of their cloud customers to create Customer Reliability Engineering (CRE), which breaks through the normal shared responsibility line for a cloud service provider.


Why?

Everything you access on the Internet starts with a Domain Name System (DNS) query to turn a name like google.com into an IP address like 216.58.218.14. Typically the DNS server that provides that answer is run by your Internet Service Provider (ISP) but you might also use alternative DNS servers like Google (8.8.8.8). Either way regular DNS isn’t encrypted – it’s just plain text over UDP Port 53, which means that anybody along the way can snoop on or interfere with your DNS query and response (and it’s by no means unheard of for unscrupulous ISPs to do just that).
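
To see just how exposed that is, watching port 53 with tcpdump on any machine along the path (your own Linux box will do) shows queries and answers in the clear – a quick illustration rather than part of the setup that follows:

# Watch DNS queries and responses go past in plain text (Ctrl-C to stop)
sudo tcpdump -n -i any port 53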

Cloudflare recently launched their 1.1.1.1 DNS service, which is fast (due to their large network of local data centres plus some clever engineering) and offers some good privacy features. One of the most important capabilities on offer is DNS over TLS, where DNS queries are sent over an encrypted connection, but you don’t get that by simply changing the DNS settings on your device or your home router. To take advantage of DNS over TLS you need your own (local) DNS server that can answer queries in a place where they won’t be snooped or altered; and to do that there are essentially two approaches:

  1. Run a DNS server on every machine that you use. This is perhaps the only workable approach for devices that leave the home like laptops, mobiles and tablets, but it isn’t the topic of this howto – maybe another day.
  2. Run a DNS server on your home network that encrypts queries that leave the home. I already wrote about doing this with OpenWRT devices, but that assumes you’re able and willing to find routers that run OpenWRT and re-flash firmware (which usually invalidates any warranty). Since many people have a Raspberry Pi – and even if you don’t have one they’re inexpensive to buy and run – this guide covers using a Pi as the DNS server.

Prerequisites

This guide is based on the latest version of Raspbian – Stretch. It should work equally well with the full desktop version or the lite minimal image version.

If you’re new to Raspberry Pi then follow the stock guides on downloading Raspbian and flashing it onto an SD card.

If you’re already running Raspbian then you can try bringing it up to date with:

sudo apt-get update && sudo apt-get upgrade -y

though it may actually be faster to download a fresh image and reflash your SD card.

Installing Unbound and DNS utils

Unbound is a caching DNS server that’s capable of securing the connection from the Pi to 1.1.1.1. Other options are available. This guide also uses the tool dig for some testing, which is part of the DNS utils package.

First ensure that Raspbian has up to date package references:

sudo apt-get update

Then install Unbound and DNS utils:

sudo apt-get install -y unbound dnsutils

At the time of writing this installs Unbound v1.6.0, which is a point release behind the latest v1.7.0, but good enough for the task at hand. Verify which version of Unbound was installed using:

unbound -h

which will show something like:

usage:  unbound [options]
        start unbound daemon DNS resolver.
-h      this help
-c file config file to read instead of /etc/unbound/unbound.conf
        file format is described in unbound.conf(5).
-d      do not fork into the background.
-v      verbose (more times to increase verbosity)
Version 1.6.0
linked libs: libevent 2.0.21-stable (it uses epoll),
OpenSSL 1.1.0f  25 May 2017
linked modules: dns64 python validator iterator
BSD licensed, see LICENSE in source package for details.
Report bugs to unbound-bugs@nlnetlabs.nl

At this stage Unbound will already be running, and the installer will have taken care of reconfiguring Raspbian to make use of it; but it’s not yet set up to provide service to other machines on the network, or to use 1.1.1.1.

Configuring Unbound

The Raspbian package for unbound has a very simple configuration file at /etc/unbound/unbound.conf that includes any file placed into /etc/unbound/unbound.conf.d, so we need to add some config there.

First change directory and take a look at what’s already there:

cd /etc/unbound/unbound.conf.d
ls

There should be two files: qname-minimisation.conf and root-auto-trust-anchor-file.conf – we don’t need the former as it will be part of the config we introduce, so remove it with:

sudo rm qname-minimisation.conf

Create an Unbound server configuration file:

sudo bash -c 'cat >> /etc/unbound/unbound.conf.d/unbound_srv.conf \
<<UNBOUND_SERVER_CONF
server:
 qname-minimisation: yes
 do-tcp: yes
 prefetch: yes
 rrset-roundrobin: yes
 use-caps-for-id: yes
 do-ip6: no
 interface: 0.0.0.0
 access-control: 0.0.0.0/0 allow
UNBOUND_SERVER_CONF'

It’s necessary to use ‘sudo bash’ and quotes around the content here because with a plain ‘sudo cat’ the file redirection would be handled by your own (non-root) shell rather than under sudo.
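
An equivalent approach (shown as a sketch of the same server config, offered as an alternative rather than something to run as well) is to pipe the heredoc into sudo tee, which keeps the write on the privileged side:

# tee -a appends like the '>>' above; drop the -a to overwrite instead
cat <<'UNBOUND_SERVER_CONF' | sudo tee -a /etc/unbound/unbound.conf.d/unbound_srv.conf > /dev/null
server:
 qname-minimisation: yes
 do-tcp: yes
 prefetch: yes
 rrset-roundrobin: yes
 use-caps-for-id: yes
 do-ip6: no
 interface: 0.0.0.0
 access-control: 0.0.0.0/0 allow
UNBOUND_SERVER_CONF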

These settings are mostly lifted from ‘What is DNS Privacy and how to set it up for OpenWRT‘ by Torsten Grote, but it’s worth taking a look at each in turn to see what they do (based on the unbound.conf docs):

  • qname-minimisation: yes – Sends the minimum amount of information to upstream servers to enhance privacy.
  • do-tcp: yes – This is the default, but better safe than sorry – DNS over TLS needs a TCP connection (rather than UDP that’s normally used for DNS).
  • prefetch: yes – Makes sure that items in the cache are refreshed before they expire so that the network/latency overhead is taken before a query needs an answer.
  • rrset-roundrobin: yes – Rotates the order of records within an RRset (Resource Record Set) in responses, based on a random number taken from the query ID.
  • use-caps-for-id: yes – Use 0x20-encoded random bits in the query to foil spoof attempts. This perturbs the lowercase and uppercase of query names sent to authority servers and checks if the reply still has the correct casing. Disabled by default. This feature is an experimental implementation of draft dns-0x20.
  • do-ip6: no – Turns off IPv6 support – obviously if you want IPv6 then drop this line.
  • interface: 0.0.0.0 – Makes Unbound listen on all IPs configured for the Pi rather than just localhost (127.0.0.1) so that other machines on the local network can access DNS on the Pi.
  • access-control: 0.0.0.0/0 allow – Tells Unbound to allow queries from any IP (this could/should be altered to be the CIDR for your home network).

Next configure Unbound to use Cloudflare’s DNS servers:

sudo bash -c 'cat >> /etc/unbound/unbound.conf.d/unbound_ext.conf \
<<UNBOUND_FORWARD_CONF
forward-zone:
    name: "."
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
    forward-ssl-upstream: yes
UNBOUND_FORWARD_CONF'

There are two crucial details here:

  1. @853 at the end of the primary and secondary server IPs tells unbound to connect to Cloudflare using port 853, which is the secured end point for the service.
  2. forward-ssl-upstream: yes – is the instruction to use DNS over TLS, in this case for all queries (name: “.”)
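
Before restarting, a couple of optional sanity checks can save some head scratching (unbound-checkconf should have been installed alongside Unbound, and openssl is already part of Raspbian):

# Check that the Unbound config files parse cleanly
sudo unbound-checkconf

# Check that 1.1.1.1 is reachable on port 853 and presents a TLS certificate
openssl s_client -connect 1.1.1.1:853 < /dev/null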

Restart Unbound to pick up the config changes:

sudo service unbound restart

Testing

First check that the Unbound server is listening on port 53:

netstat -an | grep 53

Should return these two lines (not necessarily together):

tcp    0    0 0.0.0.0:53    0.0.0.0:*    LISTEN
udp    0    0 0.0.0.0:53    0.0.0.0:*

Then try looking up google.com:

dig google.com

The output should end something like:

;; ANSWER SECTION:
google.com.    169    IN    A    172.217.4.174

;; Query time: 3867 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Apr 04 14:46:09 UTC 2018
;; MSG SIZE  rcvd: 55

Note that the Query time is pretty huge when the cache is cold[1]. Try again and it should be much quicker.

dig google.com
;; ANSWER SECTION:
google.com.    169    IN    A    172.217.4.174
;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Wed Apr 04 14:46:09 UTC 2018
;; MSG SIZE rcvd: 55

Using it

Here’s where you’re somewhat on your own – you have a Pi working as a DNS server, but you now need to reconfigure your home network to make use of it. There are two essential elements to this:

  1. Make sure that the Pi has a known IP address – so set it up with a static IP, or configure DHCP in your router (or whatever else is being used for DHCP) to have a reservation for the Pi.
  2. Configure everything that connects to the network to use the Pi as a DNS server, which will usually be achieved by putting the (known from the step above) IP of the Pi at the top of the list of DNS servers passed out by DHCP – so again this is likely to be a router configuration thing.
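
Once that’s done, it’s worth a quick test from another machine on the network by pointing dig straight at the Pi – 192.168.1.53 below is just a placeholder for whatever IP your Pi ended up with:

# Run from a laptop/desktop on the same network, substituting the Pi's IP
dig @192.168.1.53 google.com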

Updates

10 Apr 2018 – Thanks to comments from Quentin and TJ I’ve added the access-control line to the server config. Also check out Quentin’s repo (and Docker Hub) for putting Unbound into a Docker container.
4 Jun 2018 – Modified the config to use #cloudflare-dns.com per comment from Daniel Aleksandersen (see this thread for background).
20 Jul 2018 – I was just trying to use the Pi that I first tested this on, and it was stubbornly refusing to work. The issue turned out to be time. The Pi hadn’t got an NTP sync when it came up, and was unable to establish a trusted connection with Cloudflare when the clock was too far off.
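
If you hit the same symptom it only takes a moment to check whether the clock has actually synced (timedatectl is part of systemd on Raspbian):

# Look for 'NTP synchronized: yes' (newer releases call it 'System clock synchronized')
timedatectl status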

Note

[1] Almost 4 seconds is ridiculous, but that’s what I’m getting with a Pi on a WiFi link to a US cable connection. From my home network it’s more like 160ms. I suspect that Spectrum might be tarpitting TCP to 1.1.1.1



Using 1.1.1.1

02Apr18

TL;DR

One of the best features of Cloudflare’s new 1.1.1.1 DNS service is the privacy provided by DNS over TLS, but some setup is required to make use of it. I put Unbound onto the OpenWRT routers I use as DNS servers for my home network so that I could use it.

Background

Yesterday Cloudflare launched its public DNS service 1.1.1.1. It’s set up to be amazingly fast, but has some great privacy features too – Nick Sullivan tweeted some of the highlights:

The 1.1.1.1 resolver also implements the latest privacy-enhancing standards such as DNS-over-TLS, DNS-over-HTTPS, QNAME minimization, and it removes the privacy-unfriendly EDNS Client Subnet extension. We’re also working on new standards to fix issues like dnscookie.com

The thing that I’m most interested in is DNS over TLS, as that prevents queries from being snooped, blocked or otherwise tinkered with by ISPs and anybody else along the path; but using that feature isn’t simply a matter of putting 1.1.1.1 into your DNS config. So I spent a good chunk of yesterday upgrading the DNS setup on my home network.

Unbound

I’ve written before about my BIND on OpenWRT setup, which takes care of the (somewhat complex) needs I have for DNS whilst running on kit that would be powered on anyway and that reliably comes up after (all too frequent) power outages. Whilst it’s probably possible to configure BIND to forward to 1.1.1.1 over TLS, the means to do that weren’t obvious; meanwhile I found a decent howto guide for Unbound in minutes – ‘What is DNS Privacy and how to set it up for OpenWRT’ – thanks Torsten Grote.

Before I got started both of the routers needed to be upgraded to the latest version of OpenWRT, as although Unbound is available for v15.05.1 ‘Chaos Calmer’ it’s not a new enough version to support TLS – I needed OpenWRT 17.01.4, which has Unbound v1.6.8. The first upgrade was a little fraught as despite doing ‘keep config’ the WRTNode in my garage decided to reset back to defaults (including dropping off my 10. network onto 192.168.1.1) – so I had to add a NIC on another subnet to one of my VMs to rope it back in. The upgrades also ditched the existing BIND setup, but that was somewhat to be expected, and I had it all under version control in git anyway.

My initial plan was to put Unbound in front of BIND, but after many wasted hours I gave up on that and went for BIND in front of Unbound. In either case Unbound is taking care of doing DNS over TLS to 1.1.1.1 for all the stuff off my network, and BIND is authoritative for the things on my network. I’m sure it would be better for Unbound to be up front, but I just couldn’t get it to work with one of my BIND zones – every single A record with a 10. address was returning an authority list but no answer. I even tried moving that zone to Knot, which behaved the same way.

Here are the config files I ended up with for Unbound:

unbound_srv.conf

do-tcp: yes
prefetch: yes
qname-minimisation: yes
rrset-roundrobin: yes
use-caps-for-id: yes
do-ip6: no
do-not-query-localhost: no #leftover from using Unbound in front of BIND
port: 2053

unbound_ext.conf

forward-zone:
    name: "."
    forward-addr: 1.1.1.1@853#cloudflare-dns.com
    forward-addr: 1.0.0.1@853#cloudflare-dns.com
    forward-addr: 9.9.9.9@853#dns.quad9.net
    forward-addr: 149.112.112.112@853#dns.quad9.net
    forward-ssl-upstream: yes

and here’s the start of my BIND named.conf (with the zone config omitted as that’s particular to my network and yours will be different anyway):

options {
        directory "/tmp";

        listen-on-v6 { none; };

        forwarders {
                127.0.0.1 port 2053;
        };

        auth-nxdomain no;    # conform to RFC1035
        notify yes;          # notify slave server(s)
};
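
With both daemons running, the chain can be sanity checked from the router itself, assuming dig is installed there (OpenWRT packages it separately), or from another machine on the network by substituting the router’s IP:

# Query Unbound directly on its non-standard port
dig @127.0.0.1 -p 2053 google.com

# Query BIND on port 53 – anything it isn't authoritative for gets forwarded on to Unbound
dig @127.0.0.1 google.com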

Could I have just used BIND?

Probably, but it fell into ‘life’s too short’, and I expect Unbound works better as a caching server anyway.

Could I have just used Unbound?

If my DNS needs were just a little simpler then yes. Having started out without the ability to do authoritative DNS, Unbound has become quite capable in this area since I last looked at it; but it doesn’t yet do CNAMEs how I’d like them, and it’s a definite no for my SSH over DNS tunnelling setup (or at least being able to test that at home).

Could I have just used Knot?

I tried the Knot ‘tiny proxy’ but couldn’t get it to work. The 2.3 version of Knot packaged for OpenWRT is pretty ancient, and although this is the DNS server that Cloudflare use as the basis for 1.1.1.1 it’s short on howtos etc. beyond the basic online docs.

Conclusion

This is the DNS setup I’ve been wanting for a long while. I might lose a little speed with the TLS connection (and the TCP underneath it), but I gain speed back with Cloudflare being nearby, and Unbound caching on my network; I also gain privacy. Plain old DNS over UDP has been one of the most glaring privacy issues on the Internet, and I shudder to think of how that’s been exploited.