TL;DR

I need local DNS for various home lab things, but the Windows VMs I’ve been using can be slow and unreliable after a power outage (which happens too frequently). Moving to BIND turned out to be much easier than I feared, and I chose OpenWRT devices to run it on as I wanted reliable turnkey hardware.

Background

My home network is a mixture of ‘production’ – stuff I need for working from home and that the family rely on, and ‘dev/test’ – stuff that I use for tinkering. Both of these things need DNS, which I’ve been running on Windows Active Directory (AD) since it was in Beta in 1999. For many years I had an always-on box that doubled as an AD server and my personal desktop, but that was before I became more concerned about power consumption and noise. For the last few years my AD has been on VMs running on a couple of physical servers at opposite ends of the house. That’s fine under normal circumstances, but it leads to lots of ‘the Internet’s not working’ complaints if a power outage takes down the VM servers.

Why OpenWRT?

I already have a few TP-Link routers that have been re-purposed as WiFi access points, and since the stock firmware is execrable I’ve been running OpenWRT on them for years. They seem to have survived all of the power outages without missing a beat, and restart pretty quickly.

Why BIND?

Despite a (possibly undeserved) reputation for being difficult to configure and manage, BIND is the DNS server that does it all (at least according to this comparison). It’s also available as an OpenWRT package, so all I needed to do was follow the BIND HowTo and:

opkg update
opkg install bind-server bind-tools

Getting my zone files

Windows provides a command line tool to export DNS from Active Directory, and the files that it creates can be used directly as BIND zone files:

dnscmd /ZoneExport Microsoft.local MSzone.txt

The exports show up in %SystemRoot%\System32\Dns and it’s then a case of copying and cleaning up; the cleaning up being necessary because the exports are full of AD SRV records that I don’t need. I simply deleted the SRV records en masse, and tweaked the NS records to reflect the IPs of their new homes.
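The bulk delete can also be scripted rather than done by hand in an editor. A minimal sketch, assuming the AD service records all sit under underscore-prefixed labels (the output filename is arbitrary, and it’s worth eyeballing what’s left afterwards for any other AD leftovers):

# strip the underscore-prefixed AD service records from the export
grep -v "^_" MSzone.txt > microsoft.local.zone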

With clean zone files it was a simple matter of copying them over to the OpenWRT routers with scp and configuring named.conf to pick them up and use my preferred forwarders[1]. A restart of named then allowed my new BIND server to do its thing:

/etc/init.d/named restart
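For reference, each zone only needs a few lines in named.conf. A minimal sketch, with the zone name and file path as illustrative placeholders rather than copies of my actual config (the forwarders go in the options block, as shown in the note below):

zone "microsoft.local" {
	type master;
	file "/etc/bind/microsoft.local.zone";
};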

Conclusion

I’d been put off BIND due to tales of its complexity, but for my purposes it’s a breeze to use. The fact that I was able to export my existing DNS setup straight into files that were suitable for use as BIND zone files made things extra easy.

Note

[1] I used to run a fully recursive DNS server at home, but I found it could be slow at times, so I’ve been using forwarders ever since. I’m not spectacularly keen on giving a list of everything I visit on the Internet to anybody, but ultimately I’ve settled on this selection of forwarders:

	forwarders {
		64.6.64.6;      # Verisign primary
		64.6.65.6;      # Verisign secondary
		8.8.8.8;        # Google primary
		208.67.222.222; # OpenDNS primary
		8.8.4.4;        # Google secondary
		208.67.220.220; # OpenDNS secondary
		80.80.80.80;    # Freenom primary
		80.80.81.81;    # Freenom secondary
	};

I really wish a company with strong values (like CloudFlare) ran a service that I’d be happy to forward to, though the snoopers’ charter is making me reconsider my whole approach to DNS – I may have to tunnel my DNS traffic offshore like I’ve done with my Android tablet – anybody know a DNS server that can be forced to use a SOCKS proxy?


Restoring Power

22Jan17

TL;DR

I had a huge problem with ‘nuisance trips’ of the residual current device (RCD) in my house, which has been resolved by the installation of residual current circuit breakers with overcurrent protection (RCBOs). More reliable power to individual circuits in the house (and particularly the garage) has forced me to set up better monitoring so that I’m actually aware of circuit breaker trips.

Background

I live in a new build house that was completed in 2002, but it has perhaps the least reliable electricity supply I’ve ever encountered. Brownouts and power cuts (short and long) are all too frequent, so I’ve invested in a number of uninterruptible power supplies (UPSs) to protect power to my home network and the servers (and their files) that live on it.

Under the wiring regulations in force at the time, all of the power sockets in my house were protected by a single RCD in the consumer unit (aka ‘fuse board’). The RCD was suffering from an increasing number of ‘nuisance trips’ (activating when it wasn’t actually saving me or somebody else from electrocution), taking power to the entire house down with it each time it went off.

Why now?

In the run-up to Christmas I was experiencing more nuisance trips than ever – three in a single day at one stage – and every time we turned the oven on there was a minute of anticipation about whether it would trip the board again[1].

The incident that spurred me into action happened over New Year. We took a long weekend away with family and friends in the North East of England and Borders of Scotland. On returning home the house was freezing – the power had been out since the evening that we left, and so the central heating had been off. At least the cold weather meant that stuff in the fridges and freezers wasn’t impacted. It took hours to get everything back on – it seemed impossible to have all of the house circuits on at once without tripping the RCD and taking everything down again[2].

RCBOs vs RCD

A bit of online research led me to this Institution of Engineering and Technology (IET) forum thread about nuisance RCD trips in a house with ‘a lot of computers’. The response was:

Too many computers, cumulative earth leakage is tripping the RCD.
Ditch the single RCD and install RCBOs.

I’d not previously heard of RCBOs, but they’re basically devices that combine the functions of a Miniature Circuit Breaker (MCB) and an RCD into a single breaker. This has a couple of advantages:

  1. When a trip happens due to high residual current leakage to earth it only takes out a single circuit rather than the whole board.
  2. Any background levels of current leakage get spread across a number of breakers rather than accumulating on a single breaker and bringing it ever closer to its tripping point.

I got in touch with a friendly electrician[3] and he’s now installed a new consumer unit along with RCBOs[4] for each of the socket circuits (and, in accordance with updated wiring regulations, the lighting circuits are now RCD-protected too, using a ‘split board’ approach).

Since getting the RCBOs installed there hasn’t been a single trip on any circuit – so mission accomplished :)

A quick detour on suppression

The IET forum thread I referenced above also contains some discussion of the problems that come with transient voltage suppressors (TVSs). The issue is that when there’s a voltage spike the suppressor gives it a path to earth, which then causes a residual current leak that trips an RCD.

I removed all of the surge-suppressed power strips from the house, but it didn’t make any difference, likely because I left in place my 4 UPSs, which I expect all have TVSs of their own inside.

Every silver lining…

Now that I had reliable power there was a new problem. How would I know when power to the garage (and the server and freezer out there) went off? A trip on the garage RCBO wouldn’t take out the rest of the house, so it would be far less obvious. If I was away on a business trip it’s possible that the rest of the family wouldn’t notice for days.

The answer was to have better monitoring…

Time to go NUTs

I have a PowerWalker VI 2200 out in the garage looking after my Dell T110 II and associated network switch. Like most UPSs it has a USB port that provides a serial-over-USB connection. I hooked this up to a VM running on the server and installed Network UPS Tools (NUT) to keep an eye on things.

Email alerts

I use upsmon to run upssched to run a script that sends me an email when power goes out (and another when it’s restored). I’ve dropped the configs and scripts into a gist if you want to do this yourself (thanks to Bogdan for his ‘How to monitor PowerMust 2012 UPS with Ubuntu Server 14’ for showing me the way here).
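If you’d rather see the shape of it before diving into the gist, it boils down to three pieces: upsmon hands events to upssched, upssched calls a command script, and the script sends the mail. A rough sketch, with the paths, email address and message text as placeholders rather than my real config (it also assumes a working mail command, e.g. from bsd-mailx):

# /etc/nut/upsmon.conf (excerpt)
NOTIFYCMD /sbin/upssched
NOTIFYFLAG ONBATT SYSLOG+EXEC
NOTIFYFLAG ONLINE SYSLOG+EXEC

# /etc/nut/upssched.conf (excerpt)
CMDSCRIPT /etc/nut/upssched-cmd
PIPEFN /var/run/nut/upssched.pipe
LOCKFN /var/run/nut/upssched.lock
AT ONBATT * EXECUTE onbatt
AT ONLINE * EXECUTE online

# /etc/nut/upssched-cmd (the script that does the emailing)
#!/bin/sh
case "$1" in
  onbatt) echo "Power is out" | mail -s "UPS on battery" me@example.com ;;
  online) echo "Power is back" | mail -s "UPS back on mains" me@example.com ;;
esac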

Graceful shutdown

My NAS and Desktop NUC both interface directly with their local UPS over USB and are configured to shut down when the battery level goes critical. I needed the same for the VMware vSphere servers running off the other two UPSs (in my garage and loft). Luckily this has been taken care of by René with his ‘NUT Client for ESXi 5 and 6‘ (Google Translation to English), so I just had to install and configure that, pointing it at the same NUT servers I used for email alerts.
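For any networked client (the ESXi plugin included) to reach a NUT server, that server needs to listen on the LAN and have a login for the client to use. The relevant server-side bits look roughly like this – the username, password and listen address are placeholders, not my actual settings:

# /etc/nut/nut.conf
MODE=netserver

# /etc/nut/upsd.conf
LISTEN 0.0.0.0 3493

# /etc/nut/upsd.users
[esximon]
	password = changeme
	upsmon slave

The ESXi client then just needs the NUT server’s IP, the UPS name and those credentials.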

Conclusion

RCBOs have totally solved the problems I was having with RCD nuisance trips, and I now have monitoring and graceful shutdown in place for when there are real power issues.

Notes

[1] Conventional fault finding would suggest a fault with the oven, but I wasn’t convinced given that it’s only about 18 months old. My take was that a trivial issue with the oven that wouldn’t trip the RCD on its own was taking it over the edge when added to other leakage on other circuits. The fact that the oven doesn’t trip its new RCBO seems to bear that out.
[2] Going through the usual fault finding to identify a single culprit was fruitless. Everything was wrong and nothing was wrong; it didn’t matter which circuit I isolated. My hypothesis became that there was probably about 3-4mA of leakage on each circuit in the house, combining into a total that left the single RCD on a hair trigger for any additional leakage.
[3] Despite being an electronics engineering graduate, a member of the IET (MIET) and a Chartered Engineer (CEng), I’m not allowed to do my own electrical work. I recall that there were ructions about this when the revised regulations were proposed and introduced, but it’s pretty sensible. Messing around with the 100A supplies to consumer units is no business for amateurs, and I was happy to get in a professional.
[4] After a bit of hunting around and price comparison I went for a Contactum Defender consumer unit and associated RCBOs and MCBs from my local TLC Direct. The kit came to just over £200, and getting it professionally installed cost about the same – so if you need to do this yourself budget around £350-500 depending on how many circuits you have.


TL;DR

I thought I could put Squid in front of an SSH tunnel, but it can’t do that. Thankfully Polipo can do the trick.

Why?

I was quite happy when it was just spies that were allowed to spy on me (even if they might have been breaking the law by doing so), but I see no good reason (and much danger) in the likes of the Department for Work and Pensions being able to poke its nose into my browsing history. The full list of agencies empowered by the ‘snoopers’ charter‘ is truly horrifying.

PCs are easy

I’ve been using Firefox (with the FoxyProxy plugin) for a while to choose between various virtual private servers (VPSs) that I run. This lets me easily choose between exit points in the US or the Netherlands (and entry points on a virtual machine [VM] on my home network or a local SSH session when I’m on the road).

To keep the tunnels up on the home network VM I make use of autossh, e.g.:


/usr/lib/autossh/autossh -M 20001 -D 0.0.0.0:11111  [email protected]

I can then use an init script to run autossh within a screen session (gist):

#!/bin/sh
### BEGIN INIT INFO
# Provides:   sshvps
# Required-Start: $local_fs $remote_fs
# Required-Stop:  $local_fs $remote_fs
# Should-Start:   $network
# Should-Stop:    $network
# Default-Start:  2 3 4 5
# Default-Stop:   0 1 6
# Short-Description:  Tunnel to VPS
# Description:    This runs a script continuously in screen.
### END INIT INFO

NAME=sshvps

case "$1" in

  start)
        echo "Starting sshvps"
        su chris -c "screen -dmS sshvps /usr/lib/autossh/autossh -M 20001 -D 0.0.0.0:11111  [email protected]"
        ;;
  stop)
        echo "Stopping sshvps"
        PID=`ps -ef | grep autossh | grep 20001 | grep -v grep | awk '{print $2}'`
        kill -9 $PID
        ;;

  restart|force-reload)
        echo "Restarting sshvps"
        PID=`ps -ef | grep autossh | grep 20001 | grep -v grep | awk '{print $2}'`
        kill -9 $PID
        sleep 15
        su chris -c "screen -dmS sshvps /usr/lib/autossh/autossh -M 20001 -D 0.0.0.0:11111  [email protected]"
        ;;
  *)
        N=/etc/init.d/$NAME
        echo "Usage: $N {start|stop|restart}" >&2
        exit 1
        ;;
esac
exit 0

That script lives in /etc/init.d on my Ubuntu VM (I haven’t yet migrated to systemd). With the VM running it provides a SOCKS proxy listening on port 11111 that I can connect to from any machine on my home network, so I can then put an entry into the Firefox FoxyProxy plugin to connect to VM_IP:11111.
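A quick way to check the tunnel from another machine on the LAN (before trusting a browser to it) is to push a request through the SOCKS proxy with curl – the target URL here is just one of many IP echo services, so substitute your own favourite:

curl --socks5-hostname VM_IP:11111 https://ifconfig.co

If that comes back with the VPS’s address rather than the home connection’s, the tunnel is doing its job.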

Android isn’t so easy

It’s possible to define a proxy in Firefox on Android by going to about:config, searching for proxy, and completing a few fields, e.g.:

  • network.proxy.socks VM_IP
  • network.proxy.socks_port 11111
  • network.proxy.type 2

That, however, will only work on a given network – in this case my home network.

I could (and sometimes will) use localhost as the SOCKS address and then use an SSH client such as ConnectBot, but that means starting SSH sessions before I can browse, which will get tiresome quickly.

Android does allow an HTTP proxy to be defined for WiFi connections, but it doesn’t work with SOCKS proxies – I needed a bridge from HTTP to SOCKS.

Not Squid

I’ve used Squid before, and in fact it was already installed on the VM I use for the tunnels. So I went searching for how to join Squid to an SSH tunnel.

It turns out that Squid doesn’t support SOCKS parent proxies, but this Ubuntu forum post not only clarified that, but also pointed me to the solution, another proxy called Polipo.

Polipo

Installing Polipo was very straightforward, as it’s a standard Ubuntu package:


sudo apt-get install -y polipo

I then needed to add a few lines to the /etc/polipo/config file to point at the SSH tunnel and listen on all interfaces:

socksParentProxy = "localhost:11111"
socksProxyType = socks5
proxyAddress = "0.0.0.0"

Polipo needs to be restarted to read in the new config:


sudo service polipo restart
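Before fiddling with the tablet it’s worth a quick sanity check from any machine on the LAN that the whole chain (HTTP proxy into SOCKS tunnel into VPS) is working – again the test URL is just an illustrative IP echo service:

curl -x http://VM_IP:8123 https://ifconfig.co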

Once that was done I had an HTTP proxy listening on (the default) port 8123. All I had to do to use it was long-press on my home WiFi connection in Android, tap ‘Modify Connection’, tick ‘Show advanced options’ and input VM_IP as the ‘Proxy hostname’ and 8123 as the ‘Proxy port’.

A quick Google of ‘my ip’ showed that my traffic was emerging from my VPS.

Yes, I know there are holes

DNS lookups will still be going to the usual place, in my case a pair of local DNS servers that forward to various public DNS resolvers (though not my ISP’s). If I was ultra paranoid I could tunnel my DNS traffic too.

When I’m using the tablet out and about on public WiFi and cellular networks I’ll not be proxying my traffic (unless I explicitly use an SSH tunnel).

Conclusion

Polipo provided the bridge I needed between Android’s ability to use WiFi specific HTTP proxies and the SSH tunnels I run out of a VM on my home network to various VPSs outside the UK. I don’t for a moment expect this to provide any protection from real spies, but it should prevent casual snooping, and will also guard against the inevitable ISP data protection failures.


Amongst the flurry of announcements at re:Invent 2016 was the launch of a developer preview for a new F1 instance type. The F1 comes with one to eight high-end Xilinx Field Programmable Gate Arrays (FPGAs) to provide programmable hardware to complement the Intel Xeon E5-2686 v4 processors, up to 976 GiB of RAM and 4 TB of NVMe SSD storage. The FPGAs are likely to be used for risk management, simulation, search and machine learning applications, or anything else that can benefit from hardware-optimised coprocessors.

continue reading the full story at InfoQ


Amazon have launched Lightsail, a Virtual Private Server (VPS) service to compete with companies like Digital Ocean, Linode and the multitude of Low End Box providers. The service bundles a basic Linux virtual machine with SSD storage and a bandwidth allowance. Pricing starts at $5/month with tiers by RAM allocation. Each larger configuration comes with more storage and bandwidth, though these scale sub-linearly versus RAM/price.

[Image: Lightsail pricing tiers]

continue reading the full story at InfoQ


I went along to Serverlessconf last week and wrote up a couple of InfoQ news pieces about it:

Day 1 – Serverless Operations is Not a Solved Problem

The emergent theme from day one of the Serverlessconf London 2016 was that far from being ‘NoOps’, Serverless platforms bring with them substantial operational challenges. The physical servers and virtual machines may have been abstracted away, but that doesn’t mean an end to infrastructure configuration; and developers ignore the implications of underlying persistence mechanisms at their peril.

continue reading the full story at InfoQ

Day 2 – Serverless Frameworks

The emergent theme for the second day of Serverlessconf London 2016 was the availability and functionality of management frameworks to address the operational issues highlighted on day one. The Node.js based Serverless Framework featured in at least three talks, and there was also coverage of Zappa, a more opinionated Python based framework.

continue reading the full story at InfoQ


TL;DR

I’ve been very happy with the X250 – it’s given me the same performance I got from my X230, but with better battery life and a smaller form factor, and it seems more robust.

Long term review

I started writing this post in January not long after I got my X250, but I never got past the title, and another nine months have rolled by. In that time the X250 has been a faithful companion on a pretty much daily basis.

[Image: There are many like it, this is mine]

RAM and SSD

My X250 came with 8GB RAM and a 256GB SSD, neither of which is really sufficient for my needs, so I dropped in a 16GB DIMM from Crucial[1] and a 500GB SSD from SanDisk[2]. The X250 can take 2.5″ SATA and/or M.2 SSDs, though I’ve not tried the latter (as I already had a spare 2.5″ drive to hand).

Performance

Subjectively the X250 is no better or worse than the X230 it replaced, despite the three year age gap. That’s fine, because the older laptop had all of the speed I needed, but it’s interesting to note that laptops have essentially plateaued in performance, offering better battery life instead.

For a less subjective view, the X250 gets a Geekbench 3 multi-core score of 5166 (5083 on battery) versus 4065 for the X230 – so there is some extra quantitative performance there. I expect that the newer GPU would also be much better (but hardly ideal) for gaming e.g. it would provide a noticeable improvement to Skyrim, but it’s not going to cope with No Man’s Sky.

Battery

The X250 actually has two batteries, an integral battery in the main body, and a detachable battery in the usual place. Together they provide around 6hrs of real world use, which is sufficient to get through a day away from power outlets at a conference or similar.

Form factor

The X250 and its 12″ screen have the same width and depth as the older X230, but it’s a good bit thinner, whilst still offering full-sized VGA and network ports (so no need to carry a bag of adapters).

Screen

The 12″ touchscreen has the same 1366 x 768 resolution as the screen I had before, and it’s nice and bright. It’s thicker than the X230 screen, but more robust as a result.

Resilience

After 10 months a worn-down Ultrabook label shows that it’s had plenty of use, but that’s the only sign – nothing else seems to be showing any age or wear and tear. It will be another 8 months before I can do a fair comparison, but it seems to be made of stronger stuff than my old X230. It appears that Lenovo have got the old ThinkPad mojo back for making laptops that can withstand what everyday life throws at them.

Niggles

Every ThinkPad I’ve had previously sported a removable drive bay, which I generally took advantage of and found useful. The X250 has dispensed with this, which means taking off the base (and dealing with potentially fragile plastic clips) to get at the SSD. It’s the same story for the RAM, which doesn’t have an access door.

The M.2 SSD interface only takes SATA drives, so there’s no option for a further boost with NVMe.

The slimmed down form factor means that Lenovo have changed their 20v power supply jack from circular to rectangular, so I’ve had to buy a bunch of adaptors for the handful of power supplies I already had in my daily use and travel bags.

Should I have waited for an X260?

Perhaps – but they weren’t available in December 2015, and I wasn’t going to refuse a laptop that fitted the bill. The Skylake CPU in the later model might have given me even better battery life, but that’s the only difference that I’d expect to notice.

Conclusion

I’ve been very happy with the X250. It’s fast, small, lightweight, has a full selection of ports and can get through the day without worrying about when the next charging opportunity will come. It also seems to show that Lenovo have restored the build quality that traditionally went with the ThinkPad brand, and perhaps slipped a little a few years ago.

Update 23 Apr 2020

I got a few comments to my earlier X230 posts overnight that caused me to revisit this. 4+ years in and my X250 is going strong. The build quality issues I experienced with the X230 haven’t come up. In fact the only things that stop it from feeling like a new laptop are inevitable long term battery decline, and wear and tear to the keyboard and screen. If somebody offered me a new X395 tomorrow I’d take it, but I’m not beating anybody’s door down to get a new laptop – it’s still entirely fit for purpose for my day to day work – enough speed, enough RAM, enough SSD.

I bought my wife an X270 a little while ago, which seems pretty much the same as the X250 in terms of form factor and build quality.

Update 16 Mar 2021

My original X250 made it past 5 years, but I had to give it back. So… I bought another one, which now carries the SSD and extra RAM I’d bought back in 2015. It doesn’t have a touch screen, but I never used that, and it’s in better condition than the one that clocked up over 325,000 miles with me.

If I didn’t have the SSD and RAM I’d have probably gone for an X270, as I’d prefer an NVMe SSD. But I’m quite happy to keep on trucking with an X250, as it’s proven to be performant, robust, and the right size and weight.

Notes

[1] The DIMM failed after about 8 months causing the laptop to become *very* unstable. A quick run of Memtest86 revealed the culprit, and I swapped back to the OEM DIMM whilst Crucial did their RMA process, which took longer than I might have hoped, but was otherwise painless.
[2] I don’t seem to have filled the SSD anything like as quickly as I did the similar size one I first put into my X230, so there’s been no need yet to upgrade.


Metaprogramming

26Sep16

I spent part of my weekend absorbing Rod Johnson’s ‘Software That Writes And Evolves Software‘, which introduces what he’s been doing at his new company Atomist, and particularly the concept of ‘Editors’, which are essentially configuration templates for programs. The combination of Atomist and its Editors is a powerful new means of metaprogramming.

I’ll repeat Rod’s suggestion that it’s well worth watching Jessica Kerr’s demo of using Atomist and Editors with Elm:

Why this could be huge

Firstly Rod has form for making the lives of developers easier. His Spring Framework for Java transformed the landscape of Enterprise Java programming, and largely supplanted the ‘Enterprise Edition’ parts of Java Enterprise Edition (JEE [or more commonly J2EE]).

The war against boilerplate

One of the trickier things about using the EE parts of JEE was the sheer volume of boilerplate code needed to get something like Enterprise Java Beans (EJB) working. This is a sickness that still plagues Java to this day – witness Trisha Gee’s Java 8 in Anger (and particularly the parts about Lambdas). Spring fixed this by stripping out the boilerplate and putting the essence into a config file for dependency injection – this got even better when Rod brought Adrian Colyer on board to integrate aspect oriented programming, as it became possible to do really powerful stuff with just a few lines of code.

Jess’s Elm demo shows that the war against boilerplate rumbles on. Even modern programming languages that are supposed to be simple and expressive make developers do grunt work to get things done, so there’s a natural tendency towards scripting away the boring stuff – something that Atomist provides a framework for.

For infrastructure (as code) too…

Atomist’s web site shouts ‘BUILD APPLICATIONS, NOT INFRASTRUCTURE’, but there’s clearly a need for this stuff in the realm of infrastructure as code. Gareth Rushgrove asked yesterday ‘Does anyone have a tool for black box testing AWS AMIs?’, and the discussion rapidly descended into ‘everybody starts from scratch’ with a side order of ‘there should be a better way’. The issue here is that for any particular use case it’s easier to hack something together with single-use scripts than it might be to learn the framework that does it properly. Metaprogramming is potentially the answer here, but it also raises an important issue…

This stuff is hard

If programming is hard then metaprogramming is hard squared – you need to be able to elevate the thought process to reasoning about reasoning.

Jessica’s demo is impressive, and she makes it look easy, but I take that with a pinch of salt because Jessica is a programming goddess who can do stuff with functors that makes my brain melt.

Documentation, samples and examples to the rescue

Perhaps the whole point here isn’t to *do* metaprogramming, but to use metaprogramming. Spring didn’t have to be easy to write, but it was easy to use. Likewise if the hard work is done for us by Rod, and Jessica, and Gareth then that provides a boost for everybody else as we stand on the shoulders of giants.

It’s about making the right way to do things also the easy way to do things from a learning perspective, and Rod, Jessica and Gareth all have great form here with their books and presentations. If Atomist succeeds then it will be because the documentation, samples and examples that come with it make it easier to get things done with Atomist – the value isn’t just in the tool (or the approach that underlies it), but in the learning ecosystem around the tool.

I have great hopes that metaprogramming (and particularly the Atomist implementation of metaprogramming) will help us win the war against boilerplate (and hacked together scripts) – because it will be easier to start with their documentation, samples and examples.


TL;DR

Meat can be cooked safely at well below 100C, and comes out better for it, so why do cook books and TV chefs never suggest it?

Background

I love to eat and I love to cook, which is one of the reasons that I made my own temperature controlled sous vide water bath – so that I could experiment with new cooking styles.

Today’s Sunday Roast was beef brisket that I cooked for 16 hours at 88C (190F). The gravy that I made with the juices was so nice that my wife finished off what was left over with a spoon.

[Image: the brisket]

Low and Slow

Using the water bath has encouraged me to try other low temperature cooking methods. I frequently roast leg of lamb or pork belly in a fan oven overnight at around 75C, which brings me to the point – for me the low in ‘low and slow’ is generally below 100C – I don’t boil my food for hours on end.

But I’ve never, ever seen a TV chef or cook book (apart from the excellent Cooking for Geeks) suggest roasting at anything less than 100C (and generally a good margin above that), and with higher temperatures come quicker cooking times, so ‘slow and low’ becomes neither.

It’s an artefact of human history that there were two ways to make water safe to drink:

  1. Boiling
  2. Alcohol

Of course neither comes into play in the modern first world, where safe drinking water arrives at the twist of a tap and doesn’t need boiling. It seems however that 100C has become a magical number for food safety that nobody in the public eye is willing to go below.

If I look at official guidance (such as this from the Canadian government) it ranges from 63C for beef steaks to 82C for whole poultry – all comfortably below 100C. These are of course internal temperatures, so with normal high temperature roasting we might go for an hour or two at 180C or more externally to get to the required internal temperature; but with ‘low and slow’ then (given enough time) the internal temperature becomes the same as the external temperature. This happens quicker with a water bath (which uses conduction) than an oven (which uses convection), but given sufficient time (which is the whole point) the outcome is the same.

I strongly suspect that the ‘low and slow’ of earlier generations, before the advent of modern ovens, was generally below 100C – so I expect that we’re just rediscovering the delights of how my grandparents used to get their Sunday roasts (when they were lucky enough to have meat).

There’s an important point here – water turns to steam at 100C and meat dehydrates, so proper ‘low and slow’ results in moist and tender meat, which is why I find it such a mystery that all the modern recipes advise above 100C. My guess is that the TV chefs use proper low temperatures for their own cooking, but are too scared to bring the general public along with them in case some fool messes it up and poisons themselves (and their family).


Let the 80s and 90s computer nostalgia continue…

Between writing about how I learned to code, and watching the latest season of Halt and Catch Fire, I’ve been thinking about how the online services I’ve used over the years have shaped my view of the IT landscape.

WarGames

Like so many others my journey started with the 1983 classic WarGames. I came away from that movie desperately wanting an acoustic coupler (and free phone calls).

Prestel

I didn’t (yet) have a modem at home, but there was one in the school computer room, and we used it to connect to Prestel – British Telecom’s (BT) videotex service. Videotex was a big deal in the UK with BBC’s Ceefax and ITV’s Oracle, but they were broadcast only – Prestel was bi-directional. Prestel was also a big deal in the IT press due to the case against Steve Gold and Robert Schifreen for hacking Prince Philip’s account. They were charged with forgery because there were no hacking laws then.

Compunet

I never did get an acoustic coupler, but I finally got my hands on a modem through a Compunet promotion where they gave away their C64 modem if you signed up to their service for a year. The modem supported the v.23 standard common to videotex services – 1200/75 baud, i.e. 1200 bits per second download and 75 upload (and yes – you probably can type faster than that). It also did v.21 at 300/300, which was generally better for uploading stuff to bulletin board systems (BBSs).

[Image: the Compunet modem]

My main problem then was paying the phone bill. I’d get home from school and wait until 6pm for ‘off-peak’ calls, though that put me at odds with the rest of the household wanting to use our single landline for making and receiving calls, and it was still far from cheap. I think eventually we subscribed to BT’s ‘midnight line’ where it was possible to pay a fixed fee for unlimited calls between midnight and 6am – like many teenagers I became semi nocturnal – though the noise from my typing would sometimes result in angry shouts from my mum to pack it in and go to sleep.

Compunet had a great community, and I remember being able to find people who’d help me out with some of the crazy homebrew hardware projects I used to engage in at the time.

Kermit

Some of the companies and organisations I worked with on my evening/weekend jobs found that they needed to send files from office to office, so I created scripts that made the modems connect then transfer files with Kermit.

JANET/CIX

First year at University meant living in halls of residence, which in turn meant no access to a telephone point to use a modem. It didn’t matter much as I had my Amiga and PPC640 on hand. The fact that the University network was connected to JANET, and in turn the entire Internet, eluded me at the time.

That all changed in second year. Project work meant burning the midnight oil, and a dialup connection to the University’s VAX cluster gave me a jumping off point into the Unix boxes of both the Electronics Department and Computer Science department, and from there I had worldwide connectivity. The World Wide Web hadn’t been invented yet, so I gorged on a diet of Telnet, FTP and Usenet guided by Zen and the Art of the Internet. One of the amazing things at the time was that people would give you Telnet access to their computers if you just asked. It was also a time when almost everything was connected to the Internet without any firewalls.

At roughly the same time I signed up to CIX[1], a service that I still use to this day. CIX was the place to be amongst the UK’s IT savvy during the early 90s, and it gave me the chance to electronically rub shoulders with IT journalists whose stuff I’d been reading in magazines like PCW for years.

WWW and ISPs

The World Wide Web (WWW) was born just before I left University, but I don’t recall using it then. My first memory of browsing was downloading some pictures from the site for Pulp Fiction using the text browser in CIX’s ‘go internet’ portal. The Lynx based text browser wasn’t the best way to view the web, but at this stage I didn’t have a proper Internet Service Provider (ISP).

My first try of a proper web browser was Netscape on OS/2 WARP, which came with a trial of IBM’s dial up Internet service (which I also managed to get going with Windows 95). By that time I’d ditched the built in modem on my PPC640 for a 14.4kbps Pace Pocket Modem (originally bought to go with a Sharp PC3100, but by then used with a homebrew 486 PC). Shortly afterwards CIX launched a dial up Internet service that I could combine with my usual subscription, so that was an easy switch to make.

Since then it’s been a succession of better browsers with Internet Explorer, Firefox and Chrome, better dial-up speeds with 56k modems, then better ISPs/bearers with Nildram ADSL and now PlusNet VDSL. What a shame the UK government haven’t been doing more to encourage fibre to the premises (FTTP) in new-build homes, as I’d love a gigabit service.

Note

[1] I still subscribe to CIX, which means I’ve had the same email address for 24 years. If you know that address (or my University email) then you can go back and see my (now sometimes super embarrassing) early Usenet posts.