The arrival of my EFM connection meant that I needed to find some way of balancing load (and failing over) between the new EFM and the existing ADSL. Thankfully there’s a healthy market in low end load balancers, and after digging through some reviews I went for the DrayTek Vigor 2820n.

ADSL

The device is basically an ADSL router with additional functionality. Getting it configured to use ADSL was a breeze, and since setting it up it's seemed pretty solid (though to be honest it's hard to tell given how awful our ADSL connection is anyway). Subjectively I'd say that this device trades a bit of top end speed for greater connection reliability, but I've no hard data to back that up.

WiFi

Since I was replacing an integrated ADSL/WiFi router I went for the ‘n’ variant that also has WiFi. Coverage from the same corner of the office that the previous 2Wire box inhabited seems better than before – connections in the meeting rooms on the opposite side of the floor are clearly more reliable.

Since this is used entirely for Internet access (and our Internet pipe is the thinnest part of the plumbing) I’ve been unable to discern any difference between 802.11n and 802.11g.

One disappointment is that although this device supports multiple SSIDs it seems almost impossible to do anything useful with them. What I want to do here is create a guest WiFi hotspot with different security credentials to the corporate SSID (it does that) but then I don't want those guests on our network. I just haven't figured out how to do anything meaningful with the SSIDs from a local network point of view. In an ideal world I'd like to have three configurations:

  1. A corporate SSID for staff.
  2. A guest SSID for visitors that just allows for access to the Internet.
  3. A guest+ SSID for visitors that allows for Internet access and access to specific devices such as printers.

I’m sure that the box contains everything that it needs to support that kind of configuration, it’s just that the software doesn’t present the right controls (or I’m too dumb to use it right).

[update 25 Nov] It turns out that I was too dumb, and that selecting the 'Member' option allows for a guest WiFi. Sadly there isn't much in the way of control over what can be connected to. The Member option stops connections between machines on different WiFi SSIDs, but anything connected on WiFi can connect to anything connected by a wire; so this remains an area where some better software and config controls could provide more of what I want.

Load balancing

This is the reason I bought it, and it does a competent enough job. The load balancing policy controls feel a bit clumsy to me, but having put in some rules for SIP and SSL (to favour the EFM connection on WAN2) it seems to do a good enough job. Thankfully I've not yet seen any EFM failures that would cause us to fall back to ADSL (though I have pulled the plug to confirm that things do keep going). Whilst the regular documentation seems little more than a list of configuration options, the much better (but well hidden) application notes are pretty helpful at explaining how to do load balancing.
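For anyone curious what those policy rules amount to conceptually, here's a sketch of the per-session decision the router makes – pin SIP and SSL to WAN2 (the EFM line) and spread everything else across both links. It's purely illustrative Python (the port pins reflect my rules, the interface names and weights are invented), not anything resembling the Vigor's actual firmware or configuration format.

```python
import random

# Hypothetical weights and port pins - a model of session-based policy routing,
# not the Vigor 2820n's real configuration.
WAN_WEIGHTS = {"WAN1 (ADSL)": 1, "WAN2 (EFM)": 3}   # assumed relative capacities
PINNED_PORTS = {5060: "WAN2 (EFM)",                 # SIP - keep the phones on EFM
                443: "WAN2 (EFM)"}                  # SSL - likewise

def pick_wan(dst_port: int) -> str:
    """Choose an outbound interface for a new session."""
    if dst_port in PINNED_PORTS:
        return PINNED_PORTS[dst_port]
    # Everything else gets spread across both links in proportion to weight.
    wans, weights = zip(*WAN_WEIGHTS.items())
    return random.choices(wans, weights=weights)[0]

if __name__ == "__main__":
    for port in (5060, 443, 80, 25):
        print(port, "->", pick_wan(port))
```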

3G

One of the features I like on this device is the ability to fail over to a 3G WWAN connection. Sadly this isn't an option if you have a fixed line WAN2, so I've not done any further investigation. If the dark day comes that our ADSL and EFM both fail at once, and 3G is still working (and I'm in the office to do something about it) then my guess is that we'll get back up and running quicker on MiFi and laptops with WWAN and Connectify than we would by reconfiguring the router to use a 3G dongle. I expect that trying to run SIP over 3G isn't likely to work that well anyway – so the phones don't matter.

VPN

As a no-servers company I wasn't expecting to use the VPN functionality, but it dawned on me that it would be handy to have remote access to printers, SIP phones and the router itself. It supports IPSEC, L2TP and PPTP. My attempts to configure IPSEC and L2TP with Windows 7 failed (the Vista application notes just didn't get me across the line)[1]. I'm happy to say that I do have PPTP working reliably, and whilst this feels like a lowest common denominator solution it's perfectly satisfactory for the task in hand.

Firewall

No servers means no services, which means no need for fancy firewall configuration.

Voice

I didn’t get a 2820 with any SIP capabilities (which are available on the ‘V’ models), but I wish I’d known that such things existed before setting up the office VOIP system [2].

Niggles

DHCP – The previous 2Wire router was pretty good at handing out the same IP to the same MAC. The 2820 seems to pretty much insist on handing out the next IP in the pool for each lease request. Yes, I could define static mappings for every device in the office (as I've done already for the printers, and may still do for the phones), but this is just annoying.
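To illustrate the difference in behaviour (and why it's annoying), here's a toy model of the two allocation strategies as I observe them – nothing to do with either router's actual firmware; pool size and addresses are invented.

```python
# Toy model of the observable difference between 'sticky' leases and
# next-free allocation.
POOL = [f"192.168.1.{i}" for i in range(100, 110)]

class StickyDHCP:
    """2Wire-style: remember the MAC and hand back the same address."""
    def __init__(self):
        self.leases = {}
    def offer(self, mac):
        if mac not in self.leases:
            self.leases[mac] = POOL[len(self.leases) % len(POOL)]
        return self.leases[mac]

class NextFreeDHCP:
    """2820-style (as observed): just hand out the next address in the pool."""
    def __init__(self):
        self.cursor = 0
    def offer(self, mac):
        ip = POOL[self.cursor % len(POOL)]
        self.cursor += 1
        return ip

sticky, nextfree = StickyDHCP(), NextFreeDHCP()
for mac in ("aa:bb:cc:01", "aa:bb:cc:02", "aa:bb:cc:01"):
    print(mac, sticky.offer(mac), nextfree.offer(mac))
```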

Web admin – it definitely has the feel of being designed by an engineer rather than a UI expert. It's functional, but could be more intuitive.

Conclusion

The 2820n does what I bought it for, and maybe a little more besides, so I’m happy with it. Administration could be made a bit easier, but now that it’s working that shouldn’t really be an issue. I expect it to just sit in the corner and do its job.

[1] One of the issues here is that I didn't want to specify a fixed endpoint IP for the remote device. Even though I have a static IP at home I wanted the VPN to work from wherever I might be.

[2] Though to be honest the VOIP stuff on the 2820V is pretty limited, and if I'd wanted SIP trunking etc. (and had decided to get a device with VOIP support) I'd probably have waited for the newer 2930, which also has SSL VPN.


One of the great things about my new office in the City is that I can now do my commute without having to use the Tube. I can jump on a train to London Bridge and either walk from there (~20min) or catch another train to take me over the river to Cannon Street, which shortens the walk to ~5min. It's not always that simple though. If I need to be in the office by 8 then the best plan is a Gatwick Express to Victoria then the District/Circle line to Mansion House. There are also times when I need to be in other parts of London, so I'm often left torn between getting a weekly travelcard (a ticket that includes unlimited tube journeys at a premium of £9) and just a one week train-only season ticket (using an Oyster pay as you go for the occasional tube journey – the break-even is 5 Zone 1 rides).

  • Weekly rule – buy a Travelcard unless you're pretty sure that you'll make fewer than 5 tube journeys.
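The arithmetic behind that rule is easy to sanity check. Assuming a Zone 1 PAYG fare of roughly £1.80 (my assumption – the only figures above are the £9 premium and the answer of 5):

```python
# Rough break-even between a weekly travelcard and rail-only + Oyster PAYG.
# The £9 premium is from the post; the £1.80 Zone 1 PAYG fare is assumed.
travelcard_premium = 9.00
zone1_payg_fare = 1.80

rides = 0
while rides * zone1_payg_fare < travelcard_premium:
    rides += 1
print(f"The travelcard pays for itself at {rides} tube rides per week")  # -> 5
```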

Things get even more complicated if I don't need to be in London all week. A peak time return from Haywards Heath to London is £32.60, but to get this as a travelcard is £39.20 (that £9 premium for a week turns into a £6.60 premium for a day!)[1]. The issue here is the Gatwick Express. The Gatwick Express used to be very special – an overpriced way of separating tourists from their money and whisking them from Victoria to the airport in 30 minutes [2]. But these days the Gatwick Express serves commuters too by running all the way to/from Brighton in peak hours. My problem is that I like the Gatwick Express. It tends to be (slightly) less (over)crowded, and tourists (who you'll probably never see again) are somehow less annoying than grumpy commuters (same faces every day).

  • Peak daily rule – buy a regular ticket and use Oyster PAYG for the tube rides.
  • If you need to be in town before 10am more than twice in a week then buy a weekly.

But that's not all. There's even more confusion generated by operator-specific and destination-specific tariffs. It's possible to get further discounts by choosing to use a single operator such as First Capital Connect [3] (FCC only) or a single destination such as Victoria (from which you can only get Southern Trains[4]). Things get even more baroque with off-peak tickets [5], which range from £20.30 for an unrestricted travelcard to £11.40 for a restricted rail-only return.

Overall it’s a very similar situation to airline tickets. There’s a comparatively small base price for a journey [6] and then you’re basically buying options (to travel when you want, to start and end your journey at different stations, to use the trains of different operators, to have Tube bundled in). There are clearly some irregularities in the options pricing model that are there to be gamed by the savvy operator, which may be one of the reasons that the National Rail site doesn’t actually bother to fully explain pricing/restrictions on the tickets it displays.

I'll finish with a little story. I was on a train to Victoria a few weeks back and the conductor was doing his rounds checking tickets. One of the preceding stations was Cooksbridge, which apparently doesn't have the infrastructure to sell tickets [7]. An old chap asked for a return to Victoria, and the conductor told him instead to buy a return to Aldershot and a return from Clapham Junction to Victoria. This apparently was what all the smart Cooksbridge travellers were doing, and the numbers show why. A return ticket to Aldershot (where Clapham Junction is a valid route) costs £17 and the return for the short hop from Clapham to Victoria is a mere £4.30 – a total of £21.30 to hold valid tickets for the entire journey[8], which is a saving of £9.10 (30%) against the regular ticket price of £30.40. Clearly the Haywards Heathens that use Southern to Victoria are missing a trick – they could do the same and spend £16.10 + £4.30 (= £20.40) rather than £30.80, though that would mean no option of using the Gatwick Express on the way home. London clearly exerts a strong reality distortion field on train fares (and it's good to see that the world still has a place for friendly and helpful conductors rather than their evil twin the 'enforcement officer').
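For the curious, the split-ticket trick is just the through fare against the sum of the legs – a throwaway sketch using the fares quoted above:

```python
# Split-ticketing: compare the through fare with the sum of the split legs.
# Fares are the 2010 prices quoted in the post.
def split_saving(through_fare, *leg_fares):
    return round(through_fare - sum(leg_fares), 2)

# Cooksbridge: return to Aldershot (£17) + Clapham Junction-Victoria return (£4.30)
print(split_saving(30.40, 17.00, 4.30))   # 9.1
# Haywards Heath to Victoria via the same trick
print(split_saving(30.80, 16.10, 4.30))   # 10.4
```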

[1] There’s also a ‘not via Gatwick Express’ daily peak travelcard available for £34.90. So the Gatwick Express premium is £4.30 if you buy travelcards, but only £1.30 if you buy a return ticket and make a couple of Zone 1 trips on Oyster PAYG.

[2] There are plenty of regular trains that run from Victoria and London Bridge to the airport that can cost a lot less for the extra 3 minutes or so that they take.

[3] FCC seem to offer the best discounts, presumably because their trains are the least reliable, and most of the carriages seem to have been designed for midgets rather than people with regular length legs. Their service from City Thameslink is however pretty handy for the office, and their evening rush hour trains that avoid London Bridge tend to be not too packed (on the rare occasions that they're not carrying double the passengers because of earlier cancellations) and offer a reasonable timetable (on the rare occasions that they actually run to schedule).

[4] Southern also operates the Gatwick Express brand, but there are some tickets that exclude its use.

[5] Defined by outbound trains that get into London after 10am. There also seems to be a ‘super off peak’ rate which also restricts the return journey to before 1645 or after 1915.

[6] There’s another whole category of ‘Advance fare’ tickets that don’t really apply to commuter routes, but that can have startling price differences if you’re going further afield to say Manchester, York or Newcastle.

[7] Most stations these days have automated ticket machines, even when there's no (or only a minimal) ticket office. Some though expect travellers to buy a 'permit to travel', where they pay a nominal sum to show good faith, which is then discounted from the fare they pay on the train or before exiting the destination station.

[8] Unlike the airlines the train companies don’t have an effective means of ensuring that passengers complete entire journeys that have extraneous legs to them.


My firm moved offices a little while ago, and one of the things I was looking forward to was a much better Internet pipe than we had in our old place (which seemed like a domestic ADSL line shared across 100+ people). Part of the plan was a fully VOIP-based telephone system – something that I'll return to in a later post, but a disaster when you don't have a decent connection.

I wrote already about our terrible experiences with ADSL. Upon further reflection I remain convinced that the City of London might be one of the worst places on the planet to use ADSL. Seriously – I got a better connection recently when I was on holiday in the Lake District – in a rental cottage – part way up a mountain. It remains a disgrace that all of the political attention remains focussed on rural areas (and presumably their voters) rather than business centres.

I thought I'd found salvation when I came across Urban Wimax, which uses fixed Wimax terminals to deliver up to 10Mb/s (symmetric). All it needed was installation of an antenna on our roof and we could be up and running in days. Sadly I'd not counted on a belligerent landlord. We weren't even granted access to the roof for the pre-installation survey, so getting something actually installed wasn't going to happen (having a clause in our sublease that prohibited installation of 'communications equipment' – presumably aimed at satellite dishes – didn't help either).

Reluctantly I went with the only remaining solution that would provide the required bandwidth (and that didn’t break the bank) – Ethernet First Mile (EFM). That was back in mid May, and my nightmare ended in late July when the service was finally put online (substantially later than the 30 [working] days I was originally promised). What follows are the highlights (well mostly lowlights) of what I discovered as I went through the process.

Too many mouths to feed

EFM is considered within my supplier as a 'complex product'. All of the complexity is in the installation workflow and the balkanised sub-organisations that process it. Account managers, installation managers, operations coordinators, offshored schedulers, outsourced router configurers, bearer installation engineers, site commissioning engineers – I'm sure there's more. To make matters worse there are parts of the whole that are not allowed to talk to each other – they just pass messages (do this: done that). There's no equivalent of try:catch, as there's basically no exception processing. If you hit an exception then you're in trouble, as nobody is empowered to take oversight and fix things. On the same day that the commissioning engineer turned up to switch us on I also met with the product manager. He didn't actually have a full process map for how the product gets delivered, but was trying to build one (forensically).

The product itself

Bearer

EFM has a lot in common with xDSL, as both use the regular copper pairs between your building and a telephone exchange. The main differences are that EFM doesn’t try to carry an analogue phone signal over the same pairs, and supports having multiple pairs to provide more bandwidth. Our feeble 2Mb/s connection only needed one bearer pair, which I’m told will probably be good for 4Mb/s should we ever need to upgrade. Adding another bearer pair would take the max bandwidth up to 8Mb/s.

Network Terminal Equipment (NTE)

This is a little black box that connects to power, the bearer and ethernet. Nobody seems to call it a modem, but that’s basically what it is.

Router

We got a Cisco 1841 supplied with the service (and managed remotely). This seems like massive overkill (just like the /29 that was provisioned to satisfy our request for one IP address). It's also a clumsy bit of kit, being 1U high but not full rack width (and lacking any rack mount brackets) – so you can put it into a comms cabinet, but not properly.

The commissioning engineer got a single IP up and running on the router, and showed me web access on his laptop – I even got him to tell me what the IP was before he left the building (so that I could configure my load balancer that would use the old ADSL as a fallback).

How things could be improved

EFM might use different signalling over the copper, and have different terminal equipment, but fundamentally I don't see how it's that different from ADSL. ADSL has provisioning processes that (mostly) work. ADSL also has end user integrated modems/routers that are mass produced at low cost. EFM suppliers need to clone the ADSL process, and cosy up with the consumer/SME grade equipment suppliers. That's how EFM will get to be a £100/month for 5Mb/s product that small businesses (and 'prosumers') will be willing to pay for on a large scale to escape the limitations of ADSL (and SDSL).

Of course an even better plan would be for telcos to work with landlords so that fibre was already installed and terminated, and each tenant (subscriber) could then just get a virtual circuit with the bandwidth that they need. Ethernet (the regular sort that only goes 100m) will always trump EFM. It would have been great if our building could have co-operated to share a relatively modest fibre connection, but the deck is stacked against this type of arrangement.

There is a happy ending

It may be expensive, and I did have a nightmare getting it installed, but it does work well. Bandwidth seems to be as advertised, and latency is a substantial improvement on ADSL. I also have the small consolation that if I’d ordered fibre I’d probably still be waiting now.


Update 22 Nov 2011 – A whole new range of Kindles has been launched since I wrote this over a year ago. The new Kindle Touch 3G only has free 3G browsing for the Kindle Store and Wikipedia. No changes have been made to the terms and conditions attached to the Kindle I wrote about, now known as the Kindle 3G Keyboard, and 'experimental' web browsing over 3G worldwide has remained free. At this stage in the game I'm inclined towards thinking that the trap I wrote about will never spring, and instead my prophecy turns out to only apply to later models of the Kindle reader. Get one of the older ones whilst you still can – not only does the new version not have free 3G browsing for almost all of the web, but right now there's not even a means to pay for it.

Update 26 Nov 2011 – The Kindle DX, which also has worldwide 3G, is presently on sale for $259, which seems like quite a bargain to me. It's got a larger 9.7″ screen – perfect for textbooks – and I don't yet see a similar model in the new Kindle line up.

Update 24 Jul 2012 – It seems that 3G browsing was notionally limited to 50MB/month, and that Amazon are now clamping down on those who exceed that limit.



I had a good chat with JP yesterday, and one aspect of that chat related to cloud computing. This got us talking about the disconnect between goods and services. Goods have become very cheap (relative to services), and I have a couple of favourite examples. The first is bicycles. A decent kids bike has been around £100 for my entire life. That was pretty serious money thirty years ago as I tried to pick out my Christmas present from the Grattan catalogue. Thirty years of inflation mean that in real terms bikes are a lot cheaper. My other case study is my lawnmower…

When I moved house about 8 years ago I had a rotten old electric mower that had been left behind by the previous owners of my last house. It was good enough for the little patch of turf we had there, but wasn’t up to the job in the new place. I went out and bought the cheapest petrol mower I could find with a decent (Briggs and Stratton) engine, which cost about the same as a kids bike. Last summer it started to play up, and a bit of googling around suggested that it would need a new diaphragm for the carburettor. I found the part on eBay for about £2, inc delivery. When the part arrived I talked my son through fitting it – he was chuffed to bits when the mower started up again. A few days later I was talking to a neighbour who told me that he’d recently bought a new mower. His old one had started to misbehave (in much the same way as mine had) and he didn’t want to pay somebody to fix it (£60 he reckoned just to have somebody look at it) and didn’t have the engineering knowhow to do the job himself. His mower had gone to the trash [1]. Goods (mowers) are cheap. Services (getting mowers fixed) are expensive.

People based services (in the first world) have to be expensive – we pretty much all have massive property bubble debts to service.

The IT economy has similar examples. Goods (servers) are cheap – a typical 1U, 2 socket entry level box has been around $3000 for as long as I can remember. Inflation doesn't make servers get more expensive, and Moore's law keeps making them better. Services aren't cheap – the people that it takes to do custom development, systems administration, project management and enterprise architecture cost serious money. Of course there is labour market arbitrage (outsourcing and offshoring), but that game's pretty well played out, and many lessons have been learned about the delicate balance between perceived cost savings and structural inefficiencies that can occur when trying to achieve them. The other trick that we see is 'cost transparency'. This is presented to users of IT as a fully loaded cost for a given service (e.g. an email inbox, or an application server). Look at the practicalities though and it's all about hiding the cost of ancillary services (numerators like project managers and enterprise architects) behind real things (denominators such as servers). It should be no surprise that this leads to some WTF conversations.

Servers it seems have become the artificial currency of the IT economy. An economy that’s being torn apart by the fundamental divergence between goods and services.

So what’s this got to do with cloud?

One answer is that cloud changes how services are delivered – fewer people, more automation, self service. As a service (*aaS) is a goods-based economy, not a (people-based) services economy, though from a business perspective it delivers annuities rather than one-off sales.

Another answer is efficiency. Cloud service providers have the scale (and tools) to manage their estate in ways that achieve much greater asset utilisation. This may not seem that important in a goods-are-cheap world, but it has consequences. One is a reduction in consumption of ancillary services (such as energy), but more important is the effect that this would have on the old IT economy. This is the stuff that JP was excited about. The server that was the denominator of many of those cost transparency calculations was typically underutilised. Make server utilisation efficient and suddenly you're sharing more people services across fewer things. The conversation with the consumer of that service just went to WTFx10 :-0
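To put some entirely made-up numbers on that conversation (mine, not JP's): hold the people costs steady, consolidate the estate, and the 'fully loaded' per-server figure shoots up even though total spend hasn't risen.

```python
# Toy 'cost transparency' model - every figure here is invented for illustration.
people_costs = 2_000_000   # the numerator: PMs, architects, admins, etc. (per year)
server_cost = 3_000        # the cheap 'goods' part, per box per year

def loaded_cost(servers):
    """Fully loaded annual cost charged per server."""
    return server_cost + people_costs / servers

print(f"100 underutilised servers: {loaded_cost(100):,.0f} per server")  # 23,000
print(f" 25 well-utilised servers: {loaded_cost(25):,.0f} per server")   # 83,000
```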

I’ve never been (and probably never will be) a CIO like JP, so I don’t think in their terms. One of the key issues for the CIO is benchmarking against peer organisations (for example their cost of XXX service measured on a per server basis). Cloud computing will disrupt this (a lot) as CIOs are forced to abandon old benchmarks (likely a painful process if these have become compensation related KPIs).

This brings me back to last week's CloudCamp in London, where Simon Wardley made some excellent points. The first is that cloud won't evaporate IT budgets, but will instead allow organisations to do more with their IT budget. The second is that this could mean more IT (measured in goods). Of course the obvious corollary of this is that there will be less spent on the traditional people-based services. Watch out all you SFA Cloud Computing Consultants (or should that be watch out anybody in the IT services economy who isn't a SFA Cloud Computing Consultant?).

[1] Hopefully it avoided landfill. There’s a bunch of pretty resourceful recyclers at our local authority tip.


Digital 9/11

08Jul10

This post is probably going to get me into trouble, but this stuff needs saying.

There’s been a sudden outburst of sanity today about this topic, so I feel obliged to throw in my 2¢.

A few weeks back I heard somebody say that we hadn't yet seen a 'digital 9/11'. I think what they meant here was some kind of event so catastrophic in its consequences that the world of IT security (I hate the term 'cyber security') would change forever. This got me thinking about impact and scale. The death toll on 9/11 was just short of 3000 people – the largest terrorist event ever, but a tiny proportion of the worldwide population. I would estimate that a far greater proportion of the worldwide computer population is falling victim to the various botnets and worms out there every single day. Those computers aren't missed though, like the loved ones lost in 9/11. Malware can be removed. Systems can be rebuilt. Old machines can be consigned to the trash and new ones bought.

My take is that this isn’t really about scale. We see attacks every day that are large in scale, and this is what we live with as normal. So what about impact? This is where we head off into movie plot territory. Terrorists taking over nuclear plants, terrorists taking over safety critical systems in utilities, terrorists bringing down our financial systems. The movie plots work because we all know that these things have computers inside them, and we all know that those computers can go wrong. But that’s exactly the point – computers go wrong all the time. We’re used to that, and we work around that. Whether going wrong is caused by malice or incompetence really shouldn’t matter – we deal with so much incompetence so regularly that malice can in fact be treated as a special case of incompetence.

Will there be IT failures in the future – of course. Will some of these failures be caused deliberately (by people who we label as criminals, and a special subset that we label as terrorists) – yes. Will some of the failures cascade into high impact events – undoubtedly. Will this be the ‘digital 9/11’ that the chicken littles are screaming about (usually to get a big bag of money for some pet project) – I think not. Just as we shouldn’t be wasting resources on special anti terror schemes in the physical world (rather than just good old intelligence and response capabilities) the same is true in the online world. Be informed, and be ready to do something – whatever the cause.


Tim Bray has a post up about numbers, and this began as a comment but grew a little too long.
I once described telephone numbers as 'the original digital identity'. The trouble is that for way too long they were associated with land lines (and hence geographic locations), and then mobiles came along and tied them to devices.
I began an experiment a few years back of having ‘one number’ – a single point of entry that would find me wherever I was and whatever I was using. It mostly works, but there are cultural and economic issues that get in the way. People don’t expect to reach you in the US when they dial something beginning +44 (and may not respect the time zone that you’re in). Similarly some don’t like being hit in the wallet for dialing internationally (especially when EU style mobile termination charges creep into the picture).
As I can’t get Google Voice I’ve been an enthusiastic early adopter of Ribbit mobile (which does much the same job, and is reckoned to have the edge on voicemail transcription). My ‘one number’ is a SkypeIn number that I point at Ribbit, which then finds me on mobile, office or ‘home office’ extensions.
The home office piece is probably the most interesting. I like to use a headset, so I got a Plantronics T20 which gives me two lines – one connected to the regular home analogue line, and the other connected to a SIP ATA. The ATA then connects to SIP Sorcery, which gives me a Ruby dial plan that mediates between a SIP connection to Ribbit and another to Voicehost (our office VOIP provider). I could add others such as LocalPhone to get some least cost routing magic, but since most of my international calls are to the US, which Ribbit handles well, I've not been sufficiently motivated yet.
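The dial plan itself is written in Ruby, but the logic is roughly what's sketched below (in Python, purely for illustration – the prefixes and the 'everything else via Voicehost' default are just how I think about my own setup, not anything prescriptive):

```python
# Illustrative routing logic only - the real thing is a Ruby dial plan.
OUTBOUND = {
    "+1": "Ribbit",    # US calls, which Ribbit handles well
    "001": "Ribbit",
}
DEFAULT_PROVIDER = "Voicehost"   # assume everything else goes via the office VOIP provider

def route_outbound(number: str) -> str:
    """Pick a SIP provider for an outgoing call based on its prefix."""
    digits = number.replace(" ", "")
    for prefix, provider in OUTBOUND.items():
        if digits.startswith(prefix):
            return provider
    return DEFAULT_PROVIDER

def route_inbound() -> list:
    """The 'one number' bit: ring every endpoint and take the first to answer."""
    return ["mobile", "office extension", "home office ATA"]

print(route_outbound("+1 212 555 1234"))   # -> Ribbit
print(route_outbound("+44 20 7946 0000"))  # -> Voicehost
print(route_inbound())
```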
Of course, as with any digital identity, after some time we discover that we'd like to have different personae. In this case that means different numbers for different purposes – the 'family' number that will ring your mobile even if it is 2 in the morning, the +1 entry point for business contacts in the US, etc. All the pieces to do this are present today, they've just not been joined up into a user friendly service yet.

Pioneer One

05Jul10

A little while ago I wrote about how a TV series could be released on bittorrent and paid for by fans. Well, somebody’s done it – check out Pioneer One.

I read about this last week on TechDirt, but it took me a few days to get around to downloading and watching it. My impression – absolutely brilliant. More please – I’ve donated already. I frankly don’t care how much this cost to make, the pilot reminded me of a good episode of X-files – great TV. If it only takes a few thousand of us to put our hands in our pockets for less than the cost of a DVD, and that gets us away from the mass produced tripe of ‘reality’ TV then this is my future of TV.

Now then… I don’t really want theme tunes or deleted scenes (I never really bother with DVD extras type stuff). Some cool merch would be a different matter altogether though.


I watched this film last week, but it's taken me a while to find the time for a quick review. I guess I must like Michael Moore's films, as I've watched most of them, though I know from experience to expect a certain political perspective that I might not entirely agree with.

I think he asked some important questions – questions that I've been asking myself and others for some time. Ultimately though there weren't satisfactory answers. I really would like to know why, when 'productivity' has gone up so much in the worldwide economy, we're all having to work so much harder than a generation or so ago. Moore would have us believe that the richest 1% of society is somehow living the grand life whilst the remaining 99% toil away, but where's the evidence? Isn't the real problem that today's workers are paying down the debts of the past (debts run up to cover post war healthcare and social services) – or is that my rather UK centric point of view? Or maybe it's all really down to our obsession with liability (and hence the huge 'elf and safety tax on society)?

and that’s what it all boils down to… there are taxes on society that exist now that weren’t a drag on Moore’s family when he was growing up in Flint. Perhaps Wall St is one of those taxes, but is it really the dominant cause? Convince me – he didn’t.

Moore makes a point that Wall St (and by extension the finance industry worldwide) is sucking up science and engineering talent and wasting it by engaging in a zero sum game of algo trading and derivatives structuring. This is a question that's certainly worthy of deeper investigation than it receives in the film – is that talent really going to waste, or (like in the space programme[1]) are there spin-offs to other parts of the economy/society that have benefit? Most fundamentally, where are those banks hoovering up capital from? Why are businesses willing to pay fees (on transactions, or for structuring, or advisory services etc.) that maintain this status quo? If all the profits are from prop trading (as they have been for most iBanks since 2008) then who are the participants at the edge of the market who are (willingly?) feeding the beast?

Questions? Questions? Questions? – Moore asks many in this 2hr epic, but his answers are superficial and leave me with more questions.

One final question. Those people that we saw being booted off their farm (which is very sad) – what did they spend the money on (the money they got from whatever con man sold them the mortgage they ultimately couldn't afford)? Clearly our love affair with capitalism goes right to the foundation of society, where people would rather borrow money to consume 'stuff' than hold a safe position on their family property.

[1] The Apollo programme was reported to Congress as $25.4Bn (1973 $s), but did it really cost the US public any more than a few tons of aluminium and a few good men? Many books and case studies have been written about the economic upside from the science spin-offs of the space race.

Postscript… I first heard the term ‘sub-prime’ in the summer of 2006 from a taxi driver. We spent some time talking about CDOs, how they were risk managed and the diffusion of complexity across the financial system that would inevitably lead to cascading failure. Neither of us put our money where our mouths were. But it goes to show that many people were lying later on when they said the failures were all a huge surprise. If a taxi driver and IT bod can see what’s happening then anybody can. I met a hedge fund manager the other day who said that he was swapping out traders for industry experts (e.g. people who understood the fundamentals of a business rather than just the numbers) – I suggested that he might hire some taxi drivers.


I touched upon this some time ago when I bought my new DVR. The question is how are we going to achieve distribution of video around the home when analogue TV goes away?

The good old way

I spent part of Father's Day setting up distribution in my in-laws' new place. They live in a place where Freeview doesn't reach yet, so their needs were simple – redistribution of the analogue TV signal (BBC1/2, ITV and C4) along with the output of their Sky+ box. This couldn't be easier, as the Sky+ has an analogue input for the antenna and two outputs – one for the nearby TV and another specifically for the purpose of redistribution. All I needed to do was connect the redistribution output to an amplifier (luckily there was already a coax cable running back up to the loft) and then drop cables from the amp to the other TVs they wanted. Had I researched this properly before coming up with the bill of materials my father-in-law could have even bought a special type of amplifier with 'bypass' functionality that would facilitate a Sky 'magic eye' – a device that beams remote control commands back to the Sky box via the redistribution connection, which is great if you want to be able to change channels in another room.

But analogue is going away

In the next couple of years the analogue TV signal is being switched off across the UK. This means that there's no immediate point in TV makers incorporating the electronics to receive analogue signals, and as I observed with my DVR many equipment makers are assuming that you'll connect via a better quality connection than UHF RF (e.g. HDMI, Component, SCART etc.). Both ends of the analogue connection are therefore disappearing – so what are the alternatives?

Wireless Senders

These are boxes (normally shipped as a pair – a sender and a receiver) that retransmit a composite video and audio signal over unlicensed spectrum (2.4GHz). In my experience they're awful, which is pretty predictable – composite video is the lowest common denominator of video interconnects (and only one step of degradation better than UHF), and the 2.4GHz band is full of other stuff that you probably already have – WiFi, DECT phones, microwave ovens (though you'd hope they're not leaking).

Wired Senders

I've seen devices that can take a VGA signal from one room to another over Cat 5 cabling. These seem to work OK, but the ones I've come across seem to be aimed at the business market rather than domestic use (with pricing set accordingly). I also suspect that they're not properly network friendly – expecting a dedicated set of twisted pairs.

Media servers/players

I'm a big fan of media servers (and associated players like the Kiss DP-500 and its HD sibling the DP-600, which I've had for some time now [1]). These are fine for watching content libraries in different rooms, but there are some issues:

  • Building a legal content library is tough – video has been slow to follow music in shedding DRM and other encumbrances.
  • It’s not a multi room solution – even with multiple players there’s no way to achieve audio/video sync as you walk from room to room.
  • It can only deal with stale content rather than live events.

Slingbox

Slingbox make devices that can digitally encode a video signal and stream it to a receiver. Sadly these are crippled so that only one receiver can connect at a time, and it also seems that the dedicated receiver (the ‘SlingCatcher’) isn’t made any more. I also had concerns about picture quality on these, which were conceived to stream SD video over the Internet (though an HD version came along later) rather than high quality video around a house.

Pay up

You can of course just pay content distributors (like Sky) for multiple boxes for multiple rooms. This works fine for live events, and covers watching different recordings in different rooms, but doesn't deal elegantly with watching the same recording in multiple rooms – time for some home network friendly features on those boxes perhaps (once the Hollywood DRM Stasi figure out a way to lock things down to their satisfaction).

and that’s about it

It seems that multi-room TV is coming to an end, but I have some ideas…

  • HDMI redistribution – seems like a really good plan, at least that is until HDCP enters the frame. Sadly the whole point of HDCP is to limit HDMI to a conversation between one playback device and one viewing device. I keep wondering what happens when TVs with HDMI/HDCP get beyond their useful lives and hackers start doing their thing. If it ever happens then HDMI redistribution would be tricky cabling-wise unless it can use a home network in the middle (which brings its own challenges with bandwidth).
  • Live re(encoding) – it’s been possible to do real time MPEG4 (or similar) encoding of SD video for some time on a single x86 core. With a bit of dedicated GPU assistance I’ve no doubt that HD would be feasible these days. All that would be needed then is players capable of doing streams rather than files (hardly a problem). This is basically Slingbox for the home (with maybe extensions over the Internet for remote viewing).
  • Get at the digital stream at its source. PC based DVB cards offer the opportunity to stream and/or record TV and serve it up to various 'receivers' (and I believe Windows Media Center pretty much does this). The problem is that such cards are only available for free-to-air broadcasts – no commercial satellite or cable (and likely no free-to-air HD if the BBC/Ofcom give in to Hollywood).

Any other thoughts? Please comment below.

[1] These days a 3rd generation games console like an Xbox 360 or PS3 can also double as a media player. Even the Wii can join the party if you hack it a little (with official support often rumoured but never released).

Update 1 Jul 2010 – and a few days after writing this along comes HDBaseT (thanks to The Register for flagging this up). No mention of multi room capabilities, but it looks like exactly the right enabling technology (and could probably be somehow combined with the StarTech stuff that Patrick mentions in comments below).

Update 7 Jul 2010 – the BBC has an article out saying that 'Analogue TVs no longer sold by UK retailers'. It lacks detail – how many of those TVs sold in May were capable of receiving an analogue signal (most I'd bet)? The trend was however already clear.