I’ve been an avid follower of Andrew McAfee’s Blog ever since JP first pointed me in that direction. He’s clearly a man who understands how technology is reshaping how we do business. Whilst I was on holiday a few weeks ago I noticed that he’d published a book along with Erik Brynjolfsson: Race Against The Machine. The subtitle does a great job:

How the Digital Revolution is Accelerating Innovation, Driving Productivity, and Irreversibly Transforming Employment and the Economy

The first thing worth noting is that this book has been published only on Kindle, which means it’s bang up to date. I recall seeing data in there from Aug 2011 – only a couple of months old at the time of reading.

The second important point is that it’s short. I got through it in a couple of hours, and I’m not one of those speed readers who can consume an entire novel in the course of a plane ride.

I don’t intend to pick the book apart here; overall I think it’s a great piece of work – perhaps the first thing I’ve come across that does a good job of explaining why the world, its people and our economies are where they are, and how we might use technology to get us somewhere better. What I do want to do is spend some time picking at the recommendations, as I feel this was the weakest area, and that in pursuit of brevity the authors failed to securely attach many of their recommendations to their main argument. They have 19 specific steps, which I’ll try to tackle in turn:

  1. Invest in education, starting with paying teachers more. I find this hard to disagree with initially. When I was growing up teaching was an admirable profession, and my teachers seemed to be well enough off. Something changed along the way – the only guy who wanted to go into teaching on my degree course got the worst grade on the entire course (though he did graduate, which was better than a third of the original intake). We seem to have ended up in a vicious circle where teaching is undervalued by society, which lowers the standing of the career, which… I feel the remedy is likely to be more transformational than throwing money at the problem. We still predominantly use 19th century teaching methods, as brilliantly illustrated in this video of Sir Ken Robinson speaking at the RSA, and that’s what really needs to change.
  2. Hold teachers accountable for performance (by eliminating tenure). Since reading the book I finally got around to watching Waiting for “Superman”, and it seems like tenure (and the unions that keep it in place) really is a huge problem for the US education system. Of course we don’t have tenure in the UK, and it still feels like education is going down the toilet. Attempts by successive education secretaries to make teachers more accountable seem to have done nothing more than introduce bureaucracy, so this topic needs to be approached with care.
  3. Separate student instruction from testing and certification. I think this one is aimed at the disastrous No Child Left Behind Act and the pernicious unintended consequences it’s had on education in the US by stripping teachers of discretion and putting in place a system that’s there to be gamed.
  4. Keep K-12 students in classrooms longer. Again, a very US-centric prescription. Unless the race against (or with) the machine is a zero sum game for the planet then I’d have liked to see global recommendations. What’s the right number of hours for kids to spend in classrooms, and where’s the source data for such a recommendation?
  5. Skilled immigrant visas. Immigration law is a mess just about everywhere, and usually a political/media hot potato. The public debate is all about keeping the wrong people out rather than letting the right people in – that’s what needs to change first. The EU experiment shows that a (somewhat) borderless labour market works at an individual and state economic level, but the whole situation is beset by envy and xenophobia.
  6. Teach entrepreneurship throughout higher education. Who and how? Teachers aren’t entrepreneurs. In most cases even business school teachers aren’t entrepreneurs. Getting this right means schools engaging in a new way with (local) businesses. I was fortunate enough to take part in such a programme whilst at 6th form college (=12th grade), but it ran in the margins rather than the mainstream (during a holiday if I recall). I’d go so far as to say that entrepreneurship might be the wrong target here – how about some of the basic skills needed to run a business – bookkeeping, marketing, CRM etc.?
  7. Founders visas. I don’t see the difference between a founder and a skilled immigrant, so why repeat the point? My friend Kirk has also gone to some effort to point out the flaws in any plan aimed at startups.
  8. Templates for new businesses. Is starting a business a knowledge management problem that can be resolved by codifying implicit knowledge into explicit form, and by greater uniformity? It’s probably worth looking at the agencies involved and the incentive structure around their activities – essentially, who amongst the lawyers, bank managers, accountants etc. has their snout in the trough as part of making this difficult right now?
  9. Lower government barriers to business creation. Another snout in the trough problem, only this time it’s public and elected officials. The issue doesn’t stop at business creation either – in many cases established firms use legislation to create barriers to entry to deal with the problem of disruption from below. The authors recognise the political and economic complexities, but offer no explanation of how to deal with them. To resolve this one we need to recognise that it’s not just the 3rd world that’s riddled with corruption – it’s endemic – just dressed up more nicely as ‘lobbying’ and ‘political donations’. The founding fathers in the US recognised the need to separate church and state; we must now do something to separate state (or at least the political structure of state) and money. Politics should be a calling not a career.
  10. Investment in infrastructure. It does appal me whenever I visit the US how shoddy the basic infrastructure looks and feels, but things seem to work despite that. Also, roads and railroads are the infrastructure of the industrial age rather than the information age. Would a comparatively smaller investment in, say, fibre to the doorstep be more useful than bigger roads with fewer pot holes? I’d suggest a look at data from S Korea might provide some illumination here.
  11. Increase funding for basic research. Another one where I’d like to see some evidence. Is China pulling ahead because their government does more of this? Has basic research worldwide found a different level because of the open and collaborative nature of the Internet? How do we know that present levels of investment are wrong?
  12. Keep hiring and firing as easy as possible. Why are the authors even talking about a system grounded in redundant concepts like a job for life? This one gets answered at point 14.
  13. Make it more attractive to hire people than buy technology. This feels like a pure Luddite argument. The race against the machine will not be won by making the machine pay higher taxes.
  14. Decouple benefits from jobs. Again, evidence from countries that run different systems could be used to construct an argument for optimisation. Denmark and the Netherlands are referenced here, but what are they doing, and what has it accomplished? If the US sticks with its approach to health insurance then why should big firms be able to negotiate a better deal than individuals? Isn’t it dumb of insurers to allow employees of big firms to be lumped together rather than considering the risks of an individual? Or is there some dumb concept of fairness at work here where the deck is allowed to be stacked one way but not another?
  15. Don’t rush to regulate new network businesses. It would be interesting here to have a comparison of how different countries regulated the Internet as it emerged, and the effect that this had on associated economic activity (both pure play net businesses and other businesses supported by the Internet).
  16. Eliminate home mortgage subsidy. The tip of the iceberg of a bigger problem where cheap and easy debt collided with limited capital. The trick is perhaps to find an investment vehicle that can be indexed against real estate markets for those who want to have their cake (accumulation of capital in property) and eat it (rent wherever they wish to go). There is probably also a point to be made here about sales tax on property transactions.
  17. Reduce subsidies to financial services. The sector was attracting too many of the brightest and best long before the realities of too big to fail were demonstrated or understood. The issue we face at the moment is that the smart people within financial services are doing a better job of gaming the system than those outside.
  18. Reform the patent system. I couldn’t agree more with this one. Software patents are a disaster, and we seem to have a fundamental mismatch between the duration of patents (to suit the pharma industry) and the lifetime of technology in other sectors. The only winners seem to be lawyers, but then the US is run by a lawyer elite.
  19. Shorten copyright and increase flexibility of fair use. This is particularly true for software – why should stuff that’s obsolete (e.g. Windows XP) be protected decades past the end of its useful life, and who are the rent seekers that think this is a good idea? Almost nothing being created today that isn’t explicitly put into the public domain or Creative Commons will fall into the public domain within our lifetimes.
As I read back through that lot (and I seem to have written more than the original authors did) the repeated theme is evidence, evidence, evidence. It’s all out there, so I hope that Andrew and Erik return with another episode and stronger data foundations to build their arguments on.

A few weeks ago I attended a summit on advanced persistent threats (APTs)[1] run by one of the major security vendors. So that people could speak freely it was held under the Chatham House Rule, so sadly I can’t attribute the piece of insight that I’m going to share here.

About five or six years ago I wrote a security monitoring baseline, in which I started out with a statement along the lines of:

All security controls should be monitored, and such monitoring should be aggregated and analysed so that appropriate action can be taken.

The lead point here is that if you have a security control that isn’t properly monitored then at best it will give you a forensic record to be analysed after something so bad has happened that it’s obvious. Most likely that control is useless. If the control got put there to satisfy an auditor then a) they’re too easily satisfied and b) they’ll be back later when they realise the error of their ways – sooner if something bad happens.

Since then it feels like the practice of security monitoring has matured. Most sufficiently large organisations now have security operations centres (SOCs) that employ some sort of security information/event management tool(s). In many cases organisations have discovered that running a SOC on their own isn’t a good use of highly specialised resource, and have engaged some sort of managed security service provider (MSSP) to do the heavy lifting for them.

Getting back to APTs – the whole point is that they’re different. This type of attacker isn’t generically after anything of value – they want something specific, and they want to take it from you. A different discipline is needed to identify and deal with such attackers, and this is where the eyeball analogy comes in – for the eyeballs on security monitoring we need both rods and cones:

  • Rods – this is the picture you get from a traditional SOC. Monochrome, but works in low light. You can probably outsource much of this to an MSSP. Whoever does this will be reacting to events in near real time.
  • Cones – these add the colour. You may need to shine a light into the nether regions of your network to discern what’s going on. You need people that understand the business context of what an attacker is going after – the self awareness of knowing where the crown jewels are secured. Those people will have to actively search for the low and slow threats – matching the patience and technical subtlety of the attacker.

[1] I actually prefer Josh Corman’s label of Adaptive Persistent Adversaries, but Schneier is right that we need a label to rally around, and APT seems to be the one that’s stuck.


If I’d had a dummy in my mouth then I’d have definitely spat it when I read this:

The article makes out that NYSE is pitching OpenMAMA directly against AMQP. Luckily it’s sensationalist twaddle – the author obviously doesn’t appreciate the difference between an API, which is what OpenMAMA is, and a wire protocol, which is what AMQP is.

Whilst the tech press might like to stir up a fight, I personally look forward to seeing OpenMAMA and AMQP working together (just as JMS and AMQP do). Yes, I’m sure that NYSE will continue to push their proprietary messaging wire protocol – the one that they got when they bought Wombat; but that’s all part of the technology industry’s rich tapestry.

I’d also note that MAMA isn’t new – it seems that it was first released around six years ago. Props to NYSE for opening it up though; I’m really liking how the Open-* world is, well, opening more every day.


I found myself upgrading a bunch of stuff over the last weekend, which gave me cause to reflect on what was good, and what was not so good.

Android

First up was my ZTE Blade, which I’ve had running CyanogenMod. I wasn’t super impressed with version 7.0. There were few things that it did better than the modified Froyo build I’d previously had, and a few things didn’t work – like the FM radio. Version 7.1 promised to fix that. The FAQ told me to use ROM Manager to do the upgrade, so I did. The whole process took place over the air, and after waiting for everything to download and install my phone was just better – all my apps and data were still there.

Ubuntu

Next was Ubuntu. I downloaded the server build of 11.10 (using BitTorrent as usual), but I already had a VM that I’d built with Alpha 3. A simple ‘apt-get dist-upgrade’ was all that was needed.
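For the record, the whole upgrade (run as root, or via sudo) boiled down to:

apt-get update
apt-get dist-upgrade

Since the VM was already on the 11.10 series (from Alpha 3), dist-upgrade just pulled everything forward to the release packages.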

iOS5

This is where it got messy. I’d waited a few days until the load came off Apple’s infrastructure, having read the tweets about all the issues folk were having. It seemed like the main problem was simply getting the upgrade – how wrong I was about that.

First there was the obligatory iTunes update before I could get started, and the OS restart that went along with that – inconvenient, but not the end of the world.

I depend on my iPhone more than my iPad, so the iPad went first. Things did not go well. The upgrade process hung and/or failed in inexplicable ways at a number of stages. In the end I lost all of my videos (in the AVPlayer HD app) and all of my music (settings just seemed to disappear from iTunes), and a bunch of extraneous apps from my iPhone got installed on the iPad too. Luckily the rest of my data/settings seemed to survive, but it was a messy experience. I chose not to use iCloud.

The subsequent iPhone upgrade went a little better, but was also a turbulent experience. Why is it that Apple needs to effectively backup and restore all my apps and data, and wipe the machine in between, rather than just patch the OS in place? Clearly lots of lessons to be learned here from the OSS community.

Kindle

I upgraded my Kindle (from 3.1 to 3.3) whilst sorting out the iOS mess. It was a rawer experience than the Android or Linux updates, but very simple – download the new firmware, copy it over, invoke the update.

Conclusion

The Android upgrade experience was super impressive – everything that’s been promised with iCloud (though we’ll have to wait for the next iOS update to see if that’s for real). The CyanogenMod guys are really showing the world how it should be done – Google (and the handset makers) should have baked this in from the start, but it’s good to see a project bridging the gap. I was also impressed by Ubuntu, as I’ve become too accustomed to having to start afresh whenever I’ve used an alpha or beta version. Upgrading the Kindle was painless, but I’ve not really noticed anything new or improved. My iPhone doesn’t seem any better either, and it was a fight getting it there. Thankfully I do notice the better Safari on the iOS5 iPad, but it’s hard to say that it was worth the grief.


I run a bunch of Linux (mostly Ubuntu) VMs on my main machine at home, which happens to be a laptop. I use VirtualBox, but what I have to say here is probably applicable to most host based virtualisation environments.

My requirements are pretty simple:

  1. The VMs need to be able to access the Internet via whatever connection the laptop has.
  2. Internet access should continue to work if I switch between wired and wireless connections (e.g. if I undock the laptop and take it into the lounge).
  3. I need to be able to access the VMs over SSH using PuTTY.
Sadly no single networking mode supports all three:
  • Not attached – obviously
  • NAT – provides Internet access, but doesn’t give an IP that I can SSH to
  • Bridged – makes me choose between wired or wireless
  • Internal network – doesn’t do any of the things I want (at least not without much extra work/plumbing)
  • Host only – doesn’t give me Internet access
  • Generic driver – don’t even go there
I got to thinking that somebody should do a lightweight VM that would act as a router etc. between the internal network and the host, but that’s not the answer either.
The correct approach was obvious once I thought about it – use two network adaptors:
  1. NAT – appears as eth0 – provides Internet access whether I’m using wired or wireless
  2. Host only – appears as eth1 – provides an IP that I can connect to using PuTTY
After adding the second adapter I had to add a few lines to /etc/network/interfaces thus:
# Host only interface
auto eth1
iface eth1 inet dhcp
Once that was done I needed to ‘ifup eth1’ to get things going. Then ifconfig revealed a 192.168.56.x IP that I could use for the PuTTY connection.
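For anyone who’d rather script the VM configuration, the same two-adapter setup can be done with VBoxManage – a sketch assuming a VM named ‘ubuntu’ and the default host-only network (vboxnet0 on a Linux host; on a Windows host it’s the ‘VirtualBox Host-Only Ethernet Adapter’):

VBoxManage modifyvm "ubuntu" --nic1 nat
VBoxManage modifyvm "ubuntu" --nic2 hostonly --hostonlyadapter2 "vboxnet0"

By default the host-only network hands out those 192.168.56.x addresses from VirtualBox’s own DHCP server, which is why the guest can get away with plain DHCP on eth1.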

I missed the start of PubSub Huddle on Friday due to the catastrophic failure of the local railway system. Luckily I can now catch up using the podcasts.

Once I got there I spotted Andy Piper tinkering with the Arduino kit he had been using to demo some stuff with MQTT. I was inspired.

I don’t have an Arduino kit myself yet, but some time ago I did buy some TI LaunchPads, which are a total bargain at $4.30 each. I’d had a brief play with one set when visiting a friend in the US, who had kindly acted as the posting address for me, but I hadn’t yet built a project of my own.

The boards come with the installed chip preconfigured with a temperature sensing application much like the one Andy was showing me, but I wanted to cut some of my own code. TI provide a bunch of sample apps, and some video tutorials, so it wasn’t long before I had some LEDs blinking.

My next challenge was to make something that would interest the kids. Blinking LEDs took me to Morse code (starting with the obligatory S-O-S), which then took me to flashing the kids’ names in Morse.

It was pretty trivial to get the microcontrollers working on a breadboard, with juice from a 3V button cell and a resistor from VCC to RST to get things going. Then… soldering time – moving the components from the breadboard to some stripboard to make something a little more durable.

I tidied the code up this afternoon and put it on GitHub.
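For flavour, here’s a minimal sketch of the idea (not necessarily what ended up in the repo), assuming an MSP430G2 LaunchPad with the red LED on P1.0 and the default ~1MHz DCO clock:

#include <msp430.h>

#define UNIT_MS 200                            /* one Morse 'unit' in ms */

static const char *sos[] = { "...", "---", "..." };   /* S, O, S */

static void delay_ms(unsigned int ms)
{
    while (ms--)
        __delay_cycles(1000);                  /* ~1ms at the 1MHz default clock */
}

static void flash(char element)
{
    P1OUT |= BIT0;                             /* LED on */
    delay_ms(element == '-' ? 3 * UNIT_MS : UNIT_MS);
    P1OUT &= ~BIT0;                            /* LED off */
    delay_ms(UNIT_MS);                         /* gap between elements */
}

int main(void)
{
    unsigned int i;
    const char *p;

    WDTCTL = WDTPW | WDTHOLD;                  /* stop the watchdog */
    P1DIR |= BIT0;                             /* P1.0 (LED1) as output */

    for (;;) {
        for (i = 0; i < 3; i++) {
            for (p = sos[i]; *p; p++)
                flash(*p);
            delay_ms(2 * UNIT_MS);             /* pad element gap to the 3 unit letter gap */
        }
        delay_ms(4 * UNIT_MS);                 /* pad letter gap to the 7 unit word gap */
    }
}

Flashing the kids’ names is then just a matter of a lookup table from letters to dot/dash strings.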

I might try hacking a robot next (this project looks fun), and an Arduino kit looks certain for this year’s Xmas list.


I got into a conversation earlier in the week with a techie friend about the merits of SSDs, which we both use these days for our main machines. It took an odd left turn when he said:

Funny part for me is that I truly believe the SSD revolution will result in a swing back to more traditional storage tech like RDBs.
If you could get 100k+ IOPS for good old postgres, would you use something else? ;)
I replied that this ignored some of the social/cultural aspects that are driving NoSQL adoption. To a certain extent I see the entanglement of DBAs in the enterprise as a barrier to NoSQL adoption, but there’s a flip side to this coin:
NoSQL in the enterprise is sometimes about performance envelope, but in most cases I think it’s actually about escaping from the ‘cult of the DBA’ (and their friends the ‘priesthood of storage’). Most enterprises have a Dev:Ops barrier with the OR mapper on one side and the RDB on the other, so if people want to get with the DevOps groove they have to go some flavour of NoSQL – basic governance arbitrage.
He replied:
Possibly, but I think people underestimate the tech behind the top class RDB engines, and making them go faster is sometimes a better solution than switching… What the SSDs are doing is lowering, dramatically so, the price to increase the performance of said RDBs…
He is of course right on both counts. RDBMS has dominated for so long, and found its way into use cases way beyond what ‘relational’ was intended for precisely because the underlying engines are so fast and flexible, and SSDs will just help to sustain that status quo (at least if people can come up with file systems that provide less of an impedance mismatch between the RDB workload and underlying storage substrate – but that’s another story). I do however think that NoSQL will continue to thrive and disrupt in situations where organisational structure and the governance processes that go with it are seen as barriers to progress.

I first drew this chart back around 2004 for my friend Alexis Richardson. At the time I referred to it in the context of a proprietary research methodology, but I don’t want trademark lawyers chasing me – hence the thesaurised title for this post.

The point was very simple – we had standards based protocols for everything except the most useful case – asynchronous and reliable. There were of course protocols that occupied that corner, but they were proprietary – fine if you’re doing something within an organisation and willing to pay for it, but not so good if you’re trying to get services to work across organisational boundaries.

At the time many people were trying to bend HTTP out of shape so that it could be asynchronous and reliable with efforts like WS-RM, but I felt that what the world really needed was a simple standards based protocol for message oriented middleware (MOM).

Of course my prayers were answered not long after, with the announcement by my friend John Davies at the Web Services on Wall Street conference[1] of the impending release of AMQP, and Alexis went on to found the hugely popular and successful RabbitMQ. The brave new world was going to look like this:

So why am I writing about this 7 years after first scribbling on a whiteboard? Well… it seems that some people still don’t get the joke. I found myself drawing the same chart again just this week whilst in the midst of a discussion with a SaaS provider on notification mechanisms. The unpleasant surprise was that people found it useful and informative (rather than obvious and patronising).

I’m going along to PubSub Huddle later this week, where I’m sure I’ll find loads of people doing cool stuff with AMQP. I just hope that I don’t have to wait too much longer for this all to become mainstream. For some time I’ve shared the vision of John O’Hara, and others in the working group, of AMQP becoming the ubiquitous interconnect[2]. Hopefully the time for that vision to become reality is upon us now that AMQP 1.0 is out the door.

[1] I recall that John didn’t have approval to talk in public about what was at the time an internal project, but he went ahead anyway.
[2] John did a great piece in ACM’s Queue magazine on this (warning: PDF, and some scrolling required).


After getting my 27″ monitor I needed a graphics card with Dual Link DVI that could do 2560×1440. I didn’t want to spend a load, and I wanted something without a fan (and associated noise) – the EVGA G210 was the cheapest I could find – £23.05 from Scan.co.uk (with free delivery courtesy of the AVForums offer).

Installation

Pulling the old card and plugging in the new one was the usual PCI-E faff, but not too traumatic. When Windows 7 started up again it treated the card as a ‘Standard VGA’ device rather than going straight to Windows Update for new drivers, so I had to kick that off manually. Pulling down and installing the drivers took an age – it’s a huge package, doubtless stuffed with cruft that I’ll never need or want. Once installed I was able to turn the resolution right up and enjoy my screen at its best.

Performance

I was replacing an old ATi Radeon X800, which was fairly mid range at the time I bought it (and ran cool enough to be passively cooled – at least after adding an enormous after-market heat sink). I was therefore expecting a substantial jump in performance given over 5 years of GPU evolution, and thought that even a low end card would smash my old one. My expectations were however dashed – whilst 3D performance (measured using the Windows Experience Index) had picked up nicely (3.3 -> 5.9), Desktop Aero performance had cratered (5.1 -> 2.9). If I’d bothered to look at more detailed specs perhaps I’d have picked up on the pathetic memory bandwidth (and if I’d cared enough to splash a bit more money then maybe an AMD 6450 might have been more in order).

Conclusion

In normal use I haven’t noticed the drop in 2D performance, so maybe it doesn’t really matter that much. I do however get a pin sharp and huge desktop, which was the whole point, so overall I’m happy with it. Maybe I should have spent the extra £10 or so on the AMD card, but it’s not something I’m going to lose any sleep over.

PS I think I was right about the old X800 card not doing DVI digital output during startup. The new G210 certainly does let me see what’s going on every step of the way.


After some hassles getting it, the X121e has been in the household for a few days now – so it’s time for a first impressions review.

Design and build

The machine looks and feels more like an IdeaPad than the ThinkPads I’m used to. This is no surprise given the price. The chassis seems pretty robust, but the screen feels more flimsy, and doesn’t have a latch. It’s also lacking any visual indicators for hard disk activity, network etc.

One of the first things I did after unpacking was pop in another DIMM to take the memory up to 4GB. This was simplicity itself: rather than having lots of separate covers on the base there’s just one giant one that gives access to the RAM, HDD and the mini PCI-E slots for WiFi and 3G. This machine is seriously easy to upgrade.

I ordered the larger capacity 6 cell battery, and had to check the part number when it arrived as it seemed smaller and lighter than I expected. It doesn’t protrude at all from the regular dimensions of the case – so I can’t see how the 3 cell battery would make any sense at all. Claimed life is 9hrs; I’ve not done any testing, but I’d expect 5hrs under normal use.

The screen is noticeably larger and higher resolution than a netbook screen, and fine to work with straight on. It’s matt, which I personally prefer, and whilst the viewing angle isn’t great that probably doesn’t matter for most laptop use cases.

The keyboard is a chiclet affair – something I’ve come to expect of a ThinkPad Edge. It is however rather good, and easy to type fast on. Like any proper ThinkPad it has a trackpoint, which is great. The trackpad isn’t too shabby either – though its textured finish might be a bit abrasive if used for a long time. The whole front edge of the trackpad can be pressed in for left and right mouse buttons.

Performance

Subjective performance is very good. I’m guessing that a 2nd generation Core i3 beats a 1st generation Core i7. The X121e has managed to clock up the best Windows Experience Index in the house (at 4.9). This is mostly down to the HD3000 integrated graphics turning in a decent performance (the earlier HD Graphics in my X201 definitely lets things down compared to the other components).

I had a go at hooking it up to my 27″ screen via HDMI to see if it could manage the full resolution of 2560×1440. Sadly all it could manage was 1920×1080 (1080p).

Software

I got the basic model with Windows 7 Home Premium (64bit). After configuring Windows it launched an installer for Office 2010 (and I’m guessing some other pre-installed stuff). This was easy to exit, and the other pre-installed stuff that I’d rather ignore has kept out of the way.

Overall

The X121e delivers the performance of last year’s high end machine at little more than a netbook price, which makes it quite a bargain. I’m glad that Lenovo managed to get me one in the end.

If you’re looking for more then there’s a thread at the OverClockers forum that includes some more detailed benchmarks and discussion of SSD upgrades amongst other things.