Oscillations and inversions in the cloud
I spent most of last week at the IGT 2009 ‘World Summit of Cloud Computing’. There were some great speakers there, but the session that sticks in my mind was Alistair Croll’s piece at the end, where he talked about the future of cloud. One of the most thought-provoking statements he made was something like ‘this won’t end like it has started, there will probably be an inversion’. This got me thinking about the oscillations we see all around us, and particularly the tendency to move between centralised and distributed models for all kinds of things. We’ve seen this a few times already in IT (mainframe – mini – PC – client server – n-tier …), and similar things happen with IT organisations (centralised infrastructure -> business aligned infrastructure and back again).
Nick Carr, in his book The Big Switch, frequently uses the electricity industry as an analogy for how IT is developing. In the early days of electricity, generation was distributed (at the point of use), and there was a need for substantial in-house expertise in electricity. Over time electricity became a utility, where generation became somebody else’s problem; and yet, after a century or so, we might be on the verge of the next oscillation for electricity. There are tremendous losses between the original source of energy (whether it’s coal, natural gas, nuclear or whatever) and the point of consumption: it’s not atypical for only 25% of the primary energy to actually make it out of the wall socket. This is why huge consumers of energy, like aluminium smelters and cloud providers’ data centres, try to get very close to sources of cheap electricity. It’s also why there’s a small but growing trend towards local generation (particularly with emerging ‘green’ sources such as solar and wind).
One of the IT megatrends that receives constant attention is Moore’s law (and its close cousin, Kryder’s law), and the consequent doubling in capacity of various things every 18-24 months. What’s discussed far less frequently is that the different pieces of the architecture started at different places – so the gaps in absolute performance become more severe over time. On a log scale chart the lines keep on rising, but they never cross. Network is always the ‘thin straw’, which is why it makes sense to manage large data sets locally, where storage is cheap (and here I have to agree with Cory Doctorow and what he said at http://www.guardian.co.uk/technology/2009/sep/02/cory-doctorow-cloud-computing).
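The ‘lines never cross’ point can be sketched numerically. Here’s a minimal Python illustration: two quantities doubling on the same schedule but from different starting points stay parallel on a log chart (constant ratio), while the absolute gap between them keeps widening. The starting figures and the common doubling period are illustrative assumptions, not real measured capacities.

```python
# Two quantities doubling every 18 months from different starting points.
# The ratio between them stays constant (parallel lines on a log chart),
# but the absolute gap keeps widening. All numbers here are made up for
# illustration, not measured storage or network figures.

def capacity(start, years, doubling_period_years=1.5):
    """Capacity after `years`, doubling every `doubling_period_years`."""
    return start * 2 ** (years / doubling_period_years)

storage_start = 100.0   # arbitrary units; storage starts well 'ahead'
network_start = 1.0     # network starts as the thin straw

for years in (0, 6, 12):
    s = capacity(storage_start, years)
    n = capacity(network_start, years)
    print(f"year {years:2d}: storage={s:9.0f}  network={n:5.0f}  "
          f"ratio={s / n:.0f}  gap={s - n:9.0f}")
```

The ratio column never changes, so the lines never cross; the gap column is what keeps growing, which is why the thin straw feels thinner every year.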
By far the most vigorous debate in cloud computing is around the consequences of ceding control to the providers of centralised services (aka the ‘public cloud’ providers) like Amazon, Google and Microsoft. This is why people talk about ‘private clouds’ (regardless of how nonsensical that term is). What this debate often seems to miss is that what we’ve come to call ‘cloud’ is really all about management, and has little to do with location. The naming is screwed up, because ‘cloud’ comes from what we drew on whiteboards to represent stuff on the Internet, but the ideas and principles are sound. For now the easiest way to get great management (and hence quicker and cheaper provisioning of the stuff you need/want) is to go to the people that sell it over the Internet, but the inversion is coming; the oscillation is changing phase. Things like Ubuntu’s Enterprise Cloud (UEC) pack up all of that management goodness and let you run it on your own machines. Stuff like CohesiveFT’s Elastic Server lets you build your ‘machines’ to work with anybody’s IaaS layer and management tools, and then their ‘cubed’ stuff abstracts away the network, config and other services so that you’re isolated from the annoying detail.
Even then, a couple of clicks during the installation of an OS, or packaging stuff up by a bill of materials is beyond the desire or capabilities of the mass market. People want to just buy stuff that works. They want appliances. They want their virtual appliances to just happen on a device of their choosing, and this is where we see convergence of ‘cloud’ and what’s happening in enterprise IT… the oscillations will move into phase (at least for a while).
For some time complex software has been sold to enterprise IT in the shape of appliances. This was done to stop the IT people from doing dumb stuff with that software that would add months to the roll-out time and maximise the chances of things breaking and leading to support calls. One of the problems of enterprise IT as it stands today is a tendency to smash things down to their constituent parts, and then rebuild them in a way that even their mother wouldn’t love. I’ve heard it said recently that ‘cloud is for everybody except the Fortune 500’, and ‘everybody in the Fortune 500 is married to Oracle’, but now that Larry is making his stuff into appliances like Exadata 2, the worlds are aligning. A data warehousing appliance is just as much about canned management as an EC2 instance.
I said recently that I didn’t want to own any servers, and wondered how large my company would have to grow before the economics tipped away from pure-play SaaS and towards on-site stuff. What I realise now is that the question of ownership is ancillary. The real point is that I don’t want to manage any servers… ever. And that’s fine, because when the cloud turns itself inside out and my data returns home, I’ll still be benefiting from the canned management expertise of people who can do this stuff better and cheaper than me.