Why I think Google fired the opening shots in the Cloud Price War
It’s been over a month now since the price-drop announcements for Google Compute Engine (GCE) and the follow-on price drops from AWS and Azure. This has been well covered by Jack Clark at The Register, former Netflix Chief Architect Adrian Cockcroft, and my CohesiveFT colleague Ryan Koop. For an in-depth strategic background, I’d recommend taking a look at the interview I did with Simon Wardley on the Cloud Landscape.
In this post I want to look at how Google were able to do it (and why Amazon didn’t feel obliged to follow suit in full). I think it comes down to two things:
1. Scale
Amazon is building out enormous scale for its cloud, but Google already has enormous scale and is building quicker. Google is spending $2.35Bn per quarter on data centres, whilst for Amazon it’s just over $1Bn. Of course that’s not an apples-to-apples comparison – that Google investment is for running their services – search, Gmail, Google+ etc. – and the Google Cloud Platform is just a small part of it. With Amazon it’s the opposite story – their retail services might run on AWS nowadays, but the story about them selling excess capacity was always bogus, and Amazon.com is a tiny fraction of AWS capacity.
Google’s secret weapon here is that its scale is fungible. If customers want more GCE, then Google can make a cold, hard calculation about whether a machine makes more money serving up search results or renting out VMs. Anybody who knows how Google runs its economic models knows that those calculations have already been made (and that the balance surely tilts towards *aaS being a better business than selling advertising).
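The shape of that per-machine calculation can be sketched in a few lines. All of the figures below are illustrative assumptions of mine – not Google’s actual numbers – chosen only to show how the comparison works:

```python
# Hypothetical revenue-per-machine comparison. Every number here is an
# assumption for illustration; Google's real model is far more detailed.

def revenue_per_machine_month(revenue_per_unit: float, units_per_month: float) -> float:
    """Monthly revenue attributable to one machine running a given workload."""
    return revenue_per_unit * units_per_month

# Assumption: one machine serves ~50M search queries/month at ~$0.0001 revenue each.
search = revenue_per_machine_month(0.0001, 50_000_000)

# Assumption: the same machine could instead host ~30 VMs rented at ~$200/month each.
iaas = revenue_per_machine_month(200, 30)

better = "IaaS" if iaas > search else "search"
print(f"search=${search:,.0f}/mo  iaas=${iaas:,.0f}/mo  -> allocate machine to {better}")
```

With those made-up inputs the machine earns more as rented VMs; with different inputs the answer flips – the point is that fungible infrastructure lets Google re-run this sum and reallocate hardware whenever demand shifts.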
2. Existing customer base
Amazon has a huge existing customer base. If they drop prices too far then they’re going to hit a double whammy:
- Revenues and profitability will drop.
- Demand will go up, and may easily outstrip what those ever-growing data centres can supply – leading to customer dissatisfaction.
On the latter point, there was already a shortage of C3 instances after they launched, because they offer excellent performance for relatively little money. Amazon knows its economic model as well as Google does, and hence how far it can push prices before risking supply exhaustion.
Google has no risk of supply exhaustion – in part because it has a small customer base (at least relative to Amazon’s – I’m sure it’s a lot bigger than many other public clouds’). Google’s winning hand here is that it can cope with new customers arriving and existing customers using more – because of that infrastructure fungibility.
Conclusion – fungibility wins both ways
Randy Bias has said many times that when it comes to public cloud, ‘Amazon is the one to beat, and Google is the one to watch’. Some of that comes from Google’s technical competence (and the second-mover advantage that lets them learn from Amazon’s few mistakes), but over the long haul the really important thing will be infrastructure fungibility. Google has more infrastructure than Amazon, is building more quickly, and has the ability to divert that huge capital investment from one service type to another. They will happily take the hit of serving your search results slightly slower if it means they can rent me a VM for more money.
This post originally appeared on the CohesiveFT Blog.