Google Compute Engine – first impressions

17May13

This post first appeared on the CohesiveFT blog.

One of the announcements that seemed to get lost in the noise at this week’s I/O conference was that Google Compute Engine (GCE) is now available to everyone.

I took it for a quick test drive yesterday, and here are some of my thoughts about what I found.

Web interface

The web UI is less bad than most of the other public clouds I’ve tried of late, but it’s nowhere near as good as AWS. I see a number of places where I think ‘that works fine now whilst I’m just playing, but I’m not going to like that when I’m using this in anger and I’ve got LOTS of stuff to manage’.
One thing I like a lot about the web interface is how well it has been connected to the REST API and the gcutil command line tool. The overall effect is to give the impression ‘this is just for when you’re running with training wheels; if you’re serious about using this platform then you’ll use (or build) some grown-up tools elsewhere’.

gcutil

Google have gone with their own API, which means you can’t use third party tools adapted to AWS and other popular APIs. If (as most pundits predict) Google grows to be the #2 public IaaS this won’t be a big deal, as an ecosystem will grow around them. For the time being I expect the main way that people will use the API is through the gcutil command line tool. It’s very easy to get going with gcutil due to the integration with the web interface, though I do wish the tool guide had direct links rather than links to links (a trap for those like me who just copy links and paste them into wget commands).
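To give a flavour (these commands are written from memory rather than the docs, so treat the exact flags, names and image choices as approximate), a first session looks something like this:

    # List the instances in a project (the project ID here is made up).
    # The very first invocation kicks off the OAuth2 dance described below.
    gcutil --project=my-test-project listinstances

    # Launch an instance (machine type and zone are illustrative)
    gcutil --project=my-test-project addinstance test-vm-1 \
        --zone=us-central1-a --machine_type=n1-standard-1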

Access control

GCE uses OAuth 2.0 for access control. This is both a very clever use of standards and a Lovecraftian horror to use.

Beware, Fluffy Cthulhu will eat your brains if you think you can just source different creds to switch between accounts

This manifests itself the first time you use gcutil, when the invocation creates a challenge/response: paste a URL into your browser, authenticate, approve, then paste the token back into gcutil. A ~/.gcutil_auth file is then written to save you jumping through the same hoops every time. It’s possible to make the tool look elsewhere for the credentials stored in that file (and I guess equally possible to write a script to move files into and out of the default location), but the net effect is to bind a user on a local machine to an account in the cloud, which I think will be jarring to many people who are used to just sourcing creds files into environment variables as they hop between accounts (and providers).
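If you do need to hop between accounts there is a workaround: keep a credentials file per account and point gcutil at the right one each time. A rough sketch (the flag name and paths should be checked against your version of gcutil; the project IDs are made up):

    # One credentials file per Google account
    gcutil --credentials_file=$HOME/.gcutil_auth_personal --project=my-personal-project listinstances
    gcutil --credentials_file=$HOME/.gcutil_auth_work --project=my-work-project listinstances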

SSH

Google also breaks with convention over how it manages SSH keys. Most other clouds either force you to create a key pair before launching an instance, or allow the upload of the public key from a keypair you made yourself.
GCE instead creates a keypair for you (with its own distinct name) the first time you try to access an instance using SSH:
  • gcutil creates a keypair and copies the private key to ~/.ssh/google_compute_engine
  • the public key is uploaded to your project metadata as name:key_string
  • new users of ‘name’ are created on instances in the project
    • and the key_string is copied into ~/.ssh/authorized_keys on those instances
  • meanwhile gcutil sits there for 5 minutes waiting for all that to finish
    • I’ve found that the whole process is much faster than that, and in the time it takes me to convert a key to PuTTY format everything is ready for me to log into an instance (whilst gcutil is still sat there waiting).
The whole process is a little creepy, as you end up signing into cloud machines with the same local username as you’re using on whatever machine you have running gcutil. This also feels like another way that gcutil ends up binding a little too hard to a single local account.
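On the plus side, once the keypair exists you don’t need to wait for gcutil at all; plain ssh (or PuTTY, after a key conversion) works straight away. The username and address below are placeholders:

    # Log straight in with the key gcutil generated (username and IP are placeholders)
    ssh -i ~/.ssh/google_compute_engine myuser@203.0.113.10

    # For PuTTY, convert the private key to .ppk format first
    puttygen ~/.ssh/google_compute_engine -o google_compute_engine.ppk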

Access control redux – multiple accounts

The OAuth 2.0 system for creating gcutil tokens does support Google’s multiple sign-in, allowing me to choose between my personal and work accounts.
The web interface doesn’t.
If I want to use the web interface with my work account then I have to use my browser in incognito mode (and jump through the 2FA auth every time, which is a pain).
At this stage I’m glad I’m only wrangling two GCE accounts. Any more and I’d be quickly running out of browsers (and out of luck if I was using my Chromebook).

Image management

The entire GCE image library presently fits onto a single browser page, and half of that is deprecated or deleted, so the choice of base OS is limited to Debian (6 or 7) and CentOS 6.
There are no choices for anything more than a base OS (though there are instructions for creating your own images once you’ve added stuff to a base OS).
There is no (documented) way to import an image that didn’t start out from one of the official base images.
There is no image sharing mechanism.
There is no image marketplace (or any means to protect IP within images).
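For completeness, here’s roughly what working with the image library looks like today (again from memory, so check the docs before relying on the command names or arguments; the bucket and image names are made up):

    # The whole catalogue fits on one screen
    gcutil listimages

    # Rolling your own image means bundling a customised instance's root disk
    # into a tarball, copying it up to Google Cloud Storage, then registering it
    gcutil addimage my-custom-image gs://my-bucket/my-custom-image.image.tar.gz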

Network

This is an area where it seems Google have learned from Amazon how to do things more intelligently. The network functionality is more like an Amazon Virtual Private Cloud (VPC) than the regular EC2 network. By default you get a 10.x.x.x/16 network with a gateway to the outside world and firewall rules that let instances talk to each other on that network, and SSH in from the outside.
Firewall rules apply to the network (like VPC security groups) rather than the instance (like EC2 security groups), and there’s a very flexible source/target tagging system there that can be used to describe interconnectivity.
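As a sketch of how that tagging works (flag names from memory, so treat them as approximate), a rule that opens HTTP only to instances carrying a ‘www’ tag might look like this:

    # Allow port 80 from anywhere, but only to instances tagged 'www'
    gcutil addfirewall allow-http --allowed=tcp:80 --target_tags=www

    # Launch an instance carrying that tag
    gcutil addinstance web-1 --tags=www --zone=us-central1-a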
The launch announcement says that ‘Advanced Routing features help you create gateways and VPN servers, and enable you to build applications that span your local network and Google’s cloud’, but if those features exist in the API I don’t (yet) see them exposed anywhere in the web UI.

Disks

The approach to disks is much more like Azure’s IaaS than AWS, at least in terms of default behaviour. Terminating an instance doesn’t destroy the disk underneath it, and it’s possible to leave that disk hanging around (with the meter running) and then go back and attach another instance to it later. If you don’t want the disks to be persistent then that needs to be specified at launch time (or you have to delete the disk after deleting the instance).
There’s no real difference in capability here, it’s just a difference in default behaviour.
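In practice that makes cleanup a two step affair unless you asked for non-persistent disks at launch time. Something like this (a sketch; the disk name is assumed to match the instance name, which may not hold for your setup):

    # Deleting the instance leaves its persistent disk behind...
    gcutil deleteinstance test-vm-1

    # ...so stop the meter by deleting the disk separately
    gcutil listdisks
    gcutil deletedisk test-vm-1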

Speed

GCE feels fast compared to AWS and very fast compared to most of the other public clouds I’ve used. Launches and other actions happen quickly, and the entire environment feels responsive. I hope this isn’t a honeymoon period (like Azure IaaS storage) where everything is fine for the first few days and then crumbles under load once people have had the time to get onto the service (given how Google have handled the launch of GCE I’m fairly confident they won’t repeat Microsoft’s mistakes here).
I haven’t benchmarked any instances to see whether machine performance is roughly equivalent to comparable AWS instances, but I’ve heard on the grapevine that GCE performance is more robust.

Price

Pricing seems to be set at about the same level as AWS across instances, storage and network. GCE doesn’t seem to be competing on price (yet), but it might be offering better quality (albeit for fewer services) at the same price.
One thing that has caught people’s attention is the move to per-minute billing (with a 10 minute minimum).

I’m not so sure it’s that significant:

Paying for a whole hour when you only tried something for a few minutes (and it didn’t work, so you start again) might be a big deal for people tinkering with the cloud. It might also matter for bursty workloads, but I think for most users the integral of their minute-to-hour overrun is a small number (and Google will no doubt have run the numbers to know that exactly).
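A quick back-of-envelope comparison makes the point (the hourly rate is an assumed round number, not taken from either price card):

    # Compare hourly vs per-minute billing for a short-lived instance
    RATE=0.12       # $/hour, assumed illustrative rate rather than a real price
    MINUTES=12      # actual run time (above GCE's 10 minute minimum)

    awk -v r="$RATE" -v m="$MINUTES" 'BEGIN {
        printf "hourly billing:     $%.3f\n", r          # charged a full hour
        printf "per-minute billing: $%.3f\n", r * m / 60 # charged 12 minutes
    }'

The saving on a single 12 minute experiment is pennies; it only adds up to real money if you spend all day doing that sort of thing.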

In effect per-minute billing means GCE runs at a small discount to AWS for superficially similar price cards, but I don’t see this being a major differentiator. It’s also something that AWS (and other clouds) can easily replicate.

Conclusion

There’s a lot to like about GCE. It gets the basics right, and no doubt more functionality will come with time.
I see room for improvement in the identity management pieces, but the underlying security bits are well thought out and executed.
Image management is the area most in need of attention. People are religious about their OS choices, and having one flavour from each of the big Linux camps is enough for a start but not enough for the long term. Google’s next major area for improvement has to be getting the right stuff in place for a storefront to compete with AWS Marketplace. Some people might even want to run Windows :-0

