Docker memory profiling
Not long after my friend and colleague Leslie Muller created his first virtual machine manager, we realised that the primary resource constraint was RAM (rather than CPU or storage). Virtual machines can quickly consume huge quantities of RAM, so RAM became the resource we carved the underlying hardware up by.
I think RAM consumption will end up being very important in the ongoing ‘VMs or containers?’ debate. The answer to that debate is of course both, but the operational decision to choose one or the other may be largely driven by RAM considerations.
I did some casual testing using my latest (0.6.0) Node-Red image…
I ran ‘free’ before and after spinning up 3 containers based on the image:
- 138204 KB used – no container
- 179816 KB used – first container added (+41M)
- 203252 KB used – second container added (+23M)
- 226276 KB used – third container added (+22M)
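The deltas above are just successive differences of the `free` ‘used’ column, converted from KB to MB. A minimal Python sketch of that arithmetic (using the readings listed above; the helper function name is mine, not anything from the test setup):

```python
# "used" figures (KB) from successive runs of `free`,
# taken before and after each container start
readings_kb = [138204, 179816, 203252, 226276]

def increments_mb(readings):
    """Per-step deltas between consecutive readings, converted KB -> MB."""
    return [round((after - before) / 1024) for before, after in zip(readings, readings[1:])]

print(increments_mb(readings_kb))  # [41, 23, 22]
```

The first container carries the one-off overhead; the later ones settle at roughly 23M each.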
The incremental hit for adding more containers that do the same thing is pretty small.
It’s worth looking at what size of VM I’d need for this. Before I even started I was using over 128M, so I’d need at least a 256M VM (assuming the usual binary size increments). If I put every Node-Red instance into a distinct VM then each one costs me 256M, but each incremental container costs only around 23M – an order of magnitude better.
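The ‘order of magnitude’ claim is easy to check directly from those two figures (the variable names are mine, purely for illustration):

```python
# Hypothetical sizing comparison: a dedicated 256M VM per instance
# versus the ~23M incremental RAM cost of each extra container.
vm_mb_per_instance = 256        # smallest binary-sized VM that fits the >128M baseline
container_mb_per_instance = 23  # incremental `free` delta observed above

ratio = vm_mb_per_instance / container_mb_per_instance
print(f"{ratio:.0f}x less RAM per additional instance")  # ~11x
```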
It’s also worth looking at a base case without using Docker. I built a fresh Ubuntu 12.04.4 VM and installed Node-Red three times on it and started up each in turn:
- 80136 KB used – before starting Node-Red
- 114160 KB used – first instance started (+33M)
- 138372 KB used – second instance started (+24M)
- 162592 KB used – third instance started (+24M)
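Putting the two experiments side by side makes the comparison concrete. A small sketch using both sets of readings from above (variable names are mine):

```python
# Side-by-side per-step deltas (KB -> MB) for the two experiments
docker_kb = [138204, 179816, 203252, 226276]  # `free` used: containers
plain_kb  = [80136, 114160, 138372, 162592]   # `free` used: direct installs

def deltas(readings):
    """Successive differences, converted KB -> MB."""
    return [round((b - a) / 1024) for a, b in zip(readings, readings[1:])]

print("docker:", deltas(docker_kb))  # [41, 23, 22]
print("plain: ", deltas(plain_kb))   # [33, 24, 24]
```

After the first instance, the per-instance cost is within a couple of MB whether or not Docker is involved.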
There’s a lower up-front cost for the first instance (because there’s no Docker overhead), but subsequent instances cost about the same either way. After decades of engineering, operating systems are good at multi-tenancy – who knew?
None of this is news, so what’s the point?
The point is that isolation by containers is cheaper than isolation by virtual machines. This is why KVM-based VPSes cost about twice as much as similarly sized OpenVZ-based VPSes (just take a look at LowEndBox). Containers are cheaper because they use less of that precious RAM.
2x cheaper is fine when it comes to arbitrary workloads, but we can probably get to 10x cheaper when it comes to specific applications. Imagine a world where the Docker public index becomes the basis for a service provider application store. Those service providers will be able to achieve enormous economies of scope and scale for popular applications, and that’s what’s going to be disruptive.
 What’s now VMware’s vCloud Automation Center (vCAC), and was for a while DynamicOps Virtual Resource Manager (VRM), started life as an in-house project called Virtual Developer Environment (VDE).
 At some stage that link will have a video of Derek’s presentation, and also a video of an interview I did with him.
 It’s at this point that anybody in the room from VMware starts throwing peanuts and pointing out that their hypervisor can take advantage of the fact that those VMs have many memory blocks in common – so they can be overcommitted onto physical hardware. The tightrope of hardware and facilities costs versus software licensing costs is one the enterprise seems to be walking, and the hosting market is mostly avoiding.
 NPM was smart enough to realise that most of the work had already been done for installations 2 & 3, so they were very quick, and didn’t use much more disk.
 How I wish they had an easy comparison matrix for size/type/location.