The DXC Blogs – The point of (Docker) containers

01Jun17

Originally posted internally 18 Nov 2016:

Background

Late yesterday I got an email from my colleagues in France asking me to review a pitch deck about our Docker capabilities for a financial services client. I’m repeating my answers here as they probably deserve some broader sharing…

Docker in the Enterprise

Their presentation hit most of Ian Miell’s Docker in the Enterprise Checklist, which I’ve used with other customers.

The question it left me with was ‘what are they trying to achieve?’, by which I mean: what is the point of containerising (existing) applications? The entire thing seemed to be aimed at a mass migration exercise – what’s that supposed to accomplish?

Faster testing

For me the point of containerisation is faster cycle time for testing, which is super useful if you can insert containers at a critical bottleneck in the idea->production pipeline where existing tests are too slow. This is particularly the case in development, where containers can provide much quicker feedback and save developers repeating steps that they know work. Of course, once a container is the output of the development process, it then makes sense to have some means of taking containers to production, which in turn drives the need to provide for a wide variety of operational considerations.
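
To make the feedback loop concrete, here’s a minimal sketch (the image name, requirements file and pytest test runner are all hypothetical, just stand-ins for whatever your stack uses). The speed comes from Docker’s layer cache: slow setup steps are baked into an image once, and every test run starts from that known-good layer instead of repeating them:

    # Write a Dockerfile whose expensive layers change rarely
    cat > Dockerfile <<'EOF'
    # Base image with the runtime preinstalled
    FROM python:3-slim
    WORKDIR /app
    # Dependency layer: only rebuilds when requirements.txt changes
    COPY requirements.txt .
    RUN pip install -r requirements.txt
    # Application code changes often, so it goes in the last (cheap) layer
    COPY . .
    EOF

    # Rebuild reuses cached layers, so the edit->test loop takes seconds
    docker build -t myapp-test .
    # Run the test suite in a throwaway container and clean up afterwards
    docker run --rm myapp-test pytest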

Connecting agile infrastructure to business needs

We shouldn’t make the same mistake we made with cloud, where customers have bought a story of agility, agility, agility: agile business needs agile software needs an agile infrastructure. Far too many customer conversations run along the lines of ‘so I bought a cloud to be my agile infrastructure, when do I get my business agility?’. Business agility only happens when we provide the connective tissue between business needs and the agile infrastructure. Generally this takes the form of a continuous integration (CI) pipeline, and when we join that to a cloud we have the ability to do continuous delivery (CD)[1].

Containers can help speed up flow and feedback within a CI/CD pipeline, and containers in production can be the new cloud target; but we must help our customers build those pipelines, and not just repackage old apps in new wrapping (because although that might put hours on the clock for us, it doesn’t really provide much value to them).
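
As a hedged sketch of what that pipeline looks like at its simplest (the registry URL and image name are illustrative, not a CSC standard; GIT_COMMIT is the commit id most CI servers export), here are the stages a CI server might run on every commit:

    set -e                                # stop the pipeline at the first failure
    IMAGE=registry.example.com/myapp:$GIT_COMMIT
    docker build -t "$IMAGE" .            # build the artefact once
    docker run --rm "$IMAGE" make test    # fast feedback: tests run inside the image
    docker push "$IMAGE"                  # publish the tested artefact
    # Continuous delivery: the tested image is now deployable on demand.
    # Continuous deployment would trigger the production rollout right here.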

Other container benefits

I’ll conclude by adding that containers can offer benefits in resource usage (versus VMs) with RAM, CPU and startup time, but those are only achievable in an environment where there’s a good understanding and close control over how shared libraries are used. Containers also provide application portability between environments, but making apps portable before you need to move them is a premature optimisation.
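
For illustration, a quick sketch of the knobs involved (the --cpus flag is in newer Docker releases; older ones use --cpu-shares instead):

    # Cap a container at 256MB RAM and half a CPU core; without limits a
    # noisy container can starve its neighbours on the shared kernel.
    docker run -d --memory=256m --cpus=0.5 --name capped-web nginx

    # Startup-time comparison: a container is typically up in well under
    # a second, versus minutes to boot an equivalent VM.
    time docker run --rm alpine true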

Who’s doing this already?

There was also a question about customer references, so I noted that in terms of existing customers we have a large OpenShift deployment, but it’s not referenceable (a sensitive government account). Most of our other customers are at an exploratory/experimental stage (or are using containers just in their ‘mode 2’ division, often with little consideration for operational issues). I’d also note that most of the quick wins are in the development area.

Note

[1] Where continuous delivery means the ability for any commit that passes tests to be pushed to production. I don’t generally expect our customers (especially those in regulated industries) to be shooting for CD meaning continuous deployment, where every commit that passes tests is pushed directly to production.

Retrospective

This was the blog that got most comments, so I was very pleased to see a lively discussion on the topic, and different areas of the company starting to work together.

Original Comments

CN

So for DevOps, and in particular container use cases (although the same could apply to serverless/loosely-coupled apps), are there certain customers or types of customers CSC should be targeting?

I’ve worked on some PoCs, and small parts of big orgs may have “a product” and a dev team actively developing a software solution that fits with fast testing cycles, CI and the whole ethos of these new tools. Many others though (typically the older, larger legacy customers, but some newer ones too) have very large estates where they run legacy apps, some COTS, or where we provide VDI to a large part of their estate, and they have no focal app that they develop in house. This makes “selling” the idea or benefits of these new tools harder, as it is more difficult to demonstrate the tangible benefits, whether directly to the customer or for our own internal use.

Should we be moving away from big monolithic customers?

Is it simply that in GIS we don’t see the app-side(GBS) as much and some of our bigger customers have huge development farms that we don’t see?

Are we just setting up servers or storage for them and never questioning why or what they wanted it for, failing to spot the pattern of “5 VMs this week, 5 more next week, then decom them” because we treat the provisioning/ordering requests in a disconnected way?

Is there room/time/space for the PMs or PE/DEs to look holistically at a request for what is really a dev/test environment, as you reference above, and give architecture the chance or incentive to suggest that, rather than standing up 5 VMs a week for the customer (ramping up to 200, or rebuilding the same 20 every week), we could offer them a solution such as the OpenShift one you mentioned, or a VIC deployment?

Lastly, would that be frowned upon when it comes to billing? If you sell/bill 200 VMs versus 40 VMs running 5 Docker instances each… how is that efficiency recognised in revenue? E.g. how do we drive the good behaviour and new innovations?

Lots of questions, I know, but I really want to look at some practical implementations, appreciating that some are sensitive, but in particular any innovative uses which aren’t the traditional “We are company X and software is our business, so we were already doing Docker/Jenkins etc.; can you just sell us VIC/OpenShift so we don’t have to build it ourselves?”

AC

I was directed here via Distinguished Architect TD regarding Docker questions and capability in our organisation.

Your blog brings up all the same points that the client is making, although we are not the only vendor entrenched on the account, and the design and implementation are being driven by a 3rd party, not CSC.

What I am trying to find out is where we sit in maturity for Docker design and support, container support and design, and agreements and SLAs that may have been used in other Docker installations for development and production workloads.

Do we have any dos and don’ts with Docker that are red flags?

Our client is looking to run Docker in VMs on a Vblock, which technically I see little issue with, but CN raises very similar points of concern to my own.

Have we developed a Docker container with a CSC SOE in it, either Windows 2012 or RHEL?

Would we support our SOE or an OS in a container? (Are we geared up and trained for it? Do we know the pitfalls and constraints that may be apparent in a highly virtualised and duplicated environment?)

CN

Interesting questions too AC

Obviously VMware’s preferred option is to use the lightweight Photon OS as the platform for hosting your containers (possibly circling back to Chris’s earlier post about VIC).

However, I have heard a reasonably hard and fast “we only support RHEL” from the cloud delivery org. That in itself is reasonable, based on the scaling back we are doing pre-merger: is there bandwidth for people to cover other non-SOE’d environments? Personally I can’t see why CentOS, Ubuntu and Photon couldn’t be supported, but I’m not aware of any internal training Cloud or Platform engineers have had on supporting containers.

I know CS and CK devised the Infrastructure-As-Code workshops, which have been run multiple times here in Chorley and, I believe, in Chesterfield. That is a great jumping-off point for engineers to start thinking and learning about containers, but most engineers leave that behind, and their day-to-day work doesn’t utilise that learning.

It’s a wider catch-22 question.

Sales: “Do we support containers, I have a customer that wants them?”

Operations: “We haven’t had any customers that wanted containers before so we aren’t able to support them”

I believe in theory the progression is

1. Identify Market opportunity
2. Specify and design an offering
3. Build that offering
4. Deliver the offering and industrialise/automate the delivery support
5. Provide operational support for the offering

Where does the training of the delivery/support staff come in?

Is it the responsibility of the offering to assign part of its development budget to training Operational Engineering (OE) to be able to support the offering?

This has been mentioned on CS/SH PLM/ODF calls recently, but I would be interested to see how it’s implemented in practice, where sales and customer zero traditionally fund training of staff.

AC

Thanks CN,

I will reach out to our PODs and service teams to advise on their current capabilities; however, I suspect that this time it will pass us by.

CS

A few points from an offerings perspective:

1. We have SOEs for Ubuntu (and OEL and CentOS) in addition to RHEL.

1a. I’m not sure that it makes any sense to bake Docker into the SOEs as it’s too much of a moving target (something that’s been brought up in a rash of ‘Docker’s not ready for production’ blog posts over the past few months).

2. We’ve been working on integrating Agility with Kubernetes (which implies Docker underneath).

3. VMs on a Vblock are still fine, but we’re now moving to Modern Platform, which provides a much more flexible (and lower-cost) infrastructure.

#2 there is probably the key one in terms of answering the question of what we have from an offering perspective for Docker/containers. Having an offering for Docker itself makes about as much sense as having an offering for ‘ps’ or ‘grep’ or any other user space Unix tool – we need to manage at a higher level, and bringing together our Cloud Management Platform and Kubernetes does that (though it still leaves a bunch of holes to fill with other aspects of service management).
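
To make “managing at a higher level” concrete, here’s a minimal sketch (the deployment and image names are hypothetical) of the declarative model Kubernetes brings: you state the desired end state and the platform converges on it, rather than you driving individual containers:

    # Declare what should exist; the scheduler decides where it runs
    kubectl create deployment myapp --image=registry.example.com/myapp:1.0
    kubectl scale deployment myapp --replicas=3   # keep three copies running
    kubectl expose deployment myapp --port=80     # stable endpoint in front of the pods
    kubectl rollout status deployment myapp       # watch the rollout converge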

Going back to the original post… anybody who’s buying into a container management solution without thinking about how it connects to their CI to become CD probably hasn’t thought things through completely, and we need to provide more help with that process.

NS

If someone says the only answer is RHEL for the cloud, then talk to someone else, as they don’t understand cloud. Take a look at RHEL Atomic and Alpine Linux for some other alternative container OSs. The container OS should be the responsibility of the application developer though, and no one should expect support from the IT support function/CSC, IMO.
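
The attraction of the slimmer container OSs is easy to demonstrate (a quick sketch; exact sizes vary by tag):

    docker pull alpine && docker pull ubuntu
    # Compare the SIZE column: Alpine's base image is a few MB, while a
    # full-distro base is an order of magnitude bigger, so pulls and
    # cold starts are correspondingly faster.
    docker images | grep -E '^(alpine|ubuntu)'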

Running Docker on top of your Linux of choice is fine, but that’s a very limited approach and isn’t suitable for containers in production. For that, as CS mentions, you need a container manager. Agility isn’t there yet and until we can see what it does and what limitations it may have it’s difficult to know whether it will be a great tool or just a tick box. Shout out to the Agility team – share your design plans and get some feedback!

Docker/containers are just one aspect of the emerging cloud native application development/hosting options. If you take into account PaaS and serverless computing paradigms then at present I can’t see how Agility will support this wider capability. Agility + support for containers seems a little like the old argument that if I have a VM and can automatically deploy some middleware on it I have a PaaS platform! I therefore think CSC needs to get a little opinionated in this space and start to think about re-using some of the account knowledge around OpenShift, Cloud Foundry, BlueMix, etc. that is surely out there.

CN

Interesting point about the RACI for the container OS, NS.

I agree everything should be self-service and neither the dev nor CSC should look after the container OS, over and above dropping it and spinning up a new one if required; but that “should” all be handled by Agility/OpenShift/vRealize/Bluemix, etc.

I know that currently the Agility devs are focusing on delivering pre-promised roadmap items and updates/fixes; I’m not sure where Docker integration/management sits on their roadmap.

PC

Hi AC,

Docker containers are also implemented in Windows Server 2016, which is now GA. The feature is called Windows Containers. I believe we also have a Windows 2016 SOE available. Please reach out to PT for more information.

Here is a great quickstart article:  Windows Containers Quick Start
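
For anyone who wants a feel for it before reading the quickstart, a minimal sketch (assuming a Windows Server 2016 host with the Containers feature and the Docker engine enabled; microsoft/windowsservercore was the base image name Microsoft published at the time):

    # Pull a Windows base image and start an interactive container; the
    # docker CLI is the same as on Linux, but the image is Windows-native
    # and shares the host's Windows kernel.
    docker pull microsoft/windowsservercore
    docker run -it microsoft/windowsservercore cmd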

We’ll be looking at Windows Containers for new MyWorkStyle offering deployments in the new year, so watch this space! :-)

CN

Interesting to hear that, PC. It would be a great Lunch+Learn in AEC or GIS town hall topic. I’d sign up to see that use case!

PC

Thanks for the feedback. My initial thoughts are that containers would help us constrain the sprawl of virtual-machine hogs in an environment, helping us to control costs but also to better control the predictability of resource usage over time. It’s no more than a theory at this stage. We need some lab time to see if it has wings and to truly understand any other benefits. I’m very open to ideas and new ways. :-)

AC

Hi PC,

I think for me it’s the service, support and availability aspect from an application perspective.

If the base OS becomes almost the equivalent of a hypervisor for the Docker layer and its components, it is almost a commodity item. (Take it with a grain of salt, but we wouldn’t need to patch monthly; we’d roll out new VMs with the pre-patched SOE instead.) A standard-sized VM with a standard OS spread over multiple Vblocks or cloud environments just means far lower support costs for CSC.

It should also mean we have the capability to report on performance metrics, not just at the CPU layer but, more importantly, at the Docker and application layers.
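
Some of that per-container visibility already exists at the Docker layer; a quick sketch (the container name is hypothetical):

    # Point-in-time CPU/memory/network/block-IO for every running container;
    # this is the raw feed a monitoring stack would scrape.
    docker stats --no-stream
    docker stats --no-stream billing-app   # or just one named container
    # Per-container metadata, e.g. when it started
    docker inspect --format '{{.State.StartedAt}}' billing-app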

End-user experience can be a higher focus (how long does it really take a user to carry out these tasks, and where is the constraint or bottleneck?).

There’s also improved service for our clients: higher application uptime becomes far simpler to achieve, along with simpler DR scenarios that, if not automated, are at least far swifter to trigger and spin up.

I haven’t even started to dig into the real potential of client PC Docker containers yet. That could be a whole world of fun.


