Why trust != management, and what can be done about it
For most enterprises the essence of trustworthiness is their internal build, which normally comes in client and server flavours for a variety of ‘supported’ operating systems. Machines running this build are trusted to access corporate resources; everything else is kept out with policies, firewalls and mechanisms like network access control (NAC). The internal build is considered trustworthy because it carries a bundle of tools meant to ensure that the machine only runs an intended subset of applications: patch management, antivirus, host intrusion prevention systems (HIPS) and various user policy management systems. Historically these security subsystems have been perceived as necessary to shore up weaknesses in the ‘vanilla’ operating system. This approach is, however, fatally flawed, because the world has moved on in two key ways:
- Operating system security out of the box is far better than it used to be. In all likelihood a ‘vanilla’ installation with auto-update enabled will be in a more recent patch state than an enterprise machine relying on some contrived mousetrap for patch deployment that was conceived before the auto-update mechanisms matured.
- To an appropriately skilled attacker the enterprise layered defence looks less like a fortress than a sieve. One hole is all it takes to exploit an unpatched (or even unknown) vulnerability and install some malware (as a rootkit, in the kernel, where it will be hard to detect).
This means that the corporate build becomes little more than security theatre against a modern, stealthy and targeted attack.
Dealing with this means that security engineers must go back to square one and question what it is that they actually want to trust. Typical answers are:
- Systems connecting to corporate assets must be free from spyware and other types of malware.
- Users interacting with corporate assets must be identified, so that entitlement decisions can be made from that identity.
- The scope of what the corporate help desk can be expected to support should be minimised (in the interest of efficiency and cost effectiveness).
Right now those answers are arrived at by a variety of management mechanisms, and each has its weaknesses:
- Installing patch management, AV, HIPS etc. in an effort to keep the malware out.
o These mechanisms simply aren’t effective against the most pernicious malware. They’re great at keeping the lumps out, but things still get through.
- Enrolling machines within corporate directory infrastructure
o This is a necessary step towards establishing user identity in most situations, but here lies a fundamental question of whether it is the machine that we need to trust, or the user, or some combination of both?
- Restricting the means to install applications
o This results in a constant tension between user choice and cost of management. To reduce the obvious friction, this invariably leads to a population of users with enhanced privileges over the machines they use. That in turn enlarges the risk surface area, as those users have far more ability to disrupt the configuration from the intended managed baseline (maliciously or otherwise).
So… what could we choose to trust instead?
I propose that trusted virtual (client) appliances offer a new set of choices. Let me first describe what one of these things is:
- A virtual machine offering a single function, or a limited number of functions.
- A cryptographic chain of trust is established from the hardware (e.g. TPM) through the boot loader and hypervisor to a signed image of the appliance.
- The appliance is able to attest to services that it accesses that it hasn’t been tampered with.
- The appliance can (should?) be constructed from components with known provenance.
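The chain of trust described above rests on a simple hash-chaining construction. The sketch below illustrates the TPM-style ‘extend’ operation in a few lines of Python; it is a conceptual simplification, not real firmware code, and the component names are made up:

```python
import hashlib

def pcr_extend(pcr: bytes, measurement: bytes) -> bytes:
    """TPM-style extend: new PCR = H(old PCR || H(measured component))."""
    return hashlib.sha256(pcr + hashlib.sha256(measurement).digest()).digest()

# Hypothetical boot chain: each stage measures the next before handing over.
pcr = bytes(32)  # PCRs start zeroed at power-on
for component in (b"boot loader image", b"hypervisor image", b"signed appliance image"):
    pcr = pcr_extend(pcr, component)

# A verifier that knows the expected images can recompute the same value;
# if any stage was tampered with, the final PCR value will differ.
expected = bytes(32)
for component in (b"boot loader image", b"hypervisor image", b"signed appliance image"):
    expected = pcr_extend(expected, component)
assert pcr == expected
```

Because the extend operation is one-way and order-sensitive, a matching final value is evidence that exactly the expected components ran in the expected order, which is what the appliance attests to.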
Whilst this approach superficially looks like putting a virtual bubble around a traditional managed build it does offer a number of distinct differences:
- When dealing with malware the emphasis has changed from trying to keep the bad stuff out to only running the good stuff that was put in.
- Barriers can be established between different services and the applications that access them.
o All of the application eggs don’t have to be put into the same security basket.
o At the extreme this means that there is no issue having an untrusted, unmanaged set of applications accessing public services sitting alongside trusted apps accessing sensitive services.
- The attack surface area associated with a given machine has been reduced from the OS to the hypervisor (a microkernel) – something that security researchers have been suggesting for some time.
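The shift from keeping the bad stuff out to only running the good stuff amounts to default-deny against a signed manifest of known-good images. A minimal sketch of the idea, using an HMAC as a stand-in for a real publisher signature (the key and image names are hypothetical):

```python
import hashlib
import hmac

SIGNING_KEY = b"demo-key"  # stand-in for the publisher's real signing key

def _manifest_bytes(digests: dict) -> bytes:
    """Canonical serialisation of the manifest for signing."""
    return "".join(f"{n}:{d};" for n, d in sorted(digests.items())).encode()

def build_manifest(images: dict) -> tuple:
    """Produce {name: sha256 hex} plus a 'signature' over it."""
    digests = {name: hashlib.sha256(blob).hexdigest() for name, blob in images.items()}
    return digests, hmac.new(SIGNING_KEY, _manifest_bytes(digests), hashlib.sha256).digest()

def may_run(name: str, blob: bytes, digests: dict, signature: bytes) -> bool:
    """Default-deny: run only what the intact, signed manifest lists."""
    good = hmac.new(SIGNING_KEY, _manifest_bytes(digests), hashlib.sha256).digest()
    if not hmac.compare_digest(signature, good):
        return False  # the manifest itself has been tampered with
    return digests.get(name) == hashlib.sha256(blob).hexdigest()

digests, sig = build_manifest({"mail-appliance": b"mail binary"})
assert may_run("mail-appliance", b"mail binary", digests, sig)
assert not may_run("mail-appliance", b"patched by rootkit", digests, sig)
assert not may_run("unknown-app", b"whatever", digests, sig)
```

Nothing outside the manifest runs, so an attacker has to forge the signature rather than merely evade a scanner — which is the inversion of emphasis described above.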
There are of course challenges ahead:
- Services haven’t yet been built to understand client attestation
o Existing methods such as two-way SSL (‘2SSL’), SSH and VPN based authentication will have to serve for the time being
- Provenance services are in their infancy, and even once they mature, undesirable things can and will still find their way past whatever mechanisms are placed in a secure software development lifecycle on the road to provenance.
- The present generation of hypervisors deal very well with server side resource management (CPU, memory, IO, storage) but aren’t yet very well adapted to client specific concerns around keyboard, video and mouse (with GPU and screen output sharing being the really tricky part).
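In the interim, the two-way SSL fallback mentioned above boils down to a server that refuses any client unable to present a certificate. A minimal sketch of that server-side configuration with Python's ssl module; the certificate file paths are hypothetical and left commented out:

```python
import ssl

# Server side of '2SSL' (mutual TLS): demand and verify a client certificate.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.verify_mode = ssl.CERT_REQUIRED           # reject clients without a valid cert
ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse legacy protocol versions

# Paths below are illustrative placeholders, not a real deployment:
# ctx.load_cert_chain("server.pem")            # the service's own identity
# ctx.load_verify_locations("appliance-ca.pem")# CA that vouches for appliance certs
```

This proves possession of a key, not integrity of the software stack, which is exactly why it is only a stopgap until services understand client attestation.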
So… getting back to the original plot… managed builds have probably reached the end of their useful life as a means of dealing with issues of trustworthiness, but by bringing together virtualisation and stronger trust assurance mechanisms it’s possible to recast them in a way that not only deals with the trust problems, but also gives flexibility back to users and service providers.