Why I’m a NAC nonbeliever
I was recently speaking at a conference, and the subject of network access control (NAC) came up. At the time I gave a rather glib answer that ‘it’s not the network that you wish to control access to, but the data and services that wrap it’. That’s been my position for some time, but it’s probably worth unpacking some of the detail around this.
The heart of the issue is where we place policy enforcement points (PEPs). The question can therefore be recast in terms of entitlements services. It almost always makes more sense to put PEPs within applications or application infrastructure, but there will be times when those applications are old and brittle. In those cases, does it make more sense to have the PEP close to the service (network based entitlements services) or close to the client (NAC)?
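To make the ‘PEP in the application’ option concrete, here’s a minimal sketch in Python of an entitlement check that guards a service operation rather than the network path to it. All of the names, roles and the toy policy table are hypothetical; a real deployment would delegate the decision to an external entitlements service rather than hard-coding a table.

```python
# Minimal sketch of an in-application policy enforcement point (PEP).
# All names are hypothetical; a real deployment would delegate the
# decision to an external entitlements service rather than a local table.

from functools import wraps

# Toy policy: which roles may perform which actions on which resource types.
POLICY = {
    ("trader", "sell", "thingy"): True,
    ("supervisor", "approve", "thingy"): True,
}

class NotEntitled(Exception):
    pass

def requires_entitlement(action, resource_type):
    """Ask 'may this user do this to that?' before the operation runs --
    the check travels with the service, not with the network."""
    def decorator(func):
        @wraps(func)
        def wrapper(user, *args, **kwargs):
            if not POLICY.get((user["role"], action, resource_type), False):
                raise NotEntitled(f"{user['name']} may not {action} a {resource_type}")
            return func(user, *args, **kwargs)
        return wrapper
    return decorator

@requires_entitlement("sell", "thingy")
def sell_thingy(user, thingy_id):
    return f"{user['name']} sold thingy {thingy_id}"

if __name__ == "__main__":
    print(sell_thingy({"name": "alice", "role": "trader"}, 42))   # allowed
    try:
        sell_thingy({"name": "bob", "role": "intern"}, 42)        # denied
    except NotEntitled as e:
        print("Denied:", e)
```

The point of the sketch is simply that the decision is made next to the data and the service, so it holds regardless of which network the client happens to be sitting on.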
Then comes the question of posture. NAC advocates argue that infected clients shouldn’t be allowed to connect to any services (except perhaps disinfection), but this assumes that infection can be reliably detected, which is at odds with the limited efficacy of most AV and similar tools (especially against targeted attacks). To steal a quote from the conference: ‘just because I’m standing straight doesn’t mean I don’t have pancreatic cancer’. I covered many of the underlying issues in more detail in my discussion of why trust != management.
What about unauthorised devices? This raises some interesting follow-up questions, like why should I care about unauthorised devices, and if I do, why am I in the business of running a network that makes me care about this stuff? The heart of these questions lies in where perimeters are drawn. If I choose to run a client network that has all sorts of sensitive data flowing across it, and that provides unfettered access to services containing data that I care about, then I certainly should care about the network itself. If however I reperimeterise around core data assets, and connect client machines to public networks, then the concept of unauthorised machines evaporates. If unauthorised devices appear on my data centre network then I have a physical security problem rather than an information security problem.
So… I think this leaves NAC as an unsuccessful legacy of outdated network management practices. Or did I miss something?
Filed under: security | 2 Comments
Tags: entitlements, nac, reperimiterisation, reperimiterization, security
For most enterprises the essence of trustworthiness is their internal build, which normally comes in client and server flavours for a variety of ‘supported’ operating systems. Machines running this build are trusted to access corporate resources; anything else is kept out with policies, firewalls and mechanisms like network access control (NAC). That internal build is considered trustworthy because it carries with it a bunch of tools that are meant to ensure that the machine only runs an intended subset of applications. This includes patch management, antivirus, host intrusion prevention systems (HIPS) and various user policy management systems. Historically these security subsystems have been perceived as necessary to shore up weaknesses in the ‘vanilla’ operating system. This approach is, however, fatally flawed, because the world has moved on in two key ways:
- Operating system security out of the box is far better than it used to be. In all likelihood a ‘vanilla’ installation with auto update installed will be in a far more recent patch state than an enterprise machine relying upon some contrived mousetrap for patch deployment that was conceived before the auto update mechanisms matured.
- To an appropriately skilled attacker the enterprise layered defence looks like a panopticon. One hole is all that it takes to exploit an unpatched (or even unknown) vulnerability and install some malware (as a rootkit, in the kernel, where it will be hard to detect).
This means that the corporate build becomes little more than security theatre against a modern, stealthy and targeted attack.
Dealing with this means that security engineers must go back to square one and ask what it is that they actually want to trust. Typical answers are:
- Systems connecting to corporate assets must be free from spyware and other types of malware.
- Users interacting with corporate assets must be identified, and from that identity entitlements decisions can be determined.
- The scope of what the corporate help desk can be expected to support should be minimised (in the interest of efficiency and cost effectiveness).
Right now those answers are arrived at by a variety of management mechanisms, and each has its weaknesses:
- Installing patch management, AV, HIPS etc. in an effort to keep the malware out.
  o These mechanisms simply aren’t effective against the most pernicious malware. They’re great at keeping the lumps out, but things still get through.
- Enrolling machines within corporate directory infrastructure.
  o This is a necessary step towards establishing user identity in most situations, but here lies a fundamental question of whether it is the machine that we need to trust, or the user, or some combination of both?
- Restricting the means to install applications.
  o This results in a constant tension between user choice and cost of management. To reduce the obvious friction that results this invariably leads to a population of users with enhanced privileges over the machines that they use. This in turn leads to an enlarged risk surface area as those users have far more ability to disrupt configuration from the intended managed baseline (maliciously or otherwise).
So… what could we choose to trust instead?
I proposed that trusted virtual (client) appliances offer a new set of choices. Let me first describe what one of these things is:
- A virtual machine offering a single function, or a limited number of functions.
- A cryptographic chain of trust is established from the hardware (e.g. TPM) through the boot loader and hypervisor to a signed image of the appliance (a minimal sketch of such a measurement chain follows this list).
- The appliance is able to attest, to the services that it accesses, that it hasn’t been tampered with.
- The appliance can (should?) be constructed from components with known provenance.
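Hand waving aside, the chain of trust referenced above is essentially a chain of measurements. Here’s a minimal sketch of how a TPM-PCR-style measurement chain can be extended stage by stage and compared with an expected ‘golden’ value during attestation; the component contents and the tampering scenario are invented purely for illustration:

```python
# Minimal sketch of a TPM-PCR-style measurement chain. Each stage measures
# (hashes) the next component before handing over control; the final value
# can be compared with a known-good 'golden' value during attestation.
# Component contents below are invented for illustration.

import hashlib

def extend(register: bytes, component: bytes) -> bytes:
    """Extend the register: new = SHA-256(old || SHA-256(component))."""
    return hashlib.sha256(register + hashlib.sha256(component).digest()).digest()

def measure_chain(components) -> bytes:
    register = b"\x00" * 32   # measurement registers start zeroed at power-on
    for component in components:
        register = extend(register, component)
    return register

# 'Golden' value for the intended boot chain (in practice this would come
# from a signed reference manifest, not be recomputed locally).
golden = measure_chain([b"boot loader v1", b"hypervisor v1", b"appliance image v1"])

# A tampered appliance produces a different final value, so attestation fails.
actual = measure_chain([b"boot loader v1", b"hypervisor v1", b"appliance image v1 + rootkit"])

print("attestation passes" if actual == golden
      else "attestation fails: image differs from signed baseline")
```

The point is that a change anywhere in the chain – boot loader, hypervisor or appliance image – shows up in the final value, which is what the appliance presents when it attests to a service.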
Whilst this approach superficially looks like putting a virtual bubble around a traditional managed build, it differs in a number of important ways:
- When dealing with malware the emphasis has changed from trying to keep the bad stuff out to only running the good stuff that was put in.
- Barriers can be established between different services and the applications that access them.
  o All of the application eggs don’t have to be put into the same security basket.
  o At the extreme this means that there is no issue having an untrusted, unmanaged set of applications accessing public services sitting alongside trusted apps accessing sensitive services.
- The attack surface area associated with a given machine has been reduced from the OS to the hypervisor (a microkernel) – something that security researchers have been suggesting for some time.
There are of course challenges ahead:
- Services haven’t yet been built to understand client attestation.
  o Existing methods like 2-way SSL, SSH and VPN based authentication will have to be reused for the time being.
- Provenance services are in their infancy, and even when they do grow up, undesirable things still can/will find their way past whatever mechanisms are placed in a secure software development lifecycle on the road to provenance.
- The present generation of hypervisors deal very well with server side resource management (CPU, memory, IO, storage) but aren’t yet very well adapted to client specific concerns around keyboard, video and mouse (with GPU and screen output sharing being the really tricky part).
So… getting back to the original plot… managed builds have probably come to the end of their useful life as a means of dealing with issues of trustworthiness, but by bringing together virtualisation and stronger trust assurance mechanisms it’s possible to recast managed builds in a way that not only deals with the trust problems, but also gives flexibility back to users and service providers.
Filed under: security | 1 Comment
Tags: malware, management, managment, security, trust, virtual appliance, virtualisation
I promised a more detailed post about this in my previous one about ERM. This is not intended to be entitlements services 101, but there is some necessary preamble to set the scene. Somebody probably ought to write that tutorial, as web search and Wikipedia are unusually unhelpful in this area, but that’s not going to be me (or at least not right now).
Directories 0.0 (the dark ages) – this wasn’t all that long ago. Commercial implementations of directories first came along in the late nineties, and became mainstream at the turn of the millennium. Before that we faced a world where each piece of software had its own concept of identity, and its own store of stuff that needed to be managed. This was seriously confusing for users and expensive to manage; it also meant that good security policies (e.g. password expiry and complexity) were hard if not impossible to enforce.
Directories 1.0 (the noughties) – the arrival of a central place where organisations could store user data has quietly revolutionised how software interacts with identity/security concepts. No longer do we tolerate standalone stores of user information (or worse still credentials) – we expect integration, and that expectation is met via lowest common denominator LDAP interfaces. We also have a means to export/import information from one directory to another via LDIF, though this usually remains the preserve of ID management wonks.
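By way of illustration, this is roughly what that lowest common denominator looks like from an application’s point of view: a coarse grained ‘is this user in that group?’ check over LDAP. The sketch below uses the Python ldap3 library and assumes an AD-style memberOf attribute; the server address, bind credentials, DNs and group name are all hypothetical placeholders:

```python
# Sketch of a coarse-grained authorisation check against a Directories 1.0
# style LDAP server. Server address, bind credentials, base DN and group
# name are hypothetical placeholders; assumes an AD-style memberOf attribute.

from ldap3 import Server, Connection, ALL

def user_in_group(username: str, group_cn: str) -> bool:
    server = Server("ldap.example.com", get_info=ALL)
    conn = Connection(
        server,
        user="cn=svc-reader,ou=service,dc=example,dc=com",
        password="change-me",
        auto_bind=True,
    )
    # Look up the user's group memberships and check for the target group.
    conn.search(
        search_base="ou=people,dc=example,dc=com",
        search_filter=f"(uid={username})",
        attributes=["memberOf"],
    )
    if not conn.entries:
        return False
    groups = [str(dn) for dn in conn.entries[0].entry_attributes_as_dict.get("memberOf", [])]
    return any(dn.lower().startswith(f"cn={group_cn.lower()},") for dn in groups)

if __name__ == "__main__":
    # Coarse grained: all we learn is that the user is (or isn't) in the group.
    print(user_in_group("alice", "intranet-users"))
```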
Directories 1.0 turns out to be good enough for a huge swathe of use cases – things that only need ‘coarse grained’ authorisation models, where it’s sufficient to know that somebody exists in a given group. There is however trouble in paradise – not all authorisation problems turn out to be simple enough to be expressed via the coarse grained mechanism of groups. In part this has led to directory abuse, with most enterprise directories turning out to have more groups than they have people in them, but that isn’t the end of the problem. The problem that we face today is that each application or service that requires a fine grained authorisation model comes with something baked into that application or service (just as the dark ages apps had their own mini directories with just their users in them). Each piece of software like this comes with its own role mapper, its own entitlements concept, its own policy language, and its own tool to define policy. All of these individual moving parts are in their own way as bad as the old mini directories, as each represents a chance to foul things up, ranging from downright poor implementation (e.g. where the entitlements table in the app database becomes a juicy target for attack) through to the management cost of holding down all of that complexity (just how do you persuade the auditors that person A in role X can sell a thingy, but only with person B in role Y’s approval?). The answer to these issues is of course…
Directories 2.0 (the future – for a while at least) – entitlements services do for fine grained authorisation what directories did for coarse grained authorisation. They centralise the stuff that makes sense, they allow delegation of the stuff that really shouldn’t be centralised, and they bring consistency and a coherence of approach. Entitlements services typically consist of three tiers (a minimal sketch follows the list below):
- Policy Administration Point (PAP) – a place where administrators define roles, and the policies that map those roles to permissions against underlying resources. This can be done in a standards based way using a vocabulary like XACML. PAPs also support a bunch of reporting functionality to help keep those auditors at bay.
- Policy Enforcement Points (PEP) – the sharp end of the operation, where questions about whether a user can do something to a resource get asked and the answer gets acted upon. These can be implemented directly in custom software using SDKs, and off-the-shelf PEPs may also be available for common infrastructure software like web portals, relational databases and instant messaging platforms.
- Policy Decision Points (PDP) – the middle tier between the PAP(s) and PEPs so that things scale nicely. Policy decision points are where questions about entitlement get answered (to make things go really fast the PEPs might cache these answers to save a service call each time a decision needs to be made).
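Because the standard wire protocol wished for below doesn’t exist yet, here is a deliberately protocol-free sketch of how the three tiers hang together, including the decision caching mentioned above. The class names simply mirror the tier names and the policy content is invented:

```python
# Minimal, protocol-free sketch of the three entitlements tiers. The PAP is
# where policy gets authored, the PDP answers questions, the PEP sits in the
# application and (optionally) caches decisions. Policy content is invented.

class PolicyAdministrationPoint:
    """PAP: administrators author role -> (action, resource type) policies here."""
    def __init__(self):
        self.policies = {}

    def grant(self, role, action, resource_type):
        self.policies.setdefault(role, set()).add((action, resource_type))


class PolicyDecisionPoint:
    """PDP: answers 'may this role do that action to that resource type?'."""
    def __init__(self, pap):
        self.pap = pap

    def decide(self, role, action, resource_type) -> bool:
        return (action, resource_type) in self.pap.policies.get(role, set())


class PolicyEnforcementPoint:
    """PEP: asks the PDP and caches answers to save a call per decision."""
    def __init__(self, pdp):
        self.pdp = pdp
        self._cache = {}

    def permit(self, role, action, resource_type) -> bool:
        key = (role, action, resource_type)
        if key not in self._cache:
            self._cache[key] = self.pdp.decide(*key)   # would be a service call
        return self._cache[key]


if __name__ == "__main__":
    pap = PolicyAdministrationPoint()
    pap.grant("trader", "sell", "thingy")

    pep = PolicyEnforcementPoint(PolicyDecisionPoint(pap))
    print(pep.permit("trader", "sell", "thingy"))   # True
    print(pep.permit("intern", "sell", "thingy"))   # False
```

In practice the PEP-to-PDP hop would be a remote call in whatever protocol the chosen product speaks, which is exactly the gap that the call to action below is about.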
So… what this should mean is that once enterprises have chosen a PAP from the vendor of their choice and rolled out a layer of PDPs all that they then need to do is start leaning on their software developers and suppliers to put in a suitable PEP rather than baking in their own mini PAP/PDP in the way that the dark ages stuff had its own mini directory. This is kind of where things run into trouble right now… it’s early days, and PAP/PDP infrastructure is hard to come by. Worst of all there isn’t yet the entitlements service equivalent of LDAP (let’s call it something like XSSIP for eXtensible Security Service Interface Protocol). XACML looks a lot like LDIF in this context, but I doubt that directories 1.0 would have worked out as it has if everybody needed to have multiple directories to support various products.
I therefore have a call to action – we need an open, industry standard interface to be defined between PEPs and PDPs. I realise that this won’t be easy, but it’s got to be easier than managing the baked in entitlements mess that confronts us today, which means that there’s a healthy market out there.
Filed under: security | 10 Comments
Tags: authorisation, authorization, directory, entitlements, identity, idm, ldap, ldif, pap, pdp, pep, xacml
This isn’t a post about consumer DRM, which I think has been covered well enough before by Cory and others (though some of the Bob=Carol issues still apply). Enterprises have a load of stuff that they need to (or are obliged to) protect. This is a post about the issues that I see with entitlements enforcement products using encryption that pitch themselves at enterprise use cases:
- Identity Management Integration. Most stuff will happily integrate with the usual directories, but this isn’t enough in a ‘world is flat’ enterprise. What about customers, suppliers, offshore development centres, outsourced workers – are these people really going to find their way into the directory? Some try to deal with this using PKI, but that just brings on another pain: key distribution. If PKI really worked as advertised a decade ago then we’d all be quite accustomed to working with key selectors, and we’d all have a bunch of private keys to fuss over; but we aren’t and we don’t. Information cards do a great job of hiding the complexity of PKI, and when the identity metasystem becomes more ubiquitous I’m sure that it will help with this problem. Until then I hand victory to the identity based encryption (IBE) guys. This is a solved problem, just not yet a ubiquitously deployed one.
- Client Integration. ‘If you want to do business with me then please just install this plugin’. This is an unreasonable request for business partners, doubly so for customers. This problem only gets solved by standards. Initially it will be the de facto standards of the most popular client software providers, but ultimately it must be open standards that support user choice.
- Content classification. Enforcement depends on policy, and to write a meaningful policy one must understand the assets that the policy refers to. Manual classification of information assets can (painfully) be made to work in a small silo, but to make anything scale to an enterprise it needs to be highly automated. Automating this process means dealing with the multiple dimensions of content (search), specific regulatory requirements (which can often be dealt with by regex – see the sketch after this list), internal taxonomies (e.g. URI stubs), who actually creates and uses stuff (something that I’ve heard called ‘identity for data’, or ‘chain of custody’) etc. Most of what I’ve seen attempts to do a few of these things, but I’ve not yet seen a complete solution to this multi-dimensional problem.
- Reinventing entitlements services. This probably isn’t a fair point (in 2008), and is a recent addition to a list that I’ve been carrying around in my head for some time. I think it will however become more important as entitlements services emerge to become ‘directories 2.0’ (which is probably a worthwhile topic for an entirely different blog post). The point is that roles and policies should really not need to be defined separately for each enforcement point, and at the end of the day ERM is just another policy enforcement point (PEP) – so it would be great to see something that could make use of existing policy administration infrastructure.
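As a trivial illustration of the regex point above, here is a sketch that flags text containing patterns which often attract regulatory handling requirements. The patterns are deliberately simplified examples rather than production-grade detectors (a real classifier would add checksum validation, context and the other dimensions listed):

```python
# Sketch of regex-based content classification for a couple of patterns that
# commonly attract regulatory handling requirements. The patterns are
# deliberately simplified illustrations, not production-grade detectors.

import re

CLASSIFIERS = {
    # 13-16 digit runs that look like payment card numbers (PCI territory).
    "possible_card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    # UK National Insurance number shape, e.g. AB123456C (simplified).
    "possible_uk_nino": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
}

def classify(text: str) -> set:
    """Return the set of classification labels whose pattern matches the text."""
    return {label for label, pattern in CLASSIFIERS.items() if pattern.search(text)}

if __name__ == "__main__":
    sample = "Card 4111 1111 1111 1111 was used by AB123456C to pay the invoice."
    print(classify(sample))   # both labels should be reported
```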
Of course ERM is just one means of dealing with the broader anti-data leakage (ADL) / data leakage prevention (DLP) problem, though I feel that most of the points above apply equally to ADL/DLP products.
Filed under: security | 8 Comments
Tags: ADL, cryptography, DLP, DRM, encryption, ERM, idm, PKI
I attended an excellent seminar last night run by the Open Rights Group on the subject of ‘Creative Business in the Digital Era’. I wasn’t sure exactly what I expected to learn there, particularly as the course materials are available on their wiki, but I hoped that there would be some interesting people and dialogue. I wasn’t disappointed :)
What emerged was a discussion about the tension between ‘community’ and ‘brand’, and the place that an individual artist can carve out for themselves. There was some argument that ‘brand’ was a relic of some broken old school marketing theory that no longer applied to the Web 2.0 world of today, but I’m not entirely convinced by this (or maybe just too brainwashed into the ‘7Ps‘ point of view).
It seems to me that in the context of creative media ‘community’ is not ‘scale free‘ (a term that the community around the web science research initiative [WSRI] seems somewhat obsessed by). Whilst an artist might be able to make a comfortable living with 1,000 true fans, it seems that 10,000 fans don’t bring a more comfortable lifestyle, nor do 100,000 fans make an artist rich.
There appears to be a chasm between artists like my friend NLX, who can get by doing gigs for what’s in the tip bucket and selling CDs for $10 (where I hope they get about $9 in their pocket), and signed artists, who get a few cents for each CD but sell in vast enough numbers thanks to big media companies’ big marketing budgets. The only way across this chasm looks like getting a lift over – planes and helicopters courtesy of the big media companies.
So… why doesn’t the community around an artist scale? Why can’t Rieser get rich by selling £7 CDs from their web site to people who loved their part in the BloodSpell soundtrack (I love the contrast between their gritty ‘I want to be a Rock Star’ and the studio polished ‘Rock Star’ by Nickelback)? How will anybody even become a rock star if/when the studios get crushed under the heel of sticking to a failing, ailing business model where they treat their customers as criminals with DRM and go around suing kids?
PS My journey to finding Rieser is something of a web 2.0 parable. I like Science Fiction, and I’m a big fan of Charles Stross (particularly Accelerando). Charlie’s blog had a plug for BloodSpell, so I took a look. What I found had pretty rough graphics, a story line that held my attention to the end, and some great music…
PPS I didn’t intend to take more than 6 weeks away from blogging. I just got kind of busy.
Filed under: media | 3 Comments
Tags: 1000 true fans, long tail, media, scale free, web science
Subscription bounded blog search
One thing that I find repeatedly frustrating is when I’ve seen a really cool article or posting about something (usually on a blog, or linked from a blog) that I want to reference or go back to, and I just can’t find it again. This would be less of a problem if I was a more diligent social bookmarker, though I then expect I’d just get lost in a sea of bookmarks and tags.
The answer I think is to have subscription scoped search (with the option to spider to a given link depth). Then I could just upload my OPML file (or start from my online aggregator if I used such a thing), type in some keywords, and the magic would commence.
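In the absence of such a service, here is a rough sketch of what the plumbing might look like: read the feed URLs out of an OPML export, fetch each feed, and grep the entries for keywords. There is no spidering to a link depth here, it relies on the third-party feedparser library, and the file name and keywords are placeholders:

```python
# Rough sketch of subscription-scoped search: take the feed URLs from an OPML
# export, fetch each feed, and report entries whose title or summary mentions
# all of the keywords. No link-depth spidering; file name and keywords are
# placeholders. Requires the third-party 'feedparser' package.

import xml.etree.ElementTree as ET
import feedparser

def feeds_from_opml(path):
    """Yield every xmlUrl found in the OPML file's outline elements."""
    tree = ET.parse(path)
    for outline in tree.iter("outline"):
        url = outline.get("xmlUrl")
        if url:
            yield url

def search_subscriptions(opml_path, keywords):
    keywords = [k.lower() for k in keywords]
    for feed_url in feeds_from_opml(opml_path):
        feed = feedparser.parse(feed_url)
        for entry in feed.entries:
            haystack = (entry.get("title", "") + " " + entry.get("summary", "")).lower()
            if all(k in haystack for k in keywords):
                yield entry.get("title", "(untitled)"), entry.get("link", "")

if __name__ == "__main__":
    for title, link in search_subscriptions("subscriptions.opml", ["entitlements", "xacml"]):
        print(title, "->", link)
```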
Wishful thinking, a reasonable feature request for you know who, or already being done and I was just too dumb to figure out where and how?
PS It continues to amuse me that the spellchecker on the tool that I’m using to write this doesn’t recognise the word ‘blog’ – come on guys this is the business that you’re in!
Filed under: wibble | 4 Comments
Tags: blogs, search
The interest feed
When I subscribe to anybody’s blog there is usually a choice of feeds between ‘posts’ and ‘comments’. Whilst ‘posts’ usually suits me fine I find that it isn’t adequate when I make a comment and want to watch the unfolding discussion. I never choose ‘comments’, even on my favourite blogs, because the noise to signal ratio is too high. The result is that I get thrown back into the Web 1.0 days of having to check a particular URL with a browser every so often. This is a real bore, and so I think I often miss out.
So… I propose a new class of feed – let’s call it the ‘interest’ feed. This gives you the same stuff as ‘posts’, but also lets you see ‘comments’ on any post that’s designated as interesting (e.g. by making a comment on it).
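As a strawman, here is how a platform (or a determined reader) could stitch such a feed together today, assuming WordPress-style per-post comment feeds at ‘&lt;post URL&gt;/feed/’. The feed URL and the set of ‘interesting’ posts are placeholders, and it relies on the third-party feedparser library:

```python
# Strawman 'interest feed': take the normal posts feed and fold in comments,
# but only for posts the subscriber has flagged as interesting (e.g. ones they
# commented on). Assumes WordPress-style per-post comment feeds at
# '<post URL>/feed/'; URLs below are placeholders. Requires 'feedparser'.

import feedparser

POSTS_FEED = "http://example.wordpress.com/feed/"
INTERESTING_POSTS = {
    "http://example.wordpress.com/2008/01/05/persona/",
}

def interest_feed():
    items = []
    for post in feedparser.parse(POSTS_FEED).entries:
        items.append(("post", post.get("title", ""), post.get("link", "")))
        if post.get("link") in INTERESTING_POSTS:
            comments_url = post["link"].rstrip("/") + "/feed/"
            for comment in feedparser.parse(comments_url).entries:
                items.append(("comment", comment.get("title", ""), comment.get("link", "")))
    return items

if __name__ == "__main__":
    for kind, title, link in interest_feed():
        print(f"[{kind}] {title} {link}")
```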
Of course the underlying issue here (once again!) is identity, as such a scheme would imply customised feeds for each individual subscriber (or some sort of personalised identity driven meta feed mashup).
I think this is the sort of thing that the major blogging platforms could start offering – come on guys – it’s not that hard!
Filed under: blogging | 3 Comments
Tags: ATOM, blog, feed, identity, rss
Despite the lack of comments (yet) the post on persona has resulted in some good behind the scenes debate.
Something that came out of this is that I agreed to post an illustration of how a legal entity fits into the persona illustration in order to effect the LLP concept:
Sadly this still leaves us with a missing mechanism for creating the appropriate legal entities. Companies (e.g. Ltd in the UK or LLC in the US) could be used, but seem a bit cumbersome and therefore unfit for purpose. I’m not a lawyer, but I do find myself wondering if trusts could be used in this context (e.g. legal trust meets IT trust)?
Filed under: security | 5 Comments
Tags: identity, idm, llp, persona, security, trust
RSS Feed
I’m always annoyed when I can’t find an obvious link to an RSS (or ATOM) feed for a blog that I like, so I’m doubly annoyed that I can’t find an obvious way to have one for this blog. Maybe I’ve just made a poor choice of theme that puts style ahead of substance?
Anyway, the default feed can be found here (which I think is RSS 2.0); other RSS flavours and ATOM are also available.
Filed under: this blog | 2 Comments
Tags: rss
Persona
OK, it’s time for my first serious post, and it’s not about a brand of fertility monitor.
Persona is a term that’s increasingly being used in conversations around digital identity, but it’s not one that I typically find to be well defined. The Wikipedia entry doesn’t help much, as it is about the more general definition of persona as ‘a social role’. The Identity Gang glossary (or Identipedia) didn’t help the last time I looked either, though I now see that there are a number of definitions there (I can’t decide whether this is better or worse than none at all). The discussion about Limited Liability Persona is getting some more traction in the aftermath of the Scoble/Facebook debacle, but that concentrates on a proposed legal framework and the underlying definition of persona is somewhat implicit.
The purpose of this post is to put forward my own definition, and hopefully by eliciting some comments it will be possible to find some sort of consensus definition.
It is my contention that persona is an abstraction between an entity (usually a biological entity, or person) and a bundle of one or more digital identifiers, so that the entity can present themselves differently according to context. This is similar to using a role as an abstraction between a digital identifier and a bundle of privileges (though I’m increasingly leaning towards attribute based access control [ABAC] in favour of role based access control [RBAC] as role management is a deep and sticky tar pit).
At this stage it’s usually helpful to offer some examples:
- ‘Blogger’ – my persona of ‘blogger’ associates with my digital identity (OpenID) ‘thestateofme.wordpress.com’, which in turn places me in the role of ‘author’, which gives me the privilege to ‘post’, ‘approve comments’ etc.
- ‘Web surfer’ – my persona of ‘web surfer’ associates with a bundle of digital identities (OpenID, search engine company, web mail provider etc.), which in turn place me into roles like ‘emailer’, ‘photo uploader’ that then let me have privileges like ‘send email’, ‘create new album’ etc.
- ‘Employee’ – my persona of ‘employee’ gets me a bundle of digital identities that are mostly issued by my employer, some on internal systems, others on Internet connected systems with different namespaces…
Hopefully you’re getting the drift by now, and this helps?
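For the more code-minded, here is a minimal sketch of that layering as a data model (entity → personas → digital identifiers, with roles then mapping identifiers to privileges). The names simply restate the examples above; it is illustrative rather than a proposal for how such a thing should be implemented:

```python
# Minimal data-model sketch of the persona abstraction: an entity presents
# one or more personas, each persona bundles digital identifiers, and roles
# then map identifiers to privileges. Names restate the examples above.

from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class Persona:
    name: str
    identifiers: List[str] = field(default_factory=list)   # e.g. OpenIDs, accounts

@dataclass
class Entity:
    name: str
    personas: List[Persona] = field(default_factory=list)

# Role assignment hangs off identifiers, not off the entity directly.
ROLES: Dict[str, List[str]] = {
    "thestateofme.wordpress.com": ["author"],
}
PRIVILEGES: Dict[str, List[str]] = {
    "author": ["post", "approve comments"],
}

def privileges_for(persona: Persona) -> set:
    return {
        privilege
        for identifier in persona.identifiers
        for role in ROLES.get(identifier, [])
        for privilege in PRIVILEGES.get(role, [])
    }

if __name__ == "__main__":
    me = Entity("a person", personas=[
        Persona("blogger", ["thestateofme.wordpress.com"]),
        Persona("web surfer", ["openid.example", "mail.example"]),
    ])
    for persona in me.personas:
        print(persona.name, "->", privileges_for(persona))
```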
Filed under: security | 6 Comments
Tags: digital identity, idm, llp, persona, security, trust

