Perhaps I was being a bit dull when I first read through Andrew McAfee’s The Ties that Find, as I seem to have missed the key point, which is that weak ties are where new information comes from. Thanks to Dr Felix Reed-Tsochas for calling this out so explicitly during his section of the networks masterclass at the recent SVCO event. I should also say that Mark S. Granovetter’s original paper on The Strength of Weak Ties is well worth a read – what wonderful insight for something written some 35 years ago.

This gets me thinking that there’s probably a gravity analogy lurking here – once something falls inside a field of influence then it becomes less useful because it has lower entropy.

There’s also a potential molecular chemistry analogy here – that weak ties (like covalent bonds) take less energy to get reactions going than strong ties (like ionic bonds).

If enterprise 2.0 is looking to encourage innovation, to get those reactions going, to suck in that entropy, then we need to facilitate those weak ties. If we keep the enterprise 1.0 blinkers on and stop things at the electronic borders that wrap around our buildings then that’s not going to happen very well.

And that’s why enterprise 2.0 shouldn’t be ‘enterprisey’.


This post has been stewing for some time, and perhaps the fuss today over the launch of the .tel domain gives me a good reason to serve it up.

It’s my view that telephone numbers were THE original digital identity scheme. Of course like most pioneering activities things weren’t thought through particularly well, and we’ve seen various changes and kludges applied along the way. The system still works though, and most people (even amongst the less technically savvy) are aware of the limitations without even giving them much thought.

Security seems like a good place to start. For some reason my colleagues in the IT security world seem to turn purple and start ranting when I talk about telephone numbers being a type of digital identity. “They’re not secure”, I hear the cry. Let’s put things into perspective – a number is just a type-constrained special case of a string-format address. Less constrained cases (that are also used for the purposes of digital identity) include email addresses and OpenID URIs. None of these things is inherently secure or insecure, but we tend to associate them with the various degrees of badness embedded in the common implementations. When I dial a number I could be misdirected elsewhere (by an attacker, or just some clever call forwarding), and when I receive a call with caller line identification (CLI) it could be spoofed. It is true that the telephony system we mostly use today is riddled with security holes, and that there are few good ways of establishing trust, but that’s mostly not the fault of the numbers.
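To make the ‘type-constrained string’ point concrete, here’s a minimal sketch that checks whether a string fits the E.164 shape (a ‘+’, a non-zero leading digit, and at most 15 digits in total) – the constraint is all that separates a number from any other string address:

```python
import re

# A telephone number is just a tightly constrained string address.
# E.164: a '+', a non-zero first digit, and at most 15 digits overall.
E164 = re.compile(r"^\+[1-9]\d{1,14}$")

def is_e164(number: str) -> bool:
    """Return True if the string fits the E.164 shape."""
    return bool(E164.match(number))

print(is_e164("+442071234567"))    # a London number: True
print(is_e164("bob@example.com"))  # an email address: False
```

Nothing about that check makes the number secure or insecure – it’s just a format, which is exactly the point.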

Namespace management has been a key problem over the years. As the use of telephone numbers for personal identity became more common we saw the same growing pains that we’re presently encountering in the journey from IPv4 to IPv6. Corporate exchanges were a bit like NAT, but corporate citizens came to demand personal addresses (=numbers), and sometimes more than one (for fax machines etc.). We also bump up against some cognitive psychology issues here – too much namespace = too long to remember. For those of you with kids you can think of yourself as being an expensive NAT router next time you answer their calls :)
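The exchange-as-NAT analogy can be sketched in a few lines: one public switchboard number fronts many internal extensions, with direct-dial numbers ‘translated’ inward (all numbers here are invented for illustration):

```python
# A corporate exchange behaves a bit like NAT: one public switchboard
# number fronts many private internal extensions.
switchboard = "+442070000000"
ddi_to_extension = {          # direct-dial number -> internal extension
    "+442070000123": "1123",
    "+442070000456": "1456",
}

def route(dialled: str) -> str:
    """Translate an inbound public number to a private extension."""
    return ddi_to_extension.get(dialled, "operator")

print(route("+442070000123"))  # "1123" - direct dial straight through
print(route(switchboard))      # "operator" - the human 'NAT router'
```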

Geographic anchoring is somewhat related to the namespace management issues. This is of course a hangover from the days when the physical location of exchange switching equipment was meaningful, but it continues to affect us. I’ve been trying for some time to run with ‘one number’ – a single telephone number that will reach me wherever I am in the world, on whatever device I choose to have with me. The mechanics behind this work surprisingly well; all of the issues are around the social etiquette that’s annealed around our use of numbers. People still get offended when I don’t give them a ‘mobile’ number, and others find it impossible to grasp that dialling something that’s purportedly anchored in London will actually reach me in my office in NY (or wherever else). I’m told that in some parts of the world great significance is attached to which class of number (from the many on a business card) should be used at any given time.

Of course ‘one number’ isn’t a panacea. People still worry about things like long distance costs and roaming charges. +44 may alienate those from +1 or +34 or whatever (it may even be blocked on some corporate exchanges and pay as you go mobiles); so what I may really need is some identity virtualisation, and luckily services to do this already exist.

So, rounding up, telephone numbers were there being digital identity before the term was even coined. Since we still use telephones a lot we still have to consider the use of telephone numbers as part of a broader identity landscape, and that’s particularly important when the conversation moves onto unified communications – something that I’ll probably post about another day.

PS I’m intrigued by the utility of putting contact data into DNS versus something webby like Portable Contacts, and would love to hear stories of how this will be used in anger?


They in this case are the machines that we use every day, or more specifically the software running on them. By using language like ‘they’ perhaps I’m already using a person-like metaphor that’s inappropriate to the situation. Regardless, we’re confronted each day by machines that make us do repetitive tasks rather than taking them on for us.

This post was originally going to be called something like ‘bookmarks with input’, as such things would solve a tiny subset of the trouble at hand. Certainly bookmarks with input would solve the issue that got me thinking about the broader problem space.

My frustration began with the browser on one of the mobile devices that I lug around with me. I use it quite frequently to check railway timetables, and then use the information from those timetables to determine which station to head for, or train to jump onto. Like most mobile devices its design can best be described as read-mostly – doing input is a pain, and bandwidth is limited, so browsing through many pages to get to the right place is a slow process. So… rather than repeatedly making me input the same variables why doesn’t the device do this for me (or at least let me save a given input set for rapid future reuse)? This is less of a problem on the desktop, where both browsing and input are higher bandwidth experiences, and yet we still sit there like robots – repetitiously pointing and clicking. The server side of many services doesn’t help either, by insisting that some sort of session state be established and maintained, and sliding you straight back to square one if you dare to let things time out or ask a question without hopping through the right sequential steps and up the right ladder.
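A ‘bookmark with input’ could be as simple as a saved URL template plus a saved parameter set, so a repeated lookup needs no retyping. A minimal sketch (the site and parameter names are invented for illustration):

```python
from urllib.parse import urlencode

# A 'bookmark with input': a URL template plus a saved set of query
# parameters, so the same timetable lookup never needs retyping.
saved_queries = {
    "home-to-work": {"from": "PAD", "to": "OXF", "time": "0800"},
}

def bookmark_url(name: str,
                 base: str = "https://timetables.example/search") -> str:
    """Rebuild the full query URL from a named, saved input set."""
    return base + "?" + urlencode(saved_queries[name])

print(bookmark_url("home-to-work"))
```

One tap on ‘home-to-work’ and the device replays the whole input set – which is all I was asking the browser to do for me.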

How do we stop ourselves being robots playing snakes and ladders? The geeky answer is scripting. In anticipation of the comments that might say ‘you can do all that with Greasemonkey’, I would like to ask ‘why don’t we do all that with Greasemonkey, or indeed any other scripting environment’? I think the answer is that scripting is for geeks, because scripting is hard – it’s another place where the machine makes you think like a computer rather than the computer learning from our actions.

Things get much more interesting if the script is automatically generated, perhaps by observing patterns of repeated behaviour. More interesting still if the generated script can then be edited in a simple and meaningful way. I think that’s what Platypus does, but that still leaves me wondering why such functionality is in an add-on to an add-on rather than part of the everyday out-of-the-box experience?
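The ‘observe repeated behaviour’ step needn’t be sophisticated. A toy sketch: log the user’s actions, then surface the most frequently repeated short sequence as a candidate for an auto-generated script (the action names are invented):

```python
from collections import Counter

def repeated_sequences(actions, length=3):
    """Find the most common run of `length` consecutive actions -
    a candidate for turning into an auto-generated script."""
    ngrams = [tuple(actions[i:i + length])
              for i in range(len(actions) - length + 1)]
    return Counter(ngrams).most_common(1)

log = ["open", "search", "click", "open", "search", "click", "quit"]
print(repeated_sequences(log))  # [(('open', 'search', 'click'), 2)]
```

Real systems would need to generalise over varying inputs, but even this crude frequency count is enough to say ‘you keep doing this – shall I do it for you?’.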


Reversion marketing is the evil twin of conversion marketing. The reversion marketing experience from a consumer point of view is about receiving such dreadful service that you choose to leave. Why would any organisation do this? Well, it’s a way of getting rid of unprofitable customers without directly saying to them ‘we don’t want you any more’. It’s the corporate version of the person who can’t say to their boy/girlfriend that they don’t want to see them anymore, and instead behaves like a jerk so the other party breaks off the relationship. This is the dark side of customer segmentation. Whilst the textbooks are full of what you should do to extract more revenue from the most profitable customers, they have little to say about what to do with the unprofitable ones.

So why can’t organisations say ‘we don’t want you any more’? Is it more or less damaging to brand values and perception to be transparently pruning customers, or to be delivering bad service to those customers so that they self-prune? In a socially networked world full of feedback and reviews, can the reverted customer do a lot of damage; or would a declined customer be even worse?

From my own perspective this is a practice that I see most often from large and resilient brands, which gets me wondering if I’m mistaking incompetence for malice? I’d love to hear from any insiders whether reversion marketing is an actual process anywhere?


Digital ego?

25Nov08

I spent yesterday at the Silicon Valley Comes to Oxford (SVCO) event, which I can heartily recommend to anybody interested in hearing about the intersection of technology and entrepreneurship from the horse’s mouth.

There were many highlights to the day, but for me the most interesting presentation was by Susan Greenfield on ‘The Brain: neuroscience of the computer’. The central question behind the presentation was what would it take to digitize our personal identity (and establish digital consciousness)? This clearly is digital identity on a scale way beyond the scrum of tokens and federation. So much so that I wondered whether digital identity is the right label, or whether it should be digital ego? It certainly seems to me that Freud’s (translated) Id isn’t relevant to this discussion of a broader concept of ID. I also feel that until we get the foundation concepts of digital identity sorted out – so that the tokens and federation and all that stuff actually work – we’re in no place to build a digital ego.


My experiment with directed social bookmarking seems to be working out well, though I still don’t have an appropriate feedback vehicle. Nudge, nudge to those that have offered to help.

One of the interesting things that’s happened is that people who I direct stuff towards are starting to become significant in my tag cloud. This got me thinking about whether that might be a bit of a privacy problem, and how such a problem might be solved.

A seemingly obvious answer would be to use meaningless but unique numbers (or identifiers) – MBUNs. I could then tell the target of a directed feed which MBUN to subscribe to, and off they go. This could however be problematic for me in terms of remembering who is represented by which MBUN.
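The remembering problem is really just a private lookup table: publish under the MBUN, keep the MBUN-to-person mapping to yourself. A minimal sketch of the idea:

```python
import uuid

# Publish feeds under meaningless-but-unique identifiers (MBUNs) and
# keep a private two-way mapping, so the public tag cloud leaks nothing.
mbun_for = {}    # person -> MBUN (private)
person_for = {}  # MBUN -> person (private)

def mbun(person: str) -> str:
    """Return a stable MBUN for a person, minting one on first use."""
    if person not in mbun_for:
        m = uuid.uuid4().hex
        mbun_for[person] = m
        person_for[m] = person
    return mbun_for[person]

feed_tag = mbun("alice")     # what the world sees in the tag cloud
print(person_for[feed_tag])  # "alice" - only I can do this lookup
```

The tooling just needs to do that lookup for me, rather than expecting me to remember hex strings.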

There must be a more elegant way of doing this?

Well, I think there is, but maybe it’s not quite ready yet. What if, rather than having an RSS feed as the conduit for a directed feed, I used a personal AMQP queue instead? That would, I think, be cool. The plumbing and namespace management aren’t there yet, but I don’t think they’re insurmountable problems.

Please comment on other cool things that could be done with personalised (or should that be personalized) queues?


I’ve been too quiet of late, and part of the problem has been this blog post, which has become something of a mental bolus. It’s time to get it out.

The title really says it all. It’s my assertion that software support, or at least big company support for enterprise customers, is a myth. Not just a myth, but a costly myth that we seem to desperately cling to for reasons that elude my understanding.

When was the last time you phoned up a major software supplier’s support desk and said ‘I’ve got a problem with your stuff – it doesn’t work like it should’ and they came right back and said ‘gosh – thanks for pointing that out, we’ll get right on with fixing that, and let you know as soon as possible when the fix will be ready’? It just never happens. My typical experience seems to be modelled on the stages of grief:

1. Denial – there’s nothing wrong with our software. You must be using it incorrectly.

2. Anger – how dare you suggest we make anything other than a perfect product. It’s supposed to work like that.

3. Bargaining – we can do a fix for you, but which of your high priority feature requests for the next version are you willing to give up so that we can still make the shipping date.

4. Depression – the fix is done, but we can’t let you have it as it needs to go through our regression testing cycle. We’d much rather you lived with a broken system than give you something that’s not properly tested.

5. Acceptance – here’s your patch, it took us so long to do it that we’ve rolled it up with a bunch more and called it a service pack.

So… software support is just like grief, except you pay something like a 25% annuity for the privilege. No wonder they call it S&M ;-)

I could leave it there, and let the debate rage in the comments; but maybe that won’t happen. The blogosphere has become a surprisingly write-only medium of late. So what do I think should be done about this?

Let’s start by looking at where good software support still happens (it does) – Open Source and startups. Startups do good support because they’re desperate to have a working product and happy customers. This is how things should be. Open Source lets you fix it yourself (if need be) or pay the original authors, or some third party that thinks they’re a bit handy with a text editor and compiler, to do it for you. There is maybe a boundary condition with Open Source, which is where enterprises pay for huge OSS S&M contracts, which I feel are just as bad as huge S&M contracts for proprietary stuff.

I think the general principle for a working system looks like fixed-price pay-per-incident. The problem with this is that it’s ‘victim pays’; and there’s also a temptation for ‘incidents’ to be closed before they’re thoroughly resolved. However, I think it would take a LOT of incidents for any enterprise to run up a bill that looks like their present S&M bill.
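A back-of-envelope comparison makes the point (all figures below are invented for illustration – a round licence cost, the 25% annuity mentioned above, and an assumed fixed incident price):

```python
# Back-of-envelope: a 25% annual S&M annuity versus fixed-price
# pay-per-incident support. All figures are illustrative assumptions.
licence = 1_000_000           # one-off licence cost
annuity = 0.25 * licence      # yearly S&M bill at a 25% annuity
per_incident = 5_000          # assumed fixed price per incident

break_even = annuity / per_incident
print(break_even)  # incidents per year before pay-per-incident costs more
```

At those (assumed) numbers an enterprise would have to raise an incident roughly every week of the year before pay-per-incident looked worse than the annuity.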

The pedants amongst you will now be wanting to comment about the ‘M’ bit for maintenance being an options contract on upgrades to future versions. I know, so let’s keep the discussion about support.


One of my colleagues spends a lot of time seeing how we can introduce more enterprise 2.0 technologies to the workplace, and when I come across good stuff in that field I tend to throw it over the wall to him. It therefore struck me as insane that when I was reading this from Andrew McAfee and specifically looking at a picture depicting how bad email is for collaboration, that I sent him links by email.

This got me thinking that there should be a better way, but I quickly realised that the shallow streams of consciousness that we get from the social web aren’t directed enough. Once again a problem seems to have come up that is identity dependent and fine grained. So… we’ve started a little experiment of using del.icio.us tags that are directed at each other. I’ve borrowed the @name convention from Twitterers as a mechanism for doing this.
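The convention reduces to a simple filter over tagged entries: anything carrying an ‘@name’ tag is routed to that person. A sketch (the names and links are invented):

```python
# del.icio.us-style entries; an '@name' tag directs a link at a person.
bookmarks = [
    {"url": "http://example.com/e20", "tags": ["@bob", "enterprise2.0"]},
    {"url": "http://example.com/ml",  "tags": ["machinelearning"]},
]

def directed_at(person, entries):
    """Return only the entries explicitly tagged for one person."""
    return [e for e in entries if f"@{person}" in e["tags"]]

print(directed_at("bob", bookmarks))  # just the enterprise2.0 link
```

A per-person RSS feed over that filter is then the ‘throw it over the wall’ mechanism, minus the email.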

The missing piece seems to be a feedback mechanism (other than an email saying ‘thanks good link’ or whatever).


I was recently speaking at a conference, and the subject of network access control (NAC) came up. At the time I gave a rather glib answer that ‘it’s not the network that you wish to control access to, but the data and services that wrap it’. That’s been my position for some time, but it’s probably worth unpacking some of the detail around this.

The heart of the issue here is where we place policy enforcement points (PEPs). The issue can therefore be recast in terms of entitlements services. It almost always makes more sense to put PEPs within applications or application infrastructure, but there will be times when these are old and brittle. In those cases does it make more sense to have the PEP close to the service (network-based entitlements services) or close to the client (NAC)?
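What an application-layer PEP looks like can be sketched very simply: the entitlement check wraps the service call itself, not the network path to it. A toy example (the actions, roles and policy table are all invented for illustration):

```python
import functools

# A toy policy table: which roles are entitled to which actions.
policy = {"read_report": {"analyst", "admin"}}

def enforce(action):
    """Decorator acting as a PEP embedded in the application layer."""
    def wrap(fn):
        @functools.wraps(fn)
        def pep(user_roles, *args, **kwargs):
            if policy[action].isdisjoint(user_roles):
                raise PermissionError(f"not entitled to {action}")
            return fn(user_roles, *args, **kwargs)
        return pep
    return wrap

@enforce("read_report")
def read_report(user_roles, report_id):
    return f"report {report_id}"

print(read_report({"analyst"}, 7))  # "report 7"
```

The point of putting the check here is that it travels with the service: it holds whatever network the client arrives from, which is exactly what NAC cannot promise.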

Then along comes the question of posture – NAC advocates that infected clients shouldn’t be allowed to connect to any services (except perhaps disinfection), but this assumes that infection can be reliably detected, which is at odds with the limited efficacy of most AV etc. (especially against targeted attacks). To steal a quote from the conference ‘just because I’m standing straight doesn’t mean I don’t have pancreatic cancer’. I covered many of the underlying issues in more detail in my discussion of why trust != management.

What about unauthorised devices? This raises some interesting follow up questions, like why should I care about unauthorised devices, and if I do why am I in the business of running a network that makes me care about this stuff? The heart of these questions lies in where perimeters are drawn. If I choose to run a client network that has all sorts of sensitive data flowing across it, and provides unfettered access to services containing data that I care about, then I certainly should care about the network itself. If however I re-perimeterize around core data assets, and connect client machines to public networks, then the concept of unauthorised machines evaporates. If I have unauthorised devices appearing on my data centre network then I have a physical security problem rather than an information security problem.

So… I think this leaves NAC as an unsuccessful legacy of outdated network management practices. Or did I miss something?


For most enterprises the essence of trustworthiness is their internal build, which normally comes in client and server flavours for a variety of ‘supported’ operating systems. Machines running this build are trusted to access corporate resources, anything else is kept out with policies, firewalls and mechanisms like network access control (NAC). That internal build is considered trustworthy because it carries with it a bunch of tools that are meant to ensure that the machine only runs an intended subset of applications. This includes patch management, antivirus, host intrusion prevention systems (HIPS) and various user policy management systems. Historically these security subsystems have been perceived as necessary to shore up weaknesses in the ‘vanilla’ operating system. This approach is however fatally flawed because the world has moved on in two key ways:

  1. Operating system security out of the box is far better than it used to be. In all likelihood a ‘vanilla’ installation with auto update installed will be in a far more recent patch state than an enterprise machine relying upon some contrived mousetrap for patch deployment that was conceived before the auto update mechanisms matured.
  2. To an appropriately skilled attacker the enterprise layered defence looks like a panopticon. One hole is all that it takes to exploit an unpatched (or even unknown) vulnerability and install some malware (as a rootkit, in the kernel, where it will be hard to detect).

This means that the corporate build becomes little more than security theatre against a modern, stealthy and targeted attack.

Dealing with this means that security engineers must go back to square one and question what it is that they want to trust. Typical answers to this are:

  • Systems connecting to corporate assets must be free from spyware and other types of malware.
  • Users interacting with corporate assets must be identified, and from that identity entitlements decisions can be determined.
  • The scope of what the corporate help desk can be expected to support should be minimised (in the interest of efficiency and cost effectiveness).

Right now those answers are arrived at by a variety of management mechanisms, and each has its weaknesses:

  •  Installing patch management, AV, HIPS etc. in an effort to keep the malware out.
    o These mechanisms simply aren’t effective against the most pernicious malware. They’re great at keeping the lumps out, but things still get through.
  • Enrolling machines within corporate directory infrastructure
    o This is a necessary step towards establishing user identity in most situations, but here lies a fundamental question of whether it is the machine that we need to trust, or the user, or some combination of both?
  • Restricting the means to install applications
    o This results in a constant tension between user choice and cost of management. To reduce the obvious friction that results, organisations invariably end up with a population of users holding enhanced privileges over the machines that they use. This in turn leads to an enlarged risk surface area, as those users have far more ability to disrupt configuration from the intended managed baseline (maliciously or otherwise).

So… what could we choose to trust instead?

I proposed that trusted virtual (client) appliances offer a new set of choices. Let me first describe what one of these things is:

  • A virtual machine offering a single function, or a limited number of functions.
  • A cryptographic chain of trust is established from the hardware (e.g. TPM) through the boot loader and hypervisor to a signed image of the appliance.
  • The appliance is able to attest to services that it accesses that it hasn’t been tampered with.
  • The appliance can (should?) be constructed from components with known provenance.

Whilst this approach superficially looks like putting a virtual bubble around a traditional managed build it does offer a number of distinct differences:

  • When dealing with malware the emphasis has changed from trying to keep the bad stuff out to only running the good stuff that was put in.
  • Barriers can be established between different services and the applications that access them.
    o All of the application eggs don’t have to be put into the same security basket.
    o At the extreme this means that there is no issue having an untrusted, unmanaged set of applications accessing public services sitting alongside trusted apps accessing sensitive services.
  • The attack surface area associated with a given machine has been reduced from the OS to the hypervisor (a microkernel) – something that security researchers have been suggesting for some time.

There are of course challenges ahead:

  • Services haven’t yet been built to understand client attestation
    o Existing methods like two-way SSL, SSH and VPN based authentication will have to be reused for the time being
  • Provenance services are in their infancy, and even when they do grow up undesirable things still can/will find their way past whatever mechanisms are placed in a secure software development lifecycle on the road to provenance.
  • The present generation of hypervisors deal very well with server side resource management (CPU, memory, IO, storage) but aren’t yet very well adapted to client specific concerns around keyboard, video and mouse (with GPU and screen output sharing being the really tricky part).

So… getting back to the original plot… managed builds have probably come to the end of their useful life as a means of dealing with issues of trustworthiness, but by bringing together virtualisation and stronger trust assurance mechanisms it’s possible to recast managed builds in a way that not only deals with the trust problems, but also gives flexibility back to users and service providers.