Making it work

26Feb09

Yesterday’s posts on wikis and the semantic web were probably less constructive than they could have been. I was long on identifying the issues, and short on fixing the problems.

I stressed the importance of user experience, and in many cases the social networking sites already give us that, though I would love to see JP’s ‘graphic equaliser’ brought to life. Unfortunately existing sites give us a good user experience on their terms, inside their walled gardens. GapingVoid hit the nail on the head here, but it’s not just about making money; it’s about the freedom to work together on our own terms rather than on somebody else’s Ts&Cs.

I’ll (re)use some use cases to illustrate how things might work…

  1. Planning the party – how do you get a dozen people in roughly the same place at the same time? I still see the wiki as the core to solving this problem. All that’s really needed is a place where everybody can contribute information like flight details and mobile numbers so that it can be consolidated. Some kind of federated identity is clearly needed to get people through the door, and maybe OAuth provides a neat way of integrating with what people are using already if they’re not tooled up for OpenID, Information Cards or SAML. Secure syndication is clearly a problem, which is why everybody should have their own queue. Messaging as a Service (MaaS) based on AMQP gives us a way to do this. What happens to those messages? I think they become inputs to the ‘graphic equaliser’, which then throws them out in the user’s preferred modality (e.g. Fred just changed his flight details; I find out by text to my mobile because I’ve told my system that I’m on the road without full-fat net access).
  2. ‘Bank 2.0’ – this is very similar to planning the party in that it’s all about creating puddles of knowledge to support decisions. Once again we see content aggregation, syndication, and customised consumption as the core requirements. In this case though there are many things about existing web technology, and in particular HTTP (a synchronous, unreliable protocol), that make things tough. For human decision support there might be some wiggle room on timeliness, but everybody wants accuracy. For automated decisions (e.g. algorithmic trading) things need to be as reliable as possible, and as fast as possible. Moving trades between systems, processing a settlement, sending a confirmation – these are all inherently asynchronous activities that ought to be reliable. Once again AMQP seems perfectly positioned to fit the bill, and once again some kind of MaaS looks like the solution for internal integration and external connectivity.
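The queue-plus-equaliser idea can be sketched in a few lines. This is a toy, in-process stand-in for what a real MaaS would do over AMQP, and all of the names (`Equaliser`, `UserQueue` and so on) are made up for illustration:

```python
from dataclasses import dataclass, field
from collections import deque

@dataclass
class Equaliser:
    """Per-user routing preferences: message kind -> delivery channel."""
    preferences: dict = field(default_factory=dict)
    default_channel: str = "email"

    def channel_for(self, kind):
        return self.preferences.get(kind, self.default_channel)

@dataclass
class UserQueue:
    """A personal message queue plus the 'equaliser' that drains it."""
    equaliser: Equaliser
    queue: deque = field(default_factory=deque)
    delivered: list = field(default_factory=list)

    def publish(self, kind, body):
        self.queue.append((kind, body))

    def drain(self):
        # Route each queued message out in the user's preferred modality
        while self.queue:
            kind, body = self.queue.popleft()
            self.delivered.append((self.equaliser.channel_for(kind), body))

# I'm on the road, so travel updates go to SMS; everything else to email
me = UserQueue(Equaliser(preferences={"travel-update": "sms"}))
me.publish("travel-update", "Fred is now on BA1462, landing 18:05")
me.publish("chit-chat", "Anyone fancy fondue on Tuesday?")
me.drain()
```

The interesting part is that the routing decision lives with the consumer, not the publisher – Fred just publishes the change, and everybody’s own equaliser decides how it arrives.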

So, MaaS looks like the foundation here. In some ways I think these services will become like an inside out version of web hosting – a place to consume rather than a place to publish; though clearly we need both.


I don’t think this is going to be what Sean was asking for, but I also didn’t want to ignore the call to action, and the points that follow will hopefully lead to some useful debate. This post will probably be provocative, so let’s start right out with my point of view – semantic web is not the solution, so can we get a bit clearer about what the problem actually is?

Firstly, a little history. I first came across semantic web technology about 6 years ago when working on web services (SOA) governance. Most of the governance issues seemed to boil down to a lack of shared meaning, and so semantic web looked promising as a means to establish shared meaning. Unfortunately it turned out that the cure was worse than the disease. This is one of the reasons why SOA is dead.

From my perspective the key difference between the web and the semantic web is that the former can be utilised by anybody with a text editor and an HTML cheat sheet, whilst the latter requires brain-enhancing surgery to understand RDF triples and OWL. The web (and web 2.0) works because the barrier to entry is low; the semantic web doesn’t work because the barrier to entry is too high. TimBL exhorts that the problems we collectively face could be solved by everybody contributing just a little bit of ontology, but that song only sounds right to a room full of web science sycophants. Everybody else asks onwhatogy?

I’m also not convinced that financial services needs more ‘interoperability of information from many domains and processes for efficient decision support’. Yes, decision support and the knowledge management that underpins it need information from many domains, but is there really an interoperability problem? Most of the structured data moving inside and between financial services organisations is already well formatted and interoperable. There’s more work to be done everywhere with unstructured data, but there’s even more to be done in better facilitating the boundary between implicit knowledge (in people’s heads) and explicit knowledge (on rotating rust, or SSDs before too long) and back. The semantic web concentrates too much on the explicit–explicit piece of the knowledge creation cycle. The web 2.0 platforms seem to be stealing a march on the semantic web by having the usability to make traversing the tacit–explicit and explicit–tacit boundaries easy. It’s all about user experience; if the techies decide to build that experience on top of a triple store and some nice way of hiding (or inferring) ontologies then cool – but who cares – somebody else might do better on top of a flat file, in a tuple space, or whatever.


At the end of the technology timeline that I posted yesterday I wondered if I should have included more services, particularly in what seems like an innovation vacuum over the last six years. On reflection I think I know why I didn’t include many. Almost everything on the timeline was something where I triumphed over adversity – a product came along, its concept was good, but it didn’t quite cut it; after a bit of tweaking it was made to work, and most importantly lessons were learned – lessons that in part shaped who I am and what I do. When I look at services though there has been no triumph, just continuous improvement. There is still adversity. Things haven’t yet been made to work as well as they should.

This brings me to wikis. Next week I’m looking forward to spending some time on the ski slopes and après-ski bars of the Alps in the company of a bunch of people that I mostly don’t know (yet), who are all converging for a stag (bachelor) party. Co-ordinating the logistics of flights, hotels, mobile numbers etc. by email is a nightmare, and I was once again struck by the idea that this would be the perfect use of a wiki. The trouble is that getting everybody to the same wiki is probably just as hard as getting everybody to the same bar. The problems are:

  1. The signup hurdle – creating a wiki requires somebody to sign up as an administrator, and all of the users to create accounts (or associate OpenIDs if they have them). This is a lot harder than blasting out a list of email addresses. Perhaps there’s a good, free (at point of use) and secure wiki platform that lets you drop in a bunch of email addresses and it automagically configures accounts and sends invites – let me know if you’ve seen it?
  2. The compliance trolls – many of us spend our working days behind corporate firewalls with web filters designed to prevent nasty things like ‘data leakage’ and ‘use of non-compliant collaboration platforms’ etc. This cuts off a lot of web 2.0 – not good when you want to communicate.
  3. Alerts and updates – ‘email inboxes are like todo lists where other people get to add items’ (Chris Sacca), so when I send somebody an email I get their attention. Sadly I’ve not yet seen a good (in band) way of doing this with wikis. It should be possible to do something with RSS, but nobody seems to have done it right yet (and I suspect that there would also be security issues that would prevent popular online aggregators working properly). It may have been 3 years ago that he posted, but Jeremy Zawodny is still right – wiki RSS feeds suck.
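Doing ‘something with RSS’ needn’t be hard in principle. Here’s a minimal sketch of the consuming side – filtering a wiki’s recent-changes feed down to the pages a user actually cares about, which is the piece most wiki feeds fail to offer. The feed content is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A made-up recent-changes feed of the shape many wikis emit
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>StagPartyWiki recent changes</title>
  <item><title>Flights</title><author>fred</author>
        <description>Changed arrival to 18:05</description></item>
  <item><title>Sandbox</title><author>bot</author>
        <description>test edit</description></item>
</channel></rss>"""

def alerts(feed_xml, watched_pages):
    """Yield (page, author, summary) for changes to pages I care about."""
    root = ET.fromstring(feed_xml)
    for item in root.iter("item"):
        page = item.findtext("title")
        if page in watched_pages:
            yield page, item.findtext("author"), item.findtext("description")

hits = list(alerts(SAMPLE_FEED, {"Flights", "Hotels"}))
```

The filtered result is exactly the sort of thing that could then be pushed in band (email, IM, text) rather than waiting for somebody to poll their aggregator.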

I feel that the social networking applications could do something about this, and maybe there’s even a BachelorPartyOrganiser app out there for the platform du jour (see 2 above for why we’re not using it), but even if we could herd all the participants inside a walled garden I’m not sure that all the needs would be satisfied. What we really need is an open, connected and participative experience of content aggregation and syndication.

For now I guess we’re stuck with an ugly torrent of emails. I’ll probably print mine out before I go in case my batteries run out.


Charlie seems to have kicked off what might become a geek meme over at http://www.antipope.org/charlie/blog-static/2009/02/technology_timeline.html

I’ll break from my usual tradition of not mentioning brand names. Many of the brands I encountered are now consigned to the history books, others are still going strong. I don’t imply any endorsement, choices were mostly down to what made best (economic) sense at the time.

  1. Cassette recorder: age 4. The first electronic item I ever owned. It was a no-brand front-loading portable, mono only. I still have a tape I made of some bedtime stories – ‘I’m huff the hedgehog and I want my dinner and if I don’t get it soon I’ll get finner and finner’ – those were the days before speech therapy.
  2. Pocket calculator: age 7. A basic Texet LED model that chewed through 9v batteries at a shocking rate. I later modded it to work off a mains adaptor.
  3. Electronics kit: age 8. A Tandy (Radio Shack) 50in1. This was probably the real turning point for me, making up circuits for things like transistor radios and burglar alarms. I have still never managed to get a crystal radio to work (my most recent attempt being a few weeks ago). Thanks dad.
  4. Scientific calculator: age 9. A Casio fx-81. Playing with this got me into trig and other types of maths that they weren’t teaching me yet at school, which all proved to be a good grounding for engineering.
  5. Music centre: age 10. I wanted an Amstrad tower, as they were all the rage, but got a cheaper unit. It still served me well for making mix tapes until I started collecting discarded HiFi separates a few years later.
  6. Computer: age 14. A Commodore Plus/4. It hadn’t been a great commercial success as the follow up to the C64, which meant that I got it with a huge package of software for £99 from Poundstretcher. It did however have a better BASIC interpreter than its predecessor. I coded my first programs to be published on this machine. This wasn’t the first home computer, dad got a ZX-81 when they came out, and we later got a Dragon32, but this was the first that was mine alone.
  7. Colour TV/monitor: age 14. This was kind of necessary to use the Plus/4 in my bedroom. I got an ex demo Philips set that had RGB and composite video inputs that worked with just about all of the 8 bit home machines (though I had to make my own video leads as it had obscure connectors). It’s still working today.
  8. Modem: age 15. Ever since seeing ‘Wargames’ I’d wanted a modem, and a deal on the Compunet modem put one in my reach (though the service fees and phone bill became a problem). It did 1200/75, which I also used for tinkering with Prestel and other videotex services and 300/300, which was the going rate for most BBSs at the time. The C64 that I needed to use this thing was extensively modded and hacked over the years.
  9. 16 bit computer: age 16. Another Commodore, this time the Amiga 500. This got me playing with A/D converters and making up MIDI interfaces for my brother and his friends. I came up with a design that switched a transistor for an op amp saving component costs and making fabrication easier (everything got soldered to the pins of the chip, which was then glued into the case – no PCB required). This machine was the one that made me learn C.
  10. PC: age 18. An Amstrad PPC-640. I bought this mostly for the integrated V.22bis modem, but it proved useful for many other purposes including PASCAL programming at University and my earliest forays into internet services.
  11. ISP: age 21. I needed a way to get online beyond the University network (which had JANET connectivity), so I got a CIX account in order to use their recently launched internet gateway. I still use it today, though I’ve been through many pure ISPs since.
  12. Palmtop: age 24. Life after university came with many moves, and little space for stuff, so I got a Sharp PC3100 to use on the hoof along with a ‘pocket’ 14.4k modem.
  13. Homebrew PC: age 25. Over the years I had built many PCs for other people, but this was the first for myself. I had a 486-DX2-66 that needed a home, and started out with 1MB RAM, though this quickly became 4MB. I have never bought a complete desktop PC unit for myself.
  14. DVD player: age 27. This was actually a kit for my PC with a DVD-ROM drive and an MPEG-II decoder card (as CPUs at the time didn’t have the grunt to run the codec in real time). I had to make a huge S-Video lead to watch stuff on the TV whilst it played upstairs in the study, and for some odd reason the chrominance and luminance on the decoder outputs were reversed forcing me to put crossover connectors into the cable.
  15. Mobile phone: age 28. I held out for some time on getting one of these, but when buying my first house it became something of a necessity. Prepaid one year contracts (at rates that we still haven’t really returned to) helped sweeten the pill.
  16. PDA: age 31. An early Sony Clie. The best feature may have been a multi system remote application that I once used to turn off a very annoyingly loud TV in a pub. Everybody glanced around for a second and got back to their drinks and conversations.
  17. Smartphone: age 32. When I realised that the Treo 600 gave me something that could be a phone and PDA in less pocket space, I was sold. I’ve not gone back to a regular phone since, though I still use the GSM mobile I had before as a host for foreign PAYG SIM cards when I’m travelling.
  18. MP3 player: age 32. Attempts to put music onto MMC cards on (smart)phones had proven a bit lame so I bought a 2G iPod. The crucial thing was that I owned about 11GB of music at the time, which fitted comfortably onto its 20GB hard disk. I still use it in my car (which I did not buy from new).
  19. PVR: age 32. Unlike Charlie I do watch TV, but I don’t like to run my life to the schedules, and I don’t watch adverts. I bought a Pace Twin as soon as they came out, and almost immediately invalidated my warranty by upgrading it to 60GB so that it would record 30hrs. I was an Alpha tester for the TwinRip app that lets me copy stuff onto my PC.
  20. Streaming media player: age 32. I had been tinkering with video on my PC for ages, and a network attached appliance that allowed me to watch stuff on the TV was pretty irresistible. The original machine is still going, though its DVD player is long dead, and it’s sat on top of its younger HD capable sibling. I still wonder at the lunacy over ‘rights’ that means there isn’t a product that successfully brings together the functions of the last two items (and no, I don’t count ‘media’ PCs).

Wow – it’s 6 years since there was a cool new product category that I felt the need to get into. Maybe the netbook I ordered yesterday will change things. I also wonder if I should have included more services, like webmail and social networking.


It was a little over a year ago when I first wrote about persona, and I’ve done a couple of follow ups since.

It seems that the term is finally finding its way into common usage, and I’m encouraged by the recent posts by Nishant Kaushik and Mark Dixon. Mark’s visualisation is particularly good (and I hope he doesn’t mind me linking to it here):

All of this good work leaves me wondering why the definitions at IdentityCommons (which seems to be where Identipedia has moved to) haven’t really caught up, but maybe that’s just a mopping up exercise? Once again, here’s my own definition:

  • Persona is an abstraction between an entity (usually a biological entity, or person) and a bundle of one or more digital identifiers, so that the entity can present themselves differently according to context.

I think I may have been concentrating too much on implementation details when I came up with that one, but it still encapsulates the key points about expression of a specifically modified identity in a given context (according to a user preference).
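For the avoidance of doubt about what that abstraction looks like, here’s the definition turned into a minimal sketch of types – not any particular implementation, and all of the names and values are made up:

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Identifier:
    """One digital identifier, e.g. an OpenID or an email address."""
    scheme: str
    value: str

@dataclass
class Persona:
    """A bundle of identifiers presented by one entity in one context."""
    name: str
    identifiers: frozenset

@dataclass
class Entity:
    """The underlying (usually biological) entity behind the personas."""
    personas: dict = field(default_factory=dict)  # context -> Persona

    def present(self, context):
        # The entity chooses how to appear according to context
        return self.personas[context]

work = Persona("C. Swan", frozenset({Identifier("email", "cs@example.com")}))
blog = Persona("cpswan", frozenset({Identifier("openid", "cpswan.example.org")}))
me = Entity(personas={"work": work, "blogging": blog})
```

The key point the types make plain is that the identifiers hang off the persona, not the entity – the same person presents a different bundle in each context.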

One of the challenges I’ve come across many times in the last year is that this concept of persona cuts across what many people have internalised as an aspect of ‘role’. The whole point of using another term is that ‘role’ has become too overloaded, and as we go about designing systems to support the various activities we engage in it’s useful to carve pieces of ‘role’ down to size. I do however accept that this causes trouble for people who’ve invested a lot of time and money into something that already kind of does ‘persona’ but has called it ‘role’. It may be just semantics, but I often find that semantics are very important.


I first touched upon this a while ago when I called it the ‘interest feed‘, and JP recently got me going again with his posts about customer perspective. In theory this is the type of problem that could be addressed by ‘synopsys‘, but some deeper digging is making me think that things are worse than they first appeared.

I thought I’d have a go at brute-forcing part of the problem. Each time I wrote a comment on somebody else’s blog, I’d subscribe to the comments feed and put it into a group in my aggregator called ‘comments watch’. That way I wouldn’t miss out as the conversation developed. So far it’s been an almost complete failure, as it turns out that very few blogging platforms support a comments subscription for a single post. The feed stock for microsubscriptions just isn’t there.

I feel a nasty kludge coming along.
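One such kludge: guess at per-post comment feed URLs using known platform conventions, and probe each candidate until one parses. The patterns below are assumptions based on WordPress’s habits (it serves a post’s comment feed at the post URL plus /feed/); other platforms would need their own entries, and a real aggregator would fetch and validate each guess:

```python
def candidate_comment_feeds(post_url):
    """Guess per-post comment feed URLs from common platform conventions.

    These patterns are assumptions, not a registry: an aggregator would
    try each in turn and keep the first that returns a valid feed.
    """
    base = post_url.rstrip("/")
    return [
        base + "/feed/",           # WordPress, pretty permalinks
        base + "?feed=rss2",       # WordPress, query-string permalinks
        base + "/comments/feed/",  # occasionally seen variant
    ]

feeds = candidate_comment_feeds("https://example.com/2009/02/some-post/")
```

It’s ugly precisely because it encodes per-platform trivia that the platforms themselves ought to advertise with feed autodiscovery links.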

What does the rest of the world do about this? Do people making comments just shoot their mouth off and move on? Or do folk obsessively (and manually) return to where they’ve been before (and I can’t help thinking of my West Highland Terrier as I write this)?


Classen’s law

06Jan09

Sean Park’s ‘The power of power laws’ reminded me once again of one of my favourites – Theo A C M Classen’s “logarithmic law of usefulness”. I finally got around to doing a Wikipedia entry for it here, which I hope is notable enough to survive the Wikipedia deletionists (about which I’m entirely in agreement with Tim Bray).

It’s because of Classen’s law that I’ve declared myself as a singularity non believer (even though I enjoy reading singularity inspired SF). I think to have a real singularity it would take something that would drive an exponential increase in usefulness, and if Classen’s right that would need something to drive technology at a double exponential rate. Given how hard it has turned out to be to keep Moore’s law a reality that seems unlikely, though when Moore’s law finally has a hard collision with the laws of physics who knows what might emerge to take the place of reducing the feature size of 2D semiconductors?
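To make the singularity argument concrete, here’s the arithmetic as I understand it – taking Classen’s law at face value and using a generic doubling period τ for Moore’s law:

```latex
% Classen's law: usefulness grows with the log of the technology
U \propto \ln T

% Moore's law: exponential technology growth yields only linear usefulness
T(t) = T_0 \, 2^{t/\tau}
\;\Rightarrow\;
U(t) \propto \ln T_0 + \frac{\ln 2}{\tau}\, t

% A singularity needs exponentially growing usefulness, which in turn
% forces technology to grow double-exponentially
U(t) \propto e^{kt}
\;\Rightarrow\;
T(t) \propto \exp\!\left(c\, e^{kt}\right)
```

So even heroic efforts to keep the doubling period constant leave usefulness plodding along a straight line, which is the crux of my non-belief.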


James McGovern came up with a good starter for 10, but since he called me out to add some more here goes:

  1. Ignoring Pareto – many enterprise architects end up becoming the creators of internal ‘standards’, and then become the standards cops. All too often the 80:20 rule is ignored (and in fact this tends to be more like 90:10 for many things IT), which results in an application being shoehorned into an inappropriate ‘standard’ platform, or the platform squished out of shape to accept an application that shouldn’t be there. Good architects are the masters of good exception processes.
  2. Thinking linear scalability will be enough (or worse, aspiring to linear scalability). It won’t be – too many things in the real world follow power laws, and it’s no coincidence that the systems we build to model and manage them also need to scale accordingly. Of course Moore’s law is itself a power law, and many have relied upon it to get them out of trouble, but the terrain is getting muddy as we go from single cores following Moore’s law to multi-core systems and the need to design expressly parallel applications.
  3. Too many patterns, and not enough understanding of antipatterns and how we get into them. If I pick up a patterns book then there’s sure to be stuff in there where I’ll wonder how it could ever be used; if I pick up an antipatterns book I’ll be able to think of at least one application that’s fallen into every trap. My view is that there’s more value in avoiding the holes than there is in staying on the path.
  4. Thinking that the laws of software development only apply to other people, particularly Conway’s law when dealing with any organisation large enough to have people that call themselves enterprise architects.
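Point 2 is easy to see with a toy calculation. Suppose request traffic across keys follows a Zipf distribution (a common empirical assumption, not a measurement from any real system): the ‘hot’ top 1% of keys then carries roughly half of all traffic, so adding uniform shards linearly does little for the busiest one:

```python
def zipf_shares(n_keys, s=1.0):
    """Normalised Zipf weights: the key ranked k gets weight 1/k**s."""
    weights = [1.0 / (k ** s) for k in range(1, n_keys + 1)]
    total = sum(weights)
    return [w / total for w in weights]

shares = zipf_shares(10_000)
hot = sum(shares[:100])  # traffic share of the top 1% of keys

# With s = 1 over 10,000 keys the top 1% carries roughly 53% of requests,
# so a uniformly sharded system stays swamped on its hottest shard no
# matter how many shards are added.
```

That skew is exactly why capacity plans built on average load (and linear scale-out to meet it) come unstuck.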

Last week I was asked (at very short notice) to come up with a presentation on what a social networking operating system would be. This is in part why I felt it was necessary to spend time on social network modalities. In the end the presentation wasn’t formalised (or constrained) by being put into a document, so this blog post is the first written record (assuming that my white board scribblings were consigned to the bin shortly after the workshop ended).

The thesis is fairly simple. Over time we’ve seen operating systems develop to provide sets of basic functions and services so that applications don’t have to do this themselves. This achieves a number of key things:

  1. Applications can be simpler, because the operating system does the ‘heavy lifting’ for them.
  2. Applications can work together, because there are common interfaces provided by the operating system.

This makes me think that we’re still in some sort of pre-history with social networking applications, as since they aren’t built on a common operating system they necessarily have to provide their own essential functions and services, and these typically don’t work well together.

So what does a social networking operating system let me do? I think it’s like this – it will let you join together functional aspects from social networking applications, and do this in the context of the user. An example might be the feedback mechanism for directed social bookmarking. Let’s suppose that the user wishes to provide feedback via microblog @name posts. Without a social network OS that user is forced to switch contexts from their RSS aggregator (where the social bookmark is consumed) to their microblogging application (where they can make the feedback post). Not only do they need to switch applications, but they might also have to deal with context mismatches between namespaces etc. With a social networking OS the user would be able to press a button to make that response in context – the reply by microblog post (or whatever else they wanted to do) would become a feature of the RSS aggregator.
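The in-context reply could be sketched as a capability registry provided by the OS layer: applications register actions, and any other application can invoke them with the user’s current context. This is a toy illustration – the class and action names are invented, and a real social networking OS would of course also have to handle identity, permissions and namespace mapping:

```python
class SocialOS:
    """Minimal capability registry: apps register actions, and other apps
    invoke them in the user's current context without switching."""
    def __init__(self):
        self._actions = {}

    def register(self, action, handler):
        self._actions[action] = handler

    def invoke(self, action, context, **kwargs):
        return self._actions[action](context, **kwargs)

# The microblogging app registers its reply capability...
def microblog_reply(context, text):
    return f"@{context['author']} {text} (posted as {context['user']})"

social_os = SocialOS()
social_os.register("reply-by-microblog", microblog_reply)

# ...and the RSS aggregator invokes it in-context when I hit 'reply'
post = social_os.invoke("reply-by-microblog",
                        {"user": "cpswan", "author": "fred"},
                        text="thanks for the bookmark!")
```

The aggregator never learns anything about microblogging; it just hands the user’s context to whatever capability is registered, which is the ‘heavy lifting’ an OS is supposed to do.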

More broadly a social networking OS allows a user to consume social web applications (in the context of their choice) and connect to other social web applications in the modality of their choice (and without having to change context).

This example makes me think that the social networking OS would therefore be a very client centric rather than server centric technology, as the user context is on the client. I therefore think that the environment for such an OS is almost certainly the browser, which means that the preferred language is almost certainly JavaScript. This would give us the following evolution:

  • Internet
    • Paradigm – distributed machines
    • OS – Unix
    • Language – C
    • Protocol – TCP/IP
  • Web
    • Paradigm – application server
    • OS – J2EE
    • Language – Java
    • Protocol – HTTP
  • Social web
    • Paradigm – social network
    • OS – something in the browser?
    • Language – JavaScript
    • Protocol – stuff based on HTTP, but not really HTTP itself (could be replaced by AMQP?)

I suspect that the example I illustrate above could be pulled off with some ninja GreaseMonkey scripting, but that doesn’t mean that I see GreaseMonkey as the heart of a social networking OS.

It would be remiss of me to close without a hat tip to OpenSocial, which seems to have been an effort to create something like a social networking OS. I remain curious about what’s become of it, and I’d love to hear from any OpenSocial guru who can explain how it might be used to achieve the use case outlined above.