Update (3 May 2010) – I’m getting increasingly sick of how often this machine fails to record things. Worse still I’ve even seen it say that it’s started to record something, but when I go to watch it there’s nothing in the list. Reliability is awful compared to when I first started using it. I’m also annoyed by its propensity to record things on C4+1 rather than C4 when doing series record.
and now back to the original script…
As an early adopter of the Pace Twin I’ve been using a Freeview PVR for as long as they’ve existed, and as I posted before, mine was pimped a little (with a 60GB drive) to improve on the stock spec. Sadly the recent Freeview channel shuffle seems to have brought it to the end of its useful life. Perhaps it’s just the poor signal quality from my communal aerial feed (though my other Freeview TV and boxes seem to cope), or maybe it’s just not up to snuff any more? Regardless of the cause, viewing and recording of many channels had become too much of a hit and miss affair. Time for a change, and some new gear.
I didn’t do my usual extensive research, just a quick dive into some end user reviews on a few online shopping sites. The Sagem seemed to come out less badly than some of the others, and was in stock for £129.99 at my local Argos (along with a £10 voucher offer) so I ventured out into the rain and bought one.
The big shock is that it has no UHF output. Perhaps this shouldn’t be a surprise, as analogue TV is being progressively decommissioned, but one of the nice things about the Twin was being able to watch it on any TV in the house. I briefly considered buying a UHF modulator from Maplin to fix this omission, but in the end plumped for a jury-rigged arrangement with my ancient VCR (Scart in -> UHF out).
There are however some good things about the new box:
- It can record on both tuners at once
- and you can watch another recording at the same time
- Series linking seems to work, even for those Saturday evening shows that jump around the schedule every week
- The 320GB (~160hrs) drive is much bigger than the 60GB (~30hrs) that I had in the Twin, so I can leave the kids’ stuff on there
- and it allows me to put stuff into sub-folders so there’s not too much clutter in the programme list
- It has an ‘exportation’ feature that allows programmes to be copied onto FAT32 USB devices
- and the exported files seem to be bog standard MPEG2 .ts files, no nasty encryption or silly file formats
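As an aside, it’s easy to sanity check what the box exports. A minimal sketch in Python, assuming the standard MPEG transport stream layout (188-byte packets, each starting with the 0x47 sync byte); the function name is mine, not anything the box provides:

```python
# A .ts file should be a sequence of 188-byte packets, each beginning with
# the MPEG-TS sync byte 0x47. Checking the first few packets is a quick way
# to confirm an export isn't wrapped in some proprietary container.
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def looks_like_mpeg_ts(data: bytes, packets_to_check: int = 5) -> bool:
    """Return True if the first few packets carry the MPEG-TS sync byte."""
    if len(data) < TS_PACKET_SIZE * packets_to_check:
        return False
    return all(
        data[i * TS_PACKET_SIZE] == SYNC_BYTE
        for i in range(packets_to_check)
    )
```

Feed it the first kilobyte or so of an exported file and it should say True; anything encrypted or repackaged almost certainly won’t line up.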
There are also a few areas where I can see a need for improvement:
- It can play mp3s off USB media (FAT32 only officially, though it seems to work, albeit with complaints, with regular old FAT), which is cool but:
- You need to switch mode from media list to player if you want it to actually play successive tracks
- It is supposed to be able to copy stuff from USB to the local HDD for more convenient access, but this is a totally unreliable process. I have given up on trying to copy my music library (16GB) onto it. It seems to struggle with a single album’s worth of files, never mind a few hundred.
- The favourites lists are not all that easy to edit, and the split between TV and Radio can also be confusing (it seemed to me initially that I had to choose TV or Radio, but in fact you can have both in a favourites list).
- The UI/remote control combination can seem a little sluggish at times.
- The skip forward function (achieved by pressing >|) skips 5mins. UK ad breaks tend to be 4mins, so the function becomes pretty useless, as you either miss a minute of programme or have to rewind one. The 1min skip on the Twin was fine, and the 30s skip on my Panasonic box is OK (though 8 button presses per ad break is a bore). C’mon guys, either make this configurable, or make the default fit for purpose.
It would of course be killer if:
- I could use it to replace my streaming media player (a Kiss DP-600), which would involve:
- Having an Ethernet port and UPNP support
- Better still if you could do ‘exportation’ over the LAN
- The ability to play DivX (HD)
- If the media player stuff actually worked then a directory sync tool would be handy for managing large libraries.
Overall though I’m finding it hard to complain given the price, and the fact that it hasn’t failed in any meaningful way (yet – fingers crossed). Even if it does end up missing a recording, then that doesn’t seem too much of a big deal these days. I wonder how much longer it will be before the whole concept of a PVR becomes redundant, and we just have a local cache of the media library in the sky (for those times when it just doesn’t make sense to drag those bits across the internet)?
Filed under: technology, Uncategorized | 9 Comments
Tags: dtr67320t, freeview, pace twin, PVR, sagem
I’m usually an aggressive early adopter of new gadgets, but I’ve not been able to bring myself to buy an e-book reader yet. This is mostly due to the DRM deployed by Amazon, Sony etc. and the consequences that has for how I would use the books and what would happen to them in the future.
As I commute almost every day, and travel frequently when I’m not commuting [1], I like the idea of being able to cart a bunch of books around with me in a consistent, lightweight package. I do however tend to be a read-once sort of guy. Books accumulate in my life. I’m barely getting started on the books that were bought for me last Xmas, and the mince pies are already in the shops for this year – yikes. I don’t anticipate a time when I’ll have the luxury to go back and read books again. When I’m done with books I tend to be a hoarder, but I also like to lend them out to friends. This is the one difference for me between books and videos – both I tend to use once then move on, but books get passed on, videos don’t. That said, the ideal distribution model for both would probably be a giant library in the sky, with some local cache of stuff queued up for those times that I’m offline (which I tend to be when I’m consuming books or videos) [2].
Details on what B&N are(n’t) doing with DRM aren’t clear yet. One might hope that they’re driving an open format, and trusting their users to do the right thing, but I doubt this will be the case; I expect that they’ll simply have an even more elaborate DRM scheme that somehow supports lending between devices. Will this now mean that I can recall a book from a friend without having to see them? Can I check first whether they read the book yet or not?
Anyway, this seems on the face of it to be less evil than the Kindle, and also looks sexier. I particularly like the look of the touch screen/keypad thing.
[1] I’m told by regular e-book users that Kindle usage (particularly if it’s in a leather cover) seems to be OK on the bits of flights when you’re not allowed to use other ‘approved electrical devices’. I wonder if the airlines will formally accept this or clamp down as e-books become more popular. One of the main reasons I normally have a chunk of dead tree in my bag is to have something to keep me occupied when the netbook and iPod are verboten.
[2] There’s probably some underlying marketing point lurking here. If I was paying to rent a book rather than buy it outright then I’d likely be more open to living with DRM and other content management malarkey; though obviously the price point would have to reflect that (I’m thinking £2-4 to rent versus £5-15 to buy).
Filed under: technology | Leave a Comment
Tags: DRM, e-book, e-reader, kindle, nook
uncommon sense
“No risk it, no biscuit” said my friend John as we sat down for a curry the other night. He’s a trader, and tends to think of almost everything in terms of risk.
Later on in the conversation we got onto the topic of ‘common sense’, and how it seems to be disappearing from life as we know it. “Common sense is just risk” came John with his usual refrain, but this time I have to totally agree with him. What we call common sense is all about risk: choosing to take risk at an emotional level, rather than having somebody show up with a risk assessment form and fill it out whilst wearing a hard hat and hi-vis jacket, before finally saying “I wouldn’t if I were you, something bad might just happen”. Nassim Nicholas Taleb writes a lot about emotional mechanisms and their relationship to risk in his excellent ‘Fooled by Randomness’ (I’ve not yet got around to the more popular ‘The Black Swan’). He points out that without emotions, without common sense, we become incapable of making decisions. That’s bad at an individual level, but it’s worse at a societal level. When we substitute process for common sense, emotion and competence there might be high hopes that the processes will be efficient and infallible, but in reality that’s almost never the case. Our overall productivity is now choking on broken process in almost every area of daily life. The most egregious examples seem to be associated with ‘terrorism’, where almost any amount of unproductive inconvenience is acceptable if it’s supposed to save ‘just one life’ from a massively low probability event[1]. No common sense, no risk management. But it’s not just about terrorism and associated government fearmongering; it seems that the tabloid press has convinced the general public that no amount of risk is tolerable in any area where the state can possibly intervene with some misconceived legislation (backed up by enforcement that slews randomly between incompetent and heavy handed).
What can be done to fix this?
I fear that there are no easy answers. This one’s a combination of individual and social responsibility, individual and social risk appetites, education, unwinding complex legislation, honesty from politicians[2], cynicism of the popular press… the list goes on.
[1] If anybody knows what the government is doing about the threat to the public from lightning strikes, and where I should stand in line for my lightning security theatre then please let me know?
[2] OK, I realise here that I’m asking for the impossible. Something intimately intertwined with this whole problem is that it seems to be not just acceptable but mandatory for politicians to be liars. Goodness in politics seems to be measured in units of liarbility.
Filed under: grumble | 2 Comments
Tags: common sense, politics, risk
A little while ago I put out a plea for stronger authentication for Google Apps, and it seems that my wish has been granted with TriCipher launching their myOneLogin for Google Apps[1]. I had tried myOneLogin before, and frankly wasn’t too impressed. This time things are different though: the issues I’d seen before with Chrome compatibility and general fiddliness seem to have been fixed, but best of all is the use of a proper strong (soft) token, in the shape of VeriSign VIP Access for Mobile.

I first came across VIP when I saw the news that Verisign and PayPal had teamed up to do a deal on tokens. I wanted one, even if it was going to cost a few quid, but they were initially only available in the US, and I heard nothing more about them. Did the marketing guys lose interest, or did the phishing problem go away, or did something else come along? It turns out that eBay/PayPal will sell you a VIP hard token (a device with a button on it to generate one time passkeys [OTPs]) for $5/£3, but why bother when you can use a free mobile token on your BlackBerry/iPhone/whatever? The soft tokens can be used in a variety of other places, which begs the question of why other sites aren’t jumping on the bandwagon, and why nobody seems to be pushing this? Part of the answer might be the funding model; I’m not sure how Verisign are getting paid for this stuff, but I’m sure they’re not running their service as a charity for the web.
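For the curious, soft tokens of this kind generally derive their one-time passkeys from a shared secret and the current time, along the lines of the OATH TOTP scheme (RFC 6238). I don’t know VeriSign’s exact parameters, so treat this Python as an illustrative sketch of the general technique rather than a description of VIP internals:

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, for_time=None, step: int = 30, digits: int = 6) -> str:
    """Time-based one-time password along RFC 6238 lines (HMAC-SHA1)."""
    if for_time is None:
        for_time = int(time.time())
    counter = for_time // step                # number of 30s windows elapsed
    msg = struct.pack(">Q", counter)          # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                # 'dynamic truncation' per the RFC
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# RFC 6238 test vector: this secret and timestamp yield '94287082'
print(totp(b"12345678901234567890", for_time=59, digits=8))
```

The server holds the same secret, computes the same value for the current window (allowing a window or two either side for clock drift), and compares; no network connection is needed on the token side, which is why a phone app works just as well as a keyfob.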
[1] Premier Edition only, as it needs SAML support
Filed under: identity, security | 2 Comments
Tags: authentication, google, identity, saas, security, strong authentication, tricipher, verisign, vip
Mini review – 3 MiFi
I had great hopes for MiFi. I was going to be like Pig-Pen from Peanuts, just with fewer flies and more connectivity. I would walk the earth with my own little bubble of Internet goodness. No more messing about with dongles for the netbook. My iPod Touch would become like an iPhone (just without voice). Life would be great.
personal cloud of connectivity?
Plasticky
It was clear from the first pictures that I saw that the device itself would be a bit plasticky, and it is. It clearly won’t stand up to many knocks and bumps from daily use, but that shouldn’t matter; the whole point is that it just sits in my bag doing its thing – 3G on one side, WiFi on the other. I understand that these things can’t be carved out of solid titanium billets, as that makes antenna design even more challenging than it is already, but some sort of carry case would help stop it from getting too scruffy too soon.

Battery
Unfortunately it can’t be left in the bag all day. The battery only lasts for 5h (and that’s the claimed life, I’ve not seriously tried to find out what the figure is in real world use). This means that it has to be brought out and charged – frequently. At least there’s a little USB-MiniUSB cable for the purpose, leaving it looking like a slightly overweight dongle hanging off my netbook.
Network nightmare
Charging by plugging into the netbook is fine when the MiFi is switched off, but things get interesting when it’s on. The device presents itself as a network card rather than a modem, and on my machine it gave itself quite a high priority (above my WiFi adaptor). This means that if the 3G modem is on then you get a slow connection, and if it’s off then you get a whole lot of problems. Things can be fixed by a quick visit to the network connections control panel; just don’t forget to press the Alt key if you’re a Vista or Win7 user, or you’ll never even see the Advanced menu option where adaptor priority settings live.
Getting on
The Huawei E5830 device has three buttons on it, and unfortunately you need to use all three to make it go. Firstly the device has to be powered on (press and hold for 2s), then the WiFi needs to be switched on (press and hold for 2s), then the 3G needs to be switched on (press and hold for 2s). Steps 2 and 3 can be reversed if you choose. This all seems a little pointless to me. The sole purpose of the device is to bridge 3G to WiFi. Like the competing Novatel 2352 this should all be done with a single power on. I’ve heard a counter argument that this arrangement stops roamers from running up huge bills by having the thing accidentally turn on, connect, and serve up a Windows update or similar to their laptop. If that’s a real concern then leave it at home, or take the battery (or SIM) out.
Staying on
My first train journey with the MiFi wasn’t much fun. Not only did it seem less good at getting connections than my usual Novatel XU870, but it was equally pathetic at reconnecting after going through a tunnel or whatever. Once again the only point of this device is to connect to 3G and retransmit packets over WiFi. I don’t want to have to press a button on the side of it every time the 3G connection is lost. Total user experience FAIL.
Will I send it back?
Probably not, though I’ve been sorely tempted, and I still have a week to choose. It has already proven useful as a means to provide emergency connectivity to me and my colleagues, such as last week when Gmail was having a bad hair day and I needed IMAP/SMTP connectivity (which I can’t get on the office network). Unfortunately it’s clear to me already that it’s an occasional use device rather than an all the time device. That occasional use would be helped out by better international options, like having some decent roaming tariffs for data, or being unlocked and able to accept a local PAYG data SIM (found just between the hens’ teeth and the rocking-horse droppings at the shop by the arrivals gate in the airport). Let’s see how it handles the trip to Manchester later in the week?
Update – after unlocking and upgrading the firmware I’ve posted a follow up review here.
Filed under: technology | 9 Comments
Tags: 3G, hotspot, mifi, mobile, review, wifi
Telepersona
This is one of those posts that I’ve been meaning to write for ages, and just not got around to it. I’d like to pull together some of the concepts that I’ve touched upon before such as persona, and the fact that telephone numbers are a primitive form of digital identity. Much of what I have to say got spat out already in comments on Sean Park’s post on Advanced economies, but it’s probably worth rounding things up.
The fact that we present different faces to the world is probably no more clear than with telephone numbers, and any contacts management system worth its salt will have fields for ‘work’ and ‘home’. ‘Mobile’ is perhaps more troubling though, as that’s the number that somebody literally carries around with them wherever they go. All sorts of customs have grown up over the years about which number is appropriate for a given circumstance, but these aren’t uniform – there can be huge differences between national cultures. Whilst I was actively going through a process of simplifying my business card down to one number I was learning that a typical card in Italy has around five! At the same time I hit resistance from people that thought I was holding out on them by withholding my mobile number, even though the number on my card routed to my mobile whenever I was away from my desk.
There are also distortions caused by telco pricing structures – calls to national numbers are typically free (meaning within some kind of bundle) whilst international calls might not be; mobile calls, and especially mobile calls to/from a different country can get scarily expensive.
There are a growing number of services now, like Google Voice, that facilitate the ‘one number’ paradigm – this is the number that you can reach me on wherever I am and whatever device I’m using. I think this works about as well as the idea that we should have one identity, one email address, one face to the world. It’s too simplistic, and therefore works badly. I don’t just want one number, I want many numbers – maybe a ‘personal’ number that I give out to friends (e.g. the people I connect to on FaceBook) and a ‘work’ number that I give out to colleagues and work contacts (e.g. the people I connect to on LinkedIn). I may even want to have a US ‘work’ number in addition to my native UK one so that I’m a bundled call away for those in the +1 country code (and I pick up the [least cost] routing charge). A simple block diagram for the overall offering might look like this:

Of course there are still issues – if I call somebody back then CLI will probably show the number for the device I’m using rather than the ‘in’ number that my contact might have in their address book. Will they know that it’s me calling and answer, or send an unknown number straight to voicemail? If they do pick up the call, will they add the device number to my contact details, and route around my rules and routing in future? CLI can of course be manipulated, but it’s hard enough to do this right with one number – harder still if there are multiple inbound numbers.
There are other complexities… SIP wants us to be an email address rather than a number, though of course we all have plenty of those already. Skype gives us another IM-like identity, as does GoogleTalk. The receptionist that couldn’t cope with short dialling codes, ‘call me on *443 8805’, will probably have a brain haemorrhage if I say ‘call me on [email protected]’.
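To make the many-numbers idea a bit more concrete, here’s a toy Python sketch of a persona-based rule table – every number, name and device below is invented for illustration, not part of any real service:

```python
# Toy sketch: several inbound 'persona' numbers route to one person, with the
# target device chosen by which number was dialled and the time of day.
from dataclasses import dataclass
from datetime import time as dtime

@dataclass
class Rule:
    inbound: str          # the number the caller dialled (invented examples)
    audience: str         # who gets given this number
    daytime_device: str   # where it rings during working hours
    evening_device: str   # where it rings otherwise

RULES = [
    Rule("+44 20 7946 0001", "work",     "desk_phone", "voicemail"),
    Rule("+44 20 7946 0002", "personal", "mobile",     "mobile"),
    Rule("+1 212 555 0100",  "work-us",  "desk_phone", "voicemail"),
]

def route(dialled: str, local_time: dtime) -> str:
    """Return the device a call to `dialled` should ring."""
    for rule in RULES:
        if rule.inbound == dialled:
            if dtime(9, 0) <= local_time <= dtime(18, 0):
                return rule.daytime_device
            return rule.evening_device
    return "voicemail"  # unknown inbound number
```

So a work contact calling the work number at 10pm lands in voicemail, while a friend on the personal number reaches the mobile at any hour – which is exactly the per-audience behaviour that a single ‘one number’ can’t express.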
Just because this might be a bit tricky doesn’t mean that we shouldn’t push hard to get it sorted out. It would be nice if this was fixed before my kids become teenagers (and start hogging the phone all night).
Filed under: identity | 3 Comments
Tags: google talk, google voice, identity, mobile, persona, sip, skype, telco
Custom CSS frustration
I spent a little time (well far too much really) earlier this week tarting up the corporate blog, so that it would look more like our web site. It wasn’t much fun.
I’d had in mind that I might just merge the two, but it turns out that WordPress.com doesn’t offer the DNS flexibility that I needed, so that route was quickly backed out.
Next up was the tricky area of look and feel. WordPress.com has some great themes to choose from, but limits the kind of full customisation that you’d get from hosting your own WordPress for perfectly good security reasons (unless you’re a VIP blogger). To balance things up it offers the (paid for) option of Custom CSS so that you can get control over look and feel. You can start with any of the existing themes, or use the SandBox theme, which is essentially a blank piece of paper. I’d done a little CSS hacking before, though I certainly wouldn’t consider myself an expert, but trying to clone the look and feel of our web site turned into a miserable experience:
- Firstly the documentation is next to useless. There’s a forum dedicated to CSS, but just about everything in it leads to some condescending ‘CSS has a learning curve’ type comment. It seems that nobody has bothered to pull together a simple HowTo for newbies – this is how you get your logo in the header, this is what you should do if you want a sidebar on the right rather than the left etc. The best I did find was Mark’s Classic CSS explained.
- It didn’t take me long to trip over the WordPress Theme Generator, which is very neat, but turned out to be a bit useless as the themes it makes are for people doing their own WordPress hosting. It would be fantastic if such a tool existed to generate CSS to sit on top of SandBox, and frankly it’s a disgrace that WordPress.com don’t offer something like this to help people get started.
- It turns out that SandBox isn’t documented in a WordPress.com friendly manner. Or if it is then (as Douglas Adams might say) it’s ‘in the cellar, in the bottom of a locked filing cabinet stuck in a disused lavatory with a sign on the door saying “Beware of the Leopard“‘. The best I could find was Lorelle’s all the styles for the SandBox theme – a dated list of what’s there, with essentially nothing to say how to use it. There’s lots of detail in the readme, and more on the project home page, but I must confess that this was hard to parse, and seemed to be intended once again for those making custom themes for their own servers rather than custom CSS for WordPress.com.
- I was mighty impressed by the SandBox theme competition, the results of which seemed to somewhat work when I tried them as Custom CSS, but I remained thwarted by simple things like column placement.
In the end I pretty much gave up. Having found that the Vigilance theme had almost all of the customisation I needed (thanks Drew) it was time to return to productivity rather than pain. Maybe I’ll cough up the $14.97/yr so that @clarkjch can have his favourite font, but then maybe not.
Filed under: blogging, could_do_better, grumble | 5 Comments
Tags: CSS, Custom CSS, SandBox, theme, WordPress, WordPress.com
Techlust – Nokia’s ‘Booklet’
I’m having a bout of tech lust today following the news about Nokia’s ‘Booklet’.
There have been lots of disparaging comments from various quarters that it’s just another Windows 7 Netbook. But that’s missing a couple of key points:
- It has HDMI out. I really don’t care if this can’t drive a full HD screen for video. I use a 1920*1200 monitor in the office, and it would be nice to just get rid of the fuzz that goes with an analogue VGA connection. Of course it would be nicer still if the built in ‘HD display’ does 720p, which is just fine for video on the move (as I mentioned before), but so far detailed specs (and pricing) seem thin on the ground.
- It has integrated 3G, so I don’t have to put up with some annoying USB dongle or ExpressCard sticking out of the side of the machine.
Of course I agree with those saying that this isn’t an earth mover for Nokia, and I probably won’t end up buying one. I do however hope that it spurs on other netbook makers to have similar video and 3G capabilities. Perhaps then we can get past the tied down craziness of mobile broadband deals where you have to buy a device (I for one can’t see why the telcos insist on selling you a dongle for £19.99, as I’d be amazed if there was profit in there rather than subsidy, though maybe having some sort of silly exclusive deal is the only way they get reasonable prices for the people that do want these things). I also hope that this doesn’t become one of those things that you can only lease (rather than buy) from a telco (and that finally ends up on Expansys when it’s too old or unpopular, like Nokia’s Internet Tablet thingy [that lacked 3G connectivity] or the Samsung NC10 with HSDPA [just a bit too dated and expensive]).
Lenovo, if you’re listening I would really like to be able to buy the s10-2 with 3G (unlocked of course) from your web site (rather than having to hack it), and while you’re at it you can get on with making an s10-3 with the 720p screen and HDMI.
Filed under: technology | 1 Comment
Tags: 3G, HD, HDMI, netbook, techlust, WWAN
How Gmail could be better
I’ve been using Gmail since the earliest days (Oct 2004), when you had to know somebody with spare invites, and I’ve always liked it. Now that I use it at work as well as for personal mail I find that there are a few niggles that I’m sure the Google guys could sort out:
- Label before sending. Labels are indeed a great way of organising things, but why is it that labels can only be applied to received emails? Surely the label drop down (and the new drag and drop labels) should be available when composing a fresh email.
- Bring back options for plain text line wrap. Wrapping at the 78th char might be all well and good for some ancient standard, though it was only a few months back that this behaviour emerged, and it seems that protests have fallen on deaf ears. Fixed width might be fine for anybody still using a TTY terminal to read their email, but it looks rubbish on a BlackBerry (and presumably any other mobile device) where the screen width is less than 80 characters. I’m not sure what the non standard approach was breaking, but things seem worse now.
- Make the add contact semantics more consistent. Sometimes I can add contacts by clicking on the down arrow next to Reply and selecting add contacts, other times that option isn’t there, and I need to click on the sender name, click on the down arrow beside ‘Video and more’, select ‘Contact details’ and then hit ‘Move to my contacts’. I’ve not been able to figure out why the behaviour is different from one mail to another.
- Allow me to edit contacts in mailing lists (groups). This is perhaps more of a Google Apps issue than Gmail, but it’s closely enough related to deserve some time here. Firstly I’d like to be able to go back and edit a line in a mailing list so that I have full name in plain text beside the email address. I know that if I followed the “Name” <[email protected]> convention at initial input then I could get what I wanted, but sometimes that’s pretty clumsy. It would also be great if there was some way of importing contacts to groups from the contacts app, and from Google spreadsheet (and perhaps others using .csv).
- Make better use of white space. To the right of the reading/composing window I end up with a column of white space where the ads would be in regular Gmail or Apps Std Edition. This isn’t much of an issue on a normal screen, but is a big waste of space on a Netbook. I never even use the ‘New Window’, ‘Print All’, ‘Expand All’ and ‘Forward All’ links that live at the top of this wasted real estate. Surely there could be an option to stick them on a horizontal ribbon?
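Incidentally, the ‘“Name” <address>’ convention mentioned in the mailing lists gripe is just the standard RFC 5322 mailbox format, and Python’s standard library will happily pull such entries apart and rebuild them (the names and addresses below are made-up examples):

```python
from email.utils import formataddr, parseaddr

# parseaddr() splits a '"Name" <user@host>' entry into its two parts...
name, addr = parseaddr('"Jane Doe" <jane@example.com>')
print(name, "/", addr)

# ...and copes with a bare address too, returning an empty display name.
print(parseaddr("bob@example.com"))

# formataddr() rebuilds a well-formed entry from the parts.
print(formataddr(("Jane Doe", "jane@example.com")))
```

Which is exactly the kind of fix-up-after-the-fact plumbing I’d like the groups editor to do for me.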
I’m sure some of you will now comment along the lines of what a doofus I’ve been, and that such and such can be achieved by doing so and so. Bring it on.
Filed under: could_do_better | 4 Comments
Tags: gmail, google
I spent the weekend in the North-East of England visiting family and friends, so I didn’t spend as much time as usual with Google Reader keeping up with events in the IT (and broader) world. As I did a mammoth catch-up session on the way home I came across two things that I think were necessary and inevitable, but took too long to happen – I’m glad that we got there in the end:
1. Finally an open XACML API – I said some time ago (in this blog post, and during Q&A following my presentation at Catalyst Europe) that XACML was like LDIF for entitlements, without an LDAP – an interchange format without an interface. It’s great to see that some of the major players have finally got around to tackling this one. Hopefully this is the first step towards entitlements being factored out of every application (and service) that we use.
2. Google Apps + OpenID = identity hub for SaaS – it would be easy to dismiss this as yet another announcement of an OpenID identity provider (IDP), which the world is already awash with. I think this one is different however for a number of reasons:
- Google Apps is becoming the place that everybody signs into. In the old enterprise world everybody signed into AD so that they could access email (via Outlook and Exchange). In the new SaaS world everybody signs into Google Apps so that they can access email via Gmail. There are some huge directory management challenges lurking here, but that’s a post for another day.
- There seems to be some commitment by others to become relying parties (RPs) for Google Apps OpenIDs – thus dealing with the asymmetry that’s plagued many of these things so far – everybody wants to be an IDP and nobody wants to be an RP.
- The discovery protocol moves the game along by providing a means for an RP to determine whether my domain is able to serve up a Google Apps OpenID. Hopefully this will be generalised (and standardised) later in order to remove the Google dependency.
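To illustrate the XACML point in the first item: what’s been missing is a common application-facing interface where an app hands subject/action/resource attributes to a decision point and gets a verdict back. A toy Python sketch of that shape (the policies are invented, and this is not the actual API, which I haven’t dug into yet):

```python
# Toy policy decision point (PDP) in the XACML spirit: the application asks
# "may this subject do this action on this resource?" and gets a decision,
# instead of embedding entitlement logic itself. Policies are made up.
POLICIES = [
    {"resource": "payroll", "action": "read",
     "allowed_roles": {"hr", "finance"}},
    {"resource": "payroll", "action": "write",
     "allowed_roles": {"finance"}},
]

def authorize(subject: dict, action: str, resource: str) -> str:
    """Return Permit / Deny / NotApplicable, XACML-style."""
    for policy in POLICIES:
        if policy["resource"] == resource and policy["action"] == action:
            if subject.get("role") in policy["allowed_roles"]:
                return "Permit"
            return "Deny"
    return "NotApplicable"  # no policy speaks to this request
```

The win of standardising that call is that the policies can then live (and be managed) in one place, outside every individual application and service.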
Having had two wishes granted here’s my third…
Google (or a third party working with Google) – please give me a means to provide strong(er) authentication. I will pay for the tokens, but I don’t want to have to build a directory, RADIUS server and enterprise-like federation capability just so that I can play. Give me a means to sign in that’s better than just passwords (and some choice over whether that’s an OTP, smartcard, biometric, out of band message or whatever) and a means to let third party RPs know that I signed in strongly, and the SaaS world will take a huge step forward. A repeat of ‘Twittergate’ can be avoided.
Filed under: identity, security | 1 Comment
Tags: directories, google, identity, idm, ldap, ldif, OpenID, saas, security, strong auth, strong authentication, twittergate, xacml