I’ve known the chaps at Paremus since shortly after they set up shop, and I’ve watched the evolution of ServiceFabric since its earliest days. Since it has all the makings of a killer PaaS offering I thought I’d sharpen up my practical cloud skills by getting it running on EC2.
The first challenge is that ServiceFabric uses multicast to communicate between nodes in the fabric, and that isn’t something EC2 supports (or any other IaaS that I’m aware of). It’s not a showstopper though, as CohesiveFT’s VPN-Cubed supports multicast, so I set that up. It also has the side benefit of letting me create a network topology that spans cloud and non-cloud machines, so I can throw in some boxes from my home network to try out hybrid configurations. I kept things simple and set up a single manager using the VPN-Cubed for EC2 Free Edition, which went pretty much as described in the step-by-step guide.
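Before letting ServiceFabric loose it’s worth sanity checking that multicast actually flows across the overlay. A quick test with socat does the job. This is only a sketch: the multicast group, port and interface name are arbitrary, and I’m assuming the overlay interface is called tap0 – substitute whatever yours is really called. On one node, join the group and print whatever arrives:

socat UDP4-RECVFROM:9956,ip-add-membership=239.192.0.1:tap0,fork -

Then on another node, fire a test datagram at the same group and port:

echo "hello fabric" | socat - UDP4-DATAGRAM:239.192.0.1:9956

If nothing shows up, adding ip-multicast-if=<overlay address> to the sending side can help push the datagram out of the right interface.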
The next stage was to create some workload, so I used Elastic Server to create an AMI that had Ubuntu 9.04 as the base, along with the VPN-Cubed client, Sun Java 6 and Paremus’s Nimble. Nimble wasn’t there already, but it was a few minutes’ work to upload the package and enrol it into the build system, which then created and provisioned an EC2 instance for me automatically.
Once the Nimble-enabled AMI was up and running I got it connected into the VPN overlay, and started up Nimble with:
./posh -sc "repos -l springdm;add org.springframework.osgi.samples.simplewebapp@active"
I recommend giving this a go yourself if you have 5 minutes to spare – it’s a wonderful demo of dynamic provisioning.
Once Nimble had done its stuff it was then just a question of browsing to http://nimble-machine-vpn-addr:8080/simple-web-app and I could see that the plumbing was working.
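If you’d rather check from another node on the overlay than fire up a browser, a quick poke with curl (using the same placeholder address as above) tells you whether the app is answering:

curl -I http://nimble-machine-vpn-addr:8080/simple-web-app/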
Snags along the way:
- Firewalls – maybe stating the obvious, but it really is crucial to define which endpoints are allowed to talk to each other, and EC2 security groups didn’t quite behave as I expected.
- OpenVPN throwing its toys out of the pram over an SSL verification error because the date was wrong on one of my home VMs. This stuff is much easier to diagnose when running OpenVPN straight from the command line (openvpn vpncubed.conf) rather than via its daemon – a slightly more verbose variant is below.
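For what it’s worth, a more verbose foreground run makes certificate and clock problems jump straight out in the log – these are just the standard OpenVPN flags applied to the VPN-Cubed config mentioned above:

sudo openvpn --config vpncubed.conf --verb 4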
So, that’s it for day one: a working, dynamically provisioned web application running within a VPN overlay network.
For day two I’m moving on to full fat ServiceFabric, and will join battle properly with multicast and VPN binding issues. Wish me luck.
Following post – Paremus ServiceFabric on EC2 days 2/3
Filed under: cloud | 4 Comments
Tags: aws, cohesiveft, ec2, elastic server, nimble, osgi, paas, paremus, sca, servicefabric, vpn, vpn cubed
I spent most of last week at the IGT 2009 ‘World Summit of Cloud Computing‘. There were some great speakers there, but the session that sticks in my mind was Alistair Croll’s piece at the end where he talked about the future of cloud. One of the most thought provoking statements that he made was something like ‘this won’t end like it has started, there will probably be an inversion’. This got me thinking about the oscillations that we see all around us, and particularly the tendency to move between centralised and distributed models for all kinds of things. This is something that we’ve seen a few times already in IT (mainframe – mini – PC – client server – n-tier …), and similar things happen with IT organisations (centralised infrastructure -> business aligned infrastructure and back again).
Nick Carr in his book The Big Switch frequently uses an analogy of the electricity industry for how IT is developing. In the early days of electricity, generation was distributed (to the point of use), and there was a need for substantial organic expertise in electricity. Over time electricity became a utility, where generation became somebody else’s problem; and yet after a century or so we might be on the verge of the next oscillation for electricity. There are tremendous losses between the original source of energy (whether it’s coal, natural gas, nuclear or whatever) and the point of consumption. It’s not atypical for only 25% of the primary energy to actually make it out of the wall socket. This is why huge users of energy, like aluminium plants and cloud provider data centres, try to get very close to sources of cheap electricity. It’s also why there’s a small but growing trend towards local generation (particularly with emerging ‘green’ sources such as solar and wind).
One of the IT megatrends that receives constant attention is Moore’s law (and its close cousin Kryder’s law), and the consequent doubling in capacity of various things every 18-24 months. One of the issues that’s discussed far less frequently is that different pieces of the architecture started at different places – so the gaps in absolute performance become more severe over time. On a log scale chart the lines keep on rising, but they never cross. Network is always the ‘thin straw’, which is why it makes sense to manage large data sets locally where storage is cheap (where I have to agree with Cory Doctorow and what he said here – http://www.guardian.co.uk/technology/2009/sep/02/cory-doctorow-cloud-computing).
By far the most vigorous debate in Cloud computing is around the consequences of ceding control to the provider of centralised services (aka the ‘public cloud’ providers) like Amazon, Google and Microsoft. This is why people talk about ‘private clouds’ (regardless of how nonsensical that term is). What this debate often seems to miss is that what we’ve come to call ‘cloud’ is really all about management, and has little to do with location. The naming is screwed up, because ‘cloud’ comes from what we drew on white boards to represent stuff on the internet, but the ideas and principles are sound. For now the easiest way to get great management (and hence quicker and cheaper provisioning of stuff that you need/want) is to go to the people that sell this stuff over the Internet, but the inversion is coming, the oscillation is changing phase. Things like Ubuntu’s Enterprise Cloud (UEC) can deliver all of that management goodness, and let you run it on your own machines. Stuff like CohesiveFT’s Elastic Server lets you build your ‘machines’ to work with anybody’s IaaS layer and management tools, then their ‘cubed’ stuff abstracts away the network, config and other services so that you’re isolated from annoying detail.
Even then, a couple of clicks during the installation of an OS, or packaging stuff up from a bill of materials, is beyond the desire or capabilities of the mass market. People want to just buy stuff that works. They want appliances. They want their virtual appliances to just happen on a device of their choosing, and this is where we see convergence of ‘cloud’ and what’s happening in enterprise IT… the oscillations will move into phase (at least for a while).
For some time complex software has been sold to enterprise IT in the shape of appliances. This was done to stop the IT people from doing dumb stuff with that software that would add months to the roll-out time and maximise the chances of stuff breaking and leading to support calls. One of the problems of enterprise IT as it stands today is a tendency to smash things down to their constituent parts, and then rebuild them in a way that even their mother wouldn’t love. I’ve heard it said recently that ‘cloud is for everybody except the Fortune 500’, and ‘everybody in the Fortune 500 is married to Oracle’, but as Larry makes his stuff into appliances like Exadata 2 the worlds are aligning. A data warehousing appliance is just as much about canned management as an EC2 instance.
I said recently that I didn’t want to own any servers, and wondered how large my company would have to grow before the economics tipped away from pure-play SaaS and towards on-site stuff. What I realise now is that the question of ownership is ancillary. The real point is that I don’t want to manage any servers… ever, and that’s fine, as when the cloud turns itself inside out, and I find that my data has returned home, I’ll still be benefiting from the canned management expertise of people that can do this stuff better and cheaper than me.
Filed under: cloud | Leave a Comment
Tags: cloud
Waving or drowning?
Earlier this year I gave a talk on cloud security at the e-Crime congress. One of the other speakers was John Suffolk, who, when he wasn’t struggling with some very badly formatted PowerPoint [1], asked the audience ‘who in this room thinks they are keeping up with technology?’. I think I ruined his script a little by sticking my hand up, as apparently the normal response from an entire audience at a technology conference is to passively accept that they’re not keeping up.
This raises a fascinating question for me – why is it that people in the technology industry feel that they are constantly slipping behind? I could perhaps blame it on traditional British reserve, but there were plenty of US and other visitors in the audience. It might also be ‘head above the parapet’ syndrome, where others feel that they are competent in keeping up, but don’t feel like putting that to the test in public (and for what it’s worth John didn’t give me a hard time over it, though he also didn’t catch up afterwards to get the story behind the action).
What really scares me is the prospect that we have hundreds of people in senior positions within the IT industry who basically accept that they are to some degree clueless about what’s going on. People who’ve given up on keeping up, somehow overwhelmed by the consequences of Moore’s law and all that it brings down upon us.
Has it always been like this, or are things getting worse over time? If we’d asked a conference full of microcomputer enthusiasts in the early 80s the same question would their answer have been the same (there was after all a bewildering array of new machines, languages, applications and accessories emerging on the scene at the time)?
I put my hand up because I consider that it’s my job to keep up with this stuff. I may not know everything about anything, but I try hard to have a broad (and necessarily superficial) knowledge of as much as possible. I know that plenty of others will say that they’re too busy with their ‘day job’ to spend the necessary time, but what does it actually take? I probably spend an hour or so a day in my RSS aggregator (and Twitter) catching up on what people who’ve passed some kind of arbitrary interest threshold have to say (and reading the stories that they have to tell, or following the links they’ve exposed for me). I get through a lot more stuff in that hour than I used to in the days before RSS and Twitter – just as Moore brings us an exponentially growing problem Classen brings us logarithmic utility to deal with it.
The broader point here is that as a civilisation we must be keeping up with this stuff. Unless there’s some secret warehouse in the Bay area that’s shipping in alien technology then we’re dealing with a closed loop. People create all of this new stuff, and so the knowledge about what it does and how it can be useful is simply distributed; and the web (2.0) and all of the collaboration tools that sit on it not only give us the means to create new stuff (and introduce new complexity) but they also give us the means to harness and understand it.
My prediction – individuals will continue to feel left behind, whilst society as a whole continues to plough forward.
[1] almost certainly not his fault. I bet that he (or more likely an assistant or somebody else that does slide mongering for him) has to contend with some ancient version of MS Office running on an obsolete build of Windows on top of crumbling neutered hardware. If my impressions of public sector IT (even amongst its highest operatives), formed by my own somewhat dated experience, are wrong then please correct me with a comment below.
Filed under: technology | Leave a Comment
Tags: Classen's law, keeping up, Moore's law, technology
It’s not just about the money
Hopefully we’re seeing the beginning of the end rather than the end of the beginning as media companies align themselves with incumbent politics to suppress the new freedoms of the Internet in order to maintain their outdated business models. Locally we have the Digital Economy bill, full of dreadful stuff that has emerged in the wake of Mandelson ditching the bothersome and inconvenient consultation process for the Digital Britain paper. I notice echoes of Andy Burnham’s ridiculous desire for a ‘child safe internet‘[1] also re-emerging. How about we concentrate our efforts first on having an internet without crime? On the broader stage we’re getting ACTA, where the media companies are trying to enact an international fiat rather than fight it out in their home political environments.
It’s easy to write this off as greedy and corrupt politicians being bought off by the big companies (which after all happens all the time), but there’s more to it than that. The media companies hold the reins to the stars, and the politicians need those stars to endorse their campaigns and make them seem popular – you like this music/actor/whatever, and they’re friends with us, so you like us…! I suspect that whilst the media distribution companies are throwing every carrot/cent they can at lobbying, campaign contributions etc. there’s also a stick in their hands: be nice to us or we won’t help you look good to a celebrity-enchanted public.
First they came for the ‘terrorists’, and I did not speak out—because I was not a ‘terrorist’;
Then they came for the ‘child pornographers’, and I did not speak out—because I was not a ‘child pornographer’;
Then they came for the ‘pirates’, and I did not speak out—because I did not consider myself a ‘pirate’;
Then they came for me—and there was no one left to speak out for me.
With apologies to Martin Niemöller
[1] I once had the misfortune to hear Burnham the buffoon speak about this in person, and unfortunately it was the wrong sort of venue to argue back (and I fear that it would have been like trying to argue with a dining room table). It’s the most atrocious form of sound bite politics, and about as realistic as child safe motorways (the Internet is after all the ‘information super highway’, though that’s also a label I’ve considered to be nonsense since the first time I heard it).
Filed under: politics | Leave a Comment
Tags: acta, digital Britain, digital economy, three strikes
The network isn’t ubiquitous
and probably never will be.
My brother has been moving house this week, which has caused him to spend a certain amount of time off net, and to get very angry with BT (though it all got sorted out in the end [1]). Sadly my suggestion to get a Vodafone 3G dongle doesn’t seem to have worked out as an interim measure – coverage isn’t always what it looks like on the maps (I have similar issues with my Three data service, which is great on my commute and in the office, but patchy to problematic if I ever need it at home).
It seems that salvation has come in the form of an open WiFi network that he can mooch off, though I wonder how long that state of affairs will remain in place. There seem to be three actors at work here:
- Consumers (demand) would like to have ubiquitous access everywhere. They’re prepared to pay something to get this, but would prefer for things to be free at the point of access. Consumers desperately want the fallacies of distributed computing to be true. A lot of the time consumers sit within their little bubbles of connectivity, which makes it seem like everything is OK, but then a trip abroad [2] or a house move comes along and bursts the bubble.
- Telcos (supply) have to deal with the fallacies, and making those fallacies appear to be true costs some serious money. The grossest distortions appear to arise around roaming charges, where the commonly accepted model seems to be similar to the hotel telephone – let’s screw the travelling user, they can always expense it to their company. I suspect that this isn’t a well calculated mechanism to maximise revenue, and that there’s something like the Laffer curve that applies to data charges; but the telcos I know aren’t like Formula 1 teams and investment banks, and don’t employ armies of quants to figure this stuff out.
- The media distribution industry (an externality to this market) would very much like the network and its fallacies to pack up, go home and let them return to a comfy oligopoly over physical distribution of content. Through their bought and paid for politicians they hope to get international law and private militias to interfere in the running of the network. If an entire town can be booted off the net for the ‘crime’ of one person then extending those bubbles of connectivity is going to get harder rather than easier. Proposed ‘three strikes’ laws would mean that only an idiot would leave a WiFi connection open, as any intended generosity could quickly turn into disconnection for them and their entire household.
Of course this is an overly simplistic view of the picture. There are complications:
- Telcos like BT would like to offer services over unlicensed spectrum, which is why they incentivise users of their newer WiFi routers to provide virtual access points for OpenZone subscribers and FON users.
- There is a 4th actor in the picture – the bad guy (who is another externality to the market). The bad guy doesn’t care if you meant to share your network; if it’s open or weakly protected (e.g. with WEP [3]) then it’s his for the taking. The bad guy won’t confine himself to a bit of email collection and some light surfing; if he wants to download a few gigs of movies [4] over bittorrent then he will. This is troublesome in a world of capped tariffs, and a huge problem in a world of ‘three strikes’.
Returning to the central point… We’re increasingly living in a world where the network provides many of the things we take for granted, and whilst we might not feel actual pain when it’s not there, a certain degree of discomfort may arise. People talk about their ‘comfort zones’ in all sorts of contexts, but increasingly one of the many boundaries of significance will be the connectivity envelope. The laws of physics and economics will always assert their limitations, but we’re on the verge of legislation making things much worse than they need to be. Of course if you’re reading this then you’re almost certainly in a democracy where you can use your vote to push things back into shape.
[1] thanks @jobsworth
[2] it was interesting to see @doctorow asking the question I asked before about why there aren’t any shops in airport arrivals selling PAYG 3G data cards/SIMs, though so far as I can tell there isn’t a viable 3G PAYG offering at all now in the US. Of course the Acela should have WiFi just like the NXEC main line.
[3] most WEP protected access points will succumb to tools like aircrack-ng in less than 5 minutes, and all the bits you need can be bundled together in a convenient USB bootable distro like BackTrack.
[4] and let’s not even get started on what those movies might be, and how much trouble you’re in if they’re found on your hard disk rather than the bad guy’s (as once he 0wned your WLAN, popping your C:\ drive was a doddle).
Filed under: technology | Leave a Comment
Tags: 3G, BT, economics, network, network fallacies, telco, three strikes, wifi
Update (3 May 2010) – I’m getting increasingly sick of how often this machine fails to record things. Worse still, I’ve even seen it say that it’s started to record something, but when I go to watch it there’s nothing in the list. Reliability is awful compared to when I first started using it. I’m also annoyed by its propensity to record things on C4+1 rather than C4 when doing series record.
and now back to the original script…
As an early adopter of the Pace Twin I’ve been using a Freeview PVR for as long as they’ve existed, and as I posted before, mine was pimped a little (with a 60GB drive) to improve on the stock spec. Sadly the recent Freeview channel shuffle seems to have brought it to the end of its useful life. Perhaps it’s just the poor signal quality from my communal aerial feed (though my other Freeview TV and boxes seem to cope), or maybe it’s just not up to snuff any more? Regardless of the cause, viewing and recording of many channels had become too much of a hit and miss affair. Time for a change, and some new gear.
I didn’t do my usual extensive research, just a quick dive into some end user reviews on a few online shopping sites. The Sagem seemed to come out less badly than some of the others, and was in stock for £129.99 at my local Argos (along with a £10 voucher offer) so I ventured out into the rain and bought one.
The big shock is that it has no UHF output. Perhaps this shouldn’t be a surprise, as analogue TV is being progressively decommissioned, but one of the nice things about the Twin was being able to watch it on any TV in the house. I briefly considered buying a UHF modulator from Maplin to fix this omission, but in the end plumped for a jury-rigged arrangement with my ancient VCR (Scart in -> UHF out).
There are however some good things about the new box:
- It can record on both tuners at once
- and you can watch another recording at the same time
- Series linking seems to work, even for those Saturday evening shows that jump around the schedule every week
- The 320GB (~160hrs) drive is much bigger than the 60GB (~30hrs) that I had in the Twin, so I can leave the kids stuff on there
- and it allows me to put stuff into sub-folders so there’s not too much clutter in the programme list
- It has an ‘exportation’ feature that allows programmes to be copied onto FAT32 USB devices
- and the exported files seem to be bog standard MPEG2 .ts files, no nasty encryption or silly file formats (see the remux sketch after this list)
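Since the exports are plain MPEG2 transport streams, they should remux happily for playback elsewhere without any re-encoding. A hedged sketch with ffmpeg (filenames are obviously made up, and I haven’t tried every permutation of the Sagem’s output):

ffmpeg -i exported-recording.ts -vcodec copy -acodec copy exported-recording.mpg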
There are also a few areas where I can see a need for improvement:
- It can play mp3s off USB media (FAT32 only, though it seems to work, with a whinge, on regular old FAT), which is cool but:
- You need to switch mode from media list to player if you want it to actually play successive tracks
- It is supposed to be able to copy stuff from USB to the local HDD for more convenient access, but this is a totally unreliable process. I have given up on trying to copy my music library (16GB) onto it. It seems to struggle with a single album’s worth of files, never mind a few hundred.
- The favourites lists are not all that easy to edit, and the split between TV and Radio can also be confusing (it seemed to me initially that I had to choose TV or Radio, but in fact you can have both in a favourites list).
- The UI/remote control combination can seem a little sluggish at times.
- The skip forward function (achieved by pressing >|) skips 5mins. UK ad breaks tend to be 4mins, so the function becomes pretty useless, as you either miss a minute of the programme or have to rewind. The 1min skip on the Twin was fine, and the 30s skip on my Panasonic box is OK (though 8 button presses per ad break is a bore). C’mon guys, either make this configurable, or make the default fit for purpose.
It would of course be killer if:
- I could use it to replace my streaming media player (a Kiss DP-600), which would involve:
- Having an Ethernet port and UPNP support
- Better still if you could do ‘exportation’ over the LAN
- The ability to play DivX (HD)
- If the media player stuff actually worked then a directory sync tool would be handy for managing large libraries.
Overall though I’m finding it hard to complain given the price, and the fact that it hasn’t failed in any meaningful way (yet – fingers crossed). Even if it does end up missing a recording, that doesn’t seem like too big a deal these days. I wonder how much longer it will be before the whole concept of a PVR becomes redundant, and we just have a local cache of the media library in the sky (for those times when it just doesn’t make sense to drag those bits across the internet)?
Filed under: technology, Uncategorized | 9 Comments
Tags: dtr67320t, freeview, pace twin, PVR, sagem
I’m usually an aggressive early adopter of new gadgets, but I’ve not been able to bring myself to buy an e-book reader yet. This is mostly due to the DRM deployed by Amazon, Sony etc. and the consequences that has for how I would use the books and what would happen to them in the future.
As I commute almost every day, and travel frequently when I’m not commuting [1], I like the idea of being able to cart a bunch of books around with me in a consistent, lightweight package. I do however tend to be a read-once sort of guy. Books accumulate in my life. I’m barely getting started on the books that were bought for me last Xmas, and the mince pies are already in the shops for this year – yikes. I don’t anticipate a time when I’ll have the luxury to go back and read books again. When I’m done with books I tend to be a hoarder, but I also like to lend them out to friends. This is the one difference for me between books and videos – both I tend to use once then move on, but books get passed on, videos don’t; that said, the ideal distribution model for both would probably be a giant library in the sky, with some local cache of stuff queued up for those times that I’m offline (which I tend to be when I’m consuming books or videos) [2].
Details on what B&N are(n’t) doing with DRM aren’t clear yet. One might hope that they’re driving an open format, and trusting their users to do the right thing, but I doubt this will be the case; I expect that they’ll simply have an even more elaborate DRM scheme that somehow supports lending between devices. Will this now mean that I can recall a book from a friend without having to see them? Can I check first whether they’ve read the book yet or not?
Anyway, this seems on the face of it to be less evil than the Kindle, and also looks sexier. I particularly like the look of the touch screen/keypad thing.
[1] I’m told by regular e-book users that Kindle usage (particularly if it’s in a leather cover) seems to be OK on the bits of flights when you’re not allowed to use other ‘approved electrical devices’. I wonder if the airlines will formally accept this or clamp down as e-books become more popular. One of the main reasons I normally have a chunk of dead tree in my bag is to have something to keep me occupied when the netbook and iPod are verboten.
[2] There’s probably some underlying marketing point lurking here. If I was paying to rent a book rather than buy it outright then I’d likely be more open to living with DRM and other content management malarkey; though obviously the price point would have to reflect that (I’m thinking £2-4 to rent versus £5-15 to buy).
Filed under: technology | Leave a Comment
Tags: DRM, e-book, e-reader, kindle, nook
uncommon sense
“No risk it, no biscuit” said my friend John as we sat down for a curry the other night. He’s a trader, and tends to think of almost everything in terms of risk.
Later on in the conversation we got on to the topic of ‘common sense’, and how it seems to be disappearing from life as we know it. "Common sense is just risk" came John’s usual refrain, but this time I have to totally agree with him. What we call common sense is all about risk: choosing to take risk at an emotional level rather than having somebody with a risk assessment form show up and fill it out whilst wearing a hard hat and hi-vis jacket, before finally saying "I wouldn’t if I were you, something bad might just happen". Nassim Nicholas Taleb writes a lot about emotional mechanisms and their relationship to risk in his excellent ‘Fooled by Randomness’ (I’ve not yet got around to the more popular ‘The Black Swan’). He points out that without emotions, without common sense, we become incapable of making decisions. That’s bad at an individual level, but it’s worse at a society level. When we substitute common sense, emotion and competence with process, there might be high hopes that the processes will be efficient and infallible, but in reality that’s almost never the case. Our overall productivity is now choking on broken process in almost every area of daily life. The most egregious examples seem to be associated with ‘terrorism’, where almost any amount of unproductive inconvenience is acceptable if it is supposed to save ‘just one life’ from a massively low probability event[1]. No common sense, no risk management. But it’s not just about terrorism and associated government fearmongering; it seems that the tabloid press has convinced the general public that no amount of risk is tolerable in any area where the state can possibly intervene with some misconceived legislation (backed up by enforcement that slews randomly between incompetent and heavy handed).
What can be done to fix this?
I fear that there are no easy answers. This one’s a combination of individual and social responsibility, individual and social risk appetites, education, unwinding complex legislation, honesty from politicians[2], cynicism of the popular press… the list goes on.
[1] If anybody knows what the government is doing about the threat to the public from lightning strikes, and where I should stand in line for my lightning security theatre then please let me know?
[2] OK, I realise here that I’m asking for the impossible. Something intimately intertwined with this whole problem is that it seems to be not just acceptable but mandatory for politicians to be liars. Goodness in politics seems to be measured in units of liarbility.
Filed under: grumble | 2 Comments
Tags: common sense, politics, risk
A little while ago I put out a plea for stronger authentication for Google Apps, and it seems that my wish has been granted with Tricipher launching their myOneLogin for Google Apps[1]. I had tried myOneLogin before, and frankly wasn’t too impressed. This time things are different though: the issues I’d seen before with Chrome compatibility and general fiddliness seem to have been fixed, but best of all is the use of a proper strong (soft) token, in the shape of VeriSign VIP Access for Mobile.

I first came across VIP when I saw the news that Verisign and PayPal had teamed up to do a deal on tokens. I wanted one, even if it was going to cost a few quid, but they were initially only available in the US, and I heard nothing more about them. Did the marketing guys lose interest, or did the phishing problem go away, or did something else come along? It turns out that eBay/PayPal will sell you a VIP hard token (a device with a button on it to generate one time passkeys [OTPs]) for $5/£3, but why bother when you can use a free mobile token on your BlackBerry/iPhone/whatever? The soft tokens can be used in a variety of other places, which raises the question of why other sites aren’t jumping on the bandwagon, and why nobody seems to be pushing this. Part of the answer might be the funding model; I’m not sure how Verisign are getting paid for this stuff, but I’m sure they’re not running their service as a charity for the web.
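For the curious, the basic idea behind these soft tokens is nothing exotic: a shared secret plus the current time gets hashed into a short-lived code. VeriSign’s VIP provisioning is its own affair, but the open OATH/TOTP scheme that many soft tokens use can be played with from the command line via oathtool – the base32 secret here is a made-up example, not a real credential:

oathtool --totp -b JBSWY3DPEHPK3PXP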
[1] Premier Edition only, as it needs SAML support
Filed under: identity, security | 2 Comments
Tags: authentication, google, identity, saas, security, strong authentication, tricipher, verisign, vip
Mini review – 3 MiFi
I had great hopes for MiFi. I was going to be like Pig-Pen from Peanuts, just with fewer flies and more connectivity. I would walk the earth with my own little bubble of Internet goodness. No more messing about with dongles for the netbook. My iPod Touch would become like an iPhone (just without voice). Life would be great.
personal cloud of connectivity?
Plasticky
It was clear from the first pictures that I saw that the device itself would be a bit plasticky, and it is. It doesn’t look like it will stand up to many knocks and bumps from daily use, but that shouldn’t matter; the whole point is that it just sits in my bag doing its thing – 3G on one side, WiFi on the other. I understand that these things can’t be carved out of solid titanium billets, as that makes antenna design even more challenging than it is already, but some sort of carry case would help stop it from getting too scruffy too soon.

Battery
Unfortunately it can’t be left in the bag all day. The battery only lasts for 5h (and that’s the claimed life – I’ve not seriously tried to find out what the figure is in real-world use). This means that it has to be brought out and charged – frequently. At least there’s a little USB-MiniUSB cable for the purpose, leaving it looking like a slightly overweight dongle hanging off my netbook.
Network nightmare
Charging by plugging into the netbook is fine when the MiFi is switched off, but things get interesting when it’s on. The device presents itself as a network card rather than a modem, and on my machine it gave itself quite a high priority (above my WiFi adaptor). This means that if the 3G modem is on then you get a slow connection, and if it’s off then you get a whole lot of problems. Things can be fixed by a quick visit to the Network Connections control panel; just don’t forget to press the Alt key if you’re a Vista or Win7 user, or you’ll never even see the Advanced menu where the adaptor priority options live.
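If you’d rather not repeat the GUI dance, a hedged alternative is to pin the interface metrics from an elevated command prompt so that the WiFi adaptor always wins. The adaptor names below are examples (use whatever yours are actually called), and strictly speaking this tweaks routing preference rather than the binding order itself, but the effect is what matters:

netsh interface ipv4 set interface interface="Wireless Network Connection" metric=10
netsh interface ipv4 set interface interface="Local Area Connection 3" metric=50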
Getting on
The Huawei E5830 device has three buttons on it, and unfortunately you need to use all three to make it go. Firstly the device has to be powered on (press and hold for 2s), then the WiFi needs to be switched on (press and hold for 2s), then the 3G needs to be switched on (press and hold for 2s). Steps 2 and 3 can be reversed if you choose. This all seems a little pointless to me. The sole purpose of the device is to bridge 3G to WiFi. Like the competing Novatel 2352 this should all be done with a single power on. I’ve heard a counter argument that this arrangement stops roamers from running up huge bills by having the thing accidentally turn on, connect, and serve up a windows update or similar to their laptop. If that’s a real concern then leave it at home, or take the battery (or SIM) out.
Staying on
My first train journey with the MiFi wasn’t much fun. Not only did it seem less good at getting connections than my usual Novatel XU870, but it was equally pathetic at reconnecting after going through a tunnel or whatever. Once again, the only point of this device is to connect to 3G and retransmit packets over WiFi. I don’t want to have to press a button on the side of it every time the 3G connection is lost. Total user experience FAIL.
Will I send it back?
Probably not, though I’ve been sorely tempted, and I still have a week to choose. It has already proven useful as a means to provide emergency connectivity to me and my colleagues, such as last week when Gmail was having a bad hair day and I needed IMAP/SMTP connectivity (which I can’t get on the office network). Unfortunately it’s clear to me already that it’s an occasional use device rather than an all the time device. That occasional use would be helped out by better international options, like having some decent roaming tariffs for data, or being unlocked and able to accept a local PAYG data SIM (found just between the hens’ teeth and rocking horse droppings at the shop by the arrivals gate in the airport). Let’s see how it handles the trip to Manchester later in the week.
Update – after unlocking and upgrading the firmware I’ve posted a follow up review here.
Filed under: technology | 9 Comments
Tags: 3G, hotspot, mifi, mobile, review, wifi