For the last few years the fantastic chaps at GreenQloud have been hosting my automated builds for OpenELEC. Sadly (for me) their business is shifting from running a cloud to selling their ‘QStack’ cloud platform to others, so GreenQloud are shutting down their IaaS (so that they’re not competing with their customers).
Filed under: cloud, Raspberry Pi | Leave a Comment
Tags: BigV, Bytemark, GreenQloud, hosting, openelec, PiChimney, Raspberry Pi, Raspi, RPi
Whilst on vacation in Spain I’ve found networks that seem to be like something out of a Cory Doctorow novel – domestic WiFi routers hanging off domestic WiFi routers hanging off domestic WiFi routers. At first I thought it was my Airbnb host being cheap and having a cosy arrangement with a neighbour to provide Internet, but it’s much more systematic than that.
Six routers deep
Here’s a traceroute from my laptop:
Tracing route to google-public-dns-a.google.com [220.127.116.11] over a maximum of 30 hops:

  1      3 ms      1 ms      1 ms  192.168.0.1
  2      6 ms      7 ms      5 ms  . [192.168.2.1]
  3      9 ms      8 ms      7 ms  192.168.1.20
  4    963 ms    940 ms    697 ms  192.168.10.1
  5    368 ms    464 ms    159 ms  homestation.Home [192.168.1.1]
  6    685 ms    728 ms    769 ms  192.168.144.1
  7      *         *         *     Request timed out.
  8   1580 ms    658 ms    588 ms  109.Red-80-58-106.staticIP.rima-tde.net [18.104.22.168]
  9      *         *         *     Request timed out.
 10   3538 ms   2147 ms   1566 ms  GOOGLE-Ae2-GRAMADNO2.red.telefonica-wholesale.net [22.214.171.124]
 11    723 ms    397 ms    877 ms  126.96.36.199
 12   1975 ms   1198 ms   1047 ms  188.8.131.52
 13    865 ms    431 ms    425 ms  google-public-dns-a.google.com [184.108.40.206]

Trace complete.
That’s six different routers on RFC1918 class C private networks (and a lot of latency) before I hit the Internet proper. It’s also a whole ton of NATting and way too much potential flakiness. At its best I’ve seen 2Mb/s, but in reality it seems lucky when packets get through at all, and amazing when Skype works.
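Those private ranges are easy to check mechanically. Here’s a quick sketch using Python’s standard ipaddress module, with the hop list copied from the traceroute above:

```python
import ipaddress

# The three RFC1918 private ranges.
RFC1918 = [
    ipaddress.ip_network("10.0.0.0/8"),
    ipaddress.ip_network("172.16.0.0/12"),
    ipaddress.ip_network("192.168.0.0/16"),
]

def is_private_hop(addr: str) -> bool:
    """Return True if a traceroute hop sits inside an RFC1918 range."""
    ip = ipaddress.ip_address(addr)
    return any(ip in net for net in RFC1918)

# The first six hops from the traceroute above are all private:
hops = ["192.168.0.1", "192.168.2.1", "192.168.1.20",
        "192.168.10.1", "192.168.1.1", "192.168.144.1"]
print(sum(is_private_hop(h) for h in hops))  # -> 6
```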
I’ll try to unpick what’s going on at each router in turn…
The router in the house I’m renting is my old friend the TP-Link TL-WR841N. I have one of these at home running OpenWRT, but the one here has the (awful) stock firmware on it. Luckily the admin password hasn’t been changed, which came in handy when it needed some help reconnecting after a long power cut.
The WAN link of the router is connected to a TP-Link powerline adaptor
At first I thought this was connected through to a neighbour, but that was because I was looking downstairs for an ADSL modem or similar that wasn’t there. When I looked upstairs (in the laundry room) I found its twin attached to a Ubiquiti power over ethernet coupler:
and that was for a WiFi antenna mounted on the roof:
A quick detour to Solyaires Internet
I didn’t set the system up, and I don’t pay the bill, but my research would suggest that it’s connected to Solyaires Internet or some similar system for distributing Internet via 5GHz WiFi connections. So instead of a community effort where people create a mesh network to share, this seems to be a commercial endeavour (and it’s not a mesh – more like a spider’s web).
One amusing thing I’ve noticed is that my in-laws’ apartment building (which is miles away from the house I’m renting) has exactly the same egress IP onto the Internet. Here’s their tracert (oddly, despite fewer layers of router, they get much worse bandwidth):
Tracing route to google-public-dns-a.google.com [220.127.116.11] over a maximum of 30 hops:

  1     <1 ms     <1 ms      1 ms  192.168.100.1
  2      1 ms      1 ms     <1 ms  homestation [192.168.1.1]
  3    107 ms     56 ms     42 ms  192.168.144.1
  4      *         *         *     Request timed out.
  5     62 ms     72 ms    102 ms  109.Red-80-58-106.staticIP.rima-tde.net [18.104.22.168]
  6      *         *         *     Request timed out.
  7    160 ms    142 ms    136 ms  GOOGLE-Ae2-GRAMADNO2.red.telefonica-wholesale.net [22.214.171.124]
  8     67 ms     59 ms     58 ms  126.96.36.199
  9     58 ms     65 ms     60 ms  188.8.131.52
 10     57 ms     59 ms     59 ms  google [184.108.40.206]

Trace complete.
The second router along is a Belkin F7D1301, which judging by the Amazon reviews is a very ordinary router indeed. It has no password set, so the admin interface is wide open, which is obviously a terrible idea from a security perspective. My best guess as to what’s going on here is that the WiFi distribution outfit use some of their customers as Internet mules, acting as a relay from one point to the next. It’s pretty shocking how amateurish the setup is though.
The third router doesn’t have an open admin interface. Looking at its response headers I see a Boa 0.93.15 web server, which could suggest a Zyxel/Edimax piece of kit (which might be a full router, or might be some sort of ‘range extender’). That web server is susceptible to a basic authentication bypass exploit, but I wasn’t feeling nefarious enough to pwn it (this was a look but don’t touch exercise). The basic auth prompt was ‘Graham-New’ so I suspect it’s a wise home user (another relay mule?) rather than something professionally configured.
The fourth router has an airOS admin screen, implying something from Ubiquiti Networks, and likely kit that’s run by an actual service provider rather than sat in somebody’s home.
Neither of the last two routers had web admin screens on ports 80 or 443, so I have less to go on (but at least they’re somewhat secure).
The home.homestation implies that we’re back to consumer ADSL gear, and my best guess is that the WiFi connections are being back-hauled by a bunch of consumer grade ADSL links.
The final 192.168.x.y router might just be the local telco being awful and aggregating many ADSL connections onto one public IP.
Part of a broader broadband problem?
I asked a friend who lives and works in Spain about her experiences, and she said ‘it’s unreliable, it’s slow, and the telephone companies are from the last century’. Flicking through local papers I also see that WiFi delivery is a pretty normal offering, and priced in line with ADSL services at around €24/month.
Whilst here I’ve been lucky enough to see Spain included in Three’s ‘Feel at home’ roaming deal, which means I’ve also been able to check out 3G service. The 3G I’m getting is pretty typical of a mobile service – when it’s good it’s OK (~1Mb/s), when it’s bad it’s not there at all.
In general I’d say that the house WiFi and 3G are about on par in terms of bandwidth and reliability – good enough for keeping up with what’s going on in the world beyond, but not so good that I’d want to depend on it for any kind of business use.
Something must be very wrong with the Internet connectivity market in the Costa Tropical (and perhaps Spain more generally) for this type of arrangement to be tolerable (never mind commonplace). I’ve been visiting Almuñécar for many years now, and back in the early days the ADSL provision seemed to be much the same as back home in the UK. I get the feeling that the FTTC connection I have at home now would be considered enough to serve hundreds of properties. It’s been great to see investment in infrastructure like roads over the past decade, but it’s a shame that the technology infrastructure hasn’t had the same attention.
 I got so sick of large downloads from my home network failing that I’ve lashed up a combination of autossh and bittorrent so that it will download things eventually, and I don’t have to burden the network (and my mouse finger) with redoing the same failed file time and time again.
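In spirit that lash-up is just a supervision loop: keep restarting the transfer until it finally completes, which is roughly what autossh does for ssh tunnels, and what a bittorrent client does for you at the chunk level. A minimal Python sketch of the same idea (the retry limits here are illustrative, not what I actually ran):

```python
import subprocess
import time

def run_until_success(cmd, max_attempts=10, base_delay=1.0):
    """Re-run cmd until it exits 0, sleeping with exponential
    back-off between failures. Returns the attempt number that
    succeeded."""
    delay = base_delay
    for attempt in range(1, max_attempts + 1):
        if subprocess.run(cmd).returncode == 0:
            return attempt
        time.sleep(delay)
        delay = min(delay * 2, 60)  # cap the back-off at a minute
    raise RuntimeError("gave up after %d attempts" % max_attempts)
```

Wrapped around something resumable (rsync with --partial, say, or a torrent client that exits once complete) it will eventually grind a big download across even a very flaky link.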
 I’m guessing that the homestation in the path from my in-laws’ was a different one, as in addition to being ‘homestation’ rather than ‘homestation.Home’ in the traceroute it also serves up an admin GUI over the web.
Filed under: networking | 1 Comment
Tags: adsl, Internet, NAT, router, wifi
Apple and Google have both launched laptops in the past few days that are both amazing and seriously flawed. If only somebody could make a machine that has the best of both worlds.
The leaks were pretty much spot on, so in the end the new MacBook brought few surprises. I really want a small, light, robust laptop with a decent battery life, so it looks almost ideal.
Why the MacBook is wrong for me
8GB max RAM – it’s barely enough to run a busy browser, and certainly doesn’t have the headroom for running a few VMs for test/demo purposes. I’ve had a laptop with 16GB RAM for two years now, and I’m really not willing to downsize.
I could live with the small(ish) SSD, the low powered processor and the lack of ports, but the lack of RAM is the deal breaker for me. I know that the mainboard is smaller than a Raspberry Pi, but RAM doesn’t take that much space.
Can it be fixed?
No – not unless Apple decide to squeeze in the extra memory, and I rate the chances of that happening within the product life-cycle at approximately zero.
The original Pixel was an enigma to me – too high end for the ChromeOS that it runs, but not high end enough to really distinguish itself. The Pixel2 seems different – it’s so high end that it stands out on the merits of the hardware. i7 processor, 16GB RAM, 12″ screen (I really don’t care that it’s a touchscreen) – we’re certainly headed in the right direction here.
Why the Pixel2 is wrong for me
ChromeOS – I may joke that any desktop OS is just a bootloader for Chrome, and that’s almost true, but not true enough. Even though this machine has the memory to run local VMs it doesn’t have the OS to do that. Not having Skype is also a major issue for me.
Puny SSD – cloud services are great when you have connectivity, which rules out a lot of the time when I actually want a small and light laptop – like when I’m on planes, trains etc. Of course even if the OS problem can be solved, 64GB doesn’t leave much space for VM images. When it’s possible to get (reasonably priced) tiny 1TB SSDs it’s such a shame that they’re not an option.
Can it be fixed?
Possibly – I’ve not seen a detailed tear down yet to establish how SSD is done in the Pixel2, and whether the tiny original one can be upgraded to something more suitable. I have greater confidence in the OS side of things, as I’ve seen the Linux community do a good job of porting things onto previous Chromebooks.
Update [19 Mar 2015] – David Radkowski let me know that the SSD is soldered onto the motherboard, so although I’d expect the OS piece to be fixable the lack of storage is pretty much a show stopper. Whilst it’s possible to get huge capacity SD cards these days for add on storage, I wouldn’t want to be running VMs off them.
A quick diversion to USB-C
It’s interesting to note that both of these laptops use USB-C for power and other purposes.
Many Mac fanboys seem to be disgusted at the decision to replace MagSafe with USB-C – just think of all that shiny new stuff that’s going to fall victim to clumsy idiots tripping over power cables. There’s also a loud conspiracy theory that it’s all about selling lots of expensive proprietary dongles.
Google is doing a much better job of talking calmly about USB-C being a new industry standard.
With the ability to carry 100W of power it seems that USB-C will soon be pretty much everywhere, and I like the idea of commodity chargers, video adaptors etc. I also like the idea that I can top up my laptop from the same battery pack I might use for my phone or tablet.
If it was just Apple going down the USB-C road then that would be a problem, but the fact that both of these new laptops from such different stables were released in the same week and headed in the same direction gives me some confidence that USB-C is here to stay. It’s just the opposite of a scam – it’s something with real potential to deliver better value and convenience – just don’t trip over the cable.
Google have done a better job here by having USB-C on both sides to allow charging and monitor attachment at the same time, and it also helps that they have some conventional USB3 ports, but then they did have more volume to play with. I’d note that when I last bought a laptop the MacBook Air lost points on the number of bits and bobs I’d need to carry around to support it – I was thinking about total travel volume and weight, not just the machine.
What would work for me
A MacBook with 1TB SSD and 16GB RAM – just take my money.
An i7 16GB Pixel2 with 1TB SSD and Ubuntu – likewise.
A Canonical badged i7 16GB Pixel2 clone with 1TB SSD – YES PLEASE.
Both of these machines are tantalisingly close to being perfect – just a couple of spec tweaks and I’d be ready to buy. So who’s going to exploit the me-shaped gap they’ve left in the market? Lenovo, HP, Dell and Toshiba might all have been contenders in earlier days, but I feel it’s more likely to be Samsung, Acer or Asus, or perhaps even Xiaomi, that will get the joke this time around.
Or maybe I’m just part of some pinnacle IT clique that’s too small to be worth marketing to, and I’ll be stuck with my 16GB Lenovo X230 (with its 1TB SSD) for the rest of eternity?
Filed under: could_do_better, technology | 1 Comment
Tags: apple, Canonical, Chromebook, ChromeOS, google, laptop, MacBook, Pixel, Pixel2, Ubuntu, USB-C
I’ve modified my automated build system for OpenELEC so that it now creates RPi2 builds in addition to regular old RPi builds – https://resources.pichimney.com/OpenELEC/dev_builds/?C=M;O=D
Filed under: Raspberry Pi | 3 Comments
Tags: openelec, Raspberry Pi, RPi
I’m a bit behind on writing this up, but just as I sometimes call out bad customer service it’s also worth highlighting good experiences.
When I bought my Chromebook back in December 2012 I got it from John Lewis. Partly this was because I had a ton of John Lewis vouchers (that I generally use for online grocery shopping at Waitrose), and partly getting it delivered to my local Waitrose was going to be more convenient than hanging around at home for postal delivery.
Since I got my Lenovo X230 the Chromebook has been mostly languishing under my bed, occasionally being pulled out when I need something with a keyboard. Around November last year I noticed that the battery was flat, which I thought was just down to having not plugged it in properly. Sadly next time I used the Chromebook it was clear that the battery wasn’t charging. I tried various software things I found online to attempt to revive the battery, but nothing worked. It seemed that the battery (or charging circuit) would need to be replaced.
Had I been able to find a new battery easily I’d have probably fixed it myself, even though they’re built in rather than removable, but that wasn’t an option.
2yr warranty to the rescue
Some checking revealed that I had until Christmas 2014 on the two year warranty, so I called the John Lewis help line, and was promised a call back the next day by their technical support people. That didn’t happen, but my second attempt went much better and collection of the broken Chromebook was arranged. On the day promised (and with a text telling me the hour of collection) DPD arrived with a carton and packing material to take it away.
A little over a week later I got an email telling me it was on the way back to me, and once again I was told which hour the parcel would arrive. It came back with a job ticket saying the battery had been replaced, and was working perfectly. I’d done a factory reset and wipe before sending it off, but as most of the state is stored in the cloud it only took moments to get back to fully functional.
Since John Lewis price match, and offer a two year warranty on things like this, I should probably try to use them more often. If it wasn’t for the extra year of warranty my Chromebook would now be stranded in the house, only usable when tethered to its power supply.
 The texts identified themselves as coming from AppleRepair.
Some of the whining I see online would suggest that DPD are as bad as CShityLink were, which isn’t my experience at all. I love the fact that I get an email early in the day with a one hour delivery window (so I know when it’s safe to walk the dog and pop down the road to the local store), and they’ve never let me down.
 At the time of arranging the repair there had been some muttering about batteries not being covered by the warranty, but this wasn’t an issue in the end (and what exactly are customers supposed to do about things that have batteries built in anyway?).
Filed under: did_do_better | 2 Comments
Tags: battery, Chromebook, DPD, guarantee, John Lewis, Waitrose, warranty
ClusterHQ, the team behind Flocker, have announced Powerstrip, an Apache licensed tool to prototype Docker extensions. Powerstrip works as a proxy between the Docker command line interface (CLI) and the Docker daemon allowing the Docker API to be extended. The main advantage of this approach is that by extending the Docker interface, rather than wrapping the Docker CLI, it becomes much easier to compose together Docker add ons such as Flocker or Weave.
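An adapter in this scheme is just a small web service that gets a chance to rewrite the JSON of a Docker API call before it reaches the daemon. The sketch below shows the shape of a pre-hook; the field names (PowerstripProtocolVersion, ClientRequest and so on) follow the Powerstrip README at the time of writing, so treat the details as illustrative rather than a stable API:

```python
import json

def pre_hook(payload: str) -> str:
    """Illustrative Powerstrip-style pre-hook: tag every container
    create request with an extra environment variable, and pass all
    other requests through untouched."""
    envelope = json.loads(payload)
    client_request = envelope["ClientRequest"]
    if "/containers/create" in client_request["Request"]:
        body = json.loads(client_request["Body"])
        env = body.get("Env") or []
        env.append("MANAGED_BY=powerstrip-demo")  # hypothetical tag
        body["Env"] = env
        client_request["Body"] = json.dumps(body)
    return json.dumps({
        "PowerstripProtocolVersion": envelope["PowerstripProtocolVersion"],
        "ModifiedClientRequest": client_request,
    })
```

Because the hook works on the API request rather than the CLI, several such adapters (one for Flocker, one for Weave, and so on) can be chained on the same Docker endpoint.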
Filed under: Docker, InfoQ news | Leave a Comment
Tags: API, ClusterHQ, composition, Docker, Fig, Flocker, Powerstrip, Weave
The Administrator setup for Google Apps Migration guide makes things look pretty straightforward, but it’s much, much more complicated. What should be just a couple of check boxes turned out to be a twisty turny journey through hidden menus littered across distant parts of the administrator’s console.
The move from CohesiveFT to Cohesive Networks meant I needed to move all of my email out of one service and into another. Last time I did this it was easy – suck email down from old account using an IMAP client (Outlook), then push email up to the new account via IMAP. Obviously this was too much of a good thing, and was hurting Google’s poor, tiny and fragile infrastructure.
It all started out fine
I actually had no problem whatsoever pulling down all of my emails from the old account, even though at 3.1GB of data it should have bust my bandwidth limit. The trouble began when I tried to upload to the new account. About 30 items (of about 35,000) made it over, and then it choked.
Google Apps Migration for Microsoft Outlook
Next I tried the official tool. But that didn’t get me very far:
I didn’t have admin access to the new account, but I was assured that the Email Migration API was enabled. If you were an admin and you saw this, you’d probably think everything was fine:
Further down the same page there’s a section about the Email Migration API. It doesn’t actually let you do anything – it just links to this (not very helpful) web page:
To actually get headed in the right direction you first have to click on the little ‘Show More’ at the bottom of the Security page:
This brings up the ‘Advanced settings’ option. It will remain a mystery of the universe why Google chose to hide a single extra item behind a ‘Show more’.
At this point you might jump straight at ‘Manage API client access’ but don’t. It’s ‘Manage OAuth domain key’ that you want first:
Now check the box to ‘Enable this consumer key’:
It takes a few minutes for this to take effect. So grab a coffee or check email or something before returning to the ‘Manage API client access’ part:
Now paste in your domain name and the URL for the email API, which is https://www.googleapis.com/auth/email.migration
If you’ve waited long enough after enabling the consumer key for your domain then Authorize should work.
We’re not done yet
At this stage I managed to upload about 70 emails from the tool before it failed complaining about network issues. Subsequent attempts didn’t get any further.
A visit to Apps > Google Apps > Settings for Gmail > Advanced settings revealed some additional boxes to be checked:
Got there in the end
The migration tool still didn’t work, but I was now able to upload via IMAP (just as I’d planned to do in the first place). It took a whole day, but it got there in the end.
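For anyone attempting the same thing, the IMAP upload boils down to appending each message to the new mailbox and backing off whenever Google pushes back. A rough sketch with Python’s standard imaplib (host names, limits and timings are illustrative; I actually did the transfer from Outlook):

```python
import imaplib
import time

def backoff_schedule(base=5, cap=300, retries=6):
    """Delays (in seconds) to sleep between retries: doubling each
    time from base, capped at five minutes."""
    delays, delay = [], base
    for _ in range(retries):
        delays.append(delay)
        delay = min(delay * 2, cap)
    return delays

def upload_messages(host, user, password, mailbox, messages):
    """Append raw RFC822 messages (as bytes) to an IMAP mailbox,
    retrying with back-off when the server temporarily refuses
    (e.g. throttling)."""
    imap = imaplib.IMAP4_SSL(host)  # e.g. 'imap.gmail.com'
    imap.login(user, password)
    for raw in messages:
        for delay in backoff_schedule():
            status, _ = imap.append(mailbox, None, None, raw)
            if status == "OK":
                break
            time.sleep(delay)
    imap.logout()
```

Run against 35,000 messages at a polite pace this is exactly the sort of thing that takes a whole day, but it does get there without tripping the rate limits quite so often.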
It’s quite possible that I could have made my Outlook IMAP upload work just by doing the last bit (in the Google Apps menu).
Enabling mail API access, which is what the migration tool seems to want, is much harder than it should be (or is made out to be). It’s also pointless, as the migration tool doesn’t seem to work properly.
I can’t end here without saying
The only time I ever use Outlook (which I despise) is for doing this sort of thing. Well… it ought to be useful for something.
Since Google’s infrastructure is basically the largest in the world I’m struggling to imagine what sort of abuse led to them clamping down on email uploads, but I’d bet it has something to do with spammers.
 The Google Apps Migration for Microsoft Exchange Administration Guide (pdf) got me pointed in the right direction here.
Filed under: howto | Leave a Comment
Tags: API, email, gapps, google, Google Apps, howto, IMAP, mail, migration, Outlook
I fell into a trap with my new Gen 8 Microservers like this:
- Install 60 day trial license for iLO Advanced
- Update BIOS date/time
- Find that trial license has now expired :(
There really should be some sort of warning on the license page (and maybe also on the serial/password tag) to say ‘update your clock before applying a trial license’. Here’s how I got things back to factory defaults:
Firstly press F8 at the appropriate part of the boot sequence:
The config tool opens on the option to set defaults:
So just hit enter and then F10 to confirm:
That’s it – the trial license will now work again. If like me you set a more memorable password than the one on the factory tag then that will have to be reconfigured.
Filed under: howto | Leave a Comment
Tags: default, factory, Gen8, HP, iLO, Microserver, reset
I’ve been a fan of HP Microservers since the original NL36 model. When the newer Gen8 servers came to market they were a bit pricey, but the cost has come down, and cash back deals have returned. Faster CPUs, larger official memory capacity, dual NICs and remote console capabilities make these ideal for a home lab.
I’ve been working on our new vns3:turret platform a lot recently. It’s designed to run on enterprise networks rather than in the public cloud, which means that I needed some VMware hosts to play with. My older NL36s and NL40 Microservers were pressed into action, but the need for more capacity pushed me towards the latest model (which isn’t all that new any more, and might well be replaced by a Gen9 offering any day).
A bare bones model with G1610T CPU, 2GB RAM and no disk is presently £149.95 (£179.94 inc. VAT) at ServersPlus. HP are offering £35 cashback, so that’s an out of pocket cost of £144.94 – not quite as amazing as when the original Microservers came with £100 cash back, but not far off.
I went for the 16GB ESXi 5.5 Test Bed Bundle (which has since been updated to ESXi 6.0), and ServersPlus did an excellent job of getting me the machines quickly and efficiently.
The Gen8 looks a lot prettier than the earlier model, and it’s much easier to get the motherboard out (though that’s only necessary for a CPU upgrade as the RAM is now easily accessible).
Unfortunately the 5.25″ drive bay has been sacrificed for a laptop style optical drive slot, which limits additional storage options. The eSATA port has also disappeared.
The newer drive caddies don’t feel as robust as the older ones, not that it matters once a disk is screwed in.
Probably the best feature of the Gen8 is the inclusion of HP Integrated Lights-Out (iLO), which can be used to provide a remote keyboard/video/mouse (KVM) capability. Out of the box the remote console only works until the OS boots, but an iLO advanced license provides the ability to use KVM after boot. Those licenses are hideously expensive at full sticker price, but there’s a healthy secondary market, and I found one on Amazon for less than $20. A 60 day free trial license can also be obtained.
Since I keep the servers out in my garage (which is presently very cold) I’m glad that I don’t have to go out there.
16GB of ECC RAM is officially supported and very easy to install. It’s a shame it’s not 32GB, but with the standard CPU offerings the balance is probably right.
One of the things that put me off the Gen8 when it launched was the weedy CPU range. The Celeron G1610T and Pentium G2020T on offer are both a bit weak (though notably better than the AMD CPUs in earlier Microservers). Fortunately the CPUs are upgradable. I was able to find a couple of E3 1220L V2 parts on eBay for £129 each, which at 17W power rating are an ideal upgrade option. Others have had success with 45W CPUs such as the E3 1265L V2, and many have even got away with running full power 69W parts such as the E3 1230 V2 (even though the heat sink is only rated at 35W).
Besides the extra speed on offer my main reason for doing a CPU upgrade was to get VT-d, though my attempt to pass through the B120i storage controller to a VM failed.
We’re going to need a bigger switch
The Gen8 has two integrated Broadcom GigE ports (which is great for VMware), plus the iLO has its own port (though it can share one of the main ports if required). Along with buying secondary GigE NICs for the other servers in my garage, this has quickly pushed me from 5 ports to 8 ports to 16 ports.
The supplied USB drive with the HP customised ESXi 5.5 install just worked, and I was immediately able to start installing VMs onto iSCSI and NFS storage without even putting any drives into the bays. I’ve yet to load up these machines, but I’m tempted to migrate over a bunch of VMs from my present Hyper-V setup on a Dell T110 II as potentially both Microservers will have a lower power budget than the single larger server (and provide better tolerance to a single machine hardware failure).
I had a go at installing NAS4Free on ESXi using raw device mappings (RDM) to 4x 2TB HDDs. Everything seemed to work pretty well, and I was able to get a nice big RAID-Z volume. That’s a setup I’d probably only use for warm storage or media files as I’d want SSD for anything else.
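As a rule of thumb, single-parity RAID-Z gives up one disk’s worth of space to parity, so four 2TB drives yield roughly 6TB usable (before ZFS metadata and padding overheads). Trivially:

```python
def raidz1_usable_tb(disks: int, size_tb: float) -> float:
    """Approximate usable capacity of a single-parity RAID-Z vdev:
    one disk's worth of space goes to parity (ZFS metadata and
    padding overheads are ignored here)."""
    return (disks - 1) * size_tb

print(raidz1_usable_tb(4, 2.0))  # -> 6.0
```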
I really like the Gen8 Microserver. It’s proper server engineering in a small, cheap and elegant package. The best bit is the iLO capability, but there are plenty of other things to like about it.
 I’m not too concerned about the possibility of newer Microservers, as the Gen8 is very capable, and the Gen9 is unlikely to be offered at such a bargain price.
 In some places the Gen8 is available with the E3 1220L V2, though I’ve never seen it on sale in the UK.
 There are so many CPU choices that there’s a FAQ about them.
Filed under: review, technology | 2 Comments
Tags: CPU, ESXi, Gen8, HP, iLO, license, Microserver, NAS, RAM, ZFS