The Administrator setup for Google Apps Migration guide makes things look pretty straightforward, but in practice it’s much, much more complicated. What should be just a couple of check boxes turned out to be a twisty, turny journey through hidden menus littered across distant parts of the administrator’s console.
The move from CohesiveFT to Cohesive Networks meant I needed to move all of my email out of one service and into another. Last time I did this it was easy – suck email down from old account using an IMAP client (Outlook), then push email up to the new account via IMAP. Obviously this was too much of a good thing, and was hurting Google’s poor, tiny and fragile infrastructure.
It all started out fine
I actually had no problem whatsoever pulling down all of my emails from the old account, even though at 3.1GB of data it should have bust my bandwidth limit. The trouble began when I tried to upload to the new account. About 30 items (of about 35,000) made it over, and then it choked.
Google Apps Migration for Microsoft Outlook
Next I tried the official tool. But that didn’t get me very far:
I didn’t have admin access to the new account, but I was assured that the Email Migration API was enabled. If you were an admin and saw this, you’d probably think everything was fine:
Further down the same page there’s a section about the Email Migration API. It doesn’t actually let you do anything – it just links to this (not very helpful) web page:
To actually get headed in the right direction you first have to click on the little ‘Show More’ at the bottom of the Security page:
This brings up the ‘Advanced settings’ option. It will remain a mystery of the universe why Google chose to hide a single extra item behind a ‘Show more’.
At this point you might jump straight to ‘Manage API client access’, but don’t. It’s ‘Manage OAuth domain key’ that you want first:
Now check the box to ‘Enable this consumer key’:
It takes a few minutes for this to take effect. So grab a coffee or check email or something before returning to the ‘Manage API client access’ part:
Now paste in your domain name and the OAuth scope for the email migration API, which is https://www.googleapis.com/auth/email.migration
If you’ve waited long enough after enabling the consumer key for your domain then Authorize should work.
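The migration tool does all of this signing for you, but for the curious: what the domain consumer key actually buys you is the ability to sign API requests with two-legged OAuth 1.0 (no per-user token). Here’s a rough standard-library sketch of that signing step – the function name and example parameters are mine, and the parameter normalisation is simplified from RFC 5849:

```python
import base64
import hashlib
import hmac
import secrets
import time
from urllib.parse import quote


def oauth1_signature(consumer_key, consumer_secret, method, url, params):
    """Build an OAuth 1.0 HMAC-SHA1 signature base string and signature
    for a two-legged (no user token) request, roughly per RFC 5849."""
    enc = lambda s: quote(str(s), safe="-._~")   # RFC 5849 percent-encoding
    oauth = {
        "oauth_consumer_key": consumer_key,
        "oauth_nonce": secrets.token_hex(8),
        "oauth_signature_method": "HMAC-SHA1",
        "oauth_timestamp": str(int(time.time())),
        "oauth_version": "1.0",
    }
    merged = {**params, **oauth}
    # Simplified normalisation: assumes unique, already-decoded keys.
    norm = "&".join(f"{enc(k)}={enc(v)}" for k, v in sorted(merged.items()))
    base = "&".join([method.upper(), enc(url), enc(norm)])
    key = enc(consumer_secret) + "&"             # empty token secret
    digest = hmac.new(key.encode(), base.encode(), hashlib.sha1).digest()
    return base, base64.b64encode(digest).decode()
```

The signature and the oauth_* parameters then go into the request’s Authorization header, which is the bit that fails with a permission error until the consumer key has actually been enabled.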
We’re not done yet
At this stage I managed to upload about 70 emails from the tool before it failed complaining about network issues. Subsequent attempts didn’t get any further.
A visit to Apps > Google Apps > Settings for Gmail > Advanced settings revealed some additional boxes to be checked:
Got there in the end
The migration tool still didn’t work, but I was now able to upload via IMAP (just as I’d planned to do in the first place). It took a whole day, but it got there in the end.
It’s quite possible that I could have made my Outlook IMAP upload work just by doing the last bit (in the Google Apps menu).
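For what it’s worth, if I’d been scripting the IMAP push myself instead of leaning on Outlook, a pacing-and-backoff loop along these lines would probably have got past the choke. This is a sketch using Python’s stdlib imaplib; the delay values, and the assumption that Gmail’s throttling surfaces as an IMAP error, are guesses on my part:

```python
import imaplib
import time


def append_with_backoff(imap, mailbox, raw_messages,
                        max_retries=5, base_delay=1.0, pace=0.5):
    """Append raw RFC 822 messages over IMAP, pacing the uploads and
    backing off exponentially when the server refuses (as Gmail seems
    to when it decides you're pushing too hard).  Returns the number
    of messages successfully appended."""
    done = 0
    for raw in raw_messages:
        delay = base_delay
        for attempt in range(max_retries):
            try:
                imap.append(mailbox, None, None, raw)
                done += 1
                break
            except imaplib.IMAP4.error:
                if attempt == max_retries - 1:
                    raise              # give up on this run entirely
                time.sleep(delay)
                delay *= 2             # exponential backoff
        time.sleep(pace)               # don't fire appends back to back
    return done
```

On a real run you’d pass an authenticated imaplib.IMAP4_SSL connection and the raw messages pulled down from the old account.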
Enabling mail API access, which is what the migration tool seems to want, is much harder than it should be (or is made out to be). It’s also pointless, as the migration tool doesn’t seem to work properly.
I can’t end here without saying
The only time I ever use Outlook (which I despise) is for doing this sort of thing. Well… it ought to be useful for something.
 Since Google’s infrastructure is basically the largest in the world I’m struggling to imagine what sort of abuse led to them clamping down on email uploads, but I’d bet it has something to do with spammers.
 The Google Apps Migration for Microsoft Exchange Administration Guide (pdf) got me pointed in the right direction here.
Filed under: howto | Leave a Comment
Tags: API, email, gapps, google, Google Apps, howto, IMAP, mail, migration, Outlook
I fell into a trap with my new Gen 8 Microservers like this:
- Install 60 day trial license for iLO Advanced
- Update BIOS date/time
- Find that trial license has now expired :(
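I don’t know exactly how iLO records the trial internally, but the trap presumably boils down to something like this (illustrative Python, not HP’s actual logic):

```python
from datetime import datetime, timedelta


def trial_expired(installed_at, clock_now, days=60):
    """True once the (possibly wrong) system clock is past the end of
    the trial window measured from the recorded install time."""
    return clock_now > installed_at + timedelta(days=days)


# The clock ships at some factory default in the past; install the
# trial against that, then correct the date, and the whole 60 day
# window has already "elapsed" from the firmware's point of view.
factory_default = datetime(2013, 1, 1)
corrected_clock = datetime(2014, 6, 1)
```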
There really should be some sort of warning on the license page (and maybe also on the serial/password tag) to say ‘update your clock before applying a trial license’. Here’s how I got things back to factory defaults:
Firstly press F8 at the appropriate part of the boot sequence:
The config tool opens on the option to set defaults:
So just hit enter and then F10 to confirm:
That’s it – the trial license will now work again. If like me you set a more memorable password than the one on the factory tag then that will have to be reconfigured.
Filed under: howto | Leave a Comment
Tags: default, factory, Gen8, HP, iLO, Microserver, reset
I’ve been a fan of HP Microservers since the original NL36 model. When the newer Gen8 servers came to market they were a bit pricey, but the cost has come down, and cash back deals have returned. Faster CPUs, larger official memory capacity, dual NICs and remote console capabilities make these ideal for a home lab.
I’ve been working on our new vns3:turret platform a lot recently. It’s designed to run on enterprise networks rather than in the public cloud, which means that I needed some VMware hosts to play with. My older NL36s and NL40 Microservers were pressed into action, but the need for more capacity pushed me towards the latest model (which isn’t all that new any more, and might well be replaced by a Gen9 offering any day).
A bare bones model with G1610T CPU, 2GB RAM and no disk is presently £149.95 (£179.94 inc VAT) at ServersPlus. HP are offering £35 cashback, so that’s an out of pocket cost of £144.94 – not quite as amazing as when the original Microservers came with £100 cash back, but not far off.
I went for the 16GB ESXi 5.5 Test Bed Bundle, and ServersPlus did an excellent job of getting me the machines quickly and efficiently.
The Gen8 looks a lot prettier than the earlier model, and it’s much easier to get the motherboard out (though that’s only necessary for a CPU upgrade as the RAM is now easily accessible).
Unfortunately the 5.25″ drive bay has been sacrificed for a laptop style optical drive slot, which limits additional storage options. The eSATA port has also disappeared.
The newer drive caddies don’t feel as robust as the older ones, not that it matters once a disk is screwed in.
Probably the best feature of the Gen8 is the inclusion of HP Integrated Lights-Out (iLO), which can be used to provide a remote keyboard/video/mouse (KVM) capability. Out of the box the remote console only works until the OS boots, but an iLO advanced license provides the ability to use KVM after boot. Those licenses are hideously expensive at full sticker price, but there’s a healthy secondary market, and I found one on Amazon for less than $20. A 60 day free trial license can also be obtained.
Since I keep the servers out in my garage (which is presently very cold) I’m glad that I don’t have to go out there.
16GB of ECC RAM is officially supported and very easy to install. It’s a shame it’s not 32GB, but with the standard CPU offerings the balance is probably right.
One of the things that put me off the Gen8 when it launched was the weedy CPU range. The Celeron G1610T and Pentium G2020T on offer are both a bit weak (though notably better than the AMD CPUs in earlier Microservers). Fortunately the CPUs are upgradable. I was able to find a couple of E3 1220L V2 parts on eBay for £129 each, which at 17W power rating are an ideal upgrade option. Others have had success with 45W CPUs such as the E3 1265L V2, and many have even got away with running full power 69W parts such as the E3 1230 V2 (even though the heat sink is only rated at 35W).
Besides the extra speed on offer my main reason for doing a CPU upgrade was to get VT-d, though my attempt to pass through the B120i storage controller to a VM failed.
We’re going to need a bigger switch
The Gen8 has two integrated Broadcom GigE ports (which is great for VMware) plus the iLO has its own port (though it can share one of the main ports if required). Along with buying secondary GigE NICs for the other servers in my garage, this has quickly pushed me from 5 ports to 8 ports to 16 ports.
The supplied USB drive with the HP customised ESXi 5.5 install just worked, and I was immediately able to start installing VMs onto iSCSI and NFS storage without even putting any drives into the bays. I’ve yet to load up these machines, but I’m tempted to migrate over a bunch of VMs from my present Hyper-V setup on a Dell T110 II, as the two Microservers together will potentially have a lower power budget than the single larger server (and provide better tolerance to a single machine hardware failure).
I had a go at installing NAS4Free on ESXi using raw device mappings (RDM) to 4x 2TB HDDs. Everything seemed to work pretty well, and I was able to get a nice big RAID-Z volume. That’s a setup I’d probably only use for warm storage or media files as I’d want SSD for anything else.
I really like the Gen8 Microserver. It’s proper server engineering in a small, cheap and elegant package. The best bit is the iLO capability, but there are plenty of other things to like about it.
 I’m not too concerned about the possibility of newer Microservers, as the Gen8 is very capable, and the Gen9 is unlikely to be offered at such a bargain price.
 In some places the Gen8 is available with the E3 1220L V2, though I’ve never seen it on sale in the UK.
 There are so many CPU choices that there’s a FAQ about them.
Filed under: review, technology | Leave a Comment
Tags: CPU, ESXi, Gen8, HP, iLO, license, Microserver, NAS, RAM, ZFS
The New Stack – Why Docker, Containers and systemd Drive a Wedge Through the Concept of Linux Distributions
The announcement of Rocket by CoreOS was perceived by many to be a direct challenge to Docker, particularly as it came on the eve of DockerCon Europe and threatened to overshadow news coming out at the event. Docker, Inc. CEO Ben Golub was quick to fire back with his ‘initial thoughts on the Rocket announcement’. This piece isn’t about the politics of ecosystems and VC funded startups, which I’ll leave to Colin Humphreys (and note an excellent response from Docker Founder and CTO Solomon Hykes). It also isn’t about managing open source community, which I’ll leave to Matt Asay. Here I want to look at systemd, which lies at the heart of the technical arguments.
Filed under: Docker, The New Stack | Leave a Comment
Tags: CoreOS, Docker, init, Linux, Rocket, systemd
At their re:invent 2014 show Amazon launched AWS Key Management Service (KMS), “a managed service that makes it easy for you to create and control the encryption keys used to encrypt your data, and uses Hardware Security Modules (HSMs) to protect the security of your keys”. At launch the service supported EBS, S3 and Redshift. Additional support for Elastic Transcoder was added in late November.
Filed under: cloud, InfoQ news, security | 2 Comments
Tags: amazon, aws, cloud, encryption, HSM, KMS, security
TL;DR – it runs – now I need to put together some GPIO nodes.
Updated 5 Dec 2014 – In my original post Node-RED was so slow it was unusable. Using Michal Vondráček’s node-ws package fixed that.
Node.js on WRTnode
My first hurdle was to get Node.js running on the WRTnode. Node has previously been run on OpenWRT, but that implementation was old and specifically targeted to a big endian MIPS architecture. Luckily Michal Vondráček published a working WRTnode implementation of Node.js the day before the ThinkMonk hack day.
I next struggled with installation, as ‘npm install --production’ was first running out of memory and then complaining about a lack of filesystem locks. Thankfully Node-RED creator Nick O’Leary was on hand to point out that I could simply copy an installed Node-RED from another system (like a VM on my laptop).
With Node.js installed (mostly onto a USB stick) and Node-RED also copied onto the USB stick I was able to start Node-RED, see it coming up on port 1880 and browse to it.
Having installed Michal’s node-ws and deleted the ws package from the Node-RED node_modules directory everything works:
I now need to get the GPIO mapped so that I can get Node-RED to blink some lights etc.
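The usual route to GPIO on OpenWrt is the kernel’s /sys/class/gpio interface. As a first sketch (the pin number here is a placeholder – I still need to find the WRTnode’s actual mapping), the write sequence for a blink looks like this, expressed as data so it’s easy to eyeball before letting anything loose on the hardware:

```python
def gpio_blink_ops(pin, cycles=3):
    """Return the (path, value) sysfs writes that export a GPIO pin,
    set it as an output, toggle it high/low `cycles` times, and
    unexport it again -- i.e. blink an LED via /sys/class/gpio."""
    base = f"/sys/class/gpio/gpio{pin}"
    ops = [("/sys/class/gpio/export", str(pin)),
           (base + "/direction", "out")]
    for _ in range(cycles):
        ops += [(base + "/value", "1"), (base + "/value", "0")]
    ops.append(("/sys/class/gpio/unexport", str(pin)))
    return ops


def replay(ops):
    # On the device itself you'd perform the writes for real.
    for path, value in ops:
        with open(path, "w") as f:
            f.write(value)
```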
Filed under: WRTnode | 1 Comment
Filed under: InfoQ news, security | Leave a Comment
Tags: MSL, Netflix, open source, PKI, SSL, TLS