November 2020

29Nov20

There’s been lots happening this month, with the puppy growing, some IT failures, moving on from some previously loved products and services, more updates on streaming and VR, and a few things I forgot in October…

Puppy

He’s done a lot of growing from the start of the month:

To the end of the month:

and seems to have less of a look of ‘hey, I borrowed this dog costume, but it’s a few sizes too big’.

He’s also allowed outside now, so walkies have begun.

More photos on my daily #pupdate.

IT failures

November hasn’t been a great month for my home kit behaving itself.

NAS drive failure

I woke up one morning to the dog whining, and when I got downstairs I could hear the NAS whining. A quick look at the admin console revealed a failed HGST 4TB drive :(

I’d been seeing warnings for a little while about the volume filling up, so it was time for an overdue upgrade and a 4-pack of 8TB Seagate IronWolf Pros.

It took a couple of days for the RAID5 array to resilver, which isn’t too shabby.

I then had a little adventure upgrading the filesystem to 64bit in order to make all the extra space usable.

Interrupted Power Supplies

‘The WiFi’s not working’ turned out to be a failure of the coat cupboard UPS that runs the NAS, router and a few other odds and ends. I’ve written before about how ropey power can be in my neighbourhood, which means I have a bunch of UPSs, which also means that I’m too frequently dealing with failures.

It seems that battery failures take out a UPS even when it has a good mains supply, and I’ve never seen any warning of battery issues :/ Luckily this time around I was able to get a replacement battery in half an hour from my local Screwfix, and everything was back as it should be.

I keep hoping that Eric Raymond’s UPSide project for an open source hardware UPS comes to fruition, but maybe I should just buy a Tesla Powerwall? Meanwhile I replaced the batteries in a Sweex UPS that died on me a few years ago, and that’s back working, and I proactively replaced the battery on my other PowerWalker 650, as that was a similar vintage to the one that failed.

Leaving Three

I’ve used Three for my personal mobile phone for over 6 years, and really valued their ‘Go Roam’ service especially when I’ve been in the US a lot. Unfortunately they’ve developed a nasty habit of treating existing customers like fools, and hiking up tariffs.

Last year I called them, and eventually got to a satisfactory deal (though still not as good as what they were offering new SIM-only customers). It took something like an hour, and I vowed not to repeat the process. This time I just requested a porting authorisation code (PAC) for my phone number and switched to PlusNet (a mobile virtual network operator [MVNO] running on EE’s network). I may return to Three when travel resumes, but for now it seems they didn’t want to keep me as a customer, in what looks like an exercise in reversion marketing. It’s not even like I’ve been using the service much – I’d expect the most costly thing I’ve done in the last year or so was that call centre contact for the last renewal :/

At the start of the year I had 6 Three contracts – 3 for my devices and 3 for family members. We’re now down to 2, and that’s not because of any antipathy or service issues – it’s been purely about avoiding exploitative price hikes.

So long Pinner, hello Pushpin

I’ve been using Pinboard.in for bookmarking since the first demise of del.icio.us; so when I got an iPhone again almost five years ago I bought Pinner as the app to save and recall stuff.

Pinner served me well over the years, though it wasn’t perfect, and didn’t seem capable of syncing all 13,000+ bookmarks I’ve collected. But it hasn’t been updated in over 3 years, and it seems iOS 14 has introduced breaking changes. So time to move on.

I’m still getting used to Pushpin, but it seems to get all the basics right.

We’re all streamers now

I’m talking next month at GitHub Universe and they sent me some kit to make sure that I can be seen and heard properly:

The Logitech Brio webcam is pretty amazing in terms of configurability and the sharpness of the resulting video; and the JLab microphone seems to work better with my conferencing setup than the Yanmai mic I was previously using (following Terence Eden’s review).

Here’s what things look like on my side of the camera on my (messy) desk:

Since taking the photo I’ve got a Neewer mic boom, and I have a Stream Deck control panel on my Christmas list.

VR

Prescription lenses

I’ve never loved the experience of using glasses with my Oculus Quest, so I finally ordered some lenses from VR Optician. They took a couple of weeks to arrive, and were dead easy to fit.

It’s a revelation not having to fiddle to get the headset around my glasses, or worry about eyelash smudges or fogging; though I do find I need to be very careful about headset placement to get a sharp view. It’s also weird to take off the headset and find myself part blind to the outside world.

I’d held off initially due to worrying about shared use of the headset, and constantly having to swap out lenses. But I’m now pretty sure I’m the only person using it, and having the lenses is a definite plus.

Beating Beat Saber

The quest continues… after finally getting Full Combo at Expert on all OST 1 & 2 levels I’ve been chipping away at OST 3 and the Extras, and now only a couple remain on each. Meanwhile I’ve got quite into the Camellia levels, though they’re much tougher, so I’m doing those on Hard rather than Expert.

Pi Stuff

I’ve written previously about using the High Quality camera as a USB OTG webcam, and now somebody’s written some Ansible scripts to automate the process :)

Things I forgot from October

Another Now

I’ve enjoyed much of Yanis Varoufakis’s earlier work, so his latest, Another Now, was an obvious one to buy, and I got the audiobook for my daily (pre new pup) walks.

Using a connection between ‘many worlds’ to present alternatives to neoliberal capitalism is an engaging device, but it didn’t leave me wholly convinced that he has a realistic way of tackling the status quo. Nevertheless, an enjoyable read/listen.

Starblinken

The Starblinken recreates a little visual prop from the Millennium Falcon, and was on sale for May the 4th. But when it arrived the weather was too nice, and my spare time got expended outside. Once the cold dark days arrived it was time for some assembly.

It was a fun little project to put together, and I like how Dave managed to get a scene from the film into the PCB.


Background

Earlier in the year I wrote about upgrading from my old DS411j to a new DS420j, and how simple the upgrade process was. But I knew there would be trouble ahead, as Synology doesn’t provide a way to upgrade volumes created on its 32bit systems past 16TB.

Last weekend one of my HGST 4TB drives failed, and I was already seeing warnings about running out of space. It was time for some newer and bigger spindles. So I ordered a 4-pack of 8TB Seagate IronWolf Pros.

Each new drive took about 9 hours to resilver, so it was a couple of days before I had my RAID5 array fully working on the new drives. With the last one in place the storage pool and volume were automatically expanded to the maximum size of 16TB, but that was leaving over 5TB unusable. Not an immediate problem, but one that would eventually bite.
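
Rebuild progress is easy to watch from an SSH session, as DSM’s arrays are regular Linux mdraid underneath:

cat /proc/mdstat   # shows rebuild progress and estimated time remaining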

Workaround

!WARNING!

The stuff that follows isn’t supported by Synology, and if it goes wrong could destroy your data. Don’t consider doing this unless you are 100% confident in your backups.

!WARNING!

I’ll detail below the longer process that I gave up on. Thankfully I found a workaround detailed in a blog post – Expand Synology volume beyond 16TB; but since those details apply to different Synology models than my DS420j, here’s what I did…

I already had Entware opkg installed, using these instructions. What follows I ran as root in an SSH session, but you can always prefix commands with sudo. It might also be advisable to run from a screen session in case your SSH link gets cut.
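
For example (screen isn’t part of stock DSM, but Entware can supply it):

opkg install screen   # if not already present
screen -S resize      # a detachable session, so a dropped connection doesn’t kill the job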

The crucial tool is resize2fs, but DSM’s version is too old, so opkg is needed to get a newer version:

opkg install resize2fs

I didn’t use a pinned version, and the one I got was 1.45.6.
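
It’s worth a quick check that the shell picks up the Entware build rather than DSM’s older one (a sketch, assuming Entware’s usual home under /opt):

export PATH=/opt/bin:/opt/sbin:$PATH
resize2fs 2>&1 | head -1   # banner should report 1.45.x rather than DSM’s older build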

The RAID volume needs to be unmounted to be resized, but the Entware tools live on that volume (with /opt bind mounted from /volume1/@entware-ng/opt), so first detach the bind mount and copy the tools into the underlying /opt directory:

umount /opt
cp -R /volume1/@entware-ng/opt/* /opt/

Then shut down services and unmount the RAID volume:

syno_poweroff_task -d

/volume1 will now be unmounted, and it’s possible to check the filesystem, which is mandatory before resizing.

e2fsck -f /dev/md2

That took about an hour on my filesystem, and I was glad for the suggestion to answer ‘a’ to the first y/n question as there were hundreds of them.

Next up is the command to convert the filesystem from 32bit to 64bit:

resize2fs -b /dev/md2

This took almost 4 hours for my system, with the CPU pegged at 100% pretty much throughout. It’s possible that adding the -p flag would have helped a little by showing progress.
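
Before moving on it’s worth confirming that the conversion took; tune2fs comes from the same e2fsprogs toolkit:

tune2fs -l /dev/md2 | grep -i 64bit   # the features line should now include 64bit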

At this point enough is done for DSM to pick up the rest. So:

reboot

and log back in. Then go to Storage Manager > Storage Pool > Action > Resize. That then kicks off a process that validates the extra array space, which for my system ran for a few hours.

And then I had a 21.82TB volume :)

The longer way

If I’d been less impatient I’d have copied all my data over to the old spindles (plus a spare to replace the failed drive) on my old NAS, then created a new volume on the new NAS, then copied all the data back. That would have taken days, possibly weeks, and would have carried a bunch of its own risks.

Doh!

I stupidly thought I could save a bunch of time by putting the old spindles into my old NAS and just restoring the RAID5 set. But it doesn’t work like that. Drives 1, 2 and 3 had been pulled from the array at different times, so when they were brought back together they were inconsistent.


October 2020

31Oct20

Puppy

It wasn’t long after we lost Dougal that we decided the house just felt wrong without a dog. So… new puppy – Maximus:

More photos on my daily #pupdate.

We’re all streamers now

Returning to this theme, I’ve added a few more links to my streaming tag on Pinboard.

Lighting

On the recommendation of Ken Corless, I bought a Neewer LED lamp that I fitted above my secondary monitor. The version with a mains adapter was ridiculously more expensive than the one without, so I got the battery-only version. That was fine until the battery ran out. So I grabbed a spare 12v power supply and… it didn’t fit, because the lamp has a weirdly sized power socket. Luckily I was able to get an adapter for the adapter – 5.5mm x 2.1mm Female Socket to 2.5mm Male Plug.

Pi Stuff

35mm lenses on HQ camera

@biglesp tweeted about using Canon EF lenses with the HQ camera, so I ordered an adapter for FD lenses, as I have quite a stash of them from a kit I bought some time ago to satisfy my childhood desire for a Canon A-1.

Here’s a selfie I took using my 50mm f1.4 S.S.C. lens:

Lots of bokeh potential with wide-aperture lenses, and some pretty significant cropping going from a 35mm frame to a 6.3mm (1/2.3″) sensor, turning my 50mm ‘standard’ lens into something like a 280mm equivalent telephoto (so if I try my 300mm lens it will become a 1660mm telescope).
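
For the curious, the multiplier is roughly the ratio of the sensor diagonals (figures approximate):

crop factor ≈ 43.3mm ÷ 7.9mm ≈ 5.5
50mm × 5.5 ≈ 280mm equivalent
300mm × 5.5 ≈ 1660mm equivalent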

M2 SSD case

I’ve written in the past about using my Pi4 with an SSD via a USB3 to SATA adapter, but another option is this case, which takes an M.2 drive. Meanwhile the Pi4-based Compute Module has arrived, which breaks out PCIe, opening up the option of attaching SSDs through that interface for even better performance.

ESXi for the Pi

Is now available as a ‘fling’ from VMware. I’ve not had the chance to try it yet, and when I do I’ll probably try putting Ubuntu 20.10 on as a guest alongside Raspbian.


September 2020

30Sep20

As the family has gone back to school, September seems to have whizzed by, with fewer interesting things to report on as life settles back into a weekly routine. I’ve mentioned the punctuations of comedy nights and online whisky tastings before, so there’s little new to report…

Frontline Live

My amazing friend Katz Kiely set up Frontline Live in the early days of the pandemic to help frontline health workers get access to desperately needed personal protective equipment (PPE).

As things have settled down she thought that the site should be backed by a registered charity, which I’m pleased to say I’m now a trustee for (after it got set up in what seemed like record-breaking time – about a day and a half from submitting the docs to getting approval).

Covatar

My colleague Caitlin McDonald got an awesome new profile pic from Covatar. So I got one for myself. What do you think?

Tech stuff

Dapr

I got to know Mark Chmarny during his time at Google, but he’s now at Microsoft working on Azure stuff, and he suggested I take a look at Dapr. It’s described as ‘an event-driven, portable runtime for building microservices on cloud and edge’, and there are a bunch of good samples and examples for it on GitHub, which provide an easy on-ramp for people to get started.
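
The model is approachable: each service gets a sidecar, and you talk to the sidecar over plain HTTP. A minimal sketch (assuming the sidecar’s default port of 3500; the app and method names are hypothetical):

# ask the local Dapr sidecar to invoke the 'neworder' method on the 'orders' app
curl -X POST http://localhost:3500/v1.0/invoke/orders/method/neworder \
  -H "Content-Type: application/json" \
  -d '{"orderId": "42"}'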

NATS

Long-time readers of this blog will know that I’ve been a fan of asynchronous messaging and event-driven architectures since the early 2000s as the Advanced Message Queuing Protocol (AMQP) took shape. I first met Derek Collison as he did due diligence on VMware’s acquisition of RabbitMQ, and he’s somebody who’s been into messaging since pretty much the beginning, as Tibco created Rendezvous and then Enterprise Message Service (EMS, née E4JMS).

NATS came along as Derek was creating Apcera, and since the sale of that platform he’s focussed all of his attention on the messaging platform.

I remain convinced that most of the complexity in enterprise architectures comes from people using HTTP in places where it’s just not suitable (then having to layer on all types of guff to make up for that). I’d previously hoped that AMQP would provide salvation, and at one stage the ubiquity of RabbitMQ underlying infrastructure software fooled me that it had happened. But the complexity seems to be piling on above the platforms again, so I’m hoping that NATS can help people get back to simpler and cleaner architectures.
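
The core pub/sub model takes about a minute to see in action. A minimal sketch using the nats CLI (assuming a local nats-server on its defaults):

nats sub greetings &               # subscribe in the background
nats pub greetings "hello world"   # publish to the same subject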

Raspberry Pi stuff

I wrote a while ago about booting my Pi4 from a USB3 attached SSD. This post from Jeff Geerling provides a deeper look at UASP, TRIM and performance.
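
If you want to check where your own adapter lands, a couple of quick probes (a sketch, assuming a Debian-ish userland on the Pi):

lsusb -t                  # 'Driver=uas' means UASP; 'usb-storage' is the slower fallback
sudo hdparm -t /dev/sda   # rough sequential read benchmark (hdparm may need installing)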


August 2020

29Aug20

ICYMI

It’s been something of a busy month for blogging, with posts on Cloud Migration, Java, the UK exam fiasco, Hugo, and RIP Dougal.

BBQ

The new Kamado Joe that I mentioned last month got its first run, which looked good, but ended up being a little dry:

Hawksmoor @ Home

The Hawksmoor at Home BBQ Box came out much better :)

Shelf

I made a shelf to go beside the BBQ for extra prep area using some left over kitchen worktop:

It looks lovely, but despite the yacht varnish it already seems to be bowing and delaminating, so I’d be surprised if it lasts through a UK winter.

Cote de boeuf

Some friends recommended Handcross Butchers, which is local(ish), does online ordering and delivers to my area. I got a Cote de boeuf, which turned out amazingly – quite possibly the best steak I’ve ever had.

Could – Should – Would

One of my favourite finds on the web this month was ‘An Officer’s Guide to Breaking the Rules‘:

The CO hated being told that I couldn’t do something when he knew full well that I could but that the rules didn’t allow it.  So I agreed that we would discuss such things in the following terms: Could – Should – Would.  In the first instance I would explain what could be done, what was physically possible given the resources at our disposal, without any of the constraints of the rules.  I would then explain what should be done to be compliant with the rules, regulations, policy, etc.  Finally, we would have a conversation about what we would do, under what circumstances and when the operational imperative would justify setting aside certain rules in order to achieve the intent.  It worked.  I never told him something was impossible when it wasn’t, but we also (frequently) made defensible risk-based decisions to break the rules when the circumstances justified it.

WSL2 and VSCode

For many years I’ve been using a combination of Git Bash and Atom for development, but times change, and better tools emerge, so I’ve pretty much entirely switched to using Windows Subsystem for Linux (WSL) version 2 as my command line home and VSCode as my IDE. It’s very pleasant and productive.
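
The glue between the two is pleasantly thin (a sketch; the first command runs in PowerShell, the second inside a WSL shell with the Remote - WSL extension installed):

wsl --set-default-version 2   # PowerShell: new distros default to WSL2
code .                        # inside WSL: open the current directory in VSCode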

GitHub Actions

I’ve also been trying out GitHub Actions for Continuous Integration (CI) and Continuous Delivery (CD), and they’re just how I expected code pipelines to work (rather than the complexity horror show of Jenkins and its menagerie of plugins). Here’s my example repo hugo-learn-action-example.
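
To give a flavour of why they click, here’s a minimal workflow sketch (file path and steps illustrative, not the actual contents of that repo):

# .github/workflows/ci.yml
name: CI
on: [push]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Run tests
        run: make test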

New role

There have been a lot of changes at DXC since Mike Salvino took over as the new CEO almost a year ago, and as part of that wave of change I’ve taken a new role as CTO for Modern Apps & Cloud Native. I describe the new job as:

leading customer adoption of platforms, continuous delivery, and modern languages and frameworks

My post on ‘What are Modern Applications?‘ from earlier in the year provides a good background on the space I’m moving into.

1990

Following a pointer from Charles Stross I bought myself the DVD box set of 1990, dubbed 1984+6. It’s a never repeated 1970s TV series starring Edward Woodward looking at a dystopian future UK under the boot of a Home Office ‘Public Control Department’. Right now, it seems far too close to a prequel of what 2021 post Brexit England is going to look like (except in the show it seems one of the few counterbalances to government power is EU membership and the European Convention on Human Rights (ECHR) – protections that are being torn away).

One thing that does seem consistent between fictional 1990 and real life 2020 is that the politicians and their civil servants seem greatly concerned with what the press has to say about them; or as I put it on Twitter, “Government of the press, by the press, for the press”

Beating Beat Saber

Beat Saber remains my favourite Oculus Quest game, and I use it for ‘fitness gaming’ workouts on the days that I’m not hitting the cross trainer. For the last few months I’ve been slogging through the levels of OST1 & OST2 to get ‘full combo’ at Expert level, and this month I finally cracked the final holdouts with ‘Unlimited Power’ and ‘Balearic Pumping’. Onwards with OST3 – ‘Give A Little Love’ and ‘Reason for Living’ have succumbed already; I expect ‘Burning Sands’ will be the last to fall, as it’s very long and complex.

I’ve found (particularly at Expert level) that getting full combos takes a different technique to getting high scores. I’ve been using a gentler, less energetic approach to making sure I make all the cuts, rather than high movement slashes through every block. I’ll be going back over some levels to see if I can push my Bs to As to Ss (and maybe even SSs), but ultimately I know I’m not top class at the game from my rankings (and also that I totally can’t keep up with Expert Plus levels).

For what it’s worth, I share my brother’s ire at Oculus deprecating their standalone IDs in favour of Facebook logins.


RIP Dougal

28Aug20

Born on 22 May 2005, Dougal came home with us on 16 July 2005:

He loved hanging out with other dogs, and had to run hard to keep up since he was little:

He loved exploring the local woods, and would join me for my daily trips to the shop.

Just last weekend some other dog walkers were asking his age and couldn’t believe he was 15 (many people still thought he was a pup).

I know they’re all good dogs. He was a very good dog, and we’re going to miss him a lot.


TL;DR

Replace layouts/index.html with layouts/_default/single.html in your chosen theme. The home page for the site will be created from content/_index.md and additional pages can be created at content/pagename/index.md (NB no _ before index that time).
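
Or as a sketch of where the files end up (the ‘about’ page is a hypothetical example):

layouts/_default/single.html   # replaces layouts/index.html in the theme
content/_index.md              # becomes the home page
content/about/index.md         # becomes /about/ (no leading underscore this time)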

Background

$daughter0 has offered to build web sites for some friends and family members, which means she’s learning various aspects of web development, and I’m helping out with pointers, getting dev environments up and running, source control, hosting etc.

She found a theme that she liked for Jekyll (the default static site generator for GitHub Pages) in the shape of Hyde, but we quickly realised that was a (too typical for Ruby these days) journey into broken and deprecated dependency hell.

Luckily there’s a port of Hyde to Hugo, and having used Hugo for some work stuff I knew it was simple to use and actively maintained (thanks Steve Francia).

It didn’t take us too long to get a basic single page site customised with the right look and feel and published to GitHub pages.

And then she wanted another page, which took us into the oxymoronic world of multiple single pages.

The answer is right there in the docs

If you read carefully enough, the page on content organisation explains how things work. It’s just that like everything with super powerful and flexible frameworks, it shows how to do many things in many ways, which can make it hard for the uninitiated to see the wood for the trees.

When I finally figured it out the conversation went something like this:

$d0 – is it easy or hard?

me – easy.

$d0 – if it’s easy then am I stupid for not figuring it out?

me – no, because I’ve been right there with you scratching my head about it too.

For the record we spent far too much time picking through ‘Need multiple static pages – home, about, contact, prices etc.‘ and ‘Hugo: adding more pages to single-page themes‘. Sometimes Google and Stack Overflow have all the answers, and sometimes you’re digging through stuff that was overtaken by events dozens of versions back.

Docs are hard

Hugo’s docs are good, but there’s a lot there to understand, and for any given site it’s likely that you’ll need only a small subset of the functionality. So it’s really hard for the uninitiated to determine what they need to learn, and what’s superfluous.

There are also some great 3rd party intros out there, like Chuxin Huang’s ‘Noobs guide to Hugo‘; but if you follow somebody else’s footsteps you end up at the same destination. And nobody really wants a website that looks just like another. The whole point is customisation, but minimising the effort and blast radius of that customisation.

My own example

I’ve dropped a minimal Hugo/Hyde site demoing multiple single pages onto GitHub at cpswan/hyde-msp-example


TL;DR

Standardised tests like A Levels will inevitably have winners and losers, and in normal circumstances those people will usually know why they won or lost. For the losers it’s likely that they experienced some bad luck; but the key point here is that experience – they know why things didn’t work out. When that experience of bad luck gets replaced by an algorithm handing out bad-luck outcomes, it’s easy to see how a sense of injustice quickly builds, as the connection between experience and outcome has been severed.

Background

This is a post about the unfolding high school ‘exam’ fiasco in the UK, where 18-year-olds are getting the A Level results that are their gateway to university, and 16-year-olds are about to get their GCSEs.

In any process like this there are inevitably winners and losers…

In an ordinary year

Winners:

  • The kid who went through just the right past papers, so the exam hits the stuff they prepared for
  • The kid who slacked off all the way through, but pulled it out of the bag in a final push for the exams

Losers:

  • The kid who was upset because their dog died that morning
  • The kid who got ill
  • The kid who worked hard all year, but somehow fluffed it on the day

2020 was going to be different

With exams cancelled because of coronavirus there were always going to be different winners and losers, because supposedly kids were going to be assessed based on evidence they’d been able to provide through to March:

Winners:

  • Kids that worked hard all the way through
  • Kids who dodged that bad exam day

Losers:

  • Kids who slacked off all the way through, but planned on pulling it out of the bag in a final push for the exams

But not like that

It seems that the government decided it would be far too much trouble to look at evidence for individuals (even as a means of refining their model)[1], which leads to different winners and losers:

Winners:

  • Kids who go to schools that have previously done well (especially private schools[2])

Losers:

  • Outstanding kids going to historically poorly performing schools.

Even ignoring any work done over the past 2 years, the latter point could have been recalibrated for by paying some attention to past GCSE results. The kid who got 9 A*s at a school that never previously had anybody getting any A*s is exactly the sort of outlier that data scientists live for[3].

The bad luck lottery

As noted above, in an ordinary year there will be a bunch of kids who are unlucky. Something happens on the vital day/week/month that makes them unable to meet their own expectations.

But to keep 2020 the same as 2017-2019[4] means handing out bad luck by algorithm, and that leads to an experiential gap that comes with a massive sense of discrimination.

If your dog died, or you got ill, or you fluffed it on the day then you know about that – it happened to you. There’s a correlation between lived experience and outcome.

But there were no exams, which means there can’t be any bad luck on exam day, which in turn means that a giant dollop of bad luck has been handed out to kids where there’s no correlation between lived experience and outcome. They just feel shafted, because they have been.

Matt Day perfectly sums up how ridiculous it would be for us to hand out bad luck in other aspects of life to keep the statistics in line:

Joining the fight

I just chipped in to the Good Law Project ‘Justice for A Level Students‘ campaign, and there’s also a campaign for Grading Algorithm: Judicial Review from Foxglove.

Conclusion

Any standardised testing process will have winners and losers, and we’re used to situations where bad luck leads to bad outcomes. I don’t think anybody’s comfortable with those same bad outcomes being handed out by algorithm, especially when it seems that the algorithm has been designed to preserve historic inequality.

Notes

[1] The BBC’s ‘Why did the A-level algorithm say no?‘ provides a good overview, and I’ve collected some more on the combo of ‘education’ and ‘algorithm’ Pinboard tags.
[2] I’m not writing this to ding private schools – they’re an ongoing part of structural inequality that I’ve willingly participated in myself, and for my own kids.
[3] Though it seems that Ofqual scared off the good statisticians and data scientists with NDAs that seem like the prequel to a cover up.
[4] The best description I’ve read of what happened is that grades were handed out to ‘ghosts of past students‘ rather than paying any attention to the 2020 individuals.


Which Java?

14Aug20

Or should that be:

which java

TL;DR

Practices for installing and maintaining Java have evolved over time, which can lead to tension between teams who are set in a particular way, and other teams who see that as backward.

The present state of the art is not to have Java on hosts at all, and to containerise apps that use Java, but for when it is needed on hosts a Software Development Kit (SDK) manager such as SDKMAN provides a sensible way to take care of things.

Background

A colleague reached out to me asking whether Java should be installed from the OS package manager or standalone, which raised a number of concerns:

  1. Whose JDK – Oracle, OpenJDK, IBM, (Zing, Zulu, Corretto[1])?
  2. Which major version?
  3. If minor versions and patches aren’t updated by the OS manager then who/what is doing that?
  4. Side by side installation for multiple app servers?
  5. Are any system tools dependent on Java?
  6. Does it matter if java is on the PATH?
  7. How is CLASSPATH set?
  8. How is the app/app server launched, and what does the script do to PATH and CLASSPATH?

I noted that:

Pretty much all of those questions stop mattering if using containers.

Evolution

I’d observe that practice has developed over time. In the early days of Enterprise Java I was pretty close to the action, but since then I’ve been a more distant observer:

  • 2000 Install Sun Java from tarball into /opt/java
  • 2005 Install IBM Java or jRockit from rpm/deb into wherever they went
  • 2010 Install Oracle Java from rpm/deb (because the distros couldn’t package it)
  • 2015 Install OpenJDK from yum/apt
  • 2020 Put Java stuff into containers

As I was feeling a little out of touch I asked Twitter, though the results were far from conclusive:

Early voting had containerisation well ahead, but things later swung back to more established approaches.

What was new to me, which was the whole point of asking, was people pointing out that they used SDKMAN to solve this problem, and I think that’s probably the best answer for when Java is needed on hosts.
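
Day to day that looks something like this (the version identifier is illustrative – sdk list shows what’s current):

sdk list java                  # see available distributions and versions
sdk install java 11.0.8-open   # install a specific OpenJDK build
sdk use java 11.0.8-open       # switch the current shell to it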

Safety and security

I recall the glorious days when the Java Virtual Machine (JVM) was considered a safe sandbox. Those days are long behind us, and the JVM looks more like a giant pile of Common Vulnerabilities and Exposures (CVEs) that need their own sandbox.

Then there’s the issue of dependency management (aka DLL Hell), where once clear and orderly installations can become a mess of intertwined dependencies that need the finest IT archaeologists to figure out what’s broken and how things ever worked.

For both of those reasons, I generally try to run Java apps in containers (along with anything using Node.js or Ruby).
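
In practice that means something like this minimal Dockerfile sketch (base image and jar name illustrative):

FROM openjdk:11-jre-slim
COPY target/app.jar /app.jar
ENTRYPOINT ["java", "-jar", "/app.jar"]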

Conclusion

Containerisation is the way to go for Java apps wherever possible, but for when Java does need to be installed on hosts SDKMAN seems to provide the best of both worlds between standalone Java installation and OS package managers.

Note

[1] Thanks to Mike Moate for reminding me about Amazon’s Corretto, which is their distribution of OpenJDK that comes with “long-term support that will include performance enhancements and security fixes”.


I mentioned the 5 Rs in The Application Portfolio Manager a couple of years ago, and I’m returning to them as they’ve been coming up fairly frequently, and also they’ve become the source of some confusion.

5 Rs

The original[1] 5 from Gartner’s Five ways to migrate applications to the cloud (penned by my awesome friend and former colleague Richard Watson):

  1. Rehost
  2. Refactor
  3. Revise
  4. Rebuild
  5. Replace

6 Rs

Then AWS decided it was 6 Strategies for Migrating Applications to the Cloud:

  1. Rehosting
  2. Replatforming (~= revise but may also have pieces of rebuild)
  3. Repurchasing (~= replace)
  4. Refactoring/rearchitecting (kind of brings refactor and rebuild together)
  5. Retire
  6. Retain (do nothing option, should be periodically ‘revisit’)

7 Rs

And now they’ve added #7:

  7. Relocate (for moving VMware VMs from on-prem to VMC)

Don’t mix cross-ply and radial on the same axle

Things start skidding out of control[2] when people mix the Gartner 5 Rs with AWS’s 6/7 Rs. There’s some parity, but also significant differences that make it possible to come up with a list of Rs that’s got lots of overlap:

  1. Rehost
  2. Refactor
  3. Revise
  4. Replatform
  5. Rearchitect

Or that’s missing some key treatments:

  1. Replace
  2. Rebuild
  3. Repurchasing
  4. Retire
  5. Retain

The second set there is more contrived than the first, but this is a classic case where a consistent taxonomy is helpful. For that reason I’ve been encouraging people to standardise on the AWS definitions.

Update 4 Aug 2020

Richard Watson commented on LinkedIn that the original framework he presented with Chris Haddad only had 4 Rs:

I guess Recode got split into Refactor and Revise.

Update 8 Sep 2020

Somebody sent me an HP deck from 2011 with their Rs (or Re-s):

  1. Re-learn
  2. Re-factor
  3. Re-host
  4. Re-architect
  5. Re-interface
  6. Replace
  7. Retire

So I guess that’s yet another source of confusion for my former HP(E) colleagues.

Re-learn was our process of helping clients understand what applications they actually had, the infra they ran on, the resources they consumed, the technologies, the quality, etc. We would use that result in Apps rationalization to figure out the best future for each app. We used tooling to help with that as well.

Re-interface was all about interconnectivity between systems. It was enabling applications to share data to open up and consolidate business processes.

The HP Rs trace their roots back to Electronic Data Systems (EDS) when cloud was nascent, and weren’t at all focused on cloud migration, but rather the broader topic of application portfolio management (including migration off mainframes).

Update 30 Sep 2020

Watching VMworld 2020 I see that VMware has the following:

  1. Retain
  2. Rehost/Migrate
  3. Replatform
  4. Build and Refactor
  5. Retire

So that’s pretty much 5 of the 6 AWS ones, but notably not including Relocate, which is there specifically for VMware stuff :/

Retire also looks like it’s being used to do the same work as Repurchase – ‘retire traditional app and convert to new SaaS apps’.

Notes

[1] Maybe not so original. Although Gartner folk can trace their Rs back to about 2010 there are people from EDS who recall them from 2005/6 (when cloud was just becoming a thing).
[2] As if Charley Says wasn’t terrifying enough growing up in the 70s, another reason to avoid strangers’ cars was in case they might spin out of control because of the wrong tyre mix.