I like to have permanent SSH connections from (a VM on) my home network to the various virtual private servers (VPSs) that I have scattered around the globe as these give me SOCKS proxies that I can use to make my web traffic appear from the US or the Netherlands or wherever (as mentioned in my previous post about offshoring traffic).

I’ve been using Ubuntu VMs since Jaunty Jackalope, and when I discovered AutoSSH I made myself some init scripts that would make the connections. Later on I modified those scripts to run in Screen so I could jump onto them if needed for any troubleshooting. That was all fine in the world before systemd, but with Ubuntu 14.04 LTS reaching end of life there’s no longer a pre-systemd choice for a mainstream distro[1]. So I’ve bitten the systemd bullet, and upgraded my VMs to Ubuntu 18.04 LTS.

Of course… my old init scripts didn’t just work. So I had to cobble together some systemd service units instead.

[Unit]
Description=AutoSSH tunnel in a screen
After=network-online.target
Wants=network-online.target

[Service]
User=changeme
Type=simple
Restart=on-failure
RestartSec=3
ExecStart=/usr/bin/screen -DmS tunnel1 /usr/lib/autossh/autossh \
-M 20020 -D 0.0.0.0:12345 user@vps.example.com

[Install]
WantedBy=multi-user.target

The unit source code is also in a gist in case that’s easier to work with.

The unit can then be enabled and started with:

sudo systemctl enable autossh_screen.service
sudo systemctl start autossh_screen.service
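
Once it’s running, the usual tools can be used to check on the unit and to jump onto the screen for troubleshooting. A quick sketch, assuming the User and screen session name from the unit above:

systemctl status autossh_screen.service
# follow the unit's output in the journal
journalctl -u autossh_screen.service -f
# attach to the screen (when logged in as the 'changeme' user); Ctrl-a d detaches
screen -r tunnel1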

Going through it line by line to explain what’s happening:

  • Description is a plain text explanation of what the unit is for. In my own I note which location the tunnels go to.
  • After (paired with Wants) is used to ensure the network is online before making SSH connections
  • User defines which user the screen runs as, and should be changed to the appropriate username
  • Type simple tells systemd that we’re not running a forking process
  • Restart on-failure means that if screen crashes for some reason then systemd will try to restart it
  • RestartSec tells systemd to wait 3s before doing any restarts (so it doesn’t thrash too hard on something that keeps failing)
  • ExecStart gets us to the actual command that’s running…
    • /usr/bin/screen is the default location for screen on Ubuntu (installed with ‘sudo apt-get install -y screen’)
    • -DmS tunnel1 tells screen to Detach but not fork, force a new session, and name the screen ‘tunnel1’ (mine are named after where they go to so that when I resume those screens with ‘screen -r’ I can pick out which VPS I’m using)
    • /usr/lib/autossh/autossh is the default location for autossh on Ubuntu (installed with ‘sudo apt-get install -y autossh’)
    • -M 20020 configures the monitoring port for autossh – make sure this is different for each unit if you’re running multiple tunnels
    • -D 0.0.0.0:12345 gives me a SOCKS tunnel on port 12345 – again make sure this is different for each unit if you’re running multiple tunnels (there’s a sketch of a second unit just after this list)
    • user@vps.example.com stands in for the username and fully qualified hostname of the VPS I’m connecting to
  • WantedBy defines what we’d have previously considered the default runlevel (normal system start)
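
For multiple tunnels (as mentioned in the -M and -D bullets above) each additional tunnel gets a unit file of its own, with its own screen name, SOCKS port and monitoring port. Here’s a sketch of a second unit (say autossh_screen2.service) with placeholder ports and hostname; note that autossh also listens on the port one above -M for its echo, so leave a gap between monitoring ports.

[Unit]
Description=AutoSSH tunnel in a screen (second VPS)
After=network-online.target
Wants=network-online.target

[Service]
User=changeme
Type=simple
Restart=on-failure
RestartSec=3
ExecStart=/usr/bin/screen -DmS tunnel2 /usr/lib/autossh/autossh \
-M 20022 -D 0.0.0.0:12346 user@other-vps.example.com

[Install]
WantedBy=multi-user.target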

Although I’ve been using Ubuntu 16.04 and 18.04 to acclimatise to systemd for the past few years I’m by no means an expert, so it’s possible that I could have done better here. Should I have used the ‘forking’ type and stuck with -d rather than -D in the screen flags? I just don’t know. This was cobbled together with help from this autossh gist and this python in screen example I found.

Update 9 May 2019

For a good overview of systemd check out Digital Ocean’s Systemd Essentials: Working with Services, Units, and the Journal (and their other posts linked at the bottom of that overview). There’s more at my systemd PinBoard tag.

Note

[1] I’ve kicked the tyres on Devuan, but we didn’t get along.


All three of the major cloud service providers have (or have announced) ‘have your cake and eat it’ versions of their services where data resides on premises whilst stuff is managed from a control plane in the cloud.

All of these services are predicated on a notion that data needs to reside on premises, whilst at the same time providing a subset of the services available in the public cloud, using the same management interface and underlying APIs.

Server huggers gonna hug

We have to ask why organisations (or at least the people working in them) might think that they need to keep their data on premises, and there are essentially two reasons that come up time after time:

  1. Sensitivity – this label covers the plethora of security, privacy and regulatory related things that ostensibly get in the way of data being put into the ‘public’ cloud.
  2. Latency – for when the round trip from the customer’s on premises location to the cloud and back introduces unacceptable delay.

The latency argument is pretty clear cut

If the ms it takes to get data from your factory sensor to the cloud and back to the robot is too much then cutting that out by having the kit close by is clearly going to work. This is a good reason for adopting this type of hybrid model. Of course other hybrid models that deal with ‘edge’ compute are also available, so there are choices to filter through.

The sensitivity argument is much more murky

In principle there’s a clear separation of concerns between the data, which is sensitive, and that stays on premises; and the control plane metadata, which isn’t sensitive, and can happily go back and forth to that public cloud that we were unwilling to trust with our sensitive data.

In practice there’s an administrative level back door wired up from the kit hosting the sensitive data right into that public cloud that we were unwilling to trust with our sensitive data. Awkward. Of course we can spend some due diligence time picking over controls and monitoring; and some lawyer time picking over contracts to settle who gets blamed for what.

Things get much murkier if you ship logs

If the control plane is just about turning stuff on and off then we can claim a separation between control metadata (not sensitive) and app data (sensitive), and the lines around that claim stay pretty sharp and clean. But once we start throwing logging across that line it’s no longer sharp and clean, especially when we get to exception handling.

Exceptions contain things like stack traces, and stack traces have a nasty habit of carrying with them the in memory plain text of all that sensitive stuff you’ve so carefully encrypted at rest and in motion.

For sure developers can be asked to write code that doesn’t leak sensitive data to logs, and that’s just as easy to police as every other aspect of code security.
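
To make that concrete, here’s a minimal sketch (in Python, with an invented card-number pattern) of the sort of scrubbing developers get asked to bolt onto the logging boundary; note that it does nothing about the exception stack traces described above, which is rather the point:

import logging
import re

# Illustrative only: mask anything that looks like a card number before a
# log record is emitted. Real data loss prevention is far more involved,
# and this does nothing for tracebacks attached to exceptions.
CARD_RE = re.compile(r"\b\d{13,16}\b")

class ScrubFilter(logging.Filter):
    def filter(self, record):
        record.msg = CARD_RE.sub("[REDACTED]", str(record.msg))
        return True  # keep the (now scrubbed) record

logger = logging.getLogger("app")
handler = logging.StreamHandler()
handler.addFilter(ScrubFilter())
logger.addHandler(handler)

# '4111111111111111' is the classic test card number, not real data
logger.error("payment failed for card 4111111111111111")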

This can also become the province of ‘data loss prevention’ (DLP) technologies, though they’ve tended to focus on human driven channels like email and file sharing rather than system stuff like logs.

An approximation that emerges here is that if the data is so sensitive that it needs to be kept on premises then it’s likely also the case that the logs and any associated log management need to stay on premises too. Log shipping to take advantage of cloud based log management tools seems to puncture any clean line between sensitive app data that must be kept on premises and control metadata that can be allowed into the public cloud.

Conclusion

The latency argument for these data on premises, management in the cloud models stands up well to scrutiny; the sensitivity argument (which seems far more prevalent) isn’t quite so robust. It’s clear that the cloud service providers want to lure the server huggers in with a ‘have your cake and eat it’ model, but it’s less clear that the model is robust in the face of security, privacy and regulatory demands that customers insist can only be dealt with using on premises infrastructure. Of course the cloud service providers know this, and have chosen to launch these services anyway, so they must see some profitable middle ground.

Fundamentally the issue here is all about control. Do the server huggers just want control of their data, in which case these approaches might appease; or are they trying to hold onto control of the whole infrastructure?


I mentioned Swift Playgrounds in my Learning to Code post a few years back, but at that time I didn’t have a new enough iPad to try it for myself. That changed when I recently got an iPad Mini 5[1], so I’ve been running through the Learn to Code modules.

It’s like a puzzle game, that you solve with code

The setting is somewhat reminiscent of 2D puzzle games like Icicle Works or Chip’s Challenge, except that it’s 3D, and the character is moved with code rather than cursor keys.

Things start out very much in the vein of Turtle graphics that will be familiar to anybody who’s encountered Scratch. There’s the usual move_forward, turn_left, do_action type stuff; but it’s not long before you’re writing functions, doing loops, using conditional and logical operations and building algorithms – and that’s just Learn to Code 1. Learn to Code 2 gets into variables, types and arrays, plus a bunch of object oriented (OO) stuff, though without ever using words like ‘class’ or ‘constructor’.

I’ve personally found going through the first couple of modules more like playing a game, and less like learning a new language. Partly this makes me feel like I’ve learned how to play Swift Playgrounds, but maybe not (yet) learned much (idiomatic) Swift.

It’s not perfect, but it does seem very very good

There are a few recurrent annoyances:

  1. Getting the cursor where you want it sometimes seems to be an ordeal, and I’ve often found myself having to scroll the screen a little up/down to enable the cursor at all.
  2. The keyword suggestions above the keyboard are generally all you need. Until you need the ! operator, at which point you have to pull up the full keyboard.
  3. If it thinks you want to enter an integer then you’ll be given the numbers only keyboard, so to put in anything except an integer requires a bit of cursor-jiggling hoop-jumping – if only there were a button for ‘give me the whole keyboard’.

There were a few places where I wondered if I’d have been able to make progress if I didn’t already know a bit of Java/C# and the general way that OO languages work.

Look closely, and you’ll see that I’m cheating this level

It’s also possible to cheat some of the tests that let you move onto the next level (a route I chose to take a few times when it felt like I was ‘grinding’: being asked to type in stuff I’d already done on earlier levels, because it’s impossible to build a reusable library of functions, and too time consuming to copy/paste from previous work).

Beyond the annoyances noted above it seems to be an excellent way into programming and the logic behind it. The puzzle game element delivers regular dopamine hits, and the underlying syllabus has enough repetition to ensure that the key concepts are absorbed, but not so much that it becomes boring. Could you write a Swift program to solve a given problem after Learn to Code 1 & 2? Probably not[2]. Does it provide the right mental framework to underpin a programming mindset? Certainly yes.

A friend was visiting at the weekend to collect their $son0, who had been visiting my $son0, and said that their $son1 had expressed an interest in programming as a career (and computer science as an A Level choice), but she was concerned that they’d never actually done any programming and hence didn’t really know if they liked it or not. I suggested that Swift Playgrounds could be a good acid test – the puzzle game element might be a bit basic/childish for a gaming addicted teenager, but the programming element is deep enough to determine if they’re into it (or not).

I made that recommendation because Swift Playgrounds seems to nail the very thin middle ground between the instant gratification of gaming, where the expectations of an entire generation have been calibrated by Minecraft; and the inherently deeper challenges of learning to write and (more importantly) debug code (that my generation was forced into if we wanted to play games, as we had to type them in BASIC into 8 bit computers from magazine printouts and library books).

You need a recent iPad

I was hardly surprised that Swift Playgrounds didn’t work on my ancient iPad 2 when I first heard about it, but it seems that recent versions only work with ARKit (A9 or later CPU) capable models. On one hand that probably means that Playgrounds can do funky stuff with robots and drones and augmented reality, but it also means that those older iPads that people might want to hand down to their kids might not be any use for this means of learning programming, which is a shame.

Whether Swift Playgrounds is so good that it’s worth buying a new iPad for is a question my brother was scratching his head over as he left my place the other day (after my nephew had devoured the first few levels and had to be torn away).

There’s also a web version, which is not the same at all

Before I got my new iPad I’d come across the Online Swift Playground when I was trying to do some stuff with emoji flags[3]; but it’s very much an empty vessel for testing Swift code, and not a place to learn programming by playing a puzzle game.

Conclusion

I’m very impressed with Swift Playgrounds, and from everything I’ve seen it might just be the best way to get kids into a coding mindset. For anybody that’s already putting a reasonably recent iPad into the hands of their kids it’s something they should definitely try. Whether it’s worth buying an iPad for (or at least the deciding factor in getting an iPad versus something else) is a trickier question. I can only note that the £99 that my dad spent on a ZX81 would today buy a basic (32GB 9.7″) iPad and leave around £50 in change (at least according to this inflation calculator).

Notes

[1] The Mini 5 really is an impressive machine – I recently tweeted ‘Wow, no wonder the iPad mini 5 feels brisk, its single core @geekbench is faster than my desktop (Ryzen 5 2600) and my gaming laptop (i7-7700HQ)’. The only thing I’d change is the screen ratio (from 4:3 to 16:10) to make it small enough to fit in a jacket pocket like my 2013 Nexus 7 did.
[2] I’ve taken a glimpse at Learn to Code 3, which seems to move away from the puzzle game format and on to more open front end development, and of course Playgrounds itself is an open environment where anything that can be done with Swift can be tried out. So there’s lots there to fill the gap between the intro modules and real coding.
[3] But then I ran into the terrible emoji support for Windows 10, where I got ISO country letter pairs rather than actual flags :(


Towards the end of my recent trip to Pas de la Casa I realised that I was missing the telemetry I’d got from the Valnord App whilst skiing in Arinsal. A quick search suggested that the Ski Tracks app would be a good purchase for my iPhone and Apple Watch, so this is a reflection on the single day that I’ve used it so far.

It worked pretty much as expected

As I got booted up before the first lift I set the app in motion from my Apple Watch (Series 2), and as I glanced at it after a few runs it seemed to be recording runs as expected.

When I got to the end of the day and looked at the phone app I was surprised to see nothing there, but then it imported the day’s runs, presenting an overview:

a chart of speed and altitude:

and a breakdown of individual runs:

Faster earlier?

I was pleased that my day had been recorded, but a bit miffed about some inconsistency. When I looked at my watch after run 13, which had been a clear blast down Obagot III, it read 122.6km/h (76.2mph), so how did that become 64.3mph?

I now wish I’d done a screen grab of my Apple Watch.

Health Tracking

One of the main things I use my Apple Watch for is health tracking, and I’d been surprised how little activity got clocked up in my first few days. The watch was barely registering the hike from the hotel to the lifts and back.

Using Ski Tracks seems to have massively over-corrected things:

 

It seems that every minute I had the app running counted as exercise (even when it knows I was stood waiting for lifts or sat on them); and I’d clocked up a whopping 1700+ move calories – I should have felt much hungrier by the end of that day :0

Hopefully this stuff will get fixed in an update. I’d like my actual ski time to count as movement and exercise, but not all the waiting around for lifts etc.

It didn’t run down my batteries

A big promise for this app was to not run down batteries (which can happen quickly when GPS is used all the time), and it made good on that promise. My watch battery seemed to go a little quicker than usual, but my phone was still at around 60% at the end of the day’s skiing.


The group that I’ve been skiing with for the past few years wasn’t coming together this year due to some health issues and other commitments, so my daughter and I chose to return to Pas de la Casa, which we last visited three years ago (on the proviso that we could return to Hotel Les Truites, which fortunately still had space).

Getting there and back

I planned flights around the Andbus transfers arranged via Pasdelacasa.com. Unfortunately fog at Gatwick significantly delayed our departure, but thankfully the Andorra Resorts team were very responsive and rebooked us onto a later bus (though this turned out to be news to the driver, and we just made it onto the last two available seats on a packed bus). The trip back was also made fraught by protests for Catalan independence (though luckily I’d allowed stacks of time, which I’d planned on spending in the T2 lounge at Barcelona Airport – though sadly it’s been closed for months for ‘improvement works’).

The length of time spent on transfers (which essentially becomes a whole day getting there, and another getting back) is probably the main thing that would put me off returning to Andorra.

The skiing

Without such a large group getting around the Grandvalira area was pretty easy, and we spent a lot more time over in El Tarter and Soldeu, though the ‘funnel lift’ as we came to call ‘Cubil’ remained a significant and annoying bottleneck between one side of the area and the other. They really need something with more capacity at that point in the network.

With half term holidaymakers from the UK and France the entire area was busy, and though we found a handful of quiet lifts, all the primary routes were pretty choked up. I also can’t remember being on a trip where lifts stopped quite so frequently for bozos who can’t sit down and stand up again without falling over or some other buffoonery.

All 5 days that we skied were beautifully sunny, so there was no fresh snow. The pistes were good at the start of each day, though things got a little piled up and soggy by the end. Our last day saw some of the runs we wanted to do closed.

Equipment

Having had a decent experience with Skiset last year in Austria I used them again this time and got a voucher for Surf Evasio 1. My plan to pick up gear on our day of arrival was thwarted by the delays, but when we pitched up on Saturday morning the place was busy but not too busy, and they did have stuff ready for us[1].

For boots they used a 3D scanning machine for fit, and the boots I got (Nordica brand) were the most comfortable I’ve ever had.

The ‘excellence’ skis they gave me were Head Supershape i.Magnum, and I didn’t get along with them. The bases were pretty scratched up, so they didn’t glide; and everything felt like hard work. So I took the shop up on their offer to swap, and when asked what I wanted different the answer was simply ‘faster’. They gave me some Nordica Dobermann SLC skis that were absolutely brilliant. Their target user profile of ‘groomed, expert, short turn, high speed’ suited me perfectly. I’d say it’s a close call between these and the Lecroix Mach Carbons I had on my last trip to Pas for best skis I’ve ever had.

My daughter was very happy with the Roxy Dreamcatcher 78s that she got (which were definitely faster than the Heads I wore on day one, but a little slower than the Nordicas I switched to).

Eats

Restaurant La Familia had been great last time, so we took the easy option (as it’s just downstairs from the hotel) and dined there again on our second and final nights. It’s now rated #1 restaurant in Pas, which I think is well deserved. On arrival we grabbed take away pizza from Oh Burger Lounge, which was good enough for us to return a couple of days later for the dining in experience.

A chance stop at the Hotel Nordic restaurant in El Tarter led to us returning there for three days in a row. The tapas portions are huge, tasty, and surprisingly inexpensive for a place that seems so posh. Having seen the salmon and avocado salad I had to go back to try it; and my daughter insisted on returning again the following day (it helped that she loved the run down Aliga). The hotel also has great WiFi.

Conclusion

A bit like returning to your old school, Pas (and Grandvalira) felt smaller, though there was still plenty of fun to be had. I left last time feeling like there was more to explore, whilst I left this time feeling like I’d exhausted the place. If I go back to Andorra I’d like to try Arcalis, though it doesn’t look like there’s enough there for a whole week.

Note

[1] For some weird reason they’d prepared a snowboard for me rather than skis, but as soon as the mistake was recognised they sorted things out, and the service was very friendly. I was less impressed that they somehow managed to give my daughter different sized boots (which we realised when one of them wouldn’t fit the bindings), but that mistake was also sorted quickly and efficiently.


Life with a better Outlook

People frequently send me ‘invites’ with attached Internet Calendaring and Scheduling (.ics) files. This is problematic, as I don’t use (fat client) Outlook, and web/mobile Outlook might be able to open those files (sometimes), but can’t do anything useful with them[1].

To make matters worse, it’s pretty common for the ‘invite’ to say nothing about when the event is happening. It’s a secret. Open the .ics attachment and all will be revealed.
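
For anyone who hasn’t looked inside one, an .ics file is just a blob of structured text; a minimal, entirely made-up example looks something like this, with the ‘when’ buried in DTSTART rather than anywhere a human reading the email will see it:

BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//Example Corp//Invite Mailer//EN
BEGIN:VEVENT
UID:20190514T130000Z-1234@invites.example.com
DTSTAMP:20190501T090000Z
DTSTART:20190514T130000Z
DTEND:20190514T140000Z
SUMMARY:Quarterly review call
LOCATION:Meeting room 3
END:VEVENT
END:VCALENDAR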

Still using letterhead

This is a subset of a larger problem where people take a few lines of text that could so easily just go into the body of an email and wrap them in a PDF, Word doc, Excel spreadsheet or whatever; which completely fails the keep it simple, stupid (KISS) test. It’s also an accessibility issue, and a potential security issue – making people open other apps (that they may not have, or might have difficulty using) so that those apps can read attachments that could be carrying any manner of malware.

My kids’ school is one of the worst perpetrators. They’ll happily send a PDF that has a single line of content. Often something completely mundane like, ‘please log onto section blah of the parent portal (for the actual info we’re trying to convey)’.

I suspect that in their case (and many others) it’s a failure to adapt to ‘digital’. They may crow about kids using iPads in their classes[2], but I suspect that their approval workflow is much the same as when everything went to the headmaster’s secretary to be manually typed onto school letterhead.

Email overload is a modern disease, and much of the ‘digital’ conversation these days goes on about shiny new tools that are supposed to eliminate email (and, it seems, any free RAM you might have had on whatever you’re using). But attachments make overload worse rather than better, as they introduce more friction into reading email.

You can help by…

If you’re asking somebody to come along to an event, for sure add an .ics file in case it might be useful. But make sure to say (in the subject line) when it’s happening, so it’s not obligatory to open that attachment.

More generally a good rule of thumb might be that anything that fits onto a page or less shouldn’t be an attachment – just put the text into the body of the email. Hooray – you’re now a digital native.

Notes

[1] Let’s not go down the rabbit hole of incompatibility between the worlds of Microsoft, Google and Apple when it comes to invites.
[2] The iPad fad was in fact short lived, and they soon switched (back) to laptops.


‘Filter failure at the outrage factory’ is a term I’ve been using on Twitter[1], usually as part of a quote tweet for something describing the latest social media catalysed atrocity, but I thought it deserved a longer form explanation, hence this post.

I fear that I buried the lede in a note when I first published this. But the core point is that Rule 34 (if it exists there is porn for it on the Internet) is being weaponised against the general population. The people out past (what should be) the 5th standard deviation are very into their peculiar peccadilloes, and that looks like ‘engagement’ to the algorithms.

The Outrage Factory

This is a label I’m going to smear across the entire advertising funded web, but the nexus of the issue is where traditional media and its previously trusted brands intersect with social media. The old sources of funding (subscriptions and classifieds) dried up, and that drove media companies into the arms of online advertisers, which quickly became a race to the bottom for our attention. Click bait headlines, fake news, outrage – they all get attention. Facebook, YouTube and all the rest run off algorithms that find the stuff that grabs and holds our attention, and those algorithms have discovered that outrage == $$$. Unfortunately those algorithms don’t care whether the material they hype is based on truth or some conspiracy theory nonsense or just an outright falsehood.

Filter Failure

Clay Shirky famously said [2] “It’s not information overload. It’s filter failure.” when describing how we could deal with the fire hoses of information the Internet can throw at us. At the time (2008) we were seeing the beginning of a shift from hand selected filters such as the RSS feeds one might subscribe to in Google Reader to ‘collaborative filters’ where we started to use the links posted by friends, colleagues and trusted strangers on Facebook and Twitter. Notably those were the days before algorithmic feeds.

The problem that developed from there is that where we thought we were handing curation over to our ‘friends’ we ended up handing curation over to the attention economy[3]. JP Rangaswami laid out Seven Principles for filtering, and it seems what we have today fails on all counts – we lost control.

Why care?

We should care because our democracy is being subverted. The checks and balances that worked in the world of print, radio and TV have proven utterly ineffective, and bad actors like the Internet Research Agency (IRA) are rampantly exploiting this to undermine our society[4].

We should care because dipshit stuff like the anti-vax movement that used to be consigned to wingnuts on the fringe of society has been sucked into the mainstream to the extent that it’s ruining herd immunity and we’re having life threatening and life changing outbreaks of completely preventable diseases.

We should care because algorithmically generated garbage is polluting the minds of a whole generation of kids growing up with iPads and Kindles as their pacifiers.

We should care because our kids are finding themselves a click or two away from Nazi propaganda, the Incel subculture, and all manner of other stuff that should be found somewhere out past the 5th standard deviation in any properly functioning society rather than being pushed into the mainstream.

We should care because this is the information age equivalent of the Bhopal disaster, and the Exxon Valdez, and the Great Molasses Flood, where toxic waste is leaking out and poisoning us all.

What can we do?

We don’t have to play along and sell our attention cheaply.

A wise friend once tweeted ‘I predict that in the future, “luxury” will be defined primarily by the lack of advertisements’. So upgrade your life to a life of luxury. Leave (or at least uninstall) Facebook. Install an ad blocker. Curate your own sources of information on something like Feedly.

Update 23 Apr 2019

I’ve been tracking items relating to this with the ffof tag on Pinboard.in

Update 25 Apr 2019

Sam Harris’s ‘Making Sense’ podcast interview with ‘Zucked’ author Roger McNamee, ‘The Trouble with Facebook’, provides an outstanding tour around this topic.

Notes

[1] About not amplifying outrage, listening at scale, outrage amplification as part of the system design, and recommendation engines pushing anti-vax onto new parents
[2] “It’s Not Information Overload. It’s Filter Failure.” Video, Clay Shirky at Web 2.0 Expo NY, Sept. 16–19, 2008.
[3] Aka ‘surveillance capitalism’
[4] Renée DiResta’s Information War podcast with Sam Harris is a good place to start, and then maybe move on to The Digital Maginot Line.


Skills development and training is a huge part of driving an organisation forward into the future, and so it’s something that I spend a lot of time and energy on. I’ve seen a bunch of stuff over the past year that leads me to expect a revolution in training.

Katacoda

I first came across Katacoda at RedMonk’s Monkigras conference a couple of years ago when Luke Marsden showed me the Weave.works Tutorials he’d been building; and I immediately fell in love with the openness and interactivity of the platform. I’d seen training before that simulated a real world environment, but nothing that provided an authentic on demand experience[1].

I spent the following months creating what became ‘DevOps Dojo White Belt’ taking the ‘Infrastructure as Code Boot Camp’ materials we’d used for in person workshops and making it into something that could scale a lot better[2].

It was much more recently that I saw the full potential of Katacoda realised. The team that created our ‘DevOps Dojo Yellow Belt’ incorporated Continuous Integration and Continuous Delivery (CI/CD) pipelines in such a way that we could test whether students were achieving specific outcomes.

Qwiklabs

After attending a Google Cloud Derby hosted by Ant Stanley under the auspices of the London Serverless Meetup I was sent an invite to some Google Cloud Platform (GCP) training on the Qwiklabs platform (that was recently acquired by Google).

Specifically I was invited to do the GCP Architecture Challenge, which turned out to be like no training I’d ever done before. As explained in the ‘quest’ intro: ‘Challenge Labs do not provide the “cookbook” steps, but require solutions to be built with minimal guidance’…’All labs have activity tracking and in order to earn this badge you must score 100% in each lab.’

I found it was like doing an escape room. Each lab would describe an outcome that needed to be achieved, and it was up to me to figure out how to do that (using the tools provided), against the clock. Perhaps I should have done some training first, but it was really fun to learn stuff on the fly, solve the puzzle, and escape from the room (well, lab) before the clock ran down.

Open book (it’s OK to use Google and Stack Overflow)

The emergent point here is that students shouldn’t expect to be spoon fed every minute detail – they’re expected to go and read the docs, dive into Stack Overflow, search for answers on Google and in GitHub, maybe even watch some YouTube primers. Real life is an open book test, so training should reflect that.

Exercism

I saw this outcome oriented theme continue when a colleague pointed me towards Exercism yesterday. It provides training for a wide variety of programming (and scripting) languages with a common underpinning that it’s all based on tests. Just like with test driven development (TDD) you write code to pass the test suite. This results in a stunning variety of working solutions that can be seen in other people’s submissions, which are worth reviewing to discover everything from language idioms to significant performance improvements. Students can edit the tests too, adding things that might have been missed. It’s a really neat way of learning a language and at the same time getting into the discipline of TDD.
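
To give a flavour of the format, here’s a made-up, Exercism-style exercise in Python (an illustration, not one of their actual exercises): the tests come with the exercise, and the student writes whatever implementation makes them pass.

import unittest

# The exercise ships the tests; the student fills in the implementation.
def leap_year(year):
    # True for leap years in the Gregorian calendar
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

class LeapYearTest(unittest.TestCase):
    def test_divisible_by_4(self):
        self.assertTrue(leap_year(1996))

    def test_century_not_leap(self):
        self.assertFalse(leap_year(1900))

    def test_divisible_by_400(self):
        self.assertTrue(leap_year(2000))

if __name__ == "__main__":
    unittest.main()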

Codecademy, Microsoft Learn and The Hard Way

I can’t finish without a nod to these:

I started using Codecademy when it first launched, and put up with the early wrinkles because it was so beautifully immersive. I was lured back in the past week by their Learn Vue.js course (after seeing Sarah Drasner showing off the potential of Vue.js in her Serverless Days London talk[3]), and it was great (the course and Vue.js) – though I can’t say I’m keen on the cost of their ‘Pro’ subscription model.

When I saw Microsoft launch Learn at Ignite 2018[4] my first take was ‘they’ve totally ripped off Katacoda’, but that was quickly followed by being seriously impressed at how well they’ve incorporated on demand subscriptions for the underlying Azure services it provides training for. There’s some seriously good introductory material there, and I hope over time they’ll close the gap to meet up with Azure certifications.

‘The Hard Way’ isn’t so much a platform as a methodology designed to get ‘muscle memory’ into students, and show them the essential details that often get buried under layers of abstraction (but matter when things fall apart). I first came across it with Kelsey Hightower’s Kubernetes The Hard Way, and got pulled deeper with Ian Miell’s Learn Git/Bash/Terraform The Hard Way, though it’s worth noting that it all started with Zed A. Shaw.

Conclusion

The future of IT training is interactive and outcome oriented. Rather than being spoon fed a particular way to do something we can expect to be given open platforms that allow experimentation (and provide a safe place to try things out), and open challenges that test our ability to solve problems with the entire resources of the Internet at our hands (just like when we’re putting knowledge to use at work). If we want people to use TDD and build CD pipelines, then it should be no surprise that those same tests and pipelines can form an essential part of their skills development. The good part is that this unleashes unlimited creativity, because people can figure out their own way to achieve an outcome; and approaches can be compared with peers and mentors to discover how things might be improved. It’s a long way from passively sitting in a classroom, or watching a video.

Notes

[1] For me the best demo of Katacoda is the 10 minute beginner level ‘Get Started with Istio and Kubernetes’ – there’s just so much packed into it.
[2] At the time of writing over 10,000 DXCers have done the White Belt.
[3] I can’t find a recording of that talk, but her VueConf.US talk ‘Serverless Functions and Vue.js’ seems pretty similar.
[4] I wasn’t actually at Ignite (I’ve never been) – it just seems to me that Learn is by far the most important thing that came out of the 2018 event.


TL;DR

Getting a Midas Plus squirting properly again is very easy once you learn the secret of removing and cleaning the thermostatic cartridge. A simple procedure due to some great design, but one that isn’t documented anywhere that I could find.

Background

In just over a year since I had a new shower fitted as part of an en-suite refurb it went from ‘Oh yeah!’ to ‘meh’. I’ve had Aqualisa showers at home for over 20 years, which I guess makes me a loyal customer, but in that time I’ve needed a few replacement thermostatic cartridges – generally because the flow rate has become a bit useless.

Checking filters

The troubleshooting guide in the installation instructions suggests three possible causes for poor flow rate – twisted hose, debris in shower head, and debris in filters, with ‘check and clear as necessary’ as the action to take. I knew it wasn’t the first two, as flow was fine if I turned the thermostat to cold. So I set about checking the filters, which meant taking the mixer bar off its mount.

  • First I turned off the water supply
  • Then I taped over the fixing nuts with electrical tape (as they’re shiny stainless steel, and I didn’t want to scratch them with my dirty old adjustable spanner [they’re 29mm, and I don’t have a regular spanner that big])
  • After loosening the nuts I was able to pull the mixer bar away and remove the filters. The cold filter was clear, but the hot filter did have quite a large crystal deposit in it, which I rinsed off with a jug of water.

All of this made absolutely no difference. The shower flow rate was still poor.

The cartridge

I called the Aqualisa support team for help, and after talking through the problem they offered to send me a replacement thermostatic cartridge. This led to a follow-on question from me about how to fit it once it arrived. The support person said to follow the installation guide, but I’d already been through that, and it makes no mention at all of cartridge removal and replacement. She then told me to start by taking the end cap off the control. When I went and tried that I soon discovered that isn’t how to get the cartridge out.

Picture of cartridge from Aqualisa website

I then did what I perhaps should have done in the first place. I took to YouTube, and found this video ‘Thermostatic cartridge: maintenance, replacement and calibration‘, which showed me that the cartridge is held in by a (very well hidden) grub screw on the bottom of the mixer bar. All I had to do was:

  • Turn off the water supply (IMPORTANT – the water on/off on the left hand side of the bar simply acts on the cartridge, so if you take it out without isolating the supply first the cartridge will fly out and then water will gush from the mixer bar).
  • Undo the grub screw on the bottom right hand side of the mixer bar with a 3mm hex bit
  • Pull the cartridge out
  • Rinse away any debris using a jug of water

    Other side of cartridge showing where grub screw fits

Once I popped the cartridge back in the shower was as good as new.

Crucially there was no need to remove the end cap and mess around with the thermostatic calibration.

I have questions

Once you know where the grub screw is, the simplicity of removing and refitting the cartridge stands out as a wonderful piece of design. Somebody thought very hard about how to make this as easy as possible; and then somebody else decided to exclude any mention of the grub screw and the cartridge from the instructions. Given that in the past Aqualisa have provided detailed instructions for the much more complex process for removal and refitting of earlier designs it seems really odd that they managed to make everything so much better, then chose to keep the details to themselves. This must result in a higher than needed support burden, and associated drag on profitability, which I find utterly bizarre.

Also the filters on the cartridge itself are (by my estimation) finer than the main inlet filters. So ‘check and clear’ for the filters should definitely mean both the inlet filters and the filters around the cartridge itself. Again I’m guessing that’s by (very good) design, and then somewhere else a decision has been made to withhold crucial information :/

 


Cloudflare recently announced two additional capabilities for their “serverless” Workers: support for WebAssembly as an alternative to JavaScript, and a key-value store called Workers KV. WebAssembly will allow Workers to be written in compiled languages such as C, C++, Rust and Go. Workers KV provides an eventually consistent state storage mechanism hosted across Cloudflare’s global network of over 150 data centres.

Continue reading the full story at InfoQ.