This is the blog version of a Twitter conversation with my colleague Graham Chastney.

Huawei, and the war on trade

POTUS #45 has been pursuing a ‘trade war’ with China, as this appears to be popular with his base, even though it makes stuff more expensive for them and will ultimately harm the US economy. It’s not really a trade war, more a war on trade.

The latest target in that war is Huawei. First came a US export ban, then Google pulled their access to Android licenses and the underlying Google services and now ARM is breaking away.

This leaves us asking what happens to a mobile phone maker that relies on ARM for hardware, and on Google for software and services, when ARM and Google are forced to end those business relationships.

The direct and immediate consequence seems clear – Huawei’s mobile phone business (at least in the West) is toast. But let’s look at the longer term (unintended) consequences, and particularly the role played by open source, and how regulators might try to react.

This is great for RISC-V

If Huawei can’t work with ARM then the clear alternative will be the open source RISC-V Instruction Set Architecture (ISA).

Huawei is a Gold member of the RISC-V Foundation

RISC-V has been making great strides forward over recent years, with industry giants like Western Digital and Nvidia jumping on board. But so far they’ve been nibbling at the bottom of ARM’s world, with the low spec microcontrollers that do housekeeping stuff on hard drives and graphics cards.

A deep pocketed and cornered industry behemoth like Huawei can now mount a full on assault at the top of ARM’s market, to win RISC-V a place in the high end Systems on Chip (SoCs) used in mobile phones.

But bad for Western RISC-V specialists

They’ll be cut out of the action, as the work will be done in Shenzhen rather than Cambridge.

The software part has already played out

Android as we know it in the West isn’t a thing in China. The phones being used on the Chinese side of the Great Wall might ostensibly use Android, because they’re based on the Android Open Source Project (AOSP), but they don’t use Google apps or the underlying services. So cutting Huawei off from Google’s Android licenses cuts them off from Western consumers, but doesn’t impact their domestic market at all.

We have however seen this show before (without the populist politics). Amazon sells Fire tablets, which are Android without Google services. This saves Amazon from paying Google a license fee.

Many people install Google apps and services onto their Fire devices, and both companies do nothing to prevent this. Google doesn’t mind losing out on a license fee if they still get all that tasty customer data, and maybe even some sales in the Play store. Amazon doesn’t mind people using Gmail on a device that’s pushed up their volume production economics, and likely also pulled along a Prime subscription and maybe some Kindle sales.

Huawei could very easily sell AOSP devices into the Western market, where customers help themselves to Google apps and services. Google’s hands would be clean, as they wouldn’t be taking a license fee or helping Huawei in any way.

What I don’t see happening here is Huawei trying to build a portfolio of services and associated apps to appeal to those of us West of the Great Wall. Amazon didn’t bother, because it would take $Billions to build the platforms and establish the customer intimacy.

So what happens when the US Federal Government tries to cut Huawei off from RISC-V and AOSP?

Both of these projects originate from the West, and so it’s conceivable that Western governments will feel a sense of ownership over them. Furthermore it’s conceivable that Western governments will see open source as a loophole around export controls – a loophole that has to be closed.

It is of course completely impractical to prevent the export of open source. It’s inherently a globalised phenomenon. But this might be one of those times when politics tries to trump practicality. We’ve seen this show before too, with the export controls on cryptography, and the more recent statement by Australia’s Prime Minister that ‘the laws of Australia will trump the laws of mathematics’.

‘Open source, apart from you’, as Graham puts it, may be where the politicians and the agencies beneath them want to go. But making that real will rapidly play out first as tragedy then as farce, as an endless game of whack-a-mole ensues in a futile attempt to stop Unicode characters from crossing international borders.

It is sadly a losing game that governments will feel obliged to play, because the alternative is to accept that open source is a transnational power that transcends the power of national governments; and of course that’s an alternative that’s unacceptable to any national government.

So I predict that the US Government will huff and puff and try to blow the house down, and they’ll drag the Five Eyes allies along under the banner of ‘intelligence’ protection, and then the rest of the West under the banner of ‘trade’. We’ve seen this show before too with the hapless ‘war on drugs‘, where we now have the ridiculous situation that many US states have legalised cannabis whilst most of the West keeps it unlawful because the US told them to (and that’s still the Federal government position).

Conclusion

The US Federal government is on the brink of cornering itself into a war on open source (which will become a side skirmish to the war on general purpose computing), and it’s going to get very messy and very silly.

Tags

I tag a bunch of stuff on Pinboard.in, so if you want to read more about RISC-V, open source hardware or the war on general purpose computing click on.


After building my RC2014 CP/M system and Gigatron I decided to dig through the pile of old integrated circuits I stashed away before leaving home thirty years ago. I don’t remember the source, but it seems that I stripped down 4 systems (which might have been something like data loggers) that had a reasonably complete set of TMS 9900 series components. So I have CPUs, serial I/O, programmable systems interfaces and more at hand (though kits with all the parts are available inexpensively from eBay).

There is already a design that I can adapt

Stuart Conner has published plans for a TMS 9995 Breadboard or PCB System, which I think will work well as the basis for an RC2014 build. The RC2014 backplane carries power, a 16 way address bus, an 8 way data bus, and the various signal lines I’ll need.

The TMS 9995 memory layout and memory access scheme isn’t identical to the Z80, so I don’t think I’ll be able to use the standard RAM and ROM modules, even though they use the same glue chips as Stuart’s design.

Stuart doesn’t just have the hardware design; he also provides some firmware, and even some debug firmware. My plan is to drop it onto a 27W512 EEPROM[1] and I’ll then be able to use a jumper on the top address line to switch between BASIC and Forth.
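As a sketch of how that jumper trick works (assuming the usual arrangement where the jumper ties the EEPROM’s top address pin high or low, which is how I’m planning to wire it): the 64KB 27W512 holds two 32KB images, and the jumper selects which half the CPU actually sees.

```python
EEPROM_SIZE = 64 * 1024   # a 27W512 is 512 kilobits = 64KB
HALF = EEPROM_SIZE // 2   # two 32KB images, e.g. BASIC and Forth

def eeprom_address(cpu_address, jumper_high):
    # The jumper forces the EEPROM's top address pin high or low,
    # selecting which 32KB half of the chip the CPU sees.
    return (HALF if jumper_high else 0) | (cpu_address & (HALF - 1))
```

So the same CPU address lands in one image or the other depending purely on the jumper, with no software involvement at all.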

The other major change I’m planning is to ditch the RS232Max, as I can just use TTL levels and an FTDI adaptor.

Getting it onto RC2014

I’ve ordered a few RC2014 prototype modules, and my general plan is:

  • One module for CPU and clock, plus serial too if I can squeeze it on
  • One module for ROM, with jumper select for BASIC/Forth
  • One module for RAM (which I might just hack from my now unused RAM module since I did the CP/M upgrade)

That should leave me with a spare prototype board to lash up some sort of blinkenlights I/O board from one of the TMS9901NL I have at hand.

Given that I already have CPU, serial, EEPROM and most of the glue chips that just leaves me needing some chip sockets, a crystal and a few pullup resistors and capacitors. I’ll also get a fresh RAM, as I don’t want to mess with my working CP/M setup.

Update 20 May 2019

After a bit more noodling on the memory arrangements I’m now pretty sure that a regular RC2014 32k RAM module will work along with the pageable ROM module. It seems that Stuart (9995 breadboard design) has gone for switching the /CE signal whilst Spencer (RC2014) has gone for switching the /OE signal, but I think both approaches are equally valid. I’m using the following signal lines as equivalent (Z80 : 9995):

  • MREQ : MEMEN
  • RD : DBIN
  • WR : WE

I now need to figure out why Stuart uses an inverted WE/CRUCLK into the 9902 for serial. Maybe I should just ask him.

Note

[1] I burned the K0001000 factory ROM image onto a 27W512 EEPROM to replace the supplied 27C512 one time programmable (OTP) EPROM, and it’s working perfectly in my CP/M system. As I ordered a 3 pack from eBay that leaves me with a couple to play with.


I’ve had a bit of a binge on this topic over the past few weeks, putting together some kits that I bought ages ago. My RC2014 came first, and then this, a Gigatron.

Why?

Ken Boak did a lightning talk on his Gigatron at OSHCamp 2018, and it sounded super interesting.

What is it?

A computer without a microprocessor, one that runs entirely on 74 series Transistor-Transistor Logic (TTL) chips plus a 32KB static RAM, a 1Mb EPROM (arranged as 64K x 16 bits) and a bunch of diodes.

Despite the simplicity of the hardware it’s capable of driving a VGA monitor (albeit at quarter resolution) along with sound.

It’s the computer that could have been made with late 70s components, but wasn’t, because the trade off between hardware simplicity and software complexity needed to program such a thing wasn’t really manageable back then[1].

It uses a Harvard architecture, which separates program code (in the EPROM) from data (in RAM), rather than the von Neumann architecture that we generally take for granted.
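A toy model of the difference (nothing to do with the Gigatron’s actual instruction set, purely an illustration): in a Harvard machine instruction fetches can only come from program memory, and loads/stores can only touch the separate data memory.

```python
class ToyHarvard:
    """Toy Harvard machine: fetches from 'rom', data ops touch 'ram'."""
    def __init__(self, rom, ram_size=32 * 1024):
        self.rom = rom                  # program memory (read-only, like the EPROM)
        self.ram = bytearray(ram_size)  # data memory (like the 32KB static RAM)
        self.acc = 0
        self.pc = 0

    def step(self):
        op, arg = self.rom[self.pc]     # fetch ALWAYS comes from program memory
        self.pc += 1
        if op == "ldi":                 # load immediate into accumulator
            self.acc = arg
        elif op == "st":                # store accumulator to data RAM
            self.ram[arg] = self.acc
        elif op == "ld":                # load accumulator from data RAM
            self.acc = self.ram[arg]
```

The program can never be read or written as data, which is exactly the property that lets the Gigatron keep its code in an EPROM that the running machine can’t touch.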

It is therefore deeply weird for anybody (like me) who grew up with 80s 8 bit ‘micros’. But it’s rather satisfying to play Tetris on something you put together in an afternoon; and it’s a journey of discovery to figure out how on earth that actually works. Co-creator Walter Belgers does a great job of explaining it in his Hackerhotel 2018 presentation.

Assembly and testing

The Gigatron kit comes with a really nice spiral bound handbook that explains everything from basic soldering through to a step-by-step assembly guide.

It took me about 4hrs to put it together, including a bunch of time wasted on an upside down 74HCT161 in the Program Counter[2].

It fired up almost first time. I didn’t have a spare 74HCT161 to replace the one I cut away, so I went with surface mounting it to the top of the through holes, then putting a blob of solder at each spot on the bottom. That didn’t quite work, but after I poked through a short piece of offcut wire into each hole the board started up properly and I got the ‘cylon’ on the blinkenlight LEDs.

When fully complete I found VGA, sound and input all working as expected, meaning I could start using the built-in apps. Unfortunately it wasn’t stable when I put it in the enclosure with rubber feet holding up the PCB, which is why it’s pictured out of the (rather nice) supplied case in the photo above. I think it’s fixed now (after a little more work on the program counter joints), though I still suspect a possible (and intermittent) PCB track break around U5.

Extras

My Gigatron came with a ‘Pluggy McPlugface’ input adaptor that lets a PS/2 keyboard be used for BASIC etc., but I’ve not got around to digging out an old keyboard to try it yet.

What’s next?

I’d like to have a go at writing (or at least adapting) a program written in Gigatron Control Language (GCL).

Try it yourself in the online emulator

If you want to see what the Gigatron looks like in action without building one yourself then check out the emulator.

Notes

[1] I guess the metaprogrammers in the LISP community might have had the tools at hand to do something like this, but they didn’t.
[2] It was the third chip that I put in, and I couldn’t believe my own carelessness. I used to be fairly handy at desoldering ICs, but I now realise that back then I used a clumsy old 30W soldering iron that was pretty good at delivering heat to many pins at once. What I should have done was just cut off the chip and pulled the pins, and that’s what I did in the end, but not before lots of solder sucking and wrenching.


I mentioned in my RC2014 post that I’d got myself a new oscilloscope, so this is the blog post to complement my review on Amazon.co.uk.

Background

The ancient single trace Telequipment S51E that I saved from a skip at 6th form college as they upgraded to shiny new dual trace scopes has served me well over the years, but at 1MHz it doesn’t have the bandwidth for retro computing projects. I’d briefly looked at what was available a few years ago, and knew that I could pick up something modern and effective for a few hundred pounds.
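As a rough illustration of why bandwidth matters (modelling an analogue scope’s front end as a single-pole low-pass response with its -3dB point at the rated bandwidth, which is a simplification but captures the effect):

```python
import math

def single_pole_gain(f_signal_mhz, f_bandwidth_mhz):
    # amplitude response of a single-pole low-pass filter
    return 1 / math.sqrt(1 + (f_signal_mhz / f_bandwidth_mhz) ** 2)

# a 7.3728MHz RC2014 clock viewed on a 1MHz scope comes through at
# roughly 0.13x amplitude, with the square edges smeared into a blur
```

That’s why a 1MHz scope shows that a 7MHz clock exists but can’t display it properly, and why 100MHz of bandwidth is comfortable for single-digit-MHz retro work.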

At around the same time I was saving the S51E I came across my first digital storage scope, which was probably from HP. I recall that it had cost something staggering like £50,000, and even at that price had less memory than my Amiga. Oh my have things changed as the digital revolution and commodification have reached the arena of test equipment.

Standalone vs PC peripheral

My expectation was that it would be cheaper to get a USB scope that uses a PC as its display, but that wasn’t the case – at least not at the bandwidth I wanted. Hantek do a 20MHz 2 channel PC-Based unit (the 6022BE) for £57.99, or 70MHz with 4 channels (the 6074BE) for £165.99, but I wanted 100MHz and although there is a 6102BE that would have fitted the bill I wasn’t finding them for sale.

The PicoScope 2000 series appeared to come highly recommended, and the lower bandwidth models come in at under £100, but unfortunately the 100MHz version starts at £479 for two channels.

So I picked the standalone DSO5102P as it had the right features (2 channels and 100MHz) at a very attractive price of £227.99. I also quite like the idea that I can use it without a PC or laptop, but I can connect it to a PC or laptop should I want to.

It’s very easy to use

The first task I had was to check the clock on my RC2014, so I plugged in, switched on, hooked up a probe and dialled in V/div and time base. I needn’t have bothered, as there’s an Auto Set button that would have got me straight to where I needed to be.

Like the scopes I grew up with it can display a live picture, but because it’s a digital storage scope it just takes a press of the Run/Stop button to pause time, at which point it’s possible to then scroll backwards and forwards along the time base.

Fancy features, and easy help

There’s stuff like Fast Fourier Transforms (FFT) that turn it into a spectrum analyser, and there’s no need to memorise the manual on how that stuff works, because the not quite so obvious features are documented in help pages that can be reached right there on the device.
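For the curious, the FFT mode is doing (much faster) what this naive discrete Fourier transform sketch does: turning a run of voltage samples into a magnitude for each frequency bin, which is what gets drawn as the spectrum.

```python
import cmath

def dft_magnitudes(samples):
    # naive O(n^2) DFT; a scope's FFT mode computes the same result
    # with an O(n log n) fast Fourier transform
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)))
            for k in range(n)]
```

Feed it a sine wave and the magnitude peaks in the bin matching the sine’s frequency; feed it a square wave (like a system clock) and you also see the odd harmonics marching up the spectrum.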

It’s not quite as fancy as the PicoScope – so no serial port decoding, but my expectations are still calibrated to a 90s electronics lab rather than Star Trek.

There’s an app

The DSO5102P has a USB interface (Type A) on the front for copying to storage (which I haven’t tried yet) and another (Type B) on the back for connection to a PC. Once connected to a PC there’s an app that essentially provides a copy of what’s on the scope’s screen, and the ability to manipulate and save that. It’s functional rather than amazing, but if I ever need to put some traces into blog posts, it will let me do that.

Conclusion

The just over £200 mark seems to be about as cheap as standalone scopes get, but that buys a LOT of functionality. Given that I grew up in an environment where dual trace was considered high end it’s remarkable how much things have improved. It’s possible to drive the price even lower when buying a PC peripheral rather than standalone, but (surprisingly) only for low end specs with less bandwidth.


This is the first post in a short series (with perhaps more to follow as I go deeper).

Why?

I came across the RC2014 at OSHCamp 2018, where I got to meet its creator Spencer Owen and make one of his Mini kits with my son. When he put his Full Monty kit on sale for ‘Cyber Monday’ I had two reasons to buy:

  1. My daughter is doing Computing GCSE, and I thought it would be good for her to put together something where she could see the CPU, ROM, RAM and interfaces (plus she likes soldering).
  2. I wanted to try out the CP/M Upgrade Kit to relive my days of running stuff on an RML380Z in the far corner of the computer lab at my high school.

What is it?

The RC2014 design is very modular, with each module plugging into a simple backplane that has power, data bus, address bus and miscellaneous signals running across it. It’s a step up from building on breadboard or stripboard, but has the same underlying simplicity and elegance.

In the Full Monty kit I got:

  • 8 way backplane – used to connect modules together
  • CPU – a Z80
  • RAM – a 62256 static RAM (32KB) supported by a couple of 74HCT series TTL chips
  • ROM – an AT27C512 one time programmable (OTP) EPROM (64KB) supported by a 74HCT32 and some jumpers so that different chunks can be addressed to select BASIC, CP/M monitor or Small Computer Monitor.
  • Clock – a simple 7.3728MHz crystal based clock using a 74HCT04
  • Serial – a 68B50 supported by a 74HCT04 with headers for an FTDI cable (and space for a 232Max chip for those wanting RS-232)

The components on each module are more retro inspired than retro authentic, but that helps to keep the complexity down. RAM in the 80s would have used lots more smaller chips, and the same would go for ROM (which would be EPROM rather than EEPROM). 80s designs also tended to stay in lane according to the CPU type and supporting chips for I/O etc. rather than mixing from the different worlds of Intel/Zilog and Motorola/MOS.

Assembly and testing

There are over 600 solder joints, but everything uses easy to work with through pin connectors, so after a few hours of me working on the backplane and my daughter making the modules we were ready to try it out.

It didn’t work first time :(

I then started working my way through the troubleshooting guide[1]… No obvious bad joints. Power was present at all the key points. No shorts on bus lines. Good continuity along bus lines.

At this stage I broke out my ancient Telequipment S51E single trace analogue oscilloscope that I saved from a skip at my 6th Form College when they were refreshing to shiny new dual trace scopes. It showed me that there was a clock signal, but didn’t have sufficient bandwidth to display it properly – you can’t look at a 7MHz clock on a 1MHz scope. I ordered a shiny new scope (more on that in another post).

Next up I switched each major chip with my RC2014 Mini – everything still worked on the Mini, and the Full Monty still refused to fire up.

At this stage I reached out to Spencer for some advice, and he suggested that I try each of the modules in my Mini (pulling the chip that was on the module). This let me eliminate the CPU, RAM and ROM from enquiries. I was pretty sure the clock was good (and my new scope arrived in the middle of troubleshooting to confirm that), so attention focused onto the Serial module.

Spencer noted that the only difference between serial on the Mini and the serial module is the resistors – 1k vs 2k2 respectively, so maybe my FTDI connector wasn’t getting along with the slightly weaker signal. I was using an Embecosm EHW5 USB UART that I got at the OSHUG Chiphack FPGA workshop, and that’s served me well for various projects, but Spencer was right – it was my ‘FTDI’ cable, and plugging it directly into the RX/TX lines got me a working terminal.

When my daughter got home that evening she was able to type in a BASIC program and test her creation.

I ordered a cheap FTDI cable from eBay, and when that arrived it worked perfectly with the serial module (even though it appears to have a counterfeit FTDI chip, which I guess is to be expected for low price parts from dubious sources[2]).

Extras

Along with the Full Monty kit I fell for the upsell to get some extra modules:

  • Digital I/O – provides 8 bits of input using push switches and 8 bits of output onto LEDs
  • Joystick – a derivative of the input side of the Digital I/O board to connect 2 DB9 joysticks (like those that came with the Atari VCS console and most of the 80s home computers)
  • Raspberry Pi Serial – connects a Raspberry Pi Zero with terminal software so that the RC2014 can be used with a USB keyboard and HDMI or composite screen rather than using a terminal (usually another PC with terminal emulation)

I made a disastrous error with the I/O board, thinking I knew the colour codes for 2K2 resistors vs 330R, that led to a lot of bothersome desoldering and a tiny bit of PCB track repair. This is why we have multimeters, and if we have multimeters we should use them.

I was disappointed to find that the joystick port didn’t work, and this turned out to be due to a fairly fundamental design issue. 80s joysticks hold the pins high for up/down/left/right/fire, and the switches connect to GND to switch them low. The RC2014 module holds the pins low, and switches to 5v to pull them high. On my old Competition Pro 5000 the 5v line isn’t even connected. I bodged this by bridging pins 7 & 8 on the DB9 connectors.
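The polarity difference is easy to get wrong, because on the classic joysticks a pressed switch reads as 0, not 1. A sketch of the convention (the bit assignments here are illustrative, not the RC2014 module’s actual mapping):

```python
# Classic Atari-style DB9 joysticks are active-low: each direction line
# idles high (pulled up) and a closed switch shorts it to GND.
UP, DOWN, LEFT, RIGHT, FIRE = (1 << i for i in range(5))
IDLE = UP | DOWN | LEFT | RIGHT | FIRE  # all lines high = nothing pressed

def pressed(port_value, line):
    # low means the switch is closed
    return (port_value & line) == 0
```

A design that instead expects lines to idle low and be driven high reads everything inverted, which is exactly the mismatch described above.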

My Pi Zero needed the mounting holes drilled out a fraction for the mounting screws to fit through. I’ve not got around to playing with that part yet (as it’s a feature of the Mini that I’ve already explored).

CP/M

Digital Research’s Control Program/Monitor (CP/M) was the ubiquitous operating system for things that were simultaneously grown up and microcomputers prior to the advent of the IBM PC and PC-DOS/MS-DOS. I didn’t see a ton of it in my youth (at least until the Amstrad PCW came along), because it was too high end for home computers (at least for stock systems without a load of pricey expansions) – so it was somewhat aspirational.

The CP/M Upgrade kit comes with:

  • Pageable ROM – to provide more control over which ROM section is being accessed.
  • 64KB RAM – for extra working memory
  • Compact Flash – to provide mass storage in place of floppy disks or hard disks. The module comes with a pre-formatted 128MB card that has a small selection of essential CP/M utilities on it.

As it’s an upgrade it reuses the RAM chip and 74 chips from the RAM & ROM modules, but oddly comes with its own R0001009 ROM (versus the R0000009 ROM that came with the Full Monty kit, where the 1 represents the presence of the ‘CP/M Monitor, for pageable ROM, 64k RAM, 68B50 ACIA, CF Module at 0x10, with origin at 0x0000’ in bank 4).

The additional modules worked first time, and following Spencer’s Simple Guide I found myself at the once familiar A> prompt.

The challenge then shifted to finding something useful or interesting to do with a CP/M machine; but practically the first obstacle was getting some CP/M programs onto my RC2014…

The compact flash card comes with a DOWNLOAD.COM utility on the A: drive, which comes from Grant Searle’s CP/M on Breadboard project and provides a way to copy files through the terminal emulator console. Usually I use PuTTY as my terminal emulator, but throwing bytes down a serial link at 115200bps without any hardware flow control is a recipe for disaster, so I switched to Tera Term and configured it for a 10ms per character delay to slow things down a little.

Grant’s archive contains a FilePackage.exe program for Windows that encodes CP/M programs and files into the format used by DOWNLOAD.COM, though I had to register COMDLG32.OCX on Windows 10, as simply having the control in the same directory doesn’t work for VB6 apps running on much newer Windows.
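The per-character delay trick generalises: with no flow control the only safe option is to pace the sender below the receiver’s worst-case processing rate. A minimal sketch (the write_byte callable is a stand-in for whatever serial interface is in use):

```python
import time

def send_paced(write_byte, data, delay_s=0.010):
    # 10ms per character keeps well under what the receiver can absorb
    # while it is also busy doing things like writing to storage
    for b in data:
        write_byte(b)
        time.sleep(delay_s)
```

It’s crude compared to proper RTS/CTS flow control, but when the far end is a single-tasking 8 bit machine it’s often the only option available.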

With DOWNLOAD.COM working it was time to plunder the archives for MBASIC and Zork etc. With the equivalent of something like 1600 80s 5.25″ floppy disks’ worth of space to play with, storage shouldn’t be a worry.

Having got used to multi-tasking, multi-user systems I keep wanting to bring up another terminal whilst it’s busy doing something boring like file copying; but of course CP/M is a single-tasking, single-user system, so it needs a bit of patience. Apparently it’s possible to run Unix on the RC2014, so maybe that’s a project for another day.

Notes

[1] Chelsea Back’s Further Debugging RC2014 and First Programs was also helpful.
[2] For things like FTDI cables the problem these days is finding vendors who don’t have compromised supply chains. At least FTDI have backed away from their scorched earth approach to clones.


I like to have permanent SSH connections from (a VM on) my home network to the various virtual private servers (VPSs) that I have scattered around the globe as these give me SOCKS proxies that I can use to make my web traffic appear from the US or the Netherlands or wherever (as mentioned in my previous post about offshoring traffic).

I’ve been using Ubuntu VMs since Jaunty Jackalope, and when I discovered AutoSSH I made myself some init scripts that would make the connections. Later on I modified those scripts to run in Screen so I could jump onto them if needed for any troubleshooting. That was all fine in the world before systemd, but with Ubuntu 14.04 LTS reaching end of life there’s no longer a pre-systemd choice for a mainstream distro[1]. So I’ve bitten the systemd bullet, and upgraded my VMs to Ubuntu 18.04 LTS.

Of course… my old init scripts didn’t just work. So I had to cobble together some systemd service units instead.

[Unit]
Description=AutoSSH tunnel in a screen
After=network-online.target

[Service]
User=changeme
Type=simple
Restart=on-failure
RestartSec=3
ExecStart=/usr/bin/screen -DmS tunnel1 /usr/lib/autossh/autossh \
-M 20020 -D 0.0.0.0:12345 [email protected]

[Install]
WantedBy=multi-user.target

The unit source code is also in a gist in case that’s easier to work with.

The unit can then be enabled and started with:

sudo systemctl enable autossh_screen.service
sudo systemctl start autossh_screen.service

Going through it line by line to explain what’s happening:

  • Description is a plain text explanation of what the unit is for. In my own I note which location the tunnels go to.
  • After is used to ensure the network is ready for making SSH connections
  • User defines which user the screen runs as, and should be changed to the appropriate username
  • Type simple tells systemd that we’re not running a forking process
  • Restart on-failure means that if screen crashes for some reason then systemd will try to restart it
  • RestartSec tells systemd to wait 3s before doing any restarts (so it doesn’t thrash too hard on something that keeps failing)
  • ExecStart gets us to the actual command that’s running…
    • /usr/bin/screen is the default location for screen on Ubuntu (installed with ‘sudo apt-get install -y screen’)
    • -DmS tunnel1 tells screen to Detach but not fork, force a new session, and name the screen ‘tunnel1’ (mine are named after where they go to so that when I resume those screens with ‘screen -r’ I can pick out which VPS I’m using)
    • /usr/lib/autossh/autossh is the default location for autossh on Ubuntu (installed with ‘sudo apt-get install -y autossh’)
    • -M 20020 configures the monitoring port for autossh – make sure this is different for each unit if you’re running multiple tunnels
    • -D 0.0.0.0:12345 gives me a SOCKS tunnel on port 12345 – again make sure this is different for each unit if you’re running multiple tunnels
    • [email protected] is the username and fully qualified hostname for the VPS I’m connecting to
  • WantedBy defines what we’d have previously considered the default runlevel (normal system start)
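Once the unit is running, a quick sanity check that the SOCKS listener is actually up can be scripted; a minimal sketch (the port matches whatever was set with -D):

```python
import socket

def socks_port_open(host="127.0.0.1", port=12345, timeout=1.0):
    # just checks that the listener is accepting TCP connections;
    # it doesn't validate the SOCKS handshake itself
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This is handy in a cron job or monitoring check, since autossh keeps the tunnel alive but says nothing if the unit never started in the first place.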

Although I’ve been using Ubuntu 16.04 and 18.04 to acclimatise to systemd for the past few years I’m by no means an expert, so it’s possible that I could have done better here. Should I have used the ‘forking’ type and stuck with -d rather than -D in the screen flags? I just don’t know. This was cobbled together with help from this autossh gist and this python in screen example I found.

Update 9 May 2019

For a good overview of systemd check out Digital Ocean’s Systemd Essentials: Working with Services, Units, and the Journal (and their other posts linked at the bottom of that overview). There’s more at my systemd Pinboard tag.

Note

[1] I’ve kicked the tyres on Devuan, but we didn’t get along.


All three of the major cloud service providers have (or have announced) ‘have your cake and eat it’ versions of their services where data resides on premises whilst stuff is managed from a control plane in the cloud.

All of these services are predicated on a notion that data needs to reside on premises, whilst at the same time providing a subset of the services available in the public cloud, using the same management interface and underlying APIs.

Server huggers gonna hug

We have to ask why organisations (or at least the people working in them) might think that they need to keep their data on premises, and there are essentially two reasons that come up time after time:

  1. Sensitivity – this label covers the plethora of security, privacy and regulatory related things that ostensibly get in the way of data being put into the ‘public’ cloud.
  2. Latency – this is for when the round trip from the customer’s on premises location to the cloud and back introduces unacceptable delay.

The latency argument is pretty clear cut

If the ms it takes to get data from your factory sensor to the cloud and back to the robot is too much then cutting that out by having the kit close by is clearly going to work. This is a good reason for adopting this type of hybrid model. Of course other hybrid models that deal with ‘edge’ compute are also available, so there are choices to filter through.
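A back-of-envelope way to see when the round trip is fatal (assuming light in optical fibre covers roughly 200km per millisecond, and ignoring all the queuing and routing overhead that makes real networks slower):

```python
C_FIBRE_KM_PER_MS = 200  # light in optical fibre is roughly 2/3 of c

def best_case_rtt_ms(distance_km):
    # physics floor only: real-world round trips are always worse
    return 2 * distance_km / C_FIBRE_KM_PER_MS

# a factory 1,000km from the nearest cloud region can never see better
# than ~10ms round trip, before any processing time at either end
```

If the control loop budget is tighter than that floor, no amount of network engineering helps, and on premises kit is the only answer.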

The sensitivity argument is much more murky

In principle there’s a clear separation of concerns between the data, which is sensitive, and that stays on premises; and the control plane metadata, which isn’t sensitive, and can happily go back and forth to that public cloud that we were unwilling to trust with our sensitive data.

In practice there’s an administrative level back door wired up from the kit hosting my sensitive data right into that public cloud that we were unwilling to trust with our sensitive data. Awkward. Of course we can spend some due diligence time picking over controls and monitoring, and some lawyer time picking over contracts to establish who gets blamed for what.

Things get much murkier if you ship logs

If the control plane is just about turning stuff on and off then we can claim a separation between control metadata (not sensitive) and app data (sensitive), and the lines around that claim stay pretty sharp and clean. But once we start throwing logging across that line it’s no longer sharp and clean, especially when we get to exception handling.

Exceptions contain things like stack traces, and stack traces have a nasty habit of carrying with them the in memory plain text of all that sensitive stuff you’ve so carefully encrypted at rest and in motion.

For sure developers can be asked to write code that doesn’t leak sensitive data to logs, and that’s just as easy to police as every other aspect of code security.
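One common mitigation is scrubbing log records before they leave the process; a minimal sketch using Python’s logging machinery (the pattern here is illustrative – real deployments need patterns matching whatever their sensitive data actually looks like, and regexes will never catch everything):

```python
import io
import logging
import re

CARD_LIKE = re.compile(r"\b\d{13,16}\b")  # e.g. payment-card-like digit runs

class RedactingFilter(logging.Filter):
    # rewrite each record's message before any handler ships it anywhere
    def filter(self, record):
        record.msg = CARD_LIKE.sub("[REDACTED]", str(record.msg))
        return True
```

Attached with logger.addFilter(RedactingFilter()), it runs before any handler, so even a cloud log shipper only ever sees the redacted text; but note that it does nothing for stack traces that print sensitive values from local variables.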

This can also become the province of ‘data loss prevention’ (DLP) technologies, though they’ve tended to focus on human driven channels like email and file sharing rather than system stuff like logs.

An approximation that emerges here is that if the data is so sensitive that it needs to be kept on premises then it’s likely also the case that the logs and any associated log management need to stay on premises too. Log shipping to take advantage of cloud based log management tools seems to puncture any clean line between sensitive app data that must be kept on premises and control metadata that can be allowed into the public cloud.

Conclusion

The latency argument for these data on premises, management in the cloud models stands up well to scrutiny; the sensitivity argument (which seems far more prevalent) isn’t quite so robust. It’s clear that the cloud service providers want to lure the server huggers in with a ‘have your cake and eat it’ model, but it’s less clear that the model is robust in the face of security, privacy and regulatory demands that customers insist can only be dealt with using on premises infrastructure. Of course the cloud service providers know this, and have chosen to launch these services anyway, so they must see some profitable middle ground.

Fundamentally the issue here is all about control. Do the server huggers just want control of their data, in which case these approaches might appease; or are they trying to hold onto control of the whole infrastructure?


I mentioned Swift Playgrounds in my Learning to Code post a few years back, but at that time I didn’t have a new enough iPad to try it for myself. That changed when I recently got an iPad Mini 5[1], so I’ve been running through the Learn to Code modules.

It’s like a puzzle game, that you solve with code

The setting is somewhat reminiscent of 2D puzzle games like Icicle Works or Chip’s Challenge, except that it’s 3D, and the character is moved with code rather than cursor keys.

Things start out very much in the vein of the turtle graphics that will be familiar to anybody who’s encountered Logo or Scratch. There’s the usual moveForward(), turnLeft(), collectGem() type stuff; but it’s not long before you’re writing functions, doing loops, using conditional and logical operations and building algorithms – and that’s just Learn to Code 1. Learn to Code 2 gets into variables, types and arrays, plus a bunch of object oriented (OO) stuff, though without ever using words like ‘class’ or ‘constructor’.

I’ve personally found going through the first couple of modules more like playing a game, and less like learning a new language. Partly this makes me feel like I’ve learned how to play Swift Playgrounds, but maybe not (yet) learned much (idiomatic) Swift.

It’s not perfect, but it does seem very very good

There are a few recurrent annoyances:

  1. Getting the cursor where you want it sometimes seems to be an ordeal, and I’ve often found myself having to scroll the screen up or down a little to get the cursor to appear at all.
  2. The keyword suggestions above the keyboard are generally all you need. Until you need the ! operator, at which point you have to pull up the full keyboard.
  3. If it thinks you want to enter an integer then you’ll be given the numbers only keyboard, so to put in anything except an integer requires a bit of cursor jiggling hoop jumping – if only there was a button for ‘give me the whole keyboard’.

There were a few places where I wondered if I’d have been able to make progress if I didn’t already know a bit of Java/C# and the general way that OO languages work.

Look closely, and you’ll see that I’m cheating this level

It’s also possible to cheat some of the tests that let you move onto the next level (a route I chose to take a few times when the levels felt like ‘grinding’: typing in stuff I’d already done on earlier levels, because it’s impossible to build a reusable library of functions, and too time consuming to copy/paste from previous work).

Beyond the annoyances noted above it seems to be an excellent way into programming and the logic behind it. The puzzle game element delivers regular dopamine hits, and the underlying syllabus has enough repetition to ensure that the key concepts are absorbed, but not so much that it becomes boring. Could you write a Swift program to solve a given problem after Learn to Code 1 & 2? Probably not[2]. Does it provide the right mental framework to underpin a programming mindset? Certainly yes.

A friend was visiting at the weekend to collect their $son0 who had been visiting my $son0 and said that their $son1 had expressed an interest in programming as a career (and computer science as an A Level choice), but she was concerned that they’d never actually done any programming and hence didn’t really know if they liked it or not. I suggested that Swift Playgrounds could be a good acid test – the puzzle game element might be a bit basic/childish for a gaming addicted teenager, but the programming element is deep enough to determine if they’re into it (or not).

I made that recommendation because Swift Playgrounds seems to nail the very thin middle ground between the instant gratification of gaming, where the expectations of an entire generation have been calibrated by Minecraft; and the inherently deeper challenges of learning to write and (more importantly) debug code (that my generation was forced into if we wanted to play games, as we had to type them into BASIC 8 bit computers from magazine printouts and library books).

You need a recent iPad

I was hardly surprised that Swift Playgrounds didn’t work on my ancient iPad 2 when I first heard about it, but it seems that recent versions only run on ARKit-capable models (A9 or later CPU). On one hand that probably means that Playgrounds can do funky stuff with robots and drones and augmented reality, but it also means that the older iPads people might want to hand down to their kids may be no use for learning to program this way, which is a shame.

Whether Swift Playgrounds is so good that it’s worth buying a new iPad for is a question my brother was scratching his head over as he left my place the other day (after my nephew had devoured the first few levels and had to be torn away).

There’s also a web version, which is not the same at all

Before I got my new iPad I’d come across the Online Swift Playground when I was trying to do some stuff with emoji flags[3]; but it’s very much an empty vessel for testing Swift code, and not a place to learn programming by playing a puzzle game.

Conclusion

I’m very impressed with Swift Playgrounds, and from everything I’ve seen it might just be the best way to get kids into a coding mindset. For anybody that’s already putting a reasonably recent iPad into the hands of their kids it’s something they should definitely try. Whether it’s worth buying an iPad for (or at least the deciding factor in getting an iPad versus something else) is a trickier question. I can only note that the £99 that my dad spent on a ZX81 would today buy a basic (32GB 9.7″) iPad and leave around £50 in change (at least according to this inflation calculator).

Notes

[1] The Mini 5 really is an impressive machine – I recently tweeted ‘Wow, no wonder the iPad mini 5 feels brisk, its single core @geekbench is faster than my desktop (Ryzen 5 2600) and my gaming laptop (i7-7700HQ)’. The only thing I’d change is the screen ratio (from 4:3 to 16:10) to make it small enough to fit in a jacket pocket like my 2013 Nexus 7 did.
[2] I’ve taken a glimpse at Learn to Code 3, which seems to move away from the puzzle game format and on to more open front end development, and of course Playgrounds itself is an open environment where anything that can be done with Swift can be tried out. So there’s lots there to fill the gap between the intro modules and real coding.
[3] But then I ran into the terrible emoji support for Windows 10, where I got ISO country letter pairs rather than actual flags :(


Towards the end of my recent trip to Pas de la Casa I realised that I was missing the telemetry I’d got from the Valnord App whilst skiing in Arinsal. A quick search suggested that the Ski Tracks app would be a good purchase for my iPhone and Apple Watch, so this is a reflection on the single day that I’ve used it so far.

It worked pretty much as expected

As I got booted up before the first lift I set the app in motion from my Apple Watch (Series 2), and as I glanced at it after a few runs it seemed to be recording runs as expected.

When I got to the end of the day and looked at the phone app I was surprised to see nothing there, but then it imported the day’s runs, presenting an overview:

a chart of speed and altitude:

and a breakdown of individual runs:

Faster earlier?

I was pleased that my day had been recorded, but a bit miffed about some inconsistency. When I looked at my watch after run 13, which had been a clear blast down Obagot III, it read 122.6km/h (76.2mph), so how did that become 64.3mph?

I now wish I’d done a screen grab of my Apple Watch.

Health Tracking

One of the main things I use my Apple Watch for is health tracking, and I’d been surprised how little activity got clocked up in my first few days. The watch was barely registering the hike from the hotel to the lifts and back.

Using Ski Tracks seems to have massively over-corrected things:

 

It seems that every minute I had the app running counted as exercise (even when it knew I was stood waiting for lifts or sat on them); and I’d clocked up a whopping 1700+ move calories – I should have felt much hungrier by the end of that day :0

Hopefully this stuff will get fixed in an update. I’d like my actual ski time to count as movement and exercise, but not all the waiting around for lifts etc.

It didn’t run down my batteries

A big promise for this app was not to run down batteries (which can happen quickly when GPS is in constant use), and it made good on that promise. My watch battery seemed to go a little quicker than usual, but my phone was still at around 60% at the end of the day’s skiing.


The group that I’ve been skiing with for the past few years wasn’t coming together this year due to some health issues and other commitments, so my daughter and I chose to return to Pas de la Casa, which we last visited three years ago (on the proviso that we could return to Hotel Les Truites, which fortunately still had space).

Getting there and back

I planned flights around the Andbus transfers arranged via Pasdelacasa.com. Unfortunately fog at Gatwick significantly delayed our departure, but thankfully the Andorra Resorts team were very responsive and rebooked us onto a later bus (though this turned out to be news to the driver, and we just made it onto the last two available seats on a packed bus). The trip back was also made fraught by protests for Catalan independence (though luckily I’d allowed stacks of time, which I’d planned on spending in the T2 lounge at Barcelona Airport – though sadly it’s been closed for months for ‘improvement works’).

The length of time spent on transfers (which essentially becomes a whole day getting there, and another getting back) is probably the main thing that would put me off returning to Andorra.

The skiing

Without such a large group getting around the Grandvalira area was pretty easy, and we spent a lot more time over in El Tarter and Soldeu, though the ‘funnel lift’ as we came to call ‘Cubil’ remained a significant and annoying bottleneck between one side of the area and the other. They really need something with more capacity at that point in the network.

With half term holidaymakers from the UK and France the entire area was busy, and though we found a handful of quiet lifts, all the primary routes were pretty choked up. I also can’t remember being on a trip where lifts stopped quite so frequently for bozos who can’t sit down and stand up again without falling over or some other buffoonery.

All 5 days that we skied were beautifully sunny, so there was no fresh snow. The pistes were in good shape at the start of each day, though things got a little piled up and soggy by the end. Our last day saw some of the runs we wanted to do closed.

Equipment

Having had a decent experience with Skiset last year in Austria I used them again this time and got a voucher for Surf Evasio 1. My plan to pick up gear on our day of arrival was thwarted by the delays, but when we pitched up on Saturday morning the place was busy but not too busy, and they did have stuff ready for us[1].

For boots they used a 3D scanning machine for fit, and the boots I got (Nordica brand) were the most comfortable I’ve ever had.

The ‘excellence’ skis they gave me were Head Supershape i.Magnum, and I didn’t get along with them. The bases were pretty scratched up, so they didn’t glide; and everything felt like hard work. So I took the shop up on their offer to swap, and when asked what I wanted different the answer was simply ‘faster’. They gave me some Nordica Dobermann SLC skis that were absolutely brilliant. Their target user profile of ‘groomed, expert, short turn, high speed’ suited me perfectly. I’d say it’s a close call between these and the Lecroix Mach Carbons I had on my last trip to Pas for best skis I’ve ever had.

My daughter was very happy with the Roxy Dreamcatcher 78s that she got (which were definitely faster than the Heads I wore on day one, but a little slower than the Nordicas I switched to).

Eats

Restaurant La Familia had been great last time, so we took the easy option (as it’s just downstairs from the hotel) and dined there again on our second and final nights. It’s now rated #1 restaurant in Pas, which I think is well deserved. On arrival we grabbed take away pizza from Oh Burger Lounge, which was good enough for us to return a couple of days later for the dining in experience.

A chance stop at the Hotel Nordic restaurant in El Tarter led to us returning there for three days in a row. The tapas portions are huge, tasty, and surprisingly inexpensive for a place that seems so posh. Having seen the salmon and avocado salad I had to go back to try it; and my daughter insisted on returning again the following day (it helped that she loved the run down Aliga). The hotel also has great WiFi.

Conclusion

A bit like returning to your old school, Pas (and Grandvalira) felt smaller, though there was still plenty of fun to be had. I left last time feeling like there was more to explore, whilst I left this time feeling like I’d exhausted the place. If I go back to Andorra I’d like to try Arcalis, though it doesn’t look like there’s enough there for a whole week.

Note

[1] For some weird reason they’d prepared a snowboard for me rather than skis, but as soon as the mistake was recognised they sorted things out, and the service was very friendly. I was less impressed that they somehow managed to give my daughter different sized boots (which we realised when one of them wouldn’t fit the bindings), but that mistake was also sorted quickly and efficiently.