Saturday was very rainy, so I thought I’d finally get around to upgrading my home lab from ESXi 5.5 to 6.5. I started with my #2 Gen 8 Microserver as I don’t have any live VMs on it, and thus began many wasted hours of reboot after reboot failing to get anything to work.

Slow iLO

The Integrated Lights Out (iLO) management on the Gen 8s is their best feature, and I was able to start out by mounting the .iso for ESXi 6.5 through the iLO remote console and rebooting the server into that to kick off the upgrade.

Sadly the first pass failed on one of the VIBs that ships as standard in the HPE ESXi 5.5 bundle (and isn’t used by the Microserver).

After that things spiralled into doom. It wouldn’t boot from the Virtual CD, then it wouldn’t boot from a USB stick, then it wouldn’t complete the upgrade of the internal USB stick I boot from, then it wouldn’t boot from a fresh USB stick.

All the while the iLO was really slow compared to my #1 Microserver.

I did all the usual stuff

Power cycles, iLO resets, BIOS updates, iLO firmware updates (which took ages).

I thought maybe the CPU was overheating and checked the thermal paste.

Maybe it’s a USB problem

I disconnected the internal boot USB stick (and checked it in my laptop).

I disconnected the external KVM (it’s not like I really need it with iLO) in case that was causing issues.

And still the iLO was slow, and USB boot wasn’t working.

At this stage I took a close look at the Active Health System Log, which was about 1GB.

After Googling ‘slow iLO’ I found a Reddit thread, ‘HP iLO4 very slow’, which suggested that flash issues could cause problems, and that a giant AHS log file could be the cause of flash issues.

Reformatting iLO Flash

Perhaps I should have just cleared the log, but I instead went for reformatting the iLO NAND.

Curiously the iLO Self-Test wasn’t reporting a problem with the embedded Flash/SD_CARD, so I wasn’t able to do things the easy way from the (now v2.7.0) iLO web interface. I had to download the HP Lights-Out Configuration utility and feed it a lump of XML to send over to the iLO.

HPQLOCFG.exe -f Force_Format.xml -s iLO_IP -u administrator -p mypassword

with Force_Format.xml along these lines:

<RIBCL VERSION="2.0">
<LOGIN USER_LOGIN="administrator" PASSWORD="mypassword">
<RIB_INFO MODE="write">
<FORCE_FORMAT VALUE="all"/>
</RIB_INFO>
</LOGIN>
</RIBCL>

How did this happen?

I still don’t know. I’d love to read the AHS logs, but the tools to do that live behind a redirect loop on the HPE web site, and may require an active support contract.

Perhaps it’s because the server was powered down, but with the iLO still running, for a few years?

The iLO is supposed to be out of band, and so it shouldn’t affect things like the host USB bus, but I’m guessing that a few corners were cut to keep down the cost of adding iLO to the Gen 8 Microserver. I’m also guessing that decision hasn’t impacted many users; although a lot of those machines went into home labs, this doesn’t seem to be a common problem.

Reformatting worked

After the reformat and a reboot of the iLO it was back to snappy performance. Better still, USB storage started working again, so I could finally do that ESXi upgrade.

If you’re here for my experiments in culinary science move along swiftly, this post isn’t for you. This is all about enterprise architecture versus cloud native architecture.

RDBMS is a meatball

Enterprises use (or at least have used) Relational Database Management Systems (RDBMS), and such things have become deeply embedded into the organisation and culture around maintaining ‘books and records’ of the firm. Something I’ve previously labelled the ‘cult of the DBA’[1].

Enterprise meatballs don’t scale

RDBMS are generally limited by ‘the biggest box money can buy’. That hasn’t been entirely true since the advent of Oracle Real Application Clusters (RAC), but by then many of the norms of RDBMS use were well established. The story goes roughly like this.

You might have a business need for some of my data, but you can’t use my database because my application is already running the biggest box money can buy to the ragged edge of its performance envelope[2]. Get your own RDBMS with its own box, and I’ll give you a copy of the data with an Extract Transform Load (ETL) job.

and so we get spaghetti

RDBMS to ETL to RDBMS to ETL to RDBMS to you get the drift… Meatballs and spaghetti, spaghetti and meatballs.

It quickly gets messy. The worst part is the T in ETL, because the shape and naming keep changing as data gets re-purposed for different uses.
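As a toy illustration (all field names invented here), each hop re-shapes and renames the same record for its own consumer:

```python
# A sketch of why the T in ETL hurts: every hop re-shapes and renames
# the same data, so vocabularies multiply across the organisation.
source_row = {"yield_curve_id": 7, "tenor_yrs": 10, "rate_pct": 4.25}

def etl_hop_risk(row):
    # The risk team wants basis points and its own naming scheme.
    return {"CurveId": row["yield_curve_id"],
            "TenorYears": row["tenor_yrs"],
            "RateBps": row["rate_pct"] * 100}

def etl_hop_reporting(row):
    # Reporting flattens everything to strings with yet another scheme.
    return {"curve": str(row["CurveId"]),
            "tenor": f'{row["TenorYears"]}Y',
            "rate_bps": row["RateBps"]}

# Two hops later the same fact has three shapes and three vocabularies.
```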

What changes in the cloud?

Sticking with the metaphor, the cloud has infinitely large meatballs[3], so no need for spaghetti any more.

‘Cloud scale’ architecture liberates us from ‘the biggest box money can buy’ because the clouders found ways to scale horizontally. This is largely achieved by throwing off the shackles of ‘relational’, though we get to keep that if it’s really needed, and we can still use SQL too if that’s useful.

This simplifies things greatly. Every app can rally to the same source of truth, and ‘master data management’ boils down to the good management of one giant database rather than the cat herding exercise of figuring out how you got 122 different ways of describing ‘yield curve’.

This does not map well to present enterprise organisations

Meatballs, and the monoliths built on top of them, fit super snugly into traditional organisation structures (the purpose, boundaries and budgets for each siloed function). The spaghetti that wired everything together then became a cultural norm (how we do things around here).

Enterprise adoption of cloud native data management might hold the promise of greatly simplifying everything, but will be fought every step of the way as it cuts across the organisation structure and culture that evolved around it.

If (as Adrian Cockcroft says) ‘DevOps is a reorg’ then this is the same. Somehow ‘cloud data management is a reorg’ sounds less catchy. It should probably happen alongside the DevOps reorg anyway.


[1] See NoSQL as a governance arbitrage
[2] This is usually somewhere between a small fib and a massive lie. The biggest box that money can buy has been bought in the anticipation of many things that might affect capacity management over time, including how long it takes to get approval to buy anything. But the lie is told anyway because who wants to worry about another group’s capacity needs (or worse still setting up an internal charge back for their usage)?
[3] Not actually true, but in the real world you’ll run out of money before they run out of capacity.


Decision making is at the heart of an organisation’s purpose, but it’s rare to see much effort being spent on improving the quality of decision making, and typical to see all decisions mired in time consuming bureaucratic process. We can do better, with a little coarse filtering, some doctrine and situational awareness, and a bias towards tightening feedback loops.


Over the past few months this topic has come up in a few different places for me. First there was Sam Harris’s ‘Mental Models’ podcast conversation with Farnam Street[1] blog founder Shane Parrish. Then there was Dominic Cummings’[2] epic[3] ‘High performance government, ‘cognitive technologies’, Michael Nielsen, Bret Victor, & ‘Seeing Rooms’’. All against a background of daily tweets from Simon Wardley about his mapping, culminating in an excellent explainer video from Mike Lamb.

Do good, or avoid bad?

My first observation would be that most organisations import the human frailty of loss aversion, and so the machinery of decision making (generally labelled ‘governance’) is usually arranged to stop bad decisions rather than to promote good decisions.

It’s also usual for the same governance processes to be applied to all decisions, whether they’re important or not. Amazon’s founder and CEO Jeff Bezos is a visible example of somebody who’s figured this out and done something about it. Bezos distinguishes between irreversible (Type 1) and reversible (Type 2) decisions. In his 2015 letter to shareholders he writes in a section headed ‘Invention Machine’:

Some decisions are consequential and irreversible or nearly irreversible – one-way doors – and these decisions must be made methodically, carefully, slowly, with great deliberation and consultation. If you walk through and don’t like what you see on the other side, you can’t get back to where you were before. We can call these Type 1 decisions. But most decisions aren’t like that – they are changeable, reversible – they’re two-way doors. If you’ve made a suboptimal Type 2 decision, you don’t have to live with the consequences for that long. You can reopen the door and go back through. Type 2 decisions can and should be made quickly by high judgement individuals or small groups.

As organizations get larger, there seems to be a tendency to use the heavy-weight Type 1 decision-making process on most decisions, including many Type 2 decisions. The end result of this is slowness, unthoughtful risk aversion, failure to experiment sufficiently, and consequently diminished invention.[4] We’ll have to figure out how to fight that tendency.

And one-size-fits-all thinking will turn out to be only one of the pitfalls. We’ll work hard to avoid it… and any other large organization maladies we can identify.

Data driven decision making

If you torture your data hard enough, it will tell you exactly what you want to hear[5]

‘Data is the new oil’ has been a slogan for the last decade or so, and Google (perhaps more than any other organisation) has championed the idea that every problem can be solved by starting with the data.

Unfortunately data is just raw material, and data management systems (whether they’re ‘big’ or not) are just tools. Data driven decisions need the right data (scope), correct data (accuracy), appropriate processing and presentation, and a proper insertion point into the decision making process. The Google approach can easily become A/B testing into mediocrity; but most organisations don’t even get that far. They spend $tons on some Hadoops or similar and a giant infrastructure to run it on, then build what they hope is a data lake, which in fact is a data swamp, somehow expecting insight to squirt forth directly into the boardroom.

Strategy first, then data machinery to support that strategy, not the other way around.

Being agile

That’s a deliberate lowercase ‘a’.

Whether we’re learning from evolution or OODA loops we know that the fastest adapter wins. So a relatively high level decision that an organisation might commit to is being adaptive to customer needs.

Agility, agility, agility – we want to adapt to ever changing customer needs, which means we need Agile software development, which means we need an agile infrastructure… buy a cloud from us. The latter part is a jokey reference to behaviour I saw in the IT industry a few years back, and I think most players have now figured out that clouds don’t magically confer agility, and that you actually need to build something that provides end-to-end connectivity from need to implementation.

The point here is that you can’t just pick a single aspect of ‘agile’, like buying a cloud service, or choosing to do Agile development[6]. It has to be driven end-to-end. This means that leaders can’t just decree that somebody else (lower down) in the organisation will ‘do agile’; they have to get involved themselves, and the ‘governance’ processes need to be dragged along too.

The Wardley adherents following along will at this stage be struggling to contain:

But Chris, Simon says that Agile is only suitable for genesis activities, and we should use Lean and Six Sigma for products and commodities.

To which I respond, genesis is the only area of any interest. For sure products and services should be bought; and even if you’re a product or service company the only interesting things happening within those companies are genesis. It’s turtles all the way down for the Lean and Six Sigma stuff, and it’s not interesting for decision making because (by definition) we already know how to do those things and do them well.


Also don’t waste time on decisions that other people have already figured out the answers to. That’s what doctrine’s all about, and Mr Wardley has been kind enough to catalogue it for us. This is why he tells us that there’s no point in using techniques like Pioneer, Settler, Town Planner (PST) until doctrine is straightened out, because it’s like building without foundations.


Organisations function by making decisions, about Why, What and How, so it’s startling how bad most organisations are at it, and how easy organisations that get good at decision making find it to outpace and outmanoeuvre their competition (or even just the status quo for organisations that don’t compete). It’s also sad but true that some of the best brains for decision making are sat within investment funds, effectively throwing tips from the sidelines rather than getting directly involved in the game.

The first step is doctrine – don’t spend time and treasure on stuff that’s already figured out. The next step is to categorise decisions by their reversibility (which is inevitably a proxy for impact) and stream different categories through different levels of scrutiny. Then comes the time to focus on making good timely decisions in addition to avoiding bad decisions.


[1] Named after Warren Buffett’s residence in Omaha where he spends his time reading and thinking about how to steer the fortunes of Berkshire Hathaway.
[2] Cummings is a contentious figure for me. I despise what he did as the Campaign Director of Vote Leave (wonderfully portrayed by Benedict Cumberbatch in Channel 4’s ‘Brexit: The Uncivil War’); but I find that I must admire the way that he did it. He ran a thoughtful 21st century campaign against a bunch of half-hearted nitwits who clearly struggled to drag themselves out of the Victorian era. No wonder he won. I should also note that he’s disavowed what’s subsequently become of Brexit, as his vision and strategy have not been taken on by those now handling the execution.
[3] It seems every Cummings post is an example of ‘If I had more time, I would have written a shorter letter’. He’s obviously still keeping himself very busy.
[4] Bezos footnotes: “The opposite situation is less interesting and there is undoubtedly some survivorship bias. Any companies that habitually use the light-weight Type 2 decision-making process to make Type 1 decisions go extinct before they get large.”
[5] With apologies to Ronald Coase who originally said, ‘If you torture the data enough, nature will always confess’.
[6] For an excellent treatise on the state of Agile adoption I can recommend Charles Lambdin’s ‘Dear Agile, I’m Tired of Pretending’.

As we hit the second anniversary of NotPetya, this retrospective is based on the author’s personal involvement in the post-incident activities.

Continue reading the full story at InfoQ.

It turned out that my TMS9995 system had no modules in common with my CP/M system, as it’s using the ROM and RAM modules left over from the CP/M upgrade. All I needed was another backplane to be able to run both at once.

SC116 3 slot backplane

As the TMS9995 uses three modules (CPU, ROM and RAM) I wanted a 3 slot backplane[1], and it turns out that Steve Cousins makes just the thing with his SC116, which is available on Tindie. I ordered one yesterday, and it arrived today :)

It only took a few minutes to put together (in part because I left out all the bits I didn’t need).

Pi terminal server

It’s great to use my RC2014s from a serial terminal on my laptop or PC, but that means I can’t do anything with them when I’m away from home.

To get over that I’ve used the Raspberry Pi that’s sat on the UPS on my desk (my first original Model B) along with a couple of UART cables.

I wanted to get another FT232 cable, but the eBay supplier I used last time is away, so instead I ordered a pair of PL2303 based cables from Amazon. These turned out to work fine with my laptop, but not so great with the Pi, where I could only get one working at a time (due to power issues?). I also hit a somewhat well documented issue with (clone?) PL2303X chips where it’s necessary to run:

sudo modprobe -r pl2303 && sudo modprobe pl2303

The compromise I ended up with was the FT232 for the CP/M system using:

screen -S Z80 /dev/ttyUSB0 115200

and the PL2303 for the TMS9995 using:

sudo minicom -D /dev/ttyUSB1

This means I can now SSH to my Pi and connect from there to the two RC2014 systems.
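One wrinkle with a pair of USB serial cables is that /dev/ttyUSB0 and /dev/ttyUSB1 depend on enumeration order, so they can swap after a reboot or replug. A udev rule can pin stable names; this sketch assumes the usual FT232 (0403:6001) and PL2303 (067b:2303) vendor/product IDs, which are worth confirming with lsusb:

```shell
# /etc/udev/rules.d/99-rc2014.rules (a sketch - verify the IDs with lsusb)
# FT232 cable (CP/M system) -> /dev/rc2014-z80
SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", SYMLINK+="rc2014-z80"
# PL2303 cable (TMS9995 system) -> /dev/rc2014-tms9995
SUBSYSTEM=="tty", ATTRS{idVendor}=="067b", ATTRS{idProduct}=="2303", SYMLINK+="rc2014-tms9995"
```

After a sudo udevadm control --reload-rules and replugging the cables, screen and minicom can point at the symlinks rather than the ttyUSB numbers.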


[1] At some stage I may make a TMS9901 I/O module, and I’d also like to see if I can add a TMS9918 video adapter (like this one), so I might have to upgrade to 5 slots later on.

Playing Tetris on my Gigatron reminded me that I still had my original Game Boy in a box in the loft, which had last been out when my kids were too young to remember.

I found it in a sorry state – the screen cover had fallen off, and the case had turned orange. Sadly I forgot to take the ‘before’ picture, but it may have been even worse than the one pictured in this Reddit thread ‘Quite possibly the most yellowed DMG I have yet to come by’. Here’s what it looked like once finished:

Ersatz Retr0bright

The first problem I went after was the yellowing of the plastic. I found a couple of videos from ‘The 8-bit Manshed’ that provided a good overview of disassembly (part 1)[1] and plastic restoration (part 2). Having previously read the Wikipedia article on Retr0bright I knew that all I needed was some hair salon hydrogen peroxide (already on hand) and some sunlight[2].

I put the salon creme on with a brush, popped the parts into a zip lock bag, and left it out in the back garden. As can be seen above it came out nicely :)


The detached screen went back on with a dab of superglue (cyanoacrylate) on each corner – easy.

Battery terminals

These had picked up the usual green encrustation over the years. The removable ones went into vinegar whilst the case was whitening, and I sprayed the PCB mounted ones with contact cleaner and brushed them off.

When I put things back together it wasn’t firing up, as the PCB +ve terminal dome had corroded badly (which happened back when I used the Game Boy regularly).

The solution was to wrap some kitchen foil around the terminal, which now provides good conduction from the battery to the system.

Screen Lines

With everything nominally working again it was time for some Tetris, but that revealed some missing lines on the screen. It’s a common problem, and one with a well documented repair. One thing I did differently was to place a piece of scrap paper over the ribbon connection to the LCD and (briefly) run the soldering iron directly over that, which takes care of better heat transfer (as conduction beats convection) whilst avoiding direct contact between the ribbon plastic and the iron tip.

I can now get back to some of my old games

Including the 8×1 cartridge I picked up in Sham Shui Po’s ‘Golden Shopping Centre’ in ’92.

I think my favourite was Shanghai, or was it Bubble Bobble – they were all good, maybe apart from the Chinese language ones that I couldn’t figure out :/


[1] Easy when you have a tri-wing screwdriver, which I didn’t back in the day, but that became an essential when DSs were a feature in $son0’s life.
[2] The 8-Bit Guy’s ‘Adventures in Retrobrite – New techniques for restoring yellowed plastic!‘ provides a good run down of various alternative techniques.


Having got a TMS9995 system on RC2014 working as far as the monitor I still wanted to run BASIC and Forth. I’d not been able to get my pageable ROM card working, and the switchable ROM module is too small. But it’s too small because it’s designed to carve a 64K EPROM into 8K pieces. All I had to do was take the gating out of the A13 and A14 lines. That took a couple of jump leads and a solder blob on the back:

and disconnecting U2A:

I can now use the A15 jumper to choose between BASIC:

and Forth:

It’s all a bit hacky, but a lot less work than making my own ROM module from scratch.

I already wrote about my plans to get a TMS9995 running on RC2014, so this is the post about how I put together an RC2014 version of Stuart Conner’s TMS 9995 Breadboard or PCB System.


Taking a look at the TMS9995 pinout and the RC2014 module template it became clear that there was no easy way to place the CPU that would align well to the address bus, so I chose a position to one side of the module that lined up the clock pins and made the data bus reasonably easy. That placing also meant that I had plenty of space for the TMS9902 based serial port.


I went with the following mappings

TMS9995 : RC2014

A0-A15 : A15-A0 (as TI convention for numbering is reverse of industry norm)
Vss : GND
Vcc : 5v
NC : M1
Reset : Reset
CLKOUT : Clock
MEMEN : Mreq
D7-D0 : D0-D7 (same reversal as address bus)
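The A0-A15 and D7-D0 reversals above are just TI numbering its bus lines MSB-first; a tiny sketch (illustrative, treating line numbers as plain integers) makes the mapping explicit:

```python
# TI numbers bus lines from the most significant bit down (A0 = MSB),
# the reverse of the Z80/RC2014 convention (A0 = LSB).

def ti_to_rc2014(ti_line, width=16):
    """Map a TI-numbered address or data line to the RC2014 equivalent
    for a bus of the given width."""
    return width - 1 - ti_line

# TMS9995 A0 lands on RC2014 A15, and TMS9995 D7 (8-bit bus) on RC2014 D0.
```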


This whole thing came about because I found a stash of TMS99xx chips. So I already had the TMS9995 CPU, and TMS9902 asynchronous communications controller. I also knew that I’d be able to use my existing RC2014 RAM and ROM modules. With some 74LSxx TTL chips and pull up resistors on hand, all I needed were some sockets, a crystal, and a couple of capacitors, which I ordered from Bitsbox (rather than lots of little eBay orders).

The whole thing went onto an RC2014 prototype module. I chose to put the CPU and serial together on one module (as I only had one lot of right angle edge connector).

Clock check

Before getting too far into the build I checked that I could get a 3MHz clock out of my TMS9995 on breadboard:

All was well:


I knew that the address bus would be tricky, so I got that out of the way first:

Next up was the data bus and clock, which got me to a stage where I could test the clock again:

After that I found space for the serial parts. It turned out that I could mount the TMS9902 right by the CPU to take advantage of its use of A10-A14 with some simple bridges. The 74LS138 also sat nicely in reach of A7-A9 using the space remaining over the power lines. That just left me needing to place the 74LS04 inverter for CRUCLK (which Stuart told me was needed after TI changed things between the TMS9900 and TMS9995)[1].

Here’s the finished board from the front:

and the back:

It didn’t run first time :(

and it took me ages to track down the short between A9 and the /CE on the 9902. Eventually I realised that Stuart’s debug test images were working without the 74LS138 at which point I found the bug.

I then had the delight of running Stuart’s third test image and seeing a screen full of ASCII characters :)

ROM wrangles

I expected Stuart’s 32k images with EVMBUG and BASIC/FORTH to work in the RC2014 pageable ROM module with the jumpers set for a 32k page size and A15-0 for the lower page, but that hasn’t worked for me.

I have however succeeded in getting EVMBUG running from the switchable ROM module, which allowed me to run HELLO WORLD from the monitor:

I suspect I’m going to have to knock up my own 32k ROM module to get BASIC working.


I’d have liked to try Cortex BASIC, but getting HELLO WORLD running from the monitor will have to do for now.


[1] It seems that a couple of the TMS9901 that I have are the -95 versions that should work with a TMS9995 without inverting the CRUCLK, but none of the 9902s :/

This is the blog version of a Twitter conversation with my colleague Graham Chastney.

Huawei, and the war on trade

POTUS #45 has been pursuing a ‘trade war’ with China, as this appears to be popular with his base, even though it makes stuff more expensive for them and will ultimately harm the US economy. It’s not really a trade war, more a war on trade.

The latest target in that war is Huawei. First came a US export ban, then Google pulled their access to Android licenses and the underlying Google services and now ARM is breaking away.

This leaves us asking what happens to a mobile phone maker that relies on ARM for hardware, and Google for software and services when ARM and Google are put into a position that forces an end to those business relationships?

The direct and immediate consequence seems clear – Huawei’s mobile phone business (at least in the West) is toast. But let’s look at the longer term (unintended) consequences, and particularly the role played by open source, and how regulators might try to react.

This is great for RISC-V

If Huawei can’t work with ARM then the clear alternative will be the open source RISC-V Instruction Set Architecture (ISA).

Huawei is a Gold member of the RISC-V Foundation

RISC-V has been making great strides forward over recent years with industry giants like Western Digital and Nvidia jumping on board. But they’ve been nibbling into the bottom of ARM’s world with the low spec microcontrollers that do housekeeping stuff on hard drives and graphics cards.

A deep pocketed and cornered industry behemoth like Huawei can now drive a full on assault at the top of ARM’s market to get RISC-V a place in the high end Systems on Chip (SoCs) used in mobile phones.

But bad for Western RISC-V specialists

As they’ll be cut out of the action, with all the work being done in Shenzhen rather than Cambridge.

The software part has already played out

In China, Android as we know it in the West isn’t a thing. The phones being used on the Chinese side of the Great Wall might ostensibly use Android, because they’re based on the Android Open Source Project (AOSP), but they don’t use Google apps or the underlying services. So cutting Huawei off from Google’s Android licenses cuts them off from Western consumers, but doesn’t impact their domestic market at all.

We have however seen this show before (without the populist politics). Amazon sells Fire tablets, which are Android without Google services. This saves Amazon from paying Google a license fee.

Many people install Google apps and services onto their Fire devices, and both companies do nothing to prevent this. Google doesn’t mind losing out on a license fee if they still get all that tasty customer data, and maybe even some sales in the Play store. Amazon doesn’t mind people using Gmail on a device that’s pushed up their volume production economics, and likely also pulled along a Prime subscription and maybe some Kindle sales.

Huawei could very easily sell AOSP devices into the Western market where customers help themselves to Google apps and services. Google’s hands would be clean as they wouldn’t be taking a license fee or helping Huawei in any way.

What I don’t see happening here is Huawei trying to build a portfolio of services and associated apps to appeal to those of us West of the Great Wall. Amazon didn’t bother, because it would take $Billions to build the platforms and establish the customer intimacy.

So what happens when the US Federal Government tries to cut Huawei off from RISC-V and AOSP?

Both of these projects originate from the West, and so it’s conceivable that Western governments will feel a sense of ownership over them. Furthermore it’s conceivable that Western governments will see open source as a loophole around export controls – a loophole that has to be closed.

It is of course completely impractical to prevent the export of open source. It’s inherently a globalised phenomenon. But this might be one of those times when politics tries to trump practicality. We’ve seen this show before too with the export controls on cryptography, or the more recent statement by Australia’s Prime Minister that ‘the laws of Australia will trump the laws of mathematics’.

“Open source, apart from you”, as Graham puts it, may be where the politicians and the agencies under them want to go. But making that real will rapidly play out first as tragedy then as farce, as an endless game of whack-a-mole ensues in a futile attempt to stop Unicode characters from crossing international borders.

It is sadly a losing game that governments will feel obliged to play, because the alternative is to accept that open source is a transnational power that transcends the power of national government; and of course that’s an alternative that’s unacceptable to a national government.

So I predict that the US Government will huff and puff and try to blow the house down, and they’ll drag the Five Eyes allies along under the banner of ‘intelligence’ protection, and then the rest of the West under the banner of ‘trade’. We’ve seen this show before too with the hapless ‘war on drugs’, where we now have the ridiculous situation where many US states have legalised cannabis whilst most of the West keeps it unlawful because the US told them to (and that’s still the Federal government position).


The US Federal government is on the brink of cornering itself into a war on open source (which will become a side skirmish to the war on general purpose computing), and it’s going to get very messy and very silly.


I tag a bunch of stuff on this blog, so if you want to read more about RISC-V, open source hardware or the war on general purpose computing, click on the tags.

Update 22 Jun 2019

It seems I’m not the only one thinking along these lines. Bunnie Huang has blogged ‘Open Source Could Be a Casualty of the Trade War’, which also has a thread on Hacker News.

Update 25 Jul 2019

Alibaba has announced a 2.5GHz 16 core RISC-V chip and Stewart Randall at Technode has published ‘China’s chipmakers could use RISC-V to reduce impact of US sanctions’.

After building my RC2014 CP/M system and Gigatron I decided to dig through the pile of old integrated circuits I stashed away before leaving home thirty years ago. I don’t remember the source, but it seems that I stripped down 4 systems (which might have been something like data loggers) that had a reasonably complete set of TMS 9900 series components. So I have CPUs, serial I/O, programmable systems interfaces and more at hand (though kits with all the parts are available inexpensively from eBay).

There is already a design that I can adapt

Stuart Conner has published plans for a TMS 9995 Breadboard or PCB System, which I think will work well as the basis for an RC2014 build. The RC2014 backplane carries power, a 16 way address bus, an 8 way data bus, and the various signal lines I’ll need.

The TMS 9995 memory layout and memory access scheme isn’t identical to the Z80, so I don’t think I’ll be able to use the standard RAM and ROM modules, even though they use the same glue chips as Stuart’s design.

Stuart doesn’t just have the hardware design; he also provides some firmware, and even some debug firmware. My plan is to drop it onto a W27C512 EEPROM[1] and I’ll then be able to use a jumper on the top address line to switch between BASIC and Forth.

The other major change I’m planning is to ditch the RS232Max, as I can just use TTL levels and an FTDI adaptor.

Getting it onto RC2014

I’ve ordered a few RC2014 prototype modules, and my general plan is:

  • One module for CPU, clock and serial too if I can squeeze it on.
  • One module for ROM, with jumper select for BASIC/Forth
  • One module for RAM (which I might just hack from my now unused RAM module since I did the CP/M upgrade)

That should leave me with a spare prototype board to lash up some sort of blinkenlights I/O board from one of the TMS9901NL I have at hand.

Given that I already have CPU, serial, EEPROM and most of the glue chips that just leaves me needing some chip sockets, a crystal and a few pullup resistors and capacitors. I’ll also get a fresh RAM, as I don’t want to mess with my working CP/M setup.

Update 20 May 2019

After a bit more noodling on the memory arrangements I’m now pretty sure that a regular RC2014 32k RAM module will work along with the pageable ROM module. It seems that Stuart (9995 breadboard design) has gone for switching the /CE signal whilst Spencer (RC2014) has gone for switching the /OE signal, but I think both approaches are equally valid. I’m using the following signal lines as equivalent:

  • Z80 : 9995
  • RD : DBIN
  • WR : WE
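Why either choice of switched line works comes down to how a ROM’s control pins combine: the chip only drives the data bus when both chip enable and output enable are asserted. A toy model (not taken from either schematic):

```python
# Toy model of a ROM's control pins (active low): the chip only drives
# the data bus when chip enable AND output enable are both asserted.

def rom_drives_bus(ce_n, oe_n):
    """Return True when the ROM's outputs are active (both lines low)."""
    return ce_n == 0 and oe_n == 0

# Gating the select logic into /CE (Stuart's approach) or into /OE
# (Spencer's approach) has the same effect: the ROM only speaks when
# both lines agree.
```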

I now need to figure out why Stuart uses an inverted WE/CRUCLK into the 9902 for serial. Maybe I should just ask him[2].


[1] I burned the K0001000 factory ROM image onto a W27C512 EEPROM to replace the supplied 27C512 one time programmable (OTP) EPROM, and it’s working perfectly in my CP/M system. As I ordered a 3 pack from eBay that leaves me with a couple to play with.
[2] I did, and the answer is that due to differences between how the TMS9900 and TMS9995 handle clocks it’s necessary to invert the clock. Apparently there are TMS9902-95 ICs out there that don’t need this, but not in the bunch I have.