T-Shirt Sizes and the Copycloud
TL;DR
T-Shirt sizes are frequently used to create the VM types and cost structure for private clouds, but if the sizing isn’t informed by data this can lead to stranded resources and inefficient capacity management. It’s the antithesis of dynamic capacity management, where every VM is sized according to the resources it actually consumes, ensuring that as much workload as possible fits onto the minimal physical footprint.
Background
A little while ago I wrote about Virtual Machine Capacity Management, and I won’t repeat the stuff about T-shirt sizing and fits. That post was aimed at the issues with public clouds. This post is about what happens if the idea of T-Shirt sizing is applied to private environments.
Copying T-shirt sizes is one of the ways that private clouds pretend that they’re like public clouds, but it’s a move that throws away the inherent flexibility that’s available to fit allocations to actual usage and make best use of the physical capacity. It’s worth noting that the problem gets exacerbated by the fact that those private clouds generally lack the T-shirt ‘fits’ found in the different instance type families of public clouds. A further complicating factor is that T-shirt sizes are often arbitrary, aligned to simple units rather than being based on practical sizing data.
A look at the potential issue
If we size on RAM and say that Medium is 1GB and Large is 2GB then any app running (say) a 1GB Java VM is going to need a Large even though in practice it will be using something like 1.25GB RAM (1GB for the Java VM + a little overhead for the OS and embedded tools). In which case every VM is swallowing 0.75GB more than it needs – effectively we could do buy 5 get 3 free.
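To make the arithmetic concrete, here’s a minimal Python sketch of the example above (the sizes and the 1.25GB working set are the illustrative numbers from this post, not data from any real estate):

```python
# Hypothetical unit-based T-shirt sizes (GB of RAM) and the actual
# working set of a VM running a 1GB Java heap plus OS overhead.
SIZES_GB = {"Small": 0.5, "Medium": 1.0, "Large": 2.0}
ACTUAL_GB = 1.25

def smallest_fit(actual_gb, sizes):
    """Return the smallest T-shirt size the workload fits into."""
    return min(
        ((name, gb) for name, gb in sizes.items() if gb >= actual_gb),
        key=lambda item: item[1],
    )

size, allocated = smallest_fit(ACTUAL_GB, SIZES_GB)
stranded = allocated - ACTUAL_GB
print(f"{size}: allocated {allocated}GB, used {ACTUAL_GB}GB, stranded {stranded}GB")

# 5 Larges hold 10GB, which would fit 10 / 1.25 = 8 right-sized VMs --
# the 'buy 5 get 3 free' effect described above.
print(f"Right-sized VMs per 5 Larges: {5 * allocated / ACTUAL_GB:.0f}")
```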
Could a rogue app eat all my resources?
The ‘rogue app’ thing is nothing but fear, uncertainty and doubt (FUD) to justify not using dynamic capacity management. Such apps would be showing up now as badly performing (due to exhausting the CPU or RAM allocations that constrain them), so if there were potential rogues in a given estate they’d be obvious already. Even if we do accept that there might be a population of rogues, if we take the example above we’d have to have more than 3/8 of the VMs being rogue to ruin the overall outcome, and that’s a frankly ridiculous proposition.
Showing a better way
The way ahead here is to show the savings, and this can be done using the ‘watch and see’ mode present in most dynamic capacity management platforms. This allows for the capture of data to model the optimum allocation and associated savings – so that’s the way to put a $ figure onto how arbitrary T-shirt sizing steals from the available capacity pool.
T-Shirts can be made to fit better
If T-Shirts still look desirable to simplify a billing/cost structure then the dynamic capacity management data can be used to determine a set of best (or at least better) fit sizes rather than unit-based ones. So, returning to the example above, Medium might be 1.25GB rather than 1GB, and then every VM running a 1GB Java heap can fit in a Medium.
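As a minimal sketch of how sizes might be derived from data rather than round units (the observations, percentile choices and headroom below are all made up for illustration), T-shirt sizes could be proposed from observed per-VM peak usage:

```python
import statistics

# Hypothetical per-VM peak RAM observations (GB) captured during a
# 'watch and see' exercise -- illustrative numbers, not real data.
observed_gb = [0.6, 0.7, 1.2, 1.25, 1.25, 1.3, 2.4, 2.5, 3.8]

def data_driven_sizes(observations, percentiles=(50, 80, 95), headroom=1.1):
    """Propose Small/Medium/Large from usage percentiles (plus a little
    headroom) rather than from round units like 1GB and 2GB."""
    cut_points = statistics.quantiles(observations, n=100)
    names = ("Small", "Medium", "Large")
    return {
        name: round(cut_points[p - 1] * headroom, 2)
        for name, p in zip(names, percentiles)
    }

# Sizes now track what workloads actually use, so fewer VMs get bumped
# up a whole size for the sake of a fraction of a GB.
print(data_driven_sizes(observed_gb))
```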
So why do public clouds stick to standard sizes?
Not all public clouds force arbitrary sizing – Google Compute Engine has custom machine types. It’s also likely that the sizes and fits elsewhere are based on extensive usage data to provide VMs that mostly fit most of the time.
That said, one of the early issues with public cloud was ‘noisy neighbour’, and so as IaaS became more sophisticated the instance types became about carving up a chunk of physical memory so that it was evenly spread across the available CPU cores (or, at the finest grain, hyperthreads).
Functions as a Service (FaaS) (aka ‘Serverless’) changes the game by charging for usage rather than allocation, but it achieves that by taking control of the capacity management bin packing problem. Containers as a Service models have so far mostly let the cost structure of the underlying VMs show through, but as they get finer grained it’s possible that a model that charges somewhere between allocation and usage might emerge.
Conclusion
T-Shirt sizes are a blunt instrument versus the surgical precision of good quality dynamic capacity management tools. At their worst they can lead to substantial stranded capacity and corresponding wasted resources. There can be a place for them to simplify billing, but even then finer grained capacity management leads to finer grained billing and savings.
Filed under: cloud
Tags: capacity, sizing, T-shirt, VM
My MBA
TL;DR
I started out doing an MBA as it felt like a necessary step in pushing my career forward. The most important part was getting my head around strategy, and by the time that was done the journey had become more important than the destination.
The motivation
I’d not been working at Credit Suisse[1] for long when I saw a memo announcing the Managing Director selection committee. It had mini bios for each of the committee members – the great and the good of the company – and as I read through them it was MBA, MBA, MBA, PhD in Economics, MBA… If my memory serves me well there were two people who had neither an MBA nor a PhD in Economics (and a couple who had both). I took it as a clear signal for what it took to get along in that environment. Much as I love economics from my time taking it as a high school option, I didn’t have the stomach for 9 years of study to get a (part time) PhD, so an MBA was the obvious choice.
Years later…
I found myself in San Francisco for the first time. I’d reached out to John Merrells, as at the time he was one of the few people I knew in the area. John had double booked himself for the evening we’d planned to get together, so he invited me along to a CTO dinner with the promise of meeting some interesting people. I barely saw John that evening, and instead found myself at a table with Internet Archive founder Brewster Kahle and soon-to-be Siri creator Tom Gruber. At some stage the conversation turned to how MBAs were destroying corporate America, and there came a point where I mentioned that I’d done an MBA. Brewster rounded on me asking:
Why would a smart guy like you waste your time doing THAT?
The best response I could think of at the time was:
Know your enemy.
Picking a course
A career break to do a full time MBA pretty much never crossed my mind. I’d only just left the Navy, just left the startup that I’d joined after the Navy, and my wife was pregnant. I had to put some miles on the clock in my new IT career, and keep earning. I was however spending three evenings a week away from home in London, and I had 4 longish train rides every week, which presented some good chunks of ‘spare’ time for study.
The Open University course was the first that I looked at, and although I did my due diligence in looking at alternatives it was the clear winner. It was inexpensive, accessible, and (at the time) the only distance learning course with triple accreditation. It was only later that I properly came to appreciate just how forward thinking much of the syllabus was – pretty much the opposite of the backward looking formulaic drivel that the San Francisco CTOs complained about.
Foundation
My first year was spent on a ‘foundations of senior management’ module that introduced marketing, finance, HR and change management. A key component of the course was to contextualise theory from the reading materials in the everyday practice of the students; so many of my Tutor Marked Assessments (TMAs) were about applying course concepts to the job I was in.
In retrospect I found this course interesting (especially the stuff on marketing) but not particularly challenging; the hard part came next.
Learning to think strategically is hard
The next module was simply called Strategy, and was the only compulsory module after the foundation. It was also by far the most important part of the course, as doing it (and especially the residential school component of it) literally made me think differently, and it was a tough slog to do that.
We spent a good deal of time looking at stuff from Porter and Mintzberg and Drucker, but the point wasn’t to latch onto a simple tool and beat every problem down with it. The point was to critically evaluate all of the tools at hand, use them in context when they were useful, and see the bigger picture. If only we’d had Wardley maps to add to the repertoire.
One of the things that we spent a lot of time on was the shaping of organisation culture. My preferred definition of culture, ‘the way we do things around here’, comes from that course, and the TMA I’m most proud of from the whole course was ‘Can Mack ‘the knife’ carve CSFB a new culture’, where I looked at the methods, progress and risks involved in John Mack’s attempts to change the organisation I worked for[2].
Options
The second half of the course was three six-month optional modules. I picked finance, technology management and knowledge management. The first two were frankly lazy choices given that I was a technology manager working in financial services, but I still learned a good deal in both. Knowledge management was more of a stretch, and made me think harder about many things.
Along the way, the journey became more important than the destination
I’d embarked on my MBA for the certificate at the end, but as I completed the course it really didn’t matter – I didn’t bother going to a graduation ceremony.
From a learning perspective the crucial part had been getting half way – completing the strategy module.
From a career perspective just doing the course had pushed me into a new place. I’m forever grateful to Ian Haynes for his support throughout the process. He signed off on CSFB paying the course fees, and allowing me time off for residential schools[3]. More importantly though, he put opportunity in my way. When our investment bankers asked him to join a roadshow with London venture capital (VC) firms his answer was, ‘I’m too busy – take Chris instead’. This wasn’t a simple delegation – I was three layers down the management/title structure. Through the roadshow and various follow ups I got to know the General Partners at most of the top London VCs, our own tech investment banking team, and a large number of startup founders.
The thing that you miss out on by not going to a ‘good school’
Since finishing my MBA I’ve got to spend time at some of the world’s top business schools – LBS, Judge, Said, Dartmouth, and got to meet the people who studied there and talk about their experiences. Without fail they talk about ‘the network’ – the people they met on the course and the alumni who continue to support each other. The cost in time and treasure is largely about joining an exclusive club. Of course the Open University has its own network and alumni, but it’s not the same.
That said – the opportunity cost of going to a top school (especially if it’s full time) is not to be underestimated. Perhaps it’s a shame that people generally only get to learn about discounted cash flow analysis once they’re on the course.
Looking back, the one thing I’d change would be starting earlier. I had time on my hands whilst at sea in the Navy that could have been spent on study that was instead frittered away on NetHack ascensions.
Did ‘know your enemy’ help?
Absolutely yes.
I’ve spent pretty much the whole of my career as the ‘techie’, often treated with scorn by ‘the business’ – whether that’s Executive Branch officers in the Navy, or traders and bankers in financial services etc. The MBA gave me the tools and knowledge to not be bamboozled in conversations about strategy or finance or marketing or change; and that’s been incredibly helpful.
Why am I writing about this now?
I didn’t start blogging until a few years after I’d completed my MBA, so it’s not something that I’ve written about in the past (other than the occasional comment on other people’s posts). My post on Being an Engineer and a Leader made me think about how I’ve developed myself more broadly, and the MBA journey was a crucial part of that.
Conclusion
If you want to level up your career then a part time MBA is a great way to equip yourself with a broad set of tools useful to a wide variety of business contexts. Even if you don’t ultimately choose to change track or break through a ceiling, the knowledge gained can lead to a more interesting, well rounded and fulfilling career.
Notes
[1] Then Credit Suisse First Boston (CSFB).
[2] To misquote Scooby Doo – he would have gotten away with it if it wasn’t for those damn Swiss. Most strong cultures in company case studies come from founders (HP, Apple etc.) but Mack had nearly pulled off a culture change at CSFB when the board pulled the plug on him for suggesting a merger with Deutsche Bank. He’d even figured out a way to deal with the high roller ‘cowboys’ I fretted would take off once the good times returned to banking with the creation of the Alternative Assets Group (essentially a hedge fund within the bank where the ‘cowboys’ would get to play rodeo without disrupting the rest of the organisation).
[3] I think the bank got a good deal – for the same cost in money and time as sending me on a week long IT course each year they got a lot more effort and learning.
Filed under: culture
Tags: MBA, strategy
Being an Engineer and a Leader
TL;DR
Leadership and management are distinct but interconnected disciplines, and for various reasons engineers can struggle with both. My military background means that I’ve been fortunate enough to go through a few passes of structured leadership training. Some of that has been very helpful, some not so much. Engineers want to fix things, but working with other humans to make that happen isn’t a simple technical skill as it requires careful understanding of context and communications – something that situational leadership provides a model for.
Background
A couple of things have had me thinking about leadership in the past week. First there was Richard Kasperowski’s excellent ‘Core Protocols for Psychological Safety'[1] workshop at QCon London last week. He asked us to think about the best team we’d ever worked in, and how that felt. Then there was Ian Miell’s interview on the SimpleLeadership podcast that touched on many of the challenges involved in culture change (that goes with any DevOps transformation).
My early challenge
I joined the Royal Navy straight from school, going to officer boot camp at Britannia Royal Naval College (BRNC) Dartmouth. I did not thrive there, and ran some risk of being booted out. On reflection the heart of the problem was that I was surrounded by stuff that was broken by design, and the engineer in me just wanted to fix everything and make it better.

A picture taken by my mum following my passing out parade at Dartmouth
I was perceived as a whiner, and whiners don’t make good leaders. The staff therefore had concerns about my leadership abilities[2].
One of Amazon’s ‘Leadership Principles‘[3] appears to be expressly designed to deal with this:
Have Backbone; Disagree and Commit
Leaders are obligated to respectfully challenge decisions when they disagree, even when doing so is uncomfortable or exhausting. Leaders have conviction and are tenacious. They do not compromise for the sake of social cohesion. Once a decision is determined, they commit wholly.
Basically – don’t whine once the decision has been taken – you’ll just be undermining the people trying to get stuff done.
My opportunity for redemption
Because I was so bad at leadering, the Navy sent me for additional training on the Petty Officers Leadership Course (POLC) at HMS Royal Arthur (then a standalone training establishment in Corsham, Wiltshire). Dartmouth had spent a few days teaching ‘action centred leadership’ (aka ‘briefing by bollocking’) – it was all very shouty and reliant on position power (which of course officers have). POLC was four weeks of ‘situational leadership’, where we learned to adapt our approach to the context at hand. Shouty was fine if there actually was a fire, but probably not appropriate most of the rest of the time. I learned two very important lessons there:
- Empowerment – seeing just how capable senior rates (Navy terminology for NCOs) were, it was clear that in my future job as their boss I mostly had to get out of their way.
- Diversity – the course group at POLC was pretty much as diverse a group as possible in the early 90s Royal Navy. We had people from every branch, of every age, and at almost every level[4]. It was here that my best team ever experience referenced above happened, which in some ways is strange as we formed and reformed different teams for different tasks across the four week course. One thing that made it work was that we each left our rank badges outside the gate, as we were all trainees together. Over the duration of the course the diversity of backgrounds and experience(s) really helped to allow us as a group to solve problems together with a variety of approaches.
Putting things in practice
Years after POLC, when I made it to my first front line job, an important part of my intro speech to new joiners was, ‘my only purpose here is to make you as good as you can be’. This was very powerful: my department pretty much ran itself, I provided top cover and dealt with the exceptions, and my boss was able to concentrate on his secondary duties and getting promoted. Pretty much everybody was happy and productive, and along with Eisenhower’s prioritising work technique[5] I had an easy time (which became clear when it came to comparing notes with my peer group who in some cases tried to do everything themselves).
Influence and the (almost) individual contributor
It turns out that first front line job at sea with a department of 42 people was the zenith for my span of control in any traditional hierarchical management context. I’ve continued to be a manager since then, but typically with only a handful of direct reports and little to no hierarchical depth. In that context (which is pretty typical for modern technology organisations) it turns out that leadership can be much more important than management. Such leadership is about articulating a vision (for an outcome to be achieved) and providing the means to get there (which more often than not is about removing obstacles rather than paving roads).
This is where engineers can show their strength, because the vision has to be achievable, and engineers are experts in understanding the art of the possible. From that understanding the means to getting there (whether that’s paving roads or removing obstacles) can be refined into actionable steps towards achieving outcomes. The Amazon technique of working backwards codifies one very successful way of doing this.
The point here is that leaders don’t need the position power of managers to get things done, and that situational leadership presents a better model for understanding how that works (versus action centred leadership).
Conclusion
Engineers typically have an inherent desire to fix things, to make things better, and early on in their careers this can easily come across badly. An effective leader understands the context that they’re operating in, and the appropriate communications to use in that context, and situational leadership gives us a model for that. Applying the techniques of situational leadership provides a means to effective influence in modern organisational structures.
I was very lucky to be sent on Petty Officers Leadership Course – it was a career defining four weeks.
Notes
[1] Google have found that psychological safety is the number one determinant of high performing teams, so it’s a super important topic, which is why I wanted to learn more about the core protocols. This is also a good place to shout out to Modern Agile and its principles, and also Matt Sakaguchi’s talk ‘What Google Learned about Creating Effective Teams‘.
[2] In retrospect things would have been much easier if somebody (like my Divisional Officer) had just sat me down and said ‘this is a game, here are the rules, now you know what to do to win’, but it seems that one of those rules is nobody talks about the rules – that would be cheating.
[3] At this point it’s essential to mention Bryan Cantrill’s must watch Monktoberfest 2017 presentation ‘Principles of Technology Leadership’, where he picks apart Amazon’s (and others’) ‘principles’.
[4] Whilst ‘Petty Officer‘ is there in the course title attendees ranged from Leading Hands to Acting Charge Chiefs (who needed to complete the course to be eligible for promotion to Warrant Officer).
[5] When I first heard about this technique it was ascribed to Mountbatten (I’m unclear whether it was meant to be Louis Mountbatten, 1st Earl Mountbatten of Burma or his father Prince Louis of Battenberg) – the (almost certainly apocryphal) story being that as a young officer posted to a faraway place he’d diligently reply to all his correspondence, which took an enormous amount of time. On one occasion the monthly mail bag was stolen by bandits, leading to much worry about missed returns. Later it transpired that very few of those missed returns were chased up – showing that almost all of the correspondence was unnecessary busy work that could be safely ignored. It’s interesting to note here that POLC was created at the behest of Lord Louis Mountbatten and Prince Philip was one of the first instructors.
Filed under: CTO
Tags: engineering, leadership, psychological safety
Gemini – one week on
It’s been a week since I wrote about my first impressions, so here’s an update. Nothing about those first impressions has changed, but I’ve learned and tried a few more things.
Planet App Launcher
There’s a Planet button to the left of the space bar that brings up a customisable app launcher, and it’s pretty neat. The main thing I use it for is forcing portrait mode, but sometimes it’s also handy for apps.
RDP
I already wrote about how good the Gemini is for SSH, but it’s also great for Windows remote desktops (over SSH tunnels), because there’s no need to pop up a virtual keyboard over half the screen (and RDP definitely is a landscape mode app).
It’s a talking point
I was using my Gemini a fair bit during the couple of days that I spent at QCon London this week, and whilst some people studiously ignored it, and others had it pushed in their face because I’m so keen to show it off, there was a fair bit of ‘what’s that?’.
It does a good job of replacing my laptop
My laptop came out of my bag once during QCon (for me to attack something that I knew would need a desktop browser and my password manager) – in a pinch I could have used RDP for that, but since I did have my laptop with me…
The lesson here is that I’ll likely carry my laptop less.
The battery lasts well enough
I still don’t feel like I’ve pushed the envelope here, but the Gemini has been with me for some long days, and made it through.
X25
Apparently all production Geminis were supposed to be made with the X27 System On Chip (SOC), but it seems the first 1000 slipped through with the X25. I’m in the not really bothered camp on this, whilst I get the impression that some owners are fuming and feeling betrayed. For me this will just be something that clearly identifies my Gemini as one of the first. At some stage in the future this might bite me as software assumes X27ness, but at that stage I might be in need of a newer/better one anyway.
I’d love a 7″ Gemini
The Gemini hasn’t displaced my much loved (and now very long in the tooth) Nexus 7 2013 (LTE) from my life, and one reason for that is screen size. A Gemini with more screen and a slightly less cramped keyboard (like my old Sharp PC 3100) would be a thing of beauty.
I’ve still not tried dual boot to Linux
and that probably won’t change until things mature a little (or I find a desperate need).
Filed under: Gemini
Tags: Gemini, RDP, X25
Last night we celebrated the 10th anniversary of CloudCamp London by celebrating the 40th anniversary of the Hitch-Hikers Guide to The Galaxy (HHGTG). It was a lot of fun – probably the best CloudCamp ever.

I can’t say that I was there from the beginning, as I sadly missed the first CloudCamp London due to other commitments. I have been a regular attendee since the second CloudCamp, and at some stage along the way it seems that I’ve been pulled into the circle of Chris Purrington, Simon Wardley and Joe Baguley to be treated as one of the organisers. The truth is that Chris Purrington has always been the real organiser, and Simon, Joe and I just get to fool around front of house to some degree. I should also shout out to Alexis Richardson, who was one of the original CloudCamp London instigators, but at some stage found better uses for his time (and developed a profound dislike for pizza).
CloudCamp has been amazing. Simon taught us all about mapping. Kuan Hon taught us all about General Data Protection Regulation (GDPR) long before it was cool. It was where I got to meet Werner Vogels for the first time (when he just casually walked in with some of my colleagues). It’s where I first met Joe Weinman and many others who’ve ended up leading the way in our industry. It’s been, and I hope will continue to be, a vibrant community of people making stuff happen in cloud and the broader IT industry. Meetups have exploded onto the scene in the past ten years giving people lots of choice on how they can spend their after work time, so it’s great that the CloudCamp London community has held together, kept showing up, kept asking interesting questions, and kept on having fun.
Simon asked us to vote last night to rename CloudCamp to ServerlessCamp, as (in his view at least) that’s the future. With one exception we all chose to keep the broader brand that represents the broad church of views and interests in the community.
Here’s my presentation from last night, which in line with the theme was a celebration of Douglas Adams’s genius:
In keeping with the theme I wore my dressing gown and made sure to have a towel:

It’s been a great first decade for CloudCamp London – here’s to the next one.
Filed under: cloud, presentation
Tags: cloud, cloudcamp, community, HHGTG
Gemini first impressions
TL;DR
The Planet Computers Gemini is a 6″ Android (and Linux) clamshell device with a keyboard by the same designer who did the Psion Series 5. The keyboard enables on the move productivity with things like SSH that just isn’t possible with a touch screen alone.
Background
I was lucky enough to hear about the Gemini very shortly after its crowd funding launch on IndieGoGo, and placed order 35 for the WiFi+LTE version.
Unboxing
The box is pretty nice, though (not shown) mine had split on one corner in transit. Everything seemed fine inside though.

The keyboardy welcome message is neat (even if it’s superfluous).

It’s supplied with a charger and USB A-C cable that has to be plugged into the left hand side to provide power. Looking at the charger specs I’d expect it to be quicker than a standard 5v USB charger.

The device itself came shrouded in a cover where the adhesive was just a little too sticky. Look very closely and it’s possible to see some spots on the screen (protector?) where some specks of dust appear to be trapped.

Setup was easy, and in line with other Android devices I’ve got for myself and family members over the last few years. It just worked out of the box.
The keyboard
I’m using the Gemini to write this. It works best when placed on a flat surface, but it’s workable when held.
As per Andrew Orlowski’s review at The Register, the main issue is the space bar, which needs to be hit dead centre to work properly. I’m also just a tiny bit thrown off by the seemingly too high placement of the full stop on its key, but even as I type this I get the urge to go faster, and the whole thing has a lot more speed and accuracy than even the best virtual keyboards I’ve used. The addition of buzz feedback to the natural mechanical feedback is also very helpful in letting me know when I’ve hit a key properly.
The use of the function key to provide three things on many buttons works well, and my only complaint there might be that @ deserves better than to be a function rather than a shift.
One crucial thing is the availability of cursor keys, which makes precise navigation of text work in a way that just isn’t possible with just a touch screen. It’s also worth mentioning how natural the interplay between touch actions and keyboard actions is. I’ve had touch enabled Windows laptops for years, and approximately never use the touch screens, but they have Trackpoints and Trackpads that move a mouse pointer around; Android is of course more naturally designed for touch.
Apps in landscape
It’s clear that whilst Android devices have pivoted between portrait and landscape since forever nobody tests their app in landscape. Buttons are often half obscured at the bottom of the screen (Feedly) or selection areas get in the way of vital output (Authy). Even when things do work landscape is often just an inefficient use of screen real estate, which would be why people don’t use/test it.
It works fine in WordPress though :)
It’s great for SSH
I’ve used Connectbot with my Android devices for some time, pairing it up with Hackers Keyboard, but the Gemini keyboard is a world better for driving a command line interface. This feels like what the Gemini was born to do.

Linux
I’ve not yet had the chance to try any of the supported Linux distros, and Android offers almost everything I want. I’d expect Linux to be useful for security testing and coding whilst offline, but that’s more of a break glass in case of emergency thing, and I’d be surprised if it became a daily driver thing for me.
I haven’t even tried to use it as a phone
The Gemini can be used as a phone, but that’s not what I bought it for, and the SIM I have in it has a data only plan.
Size
I noted with some frustration that the Gemini is just fractionally too long for the in flight device restrictions that were imposed on some routes (and threatened for many more) last year. It’s a shame, because in a pinch I think I could get through a week or two with just the Gemini (and no laptop) in order to avoid checking bags with devices in.
More prosaically it does (kind of) fit in a jeans pocket, though not comfortably, and not with the other phone(s) I’m still obliged to cart around. It’s more of a jacket pocket thing, so in day to day use I’m more likely to substitute it for my (now somewhat venerable) Nexus 7 (2013) tablet than my Android phone (and in practice I can see myself using the Gemini for productivity stuff whilst watching TV/movies/Netflix on the Nexus).
Performance
I haven’t noticed performance, which means it’s (more than) good enough. @PJBenedicto asked on Twitter ‘How’s the operating temperature after prolonged use?’ and that’s also something I’ve not noticed – it runs cool (though I haven’t been watching movies on it – yet).
Conclusion
I had high expectations of the Gemini and it hasn’t disappointed. It’s great to have something so small with a useful keyboard, and I can see it transforming some aspects of my on the move productivity.
Updates
3 Mar 2018 – I think I’ve now found the first thing that Planet Computers messed up badly and will have to spend some time fixing with future hardware – noise isolation on the headphone port. Shortly after pressing publish on this post I downloaded some Netflix episodes and tried watching, and the noise isolation on the headphone port is just awful. It’s not really noticeable when watching something, but with the headphones plugged in and nothing playing you can listen to the Gemini crunching numbers as you move around the UI, and that’s not a good thing.
Filed under: Gemini, review
Tags: android, Gemini, keyboard, Planet Computing, review
DXC Bionix
Today’s a big day as we’ve unveiled our first sub-brand at DXC Technology — DXC Bionix™, our new digital generation services delivery model that provides a data driven approach to intelligent automation. DXC Bionix includes three elements:
- Analytics and AI
- Lean methodology
- Automation
Bringing these elements together enables us to achieve greater insight, speed and efficiency across our global delivery ecosystem.
We’ve been deploying Bionix at scale and seeing some great results, and the time has come for us to share our approach directly with clients and partners as more of our offerings become Bionix enabled.
Our results from Bionix, as noted in the press release, include:
- 50-80% reduction in time spent on operations
- 25% reduction in testing costs; 50% defect reduction; 60% reduction in testing time
- Reduction in average applications deployment time from 180 minutes to 15 minutes
- 65% reduction in business process transaction time with assisted Robotic Process Automation (RPA)
- 71% of incidents auto-resolved or auto-diagnosed without human intervention
- 82% elimination of issues through rules-based filtering and alert correlation
At the heart of Bionix is design for operations, where we feed back the lessons learned from improvements in our existing operational environment into the design of our offerings. These “Powered by Bionix” offerings will be turnkey on Day 1 and integrated into our ongoing operations environment, Platform DXC. Platform DXC provides the foundations for Bionix with services for intelligence, orchestration and automation that allow us to quickly build and deliver partner-engineered, at-scale, repeatable offerings and solutions that help drive client digital transformations.
As a proof of concept for Platform DXC, we re-engineered a solution deployment that yielded big results:
- Lead time reduced from 1500 hours to 2 hours
- Work reduced from 660 hours to less than an hour
- From 8 teams to 1 team
- Meaning 7 hand offs reduced to zero
The numbers speak for themselves and demonstrate the power of shifting from an organisational model that scales with labour to one that’s designed and built on a scalable operating platform.
Our Bionix approach was first introduced in May 2017 internally as “Bionics”, which I wrote about in this primer. We’ve come a long way since then, and I look forward to sharing more about the culture change that enabled us to go from concept to minimum viable product (MVP) with Platform DXC in just 230 days.
Filed under: DXC
Tags: analytics, automation, Bionix, lean, PDXC
Bionix – a primer
This is pretty much a repost of the original Bionics – a primer, but we decided to call it Bionix (with an X).
TL;DR
Greater automation is the future for the IT industry, and we’ve called DXC’s automation programme ‘Bionix’. It’s about being data driven with a flexible tool kit, rather than being all in on a particular vendor or product. To understand what we’re trying to achieve with Bionix (besides reading the rest of this post) I recommend reading The DevOps Handbook, and to get the foundation skills needed to contribute please run through the Infrastructure as Code Boot Camp [DXC only link].
Introduction
‘Bionix’ is the name that we’ve given to DXC Technology’s automation programme that brings together CSC’s ‘Operational Data Mining’ (ODM) and HPE ES’s ‘Project Lambroghini’. This post is written for DXC employees, and some of the links will go behind our federated identity platform, but it’s presented here on a public platform in the interest of ‘outside in'[1] communication that’s inclusive to customers and partners (and anybody else who’s interested in what we’re doing). What I’ll present here is part reading list, and part overview, with the aim of explaining the engineering and cultural foundations to Bionix, and where it’s headed.
Not a vendor choice, not a monoculture
The automation programme I found on joining CSC can best be described as strategy by vendor selection, and as I look across the industry it’s a pretty common anti-pattern[2]. That’s not how we ended up doing things at CSC, and it’s not how we will be working at DXC. Bionix is not a label we’re applying to somebody else’s automation product, or a selection of products that we’ve lashed together. It’s also not about choosing something as a ‘standard’ and then inflicting it on every part of the organisation.
Data driven
Bionix uses data to identify operational constraints, and then further uses data to tell us what to do about those constraints through a cycle of collection, analysis, modelling, hypothesis forming and experimentation. The technology behind Bionix is firstly the implementation of data analysis streams[3] and secondly a tool bag of automation tools and techniques that can be deployed to resolve constraints. I say tools and techniques because many operational problems can’t be fixed purely by throwing technology at them; it’s generally necessary to take an holistic approach across people, process and tools.
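As a purely illustrative sketch (nothing to do with the actual Bionix tooling, and with invented event data), the ‘use data to find the constraint’ step can be as simple as aggregating operational effort by category and surfacing the biggest consumer, which then becomes the subject of a hypothesis and experiment:

```python
from collections import Counter

# Hypothetical operational events as (category, minutes_of_effort) pairs.
# In practice this would come from the data analysis streams, not a list.
events = [
    ("password_reset", 15), ("disk_full", 45), ("password_reset", 10),
    ("failed_deploy", 120), ("disk_full", 50), ("failed_deploy", 90),
]

def top_constraint(events):
    """Aggregate effort by category and return the biggest consumer --
    the candidate constraint to hypothesise about and experiment on."""
    effort = Counter()
    for category, minutes in events:
        effort[category] += minutes
    return effort.most_common(1)[0]

category, minutes = top_constraint(events)
print(f"Candidate constraint: {category} ({minutes} minutes of effort)")
# Next steps in the cycle: model it, form a hypothesis (e.g. automate the
# remediation), run the experiment, then re-measure to see whether the
# constraint has moved.
```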
Scalable
The constraints that we find are rarely unique to a given customer (or industry, or region) so one of the advantages we get from the scope and scale of DXC is the ability to redo experiments in other parts of the organisation without starting from scratch. We can pattern match to previous situations and what we learned, and move forward more quickly.
Design for operations
Data driven approaches are fine for improving the existing estate, but what about new stuff? The key here is to take what we’ve learned from the existing estate and make sure those lessons are incorporated into anything new we add (because there’s little that’s more frustrating and expensive than repeating a previous mistake just so that you can repeat the remedy). That’s why we work with our offering families to ensure that so far as possible what we get from them is turnkey on day 1 and integrated into the overall ‘Platform DXC’ service management environment for ongoing operations (day 2+). Of course this all takes a large amount of day 0 effort.
Required reading
What the IT industry presently calls ‘DevOps’ is largely the practices emerging from software as a service (SaaS) and software based services companies that have designed for operations (e.g. Netflix, Uber, Yelp, Amazon etc.). They in turn generally aren’t doing anything that would be surprising to those optimising manufacturing from Deming’s use of Statistical Process Control onwards.
Theory of constraints lies at the heart of the Bionix approach, and that was introduced in Goldratt’s The Goal, which was recast as an IT story in Gene Kim (et al.’s) The Phoenix Project. I’d suggest starting with Kim’s later work in the more prescriptive DevOps Handbook, which is very much a practitioner’s guide (and work back to the earlier stuff if you find it inspiring[4]).
The DevOps Handbook does a great job of explaining (with case study references) how to use the ‘3 DevOps ways’ of flow, feedback and continuous learning by experimentation[5].

Next after the DevOps Handbook is Site Reliability Engineering ‘How Google Runs Production Systems’ aka The SRE Book. It does just what it says on the jacket, and explains how Google runs systems at scale, which has brought the concepts and practices of Site Reliability Engineering (SRE) to many other organisations.
Learning the basics of software engineering
The shift to automated operations versus the old ways of eyes on glass, hands on keyboards means that we need to write more code[6]; so that means getting ops people familiar with the practices of software engineering. To that end we have the Infrastructure as Code Boot Camp, which provides introductory material on collaborative source code management (with GitHub), config management (with Ansible) and continuous integration/continuous delivery (CI/CD) (with Jenkins). More material will come to provide greater breadth and depth on those topics, but if you can’t wait check out some of the public Katacoda courses.
Call to action
Read The DevOps Handbook to understand the context, and do the Infrastructure as Code Boot Camp to get foundation skills. You’ll then be ready to start contributing; there’s plenty more reading and learning to do afterwards to level up as a more advanced contributor.
Notes
[1] My first ‘outside in’ project here was the DXC Blogs series, where I republished a number of (edited) posts that had previously been internal (as explained in the intro). I’ll refer to some of those past posts specifically.
[2] I’ve been a huge fan of understanding anti-patterns since reading Bruce Tate’s ‘Bitter Java‘. Anti-patterns are just so much less numerous than patterns, and if you can avoid hurting yourself by falling down well understood holes it’s generally pretty easy to reach the intended destination.
[3] It’s crucial to make the differentiation here between streams and lakes. Streams are about working with data now in the present, whilst lakes are about trawling through past data. Lakes and streams both have their uses, and of course we can stream data into a lake, but much of what we’re doing needs to have action in the moment, hence the emphasis on streams.
[4] If you want to go even further back then check out Ian Miell’s Five Books I Advise Every DevOps Engineer to Read
[5] More on this at 3 Ways to Make DXC Better
[6] Code is super important, but it’s of little use if we can’t share and collaborate with it, which is why I encourage you to Write code. Not too much. Mostly docs.
Filed under: DXC
Tags: Bionix, DevOps, SRE
Skiing in Austria (Skiwelt)
I missed out on skiing last season as my daughter went with her school to Pila, so this was my first time back on the slopes since skiing in Andorra.
Why Austria
As with Andorra it was my neighbour John’s suggestion, and I went with it as it’s generally more fun to ski (and socialise) in a group.
Getting there
We flew to Munich Airport (MUC) and picked up a hire car for the 1½hr drive to the hotel. I got the car sorted whilst John collected the bags, so we were out of there in good time. Sunday evening traffic coming out of Austria was awful, but going in we were fine apart from the need to stop along the way for a €9 ‘Vignette‘ toll sticker to allow us onto the Austrian motorways (because for some insane bureaucratic reason the rental car place wasn’t able to provide this).
I paid a little extra to get an estate car (to fit in our skis) and was generally pretty pleased with the Renault Megane Estate. Apple CarPlay was great for music and navigation from our iPhones, but the heater/blower controls were a mystery that none of us figured out. In retrospect it was a mistake to not get a ski box for one of the cars in our party as then we’d have all been able to fit into a single car rather than going around in convoy.
Accommodation
John’s original target was Westendorf, but everything was booked out. In fact pretty much everything in the whole area was booked out. The combination of UK and German school holidays likely didn’t help, but I get the feeling that people return to the same hotels year after year making the supply of accommodation pretty tight unless you book well in advance (and this was looking in October – 4 months before we went).
We ended up in the Gasthof Fuchswirt in Kelchsau, which was great. Friendly service, huge rooms and good hearty traditional Austrian dishes for dinner every night – highly recommended. The breakfast was great too.
Equipment
Although Kelchsau is part of the Skiwelt area it’s a bit of a small island, and there’s not much infrastructure beyond the 3 (somewhat ancient) lifts. Looking online at the options I chose Sport Verleih Fuchs in Itter via Skiset as they had better (and cheaper) gear than the corresponding place in Westendorf.
My choice of ‘Excellence’ skis got me a nice pair of Fischer Pro Mtn 80. They weren’t quite as amazing as the Lecroix Mach Carbon skis I had last time, but still performed very well. The ‘Sensation’ skis got my daughter some Volkl RTM 7.4, which she liked a lot.

Once we were kitted out with boots, skis, poles and helmets it was a matter of popping to the kiosk over the road to get a pass for the Skiwelt area, then into the telecabin to get up onto the slopes.
The other kit thing that I should mention is my Xnowmate boots. I’d ordered these prior to my last Andorra trip, but they didn’t arrive in time, so this was my first chance to try them out. They exceeded all expectations – comfortable, lightweight, warm, dry, and more comfortable. I wore them as chalet slippers in the hotel, I wore them to drive to the piste, I wore them to get to the first run of the day (rather than clomping around in ski boots) and they were just great.

The skiing
From Wikipedia:
The SkiWelt is Austria’s largest interconnected ski area. It has 90 Cable car lifts and Ski lifts, 280 Kilometers (173 Miles) of Ski Pistes, and 77 Ski Huts. The member villages are: Brixen im Thale, Ellmau, Going, Hopfgarten, Itter, Kelchsau, Scheffau, Söll and Westendorf.
That’s a lot of skiing for a single week (5 days once we’d factored in travel there and back), but we had a crack at it anyway.

Day 1 – since we’d picked up our gear at Itter that’s where we started out from. It was snowing, so conditions weren’t great; we didn’t cover a huge amount of ground, and generally got our ski legs back.
Day 2 – we headed over to Westendorf and pretty much skied out that part of the map. It was fantastic, with the run down 16a that took us to Brixen im Thale for lunch idyllically empty of other skiers. We also loved the 120 run on the other side of the mountain. If there was a part of the area I’d hurry back to it’s this bit.
Day 3 – we parked in Hopfgarten and headed to join a friend of mine who was starting out from Ellmau, but fluffed the transition at the top of lift 22 and ended up taking a detour via Söll that was pleasant but time consuming. The lesson for the next day was to exit the lift on foot to the right and hike up past the restaurant.
Day 4 – parking again in Hopfgarten we struck out for (and made it to) Going on the other side of the resort. Starting the day with the 2c black run wasn’t too bad – we’d skied it the day before and it’s not really that steep or narrow. I suspect that many of the black runs in the area don’t really deserve the rating, but the signposts do a good job of scaring people away. We were also amused by the ‘purple’ runs, where blues suddenly turn into reds, not that it was a problem for anybody in the group. The only really challenging black runs we found were the sections of 80 down into Ellmau, where the piste follows the line of the lift with one particularly interesting steep/icy section. That lift back out of Ellmau is notable for its modernity and comfort :) Though when we got to the top it had started snowing for our return journey.
Day 5 – we’d always planned to tick off Kelchsau on our final day, but the rain/wet snow made conditions pretty miserable so we called it a wrap and took our skis back after a run down from the top (rather than the better weather plan of a bit more skiing in Itter/Söll).
I have to say that the piste map (and corresponding app that held it) weren’t the greatest, and neither were the piste posts. It took too much trial and error to get around, and more time spent poring over the map in the evenings that could have been better spent on apres ski.
Conclusion
I’d go back just to do Westendorf again, but we barely touched Going or Söll and there’s stuff around Brixen im Thale we didn’t get near, so I could easily spend a second week in Skiwelt without getting bored of it.
Filed under: review, travel
Tags: Austria, boots, Brixen im Thale, Ellmau, Going, Hopfgarten, Itter, Kelchsau, Söll, Scheffau, skiing, skis, Skiwelt, Tirol, Westendorf
Headsets (mini review)
I jumped into a thread on DXC Workplace[1] on the topic of headsets (for use with Skype [for Business]), which made me realise that it’s an important topic that I’ve not covered here before.
Even a cheap headset is better than no headset
The point made by the original author was that many people are using the built in mic/speaker on their laptops, and this is generally terrible for everybody else on the call (and maybe them too).
My headsets
Jabra 930 – this is my main desktop headset that I use for Skype from my home office. It’s wireless, and works anywhere in my house. It’s also comfortable, and the battery lasts all day long. The mute button is easy to operate, and it gives a comforting background ‘you’re on mute’ beep-bop every so often. I have InfoQ’s Wes Reisz to thank for the recommendation.
Jabra Pro 9470 – this is the headset that CSC provided me with when I joined. Although it’s supposedly an upgrade on the 930 I’ve never got along with it. It’s also useless with a laptop as it ends up chaining the laptop to wherever the base station is sucking power from.
Plantronics C310 – I bought this to use in the office (instead of the 9470 which languishes in its box), and because it’s wired I can wander around with my laptop with the headset connected (no base station to worry about). I like the inline mute control button on the cable; and it’s lightweight and comfortable (and cheap).
Plantronics DA40 USB adaptor + H91 headset – this is the type of headset that I used to have on my office desk phone in early 00s and I bought one for home on eBay. It now lives in my travel bag, and the DA40 lets me use it with my laptop. I also carry a bottom cable that lets me attach it to office phones as that’s sometimes handy. If I was buying new now the DA80 has a mute button on it, but DA40s are plentiful on eBay (just watch out for frayed cables on used ones) and older voice tube headsets are the best for quality and comfort.
Some observations
Everything above is monaural over the ear. I don’t like stereo for phone calls (though it can help in really noisy places) and I don’t like stuff hanging off my ears (too uncomfortable when doing hours of back to back calls).
Wireless headsets are great for desktops, because the base station (and its power cord) can stay with the desktop itself. Wired headsets are better for laptops, as you can wander around with the headset attached to the laptop without any base station worries.
PC Bluetooth is (still) way too complex and fragile for this stuff, and companies still seem to have security worries about it (which is likely why my corporate laptop doesn’t have it).
Integration with Skype/Skype for Business is very good with both Jabra and Plantronics software. Webex generally works OK too. I’ve found the experience to be highly variable with other web conferencing tools (whether browser based or with their own apps), and lots of stuff seems to fall at the first hurdle by ignoring settings for default communication device.
My perfect headset
The Jabra 930 is almost perfect, and the one thing I’d change would be a visual indication of mute status (e.g. a tiny LED at the end of the mic boom – we can then argue about whether red or green signifies mute or talk).
A quick diversion to phone headsets
I’ve yet to find a phone headset that I’m really happy with. In a pinch I use my Koss SparkPlugs with my phone, and they’re great for listening, but worse than the built in mic for talking (catching too much background noise).
By the way, who decided that the button on the headset cord should be for hangup rather than mute (and doubly, who decided that wasn’t even a configurable option)? On a typical call I might be on/off mute dozens of times, and the UX people decide to make me fiddle around with a touch screen to do that; obviously I only get to hang up once per call – the balance of effort is obvious here.
Note
[1] Previously known as Facebook@Work, which I like a lot – in the same way that I liked Facebook itself before the whole advertising fueled surveillance capitalism thing ruined it.
Filed under: review
Tags: headset, Jabra, Plantronics, review, skype
