Twistlock have announced the general availability of their Container Security Suite, along with a partnership with Google Cloud Platform that integrates Twistlock into Google Container Engine (GKE). The suite consists of a console to define policy, a registry scanner and a ‘Defender’ that runs as a privileged container on each host. The suite connects to Twistlock’s cloud-based ‘Intelligence Service’ to get real-time vulnerability and threat intelligence.

continue reading the full story at InfoQ


TV Cable Tidy

08Nov15

TL;DR

If you’re putting a TV on a modern open stand then the ancillaries and cables can make a real mess and spoil the overall look. I put a board onto the VESA mount on the back of my TV to hold everything, which then let me arrange the cables into one tidy trunk running along the centre line of the TV stand.

[Image: the TV on its new stand]

Background

I recently upgraded the AV system in my living room. My Toshiba LCD TV and Panasonic Blu-Ray/DVR combo stayed, along with the awesome Mission FS2-AV NSX flat panel speakers[1], but the old (pre-HDMI) AV Amp and CD jukebox are out of the picture. The CD jukebox had pushed me into buying an enormous oak AV unit, which seemed fine with a 29″ CRT TV on top. With a new slimline Marantz NR1504 AV Amp I’ve been able to get everything I need onto a sleek Centurian Opod Stand.

The new stand would take the TV, AV Amp and BluRay/DVR perfectly. The problem was where to put everything else:

  • Power distribution (for a total of 6 things)
  • Network switch
  • Raspberry Pi (running OpenELEC of course)
  • Aerial amplifier

My new setup was going to look awful with that lot (and all of the associated wires) hanging around.

VESA to the rescue

Pretty much all LCD TVs (and monitors[2]) have a VESA mount on the back. This is intended for wall mounting, but in this case my idea was to mount all of the untidiness onto the TV. I bought a piece of plywood, some M6 threaded bar and M6 nuts on eBay to create a board that would hold the power strip, switch, Raspberry Pi and aerial amp:

[Image: the plywood board mounted on the VESA fixings behind the TV]


I placed the threaded bars into the VESA mount, using a couple of nuts as spacers (and to prevent them going in too far), then drilled the plywood board to fit over them. The board was then secured with some more nuts, and I left the threaded bar protruding so that cables could be wrapped around it later. The various bits and bobs were secured with a mixture of screw mounts, sticky Velcro strips and cable ties running through the board. Once the full horror of cabling was added I used cable wrap and more cable ties to keep everything in a single umbilical running along the centre line of the TV stand.

[Image: the cable bundle behind the TV]

The net effect is that the only cabling that can be seen from the front is one thick tidy bundle, which is pretty much invisible behind the post of the TV stand. The clean lines of the stand and overall modern aesthetics are preserved.

Conclusion

This isn’t what the VESA mount was designed for, but I think it’s a great way of keeping everything tidy for a TV that isn’t wall mounted.

Update

4 Mar 2018 – I’ve added a few extra things since I first wrote this – the original Raspberry Pi has gone in favour of an Amazon Fire 4K box, and I’ve added another Pi that has Terrestrial Digital (DVB-T) and Satellite (DVB-S) receivers. As before, the mess at the back is invisible from the front.

@ianfh also pointed out that there’s a commercial version of something like this (which seems to use a metal grid) – The Intuitive CMP-1.

Notes

[1] I’ve been so happy with the FS2-AV system that I bought another one for my new media/games room. Sadly both of them were old enough for the foam around the (sub) woofer to have perished. Worse still, Mission no longer have spares stock of the original Audax AP170MN2 driver units. Luckily I was able to find some AP170M0 woofers on eBay, which fitted perfectly, so I’m back to crisp bass with no clicks and pops.
[2] I was once told that the original VESA mount was contrived by my (sadly departed) old friend Peter Golden and some of his colleagues at Barclays. He was doing the first trade floor fit out to use LCD screens rather than CRT monitors, and didn’t want to end up becoming beholden to a single supplier over mounting, so they cooked up a simple arrangement of using a 100mm square and M4 screws and persuaded VESA to make it a standard.


TL;DR

The UK ‘snoopers charter’ is back in the form of the Investigatory Powers Bill (IPB [pdf]). As with previous efforts it’s not just trying to provide a more robust legal framework for ongoing spying, but also trying to extend spying powers to other agencies. The police might see this as a way to solve crime more efficiently, but they risk undermining their trust relationship with the public. The worst part is what’s being outsourced to the telcos.

Background

I last wrote about communications interception (aka signals intelligence or SIGINT) on my friend Nick Selby’s Police Led Intelligence blog shortly after the initial Snowden revelations about PRISM. Since then there’s been a constant stream of fresh outpourings about the scope and scale of spying.

I’d once again recommend Richard Aldrich’s ‘GCHQ’ for historical perspective. One point that sticks with me was that the post-war spies got everything they wanted… right up until the nuclear-powered spy ships[1]. Only then was the line crossed. I wish I was making this up.

The Legal Argument

Snowden is now saying that the IPB is about making the law fit spying rather than spying fit the law.

[Image: Snowden’s tweet on fitting the law to the spying]

Successive generations of the ‘snoopers charter’ appear to have been trying to do this very thing. On the other hand it doesn’t seem to matter whether the spies are acting within the law (or some twisted secret interpretation of the law), as nobody is going to jail – not the spies, and certainly not the politicians who seem so in love with what they get from the spies (or is it just the lobbying and political donations from the companies the spies spend with? It’s so hard to tell).

The Social Contract

Citizens seem to be quite happy to pretend that the spies don’t exist so long as the spies remain happy to hide in the shadows. This (approximately) makes it quite OK for an agency like GCHQ to hoover up everybody’s (meta)data so long as the product from that gets used exclusively within the secret squirrel club. This arrangement gets shored up by making intercept evidence inadmissible in court. Things get a bit murky with ‘parallel reconstruction‘, but the point here is that regular law enforcement needs to do some spade work to dig up evidence that they can use in court.

Breaking the Social Contract

Having police who are also spies is a bad thing. This is why Germany now has some of the strongest privacy laws in the world, because half of that country lived under the prying eye of the Stasi, and nobody wants to go back there.

The British police claim that they want to ‘police by consent’, according to Peelian Principles. But they’re also claiming that they need new powers to deal with crime moving from the physical world to the virtual world.

This is where the current proposals don’t stand up to scrutiny. ‘We can’t follow somebody into a bank when the banking is online’ was part of a statement that I heard in the past week. This seems like a perfectly sound basis for police to get a warrant and tap the communications of a suspect. It seems like a flimsy excuse for keeping a year-long backlog of everybody’s data just in case somebody does something bad[2].

The Spies Aren’t Helping (Enough) and the Police Want More

The output and impact of the secret squirrel club is necessarily constrained (otherwise it stops being secret[3]). It hence becomes limited to the most serious activities – terrorism etc. Of course the (generally very senior) police recipients of the product see how helpful it can be and want more – so that they can go after a wider variety of crimes.

Policing on the Cheap?

So is this just an efficiency play… dragging the police into the 21st century, where they catch people at the click of a mouse rather than going to the effort and expense of following people around in the physical world? That’s an argument that’s being made, but if that was really the case then why not go all in and make intercepts admissible as evidence?

I’ll return to the earlier point: get a warrant. In the physical world there are certain checks and balances on police behaviour, where they have to ask permission before taking action. The IPB is being sold as providing a framework of checks and balances, but sadly these seem to be modelled on the (already discredited) US Foreign Intelligence Surveillance Court (FISC). Theresa May seems happy to apply mechanisms designed for spies to regular police, because we have a complete muddle here over who’s doing spying and who’s doing police work.

Just add Telco

Nobody’s just handing the keys to GCHQ over to PC Plod, and this is where the trouble really starts. The spies suck up our (meta)data into their very impressive dome of concrete and steel, where it’s looked after by highly vetted and dedicated professionals. The police don’t have those resources, so they need to outsource the heavy lifting of data retention. The virtual rubber will meet the road at British Telecom, Sky and TalkTalk.

There are probably examples of companies that care less about their customers than telcos, I just can’t think of any off the top of my head.

And then there’s the fact that TalkTalk just got hacked six ways from Sunday, apparently by a bunch of teenagers, eliciting the usual spin about advanced ‘cyber’ adversaries. Nobody trusts these people to get a phone bill right, and they’re only barely competent at moving IP packets around. It’s bad enough that we have to trust them with our data in motion, but making them look after a giant pile of it at rest is just asking for trouble.

Conclusion

We might wish for spying fitting the law rather than the law fitting the spying, but too often it seems that the actions of the ‘bad’ guys are sufficient excuse for poor behaviour by the ‘good’ guys (and in extremis this is how we generate ‘bad’ guys in the first place – a lack of transparent justice).

What our spies have been doing doesn’t seem to have been hurt too badly by having it dragged into the public eye, so it would probably be fine if the IPB was simply moving the legal boundary to allow for past and present activities (even if that does inevitably lead to new boundary pushing).

Where this all goes horribly wrong is the IPB turning our police into spies, but outsourcing the real effort to some of the most loathed companies on the planet.

Notes

[1] As the British Empire contracted, so did the opportunity to have land based intercept stations in friendly territory. This created a classic ‘capability gap’, which exploits loss aversion to regain things at almost any cost. The answer apparently was to build giant floating intercept stations. Of course this stuff was overtaken by events once spy satellites went into orbit.
[2] There seems to be a curious love affair with backward causality here – the ability to explain why something went wrong after it happened. This is why in pretty much every terrorism case over the past decade we find that the agencies had half an eye on the perpetrators, but had determined that they weren’t worthy of full on attention – bigger fish to fry and all that. Backward causality invariably makes the spies look bad, though it does give them a never ending reason to plead for more resources.
[3] It’s well documented how Churchill agonised over intelligence from Ultra, and the balance between saving lives in a given operation, and tipping his hand to the Germans, which might have lost or disrupted future intelligence. The same problem exists for politicians today. It’s hard to believe that the spies don’t get to notice political party defections (e.g. Douglas Carswell or  Mark Reckless) but even if the Prime Minister does get told of such things it’s hard to take action.


Docker Inc have announced their acquisition of Tutum, ‘The Docker Platform for Dev and Ops’ that allows users to ‘Build, deploy, and manage your apps across any cloud’. The rationale for the deal is to complement Docker Hub, which takes care of ‘build’ and ‘ship’, with Tutum as the platform for ‘run’.

continue reading the full story at InfoQ

[Image: Tutum architecture]



The kind folk at Newark Element14 sent me a Gizmo2 dev board to try out. I’ve not been able to do much with it yet, so here are some first impressions.

What is it?

I’d completely missed the first generation Gizmo, and hadn’t heard of the new one until it was brought up by Brandon at Element14. It’s an x86 single board PC packing an AMD GX210HA dual core CPU running at 1GHz with 1GB of DDR3 RAM on board.

It’s about the same size as the board in a small form factor (SFF) PC such as a NUC or Brix, but comes without any case.

Connectivity is pretty comprehensive with 2 USB2, 2 USB3, RJ45 Gigabit Ethernet, HDMI, 3.5mm audio in/out and microSD. There’s also an mSATA/mini PCIe port and headers for JTAG and SPI.

The thing that differentiates this board from other SFF PCs is the pair of edge connectors along one side. One is described as ‘low speed’ and carries USB2, 8 GPIOs, 2 PWMs, 2 counters and SPI. The other is described as ‘high speed’ and carries DisplayPort, 2 PCIe x1 lanes, SATA and 2 USB2.

What’s in the box?

[Image: Gizmo2 box contents]

The board comes with a 4GB microSD card that can be booted up into Linux along with some demo apps. A 12v power supply with a US plug is included along with a travel adaptor kit that let me use it with my UK sockets. There’s also a handy getting started guide.

Why won’t it boot up?

I plugged in my USB keyboard and mouse, monitor, network and power, pressed the power button, and nothing happened.

After confirming that the power supply was pushing out 12V (with a handy strip of 5050 RGB LEDs) I was starting to fear a dead-on-arrival board. Had UPS done something terrible to cause the hole in the shipping box, and actually managed to damage the board inside?

It turned out that the culprit was my Matias Quiet Pro USB keyboard. As soon as I unplugged it some LEDs lit up on the Gizmo2. After swapping over to an Anker wireless USB keyboard I was off to the races and able to boot up. It seems that the Gizmo2 is even worse than the original Raspberry Pi (which the Matias keyboard works fine with) at supplying juice to USB peripherals.

Demo environment

Once booted from the supplied microSD card the system brings up a ‘Timesys’ application launcher that I can only describe as dreadful – it shows you a bunch of icons, but you can only select them by clicking on left/right arrows (not the icons themselves).

Big Buck Bunny is supplied as a video demo in 1080p (H.264) and can also be watched from the bundled XBMC player (not the more recent Kodi). It’s also possible to escape to a fairly vanilla Xfce desktop, and from there launch a terminal to get a root BusyBox shell.

Niggles

It’s noisy – the CPU is a 9W part, which isn’t a huge amount of power, but sufficient to rule out pure passive cooling. It’s a shame that the fan is so loud, as it pretty much ruins any ideas I might have had for using the Gizmo2 as a media player. I’m still going to try out getting OpenELEC onto it, but even if it can handle the VC1 MKVs that make my Raspberry Pi choke up I can’t see the Gizmo2 displacing my much quieter Gigabyte Brix.

I’ve had a right struggle getting the USB dongle for my keyboard out of the USB2 ports – they’re way too grippy.

The fuse behind the power adaptor looks like it’s pretty much designed to break off.

There’s onboard SATA, but no actual connector for it unless you solder one on yourself. I’ve also read about people having compatibility problems with some mSATA drives (and having to unsolder a resistor to get them working).

I’d rather have the GPIOs on header pins than an edge connector.

I can see the USB current draw issue I hit with the keyboard being problematic for DVB adaptors and other things. The Raspberry Pi people (and community) learned about this stuff the hard way, and it’s a shame that Gizmosphere didn’t pick up on that lesson and put in a beefy enough power system.

Next steps

I’ve read a few accounts of people running Kodi on the Gizmo2, including some excellent analysis of media streaming performance, but since I’ve done a bunch of stuff with OpenELEC on the Raspberry Pi I’m going to take a swing at getting that working on the Gizmo2. I’ll also take a look at what’s involved in getting Ubuntu running on it.


Late last year AWS launched Private DNS within Amazon VPC as part of their Route 53 service. This allows customers to create DNS entries that are only visible within a VPC (or group of VPCs). It’s also possible to have ‘split horizon’ DNS where servers inside a VPC get different answers to the same queries versus users on the public Internet.

The DNS resolver for a VPC is always at the +2 address, so if the VPC is 172.31.0.0/16 then the DNS server will be at 172.31.0.2. Amazon and the major OS distros do a good job of folding that knowledge into VM images, so pretty much everything just works with that DNS, which will resolve any private zones in Route 53 and also resolve names for public resources on the Internet (much like an ISP’s DNS does for home/office connections).
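As a quick illustration (the zone and record names here are purely hypothetical), querying the +2 resolver from an instance inside a default 172.31.0.0/16 VPC looks something like this:

# Ask the VPC resolver (the +2 address) for a name in a private hosted zone.
# 'db.internal.example.com' is a made-up record for illustration.
dig +short db.internal.example.com @172.31.0.2

# The same query against a public resolver returns nothing, or the public
# 'split horizon' answer if one has been published.
dig +short db.internal.example.com @8.8.8.8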

There are a few use cases for using the VPC DNS from outside of the VPC, particularly when connecting things into a VPC using a VPN. Here’s where things get a little tricky, as the VPC DNS is set up in such a way that it won’t answer queries from outside its own network.

The answer is to run a DNS forwarder within the VPC, and connect to the VPC DNS through that. DigitalOcean provide a good how-to guide to configuring Bind on Ubuntu to do the job.
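As a rough sketch of what that looks like (the addresses are illustrative, and the DigitalOcean guide has the full detail), the options block in /etc/bind/named.conf.options on the forwarder instance ends up along these lines:

options {
    directory "/var/cache/bind";

    # Hand every query to the VPC resolver at the +2 address.
    forwarders {
        172.31.0.2;
    };
    forward only;

    # Accept queries from the VPC itself and from the network on the far
    # end of the VPN (192.168.0.0/24 here is just an example).
    allow-query { 172.31.0.0/16; 192.168.0.0/24; };

    # Validation is switched off in this sketch to avoid problems when
    # forwarding to a resolver that doesn't provide DNSSEC for private zones.
    dnssec-validation no;
};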

If you’re using our VNS3 to provide the VPN connection then DNS forwarding can be handled by a container. We provide instructions and a download on GitHub.

This post originally appeared on the Cohesive Networks Blog

Update – Since I put together the Bind based container I came across Unbound courtesy of John Graham-Cumming, and if I was starting over I might well choose that instead of Bind.
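The Unbound equivalent is pleasingly small – a minimal sketch (again with illustrative addresses) for /etc/unbound/unbound.conf:

server:
    interface: 0.0.0.0
    # Accept queries from the VPC and from the network behind the VPN.
    access-control: 172.31.0.0/16 allow
    access-control: 192.168.0.0/24 allow

forward-zone:
    # Forward everything to the VPC resolver at the +2 address.
    name: "."
    forward-addr: 172.31.0.2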



Docker Inc. have announced the release of Docker 1.8, which brings with it some new and updated tools in addition to new engine features. Docker Toolbox provides a packaged system aiming to be, ‘the fastest way to get up and running with a Docker development environment’, and replaces Boot2Docker. The most significant change to Docker Engine is Docker Content Trust, which provides image signing and verification.
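Content trust is opt-in in 1.8, switched on per shell with an environment variable – a quick sketch (the image name is just an example):

# Opt in to Docker Content Trust for this shell.
export DOCKER_CONTENT_TRUST=1

# Pulls (and pushes) now only succeed for images with valid signatures.
docker pull ubuntu:latest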

continue reading the full story at InfoQ

[Image: Docker Content Trust keys]


TL;DR

Anybody wanting a high spec laptop that isn’t from Apple is probably getting a low end model with small RAM and HDD and upgrading themselves to big RAM and SSD. This skews the sales data, so the OEMs see a market where nobody buys big RAM and SSD, from which they incorrectly infer that nobody wants big RAM and SSD.

Background

I’ve been grumbling for some time that it’s almost impossible to buy the laptop I want – something small, light and with plenty of storage. The 2012 model Lenovo X230 I have sports 16GB RAM and could easily carry 3TB of SSD, so components aren’t the problem. The newer MacBook and Pixel 2 are depressingly close to perfect, but each just misses the mark. I’m not alone in this quest. Pretty much everybody I know in the IT industry finds themselves in what I’ve called the “pinnacle IT clique”. So why are the laptop makers not selling what we want?

I came to a realisation the other day that the vendors have bad data about what we want – data from their own sales.

The Lenovo case study

I spent a long time poring over the Lenovo web sites in the UK and US before buying my X230 (which eventually came off eBay[1]). One of the prime reasons I was after that model was the ability to put 16GB RAM into it.

Lenovo might have had a SKU with 16GB RAM factory installed, but if they did I never saw it offered anywhere that I could buy it. The maximum RAM on offer was always 8GB. Furthermore, the cost of a factory-fitted upgrade from 4GB to 8GB was more than buying an after-market 8GB DIMM from somewhere like Crucial. Anybody (like me) wanting 16GB RAM would be foolish not to buy the 4GB model, chuck out the factory-fitted DIMM and fit RAM from elsewhere. The same logic also applies to large SSDs. Top end parts generally aren’t even offered as factory options, and it’s cheaper to buy a standalone 512GB[2] drive than it is to choose the factory 256GB SSD – so you don’t buy an SSD at all; you buy the cheapest HDD on offer, because it’s going in a bin (or sitting in a drawer in case of warranty issues).

The outcome of this is that everybody who wanted an X230 with 16GB RAM and 512GB SSD (a practical spec that was purely hypothetical in the price list) bought one with 4GB RAM and a 320GB HDD (the cheapest model).

[Image: Lenovo sales data illustration]

Looking at the sales figures, the obvious conclusion is that nobody buys models with large RAM and SSD, so should we be surprised that the next version, the X240, can’t even take 16GB RAM?

The issue here is that the perverse misalignment in pricing between factory options and after-market options has completely skewed what’s sold, breaking any causal relationship between what customers want and what customers buy.

Apple has less bad data

Mac laptops have become almost impossible to upgrade at home, so even though the factory pricing for larger RAM and SSD can be heinous there’s really no choice.

It’s still entirely credible that Apple are looking at their sales data and seeing that the only customers who want 16GB RAM are those who buy 13″ MacBook Pros – because that’s the only laptop model that they sell with 16GB RAM.

I’d actually be happy to pay the factory premium for a MacBook (the new small one) with 16GB RAM and 1-2TB SSD, but that simply isn’t an option.

Intel’s share of the blame

I’d also note that I’m hearing increasing noise from people who want 32GB RAM in their laptops, which is an Intel problem rather than an OEM problem because all of the laptop chipsets that Intel makes max out at 16GB.

Of course it’s entirely likely that Intel are basing their designs on poor quality sales data coming from the OEMs. It’s likely that Intel sees essentially no market for 16GB laptops, so how could there possibly be a need for 32GB?

Intel pushing their Ultrabook spec as the answer to the MacBook Air for the non-Apple OEMs has also distorted the market. Apple still has the lead on both form factor and price whilst the others struggle to keep up. It’s become impossible to buy the nice cheap 11.6″ laptops that were around a few years ago[3], and the market is flooded with stupid convertible tablet form factors where nobody seems to have actually figured out what people want.

Conclusion

If Intel and the OEMs it supplies are using laptop sales figures to determine what spec people want then they’re getting a very distorted view of the market. A view twisted by ridiculous differences between factory option pricing for RAM and SSD versus market pricing for the same parts. Only by zooming back a little and looking at the broader supply chain (or actually talking to customers) can it be seen that there’s a difference between what people want, what people buy and what vendors sell. Maybe I do live amongst a “pinnacle IT clique” of people who want small and light laptops with big RAM and SSD. Maybe that market is small (even if the components are readily available). I’m pretty sure that the market is much bigger than the vendors think it is because they’re looking at bad data. If the Observation in your OODA loop is bad then the Orientation to the market will be bad, you’ll make a bad Decision, and carry out bad Actions.

Update

20 Aug 2015 – it’s good to see the Dell Project Sputnik team engaging with the Docker core team on this Twitter thread. I really liked the original XPS13 I tried out back in 2012, but that was just before I discovered that 8GB RAM really wasn’t enough, and that limit has been one of the reasons keeping me away from Sputnik. There are some further objections to (mini) DisplayPort and the need to carry dongles, and to proprietary charging ports, but I reckon both of those things will be sorted out by USB-C.

Notes

[1] Though I so nearly bought one online in the US during the 2012 Black Friday sale.
[2] I’ll run with 512GB SSD for illustration as that’s what I put into my X230 a few years back, though with 1TB mSATA and 2TB 2.5″ SSDs now readily available it’s fair to conclude that the want today is 2x or even 4x. I’ve personally found that (just like 16GB RAM) 1TB SSD is about right for a laptop carrying a modest media library and running a bunch of VMs.
[3] That form factor is now dominated by low end Chromebooks.


It’s less than two months since I last wrote about Upgrading Docker, but things have changed again.

New repos

Part of my problem last time was that the apt repos had quietly moved from HTTP to HTTPS. This time around the repos have more visibly moved, bringing with them a new install target ‘docker-engine’, and the change has been announced.
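For reference, on Ubuntu the move boils down to something like this (a sketch – the exact repo line and signing key come from Docker’s announcement):

# Remove the old packaging; this also removes the old 'docker' binary.
sudo apt-get purge 'lxc-docker*'

# With the new apt.dockerproject.org repo configured per the announcement:
sudo apt-get update
sudo apt-get install docker-engine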

Beware vanishing Docker options

Following Jessie’s guidance I purged my old Docker and installed the new version. This had the unintended consequence of wiping out my /etc/default/docker file and the customised DOCKER_OPTS I had in there to use the overlay filesystem.
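For anyone else caught out, the line I had to put back was along these lines (a sketch for the sysvinit/upstart style packaging – adjust to taste):

# /etc/default/docker
# Use the overlay storage driver instead of the default.
DOCKER_OPTS="--storage-driver=overlay"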

Without the right options in place the Docker daemon wouldn’t start for me:

chris@fearless:~$ sudo service docker start
docker start/running, process 48822
chris@fearless:~$ sudo docker start 5f4af8edf203
Cannot connect to the Docker daemon. Is 'docker -d' running on this host?
chris@fearless:~$ sudo docker -d
Warning: '-d' is deprecated, it will be removed soon. See usage.
WARN[0000] please use 'docker daemon' instead.
INFO[0000] Listening for HTTP on unix (/var/run/docker.sock)
FATA[0000] Error starting daemon: error initializing graphdriver: "/var/lib/docker" contains other graphdrivers: overlay; Please cleanup or explicitly choose storage driver (-s <DRIVER>)

NB It looks like Docker is running after I start its service – there’s no indication that it immediately fails. I’m also not the first to notice that one part of Docker tells you to use ‘docker -d’, while another scolds you that it should be ‘docker daemon’; it’s now been fixed.

Don’t use my old script

Last time around I included a script that wrote container IDs out to a temporary file, then upgraded Docker and restarted the containers that were running. It’s not just a case of adding the new repo and changing the script to use ‘docker-engine’ rather than ‘lxc-docker’, because if (as I did) you purge the old package first then there isn’t a ‘docker’ command left in place to restart the containers. I’d suggest breaking the script up to first record the running container IDs and then go through the upgrade process, as sketched below.
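A minimal sketch of that two-stage approach (assuming the new repo is already configured) might look like:

#!/bin/bash
# Stage 1 - record the running container IDs while the old 'docker' binary still exists.
sudo docker ps -q > /tmp/running_containers

# Stage 2 - swap the packaging over, then restart whatever was running.
sudo apt-get -y purge 'lxc-docker*'
sudo apt-get update
sudo apt-get -y install docker-engine
xargs -r sudo docker start < /tmp/running_containers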


The United States Federal Communications Commission (FCC) has introduced ‘software security requirements’ obliging WiFi device manufacturers to “ensure that only properly authenticated software is loaded and operating the device”. The document specifically calls out the DD-WRT open source router project, but clearly also applies to other popular distributions such as OpenWRT. This could become an early battle in ‘The war on general purpose computing’ as many smartphones and Internet of Things devices contain WiFi router capabilities that would be covered by the same rules.

continue reading the full story at InfoQ