TL;DR

Many SSDs are also Self Encrypting Drives (SEDs); they just need a few bits flipped to make them work. As SSDs use encryption under the hood anyway there’s no performance overhead.

Background

This is something of an almanac post after a couple of days of prodding around the topic of PC device encryption. I wanted to make sure that the PCs I use for work stuff were properly protected, but I also wanted to minimise the impact on performance.

Bitlocker

As my laptop runs Windows 8.1 it seemed obvious to check out BitLocker, but a quick search revealed that software based BitLocker has some degree of performance overhead.

In the end I actually went with BitLocker on my laptop, as the SanDisk X300 SSD I have isn’t a SED (it doesn’t support Opal or Microsoft eDrive). That’s a shame, as the article I found on the X300s gives a pretty good review of what’s out there.

Even if I did have an X300s rather than a plain old X300 the eDrive/BitLocker combination wouldn’t have been easy, as it requires doing a clean install of Windows rather than letting you keep your existing setup.

SEDs

SSDs use encryption internally anyway so that the blocks written to flash memory don’t have long runs of 1s or 0s, so it’s almost trivial for an SSD to also be a SED – all that’s needed is a means to manage the keys that are used to unlock that encryption. Out of the box SEDs are like safes with the door open and no combination set – they just need some tools to set the lock.

Class 0

With my desktop machine (a NUC) I’ve got a Samsung SSD that supports three different modes of encryption:

[Screenshot: Samsung Magician security options]

  • Encrypted Drive is eDrive/BitLocker – too much trouble to configure
  • More on Trusted Computing Group (TCG) Opal below
  • Class 0 just uses a BIOS boot password. After reading this piece on Class 0 I decided it was probably worse than useless.

Opal and sedutil

The X300s article had run through the basics of Opal and use of the Wave Embassy app to enable it. Sadly as I have just a plain X300 I wasn’t getting a free license for that. There are a bunch of commercial offerings for Opal, from the usual suspects, and frankly they all look awful.

Open Source to the rescue… the Drive Trust Alliance offers sedutil for Windows and Linux. It’s a combination of a command line tool to configure Opal, and a Linux based pre boot application (PBA) to ask for the password that unlocks your drive.

After a bit of downloading and testing I confirmed that I was good to go, and following the encrypting your drive instructions worked perfectly.
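
For the record, the whole process boils down to a handful of sedutil-cli commands. The sketch below is from memory rather than lifted from the DTA instructions, so treat the flag spellings, the password, the PBA image name, and the device path as placeholders, and check the Drive Trust Alliance wiki before pointing any of it at a real drive:

sedutil-cli --scan                                       # does the drive report Opal 2 support?
sedutil-cli --initialSetup MyPassword /dev/sdX           # take ownership and activate locking
sedutil-cli --loadPBAimage MyPassword pba.img /dev/sdX   # install the Linux pre-boot app (PBA)
sedutil-cli --setMBREnable on MyPassword /dev/sdX        # boot the PBA from the shadow MBR
sedutil-cli --enableLockingRange 0 MyPassword /dev/sdX   # enable the global locking range
sedutil-cli --setLockingRange 0 LK MyPassword /dev/sdX   # lock it until the PBA unlocks it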

The user experience

Most of the time the encryption is totally seamless in terms of performance and user experience. The only change is at boot (or resume from hibernation), when the PBA is launched first and asks for a password – the system then unlocks the SSD and reboots into the normal OS.

No Sleep

The one issue seems to be that the system will no longer make use of sleep mode, instead dropping into hibernate (to force a request for the password for resume). I can see why that’s more secure, but for my own use case I’d be happy to have sleep/wake without being asked for a drive password.
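
As an aside, a quick way to see what the machine will actually offer (a standard Windows check, nothing sedutil specific) is to run powercfg from an administrator command prompt:

powercfg /a     # lists which sleep states are available on the system, and why the missing ones aren't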

Conclusion

I wish the drive in my laptop was a SED. The BitLocker performance overhead isn’t too annoying, and it didn’t even take too long to encrypt the whole SSD, but it’s still sub optimal.

Using open source tools with the SED in my desktop was quick and easy. So if I’m ever unlucky enough to be burgled I won’t have to worry about the data on that device.


I got an email proclaiming:

AmazonFresh – now in your area

If that wasn’t exciting enough in its own right (which would probably be the case) it went on to say:

Get a £20 Amazon.co.uk gift card when you spend £60 or more on an AmazonFresh order and have it delivered between 16 – 30 June, 2016

I’m a sucker for a deal, so I thought I’d give it a try (in place of my usual Waitrose Deliver order).

The first problem was when I clicked on the link in the email: https://www.amazon.co.uk/b/?ie=UTF8&node=10407261031&bbn=6723205031

[Screenshot: the result of following the broken link]

Ah – the old & to &amp; gag. A few backspaces later and I’m at this:

[Screenshot: AmazonFresh postcode prompt]

Alarm bells are starting to ring now. You know where I live, Amazon. You know the postcode. Rule #1 of UX is don’t ask a human to answer a question the computer already knows the answer to.

So then I enter my postcode:

[Screenshot: AmazonFresh postcode check result]

And I look back at the email:

… The Offer is limited to selected London postcodes where AmazonFresh is available …

So AmazonFresh isn’t now in my area, because my area isn’t London – that’s not where I live (and Amazon knows that). So the whole thing is a ham-fisted waste of time.

I look forward to my London friends telling me how great Fresh is. If they can get past the old &amp; gag.



micro:bit Simon

15May16

The BBC micro:bit is a computerised project board that’s being given to every year 7 (11-12yr old) kid in the UK. It’s supposed to encourage experimentation and learning to program in the same way that the BBC Micro (and associated BBC programmes) did back in the 80s. I’ve been pretty excited about it since the announcement, though I feared that I’d have to wait until my daughter received hers before I got my hands on one. Luckily a friendly local teacher brought me one to play with, and with a long flight on my hands I decided to have a go at coding Simon (a perennial favourite of mine for trying out new platforms).

There are various different ways to program the micro:bit, but I went with Python (as it’s familiar) and the Mu editor (as plane WiFi isn’t a great way to use online code editors). I used the accelerometer for input, and simple arrows to show the game sequence (up, down, left, right).

Here’s the code, which is also on github (it should work fine in the online Python editor as well as Mu, and yes I know I should probably not be using global variables, but I haven’t refactored much from earlier BASIC and C implementations):

from microbit import *                  # standard Micro:Bit libraries
from array import *                     # to use an array
import random                           # generate random numbers

count = 0                               # initialise counter to 0
wait = 500                              # initialise wait to half a sec
sequence = array('B',[])                # array to hold sequence
display.show("-")                       # start out showing a dash

def squark(dir):                        # function to show arrows
    global wait
    if dir==0:                          # Right
        display.show(Image.ARROW_E)
    elif dir==1:                        # Left
        display.show(Image.ARROW_W)
    elif dir==2:                        # Down
        display.show(Image.ARROW_S)
    elif dir==3:                        # Up
        display.show(Image.ARROW_N)
    else:
        display.show("-")
    sleep(wait)
    display.show("-")
    sleep(wait)

def play_sequence():
    global count                        # use the count global variable
    global sequence                     # use the sequence global variable 
    global wait                         # use the wait global variable
    sequence.append(random.randint(0, 3))       # add a new value to sequence
    for i in range(0, count):           # loop for sequence length
      squark(sequence[i])               # display the arrow
    wait = 500 - (count * 15)           # vary delay to speed things up
    count = count+1                     # increment sequence length
    
def get_tilt():
    x = accelerometer.get_x()           # read left-right tilt
    y = accelerometer.get_y()           # read up-down tilt
    if x > 100:
        return 0                        # Right
    elif x < -100:
        return 1                        # Left
    elif y > 100:
        return 2                        # Down
    elif y < -100:
        return 3                        # Up
    else:
        return 4                        # Flat
        
def reset_game():                       # reset state ready for a new game
    global count
    global sequence
    count = 0
    sequence = array('B', [])           # empty the sequence again

def read_sequence():
    global count
    global sequence
    display.show("*")                   # Show that we're waiting
    for i in range(0, count-1):
        while get_tilt()==4:            # Wait for a tilt
            sleep(50)
        input=get_tilt()
        if input == sequence[i]:        # If it's right then show it
            squark(input)
            if i==9:                    # We have a winner
                display.show(Image.SMILE)
                sleep(1000)
                display.show("WINNER")
                reset_game()
        else:
            display.show("X")           # Wrong tilt - game over
            sleep(1000)
            display.show("TRY AGAIN")
            reset_game()
            break
        
while True:
    play_sequence()                     # play the sequence to be remembered
    read_sequence()                     # read the sequence from the player
    sleep(1000)                         # wait a sec

Docker Inc have announced general availability of Docker Security Scanning, which was previously known as Project Nautilus. The release comes alongside an update to the CIS Docker Security Benchmark to bring it in line with Docker 1.11.0, and an updated Docker Bench tool for checking that host and daemon configuration match security benchmark recommendations.

continue reading the full story at InfoQ

[Image: Docker Security Scanning]


TL;DR

A relay board from eBay combined with a cheap wireless doorbell from Amazon allowed me to extend my existing wired doorbell.

Background

I got the loft of my house converted into a home office. I love it up there, but if I shut the door (to keep noise out) then I can’t hear the doorbell (or anybody shouting up the stairs for me).

Some research led me to wired to wireless doorbell extenders such as the Friedland D3202N, but they didn’t seem like an ideal choice due to:

  • Being hard to come by (obsolete?).
  • Lock in to a given (expensive) wireless bell system.
  • An extra box to mount near the existing bell (and wires to hide).

I also checked out the Honeywell DC915SCV system, but that had many of the same flaws/limitations.

What I really wanted was a setup where I could have a bell in my office, another in $son0’s room, integration with the existing doorbell, and a summoning button in the kitchen/living room.

The bells

I found the 1byone Easy Chime system on Amazon, which seemed to offer multiple bells and bell pushes that could work together. I bought one to see how hackable the bell push would be, and the answer was good enough – the button is a standard push button surface mounted onto the PCB (and thus very easy to remove/replace). I also found pads on the PCB marked SW2 that aren’t connected, which seem to change the send code.

[Photo: the 1byone bell push PCB]

Having confirmed that the system would do what I wanted I ordered a second bell and push.

Relay board

Now I needed some way to take the 12V AC from the wired bell system and turn it into a button push. A solid state system would have been nice, but I couldn’t find anything off the shelf, and I wasn’t going to design something from scratch[1]. The ‘12v ac/dc Mini Handy little Relay board’ I found on eBay seemed to be ideal.

Putting it all together

The doorbell has some unused space for batteries, so I was able to tuck the relay and wireless push in there.

I’d have liked to wire things up so that the relay was activated when somebody pushed the bell, and I probably could have done that if I’d dug into the system; but with the available connections the best I could do was to wire it across the wired switch, so the relay spends most of its time energised (with the consequent power draw, heat, and reduced component life… but it works).

[Photo: the relay and wireless push tucked inside the existing bell]

So the relay is wired across connections 1 and 3 of the wired bell, to the same places as the wired bell push. The wireless bell push is connected to the C (common) and NC (normally closed) connections on the relay board. When the doorbell is pushed the relay briefly powers down and opens, causing the wireless bells to be activated.

[Photo: the relay wiring]

Conclusion

The entire system cost £26.23 and took no longer to put together than I expect it would have taken to install an off the shelf wired to wireless extender. The key parts fit inside the existing bell, so no new boxes and wires to worry about. I’m very happy with the outcome.

Note

[1] This is where one of my smart hardware hacking friends points out that I could have used a 555, a twig and a rusty nail.


I’ve spent the last two winter half term breaks in Andorra skiing with my daughter and some neighbours. It’s been great both times.

Ski areas

Andorra has two ski areas. On my first trip I went to Arinsal, which is part of the Vallnord area. More recently I went to Pas De La Casa, which is part of Grandvalira area.

Arinsal is very much a beginner resort, which was fine for my daughter just starting out, but I was pretty bored of it after a week (and I’d worked my way through most of the connected Pal area). So although I loved the town and its restaurants, I’m in no rush to go back (though I never got the bus link over to Arcalis to try that out).

Pas De La Casa is just the opposite. It needs confidence on red runs to get around it (and a willingness to take on blacks to get full advantage of the place). We were able to ski out pretty much all of the Pas and Grau Roig sectors, but the bottleneck of the Cubil lift out of Grau Roig meant that we left a good chunk of Soldeu and El Tarter sectors untouched.

Getting there

Transfers are available from Barcelona and Toulouse airports. I went via Barcelona both times due to better timings and prices for flights. Shared buses can be booked at Arinsal.co.uk and take about four hours each way (though the border crossing back to Spain can cause delays, especially later in the day – my neighbours missed their flight home last year).

Accommodation

I stayed at the Hotel Arinsal on the first trip, which was absolutely made by receptionist and barman Danny (and would likely be a totally different experience without him).

For the Pas trip I went with the Hotel Les Truites, which certainly lived up to its #1 billing on TripAdvisor. My neighbours stayed at Hotel Camelot, which seemed OK, and had a decent bar (and happy hour) for getting together après ski. Dinner at the Camelot wasn’t great though – we only used it once, because a ski injury to one of our group made it hard to go further afield. Some friends stayed at the very fancy Hotel Kandahar, which they loved (not least for its well appointed but surprisingly reasonable bar).

Equipment

[Photo: Racetiger SC UVO skis]

For both trips I ordered ski passes and ski/boot/helmet hire from Arinsal.co.uk, and both times around I treated myself to the top of the range ski package. In Arinsal I got Racetiger SC UVO skis, which were excellent, but nowhere near as good as the brand new pair of Lacroix Mach Carbon I had last week – easily the most fantastic skis I’ve ever had the pleasure of using.

[Photo: Lacroix Mach Carbon skis]

Good value

I found Arinsal surprisingly inexpensive (after spending most of my previous ski trips in Alpine French resorts). Pas seemed a little more expensive, but not so much that it changed any behaviour – it wasn’t pocket breaking to eat out on the slopes every day for lunch and in the town restaurants every night for dinner.

Conclusion

Arinsal was ideal for my daughter to learn to ski, but it lacks variety for more experienced skiers. I’d go back to Pas again, though maybe I’d be better off choosing El Tarter or Soldeu for an alternative entry point to the Grandvalira area.


Background

The last two interviews that I’ve done for InfoQ have been with Anil Madhavapeddy and Bryan Cantrill, and in both cases we talked about unikernels. Anil is very much pro unikernels, whilst Bryan takes the opposing view.

A long and rambling Twitter thread about oncoming architecture diversity in Docker images took a turn into the unikernel cul-de-sac the other day, and I was asked what I thought. It wasn’t something I was willing to address in 140 characters or less, so this post is here to do that.

Smoking the whole pack

Bryan made a point that anybody advocating unikernels should be ‘forced to smoke the whole pack’ and that soundbite was used by a bunch of people to promote the interview. It wasn’t traditional clickbait, but it seemed to have the desired effect. I must however confess that I was somewhat uncomfortable about being so closely associated with that quote – it’s what Bryan said, not my own opinion on the matter.

Debugging

The point behind Bryan’s position is that it’s basically impossible to debug unikernels in situ – they either work or they don’t. As most of us know, software fails all the time, so it should be obvious that software which can’t be debugged is a major hazard and, QED, shouldn’t ever be used.

In which Bryan makes Anil’s point for him

Elsewhere in the interview Bryan says that we’re surrounded by correct software:

Correct software is timeless, and people say “Yes, but software is never correct”, that’s not true, there is lots of correct software, there is correct software every day that our lives silently come into contact with correct perfectly working software.

Arguably if you have correct software in a unikernel then there’s no need to debug it, and the argument that the inability to debug it effectively in production is a major problem starts to subside.

So how do we make correct software?

That’s probably the multi $Bn question for many industries, and at this point the formal methods wonks that do safety critical stuff for planes and cars start to poke their noses in and it all gets very complicated…

But in general, keep it simple, keep it small, avoid side effects are all signposts on the path to righteousness, and I can almost hear Anil saying – ‘and that’s why we use OCaml’.

My conclusion

I don’t think unikernels are a general panacea – Bryan makes some very good points. I do however think there are use cases where software that’s small, simple, and most importantly likely to be correct makes unikernels appropriate.

Feel free to continue the argument/debate in the comments below.


TL;DR

Copying the contents of one SSD to a larger one (and making use of the extra space) should be simple, but there are a few gotchas. A combination of AOMEI Partition Assistant Standard and some command line tools got the job done though.

Background

SSDs have been reasonably cheap for some time, but now they’re really cheap. In the run up to Christmas I got a Sandisk Ultra II 960GB SSD for less than £150 (and right after Christmas they were on sale at Amazon for even less, so I got a couple for the in laws).

Copying rig

I used one of my old N40L HP MicroServers running Windows 7 to do the copying, as it’s easy to get disks in and out of it. To put 2.5″ drives into the 3.5″ bays I got a couple of Icy Dock adaptors, and to get mSATA drives in place I used an existing mSATA to 2.5″ adaptor, and bought a new one[1], which is uncased – but fine for the job.



Right tool for the job

That new SSD became the base of the pyramid for a drive shuffle that rippled through a bunch of my PCs. For the older systems I could have easily used my trusty old version 11 copy of Paragon Partition Manager[2], but for the newer systems using GUID partition tables (GPT) I needed something else. I found AOMEI Partition Assistant Standard did the job (mostly).

Almost there

AOMEI works pretty much the same as the Paragon Partition Manager I was used to, though if anything the copy disk function was easier to use (and offered a means to resize the main partition as part of the operation, which was the whole point of the exercise). The problem that I found with AOMEI is that it didn’t copy over the partition types and attributes, meaning that I had to fix things up with the DISKPART utility.

Fixing type and attributes

I launched two CMD windows using ‘Run As Administrator’, and then compared the source disk (e.g. disk 1) with the destination disk (e.g. disk 2). For example, this is what I see on the source disk:


DISKPART> select disk 1

Disk 1 is now the selected disk.

DISKPART> list partition

  Partition ###  Type              Size     Offset
  -------------  ----------------  -------  -------
  Partition 1    Recovery          300 MB   1024 KB
  Partition 2    System            99 MB    301 MB
  Partition 3    Reserved          128 MB   400 MB
  Partition 4    Primary           930 GB   528 MB

DISKPART> select partition 1

Partition 1 is now the selected partition.

DISKPART> detail partition

Partition 1
Type : de94bba4-06d1-4d40-a16a-bfd50179d6ac
Hidden : Yes
Required: Yes
Attrib : 0X8000000000000001
Offset in Bytes: 1048576

  Volume ###  Ltr  Label        Fs     Type        Size     Status    Info
  ----------  ---  -----------  -----  ----------  -------  --------  --------
* Volume 3         Recovery     NTFS   Partition   300 MB   Healthy   Hidden

DISKPART>

To make the partition on the target disk have the same attributes (in my second CMD window running DISKPART):


DISKPART> select disk 2

Disk 2 is now the selected disk.

DISKPART> select partition 1

Partition 1 is now the selected partition.

DISKPART> gpt attributes=0X8000000000000001

DiskPart successfully assigned the attributes to the selected GPT partition.

DISKPART> set id=de94bba4-06d1-4d40-a16a-bfd50179d6ac

DiskPart successfully set the partition ID.

DISKPART>

and then work through the other partitions ensuring that the type and attributes are set accordingly.
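
In practice that means running detail partition against each remaining partition on the source disk and mirroring its Type GUID (and any non-zero Attrib value) onto the matching partition on the destination. Purely by way of illustration – these are the standard Windows GUIDs for an EFI System partition, a Microsoft Reserved partition and a basic data partition, so confirm them against what detail partition actually reports on your source disk – partitions 2 to 4 would look something like:


DISKPART> select disk 2

DISKPART> select partition 2
DISKPART> set id=c12a7328-f81f-11d2-ba4b-00a0c93ec93b

DISKPART> select partition 3
DISKPART> set id=e3c9e316-0b5c-4db8-817d-f92df00215ae

DISKPART> select partition 4
DISKPART> set id=ebd0a0a2-b9e5-4433-87c0-68b6b72699c7

followed by a gpt attributes=… command for any partition where the source shows attributes other than zero.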

Fixing boot

Even after all of that I ended up with disks that wouldn’t boot. In each case I needed to boot from a Windows install USB (for the correct version of Windows[3]) and run the following on the repair command line (automatic repair never worked):


bootrec /fixmbr
bootrec /fixboot
bootrec /rebuildbcd

Notes

[1] I knew I’d be copying from mSATA to mSATA at various stages, which is why I needed two adaptors. I could have introduced an intermediate drive, but that would have just added time and risk, which wasn’t worth it for the sake of less than £10.
[2] A little while ago I gave version 12 a try, but it seemed to be lacking all of the features I found useful in version 11.
[3] My father in law’s recent upgrade to Windows 10 (when I thought he was still on 8.1) caused a small degree of anxiety when the /scanos and /rebuildbcd commands kept returning “Total identified Windows installations: 0”


Back in March I wrote about Using Overlay file system with Docker on Ubuntu – those instructions applied to Ubuntu before the switch to systemd e.g. 14.04 and earlier.

The move to systemd means that changes to /etc/default/docker don’t have any effect any more.

Getting systemd to dance along to our tune needs a file like this:

/etc/systemd/system/docker.service.d/overlay.conf

[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --storage-driver=overlay

To make this work use the following script (or get it from gist to avoid silly copy/paste replacement of < with &lt;):


sudo mkdir /etc/systemd/system/docker.service.d
sudo bash -c 'cat <<EOF > /etc/systemd/system/docker.service.d/overlay.conf
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --storage-driver=overlay
EOF'
sudo systemctl daemon-reload
sudo systemctl restart docker

Now when you run ‘sudo docker info’ you should see something like:


...
Storage Driver: overlay
 Backing Filesystem: extfs
...

At least you didn’t need to upgrade the kernel this time – small mercies.

NB this is somewhat documented (inaccurately, right now) in Control and configure Docker with systemd – I can feel a PR coming on.



