The United States Federal Communications Commission (FCC) has introduced ‘software security requirements’ obliging WiFi device manufacturers to “ensure that only properly authenticated software is loaded and operating the device”. The document specifically calls out the DD-WRT open source router project, but clearly also applies to other popular distributions such as OpenWRT. This could become an early battle in ‘The war on general purpose computing’ as many smartphones and Internet of Things devices contain WiFi router capabilities that would be covered by the same rules.

continue reading the full story at InfoQ


I’m a big fan of GitHub Gist, as it’s an excellent way to store fragments of code and config.

Whatever I feel I can make public I do, and all of that stuff is easily searchable.

A bunch of my gists are private, sometimes because they contain proprietary information, sometimes because they’re for something so obscure that I don’t consider them useful for anybody else, and sometimes because they’re an unfinished work in progress that I’m not ready to expose to the world.

Over time I’ve become very sick of how difficult it is to find stuff that I know I’ve put into a gist. There’s no search for private gists, which means laboriously paging and scrolling through an ever expanding list trying to spot the one you’re after.

Those dark days are behind me now that I’m using Mark Percival’s Gist Evernote Import.

It’s a Ruby script that syncs gists into an Evernote notebook. It can be run out of a Docker container, which makes deployment very simple. Once the gists are in Evernote they become searchable, and are linked back to the original gist (as Evernote isn’t great at rendering source code).
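For illustration, running it looks roughly like the sketch below. The environment variable names and image tag are assumptions made for the example (and assume you’ve built the image yourself from the project’s Dockerfile); the project’s README has the real instructions.

# illustrative only: variable names and image tag are assumptions, check the project README
docker build -t gist-evernote-import .
docker run -d \
  -e GITHUB_TOKEN=<your GitHub token> \
  -e EVERNOTE_TOKEN=<your Evernote developer token> \
  gist-evernote-import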

I’ve used Evernote in the past, as it came with my scanner, but didn’t turn into a devotee. This now gives me a reason to use it more frequently, so perhaps I’ll put other stuff into it (though most of the things I can imagine wanting to note down would probably go into a gist). So far I’ve got along just fine with a basic free account.


Apache 2.4 changes things a lot – particularly around authentication and authorisation.

I’m not the first to run into this issue, but I didn’t find a single straight answer online. So here goes (as root):

# add Precise sources so that Apache 2.2 can be used
cat <<EOF >> /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu precise main restricted universe
deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe
deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
EOF
# refresh the package index so the Precise packages become visible
apt-get update
# install Apache 2.2 (from the Precise repos)
apt-get install -y apache2-mpm-prefork=2.2.22-1ubuntu1.9 \
  apache2-prefork-dev=2.2.22-1ubuntu1.9 \
  apache2.2-bin=2.2.22-1ubuntu1.9 \
  apache2.2-common=2.2.22-1ubuntu1.9

If you want mpm-worker then do this instead:

# add Precise sources so that Apache 2.2 can be used
cat <<EOF >> /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu precise main restricted universe
deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe
deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
EOF
# refresh the package index so the Precise packages become visible
apt-get update
# install Apache 2.2 (from the Precise repos)
apt-get install -y apache2=2.2.22-1ubuntu1.9 \
  apache2.2-common=2.2.22-1ubuntu1.9 \
  apache2.2-bin=2.2.22-1ubuntu1.9 \
  apache2-mpm-worker=2.2.22-1ubuntu1.9
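One thing to watch: the host release’s own repos still carry Apache 2.4, so a later apt-get upgrade will try to pull you forward again. Here’s a minimal sketch of pinning the Precise packages to prevent that; the file name and priority are my own choices rather than anything from the original recipe:

# pin the Apache packages to the Precise versions so apt won't 'upgrade' back to 2.4
cat <<EOF > /etc/apt/preferences.d/apache22
Package: apache2*
Pin: release n=precise
Pin-Priority: 1001
EOF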

All of the major cloud providers now offer some means of connecting to them directly, rather than over the Internet. This is generally positioned as helping with the following concerns:

  1. Bandwidth – getting a guaranteed chunk of bandwidth to the cloud and applications in it.
  2. Latency – having an explicit maximum latency on the connection.
  3. Privacy/security – of not having traffic on the ‘open’ Internet.

The privacy/security point is quite bothersome as the ‘direct’ connection will often be in an MPLS tunnel on the same fibre as the Internet, or maybe a different strand running right alongside it. What makes this extra troublesome is that (almost) nobody is foolish enough to send sensitive data over the Internet without encryption, but many think a ‘private’ link is just fine for plain text traffic[1].

For some time I’d assumed that offerings like AWS Direct Connect, Azure ExpressRoute and Google Direct Peering were all just different marketing labels for the same thing, particularly as many of them tie in with services like Equinix’s Cloud Exchange[2]. At the recent Google:Next event in London Equinix’s Shane Guthrie made a comment about network address translation (NAT) that caused me to scratch a little deeper, resulting in this post.

[Image: direct connect comparison]

What’s the same

All of the services offer a means to connect private networks to cloud networks over a leased line rather than using the Internet. That’s pretty much where the similarity ends.

What’s different – AWS

Direct Connect is an 802.1q VLAN (layer 2) based service[3]. There’s an hourly charge for the port (which varies by port speed), plus per-GB egress charges that vary by location (ingress is free, just like on the Internet).

What’s different – Azure

ExpressRoute is a BGP (layer 3) based service, and it too charges by port speed, but the price is monthly (although it’s prorated hourly), and there are no further ingress/egress charges.

An interesting recent addition to the portfolio is ExpressRoute Premium, which lets a single connection fan out across Microsoft’s private network into many regions, rather than needing separate point-to-point connections into each region being used.

What’s different – Google

Direct Peering is a BGP (layer 3) based service. The connection itself is free, with no port or per-hour charges. Egress is charged per GB, and varies by region.

Summary table

Cloud      Type   Port charge   Egress charge
Amazon     VLAN   $             $
Microsoft  BGP    $             –
Google     BGP    –             $

Notes

[1] More discerning companies are now working with us to use VNS3 on their ‘direct’ connections, in part because all of the cloud VPN services are tied to their Internet facing infrastructure.
[2] There’s some great background on how this was built in Sam Johnston’s Leaving Equinix post.
[3] This is a little peculiar, as AWS itself doesn’t expose anything else at layer 2.

This post first appeared on the Cohesive Networks Blog


A friend emailed me yesterday saying he was ‘trying to be better informed on security topics’ and asking for suggestions on blogs etc. Here’s my reply…

For security stuff first read (or at least skim) Ross Anderson’s Security Engineering (UK|US) – it’s basically the bible for infosec. Don’t be scared that it’s now seven years old – nothing has fundamentally changed.

Blogger Gunnar Peterson once said there are only two tools in security – state checking and encryption – so I find it very useful to ask myself, each time I look at something, which of the two it’s doing (or what blend).

Another seminal work is Ian Grigg’s The Market for Silver Bullets, and it’s well worth following his financial cryptography blog.

Everything else I’ve ever found interesting on the topic of security is on my pinboard tag, and you can get an RSS feed to that.

Other stuff worth following:

Cigital
Light Blue Touchpaper (the blog of Ross Anderson’s security group at Cambridge University)
Bruce Schneier
Freedom To Tinker (the blog of Ed Felten’s group at Princeton University)
Chris Hoff’s Rational Survivability

Also keep an eye on the papers from WEIS and USENIX Security (and try not to get too sucked in by the noise from Blackhat/DefCon).

An important point that emerges here is that even though there’s a constant drumbeat of security related news, not that much is changing at a fundamental level, which is why it’s important to ensure that the basic blocking and tackling is taken care of, and that you build systems that are ‘rugged software’.

This post originally appeared on the Cohesive Networks Blog.


Many of the big data technologies in common use originated from Google and have become popular open source platforms, but now Google is bringing an increasing range of big data services to market as part of its Google Cloud Platform. InfoQ caught up with Google’s William Vambenepe, lead product manager for big data services, to ask him about the shift towards service-based consumption.

continue reading the full story at InfoQ


Dockercon #2 is underway, version 1.7.0 of Docker was released at the end of last week, and lots of other new toys are being launched. Time for some upgrades.

I got used to Docker always restarting containers when the daemon restarted, which included upgrades, but that behaviour went away around version 1.3.0 with the introduction of the new --restart policies.
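If you do want a container to come back by itself after a daemon restart, the policy now has to be set explicitly when the container is started. A trivial example (the image name here is just a placeholder):

# example only - substitute your own image and options
docker run -d --restart=always nginx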

Here’s a little script to automate upgrading and restarting the containers that were running:

upgrade-docker.sh

#!/bin/bash
# capture the list of running containers before upgrading
datenow=$(date +%s)
sudo docker ps > /tmp/docker."$datenow"
# upgrade the Docker package (lxc-docker) from the repo
sudo apt-get update && sudo apt-get install -y lxc-docker
# restart everything that was running (first 12 characters of each line are the container ID)
sudo docker start $(tail -n +2 /tmp/docker."$datenow" | cut -c1-12)
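Make it executable and run it whenever a new release lands:

chmod +x upgrade-docker.sh
./upgrade-docker.sh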

I also ran into some problems with Ubuntu VMs where I’d installed from the old docker.io repos that have now moved to docker.com.

I needed to change /etc/apt/sources.list.d/docker.list from:

deb http://get.docker.io/ubuntu docker main

to:

deb https://get.docker.com/ubuntu docker main
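If there are a few VMs to fix then the edit is easily scripted; this sed one-liner makes the same change in place:

# rewrite the old docker.io repo line in place
sed -i 's|http://get.docker.io|https://get.docker.com|' /etc/apt/sources.list.d/docker.list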

The switch to HTTPS also meant I needed to:

apt-get install apt-transport-https

BanyanOps have published a report stating that ‘Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities’, which include some of the sensational 2014 issues such as ShellShock and Heartbleed. The analysis also looks at user generated ‘general’ repositories and finds an even greater level of vulnerability. Their conclusion is that images should be actively screened for security issues and patched accordingly.

continue reading the full story at InfoQ

[Image: Official Images with Vulnerabilities]


Yesterday I delivered a tutorial as part of the Open Network Users Group (ONUG) Academy.

To go through the tutorial yourself you’ll need an AWS account and an SSH client (and the Internet access and browser you’re using to read this).

To complement the slides there’s a wiki on GitHub with all of the relevant command line snippets and explanations of what’s going on. The materials are licensed under Creative Commons Attribution-ShareAlike 4.0, so feel free to make derivatives; just please give me some attribution for creating this (it took a lot of work).

If you like this then you might also want to take a look at the Cloud Networking Workshop I did for the Open Data Center Alliance (ODCA) Forecast event last year.


At last week’s Ignite conference Microsoft announced a set of new networking capabilities for its Azure cloud, described as being ‘for a consistent, connected and hybrid cloud’. The new capabilities include improvements to ExpressRoute, Azure’s Internet bypass offering, availability of ExpressRoute for SaaS offerings such as Office 365 and Skype for Business, additional VPN capabilities and enhancements to virtual networks in Azure’s IaaS.

continue reading the full story at InfoQ

[Image: Azure routes]



