All of the major cloud providers now offer some means of connecting to them directly, rather than over the Internet. This is generally positioned as addressing the following concerns:

  1. Bandwidth – getting a guaranteed chunk of bandwidth to the cloud and applications in it.
  2. Latency – having an explicit maximum latency on the connection.
  3. Privacy/security – of not having traffic on the ‘open’ Internet.

The privacy/security point is quite bothersome as the ‘direct’ connection will often be in an MPLS tunnel on the same fibre as the Internet, or maybe a different strand running right alongside it. What makes this extra troublesome is that (almost) nobody is foolish enough to send sensitive data over the Internet without encryption, but many think a ‘private’ link is just fine for plain text traffic[1].

For some time I’d assumed that offerings like AWS Direct Connect, Azure ExpressRoute and Google Direct Peering were all just different marketing labels for the same thing, particularly as many of them tie in with services like Equinix’s Cloud Exchange[2]. At the recent Google:Next event in London Equinix’s Shane Guthrie made a comment about network address translation (NAT) that caused me to scratch a little deeper, resulting in this post.

Direct connect comparison

What’s the same

All of the services offer a means to connect private networks to cloud networks over a leased line rather than using the Internet. That’s pretty much where the similarity ends.

What’s different – AWS

Direct Connect is an 802.1q VLAN (layer 2) based service[3]. There's an hourly charge for the port (which varies by port speed), plus per-GB egress charges that vary by location (ingress is free, just as on the Internet).

What’s different – Azure

ExpressRoute is a BGP (layer 3) based service, and it too charges by port speed, but the price is monthly (although it’s prorated hourly), and there are no further ingress/egress charges.

An interesting recent addition to the portfolio is ExpressRoute Premium, which enables a single connection to fan out across Microsoft’s private network into many regions rather than having to have point-to-point connections into each region being used.

What’s different – Google

Direct Peering is a BGP (layer 3) based service. The connection itself is free, with no port or per hour charges. Egress is charged for per GB, and varies by region.

Summary table

Cloud       Type   Port   Egress
Amazon      VLAN   $      $
Microsoft   BGP    $      –
Google      BGP    –      $
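Since two of the three services charge for egress per GB, the dominant cost driver depends heavily on traffic volume. Here's a trivial sketch for estimating that component; the $0.02/GB rate in the example is a made-up placeholder, not a real price from any provider's sheet (actual rates vary by region and change over time):

```shell
# Rough monthly egress cost for a given transfer volume.
# Rates here are illustrative placeholders only.
egress_cost() {  # usage: egress_cost <GB transferred> <rate per GB>
  awk -v gb="$1" -v rate="$2" 'BEGIN { printf "%.2f\n", gb * rate }'
}

egress_cost 500 0.02   # 500 GB at a hypothetical $0.02/GB
```

For Microsoft's flat port pricing the equivalent exercise is just the monthly port fee, which is why the break-even between the models shifts with how much data actually leaves the cloud.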

Notes

[1] More discerning companies are now working with us to use VNS3 on their ‘direct’ connections, in part because all of the cloud VPN services are tied to their Internet facing infrastructure.
[2] There’s some great background on how this was built in Sam Johnston’s Leaving Equinix post.
[3] This is a little peculiar, as AWS itself doesn’t expose anything else at layer 2.

This post first appeared on the Cohesive Networks Blog


A friend emailed me yesterday saying he was ‘trying to be better informed on security topics’ and asking for suggestions on blogs etc. Here’s my reply…

For security stuff first read (or at least skim) Ross Anderson’s Security Engineering (UK|US) – it’s basically the bible for infosec. Don’t be scared that it’s now seven years old – nothing has fundamentally changed.

Blogger Gunnar Peterson once said there are only two tools in security – state checking and encryption – so I find it very useful to ask myself, each time I look at something, which it is doing (or what blend of the two).

Another seminal work is Ian Grigg’s The Market for Silver Bullets, and it’s well worth following his financial cryptography blog.

Everything else I’ve ever found interesting on the topic of security is on my pinboard tag, and you can get an RSS feed to that.

Other stuff worth following:

Cigital
Light Blue Touch Paper (blog for Ross Anderson’s security group at Cambridge University)
Bruce Schneier
Freedom To Tinker (blog for Ed Felten’s group at Princeton University)
Chris Hoff’s Rational Survivability

Also keep an eye on the papers for WEIS and Usenix security (and try not to get too sucked in by the noise from Blackhat/DefCon).

An important point that emerges here is that even though there’s a constant drumbeat of security related news, not much is changing at a fundamental level, which is why it’s important to ensure that the basic blocking and tackling is taken care of, and that you build systems that are ‘rugged software’.

This post originally appeared on the Cohesive Networks Blog.

Update 17 Nov 2015 – Stephen Bonner pointed out that I should also recommend Krebs on Security.

Update 4 May 2017 – Dick Morrell suggested Cybersecurity Exposed as a more ‘manager level intro’ on the topic.

Update 3 Sep 2019 – I checked in with Gunnar for the original source of ‘two tools’ and he pointed out that the original source was Blaine Burnham at Usenix Security saying ‘in computer security we basically only have two working mechanisms (which aint enough but that’s another story). One is the reference monitor, and the other is crypto.’

Update 15 Dec 2019 – a thread from Goldman Sachs security leader Phil Venables on ‘non-technical’ books for security people. They all look pretty technical to me, but maybe non security.


Many of the big data technologies in common use originated from Google and have become popular open source platforms, but now Google is bringing an increasing range of big data services to market as part of its Google Cloud Platform. InfoQ caught up with Google’s William Vambenepe, who’s lead product manager for big data services to ask him about the shift towards service based consumption.

continue reading the full story at InfoQ


Dockercon #2 is underway, version 1.7.0 of Docker was released at the end of last week, and lots of other new toys are being launched. Time for some upgrades.

I got used to Docker always restarting containers when the daemon restarted, which included upgrades, but that behaviour went away around version 1.3.0 with the introduction of the new --restart policies.

Here’s a little script to automate upgrading and restarting the containers that were running:

upgrade-docker.sh

#!/bin/bash
# Snapshot the list of running containers before the upgrade
datenow=$(date +%s)
sudo docker ps > /tmp/docker."$datenow"
# Upgrade the Docker package (named lxc-docker in these repos)
sudo apt-get update && sudo apt-get install -y lxc-docker
# Restart each container that was running: skip the header line
# and take the 12-character container ID column
sudo docker start $(tail -n +2 /tmp/docker."$datenow" | cut -c1-12)

I also ran into some problems with Ubuntu VMs where I’d installed from the old docker.io repos that have now moved to docker.com.

I needed to change /etc/apt/sources.list.d/docker.list from:

deb http://get.docker.io/ubuntu docker main

to:

deb https://get.docker.com/ubuntu docker main

The switch to HTTPS also meant I needed to:

apt-get install apt-transport-https
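Those two steps are easy to script if you have a few machines to fix up. A rough sketch, assuming the stock file location shown above (run as root, or prefix with sudo):

```shell
# Migrate an apt source from the old docker.io repo to docker.com.
# LIST can be overridden for testing; defaults to the stock path.
LIST="${LIST:-/etc/apt/sources.list.d/docker.list}"

# Rewrite the old entry in place
sed -i 's|http://get.docker.io/ubuntu|https://get.docker.com/ubuntu|' "$LIST"

# The new repo is HTTPS-only, so apt needs the HTTPS transport
apt-get install -y apt-transport-https
apt-get update
```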

BanyanOps have published a report stating that ‘Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities’, which include some of the sensational 2014 issues such as ShellShock and Heartbleed. The analysis also looks at user generated ‘general’ repositories and finds an even greater level of vulnerability. Their conclusion is that images should be actively screened for security issues and patched accordingly.

continue reading the full story at InfoQ

Official Images with Vulnerabilities


Yesterday I delivered a tutorial as part of the Open Network Users Group (ONUG) Academy:

To go through the tutorial yourself you’ll need an AWS account and an SSH client (and the Internet access and browser you’re using to read this).

To complement the slides there’s a wiki on GitHub with all of the relevant command line snippets and explanations of what’s going on. The materials are Creative Commons license 4.0 Attribution Sharealike licensed, so please feel free to make derivatives, just please give me some attribution for creating this (it took a lot of work).

If you like this then you might also want to take a look at the Cloud Networking Workshop I did for the Open Data Center Alliance (ODCA) Forecast event last year.


At last week’s Ignite conference Microsoft announce a set of new networking capabilities for its Azure cloud described as being ‘for a consistent, connected and hybrid cloud’. The new capabilities include improvements to ExpressRoute, Azure’s Internet bypass offering, availability of ExpressRoute for SaaS offerings such as Office 365 and Skype for Business, additional VPN capabilities and enhancement of virtual networks in Azure’s IaaS.

continue reading the full story at InfoQ

Azure routes


Docker Inc have worked with the Center for Internet Security (CIS) to produce a benchmark document [pdf] containing numerous recommendations for the security of Docker deployments. The benchmark was announced in a blog post ‘Understanding Docker Security and Best Practices’ by Diogo Mónica, who was recently hired along with Nathan McCauley to lead the Docker Security team. The team have also released an ‘Introduction to Container Security’ [pdf] white paper.

continue reading the full story at InfoQ


For the last few years the fantastic chaps at GreenQloud have been hosting my automated builds for OpenELEC. Sadly (for me) their business is shifting from running a cloud to selling their ‘QStack‘ cloud platform to others, so GreenQloud are shutting down their IaaS (so that they’re not competing with their customers).

I’m pleased to say that the guys at Bytemark’s BigV cloud have stepped in to offer hosting, so the PiChimney site is now proudly wearing their banner:


NAT in the hat

06Apr15

TL;DR

Whilst on vacation in Spain I’ve found networks that seem to be like something out of a Cory Doctorow novel – domestic WiFi routers hanging off domestic WiFi routers hanging off domestic WiFi routers. At first I thought it was my Airbnb host being cheap and having a cosy arrangement with a neighbour to provide Internet, but it’s much more systematic than that.

Cats in hats

Six routers deep

Here’s a traceroute from my laptop:

Tracing route to google-public-dns-a.google.com [8.8.8.8] over a maximum of 30 hops:

1 3 ms 1 ms 1 ms 192.168.0.1
2 6 ms 7 ms 5 ms . [192.168.2.1]
3 9 ms 8 ms 7 ms 192.168.1.20
4 963 ms 940 ms 697 ms 192.168.10.1
5 368 ms 464 ms 159 ms homestation.Home [192.168.1.1]
6 685 ms 728 ms 769 ms 192.168.144.1
7 * * * Request timed out.
8 1580 ms 658 ms 588 ms 109.Red-80-58-106.staticIP.rima-tde.net [80.58.106.109]
9 * * * Request timed out.
10 3538 ms 2147 ms 1566 ms GOOGLE-Ae2-GRAMADNO2.red.telefonica-wholesale.net [5.53.1.74]
11 723 ms 397 ms 877 ms 216.239.50.199
12 1975 ms 1198 ms 1047 ms 216.239.50.177
13 865 ms 431 ms 425 ms google-public-dns-a.google.com [8.8.8.8]

Trace complete.

That’s six different routers on RFC1918 private networks (and a lot of latency) before I hit the Internet proper. It’s also a whole lot of NATting and way too much potential flakiness. At its best I’ve seen 2Mb/s, but in reality it seems lucky when packets get through at all, and amazing when Skype works[1].
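If you want to do the same count on your own connection, a quick way is to grep a saved traceroute for RFC1918 addresses. A rough sketch (the pattern is a loose match for the 10/8, 172.16/12 and 192.168/16 ranges, not a full IP parser):

```shell
# Count hops in a saved traceroute that sit on RFC1918 private address space.
# Loose regex: matches 10.x, 172.16-31.x and 192.168.x, one hop per line.
count_private_hops() {
  grep -cE '(^|[^0-9.])(10\.|172\.(1[6-9]|2[0-9]|3[01])\.|192\.168\.)' "$1"
}
```

Running it over the trace above reports six private hops before the first public address appears.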

I’ll try to unpick what’s going on at each router in turn…

Router 1

The router in the house I’m renting is my old friend the TP-Link TL-WR841N. I have one of these at home running OpenWRT, but the one here has the (awful) stock firmware on it. Luckily the admin password hasn’t been changed, which came in handy when it needed some help reconnecting after a long power cut.

router

The WAN link of the router is connected to a TP-Link powerline adaptor.

powerline

At first I thought this was connected through to a neighbour, but that was because I was looking downstairs for an ADSL modem or similar that wasn’t there. When I looked upstairs (in the laundry room) I found its twin attached to a Ubiquiti power over ethernet coupler:

poe

and that was for a WiFi antenna mounted on the roof:

antenna


A quick detour to Solyaires Internet

I didn’t set the system up, and I don’t pay the bill, but my research would suggest that it’s connected to Solyaires Internet or some similar system for distributing Internet via 5GHz WiFi connections. So instead of a community effort where people create a mesh network to share, this seems to be a commercial endeavour (and it’s not a mesh – more like a spider’s web).

One amusing thing I’ve noticed is that my in-laws’ apartment building (which is miles away from the house I’m renting) has exactly the same egress IP onto the Internet. Here’s their tracert (oddly, despite fewer layers of router they get much worse bandwidth):


Tracing route to google-public-dns-a.google.com [8.8.8.8] over a maximum of 30 hops:

1 <1 ms <1 ms 1 ms 192.168.100.1
2 1 ms 1 ms <1 ms homestation [192.168.1.1]
3 107 ms 56 ms 42 ms 192.168.144.1
4 * * * Request timed out.
5 62 ms 72 ms 102 ms 109.Red-80-58-106.staticIP.rima-tde.net [80.58.106.109]
6 * * * Request timed out.
7 160 ms 142 ms 136 ms GOOGLE-Ae2-GRAMADNO2.red.telefonica-wholesale.net [5.53.1.74]
8 67 ms 59 ms 58 ms 216.239.50.197
9 58 ms 65 ms 60 ms 209.85.254.9
10 57 ms 59 ms 59 ms google [8.8.8.8]

Trace complete.

Router 2

The second router along is a Belkin F7D1301, which judging by the Amazon reviews is a very ordinary router indeed. It has no password set, so the admin interface is wide open, which is obviously a terrible idea from a security perspective. My best guess as to what’s going on here is that the WiFi distribution outfit use some of their customers as Internet mules, acting as a relay from one point to the next. It’s pretty shocking how amateurish the setup is though.

Router 3

The third router doesn’t have an open admin interface. Looking at its response headers I see a Boa 0.93.15 web server, which could suggest a Zyxel/Edimax piece of kit (which might be a full router, or might be some sort of ‘range extender’). That web server is susceptible to a basic authentication bypass exploit, but I wasn’t feeling nefarious enough to pwn it (this was a look but don’t touch exercise). The basic auth prompt was ‘Graham-New’ so I suspect it’s a wise home user (another relay mule?) rather than something professionally configured.
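Header fingerprinting like this is easy to do from the command line. A small sketch (it assumes the router answers plain HTTP on port 80, which was the case here):

```shell
# Extract the Server header from an HTTP response read on stdin.
server_header() { grep -i '^Server:' | head -n1 | tr -d '\r'; }

# Fetch only the headers from a router's admin interface and fingerprint it.
# curl -s silences progress output, -I asks for headers only.
probe_router() { curl -sI --max-time 5 "http://$1/" | server_header; }
```

A response of `Server: Boa/0.93.15` is what pointed at embedded Zyxel/Edimax firmware in this case.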

Router 4

This one has an airOS admin screen implying something from Ubiquiti networks, and likely kit that’s run by an actual service provider rather than sat in somebody’s home.

Routers 5&6

Neither of these had web admin screens on ports 80 or 443 so I have less to go on (but at least they’re somewhat secure)[2].

The homestation.Home hop implies that we’re back to consumer ADSL gear, and my best guess is that the WiFi connections are being back-hauled by a bunch of consumer grade ADSL links.

The final 192.168.x.y router might just be the local telco being awful and aggregating many ADSL connections onto one public IP.

Part of a broader broadband problem?

I asked a friend who lives and works in Spain about her experiences, and she said ‘it’s unreliable, it’s slow, and the telephone companies are from the last century’. Flicking through local papers I also see that WiFi delivery is a pretty normal offering, and priced in line with ADSL services at around €24/month.

Whilst here I’ve been lucky enough to see Spain included in Three’s ‘Feel at home‘ roaming deal, which means I’ve also been able to check out 3G service. The 3G I’m getting is pretty typical of a mobile service – when it’s good it’s OK (~1Mb/s), when it’s bad it’s not there at all.

In general I’d say that the house WiFi and 3G are about on par in terms of bandwidth and reliability – good enough for keeping up with what’s going on in the world beyond, but not so good that I’d want to depend on it for any kind of business use.

Conclusion

Something must be very wrong with the Internet connectivity market in the Costa Tropical (and perhaps Spain more generally) for this type of arrangement to be tolerable (never mind commonplace).  I’ve been visiting Almuñécar for many years now, and back in the early days the ADSL provision seemed to be much the same as back home in the UK. I get the feeling that the FTTC connection I have at home now would be considered enough to serve hundreds of properties. It’s been great to see investment in infrastructure like roads over the past decade, but it’s a shame that the technology infrastructure hasn’t had the same attention.

Notes

[1] I got so sick of large downloads from my home network failing that I’ve lashed up a combination of autossh and bittorrent so that it will download things eventually, and I don’t have to burden the network (and my mouse finger) with redoing the same failed file time and time again.
[2] I’m guessing that the homestation in the path from my in-laws’ was a different one, as in addition to being ‘homestation’ rather than ‘homestation.Home’ in the traceroute it also serves up an admin GUI over the web.