The United States Federal Communications Commission (FCC) has introduced ‘software security requirements’ obliging WiFi device manufacturers to “ensure that only properly authenticated software is loaded and operating the device”. The document specifically calls out the DD-WRT open source router project, but clearly also applies to other popular distributions such as OpenWRT. This could become an early battle in ‘The war on general purpose computing’ as many smartphones and Internet of Things devices contain WiFi router capabilities that would be covered by the same rules.
Filed under: technology, InfoQ news, WRTnode | Leave a Comment
Tags: android, wifi, router, FCC, firmware, CyanogenMod, open source, OpenWRT, WRTnode
I’m a big fan of Github Gist, as it’s an excellent way to store fragments of code and config.
Whatever I feel I can make public I do, and all of that stuff is easily searchable.
A bunch of my gists are private, sometimes because they contain proprietary information, sometimes because they’re for something so obscure that I don’t consider them useful for anybody else, and sometimes because they’re an unfinished work in progress that I’m not ready to expose to the world.
Over time I’ve become very sick of how difficult it is to find stuff that I know I’ve put into a gist. There’s no search for private gists, which means laboriously paging and scrolling through an ever expanding list trying to spot the one you’re after.
Those dark days are behind me now that I’m using Mark Percival’s Gist Evernote Import.
It’s a Ruby script that syncs gists into an Evernote notebook. It can be run out of a Docker container, which makes deployment very simple. Once the gists are in Evernote they become searchable, and are linked back to the original gist (as Evernote isn’t great at rendering source code).
I’ve used Evernote in the past, as it came with my scanner, but didn’t turn into a devotee. This now gives me a reason to use it more frequently, so perhaps I’ll put other stuff into it (though most of the things I can imagine wanting to note down would probably go into a gist). So far I’ve got along just fine with a basic free account.
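If you’d rather search without the Evernote round trip, the GitHub API will list your gists (secret ones included) when authenticated. A minimal sketch — the commented curl line assumes a personal access token in GITHUB_TOKEN, and the canned SAMPLE below is made-up data that just demonstrates the filtering pipeline:

```shell
# Real call (needs a token): lists your gists, secret ones included.
# curl -s -H "Authorization: token $GITHUB_TOKEN" \
#   "https://api.github.com/gists?per_page=100"

# Canned sample so the filter itself is visible:
SAMPLE='[{"description":"nginx reverse proxy config","html_url":"https://gist.github.com/x/1"},
{"description":"docker upgrade script","html_url":"https://gist.github.com/x/2"}]'

# Pull out the descriptions and grep for what you're after:
echo "$SAMPLE" | grep -o '"description":"[^"]*"' | grep -i docker
```

Run against the real API output the same pipeline gives you a quick and dirty search over everything, public and private alike.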
Filed under: howto | Leave a Comment
Tags: evernote, gist, github, note, search, sync
Apache 2.4 changes things a lot – particularly around authentication and authorisation.
I’m not the first to run into this issue of needing Apache 2.2 on Ubuntu 14.04 (which ships 2.4), but I didn’t find a single straight answer online. So here goes (as root):
# add Precise sources so that Apache 2.2 can be used
cat <<EOF >> /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu precise main restricted universe
deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe
deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
EOF
apt-get update

# install Apache 2.2 (from the Precise repos)
apt-get install -y apache2-mpm-prefork=2.2.22-1ubuntu1.9 \
    apache2-prefork-dev=2.2.22-1ubuntu1.9 \
    apache2.2-bin=2.2.22-1ubuntu1.9 \
    apache2.2-common=2.2.22-1ubuntu1.9
If you want mpm-worker then do this instead:
# add Precise sources so that Apache 2.2 can be used
cat <<EOF >> /etc/apt/sources.list
deb http://archive.ubuntu.com/ubuntu precise main restricted universe
deb http://archive.ubuntu.com/ubuntu precise-updates main restricted universe
deb http://security.ubuntu.com/ubuntu precise-security main restricted universe multiverse
EOF
apt-get update

# install Apache 2.2 (from the Precise repos)
apt-get install -y apache2=2.2.22-1ubuntu1.9 \
    apache2.2-common=2.2.22-1ubuntu1.9 \
    apache2.2-bin=2.2.22-1ubuntu1.9 \
    apache2-mpm-worker=2.2.22-1ubuntu1.9
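Either way, a later apt-get upgrade can still pull in 2.4 from the Trusty repos. Pinning guards against that — a sketch, with PREF standing in for /etc/apt/preferences.d/apache2 so it can run anywhere (write the heredoc straight to that path, as root, on a real box):

```shell
# PREF stands in for /etc/apt/preferences.d/apache2 in this sketch.
PREF=$(mktemp)
cat <<'EOF' > "$PREF"
Package: apache2*
Pin: release n=precise
Pin-Priority: 1001
EOF
cat "$PREF"
```

A priority above 1000 lets apt keep (or even downgrade to) the Precise versions even though Trusty’s are newer.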
Filed under: howto | Leave a Comment
Tags: 14.04, 2.2, Apache, Ubuntu
All of the major cloud providers now offer some means by which it’s possible to connect to them directly, meaning not over the Internet. This is generally positioned as helping with the following concerns:
- Bandwidth – getting a guaranteed chunk of bandwidth to the cloud and applications in it.
- Latency – having an explicit maximum latency on the connection.
- Privacy/security – not having traffic on the ‘open’ Internet.
The privacy/security point is quite bothersome as the ‘direct’ connection will often be in an MPLS tunnel on the same fibre as the Internet, or maybe a different strand running right alongside it. What makes this extra troublesome is that (almost) nobody is foolish enough to send sensitive data over the Internet without encryption, but many think a ‘private’ link is just fine for plain text traffic.
For some time I’d assumed that offerings like AWS Direct Connect, Azure ExpressRoute and Google Direct Peering were all just different marketing labels for the same thing, particularly as many of them tie in with services like Equinix’s Cloud Exchange. At the recent Google:Next event in London Equinix’s Shane Guthrie made a comment about network address translation (NAT) that caused me to scratch a little deeper, resulting in this post.
What’s the same
All of the services offer a means to connect private networks to cloud networks over a leased line rather than using the Internet. That’s pretty much where the similarity ends.
What’s different – AWS
Direct Connect is an 802.1q VLAN (layer 2) based service. There’s an hourly charge for the port (which varies by port speed), and also per GB egress charges that vary by location (ingress is free, just like on the Internet).
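Because it’s layer 2, the customer end of a Direct Connect virtual interface is plain 802.1q configuration. A Linux-flavoured sketch (the VLAN ID and the /30 addresses here are made-up examples — the real values come from the virtual interface settings in the AWS console):

```shell
# Tag a sub-interface for the Direct Connect VLAN (example ID 101)
ip link add link eth0 name eth0.101 type vlan id 101
ip addr add 169.254.255.2/30 dev eth0.101
ip link set eth0.101 up
# BGP peering to the AWS router is then brought up over eth0.101
```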
What’s different – Azure
ExpressRoute is a BGP (layer 3) based service, and it too charges by port speed, but the price is monthly (although it’s prorated hourly), and there are no further ingress/egress charges.
An interesting recent addition to the portfolio is ExpressRoute Premium, which enables a single connection to fan out across Microsoft’s private network into many regions rather than having to have point-to-point connections into each region being used.
What’s different – Google
Direct Peering is a BGP (layer 3) based service. The connection itself is free, with no port or per hour charges. Egress is charged for per GB, and varies by region.
 More discerning companies are now working with us to use VNS3 on their ‘direct’ connections, in part because all of the cloud VPN services are tied to their Internet facing infrastructure.
 There’s some great background on how this was built in Sam Johnston’s Leaving Equinix post.
 This is a little peculiar, as AWS itself doesn’t expose anything else at layer 2.
This post first appeared on the Cohesive Networks Blog
Filed under: cloud, CohesiveFT, networking | Leave a Comment
Tags: amazon, aws, Azure, cloud, direct connect, direct peering, expressroute, GCE, GCP, google, Microsoft, network
A friend emailed me yesterday saying he was ‘trying to be better informed on security topics’ and asking for suggestions on blogs etc. Here’s my reply…
For security stuff first read (or at least skim) Ross Anderson’s Security Engineering (UK|US) – it’s basically the bible for infosec. Don’t be scared that it’s now seven years old – nothing has fundamentally changed.
Blogger Gunnar Peterson once said there are only two tools in security – state checking and encryption – so I find it very useful to ask myself each time I look at something which of the two it is doing (or what blend).
Other stuff worth following:
Light Blue Touchpaper (the blog of Ross Anderson’s security group at Cambridge University)
Freedom To Tinker (the blog of Ed Felten’s group at Princeton University)
Chris Hoff’s Rational Survivability
An important point that emerges here is that even though there’s a constant drumbeat of security related news, not much is changing at a fundamental level, which is why it’s important to ensure that the basic blocking and tackling is taken care of, and that you build systems that are ‘rugged software’.
This post originally appeared on the Cohesive Networks Blog.
Filed under: security, Uncategorized | 1 Comment
Many of the big data technologies in common use originated at Google and have become popular open source platforms, but now Google is bringing an increasing range of big data services to market as part of its Google Cloud Platform. InfoQ caught up with Google’s William Vambenepe, lead product manager for big data services, to ask him about the shift towards service based consumption.
Filed under: cloud, InfoQ news | Leave a Comment
Tags: big data, google, InfoQ
Dockercon #2 is underway, version 1.7.0 of Docker was released at the end of last week, and lots of other new toys are being launched. Time for some upgrades.
I got used to Docker always restarting containers when the daemon restarted (which included upgrades), but that behaviour went away around version 1.3.0 with the introduction of the new --restart policies.
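Under the new scheme restarts are opt-in, per container. A sketch (needs a running Docker daemon; the container names and the nginx image are just example values):

```shell
# Come back automatically whenever the daemon restarts, e.g. after an upgrade
docker run -d --name my-web --restart=always nginx

# Or only restart on non-zero exit, giving up after 5 attempts
docker run -d --name my-batch --restart=on-failure:5 nginx
```

Containers started with a policy like this don’t need to be manually restarted after an upgrade, which shrinks the problem the script below has to solve.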
Here’s a little script to automate upgrading and restarting the containers that were running:
#!/bin/bash
datenow=$(date +%s)
sudo docker ps > /tmp/docker."$datenow"
sudo apt-get update && sudo apt-get install -y lxc-docker
sudo docker start $(tail -n +2 /tmp/docker."$datenow" | cut -c1-12)
I also ran into some problems with Ubuntu VMs where I’d installed from the old docker.io repos that have now moved to docker.com.
I needed to change /etc/apt/sources.list.d/docker.list from:
deb http://get.docker.io/ubuntu docker main
to:
deb https://get.docker.com/ubuntu docker main
The switch to HTTPS also meant I needed to:
apt-get install apt-transport-https
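Across several VMs it’s quicker to script the edit. A sketch — LIST stands in for /etc/apt/sources.list.d/docker.list so it can run anywhere; point sed at the real file (as root) to do it for real:

```shell
# LIST stands in for /etc/apt/sources.list.d/docker.list in this sketch
LIST=$(mktemp)
echo 'deb http://get.docker.io/ubuntu docker main' > "$LIST"

# Swap the old docker.io host for docker.com, switching to HTTPS too
sed -i 's|http://get.docker.io|https://get.docker.com|' "$LIST"
cat "$LIST"   # deb https://get.docker.com/ubuntu docker main
```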
Filed under: code, Docker, howto | Leave a Comment
Tags: Docker, script, upgrade
BanyanOps have published a report stating that ‘Over 30% of Official Images in Docker Hub Contain High Priority Security Vulnerabilities’, which include some of the sensational 2014 issues such as ShellShock and Heartbleed. The analysis also looks at user generated ‘general’ repositories and finds an even greater level of vulnerability. Their conclusion is that images should be actively screened for security issues and patched accordingly.
Filed under: Docker, InfoQ news, security | Leave a Comment
Tags: Docker, InfoQ, security
Yesterday I delivered a tutorial as part of the Open Network Users Group (ONUG) Academy:
To go through the tutorial yourself you’ll need an AWS account and an SSH client (and the Internet access and browser you’re using to read this).
To complement the slides there’s a wiki on GitHub with all of the relevant command line snippets and explanations of what’s going on. The materials are Creative Commons Attribution-ShareAlike 4.0 licensed, so please feel free to make derivatives, just please give me some attribution for creating this (it took a lot of work).
Filed under: Docker, networking, presentation | 1 Comment
Tags: containers, Docker, networking, ONUG, tutorial
At last week’s Ignite conference Microsoft announced a set of new networking capabilities for its Azure cloud, described as being ‘for a consistent, connected and hybrid cloud’. The new capabilities include improvements to ExpressRoute, Azure’s Internet bypass offering, availability of ExpressRoute for SaaS offerings such as Office 365 and Skype for Business, additional VPN capabilities and enhancement of virtual networks in Azure’s IaaS.
Filed under: cloud, InfoQ news, networking | Leave a Comment
Tags: aws, Azure, cloud, networking, PowerShell, SDN, vpn