tommorris.org

Discussing software, the web, politics, sexuality and the unending supply of human stupidity.


security


How to use a YubiKey to secure all the things

For a while I’ve been using Yubico’s blue FIDO U2F key. It enables you to use FIDO U2F for web services. Currently, the main sites that support Universal 2nd Factor (U2F) login are Google, Facebook, GitHub, Dropbox, Fastmail and the password manager Dashlane. It’d be great to see more sites added to this list, but this is a good start.

I upgraded recently from the blue FIDO key to the black YubiKey 4. The full YubiKey can be used for a few things:

  • website 2FA using FIDO U2F
  • login to your computer using PIV
  • GPG
  • SSH

It took a while before I got it set up for each of these things.

U2F

This is the easy bit. You go to the relevant websites, you activate the U2F mode and you put the key in as required. This works the same whether you use the blue FIDO U2F-only key or the black YubiKey 4.

Generally, the way to use U2F is in addition to TOTP tokens. U2F is a nicer experience than TOTP on laptops and most desktops. (Really, some USB ports are poorly designed for actually using a YubiKey. The USB ports on either side of the old Mac keyboards are bad. The two USB ports on my Das mechanical keyboard are perfect.)

Computer login using PIV

You can set up your Windows or Mac computer for login using the YubiKey PIV Manager app. You can find instructions on the Yubico website on how to set up key-based authentication on Linux.

The way I’m using the PIV mode is like this. I have a long password for my computer. I can type this in or I can use the YubiKey with a six-digit PIN. If you’ve got a good 20+ character password, you can always use that. But if you also have the USB key, you can use the PIN instead.

GPG

You can put a number of GPG keys onto the YubiKey. As with the PIV logins, these are protected with a PIN. My main use case for GPG at the moment is code signing for GitHub. (Alas, nobody seems too bothered about email signing.)

This blog post by Simon Josefsson explains the process of setting up a YubiKey for GPG. The broad approach is that you create a GPG key, then you create three subkeys—one for encryption, one for signing, one for authentication. The private key for each is stored on the YubiKey: once you’ve pushed the private key to the YubiKey, there’s no way to get it back out again. (So, it’s sensible to make a backup of the keys and store them offline on, say, a USB key in a safe.)

If you’ve already got your public key on GitHub, you’ll need to export a new public key containing the subkeys you’ve stored on the YubiKey and paste that up on GitHub. (If you use GitLab, this whole approach doesn’t work… because of this bug I’ve reported.)

Once you’ve got it all set up, when you commit some new code to a Git repository, it’ll ask you for your YubiKey’s six-digit PIN. That is cached for a bit, so it should only interrupt you occasionally. If you remove the YubiKey, you’ll obviously have to put it back in and re-enter the PIN to commit new code.

SSH

Now the fiddly bit. The way you use SSH with the YubiKey is to convert your GPG key into an SSH public key. There’s a utility called gpgkey2ssh that does just this for you: you point it to your GPG public key and it’ll turn that into an SSH public key (remember, other than an offline—and hopefully encrypted—backup, you don’t have the private key: it’s stowed away inside the YubiKey).

The main snag is the version of GPG: you need to ensure you are running GPG 2.1. You can now get it from Homebrew on the Mac, so do that. You may need to check that your symlinks point to GPG 2.1 if you’ve got multiple installs.

This tutorial is the best one for getting gpg-agent set up for doing SSH. I found that it’s a bit of a faff keeping track of which keys are doing what during the process, so I ended up noting them down on a bit of paper.

So, what to do?

If you are an ordinary user and you don’t mind the tradeoffs of having to occasionally plonk a USB key in the side of your computer, get a blue YubiKey FIDO U2F. They’re fantastic.

There are issues: on desktop Macs, you need to ensure you have a USB socket that’s actually available and not a faff to use (i.e. not on the back of a Mac Mini, or tucked away under the edge of a keyboard so you can put the key in… but can’t actually push the button). You may need to get a little bit of USB extension cable to give you somewhere to press, or you might consider getting the YubiKey Nano.

If you are a geek and write software (or software-adjacent products including documentation), and you don’t mind jumping through quite a few more hoops to get the GPG and SSH stuff set up, you should do that too. If you don’t use GPG or SSH, you probably don’t need the black YubiKey and can stick to the blue FIDO U2F device.

You can buy the YubiKey 4 from Amazon UK, or if you prefer, you can get the YubiKey FIDO U2F ‘blue’ key.

Fault tolerance on the Web and the dark side of Postel's Law

I’ve been reading @adactio’s new book. Pretty much all I have read is great, and I highly recommend reading it.

But in the grand spirit of pedantic blogging, I take issue with this:

“Imperative languages tend to be harder to learn than declarative languages”

That tend to is doing a lot of work.

I think Jeremy is conflating declarative with fault tolerant here. HTML and CSS are declarative and fault tolerant. SQL is declarative and decidedly fault intolerant (and quite hard to master to boot). PHP’s type system is a lot more permissive and tolerant (“weak”, one might say) than a language like Python. The former is great for newbies and non-programmers, because adding 1 + "1" (that is, adding a string and an integer together) will give you 2, or at least something that when printed to screen looks vaguely like a two, though under the covers it may be Cthulhu.1 And the behaviour of something like Python is great for jaded old gits like me who don’t want the system to magically convert strings into integers but to blow up as quickly as possible so that it can be fixed before all this nastiness gets stitched together with the rest of the code and causes some real major bugs. The same principle applies on a grander scale with stricter type systems like Java or the broad ML family (including things like Scala, F#, Haskell etc.).
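
To make that concrete, here is a tiny Python illustration of the strict, blow-up-early behaviour I’m contrasting with PHP’s coercion (Python stands in for the strict side only; PHP’s behaviour is just described in the comments):

    # Python refuses to guess what you meant: adding a string to an integer
    # raises an error immediately rather than silently coercing to 2.
    try:
        result = 1 + "1"
    except TypeError as err:
        print("Blew up early, as desired:", err)

    # If you actually want arithmetic, you have to say so explicitly.
    print(1 + int("1"))  # 2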

“A misspelt tag in HTML or a missing curly brace in CSS can also cause headaches, but imperative programs must be well‐formed or they won’t run at all.”

Again, depends on the language. PHP is pretty permissive. If you make a very minor mistake, you can often just carry on and keep going. If you are feeling particularly careless about your errors, you prefix your expression with an @ sign and then security people get to laugh at you. I hesitate to say this was “designed” but it was at the very least chosen as a behaviour in PHP.

This may all seem rather pedantic, but it is actually quite important. HTML and CSS are declarative and permissively fault-tolerant. That’s a key strength. In an environment like the web, it creates a certain robustness. If your JavaScript fails to load entirely, you can get some very strange behaviour if, say, a function that is expected to be there isn’t. (I use a site frequently that makes multiple Ajax calls but if one fails, say due to a bad connection, the site is unusable and must be reloaded from scratch. It is also contained in an iOS app, which must be manually restarted.) But if some of your CSS doesn’t load, that’s not the end of the world: you still get something you can read. If some of your HTML has weird new stuff in it, as Jeremy points out elsewhere in the book, that’s still backwards compatible–the browser simply ignores that element and renders its content normally.

This error handling model, this permissiveness of web technologies, isn’t a side-effect of being declarative. It’s actually a property of them being designed for the web, of human choice by the creators. There is a cost to this. It has been incredibly hard for us to secure the web. Permissive error handling can and has enabled a whole class of security vulnerabilities.

If Postel’s Law gives you the ability to use border-radius in your CSS or aside in your HTML and have some terrible old version of Internet Explorer happily ignore them without vomiting XML warnings all across your screen, then Postel’s Law also comes with the cost of having to worry about downgrade attacks. We collectively left SSLv2 running long after it should have been dead and we got DROWN. We did the same with SSLv3 and we got POODLE. These are examples of ungraceful degradation, and the sad cost is your server being vulnerable to being pwned.2

With the last few attacks on SSL/TLS, it wasn’t just nasty old versions of Internet Explorer on Windows XP getting shut out of the web, it was non-browser apps that talked to HTTPS-based APIs. The Logjam attack meant that a lot of people upgraded their servers to not serve DH keypairs that are below 1024-bit. For most current day browsers, this was not an issue. Apple, Mozilla, Google, Microsoft and others released patches for their current browsers. Java 6 didn’t get a patch for a very long time. If you had a Java desktop app that consumed an HTTPS-based RESTful API which had patched Logjam, that broke with no graceful degradation, and the general solution was to upgrade to a new version of Java. On OS X, Java used to be distributed by Apple, albeit as something of a reluctant afterthought. Upgrading every desktop Java user on OS X was a bit of a faff. (My particular Java Logjam issue was with… JOSM, the Java OpenStreetMap editor.)

Postel’s Law giveth and it taketh away. One could give this kind of rather pedantic counterexample to Jeremy’s account of Postel’s Law and then conclude hastily “given how badly the web is at solving these security problems, the grass on the native apps side of the fence might just be a little bit greener and yummier”. Oh dear. Just you wait. When every app on a platform is basically reimplementing a site-specific client, you might actually get more fragility. Consider our recent vulnerabilities with SSL/TLS. After something like Logjam, the bat signal went out: fix your servers, fix your clients. On the server side, generally it meant changing a config file for Apache or Nginx in a fairly trivial way and then restarting the web server process. On the client side, it meant downloading a patch for Chrome or Firefox or Safari or whatever. That may have just been rolled into the general system update (Safari) or rolled into the release cycle (Chrome) without the end user even being aware of it. The long tail3 of slightly oddball stuff like Java desktop apps, which tends to affect enterprise users, assorted weirdos like me, and niche use cases, took a bit longer to fix.

If every (client-server) app4 that could be in a browser were in a browser, then in the case of a security fail, fixing all those apps would be as simple as fixing the browser (and the server, but that’s a separate issue). If everything were a native app, you have to hope they are all using the system-level implementations of things like HTTP, TLS and JSON parsing, otherwise you have a hell of a job keeping them secure after vulnerabilities. We already see things going on in native-app-land (Napland?) that would cause a browser to throw a big stonking error: user data sent in cleartext rather than over TLS is more common than I care to think about. But the native app won’t scream and shout and say “this is a bad, very wrong, no good idea, stop it unless you really really really want to”, because the UI was developed by the person who created the security issue to start with.

The downside to Postel’s Law5 is that sometimes the graceful degradation is pretty damn graceless. Sadly, the least graceful degradation is often security-related. The web might still be better at mitigating those than all but the most attentive native app developers, or not. Browser manufacturers may be better at enforcing security policies retroactively than app stores, or they might not. We shall have to wait and see.

The Web is a hot mess.

But we still love it.

  1. Sausage typing: if it looks like a sausage, and tastes like a sausage, try not to think about what it really is.

  2. At risk of giving technical backing to rising reactionary movements, perhaps the modern day variation of Postel’s Law might be: “Be conservative in what you send, and be liberal in what you accept, to the degree that you can avoid catastrophic security failures.”

  3. Wow, it’s been a long time since we talked about them…

  4. Let’s leave aside the semantics of what is or isn’t an app. Incidentally, I was in a pub the other day and saw a few minutes of a football game. There was a big advert during the game from a major UK retail brand that said “DOWNLOAD THE [brand name] APP”. Why? I don’t know. We are back at the ghastly “VISIT OUR WEBSITE, WE WON’T TELL YOU WHY” stage with native mobile apps. I’m waiting for a soft drink brand to exhort me to download their app for no reason on the side of a bottle.

  5. I claim no originality in this observation. See here and here. Interestingly, if you take a look at RFC 760, where Postel’s Law is originally described, it has a rather less soundbitey remark just before it:

    The implementation of a protocol must be robust. Each implementation must expect to interoperate with others created by different individuals. While the goal of this specification is to be explicit about the protocol there is the possibility of differing interpretations. In general, an implementation should be conservative in its sending behavior, and liberal in its receiving behavior. That is, it should be careful to send well-formed datagrams, but should accept any datagram that it can interpret (e.g., not object to technical errors where the meaning is still clear).

    The first two sentences are key…


Firefox 52 adding insecure form warnings

The latest version of Firefox’s Developer Edition (formerly known as Aurora) now ships with more prominent warnings for when you send passwords from an insecure context. For a long time, some sites have attempted to justify having login forms on insecure HTTP pages with the argument that the credentials would be encrypted in transmission as they were being sent to an HTTPS endpoint. The problem with this is that it doesn’t actually prevent password sniffing, it just makes it slightly harder. Rather than sniffing the credentials as they go over the wire, you instead use a man-in-the-middle attack to intercept the HTTP page and insert JavaScript into it that sniffs the password upon entry and then sends it via Ajax to the interceptor.

Firefox’s new implementation uses a fairly simple algorithm described in the new W3C Secure Contexts spec that is attempting to standardise some of the key concepts of browser-side security. Hopefully, users being warned that they are submitting passwords insecurely will start prompting websites to stop doing the login-form-on-HTTP-that-submits-via-HTTPS anti-pattern.
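
As a rough sketch of the kind of check involved—my own simplification in Python, not the spec’s exact algorithm or Firefox’s actual implementation—the browser is essentially asking: is there a password field on a page that was not delivered over a secure context?

    from html.parser import HTMLParser
    from urllib.parse import urlparse

    class PasswordFieldFinder(HTMLParser):
        """Records whether the page contains any <input type="password">."""
        def __init__(self):
            super().__init__()
            self.has_password_field = False

        def handle_starttag(self, tag, attrs):
            if tag == "input" and (dict(attrs).get("type") or "").lower() == "password":
                self.has_password_field = True

    def should_warn(page_url, page_html):
        """Warn if a password field appears on a page not served over HTTPS.
        (Simplification: the real spec also treats localhost and file: as secure.)"""
        finder = PasswordFieldFinder()
        finder.feed(page_html)
        return finder.has_password_field and urlparse(page_url).scheme != "https"

    # An HTTP login page that posts to an HTTPS endpoint still gets a warning.
    print(should_warn("http://airline.example/login",
                      '<form action="https://airline.example/auth">'
                      '<input type="password" name="pw"></form>'))  # True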

My usual go-to example when illustrating the problem is airline websites, specifically the online check-in or frequent flyer account login page. You give airlines quite substantial amounts of money and personal information. For a long time, most were vulnerable to this kind of attack. Malicious hackers also have been known to steal and sell frequent flyer miles, although not necessarily through man-in-the-middle attacks on the login forms.

British Airways used to have a login form for their Executive Club frequent flyer programme on their homepage—they’ve now fixed this and the whole site seems to be HTTPS.

But the following (mostly randomly selected) airlines still are vulnerable to man-in-the-middle password stealing through JavaScript injection.

And that’s just one sector: airlines. There are plenty more sites that ordinary people use every day that have potential vulnerabilities caused by these purportedly-secure-but-really-not login forms. Browsers giving prominent and irritating warnings about it is the first step to getting the companies to pay attention.

When the next big attack happens, there will–as always–be non-technical people from government and business lamenting how difficult all this information security stuff is, and how the vectors of attack are always changing. Let it be on the record that this kind of vulnerability is extremely simple, well-known and relatively easy to exploit. There are interesting and ingenious ways to attack Transport Layer Security, but if you don’t turn it on to start with, one doesn’t need to DROWN POODLEs or really do anything that will make technical people go “ooh, that’s clever”. Firefox warning users about this really old and boring way of breaking user security might mean that people spend less time speculating about the scary difficult emerging threats and fix the basic glaring security errors right in front of them.


Proposal: 'change password' discoverability metadata

The recent leak of LinkedIn’s password database shows that passwords remain a fragile part of our security ecosystem. Users are bad at coming up with passwords. They use the same password among multiple services. Enterprise password change policies have been part of the problem: users simply take their existing passwords and stick an incrementing number on the end, or engage in other substitutions (changing the letter o for the number 0, for example). Plus, the regular password change doesn’t really help as a compromised password needs to be fixed immediately, rather than waiting three months for the next expiration cycle. CESG recently issued guidance arguing against password expiration policies using logic that is obvious to every competent computer professional but not quite so obvious to big enterprise IT managers.

Many users, fed up with seeing yet another IT security breach, have switched over to using password managers like KeePass, 1Password, Dashlane and LastPass. This is something CESG have encouraged in their recent password guidance. Password managers are good, especially if combined with two-factor authentication.

Users who are starting to use a password manager face the initial hurdle of switching over from having the same password for everything to using the password manager’s generated password approach. They may have a backlog of tens or hundreds of passwords that need changing. The process of changing passwords on most websites is abysmally unfriendly. It is one of those things that gets tucked away on a settings page. But then that settings page grows and grows. Is it ‘Settings’, or ‘My Profile’ or ‘My Account’ or ‘Security’ or ‘Extra Options’? Actually finding where on the website you have to go in order to change your password is the part which takes longest.

Making it easier for a user to change their password improves security by allowing them to switch from a crap (“123456”), reused, dictionary word (“princess”) or personally identifiable password (the same as their username, or easily derived from it: “fred” for the username “fred.jones”) to a strong password that is stored only in their password manager like “E9^057#6rb2?1Yn”.
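
(For what it’s worth, generating that sort of password is a couple of lines with Python’s standard library—purely illustrative here, since a password manager’s own generator is the right tool for the job.)

    import secrets
    import string

    def generate_password(length=16):
        """Random password drawn from letters, digits and punctuation,
        using the cryptographically secure `secrets` module."""
        alphabet = string.ascii_letters + string.digits + string.punctuation
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(generate_password())  # e.g. something like E9^057#6rb2?1Yn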

We could make it easier by clearly pointing the way to the password change form so that software can assist the user to do so. The important part here is assist, not automate. The idea of software being able to automate the process of changing passwords has some potential selling points, but the likelihood of it being adopted is low. Instead, I’m simply going to suggest we have software assist the user to get to the right place.

In the form of a user story, it’d be like this: as a user of a password management application, I’d like to speed up the process of changing passwords on websites where they have been detected to be weak, reused or old. When I’m looking at a password I wish to change, I could click “change password” in the password management application and it’d take me to the password change form on the website without me having to search around for it.

There are a few ways we could do this. There are some details that would have to be ironed out, but this is a rough first stab at how to solve the problem.

Putting a link in the HTML

This is my preferred option. On the website, there is a link, either visible (using an a element) or invisible (a link in the head). It would be marked with a rel attribute with a value like password-change. Software would simply parse the HTML and look for an element containing rel="password-change" and then use the href attribute. The user may have to go through the process of logging in to actually use the password change form, but it’d stop the process of searching.
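
A sketch of how the consuming side might work in Python (the rel value is the one proposed above; the example URL and markup are invented for illustration):

    from html.parser import HTMLParser
    from urllib.parse import urljoin

    class PasswordChangeLinkFinder(HTMLParser):
        """Collects href values from <a> or <link> elements whose rel
        attribute contains the token 'password-change'."""
        def __init__(self):
            super().__init__()
            self.hrefs = []

        def handle_starttag(self, tag, attrs):
            if tag not in ("a", "link"):
                return
            attrs = dict(attrs)
            rels = (attrs.get("rel") or "").lower().split()
            if "password-change" in rels and attrs.get("href"):
                self.hrefs.append(attrs["href"])

    def find_password_change_url(page_url, page_html):
        finder = PasswordChangeLinkFinder()
        finder.feed(page_html)
        # Resolve a relative href against the page it was found on.
        return urljoin(page_url, finder.hrefs[0]) if finder.hrefs else None

    markup = '<head><link rel="password-change" href="/settings/password"></head>'
    print(find_password_change_url("https://example.com/", markup))
    # https://example.com/settings/password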

One issue here is that there are a large number of web apps that rely on JavaScript to render up the page and there is the potential for rogue third-party JavaScript to modify the DOM. A simple way to ameliorate this is to search for the value in the HTML itself and ignore any JavaScript. Another possible solution is to require that the password change form be located on the same domain as the website, or decide whether to trust the URL relative to the base domain based on an existing origin policy like CORS.
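
The same-origin restriction suggested above is cheap to check; a sketch of the conservative version (scheme and host must match exactly):

    from urllib.parse import urlparse

    def same_origin(page_url, target_url):
        """True if scheme and host:port match; a consuming agent could
        refuse to follow password-change links that fail this test."""
        a, b = urlparse(page_url), urlparse(target_url)
        return (a.scheme, a.netloc) == (b.scheme, b.netloc)

    print(same_origin("https://example.com/", "https://example.com/settings/password"))  # True
    print(same_origin("https://example.com/", "http://evil.example/steal"))              # False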

Putting JSON in a specified location

Alternatively, have people put some JSON metadata in a file and store it in a known location, similar to robots.txt or the various things spooked away in the .well-known hidey-hole. This is okay, but it suffers from all the usual flaws of invisible metadata, and is also a violation of the “don’t repeat yourself” principle—the links are already on the web in the HTML. Replicating that in JSON when it already exists in HTML increases the likelihood that the JSON version will fall out of sync with the published reality.
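
For illustration only—the file location and field name below are invented, not any agreed spec—the consuming side might look like this:

    import json
    from urllib.request import urlopen

    # Hypothetical location and schema; the file might contain e.g.
    #   {"password_change_url": "https://example.com/settings/password"}
    WELL_KNOWN_PATH = "/.well-known/password-change.json"

    def fetch_password_change_url(origin):
        """Fetch and parse the (hypothetical) JSON metadata file."""
        with urlopen(origin + WELL_KNOWN_PATH) as response:
            return json.load(response).get("password_change_url")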

Putting it in HTTP(S) headers

Same principle as the JSON one, but using HTTP(S) headers. Same issue of invisible metadata. Same issue with same-origin policies.

Security considerations

As noted above, there are some security issues that would have to be handled:

  1. Should a consuming agent (i.e. the password management application) allow third-party (or even same-origin) JavaScript to modify the DOM that contains the link?
  2. Should a consuming agent ignore password change form endpoint targets that are on a different domain?
  3. Should a consuming agent follow a password change link to a non-HTTPS endpoint?

My rather conservative answers to these three questions are all no, but other people might differ.

Warning on scope

As I said above, this is a very narrowly specified idea: the ecology of web application security is pretty fragile, and the likelihood of radical change is low, so I’m not proposing a radical overhaul. Just a very minor fix that could make it easier for (motivated, security-conscious) users to take steps to transition to better, stronger passwords.


Ignore the talking heads: TalkTalk's security issues run much deeper

I have to say I’m rather intrigued by the TalkTalk hack. First of all, they’ve found the 15-year-old who allegedly did it, arrested him and bailed him pending investigations. Hopefully, if said person did it, he’ll be quite interested in helping the police with their inquiries and with a bit of luck, the customers aren’t going to have their personal or financial details released. TalkTalk have waived cancellation fees for customers who want to leave, but only if a customer has sustained a financial loss.

Meanwhile, Brian Krebs reports that the TalkTalk hackers demanded £80k worth of Bitcoin from the ISP. We’ve now had the media tell us it is “cyberjihadis”, a fifteen year old boy, and people holding it ransom for Bitcoin.

What’s curious though is how the mainstream media have not really talked very much to security experts. Yesterday, I listened to the BBC Today programme—this clip in particular. It featured an interview with Labour MP Hazel Blears (who was formerly a minister in the Home Office) and Oliver Parry, a senior corporate governance adviser at the Institute of Directors.

Here’s Mr Parry’s response to the issue:

The threat is changing hour by hour, second by second—and that’s one of the real problems, but as I said, I don’t think there’s one way to deal with this. We just need to reassure consumers, shareholders and other wider stakeholder groups that they have something in place.

Just a few things. This attack was a simple SQL injection attack. That threat isn’t “changing hour by hour, second by second”. It’s basic, common sense security that every software developer should know to mitigate, that every supervisor should be sure to ask about during code reviews, and that every penetration tester worth their salt will check for (and sadly, usually find).
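
To underline how basic the fix is, here is the difference in Python, using sqlite3 purely as a stand-in for whatever database TalkTalk actually run:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (email TEXT, phone TEXT)")
    conn.execute("INSERT INTO customers VALUES ('fred@example.com', '0123456789')")

    user_input = "' OR '1'='1"  # a classic injection payload

    # Vulnerable: user input pasted straight into the query string.
    vulnerable = "SELECT * FROM customers WHERE email = '%s'" % user_input
    print(conn.execute(vulnerable).fetchall())  # returns every row in the table

    # Safe: a parameterised query keeps data and SQL separate.
    safe = "SELECT * FROM customers WHERE email = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # returns nothing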

As Paul Moore has pointed out in this excellent blog post, there are countless security issues with TalkTalk’s website. Craptastic SSL, no PCI compliance. The talking heads are going on about whether or not the data was “encrypted”. The SSL transport was encrypted, but you could request that they encrypt traffic with an obsolete 40- or 56-bit key rather than the 128-bit encryption that is considered secure.

There are new and changing threats, but SQL injection isn’t one of them. That’s a golden oldie. And it also doesn’t matter if the data is encrypted if the web application and/or the database is not secured against someone injecting a rogue query.

Mr Parry is right that there is not “one way to deal with this”. There are plenty. First of all, you need to hire people with some security expertise to build your systems. You need to hire independent experts to come and test your systems. That’s the in-house stuff. Unfortunately, TalkTalk seem to have lost a whole lot of their senior technical staff in the last year, including their CIO. Perhaps they weren’t confident in the company’s direction on security and technology matters.

Then there’s the external-facing stuff. Have a responsible disclosure process that works and gives incentives for people to disclose. Have a reward system. If you can’t afford that, have a guarantee that you won’t seek prosecution or bring legal action against anyone who engages in responsible disclosure. Have an open and clear log where you disclose your security issues after fixing them. Actually fix the issues people report to you. Again, Paul Moore’s post linked above notes that he tried to contact TalkTalk and was ignored, disrespected and threatened. That’s not how you should treat security consultants helping you fix your issues.

All of this stuff should be simple enough for an ISP that has over four million paying customers. It isn’t rocket science. The fact that they aren’t doing it means they are either incompetent or don’t give a shit.

Brian Krebs nailed this corporate failure to care about security recently in a discussion on Reddit:

I often hear from security people at organizations that had breaches where I actually broke the story. And quite often I’ll hear from them after they lost their job or quit out of frustration, anger, disillusion, whatever. And invariably those folks will say, hey, we told these guys over and over…here are the gaps in our protection, here’s where we’re vulnerable….we need to address these or the bad guys will. And, lo and behold, those gaps turned out to be the weakest link in the armor for the breached organization. Too many companies pay good money for smart people to advise them on how to protect the organization, and then go on to ignore most of that advice.

Mr Parry said that the important thing was reassuring customers that their information was safe, not actually ensuring that customers’ data is safe. This is exactly the problem. I don’t want to be “reassured”, I want it to be safe. I don’t want to be reassured that my flight isn’t going to crash into the Alps—I actually want my flight to not crash into the Alps. Reassuring me requires salespeople and professional bullshitting; not crashing requires well-trained pilots and staff, engineers doing proper checks, the designers of the plane making sure that they follow good engineering practices, constant testing. Engineering matters, not “reassurance”.

So long as business thinks “reassuring” customers matters more than actually fixing security problems, these kinds of things will keep happening. It would be really nice if the media actually spoke to security experts who could point out how trivially stupid and well-known the attack on TalkTalk was, so this kind of industry avoidance tactic could be properly squelched.


NatWest add SMS for fraudulent transactions

Just read this from NatWest:

From December, if we spot an attempted debit card transaction on your account that looks unusual, we’ll send you a text message asking you to confirm that the payment is being made by you. Once you text us back to let us know it’s genuine, we’ll ensure your card is available to use again.

This is actually reasonably sensible. I’m not keen on the “text us back” bit,1 but having quicker possible fraud notifications (or, even better, just a routine SMS for each transaction) seems like a win—something Bruce Schneier was pointing out back in 2006.

  1. I’ve been in areas with no mobile service too much to trust a system where I have to send SMSes to verify things, especially if the SMS response cycle has a timeout.


Why you need to care about HTTPS

(Yes, even you with the content-focussed site.)

One of the recurring themes from IndieWebCamp this weekend in Brighton was a desire to get a lot more websites SSL-enabled. It became something of a friendly competition: with both the level scheme laid out on the IndieWeb wiki and the Qualys SSL Labs report generator, a bunch of sites which did not have HTTPS before today now do, including my own.

Qualys rates tommorris.org A+ on the SSL front. It’s not perfect: there’s still stuff on the website that is mixed content (that is, both HTTP and HTTPS resources on the same page) in the archives, although I’ll be working to reduce the amount of stuff I post that isn’t HTTPS enabled.

For a long time, the standard policy for a lot of people has been “HTTPS is important for interactive sites, but isn’t really needed for content sites”. This has a certain level of truth. If you are collecting user data—requiring people to log in—you should be using HTTPS. It’s not negotiable. E-commerce sites, social networking sites, dating sites, email sites, web applications, forums—pretty much anything you are expecting people to log in to should be HTTPS only to protect the user from having their packets sniffed between you and them.

But what about those “content sites”? Those sites that just publish content for you to read with no expectation of you interacting? Blogs, for instance.

You still need SSL. Especially if you write about anything controversial. Politics, religion, sexuality and so on. With HTTPS turned on, those sniffing the packets going between client and server will spot only that there is communication with your web server—the exact request made is not revealed.

I am already aware that in at least one evangelical Christian high school in New Zealand, I am filtered as a purveyor of immoral and unchristian lifestyles. I’m assuming it is because of my use of the Ruby programming language rather than for being a hell-bound atheist sodomite. But I’m hoping that now the repressed subjects of other censorship-based societies can worry slightly less about the exact pages they are reading on my site being disclosed to their censorious masters. That’s worth a tenner a year and a few minutes futzing around with Nginx config files.

HTTPS is not NSA or GCHQ proof. SSL certificates are issued by Certification Authorities (CAs) and if you don’t think that the CAs are in league with the government, you are very naïve. Read up on DigiNotar. Ideally, at some point, we’ll also do something like Monkeysphere so that we can apply GPG-style Web of Trust principles to HTTPS. I trust security-conscious wise Unix neckbeard types to verify identities far more than I trust big companies in the pay of surveillance states that put on an elaborate show of being liberal democracies.

NSA and GCHQ proof is a tall order. There are lots of scumbags trying to spy on you that aren’t NSA or GCHQ. Even if we can’t defeat the surveillance state, we can fight against corrupt ISPs, corporations and universities monitoring and censoring the web on behalf of those in their charge.

And, yes, HTTPS/SSL sucks in a lot of ways. But you still need to do it. CAs are kind of craptastic. The experience of setting up HTTPS is annoying—although it is a lot less painful with Nginx than it ever was with Apache. If you publish a website, set up SSL. It’s not very painful and so long as you do it right, you are helping protect your users from some forms of surveillance and privacy intrusion.

(Next on the “let’s be less creepy” front: switching out Google Analytics for something like Piwik. I went with GA because I’m lazy. But there’s no point building independent tooling for the web and still giving a load of user data to Google given they seem to be creatively reinterpreting the whole not being evil thing these days.)


That's corporate-speak for "you're fucked, sonny"

Recently, there’s been a problem with the Sense UI on HTC’s Android devices. HTC made this announcement recently:

HTC takes claims related to the security of our products very seriously. In our ongoing investigation into this recent claim, we have concluded that while this HTC software itself does no harm to customers’ data, there is a vulnerability that could potentially be exploited by a malicious third-party application. A third party malware app exploiting this or any other vulnerability would potentially be acting in violation of civil and criminal laws. So far, we have not learned of any customers being affected in this way and would like to prevent it by making sure all customers are aware of this potential vulnerability.

Let me translate this for you.

You’re fucked.

How can you conclude that potentially allowing applications access to customer data “does no harm”? Yes, technically, it doesn’t directly do any harm, but so what?

As the statement hints: “third party malware” could use this vulnerability to exploit the data on the phone. That’s, err, why it’s a security issue. But don’t worry: if they did so, they’d be violating civil and criminal laws.

And, as we all know, the sort of people who exploit security holes are well-known to give a shit what the civil and criminal law says. Especially if they are operating out of countries with lax regulations and even laxer or bribable law enforcement. But, you know, don’t worry, that will never happen.

I’m not picking on HTC here: the problem isn’t the security vulnerability (that’ll get patched soon, right?) but the attitude that “oh, there’s a vulnerability, don’t worry, the law will protect you” is an acceptable response from a big company.


Who on earth thought that putting SecureCode / Verified by Visa on the web page used to top up a mobile phone account was a good idea? It is a usability nightmare when you are using it on a desktop PC; how is anyone supposed to use it on a mobile device like the iPod touch or iPad?