tommorris.org

Discussing software, the web, politics, sexuality and the unending supply of human stupidity.






PayPal emails me to say a subscription payment failed (due to a replaced card). I click the link in the email that PayPal sent me and it takes me to the “outdated version of PayPal”. PayPal is a fucking hot mess of a service.



Why GDS / GOV.UK bet on the web rather than apps. All of which is eminently sensible. Imagine the alternative: the UKGOV app. You download it and it has everything you need to be a citizen. Okay, every week there’d need to be new updates to the functionality. It’d have to be available on every single platform.

And you’d end up downloading and storing a lot of that binary blob for functionality you use once in a blue moon (renewing your passport is a once-every-ten-years problem; driving licence renewals are the same; some of the business-oriented stuff done on government websites will affect only half a percent of a country’s citizens), and then there’s the moral risk of the app having taken permissions (notifications, location and so on).

Apps have downsides too. A big problem in the tech industry at the moment is that too many people count up the negative sides of the web and the positive sides of apps, and don’t consider the other side of either balance sheet. Still, the current ridiculous app trend is great for keeping iOS and Android devs in full employment.


Startups can’t explain what they actually do. I disagree with the contention that it’s because the owners can’t think clearly: it is all a scam to get money. It’s all hype. Saying “cloud-based disruptive P2P content-driven platform” hoodwinks coked-up investors in a way that “it’s a website where you can upload your photos and we’ll show them to you in a different way” doesn’t.

Also, the article assumes that the customer is the end user. In Silicon Valley land, that’s rarely if ever true any more.


TIL: the Royal Mail Group have a registered trade mark on “the colour red”. Take that, refraction.



Proposal: 'change password' discoverability metadata

The recent leak of LinkedIn’s password database shows that passwords remain a fragile part of our security ecosystem. Users are bad at coming up with passwords. They use the same password among multiple services. Enterprise password change policies have been part of the problem: users simply take their existing passwords and stick an incrementing number on the end, or engage in other substitutions (changing the letter o for the number 0, for example). Plus, the regular password change doesn’t really help as a compromised password needs to be fixed immediately, rather than waiting three months for the next expiration cycle. CESG recently issued guidance arguing against password expiration policies using logic that is obvious to every competent computer professional but not quite so obvious to big enterprise IT managers.

Many users, fed up with seeing yet another IT security breach, have switched over to using password managers like KeePass, 1Password, Dashlane and LastPass. This is something CESG have encouraged in their recent password guidance. Password managers are good, especially if combined with two-factor authentication.

Users who are just starting out with a password manager face the initial hurdle of switching from having the same password for everything to using the manager’s generated passwords. They may have a backlog of tens or hundreds of passwords that need changing. The process of changing a password on most websites is abysmally unfriendly. It is one of those things that gets tucked away on a settings page. But then that settings page grows and grows. Is it ‘Settings’, or ‘My Profile’, or ‘My Account’, or ‘Security’, or ‘Extra Options’? Actually finding where on the website you have to go to change your password is the part that takes longest.

Making it easier for a user to change their password improves security by allowing them to switch from a crap (“123456”), reused, dictionary word (“princess”) or personally identifiable password (the same as their username, or easily derived from it: “fred” for the username “fred.jones”) to a strong password that is stored only in their password manager like “E9^057#6rb2?1Yn”.
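
For what it’s worth, the kind of password a manager generates is easy to illustrate. Here is a throwaway sketch using Python’s secrets module; real password managers use their own generators, so this is purely illustrative:

    import secrets
    import string

    # Character pool for generated passwords: letters, digits and punctuation.
    ALPHABET = string.ascii_letters + string.digits + string.punctuation

    def generate_password(length=16):
        # secrets.choice draws from a cryptographically secure RNG,
        # unlike random.choice.
        return "".join(secrets.choice(ALPHABET) for _ in range(length))

    print(generate_password())  # something in the vein of E9^057#6rb2?1Yn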

We could make it easier by clearly pointing the way to the password change form so that software can assist the user to do so. The important part here is assist, not automate. The idea of software being able to automate the process of changing passwords has some potential selling points, but the likelihood of it being adopted is low. Instead, I’m simply going to suggest we have software assist the user to get to the right place.

In the form of a user story, it’d be like this: as a user of a password management application, I’d like to speed up the process of changing passwords on websites where they have been detected to be weak, reused or old. When I’m looking at a password I wish to change, I could click “change password” in the password management application and it’d take me to the password change form on the website without me having to search around for it.

There are a few ways we could do this. Some details would have to be ironed out, but this is a rough first stab at how to solve the problem.

A rel value in the HTML

This is my preferred option. On the website, there is a link, either visible (using an a element) or invisible (a link element in the head). It would be marked with a rel attribute with a value like password-change. Software would simply parse the HTML, look for an element with rel="password-change" and use its href attribute. The user may have to go through the process of logging in to actually use the password change form, but it would put a stop to the searching.
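
As a rough sketch of the consuming side, assuming the hypothetical password-change rel value proposed above, a password manager could find the link using nothing but the Python standard library:

    from html.parser import HTMLParser
    from urllib.parse import urljoin

    # Illustrative markup only; a real page would be fetched over HTTPS.
    EXAMPLE_HTML = """
    <html>
      <head><link rel="password-change" href="/settings/password"></head>
      <body>...</body>
    </html>
    """

    class PasswordChangeLinkFinder(HTMLParser):
        """Collects href values from <a> or <link> elements whose rel
        includes the hypothetical password-change token."""

        def __init__(self):
            super().__init__()
            self.hrefs = []

        def handle_starttag(self, tag, attrs):
            if tag not in ("a", "link"):
                return
            attrs = dict(attrs)
            rel_tokens = (attrs.get("rel") or "").split()
            if "password-change" in rel_tokens and attrs.get("href"):
                self.hrefs.append(attrs["href"])

    def find_password_change_url(html, base_url):
        parser = PasswordChangeLinkFinder()
        parser.feed(html)
        # Resolve a relative href against the page it was found on.
        return urljoin(base_url, parser.hrefs[0]) if parser.hrefs else None

    print(find_password_change_url(EXAMPLE_HTML, "https://example.com/account"))
    # https://example.com/settings/password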

One issue here is that a large number of web apps rely on JavaScript to render the page, and there is the potential for rogue third-party JavaScript to modify the DOM. A simple way to ameliorate this is to look for the value in the HTML as served and ignore anything added by JavaScript. Another possible solution is to require that the password change form live on the same domain as the website, or to decide whether to trust the URL relative to the base domain based on an existing origin policy like CORS.
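
One possible shape for that trust decision, as a sketch rather than a spec (the strictness here is my own assumption), is to accept a discovered URL only when it shares scheme and host with the page it came from and both use HTTPS:

    from urllib.parse import urlparse

    def is_trustworthy(candidate_url, page_url):
        # Same scheme and host as the page the link was found on, and HTTPS only.
        candidate, page = urlparse(candidate_url), urlparse(page_url)
        same_origin = (candidate.scheme, candidate.hostname) == (page.scheme, page.hostname)
        return same_origin and candidate.scheme == "https"

    print(is_trustworthy("https://example.com/settings/password", "https://example.com/"))  # True
    print(is_trustworthy("http://example.com/settings/password", "https://example.com/"))   # False
    print(is_trustworthy("https://evil.example.net/phish", "https://example.com/"))         # False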

Putting JSON in a specified location

Alternatively, have people put some JSON metadata in a file and store it at a known location, similar to robots.txt or the various things squirrelled away in the .well-known hidey-hole. This is okay, but it suffers from all the usual flaws of invisible metadata, and it is also a violation of the “don’t repeat yourself” principle: the links are already on the web in the HTML. Replicating that in JSON when it already exists in HTML increases the likelihood that the JSON version will fall out of sync with the published reality.
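
To make that concrete, the sketch below invents both the .well-known path and the JSON shape purely for illustration; neither is something I’m claiming exists:

    import json
    from urllib.request import urlopen

    # Hypothetical path, for illustration only.
    WELL_KNOWN_PATH = "/.well-known/password-change"

    def fetch_password_change_url(origin):
        # Expects a body like {"password_change_url": "/settings/password"}.
        with urlopen(origin + WELL_KNOWN_PATH) as response:
            metadata = json.load(response)
        return origin + metadata["password_change_url"]

    # Usage: fetch_password_change_url("https://example.com")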

HTTP headers

Same principle as the JSON one, but using HTTP(S) headers. Same issue of invisible metadata. Same issue with same-origin policies.
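
For instance, the existing Link response header mechanism could carry it, again with the hypothetical password-change rel value; this is one way a consuming agent might read it:

    import re
    from urllib.request import urlopen

    def password_change_url_from_headers(url):
        # e.g. Link: </settings/password>; rel="password-change"
        with urlopen(url) as response:
            link_header = response.headers.get("Link", "")
        match = re.search(r'<([^>]+)>;\s*rel="password-change"', link_header)
        return match.group(1) if match else None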

Security considerations

As noted above, there are some security issues that would have to be handled:

  1. Should a consuming agent (i.e. the password management application) allow third-party (or even same-origin) JavaScript to modify the DOM that contains the link?
  2. Should a consuming agent follow a password change link whose target is on a different domain?
  3. Should a consuming agent follow a password change link to a non-HTTPS endpoint?

My rather conservative answers to these three questions are all no, but other people might differ.

Warning on scope

As I said above, this is a very narrowly specified idea: the ecology of web application security is pretty fragile, and the likelihood of radical change is low, so I’m not proposing a radical overhaul. Just a very minor fix that could make it easier for (motivated, security-conscious) users to take steps to transition to better, stronger passwords.


file and libmagic don’t detect SVGs if they don’t have an XML declaration

I’ve struggled to find where to report this issue, so I’m putting it on my blog as a canonical copy just in case I forget to jump through whatever hoops are needed to report the bug.1

The file command that is widely used in UNIX land to detect file types doesn’t seem to detect an SVG file properly if it doesn’t have an XML declaration.

Fine, you might say, but if there’s no XML declaration, then that means it isn’t XML. Not so. §2.8 of XML 1.0 says that documents should have an XML declaration but that an otherwise well-formed XML document remains well-formed even without an XML declaration.

The application I’m working on will check manually to see if the data uploaded smells like an SVG and accept it per Postel’s law. But file and libmagic ought to do the job…
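
The sniffing I have in mind looks roughly like this (a sketch, not the application’s actual code): parse the upload as XML and accept it if the root element is svg, declaration or no declaration:

    import xml.etree.ElementTree as ET

    def smells_like_svg(data: bytes) -> bool:
        try:
            root = ET.fromstring(data)
        except ET.ParseError:
            return False
        # The tag is '{http://www.w3.org/2000/svg}svg' when namespaced,
        # plain 'svg' otherwise.
        return root.tag.rsplit("}", 1)[-1] == "svg"

    # True, despite there being no XML declaration:
    print(smells_like_svg(b'<svg xmlns="http://www.w3.org/2000/svg"></svg>'))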

  1. Seriously, it is 2016, use GitHub or equivalent (GitLab, Bitbucket), not CVS…


An excellent article on the silly Conversational UI trend: Bots won’t replace apps. Better apps will replace apps.

As the author of the piece notes, there’s plenty that’s wrong with the current trend in app design. Conversational UIs are orthogonal to fixing those problems. Each individual app has become its own silo. The model of “spend a bunch of money to hire a bunch of iOS and Android devs to build out a custom app for each platform, then spend a ton of resources trying to convince people to download those apps” has to wind down at some point. And there will be a point where we want a lot more fluidity between interactions. We still spend an enormous amount of time jockeying data between apps and manually patching pipelines of information into one another like a telephone operator of old. Conversational UIs don’t fix any of those things. Better UIs, which often means fewer UIs, fix that. So does more focus on making single-serving, one-shot interactions efficient and seamless (which goes against all the metrics: we often measure success by how much time someone spends interacting with something, rather than by how well that thing hides itself away and doesn’t need to be interacted with).


HSCIC rebrand: distancing the NHS from care.data?

The Health and Social Care Information Centre is now changing its name to “NHS Digital”.

Actually, the full name is the strikingly memorable “NHS Digital: Information and technology for better health and care”, which sounds more like the name of an academic paper than a healthcare organisation. The first order of business for employees of NHS Digital will be to set up keyboard shortcuts so they can type the full name. We should perhaps breathe a sigh of relief that nobody managed to shoehorn the word “cyber” into the name.

The announcement states that the new name “should help to build public recognition, confidence and trust”. Presumably because when people think of “NHS Digital”, they’ll not associate the new brand with the colossal cascade of cock-ups that is care.data, the plan to share your confidential medical records, including alcohol and tobacco use and mental health conditions, on a centralised computing system that will undoubtedly be as secure and well-managed as most other central government databases. Rebranding HSCIC seems like a cynical way to distance the organisation from this highly controversial scheme.


The latest revelations from GCHQ show them to be even more slippery and untrustworthy than previously thought. And the government want to give them more power rather than hold the leash tightly until they behave within the law? The intelligence services have shown a colossal lack of transparency or accountability that makes a mockery of any claim to being an institution compatible with democratic principles.


In things nobody should be the least bit surprised about, companies like 23andMe are being asked to provide DNA samples from their customers to law enforcement agencies.

There’s a way the 23andMe model could work that wouldn’t require creating a honeypot of DNA data for law enforcement. Instead of centralising the data, decentralise it. Have people send in their DNA sample, sequence it, then send that data back to the customer. Then, to provide analysis of that data, give each customer a piece of software that runs on their own computer and does the analysis there. Then send back anonymised data, giving each user the choice of what they share with the centralised server. That would actually put users in real control.

Decentralisation and user empowerment isn’t just a nice idea in this case, it’s potentially the difference between being arrested and not. If companies like 23andMe don’t switch to some kind of decentralised model, it shows they are prioritising other factors above the privacy and security of their customers.


How to improve GOV.UK: publish visa-free travel information

I was going to post a tweet about this, but I need a big canvas to illustrate the entirety of the mess. I’m struggling to find, on GOV.UK, an up-to-date list of the countries whose citizens can visit the UK as tourists without a visa.

User story #1: As a person living in the United Kingdom who occasionally helps to organise travel for friends and family members, I want to be able to see who is allowed to enter the UK without a visa.

I went to Google and I typed in “countries who can visit uk visa free”. Mostly it brought back news articles about how, as a UK citizen, I can visit 175+ countries with a British passport. Which is awesome. Being a UK citizen has some great advantages. I’d recommend it, along with Amazon Prime and an Amex Gold Card. Oh, wait, citizenship: it’s not just a consumer product. Fealty to sovereign powers and all that jazz.

The only GOV.UK result I could find on Google was this: Standard Visitor Visa. Now I’m pretty sure that this isn’t what I want, but it is close. So I start clicking through the pagination (sigh, my scroll bar works…) looking for a list of countries that are visa-exempt. You know, like, the US has the Visa Waiver Program, and on the US State Department website, it lists all the countries that can visit the US on the VWP. Can I find such a list? No.

Then I clicked the link on there to the Visas and immigration section of GOV.UK because the list must be there, surely.

Nope. Nada. Sigh.

Then I go back to the visitor visa page and hidden away on the second page, there’s a link titled “Check if you need a visa to enter the UK.”

I click that and it takes me to a page called Check if you need a UK visa.

Well, I mean, I don’t need a UK visa. I’m not checking for myself. I’m just curious now which countries need visas and which don’t. Government websites are there to inform all sorts of decisions, not just the person directly using that service.

Anyway, this “Check if you need a UK visa” thing doesn’t have anything as simple as a list of countries with visa exemptions. No. Instead, you have to click a big green button that says “Start now”, then it asks me what type of passport or travel document I have, and I’m asked to select from a big list. Then I’m asked to choose the purpose of my hypothetical visit. Tourism, say.

Then it shows me the answer for the one country I’ve asked about.

That kind of solves the problem, albeit with about three more clicks than just having a list. Over-engineering.

User story #2: As a UK-based organiser of an international youth sporting tournament, I need to be able to check the visa requirements of 500 people who will be visiting the UK from around the world on scholarships.

Oh god, if you resemble the second user story, you are so screwed if you use the existing GOV.UK system. Instead of a simple list, you have to sort the list of students by country, then check each country by hand. Be sure to send the Cabinet Office the bill for the RSI you get from clicking through that form a couple hundred times.

Fortunately Wikipedia exists and has the list in readable HTML form rather than an interactive clicky thing that’ll make you want to stab someone. You better hope that someone hasn’t vandalised Wikipedia. Of course, if the government published a simple list of countries on a website with a stable URL, then Wikipedia could use that as a source and the editorial community on Wikipedia could use this to check that their copy is accurate.

I bet someone spent a lot of time and effort making this interactive wizard thing. But it’s actually significantly less useful than a simple list on a webpage (like the one the US State Department or Wikipedia publish) for pretty much any use case that involves more than one person, and that covers pretty much everybody in the travel industry and anyone who has to organise travel for other people (so, schools, universities, businesses in almost all sectors, the government, charities, etc.).
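
To make the point concrete, here’s a trivial sketch of what the tournament organiser in user story #2 could do with a plain machine-readable list. The VISA_EXEMPT set and the traveller records below are made-up stand-ins, not real Home Office data:

    from collections import Counter

    # Illustrative stub only; a real list would come from a published source.
    VISA_EXEMPT = {"United States", "Japan", "Brazil"}

    travellers = [
        {"name": "A", "nationality": "Japan"},
        {"name": "B", "nationality": "India"},
        {"name": "C", "nationality": "Brazil"},
    ]

    # Who needs a visa arranging, grouped by country, in two lines rather
    # than five hundred trips through a click-through wizard.
    needs_visa = [t for t in travellers if t["nationality"] not in VISA_EXEMPT]
    print(Counter(t["nationality"] for t in needs_visa))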

This is an impressively strange case of over-engineering. I really hope someone fixes it.

Here’s the code for it, by the way.




There Was An Old Woman Who Continuously Deployed

There was an old lady who swallowed a fly. No, no, hold it, she didn’t swallow a spider or a bird or a dog. Firstly, that’s stupid and irresponsible and a form of animal cruelty. Plus, we’re in the twenty-first century. We’ve got technology and cyber and stuff.

She got a programmable nanobot off some darknet knockoff of Alibaba that sells experimental biohacking gear and put it in a capsule. She swallowed the capsule. Now, of course, she needed something to control the nanobot. So she wrote a server script. The nanobot would talk to the server which would do it all. Who needs spiders and birds to catch flies when you have REST?

She wrote the script in Python. Because Python is lovely. She tried Django and Flask and Pyramid too. This script would stop the flies. And of course, she needed a database. Only PostgreSQL. I mean, there ain’t no MongoDB controlling my programmable nanobots, thank you very much. And a message queue and a few other things.

But then where to run the server? She couldn’t host it at home because BT provided her Internet connection (and that’s less reliable than a pair of baked-bean cans and a piece of string), so she decided to host it in the Cloud. But, of course, you can’t just set up a server individually for this. What if it needed to scale? So she decided to do some DevOps®.

She wrote a script to spin up an AWS EC2 server. Then she wrote an Ansible playbook to deploy it and a monitrc to ensure the servers were up. Then she set up Capistrano to deploy the script to the server. Then she set up Sentry to catch any exceptions. Better tie that into PagerDuty so she could programmatically modify the notifications for when monit and Sentry had something to tell her.

Then she discovered Docker. Then she ended up on Hacker News reading about Kubernetes and decided that the whole thing really needed a microservices architecture.

And the fly is still inside her. Because the server never got deployed and the scripts never got written because she went down the pub instead.

Perhaps she’ll die. Or she’ll just crank it out in PHP.