tommorris.org

Discussing software, the web, politics, sexuality and the unending supply of human stupidity.


There’s finally a reasonable-looking podcasting client for Linux: Vocal. It has the things that iTunes (etc.) has had since the early days, which everyone told me were unimportant, like remembering where you left off.

Of course, now we have the Web Audio API, Electron and things like remoteStorage, we may even get cross-platform free software podcasting apps that sync what you have and haven’t listened to across devices. And then we’ll be back to where we were back in about 2008 with iTunes and the iPod…


Firefox 52 adding insecure form warnings

The latest version of Firefox’s Developer Edition (formerly known as Aurora) now ships with more prominent warnings for when you send passwords from an insecure context. For a long time, some sites have attempted to justify having login forms on insecure HTTP pages with the argument that the credentials would be encrypted in transmission as they were being sent to an HTTPS endpoint. The problem with this is that it doesn’t actually prevent password sniffing; it just makes it slightly harder. Rather than sniffing the credentials as they go over the wire, you instead use a man-in-the-middle attack to intercept the HTTP page and insert JavaScript into it that sniffs the password upon entry and then sends it via Ajax to the interceptor.

Firefox’s new implementation uses a fairly simple algorithm described in the new W3C Secure Contexts spec that is attempting to standardise some of the key concepts of browser-side security. Hopefully, users being warned that they are submitting passwords insecurely will start prompting websites to stop doing the login-form-on-HTTP-that-submits-via-HTTPS anti-pattern.
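The rule is, at heart, very simple. Ignoring the finer points of the Secure Contexts spec (localhost and file: URLs also count as secure, for instance), it amounts to something like the following hypothetical Python sketch — the function names are mine, not Firefox’s:

```python
from html.parser import HTMLParser

class PasswordFieldFinder(HTMLParser):
    """Counts <input type="password"> elements in a page."""
    def __init__(self):
        super().__init__()
        self.password_fields = 0

    def handle_starttag(self, tag, attrs):
        if tag == "input" and dict(attrs).get("type") == "password":
            self.password_fields += 1

def insecure_password_form(page_url, html):
    """True if the page contains a password field but was itself
    delivered over an insecure context. Note that where the form
    *submits* to is irrelevant: an attacker rewrites the HTTP page
    before the user ever types anything."""
    finder = PasswordFieldFinder()
    finder.feed(html)
    secure = page_url.startswith("https://")  # simplification: the spec
                                              # also treats localhost etc.
                                              # as secure contexts
    return finder.password_fields > 0 and not secure
```

The point the sketch makes is that the check never needs to look at the form’s action attribute at all.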

My usual go-to example when illustrating the problem is airline websites, specifically the online checkin or frequent flyer account login page. You give airlines quite substantial amounts of money and personal information. For a long time, most were vulnerable to this kind of attack. Malicious hackers have also been known to steal and sell frequent flyer miles, although not necessarily through man-in-the-middle attacks on the login forms.

British Airways used to have a login form for their Executive Club frequent flyer programme on their homepage—they’ve now fixed this and the whole site seems to be HTTPS.

But plenty of other airlines I checked (mostly selected at random) are still vulnerable to man-in-the-middle password stealing through JavaScript injection.

And that’s just one sector: airlines. There are plenty more sites that ordinary people use every day that have potential vulnerabilities caused by these purportedly-secure-but-really-not login forms. Browsers giving prominent and irritating warnings about it is the first step to getting the companies to pay attention.

When the next big attack happens, there will–as always–be non-technical people from government and business lamenting how difficult all this information security stuff is, and how the vectors of attack are always changing. Let it be on the record that this kind of vulnerability is extremely simple, well-known and relatively easy to exploit. There are interesting and ingenious ways to attack Transport Layer Security, but if you don’t turn it on to start with, you don’t need to DROWN POODLEs or do anything that will make technical people go “ooh, that’s clever”. Firefox warning users about this really old and boring way of breaking user security might mean that people spend less time speculating about the scary difficult emerging threats and fix the basic glaring security errors right in front of them.


Towards an Evernote replacement

Since the recent announcements by Evernote that they really, really will be able to poke around inside your notebooks without issue, and that they’ll also apply to your data the same sort of machine learning technology that people had convinced themselves paying for a product would help them avoid, lots of people have been looking at alternatives to Evernote. I’ve been evaluating a few of them.

Here are some of the open-source alternatives I’ve seen people talk about:

  • Laverna, a client-side only JavaScript app that uses localStorage for your notes. No syncing, sadly.
  • Paperwork, a PHP server-side app
  • Standard Notes, which aims to be an encrypted protocol for storing Evernote-style notes

These all handle the plain text (or Markdown or whatever) case reasonably well, but there are a few things Evernote provides which we should be aware of if we’re trying to find replacements.

  1. text/plain or RTF storage. A lot of people store a lot of simple text notes in Evernote.
  2. OCRed PDF storage. Evernote has an app called Scannable that makes it ludicrously easy to scan a lot of documents and store them in Evernote.
  3. Web Clipper: I don’t use this, but a lot of people use Evernote as a kind of bookmarking service through the Web Clipper plugin that they provide. If they see a news article or recipe or whatever on the web, they clip it and store it in Evernote, using it almost like a reading list, the way people use Instapaper or Pocket.

The solutions people have been building generally solve problem (1) but do little to address problems (2) and (3).

My own preferred solutions are basically this: for (1), I’m leaning towards just storing and syncing together plain text Markdown files of some description.

Solving (2) is a harder problem. My current plan is to try to create a way to store all these in email. Email is a pretty reliable, well-tested and widely implemented Everything Bucket. The process would be relatively simple: scan the document, run it through an OCR process, then provide the relevant title and tags, which could be stored in the subject line and body of the email. The OCR result would also be stored in the body of the email to make it more easily searchable. Then you just stick it all in an email folder (or Gmail label). You’ve got a security layer (whatever your email provides, and if you are storing lots of important data in there, you should probably ensure it is protected by 2FA). You’ve got sync (IMAP). You’ve got universal capture (email it to yourself). And you have already made either the financial bargain (with, say, Fastmail) or the give-away-all-your-personal-information bargain (Gmail). Backing up IMAP is relatively trivial compared to backing up whatever weird binary blob format people come up with.
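As a sketch of the packaging step (the function name and addresses are made up, and the OCR itself is left to whatever tool you prefer — Tesseract, say), Python’s standard library can already build such an email:

```python
from email.message import EmailMessage

def scan_to_email(title, tags, ocr_text, pdf_bytes, me="me@example.com"):
    """Package a scanned, OCRed document as a self-addressed email:
    title and tags in the subject, OCR text in the body, original
    PDF attached. Hand the result to smtplib or append it over IMAP."""
    msg = EmailMessage()
    msg["From"] = me
    msg["To"] = me
    # Tags get a '#' prefix so any mail client can search for them.
    msg["Subject"] = "%s [%s]" % (title, " ".join("#" + t for t in tags))
    # The OCR text in the body makes the scan full-text searchable
    # from any IMAP client.
    msg.set_content(ocr_text)
    msg.add_attachment(pdf_bytes, maintype="application",
                       subtype="pdf", filename=title + ".pdf")
    return msg
```

Nothing here is exotic, which is rather the point: the hard parts (storage, sync, search, backup) are already solved by email itself.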

Solving (3) is somebody else’s problem because I don’t understand why anyone wants to stick all the websites they’ve ever visited into Evernote.

That said, let’s not promise users a replacement for a piece of software if we are only replicating the features of it that we actually use. If anyone has great suggestions for how they are going to sort out problem (2), I’m all ears.


Mako for VS Code

I just launched a Visual Studio Code extension: Mako. It provides syntax highlighting for the Mako template language. There are already extensions for Jinja, Django and so on.

I can’t take much credit for this: literally all the extension does is take an existing TextMate language grammar. I needed it, so I created it very quickly and released it. It’s MIT-licensed, per the original TextMate plugin.

I’m really impressed with how great VS Code is. To see how to build your own plugin, check out this example tutorial and this page explaining how the vsce CLI tool works.



TIL: Vagrant doesn’t do any kind of error recovery when a VM (.box) download fails/stalls. This gets boring the third or fourth time it happens.

If you happen to be using a sporadic or error-prone Internet connection (and if you are using a London DSL connection, that definitely counts) you can get around this by manually downloading the box file (using wget, say) and then using vagrant box add boxname ~/Downloads/name_of_vm.box (see this StackOverflow answer for details of the manual box adding process). This may also be of use if you are behind particularly fiddly or developer-unfriendly corporate firewalls.
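For the curious, resuming a stalled download isn’t rocket science; wget -c does it with an HTTP Range request, and a rough Python sketch of the same trick (the function names are mine, and the server has to honour Range requests for the resume to actually help) looks like this:

```python
import os
import urllib.request

def resume_offset(dest):
    """How many bytes of the file we already have on disk."""
    return os.path.getsize(dest) if os.path.exists(dest) else 0

def resume_download(url, dest, chunk=1 << 16):
    """Fetch url to dest, resuming a partial file via an HTTP Range
    request -- roughly what `wget -c` does and Vagrant doesn't."""
    offset = resume_offset(dest)
    req = urllib.request.Request(url)
    if offset:
        req.add_header("Range", "bytes=%d-" % offset)
    with urllib.request.urlopen(req) as resp:
        # A server that ignores Range replies 200 with the whole file,
        # so start over rather than appending a duplicate copy.
        mode = "ab" if offset and resp.status == 206 else "wb"
        with open(dest, mode) as out:
            while True:
                block = resp.read(chunk)
                if not block:
                    break
                out.write(block)
```

Which makes it all the more irritating that a tool whose whole job is fetching multi-hundred-megabyte boxes doesn’t do it.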

Of course, in an ideal world, we’d have languages that are sensible enough that you don’t need to download a Linux distribution for each project you work on in said language.



You (probably) don't need a chatbot

There has been a great hullabaloo in the last few months about the rise of chatbots, and discussions of “conversational UIs” or, even more radically, the concept of “no UI”—the idea that services might not need a UI at all.

This latter concept is quite interesting: I’ve written in the past about one-shot interactions. For these one-shot interactions, UI is clutter. But chatbots aren’t the answer to that problem: because chatbots are UI, just a different sort of UI. Compare…

Scenario 1: App

  1. Alice hears an amazing song playing in a club.
  2. Alice pulls out her iPhone and unlocks it by placing her finger on the TouchID sensor.
  3. Alice searches on the homescreen for the Shazam app.
  4. Alice opens Shazam, then presses the button to start the process of Shazam identifying the song that is currently playing.
  5. Alice waits.
  6. Alice is told what the song is and offered links to stream it or download it from a variety of streaming and download services that vary depending on the day of the week, the cycle of the moon, and how Shazam’s business development team are feeling this week.

Scenario 2: Chat

Someone at Shazam decides that apps are a bit old-fashioned and decides to build a chatbot. They have read an article on Medium.com that tells them that chatbots are better, and decide to build one based solely on this advice rather than any actual empirical evidence.

  1. Alice hears an amazing song playing in a club.
  2. Alice pulls out her iPhone and unlocks it by placing her finger on the TouchID sensor.
  3. Alice searches on the homescreen for the Facebook Messenger app.
  4. Alice opens Facebook Messenger, then locates the existing chat session with the Shazam bot.
  5. Alice scrolls back up the chat to work out what the magic phrase she needs to type in to trigger the chatbot into listening to music.
  6. Alice waits.
  7. Alice is told what the song is and offered whatever extra rich data the chat UI is allowed to show.

As you can see, this is a vast improvement, not because it makes the process less involved or elaborate, but because someone on Medium.com told them that it is new and exciting.

Scenario 3: Idealised One-Shot Interaction

  1. Alice hears an amazing song playing in a club.
  2. Alice taps a button on her smartwatch. Everything else happens in the background. Alice continues partying and enjoying herself rather than being the saddo staring at her phone all night.

For those without a smartwatch, a lockscreen button on the phone could be substituted.

Anyway, this is a slight distraction from the broader point: chatbots are a bit of a silly fad, and they seem to be adopted based on fashion rather than on any actual utility.

But, but, there’s this awesome chatbot I use, and I really like it!

Great. I’m not saying that they have no purpose, but chatbots are being adopted even though they are often worse at what they do than the alternatives. They also come with considerable downsides.

First of all, chatbot UIs are poor at letting a user compare things. When someone browses, say, Amazon or eBay or another e-commerce service, they will often wish to compare products. They’ll open up competing products in different tabs, read reviews, check up on things on third-party sites, ask questions of their friends via messaging apps and social media sites like Facebook. Chatbot UIs remove this complexity and replace it with a linear stream.

Removing complexity sounds good, but when someone is ordering something, researching something or in any way committing to something, navigating through the complexity is a key part of what they are doing.

Imagine this scenario. Apple have 500 different iPhones to choose from. And instead of calling them iPhones, they give them memorable names like UN40FH5303FXZP (Samsung!) or BDP-BX110 (Sony!). Some marketing manager realises the product line is too complex and so suggests that there ought to be a way to help consumers find the product they want. I mean, how is the Average Joe going to know the difference between a BDP-BX110, a BDP-BX210, and a BDP-BX110 Plus Extra? You could build a chatbot. Or, you know, you could reduce the complexity of your product line. The chatbot is just a sticking plaster for a broader business failure (namely, that you have a process whereby you end up creating 17 Blu-Ray players and calling them things like BDP-BX110 rather than calling them something like “iPhone 7” or whatever).

Chatbots aren’t removing complexity as much as recreating it in another form. I called my bank recently because I wanted to enquire about a direct debit that I’d cancelled but that I needed to “uncancel” (rather than setup again). I was presented with an interactive voice response system which asked me to press 1 for payments, 2 for account queries, 3 for something else, and then each of those things had a layer more options underneath them. Of course, I now need to spend five minutes listening to the options waiting for my magic lucky number to come up.

Here’s another problem: the chatbot platforms aren’t necessarily the chat services people use. I’m currently in Brazil, where WhatsApp is everywhere. You see signs at the side of the road for small businesses and they usually have a WhatsApp logo. WhatsApp is the de facto communication system for Brazilians. The pre-pay SIM card I have has unlimited WhatsApp (and Facebook and Twitter) as part of the 9.99 BRL (about USD 3) weekly package. (Net neutrality? Not here.) The country runs on WhatsApp: the courts have blocked WhatsApp three times this year, each time bringing business and personal interactions to a grinding halt. Hell, during Operação Lava Jato, the ongoing investigation into political corruption, many of the leaks from judges and politicians have been of WhatsApp messages. Who needs Hillary Clinton’s private email servers when you have WhatsApp?

WhatsApp is not far off being part of the critical national telecoms infrastructure of Brazil at this point. Network effects will continue to place WhatsApp at the top, at least here in Brazil (as well as most of the Spanish-speaking world).

And, yet, WhatsApp does not have a bot platform like Facebook Messenger or Telegram. To get those users to use your chatbot, you need to convince them to set up an account on a chat network that supports your bot. A lot of users will end up stuck with WhatsApp, the app they use to talk to their friends, and Telegram, the app they use to talk to weird, slightly badly programmed robots. Why bother? Just build a website.

Now, in fairness, WhatsApp are planning to change this situation at some point, but you still have an issue to deal with: what if your users don’t have an account on the messaging service used by the bot platform?

One of the places chatbots are being touted for use is in customer service. “They’ll reduce customer service costs”, say proponents, because instead of customers talking to an expensive human you have to employ (and pay, and give breaks and holidays and parental leave and sick days and all that stuff to), they just talk to a chatbot which will answer their questions.

It won’t though. Voice recognition is still in its infancy, and natural language parsing is still fairly primitive keyword matching. If your query is simple enough that it can be answered by an automated chatbot, it’s simple enough for you to just put the information on your website, which means you can find it with your favourite search engine. If it is more complicated than that, your customer will very quickly get frustrated and need to talk to a human. The chatbot serves only as a sticking plaster for lack of customer service, or business processes that are so complicated that the user needs to talk to customer service rather than simply being able to complete the task themselves.

You know what else would suffer if there were a widespread move to chatbots? Internationalisation. Currently, the process of internationalising and localising an app or website is reasonably well understood. In terms of language, the process isn’t complex: you just replace your strings with calls to gettext or a locale file, and then you have someone translate all the strings. There’s sometimes a bit of back and forth because something doesn’t really make sense in another language, so you have to refactor a bit. There are a few other fiddly things like address formats (no, I don’t have a fucking ZIP code) and currency, as well as following local laws and social taboos.

In chatbot land, you have the overhead of parsing the natural language that the user presents. It’s hard enough to parse English. Where are the engineering resources (not to mention linguistic expertise) going to come from to make it so that the 390 million Spanish speakers can use your app? Or the Hindi speakers or the Russian speakers. If your chatbot is voice rather than text-triggered, are you going to properly handle the differences between, say, American English and British English? European Portuguese, Brazilian Portuguese and Angolan Portuguese? European Spanish and Latin American Spanish? Français en France versus Québécois? When your chatbot fucks up (and it will), you get to enjoy a social media storm in a language you don’t speak. Have fun with that.

And you can’t use the user’s location to determine how to parse their language. What language should you expect from a Belgian user: French, Dutch or German?

If you tell a user “here’s our website, it’s in English, but we’ve got a rough German translation”, that’s… okay. I use a website that is primarily in German every day, and the English translation is incomplete. But I can still get the information I need. If, instead, your service promised to understand everything I say, then completely failed to speak my language, that’d be a bit of a fuck you to the user.

In the chatbot future, the engineering resources go into making it work in English, and then we just ignore anyone who speaks anything that isn’t English. World Wide Web? Well, if we’re getting rid of the ‘web’ bit, we may as well get rid of the ‘world’ and ‘wide’ while we’re at it.

Siri and Cortana are still a bit crap at language parsing, even with the Herculean engineering efforts of Apple and Microsoft behind them. An individual developer isn’t going to do much better. Why bother? There’s a web there and it works.

There’s far more to “no UI” or one-shot interactions than chat. But I’m cynical as to whether we’re ever going to reach the point of having “no UI”. We measure our success based on “engagement” (i.e. how much time people spend staring at the stuff we built). But the success criterion for the user isn’t how much time they spend “engaging” with our app, but how much value they get out of it divided by the amount of time they spend doing it. The less time I spend using your goddamn app, the more time I get to spend, oh, I dunno, looking at cat pictures or snuggling with my partner while rewatching Buffy or writing snarky blog posts about chatbots.

But so long as we measure engagement by how many “sticky eyeballs” there are staring at digital stuff, we won’t end up building these light touch “no UIs”, the interaction models of set-it-and-forget-it, “push a button and the device does the rest”. Because a manager won’t be able to stand up and show a PowerPoint of how many of their KPIs they met. Because “not using your app” isn’t a KPI.

Don’t not build a chatbot because of my snarkiness. They may solve a problem that your users have. They probably don’t but they might. But please don’t just build a chatbot because someone on a tech blog or a Medium post told you to. That’s just a damn cargo cult. Build something that delivers value to your users. That may be a chatbot, but most likely, it’s something as simple as making your website/app better.


Let's Encrypt: it just works, so use it

I’ve just switched over to Let’s Encrypt. My paid-for SSL certificate expires today. I don’t object too much to having to pay the €11 or whatever to renew it, but having to remember to do it every year is a huge faff. The process of making a CSR, logging into the SSL supplier website and all that is just boring.

Let’s Encrypt is nice not only because you don’t have to pay the (not very expensive) SSL certificate tax, but because ideally, it automates the renewal process. Instead of being an annual arse-ache, it’s hopefully set-it-and-forget-it.

For your personal or hobby site, if you currently do the annual certificate dance, switch to Let’s Encrypt when your SSL certificate expires. If you don’t currently do SSL, Let’s Encrypt takes one of the pain points out of it. There’ll still be a market for SSL certificates for businesses (especially for wildcard and EV certificates), but Let’s Encrypt lowers the bar to using SSL significantly.


Git fuzzy branch finder relying on the awesome fzf. This is fantastic if your job makes you use silly things like ticket or story IDs from a project management app in your branch names. Nobody deserves to suffer because of JIRA.


How to nuke Chrome's HSTS database (if you really need to)

When debugging hairball web dev problems, occasionally you’ll end up sending HSTS headers to yourself while on localhost. These are surprisingly tricky to remove from Google Chrome. Quite sensibly, the Chrome developers store HSTS hosts hashed.

The standard answer you read online is to go to chrome://net-internals/#hsts. You can check to see if there’s an entry in Chrome’s HSTS database, and you can remove it. But I sit there and type every version of localhost, localhost:3000, 192.168.0.1, 127.0.0.1, ::1 and so on that I can think of and yet the problem remains. (The query form works whether or not you prefix the query with the https:// scheme, so https://en.wikipedia.org and en.wikipedia.org will both search for the same entry. The query form is quite useful if you need to debug your own site’s HTTPS/HSTS setup, so I recommend it.)

The unfortunate result of having screwed up your localhost HSTS entry in Chrome is you then spend the next few days having to use some other browser you aren’t familiar with to work on localhost. This isn’t ideal.

A rather brute force solution to a polluted HSTS database is to delete the database completely. It isn’t a great solution, but if you’ve polluted the database such that you can’t remove localhost from it despite numerous attempts, sometimes needs must. So here’s how you do it.

  1. Close Chrome. It caches the HSTS database in memory, so if you just remove the file, it’ll get rewritten.
  2. Find where Chrome stores the database. On Mac OS X, using the Chrome that’s in use as of today (August 2, 2016), this is in ~/Library/Application Support/Google/Chrome/Default/ in a file called TransportSecurity. Older versions of Chrome store it in a file called StrictTransportSecurity. If you are using Chrome Canary, substitute “Chrome Canary” for “Chrome” in the paths. If you are using Chromium, substitute “Chromium” for “Google/Chrome”. On Linux, check out .config/google-chrome/Default/ or .config/chromium/Default/ as appropriate. On Windows, ask Cthulhu.
  3. Replace the TransportSecurity file with an empty JSON object: {}. On OS X, echo "{}" > ~/Library/Application\ Support/Google/Chrome/Default/TransportSecurity
  4. Restart Chrome.
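The steps above can be sketched as a small Python helper (the paths assume stable Chrome on Mac OS X or Linux, per step 2; unlike the bare echo, it keeps a backup of the old database):

```python
import shutil
from pathlib import Path

# Default-profile locations for stable Chrome; adjust for
# Canary/Chromium as described in step 2.
HSTS_PATHS = {
    "darwin": "~/Library/Application Support/Google/Chrome/Default/TransportSecurity",
    "linux": "~/.config/google-chrome/Default/TransportSecurity",
}

def reset_hsts(path, backup=True):
    """Back up, then empty, Chrome's HSTS store. Chrome must be fully
    closed first, or it will rewrite the file from its in-memory copy."""
    p = Path(path).expanduser()
    if backup and p.exists():
        shutil.copy2(p, p.with_suffix(".bak"))
    p.write_text("{}")  # an empty JSON object: no remembered HSTS hosts
```

The backup means that if you regret nuking everything, you can close Chrome again and put the old file back.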

Now, be aware that resetting your HSTS database /does/ undermine the security benefits of having HSTS in your browser. After doing it, you should keep an eye out for spoofing, MITM attacks, phishing attempts and so on until Chrome picks up replacement HSTS headers from the sites you visit frequently. This /is/ pretty much the last resort and you shouldn’t be doing it routinely. Don’t do it unless you absolutely have to. If you are going to run an app on localhost that might try to force HTTPS, do the testing in an Incognito window or in a browser whose profiles/cache you can fully nuke after you are done.

Incidentally, I haven’t had any similar issue in Firefox because Firefox’s HSTS database is tied into the browser history, meaning that you simply find an entry in history and tell Firefox to forget about the site, and it removes the HSTS entry.



PayPal email me to say a subscription payment failed (due to a replaced card). I click the link in the email that PayPal sent me and it takes me to the “outdated version of PayPal”. PayPal is a fucking hot mess of a service.



Why GDS / GOV.UK bet on the web rather than apps. All of which is eminently sensible. Imagine the alternative: the UKGOV app. You download it and it has everything you need to be a citizen. Okay, every week there’d need to be new updates to the functionality. It’d have to be available on every single platform.

And you’d end up downloading and storing a lot of that binary blob for functionality that you use once in a blue moon (renewing your passport? That’s a once-every-ten-years problem. Driving licence renewals are the same. Some of the business-oriented stuff that’s done on government websites will affect only half a percent of a country’s citizens). And then you have the moral risk of having taken app permissions (notifications, location etc.).

Apps have downsides too. A big problem in the tech industry at the moment is that too many people count up the negative sides of the web and the positive sides of apps, and don’t consider the other side of both balance sheets. Still, the current ridiculous app trend is great for keeping iOS and Android devs in full employment.


Startups can’t explain what they actually do. I disagree with the contention that it’s because the owners can’t think clearly: it is all a scam to get money. It’s all hype. Saying “cloud-based disruptive P2P content-driven platform” hoodwinks coked-up investors in a way that “it’s a website where you can upload your photos and we’ll show them to you in a different way” doesn’t.

Also, the article assumes that the customer is the end user. In Silicon Valley land, that’s rarely if ever true any more.


TIL: the Royal Mail Group have a registered trade mark on “the colour red”. Take that, refraction.