Discussing software, the web, politics, sexuality and the unending supply of human stupidity.

LibreOffice: a case study in frustration and usability shortcomings

I don’t really use LibreOffice. I have it installed because it’s marginally better than the alternatives—namely, paying for Office 365, using OpenOffice (which is basically LibreOffice without any pesky developers actually working on it), or figuring out how Apple have broken Pages since I last used it. I use text/plain, Markdown, and if I’m feeling particularly masochistic, LaTeX. Or Google Docs because of social obligation.

Recently, I had some docx files I needed to tidy up, edit a bit and convert to PDF. Finally, a perfect use case for LibreOffice to shine.

I started to report these issues on LibreOffice’s bug tracker but life is too short for Bugzilla.

Keyboard shortcuts didn’t work

I opened up the first document and it was too big for my screen. I’m using a Mac, so let’s use the universal zoom out command: Cmd -

That didn’t work. It just put a - into the document. Unhelpful.

How about zoom in? Cmd =. Nope, that didn’t work either.

So that’s problem 1: Cmd - and Cmd = don’t work for zooming in/out even though that’s the keyboard shortcut in every other Mac app.

What is the keyboard shortcut for it?

Finding keyboard shortcuts

I clicked ‘Help’ on the menu bar. I typed in ‘keyboard shortcuts’ in the search box. Nada.

LibreOffice’s Help menu doesn’t integrate with the Help menu search box in macOS—but it still has a search box. The point of the macOS Help menu search box is that it searches both the application’s menu items and the topics listed in its documentation. In LibreOffice it does only the former (which I believe is built into the operating system), not the latter. This is misleading: it leads users to think there’s no documentation when there is.

So, problem 2: The Help menu has a search field that doesn’t work.

Okay, let’s look at the Help menu. Many Mac apps have an item on the Help menu called Keyboard Shortcuts. Nope.

Problem 3: If your app has keyboard shortcuts, you should have a menu item in the Help menu telling people what they are. Undiscoverable keyboard shortcuts need to be made discoverable.

The first entry in the help menu is called ‘LibreOffice Help’ and has the distinctly Windowsy shortcut F1. I open that up.

Problem 4: Don’t use Function key shortcuts on macOS like you would on Windows. macOS is not Windows.

It brings up a window called LibreOffice Help, with focus placed in a search box so I can search the index. You know, the thing I couldn’t search using the in-menu search box.

So I search for “keyboard shortcuts”. Nope. There is “keyboard; general commands” which didn’t have a cross-reference for “keyboard shortcuts”. Okay, let’s take a look at that.

Problem 5: If you have a document containing keyboard shortcuts, you should be able to find it by typing ‘keyboard shortcuts’ into the search field.

Nothing in that document about zooming in or out. Okay, let’s search for something else. I focus back in the search field. I type Cmd+A to select all the text in the search field to delete it and… I get a Database Wizard window.

Do I want to create a new database? No. I want to select all in the window I was focussed in, so I could delete it and type something else, just like I would type Cmd+A in any other Mac app.

I’m pretty sure if I wanted to create a database, the route I would take wouldn’t be: open LibreOffice, open a word processing document, edit it a bit, want to zoom out, open up the help centre, type something, not find what I’m looking for, then try to change my search, then create a database. Makes perfect sense.

Problem 6: Why would I want to create a database when I’m looking at help files for the word processor?

The help centre is not going to help, evidently. Maybe there’s a menu option to zoom the document.

View menu, Zoom! Okay, no keyboard shortcuts listed in there. There are keyboard shortcuts for a whole bunch of things which aren’t listed in the help document, but whatever.

I change the zoom level by going to the submenu. You can also change the zoom by using the slider in the bottom right hand corner of the window. This has zoom in and zoom out buttons (marked ‘-‘ and ‘+’). That function exists… but isn’t accessible using the menu or through a keyboard shortcut.

As an aside: the database thing

When I accidentally ended up in the ‘Database Wizard’ while trying to find keyboard shortcuts in the help section, I noticed it asked me what sort of database I wanted to create, with a drop-down menu.

The only option in the drop-down menu is ‘HSQLDB Embedded’. As someone who knows a bit about databases, I know what this is. But:

Problem 7: You are offering me a choice of databases but there’s only one. Why offer me a drop-down box—suggesting some choice—if there is no choice?

Problem 8: Why tell me what type of database it is? This gives me no meaningful information.

Back to my document

Alright, so zooming in and out of a document using system-wide keyboard shortcuts isn’t going to be possible. Oh well. Let’s work on the document.

First of all, there was a table I wanted to delete. I did what many years of Microsoft Word—and natural intuition—taught me. Go to the line after the table finished. Press delete a bunch of times until the table goes away. Nope.

How about creating a selection that starts after the table finishes and extends back to before the table starts, then pressing delete? Nope. That doesn’t work either.

The way you delete a table is you right click on the table and choose “Delete > Table”. Or you do the same from the drop-down menus.

Problem 9: Why not let me delete a table by backspacing over it, or range-selecting it and pressing delete?

I delete some stuff from the document. It has a table of contents that’s auto-generated. So I go to it and “update” it. It loses all the formatting from the table of contents. This is bad.

Problem 10: Updating a table of contents in a docx file lost the formatting of the table of contents.

Now, time to save my document. I save a copy as ODT. I want to export it as PDF too.

Problem 11: The Export as PDF window has ‘Create PDF form’ pre-selected even though my document doesn’t have any forms in it. Surely, it’d be sensible to work out whether the document has any forms in it then only offer me the option of whether to create a PDF form if there’s forms that actually need creation? Otherwise, you are asking me to make a wholly redundant decision (and, boy, are there a lot of decisions in the LibreOffice Export as PDF panel: it’s a six-panel tabbed interface with 75 different interface components including radio buttons, drop-downs, checkboxes, numeric selectors, buttons and a file selector). If it’s irrelevant, grey it out.

I’m not trying to be shitty to LibreOffice’s developers: I’m glad LibreOffice exists and want it to succeed, even though it’s not really for me. It still feels very unpolished and unloved. I get why some of the above issues exist: they are trying to make a system that works across platforms, they’ve got a whole bunch of different products to work on.

I have reported one of these issues on Bugzilla, and it was closed as a duplicate of an issue that has sat untouched since 2012. I wish the LibreOffice developers all the best, but it does seem like an unmanageably large monolithic blob of an application, with an enormous amount of technical debt and a small, overstretched team.

git: redo a failed commit with the message you already typed

I bet you’ve been in this scenario: you are committing something to a Git repository and it fails.

Perhaps a pre-commit hook fails. Perhaps you are using a secure smartcard (such as a YubiKey) to sign your commit, but it isn’t plugged in. Or you type your PIN or GPG passphrase wrong. Or a stale .git/index.lock file is in the way and you need to clear it before committing. Or maybe your EDITOR environment variable is set wrong. You’ve attempted to commit. It fails.

You’ve fixed whatever makes it not fail. You want to commit again and use the beautiful commit message you wrote a few minutes ago when it failed.


git commit -F .git/COMMIT_EDITMSG

When a commit fails, Git stores the message in a file called .git/COMMIT_EDITMSG. This command simply commits with the contents of that file as your commit message. -F/--file takes a file name and uses its contents as the commit message, just as -m takes a message on the command line.

On my computer, I’ve set it up as an alias (git retry-commit). Which I probably won’t remember.
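Setting that alias up is a one-liner (the alias name here is my own choice, not anything built into Git):

```shell
# Define a "retry-commit" alias that reuses the message Git saved
# from the last failed commit attempt (.git/COMMIT_EDITMSG).
git config --global alias.retry-commit 'commit -F .git/COMMIT_EDITMSG'
# After fixing whatever made the commit fail, run: git retry-commit
```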

My blog has allowed me to post over 140 characters for a lot longer than Twitter.

I can also correct my spelling, add links and formatting and I own it. I like this. Writing without limitations. Also, no Nazis on my website. Guaranteed.

How to use a YubiKey to secure all the things

For a while I’ve been using Yubico’s blue FIDO key. It enables you to use FIDO U2F for web services. Currently, the main sites that support Universal 2nd Factor (U2F) login are Google, Facebook, GitHub, Dropbox, Fastmail and the password manager Dashlane. It’d be great to see more sites added to this list, but this is a good start.

I upgraded recently from the blue FIDO key to the black YubiKey 4. The full YubiKey can be used for a few things:

  • website 2FA using FIDO U2F
  • login to your computer using PIV
  • GPG
  • SSH

It took a while before I got it set up for each of these things.


Website 2FA using FIDO U2F

This is the easy bit. You go to the relevant websites, you activate the U2F mode and you put the key in as required. This works the same whether you use the blue FIDO U2F-only key or the black YubiKey 4.

Generally, U2F is used in addition to TOTP tokens. U2F is a nicer experience than TOTP on laptops and most desktops. (Though some USB ports are poorly positioned for actually using a YubiKey: the USB ports on either side of the old Mac keyboards are bad; the two USB ports on my Das mechanical keyboard are perfect.)

Computer login using PIV

To set your Windows or Mac computer up for login, you can use the YubiKey PIV Manager app. You can find instructions on Yubico’s website on how to set up key-based authentication on Linux.

The way I’m using the PIV mode is like this. I have a long password for my computer. I can type this in or I can use the YubiKey with a six-digit PIN. If you’ve got a good 20+ character password, you can always use that. But if you also have the USB key, you can use the PIN instead.


GPG

With the YubiKey, you can load a number of GPG keys onto it. As with the PIV logins, these are protected with a PIN. My main use case for GPG at the moment is code signing for GitHub. (Alas, nobody seems too bothered about email signing.)

This blog post by Simon Josefsson explains the process of setting up a YubiKey for GPG. The broad approach: you create a GPG key, then you create three subkeys—one for encryption, one for signing, one for authentication. The private key for each is stored on the YubiKey: once you’ve pushed a private key to the YubiKey, there’s no way to get it back out again. (So it’s sensible to make a backup of the keys and store it offline on, say, a USB stick in a safe.)

If you’ve already got your public key on GitHub, you’ll need to export a new public key containing the subkeys you’ve stored on the YubiKey and paste that up on GitHub. (If you use GitLab, this whole approach doesn’t work… because of this bug I’ve reported.)

Once you’ve got it all set up, when you commit some new code to a Git repository, it’ll ask you for your YubiKey’s six-digit PIN. That is cached for a bit, so it should only interrupt you occasionally. If you remove the YubiKey, you’ll obviously have to put it back in and re-enter the PIN to commit new code.
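Pointing Git at the signing subkey is just two config lines (the key ID below is a placeholder: substitute the ID of the signing subkey that lives on your YubiKey):

```shell
# Tell Git which GPG key to sign with (0xDEADBEEF is a placeholder).
git config --global user.signingkey 0xDEADBEEF
# Sign every commit by default, rather than passing -S each time.
git config --global commit.gpgsign true
```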


SSH

Now the fiddly bit. The way you use SSH with the YubiKey is to convert your GPG key into an SSH public key. There’s a utility called gpgkey2ssh that does just this: you point it at your GPG public key and it turns it into an SSH public key (remember, other than an offline—and hopefully encrypted—backup, you don’t have the private key: it’s stowed away inside the YubiKey).

The main snag is the GPG version: you need to be running GPG 2.1. You can now get it from Homebrew on the Mac, so do that. If you’ve got multiple installs, make sure your symlinks point at GPG 2.1.

This tutorial is the best one I found for getting gpg-agent set up for SSH. It’s a bit of a faff keeping track of which keys are doing what during the process, so I ended up noting them down on a bit of paper.
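The core of that setup boils down to one config line and one environment variable; roughly this, assuming GnuPG 2.1’s default paths (check your tutorial of choice for the details):

```shell
# Let gpg-agent act as an SSH agent in place of the stock ssh-agent.
mkdir -p ~/.gnupg
echo "enable-ssh-support" >> ~/.gnupg/gpg-agent.conf

# Point SSH at gpg-agent's SSH socket (put this in your shell profile);
# gpgconf reports the right socket path for your GnuPG install.
export SSH_AUTH_SOCK="$(gpgconf --list-dirs agent-ssh-socket)"
```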

So, what to do?

If you are an ordinary user and you don’t mind the tradeoffs of having to occasionally plonk a USB key in the side of your computer, get a blue YubiKey FIDO U2F. They’re fantastic.

There are issues: on desktop Macs, you need to ensure you have a USB socket that’s actually available and not a faff to use (i.e. not on the back of a Mac Mini, or tucked away under the edge of a keyboard so you can put the key in… but can’t actually push the button). You may need to get a little bit of USB extension cable to give you somewhere to press, or you might consider getting the YubiKey Nano.

If you are a geek and write software (or software-adjacent products including documentation), and you don’t mind jumping through quite a few more hoops to get the GPG and SSH stuff set up, you should do that too. If you don’t use GPG or SSH, you probably don’t need the black YubiKey and can stick to the blue FIDO U2F device.

You can buy the YubiKey 4 from Amazon UK, or if you prefer, you can get the YubiKey FIDO U2F ‘blue’ key.


How to no longer fuck up symbolic links with lns

lns is an old-school Perl utility that unbreaks the command order of ln -s.

If you’ve got a file called a.txt and you want to make a symbolic link b.txt, you can type lns a.txt b.txt or lns b.txt a.txt and it’ll do the intelligent thing. This is really handy. It won’t overwrite anything. It won’t break shit. It’s in my ~/bin/ forever now.
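For contrast, here’s the dance plain ln -s demands: the target always comes first, the link name second, and getting it backwards silently creates a broken link—exactly the mistake lns prevents.

```shell
# Create a file and symlink it the ln way: TARGET first, LINK second.
echo "hello" > a.txt
ln -s a.txt b.txt

# The link points where we intended.
readlink b.txt    # prints: a.txt
```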

Since that site is looking a bit old and shaky, here are archived versions: Internet Archive 2016-10-30, GitHub Gist for the source code 2017-08-30

Things I learned this week: #EuroPython

This week, I’ve been at EuroPython in Rimini, Italy. I’ve been tweeting some sessions on the @Nexmo Twitter account. Here, I’m link-dumping a bunch of stuff I learned that you may or may not find of interest. It’s less a methodical writeup of talks, more links and stuff I found that are interesting, some of which were discussed at the conference (others have nothing to do with Python, programming or the conference).

Sourcelift is a subsidised travel programme for open source developers to go to conferences and events. They support mostly Python, Node.js and PostgreSQL-related projects, plus language-agnostic tools like text editors and package managers.

World’s Biggest Data Breaches: a rather depressing infographic of big hacks and data breaches.

awesome-security is a collection of interesting links related to security.

Bandit is a static analyzer for Python that inspects the abstract syntax trees (ASTs) of Python code looking for common security issues. Some of the checks are pretty straightforward and could be achieved without an AST parse: grep could tell you whether the code uses modules like ftplib, telnetlib, xmlrpclib or pickle, calls eval, or uses MD5. But checks like catching silently swallowed exceptions (i.e. an except: clause containing only pass) are where the AST earns its keep.
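That grep-level check really is trivial. A sketch (the module list is an illustrative subset, not Bandit’s actual rule set):

```shell
# A toy source tree to scan (stand-in for your real project).
mkdir -p src
printf 'import pickle\n' > src/risky.py
printf 'import json\n'   > src/fine.py

# Flag imports of modules Bandit would warn about.
# -r: recurse, -n: show line numbers, -E: extended regex.
grep -rnE 'import (pickle|telnetlib|ftplib|xmlrpclib)' src/
# → src/risky.py:1:import pickle
```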

Static analysis seems to have been a theme at EuroPython this year, with one mention of SonarQube, a code-analysis platform that scans for bugs and potential security vulnerabilities in a bunch of languages including C, C++, C#, PHP, Java, Python, JS, VB (6 and .NET), Swift and Objective-C. I haven’t tried it yet but it looks interesting.

Even more accessible is Semmle’s LGTM1. It’s blissfully simple: you add a repo, it scans it for bugs and alerts you to them in a dashboard. I’ve fixed a very minor one already. If you are maintaining an open source project, adding this kind of tooling is a no-brainer. Robots finding bugs automatically is better than humans finding them.

Every year, there’s a games design contest called PyWeek. The idea is simple: you have one week to design and build a game, in Python, based on a theme. This is a really cool idea.

PyDatalog is an implementation of Datalog, a simplified subset of Prolog. It’s easy to forget that logic programming languages like Prolog and Datalog exist, but they do—and they still help solve real-world problems.

objgraph is a Python object graph visualiser (it uses GraphViz). It’s quite helpful for finding memory leaks.

sanest is a library for nested dictionaries and lists. It allows you to do nested operations, and more intelligently handle errors.

Two web smaller-than-micro frameworks: Firefly and Pico.

I’m going to write more about data science, specifically machine learning soon.

  1. I’ve already mistyped it ‘LGBT’ at least three times. 🏳️‍🌈

Chatbots are the future and always will be

I read a lot online about how chatbots are the future. Ultimately, the proof of chatbots is in the eating—I mean, err, talking (this really is a crap metaphor). Let’s try one out to do something like… booking a flight.

The “domain model” of commercial passenger aviation is familiar to most people, but still there are a lot of choices to be made. Which airline? Which day do I want to fly? Some days are a bit cheaper than others. What seat class do I want? What ticket class do I want? (They are different things.) Do I want to book using my frequent flyer miles or with cash? Do I get checked baggage or just hand baggage? It might be slightly cheaper if I fly from London Gatwick rather than Heathrow. Do I want to land at Newark or JFK? Am I logged into my frequent flyer account? Do I need to supply Advance Passenger Information now or later? What about if I need to change flights? If I book business class for the long-haul segment, am I going to get business class on the short-haul codeshare segment? What are the cancellation conditions?

Apps like those provided by British Airways do their best to hide away the complexity of this. Within the confines of “I know I want to fly with BA”, rather than “I want to compare different airlines”, the app from BA provides a pretty good experience at managing this matrix of different options.

Now let’s try and book a flight using a chatbot.

First stop, Facebook Messenger. After all, Mark Zuckerberg said:

We think that you should just be able to message a business in the same way you message a friend

Well, I know that I like messaging friends at 8:30 on a Saturday morning demanding that they look up flights for me. So let’s go!

I open up Facebook. ‘New Message’. I type in British Airways. And I get… nothing.

Okay, this is just the Messenger thing they build into the default Facebook view. I guess I should open up the actual Facebook Messenger page. Let’s have a chat with British Airways.

I found, well, British Airways i360. Which is a giant levitating doughnut on Brighton beach. I thought about messaging it to ask whether the doughnut was currently levitating. But then I realised there’s probably someone being paid not very much money to answer stupid questions from people on the Internet, and left it be.

Maybe British Airways haven’t climbed on board the chatbot train yet. I’m sure some enterprising developer must have built a chatbot that lets me search and book flights. Let’s search for one, on Facebook Messenger. I had an insightful conversation with ‘Flight Bot’ that resulted in me… not being able to search for a flight. That went well.

I mean, I get it. It seemed like a good idea when they built it, maybe not now.

I still want to find a chatbot that can help me fly from London Heathrow to Newark Liberty International. That shouldn’t be too hard. So, as with all problems, let’s turn to Google. Oh, Google, find me a travel chatbot.

Huzzah! There are plenty to choose from. The popular travel booking sites Expedia, Skyscanner, Kayak and Cheapflights all have chatbots. On Facebook! Okay, let’s test Expedia.

That didn’t work out. I’m now feeling guilty about wasting some poor human’s time with a test query when I could just search on the web. Let’s give Skyscanner a go. This time, instead of just being a chat window, it asks me to “Get Started”. This means it’s actually a bot, I guess.

The fact that it’s taken me this long to find a bot to help me book a flight, when the apps and websites are already to hand is… kind of telling. But whatever, let’s play along.

Hi Tom! 🙂 Welcome to the Skyscanner bot. We’ll help you find your next trip ✈. What would you like to do?

  • Find a flight deal
  • Tell me more

I guess I should “Find a flight deal”.

OK, let’s start again… where would you like to go?

“New York City”. If you can get me there, you can get me anywhere.

A flight from New York, United States

  • Change origin city
  • Set as home airport

Where are you going?

  • Don’t know
  • Show weekend trips

Err, no, hold on, you asked me where I would like to go. I’m in London right now. I want to go from London. Click, tap, London.

A flight from London, United Kingdom

  • Change origin city
  • Set as home airport

Where are you going?

“New York City.”

A flight from New York City.

  • Change origin city
  • Set as home airport

Where are you going?

  • Don’t know
  • Show weekend trips

Oh Jesus, I’ve managed to get this piece of shit to go into an infinite loop just by asking it to go from London to New York.

I’m not kidding.

I was going to write a piece about how chatbots necessarily provide an experience that doesn’t let the user control the details of a somewhat complicated, but ordinary, process like booking an international flight: compared to using the website or the app, you’d be less able to control the complex matrix of choices you make when booking. I was also going to point out that chatbots do not allow for comparison shopping the way tabbed browsing or multiple windows do: being able to open up three different airlines in three different tabs and search their websites to compare and contrast.

Instead, I ended up talking to a chatbot created by a major travel comparison site that failed to grasp the difference between the city I wished to depart from and my destination. I’d understand if this were a weekend hackathon project built by some people dosed up on energy drinks. But this is a product from one of the companies in the top five of comparison sites.

While faffing around with this ridiculous bot, I did get a nice message from a human at Expedia directing me to their website, their phone lines and wishing me a nice weekend. All of which is much more useful than a bloody chatbot that can’t decide whether I’m flying to New York or from New York.

From this experience, chatbots are clearly the future of human-computer interaction. On behalf of programmers everywhere, I implore you to waste a few more billions on building them, even if they don’t actually help your customers achieve what they want to do.

The Chrome plugin Magnetify does what Spotify ought to already do: make it so it just opens the desktop app when you click a Spotify link in your browser.

The small army of browser extensions one has to install to keep using the web a tolerable experience is pretty telling.

URL Tracking Snipper is a Chrome extension that removes all those nasty utm_source parameters from the query strings of URLs. Works very nicely. I already try to remove these parameters by hand, but automating the process is helpful.
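Removing them by hand gets old fast. The core of what such an extension does can be sketched as a shell function (a rough sketch: it only strips utm_-prefixed query keys, and ignores edge cases like URL fragments; strip_utm is my own name for it):

```shell
strip_utm() {
  # Remove utm_* query parameters: first any that are followed by
  # another parameter, then a trailing one left dangling after ? or &.
  printf '%s\n' "$1" | sed -e 's/utm_[^&]*&//g' -e 's/[?&]utm_[^&]*$//'
}

strip_utm 'https://example.com/post?utm_source=twitter&utm_medium=social&id=3'
# → https://example.com/post?id=3
```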

Beyond being potentially creepiness-enabling, tracking parameters make things like bookmarks less useful. If I bookmark URL x, and then I bookmark URL x1 which is the same content as URL x, but has a load of tracking crap on the end of the URL, my bookmarking tools (Pinboard in my case) may not tell me I have already bookmarked it.

These parameters are also misleading. If you post, say, a link with UTM parameters saying the link came from Twitter, and I then share that link via another channel, your analytics now say you’ve got lots of people visiting via Twitter. But they didn’t come from Twitter—they came from someone who shared a Twitter link in another channel. “Cool URIs don’t change”, said Tim Berners-Lee—but cool URIs also don’t replicate unnecessarily. In an ideal world, we’d have one URL for each piece of content.

These irritating query parameters make the lives of web users more frustrating solely to benefit somewhat creepy ad-tech people. Snapping these nasty protrusions off URLs benefits everyone. Except marketers. Oh well. I guess we’ll have to live with that.

Oil & Gas International post-mortem: the hard security lessons are management-related, not technical

On Monday, a man named Dev George posted the following on Mozilla’s Bugzilla bug reporting database:

Your notice of insecure password and/or log-in automatically appearing on the log-in for my website, Oil and Gas International is not wanted and was put there without our permission. Please remove it immediately. We have our own security system and it has never been breached in more than 15 years. Your notice is causing concern by our subscribers and is detrimental to our business.

(Bugzilla have since hidden the bug report, though archived copies with broken CSS are still floating around.)

What happens next is both completely predictable and an excellent illustration of how to not do security.

Browsers are in the process of making it so insecure logins are flagged up as a potential security issue. This recently went live in Firefox and is slated to go live in Chrome soon (it is available behind a feature flag).

The website referred to in the comment—Oil and Gas International (or O&GI)—not only accepts logins via HTTP, but also accepts credit card payments sent unencrypted. At least, it did. It is now offline.

Not only was the site not using Transport Layer Security for logins or card transactions, the site itself was rife with an array of fundamental security failures.

It was running on ASP.NET, version 2.0. .NET 2.0 was released back in 2005. The version the OGI site was running was likely teeming with old, unpatched vulnerabilities. Microsoft stopped supporting .NET 2.0 a long time ago, and they no longer support versions of Windows capable of running .NET 2.0. I’m not an expert in .NET, but from what I understand, it is pretty trivial to upgrade the underlying .NET Framework (just as one can run old Java code on a newer JVM).

Not that framework vulnerabilities were the main problem. The site was vulnerable to SQL injection in the login form. Before the site went down, people found that submitting malformed usernames and passwords led to the .NET server responding with a stack trace, complete with highlighted source code.

Here’s another screenshot showing the application vomiting back stack traces when the login code fails to handle a SqlException.

So far we’ve got no HTTPS, an extremely outdated back-end framework, SQL injection, and a server configured to reveal stack traces to the user.

According to this Reddit thread, someone took a peek at their user table and found all the passwords were stored in cleartext. And some charitable soul took it upon themselves to drop either the user table or the whole database (I can’t tell which). The Reddit thread also says that a user on there called the company and tried to explain their security failures, only to be rebuffed.

Nmap results posted in that thread and on Twitter showed that the server had open ports for Microsoft SQL Server, POP3, IMAP and… HTTPS. Yes, they had port 443 open for HTTPS traffic but didn’t use it.

At one level, the technical lessons of this are obvious. In no particular order:

  1. Keep your server patched. Keep an eye on the frameworks and libraries you use and patch/upgrade as appropriate. Automate this as much as possible.
  2. Don’t expose error messages/stack traces in production. Make the attacker do some work.
  3. Sanitise user input to avoid SQL injection.
  4. Don’t store user passwords in cleartext. At this point, that means bcrypt or PBKDF2.
  5. Lock down your server ports.
  6. Use HTTPS. With Let’s Encrypt, certificates are free. If you run a business, can you think of any situation where the traffic between your user’s browser and your server should be open to interception? No? Then use HTTPS. (Every US federal website is going HTTPS-only. Seriously. You should too.)
  7. PCI-DSS is a thing. Even if you think login-over-insecure-HTTP is fine (it isn’t), sending card details over insecure-HTTP is dumbness.
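To make point 4 concrete: here’s roughly what minimally acceptable password storage looks like, using nothing but Python’s standard library from the shell (the password and iteration count are illustrative; tune the count for your hardware):

```shell
# Derive a PBKDF2-HMAC-SHA256 hash with a random 16-byte salt.
# Store the salt and the hash; never the password itself.
python3 -c '
import hashlib, os
password = b"correct horse battery staple"   # example only
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha256", password, salt, 100000)
print(salt.hex(), digest.hex())
'
```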

I do have some sympathy for the site owner. According to this biographical page from (an archived version of) the site, he’s a writer with genuine expertise and understanding of his field, and has worked with some of the biggest companies in the business. He knows about oil rigs, not infosec.

He has every right to sell access to his expertise and knowledge through running a premium website. But doing so comes with risk. Information security risks are genuine risks that businesses have to cope with.

And they are doing a poor job of it. Every business that builds anything online needs to have a strategy for how to handle information security risks. None of the issues I raised above are complex. Every professional developer worth their salt should know the importance of getting them right. And most do.

As I’ve noted in the past, business organisations often talk up the complex, ever-changing, difficult nature of these threats. Listen to the non-technical discourse on information security and you’ll end up so confused as to do basically nothing. The threat is painted as so insidious, ever-mutating and impossible to fight that it seems quite rational to do nothing at all. This ignores the truth: the threat model for small and medium-sized companies is less like Wikileaks and/or 24 and more like… well, this kind of small-potatoes nonsense.

The problem is that most small-to-medium sized companies don’t seem to get this. Poorly built websites, often outsourced to the cheapest supplier, are filled with utterly boring vulnerabilities. SQL injections and insecure form submissions aren’t the constant, dynamic, ever-changing threat that pundits in the press pontificate about. It’s not cyberwar, just run-of-the-mill bad coding, lack of testing, and poor systems administration.

In so many companies, people haven’t fully understood how bad software is. They haven’t grasped how widespread poor business information security truly is. They hide behind certifications and assurances.

I’ve seen organisations proudly show off their ISO 27001 certification, then taken a look at their site and found an old, unpatched POP3 server running without TLS and accepting passwords in cleartext. I’ve seen organisations offering to do ISO 27001 certification, as well as government-approved cybersecurity work, without HTTPS on their website.

I have seen so many people and organisations in business who think software is something that gets “done”. That you just finish writing the software and nothing else needs to be done. Everyone who works in software knows this isn’t true, but this incorrect attitude is worryingly frequent in business.

I have also seen organisations where non-technical staff are not instructed on how to deal with reports of security vulnerabilities. In such organisations, it’s only through luck that a vulnerability is passed on to developers/sysadmins who can actually do something about it. When a security vulnerability is revealed, the staff run around like headless chickens worrying about what to do. People at all levels, in all areas (sales, customer service, support, management etc.) will need to learn what to do when someone tells them their website or app has a vulnerability—what the appropriate reporting procedures are etc. They also need to have confidence that the business will take it seriously, and know how to get it fixed quickly and properly.

Those at the management level need to take responsibility for understanding some basic information security. Without basic knowledge of information security, they won’t be able to evaluate risks rationally. If people were wandering around so worried about laser-shooting stalker drones that they didn’t bother locking their front doors, that’d be a sign of a colossal misunderstanding of risk. People getting worried about three-letter agencies and cyberwar while not fixing (or even being aware of) SQL injections, XSS vulnerabilities and other well-understood threats is the situation we see now.
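XSS is equally well understood. A minimal sketch using Python’s stdlib html.escape (the comment-rendering functions are hypothetical, purely for illustration):

```python
from html import escape

def render_comment_unsafe(comment):
    # Interpolates user input straight into markup, so any HTML
    # in the comment is parsed and executed by the browser.
    return "<p>%s</p>" % comment

def render_comment_safe(comment):
    # Escapes <, >, & and quotes so the input stays inert text.
    return "<p>%s</p>" % escape(comment)

payload = "<script>steal(document.cookie)</script>"
# Unsafe: the script tag survives intact in the output.
assert "<script>" in render_comment_unsafe(payload)
# Safe: the tag is rendered as the literal text &lt;script&gt;...
assert "<script>" not in render_comment_safe(payload)
```

Output encoding at the point where untrusted data meets markup is the well-documented, boring fix.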

The same misplaced fears affect the wider public. People read stories in the press about, oh, the CIA, NSA, GCHQ, Snowden, Wikileaks, mass surveillance, and cyberwar. That’s all worth thinking about. But most people, most businesses, are unlikely to be the target of surveillance by a spooky government agency, nor are they likely to be pawns in a global cyberwar. Much more likely: some incompetent website designed by inexperienced, self-taught developers with some major Dunning–Kruger issues, unwilling to learn anything new, is going to accidentally leak private data due to, oh, the sort of important but boring sins committed by sites like O&GI.

The real problem with information security is management-related: those in charge are unable to grasp elementary security issues, evaluate their relevance, properly handle security reports (sometimes from scary pseudonymous people on the Internet), or listen to experts. The existing certification processes, and the attitude that security is about “reassuring” customers (rather than actually being secure), are part of the problem. O&GI is a case study in how bad technology choices and poor management combine to produce a predictable (if undeserved) outcome. Here’s the scary bit: the same people who design websites with this level of insecurity are also designing your internet-connected coffee machine or teddy bear or, oh, car. We can’t carry on without some pretty fundamental fixes to how information security is handled in society, in business and in government.

  1. I did consider the ethics of whether to post this. I think the cat is out of the bag now and continuing to pretend otherwise is pointless.

  2. bool blnLogin. Yay for Hungarian notation. Systems Hungarian, really.

LWS talk: Break down the barriers - Self taught developers today

The following are contemporaneous notes from a talk at London Web Standards by Matt Bee (@mattbee) from futureLearn. There may be errors or material I missed.

Auto-didacticism: the act of self learning or self-education without the guidance of masters (such as teachers and professors) or institutions. (Audience member: “Just Stack Overflow!”)

Wholly self-taught front-end developer. Two things happened when I first started as a developer: “The Facebook” and Firefox.

If you are self-taught, you are in good company: Leonardo da Vinci, Charles Darwin, Quentin Tarantino, David Bowie, Jimi Hendrix.

Our industry is amazing - the support/encouragement available is unbelievable. We are excluding new people though.

“I followed an unconventional route to being a web developer”. But that’s a lot of people.

Source for first website: table-based layout, a lot of view source, a lot of Notepad, a lot of IE 6. Used to work mostly in HTML and CSS. With help from books like “HTML for the World Wide Web - Visual Quickstart Guide”, learned a lot as a tinkerer.

Two years in: good with HTML (table layouts) and moderate CSS (fairly new), basic PHP, could use FTP and do basic web config. Could get a site up and running from scratch. This was enough to get my first developer job. This was without any computer science background.

Now: front-end developer with 10 years’ experience, not an engineer or a code ninja. I don’t know Angular, React or Webpack. I don’t even know JavaScript inside out. I am valuable to my team. Need more: empathy, honesty, being able to see stuff from a user’s perspective.

I worry for the future. Not in a political sense either. The openness is being removed. A lot harder for someone who is intrigued to learn. 69.1% of developers are self-taught according to Stack Overflow.

3 key areas that have made it harder for new people:

  1. Harnessing the power of devices
  2. Exposure to the inner workings
  3. Mass adoption of complex tools and technologies

My first computer: ZX Spectrum 48K. Had to load programs off a tape. Instruction manual contained programs you could write and run, could produce beautiful graphics (if you had a colour TV). You could buy magazines to copy commands.

Compare the ZX Spectrum keyboard (with its “str$”, “rnd”, “peek” and “poke” keys) with the PS2 controller. UX won out, but black-box devices don’t enable developers.

Terminal and command line are hidden away. Accessible hardware is around, but you need to know about it and have surplus cash for it. Simulated environments aren’t always great experiences.

It used to be easy. View source was enough to get inspiration and troubleshoot. A lot of code was not dependent on anything else. Generally you saw exactly what people wrote; it didn’t depend on a node package that the author of the code you were looking at didn’t understand. This was true even for big production sites.

JavaScript is an example of how it used to be. In my early development days, you’d go to a website like Dynamic Drive.1 You could copy, paste and tinker. The modern-day equivalent is to include some file and then type $('.datepickerclass').datepicker();

Today: a new developer can create with no exposure to real code. View source is now potentially useless; everything output is transformed from what was written. HTML and CSS sometimes feel like second-class citizens.

Tweet from @paulcuth: “Just starting to use Web Components & Shadow DOM. Great fun, but going to raise the barrier to entry massively. View source is practically useless.”

Complex tools and mass adoption: “I first thought of learning AngularJS through their official documentation, but this turned out to be a terrible idea.” (source)

Ashley Nolan’s front end tooling survey: “the adoption rate of front-end tools shows no signs of letting up, with tools such as Webpack and JavaScript transpilers becoming ever more essential in our workflows” (source)

Try to install a Node.js library. Thousands of C++ compilation errors. The answer is to download a few gigabytes of Xcode that you will probably never use. Why?2

We are exposing thousands of developers to this every day, and potentially putting them off a career in tech.

I am not saying do not use frameworks. They are valuable and save time and effort. I am saying provide a pathway to learn them.

In summary: I don’t think I’d cut it today. I’m not a great developer, I have huge gaps in my knowledge. I do have other attributes that make me valuable.

“The rapidness of getting the information doesn’t really correlate with the pace of the actual learning. This frustrates us” - Gavin Strange

What can we do? Consider the future developers. Run a code club. Run a Homebrew Website Club. Help teach people. Promote diversity on teams.

People learn at different rates and in different ways. Empathy is important. Provide pathways for great people to become great developers.

(All pictures from Wikimedia Commons!)

  1. Oh my. It still exists.

  2. “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you are as clever as you can be when you write it, how will you ever debug it?” (Brian Kernighan)

LWS talk: Designing for the Web - Healthy practices for developers and designers

The following are contemporaneous notes from a talk at London Web Standards by Matteo Pescarin (@ilPeach), development lead at Sainsbury’s (for now). There may be errors or material I missed.

“I’ve seen things you people wouldn’t believe”. I’ve seen developers and designers who didn’t want to touch a single line of HTML and CSS. I’ve seen fantastic designs that would work on paper. I’ve seen code that shouldn’t work in browsers.

There’s a lot to take in: learning the ever increasing complexity of front-end technologies.

At the beginning, we were very happy. Using tables to display data. Paginating millions of records. Then UX.

Then we started looking into content-first approaches. Personas, user journeys, interactions and behaviours, usability, cross-device compatibility, and many other aspects. A lot to take in.

Let’s say you don’t know web design. What is web design? Everything? A multi-faceted set of practices and doctrines that surround websites. Maybe it is more similar to product design. Quite difficult to grasp everything.

Semantic markup. Do you know?

  • What’s the difference between article and section?
  • How do you vertically align with CSS?

If developers have too much to take in, what should designers do? Learn to code?

The developers are masters in their field. Developers know the limits of the technologies. Designers know colour theory, typography, grid systems, use of whitespace etc. “Don’t expect to find the unicorn” - there are developers who learn design, and designers who learn to become developers, but they are rare.

Don’t expect tools to solve your problems: tools exist to help developers do design better. Things like Sketch (and, earlier, Dreamweaver) enable designers to turn ideas into web pages. Developers have the same: CSS grid systems taught us indirectly how design works. Methodologies like Atomic Design too. But don’t let anybody decide which tools you should be using; allow yourself to be flexible enough to experiment and try new things.

A better world is possible. We cannot shift responsibility.

1-10-100 rule: fix a problem at design time and the cost is 1; at development time, the cost is 10; after delivery, the cost is 100. Data-based analysis suggests the 100 is more like 1m.

Iterate and validate: as developers and designers, we are biased. Involve your users. Design in the browser. Rapid prototyping. Use Design Sprints. Use style guides and pattern libraries - save thousands of hours in the long run.

Designers must sit with developers, to promote knowledge sharing. Work bottom-up, understand each other, elevate developers to work on front-end design.

LWS talk: Building a dementia friendly website

The following are contemporaneous notes from a talk given at London Web Standards by Imogen Levy (@teppie). There may be errors or material I missed.

Dementia: memory loss, mood changes, changes in reasoning. Caused by a number of diseases, most commonly Alzheimer’s. Over 850,000 people in the United Kingdom have dementia. By 2021, over 1m people will be living with dementia. Getting worse with an ageing population.

Designing websites for people with dementia is an unexplored space. In redesigning the Alzheimer’s Society website, it was key that it worked for people with dementia, so they were included at all stages of the design process.

Challenge 1: how to design for dementia

First step was to develop a series of personas. Personas help guide design decisions. What are their needs? Used focus groups, interviews, helpline call logs, surveys and quantitative data from the existing website. Focus groups included both people with dementia and carers, and were run through local branches of the Alzheimer’s Society as well as local charitable groups including singing groups, “Alzheimer’s Cafés” etc.

Seven personas:

  1. Person with dementia
  2. Carer
  3. Volunteer
  4. Researcher
  5. Mature supporter
  6. ??
  7. Authority?

Research findings:

  1. Goal-oriented content emerged as key need. Content needed to be structured around stages of dementia.
  2. Increasing importance of mobile and tablet usage. Smartphones are equally accessible in the home to computers. Lots of users relying on smartphones and tablets alone.
  3. Content must be visual and interactive: more video and audio content.

Challenge 2

Challenge: the website was informative but scary. It contained 10,000 pages of content, added to without review. Overgrown, unclear structure. “Time to do some weeding”.

Google rankings were excellent, but navigating via the website wasn’t good. Old site started with four items on the left-hand navigation panel, but this grew. Had to start again, but didn’t want to lose out on Google rankings for key content.

New IA/site structure to help people find things quickly and easily. Worked with consultants.

The digital team owned the website, but dementia content was produced mostly by the team responsible for print content, including fact sheets and material handed out in doctors’ surgeries. Printed material must meet the NHS Information Standard.

Had to restructure the content without breaking compliance with the NHS Information Standard.

The goal of the website is to help users find information and complete tasks.

Structure tested using online tool called TreeJack.

A set of tasks is given to the user, who is then asked to navigate through the website. No design, just structure. Gives you a “spider graph”: the cleaner the graph, the easier the navigation.

Example task: find out more information about vascular dementia. 90% found information easily. Each click stage broken down.

Unsuccessful task: “how can you best look after yourself while caring for a relative?” The content was contained within the “About dementia” section, but most users went to the “Get Support” section, which did not contain the relevant information. The result was moving the information.


  1. Don’t let the organisation dictate the website structure - this required tricky conversations internally
  2. Clear signposted entry points on website
  3. Needs and abilities are unique

Challenge 3: brand strategy

Wanted a new brand strategy as part of a wider goal to change societal perceptions of dementia - “I have dementia, and I’m OK.”

Use of logo across multiple devices. Square logos on a mobile phone was hard.

Colour palette: web use was limited. Lots of testing required. Two prototypes tested — one a wireframe — to test out the IA hierarchy. Testers liked the design and layout and gave a positive impression.

Challenge 4: in trouble with the law

ICO enforcement notice against AS after data breaches, including a website hack. A great deal of scrutiny followed. The complete restructure necessitated a new environment, involving lawyers, penetration testers and security companies, and a great deal of security scrutiny.

CMS limitations: going from 10,000 to 2,000 pages meant a lot of redirects. A significant number of spreadsheets were sent over for 301 redirects.

New site enabled “friendly URLs”. Not perfect, but better.


  1. “Love your testing team!”
  2. More time for managing redirects
  3. Use a CMS that supports SEO friendly URLs.

Challenge 5: The Future

AS wants to reach every person who has a dementia diagnosis, and “build stuff people want”. The team wanted to show how to be user-focussed.

New stuff:

  • Implementing a new CMS
  • Extensive research into people at point of diagnosis
  • Partner project with NHS Digital - “we are the authority” on dementia content, so want to align with what NHS are doing
  • Continued website improvements