

Things I learned this week: #EuroPython

This week, I’ve been at EuroPython in Rimini, Italy. I’ve been tweeting some sessions on the @Nexmo Twitter account. Here, I’m link-dumping a bunch of stuff I learned that you may or may not find of interest. It’s less a methodical writeup of talks, more a pile of links I found interesting, some of which were discussed at the conference (others have nothing to do with Python, programming or the conference).

Sourcelift is a subsidised travel programme for open source developers to go to conferences and events. They support mostly Python, Node.js and PostgreSQL-related projects, plus language-agnostic tools like text editors and package managers.

World’s Biggest Data Breaches: a rather depressing infographic of big hacks and data breaches.

awesome-security is a collection of interesting links related to security.

Bandit is a static analyzer for Python that inspects the abstract syntax trees (ASTs) of Python code looking for common security issues. Some of the checks are pretty straightforward and could be achieved without the need for an AST parse: grep could tell you whether the code uses stuff like the ftplib, telnetlib, xmlrpclib or pickle modules, eval, or MD5. But checking for empty exception handling (i.e. except: pass) is useful.
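
To illustrate, here is a small hypothetical file containing several of the patterns Bandit flags (the check IDs are from memory and vary a little between Bandit versions, so treat them as indicative):

    # insecure_example.py: hypothetical code full of things Bandit reports
    import pickle    # flagged: importing pickle (B403)
    import hashlib

    def load_session(blob):
        # flagged: unpickling untrusted data can execute arbitrary code (B301)
        return pickle.loads(blob)

    def fingerprint(data):
        # flagged: MD5 is a weak hash (B303)
        return hashlib.md5(data).hexdigest()

    def risky():
        try:
            load_session(b"...")
        except:
            pass    # flagged: bare except/pass silently swallows errors (B110)

Running bandit -r yourproject/ prints each hit with a severity and confidence rating.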

Static analysis seems to have been a theme at EuroPython this year, with one mention of SonarQube, a hosted SaaS application that scans for bugs and potential security vulnerabilities in a bunch of languages including C, C++, C#, PHP, Java, Python, JS, VB (6 and .NET), Swift and Objective-C. I haven’t tried it yet but it looks interesting.

Even more accessible is Semmle’s LGTM[1]. It’s blissfully simple: you add a repo, it scans it for bugs and alerts you to them in a dashboard. I’ve fixed a very minor one already. If you are maintaining an open source project, adding this kind of stuff is a no-brainer. Robots finding bugs automatically is better than humans finding them.

Every year, there’s a games design contest called PyWeek. The idea is simple: you have one week to design and build a game, in Python, based on a theme. This is a really cool idea.

PyDatalog is an implementation of Datalog. Datalog is a simplified subset of Prolog. It is very easy for one to forget the existence of logic-based programming languages like Prolog/Datalog, but they still exist and still help solve real world problems.
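
As a taster, here is the classic ancestor relation in pyDatalog (a sketch modelled on the project’s own examples):

    from pyDatalog import pyDatalog

    # declare variables and predicates up front
    pyDatalog.create_terms('X, Y, Z, parent, ancestor')

    # facts
    +parent('alice', 'bob')
    +parent('bob', 'carol')

    # rules: an ancestor is a parent, or a parent of an ancestor
    ancestor(X, Y) <= parent(X, Y)
    ancestor(X, Y) <= parent(X, Z) & ancestor(Z, Y)

    # query: everyone alice is an ancestor of (bob and carol)
    print(ancestor('alice', Y))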

objgraph is a Python object graph visualiser (it uses GraphViz). It’s quite helpful for finding memory leaks.
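
A typical leak hunt with it looks something like this (a sketch; show_backrefs needs GraphViz installed to render the image, and MyModel is a stand-in for whatever type you suspect):

    import objgraph

    objgraph.show_growth()   # snapshot per-type object counts
    # ... exercise the code path you suspect of leaking ...
    objgraph.show_growth()   # types that grew between calls are suspects

    # draw the chain of references keeping one suspect object alive
    suspect = objgraph.by_type('MyModel')[0]
    objgraph.show_backrefs([suspect], max_depth=3, filename='backrefs.png')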

sanest is a library for nested dictionaries and lists. It allows you to do nested operations, and more intelligently handle errors.

Two web smaller-than-micro frameworks: Firefly and Pico.

I’m going to write more about data science, specifically machine learning, soon.

  1. I’ve already mistyped it ‘LGBT’ at least three times. 🏳️‍🌈




Chatbots are the future and always will be

I read a lot online about how chatbots are the future. Ultimately, the proof of chatbots is in the eating—I mean, err, talking (this really is a crap metaphor). Let’s try one out to do something like… booking a flight.

The “domain model” of commercial passenger aviation is familiar to most people, but still there are a lot of choices to be made. Which airline? Which day do I want to fly? Some days are a bit cheaper than others. What seat class do I want? What ticket class do I want? (They are different things.) Do I want to book using my frequent flyer miles or with cash? Do I get checked baggage or just hand baggage? It might be slightly cheaper if I fly from London Gatwick rather than Heathrow. Do I want to land at Newark or JFK? Am I logged into my frequent flyer account? Do I need to supply Advance Passenger Information now or later? What about if I need to change flights? If I book business class for the long-haul segment, am I going to get business class on the short-haul codeshare segment? What are the cancellation conditions?

Apps like those provided by British Airways do their best to hide away the complexity of this. Within the confines of “I know I want to fly with BA”, rather than “I want to compare different airlines”, the app from BA provides a pretty good experience at managing this matrix of different options.

Now let’s try and book a flight using a chatbot.

First stop, Facebook Messenger. After all, Mark Zuckerberg said:

We think that you should just be able to message a business in the same way you message a friend

Well, I know that I like messaging friends at 8:30 on a Saturday morning demanding that they look up flights for me. So let’s go!

I open up Facebook. ‘New Message’. I type in British Airways. And I get… nothing.

Okay, this is just the Messenger thing they build into the default Facebook view. I guess I should open up the actual Facebook Messenger page. Let’s have a chat with British Airways.

I found, well, British Airways i360. Which is a giant levitating doughnut on Brighton beach. I thought about messaging it to ask whether the doughnut was currently levitating. But then I realised that there’s probably someone being paid not very much money to answer stupid questions from people on the Internet and left it be.

Maybe British Airways haven’t climbed on board the chatbot train yet. I’m sure some enterprising developer must have built a chatbot that lets me search and book flights. Let’s search for one, on Facebook Messenger. I had an insightful conversation with ‘Flight Bot’ that resulted in me… not being able to search for a flight. That went well.

I mean, I get it. It seemed like a good idea when they built it, maybe not now.

I still want to find a chatbot that can help me fly from London Heathrow to Newark Liberty International. That shouldn’t be too hard. So, as with all problems, let’s turn to Google. Oh, Google, find me a travel chatbot.

Huzzah! There are plenty to choose from. The popular travel booking sites Expedia, Skyscanner, Kayak and Cheapflights all have chatbots. On Facebook! Okay, let’s test Expedia.

That didn’t work out. I’m now feeling guilty about wasting some poor human’s time with a test query when I could just search on the web. Let’s give Skyscanner a go. This time, instead of just being a chat window, it asks me to “Get Started”. This means it’s actually a bot, I guess.

The fact that it’s taken me this long to find a bot to help me book a flight, when the apps and websites are already to hand is… kind of telling. But whatever, let’s play along.

Hi Tom! 🙂 Welcome to the Skyscanner bot. We’ll help you find your next trip ✈. What would you like to do?

  • Find a flight deal
  • Tell me more

I guess I should “Find a flight deal”.

OK, let’s start again… where would you like to go?

“New York City”. If you can get me there, you can get me anywhere.

A flight from New York, United States

  • Change origin city
  • Set as home airport

Where are you going?

  • Don’t know
  • Show weekend trips

Err, no, hold on, you asked me where I would like to go. I’m in London right now. I want to go from London. Click, tap, London.

A flight from London, United Kingdom

  • Change origin city
  • Set as home airport

Where are you going?

“New York City.”

A flight from New York City.

  • Change origin city
  • Set as home airport

Where are you going?

  • Don’t know
  • Show weekend trips

Oh Jesus, I’ve managed to get this piece of shit to go into an infinite loop just by asking it to go from London to New York.

I’m not kidding.

I was going to write a piece about how chatbots necessarily have to provide an experience that doesn’t allow the user to control the details of a somewhat complicated, but ordinary, process like booking an international flight: that compared to using the website or the app, you’d be less able to control the complex matrix of choices you make when booking a flight. I was also going to point out that chatbots do not allow for comparison shopping in the way that, say, tabbed browsing or multiple windows do: being able to open up three different airlines in three different tabs and search their websites to compare and contrast.

Instead, I ended up talking to a chatbot created by a major travel comparison site that failed to grasp the difference between the city I wished to depart from and my destination. I’d understand if this were a weekend hackathon project built by some people dosed up on energy drinks. But this is a product from one of the companies in the top five of comparison sites.

While faffing around with this ridiculous bot, I did get a nice message from a human at Expedia directing me to their website and their phone lines, and wishing me a nice weekend. All of which is much more useful than a bloody chatbot that can’t decide whether I’m flying to New York or from New York.

From this experience, chatbots are clearly the future of human-computer interaction. On behalf of programmers everywhere, I implore you to waste a few more billions on building them, even if they don’t actually help your customers achieve what they want to do.




The Chrome plugin Magnetify does what Spotify ought to already do: make it so it just opens the desktop app when you click a Spotify link in your browser.

The small army of browser extensions one has to install to keep using the web a tolerable experience is pretty telling.


URL Tracking Snipper is a Chrome extension that removes all those nasty utm_source parameters from the query strings of URLs. Works very nicely. I already try to remove these parameters by hand, but automating the process is helpful.
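
The underlying operation is simple. Here is a rough Python equivalent of what such an extension does (a sketch, not the extension’s actual code):

    from urllib.parse import urlsplit, urlunsplit, parse_qsl, urlencode

    def strip_tracking(url):
        """Drop utm_* parameters from a URL, leaving everything else intact."""
        parts = urlsplit(url)
        kept = [(key, value)
                for key, value in parse_qsl(parts.query, keep_blank_values=True)
                if not key.lower().startswith('utm_')]
        return urlunsplit(parts._replace(query=urlencode(kept)))

    print(strip_tracking('https://example.com/post?id=7&utm_source=twitter'))
    # https://example.com/post?id=7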

Beyond being potentially creepiness-enabling, tracking parameters make things like bookmarks less useful. If I bookmark URL x, and then I bookmark URL x1 which is the same content as URL x, but has a load of tracking crap on the end of the URL, my bookmarking tools (Pinboard in my case) may not tell me I have already bookmarked it.

These parameters are also misleading. If you post, say, a link with UTM parameters saying that the link came from Twitter, and I then share that link via another channel, your analytics data now says you’ve got lots of people visiting via Twitter. But they didn’t come from Twitter—they came from someone who shared a link from Twitter in another channel. “Cool URIs don’t change” (said Tim Berners-Lee), but cool URIs also don’t replicate unnecessarily. In an ideal world, we’d have one URL for each piece of content.

These irritating query parameters make the lives of web users more frustrating solely to benefit somewhat creepy ad-tech people. Snapping these nasty protrusions off URLs benefits everyone. Except marketers. Oh well. I guess we’ll have to live with that.




Oil & Gas International post-mortem: the hard security lessons are management-related, not technical

On Monday, a man named Dev George posted the following on Mozilla’s Bugzilla bug reporting database:

Your notice of insecure password and/or log-in automatically appearing on the log-in for my website, Oil and Gas International is not wanted and was put there without our permission. Please remove it immediately. We have our own security system and it has never been breached in more than 15 years. Your notice is causing concern by our subscribers and is detrimental to our business.

(Bugzilla have hidden the bug report, but you can view a version with broken CSS on archive.is.[1])

What happens next is both completely predictable and an excellent illustration of how to not do security.

Browsers are in the process of making it so insecure logins are flagged up as a potential security issue. This recently went live in Firefox and is slated to go live in Chrome soon (it is available behind a feature flag).

The website referred to in the comment—Oil and Gas International (or O&GI)—not only accepts logins via HTTP, but also accepts credit card payments sent unencrypted. At least, it did. It is now offline.

Not only was the site not using Transport Layer Security for logins or card transactions, the site itself was rife with an array of fundamental security failures.

It was running on ASP.NET, version 2.0. .NET 2.0 was released in 2005. The version the OGI site was running was likely teeming with old, unpatched vulnerabilities. Microsoft stopped supporting .NET 2.0 a long time ago, and they no longer support versions of Windows capable of running .NET 2.0. I’m not an expert in .NET, but from what I understand, it is pretty trivial to upgrade the underlying .NET Framework (just as one can run old Java code on a newer JVM).

Not that framework vulnerabilities were the main problem. The site was vulnerable to a SQL injection attack in the login form. Before the site went down, people found that submitting poorly formatted usernames/passwords led to the .NET server responding with a stack trace with highlighted source code.[2]

Here’s another screenshot showing the application vomiting back stack traces when the login code fails to handle a SqlException.

So far we’ve got no HTTPS, an extremely outdated back-end framework, SQL injection, and a server configured to reveal stack traces to the user.

According to this Reddit thread, someone took a peek at their user table and found all the passwords were stored in cleartext. And some charitable soul took it upon themselves to drop either the user table or the whole database (I can’t tell which). The Reddit thread also says that a user on there called the company and tried to explain their security failures, only to be rebuffed.

Nmap results posted in that thread and on Twitter showed that the server had open ports for Microsoft SQL Server, POP3, IMAP and… HTTPS. Yes, they had port 443 open for HTTPS traffic but didn’t use it.

At one level, the technical lessons of this are obvious. In no particular order:

  1. Keep your server patched. Keep an eye on the frameworks and libraries you use and patch/upgrade as appropriate. Automate this as much as possible.
  2. Don’t expose error messages/stack traces in production. Make the attacker do some work.
  3. Sanitise user input and use parameterised queries to avoid SQL injection.
  4. Don’t store user passwords in cleartext. At this point, use bcrypt or PBKDF2. (A sketch of these two points follows the list.)
  5. Lock down your server ports.
  6. Use HTTPS. With Let’s Encrypt, certificates are free. If you run a business, can you think of any examples where the traffic between your user’s browser and your server should be open to interception? No? Then use HTTPS. (Every US federal website is going HTTPS-only. Seriously. You should too.)
  7. PCI-DSS is a thing. Even if you think login-over-insecure-HTTP is fine (it isn’t), sending card details over insecure-HTTP is dumbness.
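
For points 3 and 4, the fixes are decades old and fit in a few lines. A sketch using sqlite3 and the standard library’s PBKDF2 (a real application would use its own database driver and a maintained library like bcrypt or passlib; table and column names here are made up):

    import hashlib
    import hmac
    import os
    import sqlite3

    ITERATIONS = 100_000  # illustrative; follow current guidance

    def hash_password(password):
        salt = os.urandom(16)
        digest = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, ITERATIONS)
        return salt, digest  # store both; never store the cleartext password

    def verify_password(password, salt, digest):
        candidate = hashlib.pbkdf2_hmac('sha256', password.encode(), salt, ITERATIONS)
        return hmac.compare_digest(candidate, digest)  # constant-time compare

    def find_user(conn, username):
        # Parameterised query: the driver treats username as data, so input
        # like "' OR 1=1 --" cannot rewrite the SQL.
        return conn.execute(
            'SELECT id, salt, pw_digest FROM users WHERE username = ?',
            (username,),
        ).fetchone()

    # usage sketch: conn = sqlite3.connect('app.db'); find_user(conn, name)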

I do have some sympathy for the site owner. According to this biographical page from (an archived version of) the site, he’s a writer with genuine expertise and understanding of his field, and has worked with some of the biggest companies in the business. He knows about oil rigs, not infosec.

He has every right to sell access to his expertise and knowledge through running a premium website. But doing so comes with risk. Information security risks are genuine risks that businesses have to cope with.

And they are doing a poor job of it. Every business that builds anything online needs to have a strategy for how to handle information security risks. None of the issues I raised above are complex. Every professional developer worth their salt should know the importance of getting them right. And most do.

As I’ve noted in the past, business organisations often talk up the complex, ever-changing, difficult nature of these threats. Listen to the non-technical discourse on information security and you’ll end up so confused as to basically do nothing. The threat is painted as so insidious, ever-mutating, confusing and impossible to fight that it seems quite rational to not do anything. This ignores the truth: the threat model for small and medium sized companies is less like Wikileaks and/or 24 and more like… well, this kind of small potatoes nonsense.

The threat to most small-to-medium sized companies is the one they don’t seem to get: poorly built websites, often outsourced to the cheapest supplier, filled with utterly boring vulnerabilities. SQL injections and insecure form submissions aren’t the constant, dynamic, ever-changing threat that pundits in the press pontificate about. It’s not cyberwar, just run-of-the-mill bad coding, lack of testing, and poor systems administration.

In so many companies, people haven’t fully understood how bad software is. They haven’t grasped how widespread poor business information security truly is. They hide behind certifications and assurances.

I’ve seen organisations proudly show off their ISO 27001 certification, then taken a look at their site and found an old, unpatched POP3 server running without TLS and accepting passwords in cleartext. I’ve seen organisations offering to do ISO 27001 certification, as well as government-approved cybersecurity work, without HTTPS on their website.

I have seen so many people and organisations in business who think software is something that gets “done”. That you just finish writing the software and nothing else needs to be done. Everyone who works in software knows this isn’t true, but this incorrect attitude is worryingly frequent in business.

I have also seen organisations where non-technical staff are not instructed on how to deal with reports of security vulnerabilities. In such organisations, it’s only through luck that a vulnerability is passed on to developers/sysadmins who can actually do something about it. When a security vulnerability is revealed, the staff run around like headless chickens worrying about what to do. People at all levels, in all areas (sales, customer service, support, management etc.) will need to learn what to do when someone tells them their website or app has a vulnerability—what the appropriate reporting procedures are etc. They also need to have confidence that the business will take it seriously, and know how to get it fixed quickly and properly.

Those at the management level need to take responsibility for understanding some basic information security. Without basic knowledge of information security, they won’t be able to evaluate risks rationally. If people were wandering around so worried about laser-shooting stalker drones that they didn’t bother locking their front doors, that’d be a sign of a colossal misunderstanding of risk. People getting worried about three-letter agencies and cyberwar while not fixing (or even having basic awareness of) SQL injections, XSS vulnerabilities and other well-understood threats is the situation we see now.

The same misplaced fears affect the wider public. People read stories in the press about, oh, the CIA, NSA, GCHQ, Snowden, Wikileaks, mass surveillance, and cyberwar. That’s all worth thinking about. But most people, most businesses, are unlikely to be the target of surveillance by a spooky government agency, nor are they likely to be pawns in a global cyberwar. Much more likely is that some incompetent website designed by inexperienced, self-taught developers with some major Dunning–Kruger issues, unwilling to learn anything new, is going to accidentally leak private data due to, oh, the sort of important but boring sins committed by sites like O&GI.

The real problem with information security is management-related: those in charge are unable to grasp elementary security issues, evaluate their relevance, properly handle security reports (sometimes from scary pseudonymous people on the Internet), or listen to experts. The existing certification processes, and the attitude that security is about “reassuring” customers (rather than actually being secure), are part of the problem. O&GI is a case study in how bad technology choices and poor management combine to produce a predictable (if undeserved) outcome. Here’s the scary bit: the same people who design websites with this level of insecurity are also designing your internet-connected coffee machine or teddy bear or, oh, car. We can’t carry on without some pretty fundamental fixes to how information security is handled in society, in business and in government.

  1. I did consider the ethics of whether to post this. I think the cat is out of the bag now and continuing to pretend otherwise is pointless.

  2. bool blnLogin. Yay for Hungarian notation. Systems Hungarian, really.



LWS talk: Break down the barriers - Self-taught developers today

The following are contemporaneous notes from a talk at London Web Standards by Matt Bee (@mattbee) from FutureLearn. There may be errors or material I missed.

Auto-didacticism: the act of self learning or self-education without the guidance of masters (such as teachers and professors) or institutions. (Audience member: “Just Stack Overflow!”)

Wholly self-taught front-end developer. Two things happened when I first started developing: “The Facebook” and Firefox.

If you are self-taught, you are in good company: Leonardo da Vinci, Charles Darwin, Quentin Tarantino, David Bowie, Jimi Hendrix.

Our industry is amazing - the support/encouragement available is unbelievable. We are excluding new people though.

“I followed an unconventional route to being a web developer”. But that’s a lot of people.

Source for first website: table-based layout, a lot of view source, a lot of Notepad, a lot of IE 6. Used to work mostly in HTML and CSS. With help from books like “HTML for the World Wide Web - Visual Quickstart Guide”, learned a lot as a tinkerer.

Two years in: good with HTML (table layouts) and moderate CSS (fairly new), basic PHP, could use FTP and do basic web config. Could get a site up and running from scratch. This was enough to get my first developer job. This was without any computer science background.

Now: front end developer with 10 years experience, not an engineer, or a code ninja. I don’t know Angular, React, WebPack. I don’t even know JavaScript inside out. I am valuable to my team. Need more: empathy, honesty, being able to see stuff from a user’s perspective.

I worry for the future. Not in a political sense either. The openness is being removed. A lot harder for someone who is intrigued to learn. 69.1% of developers are self-taught according to Stack Overflow.

3 key areas that have made it harder for new people:

  1. Harnessing the power of devices
  2. Exposure to the inner workings
  3. Mass adoption of complex tools and technologies

My first computer: ZX Spectrum 48K. Had to load programs off a tape. Instruction manual contained programs you could write and run, could produce beautiful graphics (if you had a colour TV). You could buy magazines to copy commands.

Compare the ZX Spectrum keyboard (with its “str$”, “rnd”, “peek” and “poke” keys) with the PS2 controller. UX won out - but black-box devices don’t enable developers.

Terminal and command line are hidden away. Accessible hardware is around, but you need to know about it and have surplus cash for it. Simulated environments aren’t always great experiences.

It used to be easy. View source was enough to get inspiration and troubleshoot. A lot of code was not dependent on anything else. Generally you saw exactly what people wrote, rather than something depending on a node package that the author of the code you were looking at didn’t understand. This was true even for big production sites.

JavaScript is an example of how it used to be. In my early development days, there was a website like Dynamic Drive.[1] You could copy, paste and tinker. The modern-day equivalent is to include some file and then type $('.datepickerclass').datepicker();

Today: a new developer can create with no exposure to real code. View source is now potentially useless. Everything output is transformed from what is written. HTML and CSS sometimes feel like second class citizens.

Tweet from @paulcuth: “Just starting to use Web Components & Shadow DOM. Great fun, but going to raise the barrier to entry massively. View source is practically useless.”

Complex tools and mass adoption: “I first thought of learning AngularJS through their official documentation, but this turned out to be a terrible idea.” (source)

Ashley Nolan’s front end tooling survey: “the adoption rate of front-end tools shows no signs of letting up, with tools such as Webpack and JavaScript transpilers becoming ever more essential in our workflows” (source)

Try to install a Node.js library. Thousands of C++ compilation errors. The answer is to download a few gigabytes of Xcode that you probably won’t ever use. Why?[2]

We are exposing thousands of developers to this every day, and potentially putting them off a career in tech.

I am not saying do not use frameworks. They are valuable and save time and effort. I am saying provide a pathway to learn them.

In summary: I don’t think I’d cut it today. I’m not a great developer, I have huge gaps in my knowledge. I do have other attributes that make me valuable.

“The rapidness of getting the information doesn’t really correlate with the pace of the actual learning. This frustrates us” - Gavin Strange

What can we do? Consider the future developers. Run a code club. Run a Homebrew Website Club. Help teach people. Promote diversity on teams.

People learn at different rates and in different ways. Empathy is important. Provide pathways for great people to become great developers.

(All pictures from Wikimedia Commons!)

  1. Oh my. It still exists.

  2. “Everyone knows that debugging is twice as hard as writing a program in the first place. So if you are as clever as you can be when you write it, how will you ever debug it?” (Brian Kernighan)


LWS talk: Designing for the Web - Healthy practices for developers and designers

The following are contemporaneous notes from a talk at London Web Standards by Matteo Pescarin (@ilPeach), development lead at Sainsbury’s (for now). There may be errors or material I missed.

“I’ve seen things you people wouldn’t believe”. I’ve seen developers and designers who didn’t want to touch a single line of HTML and CSS. I’ve seen fantastic designs that would work on paper. I’ve seen code that shouldn’t work in browsers.

There’s a lot to take in: learning the ever increasing complexity of front-end technologies.

At the beginning, we were very happy. Using tables to display data. Paginating millions of records. Then UX.

Then we started looking into content-first approaches. Personas, user journeys, interactions and behaviours, usability, cross-device compatibility, and many other aspects. A lot to take in.

Let’s say you don’t know web design. What is web design? Everything? Multi-faceted set of practices and doctrines that surround websites. Maybe it is more similar to product design. Quite difficult to grasp everything.

Semantic markup. Do you know?

  • What’s the difference between article and section?
  • How do you vertically align with CSS?

If developers have too much to take in, what should designers do? Learn to code?

The developers are masters in their field. Developers know the limits of the technologies. Designers know colour theory, typography, grid systems, use of whitespace etc. “Don’t expect to find the unicorn” - there are developers who learn design, and designers who learn to become developers, but they are rare.

Don’t expect tools to solve your problems: tools exist to help developers do design better. Things like Sketch (and, earlier, Dreamweaver) enable designers to turn ideas into web pages. Developers have the same: CSS grid systems indirectly taught us how design works. Methodologies like Atomic Design too. But don’t let anybody decide which tools you should be using. Allow yourself to be flexible enough to experiment and try new things.

A better world is possible. We cannot shift responsibility.

1-10-100 rule: if you have to fix a problem during design time, cost is 1. Fix a problem during development time, cost is 10. Fix after delivery, cost is 100. Data-based analysis showed the 100 is more like 1m.

Iterate and validate: as developers and designers, we are biased. Involve your users. Design in the browser. Rapid prototyping. Use Design Sprints. Use style guides and pattern libraries - save thousands of hours in the long run.

Designers must sit with developers, to promote knowledge sharing. Work bottom-up, understand each other, elevate developers to work on front-end design.


LWS talk: Building a dementia friendly website

The following are contemporaneous notes from a talk given at London Web Standards by Imogen Levy (@teppie). There may be errors or material I missed.

Dementia: memory loss, mood changes, changes in reasoning. Caused by a number of diseases - most common is Alzheimer’s. Over 850,000 people in the United Kingdom have dementia; by 2021, over 1m people will be living with it. Getting worse with an ageing population.

Designing websites for people with dementia is an unexplored space. In redesigning the Alzheimer’s Society website, it was key that it worked for people with dementia, so they were included at all stages of the design process.

Challenge 1: how to design for dementia

First step was to develop a series of personas. Personas help guide decisions in design. What are their needs? Used focus groups, interviews, helpline call logs, surveys and quantitative data from the existing website. Focus groups included both people with dementia and carers, and were run through local branches of the Alzheimer’s Society as well as local charitable groups, including singing groups, “Alzheimer’s Cafés” etc.

Seven personas:

  1. Person with dementia
  2. Carer
  3. Volunteer
  4. Researcher
  5. Mature supporter
  6. ??
  7. Authority?

Research findings:

  1. Goal-oriented content emerged as key need. Content needed to be structured around stages of dementia.
  2. Increasing importance of mobile and tablet usage. Smartphones are as accessible in the home as computers. Lots of users rely on smartphones and tablets alone.
  3. Content must be visual and interactive: more video and audio content.

Challenge 2

Challenge: “the website is informative but scary”. Contained 10,000 pages of content, but was added to without review. Overgrown, unclear structure. “Time to do some weeding”.

Google rankings were excellent, but navigating via the website wasn’t good. Old site started with four items on the left-hand navigation panel, but this grew. Had to start again, but didn’t want to lose out on Google rankings for key content.

New IA/site structure to make it easy for people to find stuff easily and quickly. Worked with consultants.

Digital team owned the website, but dementia content was produced mostly by the team responsible for print content, including fact sheets and material handed out in doctors’ surgeries. Printed material must meet the NHS Information Standard.

Had to restructure the content without breaking compliance with the NHS Information Standard.

The goal of the website is to help users find information and complete tasks.

Structure was tested using an online tool called Treejack.

A set of tasks is given to the user, who is then asked to navigate through the website. No design, just structure. Gives you a “spider graph”: the cleaner the graph, the easier the navigation.

Example task: find out more information about vascular dementia. 90% found information easily. Each click stage broken down.

Unsuccessful task: “how can you best look after yourself while caring for a relative?” The content was contained within the “About dementia” section, but most users went to the “Get Support” section, which did not contain the relevant information. The result was moving the information.

Lessons:

  1. Don’t let the organisation dictate the website structure - this required tricky conversations internally
  2. Clear signposted entry points on the website
  3. Needs and abilities are unique

Challenge 3: brand strategy

Wanted a new brand strategy as part of a wider goal to change societal perceptions of dementia - “I have dementia, and I’m OK.”

Use of the logo across multiple devices. Square logos on a mobile phone were hard.

Colour palette: web use was limited. Lots of testing required. Two prototypes tested - one a wireframe - to test out the IA hierarchy. Testers liked the design and layout and gave a positive impression.

Challenge 4: in trouble with the law

ICO enforcement notice against AS. Data breaches, including a website hack. A great deal of scrutiny followed; the complete restructure necessitated a new environment, involving lawyers, penetration testers and security companies.

CMS limitations: going from 10,000 to 2,000 pages meant a lot of redirects. A significant number of spreadsheets were sent over for 301 redirects.

New site enabled “friendly URLs”. Not perfect, but better.

Lessons:

  1. “Love your testing team!”
  2. More time for managing redirects
  3. Use a CMS that supports SEO friendly URLs.

Challenge 5: The Future

AS wants to reach every person who has a dementia diagnosis, and “build stuff people want”. Team wanted to show how to be user focussed.

New stuff:

  • Implementing a new CMS
  • Extensive research into people at point of diagnosis
  • Partner project with NHS Digital - “we are the authority” on dementia content, so want to align with what NHS are doing
  • Continued website improvements

Some notes on Vim and Visual Studio Code

I’ve recently been playing with text editors a bit. I’ve been trying out Microsoft’s excellent Visual Studio Code and I have found it to be perhaps the best editor for both a language I love (Python) and a language I have to occasionally write (JavaScript, or one of its many spinoffs). I like it so much that I ported over a TextMate bundle for handling Mako templates.[1]

Visual Studio Code really is great, but my one true love is and probably always will be Vim. I recently upgraded to Vim 8.0 which came out in September. The async I/O and channels support should make it so things like linters can be adapted to not block the editing thread on save. This is good.

Today, I finally cleaned up my .vimrc, refactored it out into some separate files, and took a position on the Vundle vs Pathogen debate. I’ve used Pathogen in the past, but Vundle seems preferable. Git submodules are an unending source of pain and confusion. Vundle lets me just specify a list of plugins and have that list stored in version control, much like a Gemfile or a requirements.txt or package.json or whatever Node is doing this week to download left-pad.
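
The gist, in .vimrc form (a sketch: it assumes Vundle is already cloned into ~/.vim/bundle/Vundle.vim, and the quoted paths are the plugins’ GitHub homes):

    " declarative plugin list managed by Vundle
    set nocompatible
    filetype off
    set rtp+=~/.vim/bundle/Vundle.vim
    call vundle#begin()
    Plugin 'VundleVim/Vundle.vim'            " Vundle manages itself
    Plugin 'editorconfig/editorconfig-vim'
    Plugin 'tpope/vim-commentary'
    call vundle#end()
    filetype plugin indent on
    " run :PluginInstall to fetch everything listed above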

These are the packages I actually use lots in Vim:

  • editorconfig-vim: this is grade A amazing. The idea of EditorConfig is you create a file called .editorconfig and you put it in the root of your project. It specifies all the editor settings: character set, indent style, indent spacing, end-of-line coding, final newline etc. (a sample file follows this list)
  • vim-gitgutter: once set up right, it shows you what you have changed in the editor’s gutter (that is, margin) of your files
  • ctrlp: a pretty good fuzzy file finder. The only thing I wish it did was integrate with EditorConfig so that “don’t peek inside that stupidly large node_modules folder that the front-end guys use” could be specified in a shared config file.
  • vim-endwise: because if you find yourself writing Ruby, you may as well set up ways to spend less time actually writing Ruby.
  • vim-commentary: comment and uncomment things.
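
As promised above, here is a sample .editorconfig (the values are hypothetical; the property names are the standard ones documented at editorconfig.org):

    # .editorconfig at the project root; root = true stops the upward search
    root = true

    [*]
    charset = utf-8
    end_of_line = lf
    insert_final_newline = true
    indent_style = space
    indent_size = 4

    # per-filetype override
    [*.js]
    indent_size = 2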

I have a bunch of other stuff installed in Vim including fugitive (Git integration), ack.vim (which gives you integration with the ack/ag search tools, which are basically faster versions of grep) and vim-rails (no prizes for guessing what that does). I don’t tend to use them, for two reasons. One is that I will often just use the command line. I don’t really feel like I need to use Vim as a command line replacement.[2] For something like Git, I either use the command line, or I use SourceTree. I don’t really feel like I need it baked into my editor.

The second reason I don’t use the plugins I have downloaded is simply that I don’t use them. They haven’t stuck in my head. They are quite likely to be useful, but I haven’t burned the commands into my mind like I have with Vim itself and with the other plugins I use. One of the things that Visual Studio Code, Sublime Text 2[3] and TextMate before it did was make commands quite a bit more discoverable by having a command “palette” you could use. I should probably have a personal Vim cheatsheet. I already know the basic functionality of Vim well enough, but each plugin brings some potentially powerful new stuff that you don’t really bother using until you actually learn how to use it.

There’s a few things I want to try soon. One is setting up ale, the asynchronous linting engine for Vim, which will asynchronously run your code through a whole variety of linters for different languages. The most useful for me is flake8, plus whatever the least awful JavaScript linter is when I next write JavaScript.
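
Pointing ale at flake8 should be minimal configuration (a sketch using ale’s g:ale_linters variable):

    " in .vimrc: lint Python buffers with flake8, asynchronously
    let g:ale_linters = {'python': ['flake8']}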

Oh, and while we are talking about linters, let’s talk about Proselint. Proselint is an actually useful linter for English prose. It isn’t a grammar checker. There are too many of those. I recently saw a grammar-checker-as-a-service called Grammarly being advertised at me during a YouTube video. The ad featured a rather clueless social media strategist needing a Chrome plugin to correctly spell basic words. The premium edition of Grammarly costs $139.95 a year. Jesus. Proselint is designed for people who can generally write in English but want a check for the sort of thing one would otherwise consult a style guide over. I have a Proselint plugin set up in Visual Studio Code. Once I set up the ale plugin, I hope I will get the same in Vim.

I also tried vim-devicons and Powerline/Airline, but having to install custom fonts just wasn’t something I really felt like bothering with long term, so I stopped. I’m going to give Gundo a try soon.

  1. Mako templates aren’t such a good idea, but Pyramid seems to use them. I tend to normally use either Jinja or Django templates, or ERb in Rails land (because HAML annoys me, and “logic-less templates” are a lie).

  2. Try Emacs for that. It has a surprisingly pretty website for something hosted on gnu.org!

  3. I’m told. I have never really used any version of Sublime Text. After TextMate was left without updates for such a long time, I lost interest in ever using an editor that wasn’t both free-as-in-beer and free-as-in-Stallman.


Jean-Louis Gassée has nailed the exact thing the iPhone got right: wresting control of the device away from the carriers.

This remains a problem in Android land. It isn’t a choice between empowering the device manufacturer or the user. The user doesn’t get the power–the network provider gets the power. And if the network provider doesn’t bother giving updates, the user gets all the vulnerabilities…


Bad news: Parliament voted down measures to make sex and relationship education required in British schools. Instead of comprehensive, rounded and LGBT-inclusive, British schools can continue to teach a minimal battery of reproductive biology combined with a tissue of religiously-inspired propaganda and call it education in order to satisfy the concerns of “faith communities”. In Britain not offending religious people is apparently a more important public policy outcome than teaching students that gay people are human, how to negotiate sexual consent and how to not get sexually transmitted diseases.