tommorris.org

Discussing software, the web, politics, sexuality and the unending supply of human stupidity.


design


You (probably) don't need a chatbot

There has been a great hullabaloo in the last few months about the rise of chatbots, and discussions of “conversational UIs” or, even more radically, the concept of “no UI”—the idea that services might not need a UI at all.

This latter concept is quite interesting: I’ve written in the past about one-shot interactions. For these one-shot interactions, UI is clutter. But chatbots aren’t the answer to that problem, because chatbots are UI too, just a different sort of UI. Compare…

Scenario 1: App

  1. Alice hears an amazing song playing in a club.
  2. Alice pulls out her iPhone and unlocks it by placing her finger on the TouchID sensor.
  3. Alice searches on the homescreen for the Shazam app.
  4. Alice opens Shazam, then presses the button to start the process of Shazam identifying the song that is currently playing.
  5. Alice waits.
  6. Alice is told what the song is and offered links to stream it or download it from a variety of streaming and download services that vary depending on the day of the week, the cycle of the moon, and how Shazam’s business development team are feeling this week.

Scenario 2: Chat

Someone at Shazam decides that apps are a bit old-fashioned and decides to build a chatbot. They have read an article on Medium.com that tells them that chatbots are better, and decide to build one based solely on this advice rather than any actual empirical evidence.

  1. Alice hears an amazing song playing in a club.
  2. Alice pulls out her iPhone and unlocks it by placing her finger on the TouchID sensor.
  3. Alice searches on the homescreen for the Facebook Messenger app.
  4. Alice opens Facebook Messenger, then locates the existing chat session with the Shazam bot.
  5. Alice scrolls back up the chat to work out the magic phrase she needs to type to trigger the chatbot into listening to the music.
  6. Alice waits.
  7. Alice is told what the song is and offered whatever extra rich data the chat UI is allowed to show.

As you can see, this is a vast improvement, not because it makes the process less involved or elaborate, but because someone on Medium.com told them that it is new and exciting.

Scenario 3: Idealised One-Shot Interaction

  1. Alice hears an amazing song playing in a club.
  2. Alice taps a button on her smartwatch. Everything else happens in the background. Alice continues partying and enjoying herself rather than being the saddo staring at her phone all night.

For those without a smartwatch, a lockscreen button on the phone could be substituted.

Anyway, this is a slight distraction from the broader point: chatbots are a bit of a silly fad, and they seem to be adopted based on fashion rather than on any actual utility.

But, but, there’s this awesome chatbot I use, and I really like it!

Great. I’m not saying that they have no purpose, but that chatbots are being adopted even though they often are worse at what they do than the alternative. They also come with considerable downsides.

First of all, chatbot UIs are poor at letting a user compare things. When someone browses, say, Amazon or eBay or another e-commerce service, they will often wish to compare products. They’ll open up competing products in different tabs, read reviews, check up on things on third-party sites, ask questions of their friends via messaging apps and social media sites like Facebook. Chatbot UIs remove this complexity and replace it with a linear stream.

Removing complexity sounds good, but when someone is ordering something, researching something or in any way committing to something, navigating through the complexity is a key part of what they are doing.

Imagine this scenario. Apple have 500 different iPhones to choose from. And instead of calling them iPhones, they give them memorable names like UN40FH5303FXZP (Samsung!) or BDP-BX110 (Sony!). Some marketing manager realises the product line is too complex and so suggests that there ought to be a way to help consumers find the product they want. I mean, how is the Average Joe going to know the difference between a BDP-BX110, a BDP-BX210, and a BDP-BX110 Plus Extra? You could build a chatbot. Or, you know, you could reduce the complexity of your product line. The chatbot is just a sticking plaster for a broader business failure (namely, that you have a process whereby you end up creating 17 Blu-Ray players and calling them things like BDP-BX110 rather than calling them something like “iPhone 7” or whatever).

Chatbots aren’t removing complexity as much as recreating it in another form. I called my bank recently because I wanted to enquire about a direct debit that I’d cancelled but that I needed to “uncancel” (rather than setup again). I was presented with an interactive voice response system which asked me to press 1 for payments, 2 for account queries, 3 for something else, and then each of those things had a layer more options underneath them. Of course, I now need to spend five minutes listening to the options waiting for my magic lucky number to come up.

Here’s another problem: the chatbot platforms aren’t necessarily the chat services people use. I’m currently in Brazil, where WhatsApp is everywhere. You see signs at the side of the road for small businesses and they usually have a WhatsApp logo. WhatsApp is the de facto communication system for Brazilians. The pre-pay SIM card I have has unlimited WhatsApp (and Facebook and Twitter) as part of the 9.99 BRL (about USD 3) weekly package. (Net neutrality? Not here.) The country runs on WhatsApp: the courts have blocked WhatsApp three times this year, each time bringing both business and personal interactions to a grinding halt. Hell, during Operação Lava Jato, the ongoing investigation into political corruption, many of the leaks from judges and politicians have been of WhatsApp messages. Who needs Hillary Clinton’s private email servers when you have WhatsApp?

WhatsApp is not far off being part of the critical national telecoms infrastructure of Brazil at this point. Network effects will continue to place WhatsApp at the top, at least here in Brazil (as well as most of the Spanish-speaking world).

And, yet, WhatsApp does not have a bot platform like Facebook Messenger or Telegram. To get those users to use your chatbot, you need to convince them to set up an account on a chat network that supports your bot. For a lot of users, they’ll be stuck with WhatsApp, the app they use to talk to their friends, and Telegram, the app they use to talk to weird, slightly badly programmed robots. Why bother? Just build a website.

Now, in fairness, WhatsApp are planning to change this situation at some point, but you still have an issue to deal with: what if your users don’t have an account on the messaging service used by the bot platform?

One of the places chatbots are being touted for use is in customer service. “They’ll reduce customer service costs”, say proponents, because instead of customers talking to an expensive human you have to employ (and pay, and give breaks and holidays and parental leave and sick days and all that stuff), they just talk to a chatbot which will answer their questions.

It won’t though. Voice recognition is still in its infancy, and natural language parsing is still fairly primitive keyword matching. If your query is simple enough that it can be answered by an automated chatbot, it’s simple enough to just put the information on your website, where the customer can find it with their favourite search engine. If it is more complicated than that, your customer will very quickly get frustrated and need to talk to a human. The chatbot serves only as a sticking plaster for a lack of customer service, or for business processes that are so complicated that the user needs to talk to customer service rather than simply being able to complete the task themselves.
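To make the “primitive keyword matching” point concrete, here is a minimal sketch in plain JavaScript of roughly how a lot of these bots behave. The intents and canned replies are invented for illustration; they aren’t taken from any real product.

```javascript
// A minimal sketch of keyword-based "natural language understanding".
// Intents and replies are made up for illustration.
const intents = [
  { keywords: ["hours", "open", "opening"], reply: "We're open 9am to 5pm, Monday to Friday." },
  { keywords: ["refund", "return"],         reply: "You can request a refund at example.com/refunds." },
  { keywords: ["human", "agent", "person"], reply: "Transferring you to a human. Please hold." }
];

function respond(message) {
  const words = message.toLowerCase().split(/\W+/);
  for (const intent of intents) {
    if (intent.keywords.some(keyword => words.includes(keyword))) {
      return intent.reply;
    }
  }
  // Anything the keyword list didn't anticipate falls straight through.
  return "Sorry, I didn't understand that. Could you rephrase?";
}

console.log(respond("What are your opening hours?"));         // matches "opening"
console.log(respond("My parcel arrived broken and leaking"));  // falls through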

You know what else will suffer if there were a widespread move to chatbots? Internationalisation. Currently, the process of internationalising and localising an app or website is reasonably well understood. In terms of language, the process isn’t complex: you just replace your strings with calls to gettext or a locale file, and then you have someone translate all the strings. There’s sometimes a bit of back and forth because something doesn’t really make sense in another language, so you have to refactor a bit. There are a few other fiddly things like address formats (no, I don’t have a fucking ZIP code) and currency, as well as following local laws and social taboos.
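For comparison, the mechanical bit of traditional localisation really is that simple: a lookup from a string key into a locale table. Here is a hand-rolled, gettext-style sketch; the locale data is invented for illustration, and a real app would use gettext itself or its framework’s i18n layer.

```javascript
// A gettext-style lookup against a locale file, sketched by hand.
// The strings below are invented for illustration.
const locales = {
  "en":    { "greeting": "Hello, %s!", "song_found": "Found it: %s" },
  "pt-BR": { "greeting": "Olá, %s!",   "song_found": "Achei: %s" }
};

function t(locale, key, ...args) {
  const table = locales[locale] || locales["en"];
  const template = table[key] || locales["en"][key] || key;
  // Naive %s substitution; real gettext also handles plurals and context.
  let i = 0;
  return template.replace(/%s/g, () => String(args[i++]));
}

console.log(t("pt-BR", "greeting", "Alice")); // "Olá, Alice!"
console.log(t("de", "song_found", "Heroes")); // falls back to English
```

Translators get a flat list of strings and hand back a translated one; nobody has to teach the software to parse Portuguese. A chatbot offers no such flat list to hand over.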

In chatbot land, you have the overhead of parsing the natural language that the user presents. It’s hard enough to parse English. Where are the engineering resources (not to mention linguistic expertise) going to come from to make it so that the 390 million Spanish speakers can use your app? Or the Hindi speakers, or the Russian speakers? If your chatbot is voice-triggered rather than text-triggered, are you going to properly handle the differences between, say, American English and British English? European Portuguese, Brazilian Portuguese and Angolan Portuguese? European Spanish and Latin American Spanish? Français en France versus Québécois? When your chatbot fucks up (and it will), you get to enjoy a social media storm in a language you don’t speak. Have fun with that.

And you can’t use the user’s location to determine how to parse their language. What language should you expect from a Belgian user: French, Dutch or German?

If you tell a user “here’s our website, it’s in English, but we’ve got a rough German translation”, that’s… okay. I use a website that is primarily in German every day, and the English translation is incomplete. But I can still get the information I need. If, instead, your service promised to understand everything I say, then completely failed to speak my language, that’d be a bit of a fuck you to the user.

In the chatbot future, the engineering resources go into making it work in English, and then we just ignore anyone who speaks anything that isn’t English. World Wide Web? Well, if we’re getting rid of the ‘web’ bit, we may as well get rid of the ‘world’ and ‘wide’ while we’re at it.

Siri and Cortana are still a bit crap at language parsing, even with the Herculean engineering efforts of Apple and Microsoft behind them. An individual developer isn’t going to do much better. Why bother? There’s a web there and it works.

There’s far more to “no UI” or one-shot interactions than chat. But I’m cynical as to whether we’re ever going to reach the point of having “no UI”. We measure our success based on “engagement” (i.e. how much time people spend staring at the stuff we built). But the success criterion for the user isn’t how much time they spend “engaging” with our app, but how much value they get out of it divided by the amount of time they spend doing it. The less time I spend using your goddamn app, the more time I get to spend, oh, I dunno, looking at cat pictures or snuggling with my partner while rewatching Buffy or writing snarky blog posts about chatbots.

But so long as we measure engagement by how many “sticky eyeballs” there are staring at digital stuff, we won’t end up building these light touch “no UIs”, the interaction models of set-it-and-forget-it, “push a button and the device does the rest”. Because a manager won’t be able to stand up and show a PowerPoint of how many of their KPIs they met. Because “not using your app” isn’t a KPI.

Don’t not build a chatbot because of my snarkiness. They may solve a problem that your users have. They probably don’t but they might. But please don’t just build a chatbot because someone on a tech blog or a Medium post told you to. That’s just a damn cargo cult. Build something that delivers value to your users. That may be a chatbot, but most likely, it’s something as simple as making your website/app better.


An excellent article on the silly Conversational UI trend: Bots won’t replace apps. Better apps will replace apps.

As the author of the piece notes, there’s plenty that’s wrong with the current trend in app design. Conversational UIs are orthogonal to fixing those problems. Each individual app has become its own silo. The model of “spend a bunch of money to hire a bunch of iOS and Android devs to build out a custom app for each platform, then spend a ton of resources trying to convince people to download those apps” has to wind down at some point. And there will be a point where we want a lot more fluidity between interactions. We still spend an enormous amount of time jockeying data between apps and manually patching pipelines of information into one another like a telephone operator of old. Conversational UIs don’t fix any of those things. Better UIs, which often means less UI, fix that. As does more focus on trying to make it so we can more efficiently and seamlessly have single-serving, one-shot interactions (which goes against all the metrics: we often measure success by how much time someone spends interacting with something, rather than measuring success by how well that thing hides itself away and doesn’t need to be interacted with).


I read about “playful cities” and I wonder how “playfulness” makes cities actually better for humans. Is it reducing crime? Is it helping people hate their jobs less? Is it helping reduce the number of people chucking themselves under trains to end it all? Is it making people less likely to be racist or homophobic or to loudly proclaim their hatred for immigrants? Is it helping reduce social deprivation? Improving educational chances for the worst off in society? Is it actually making a meaningful change in how cities are?

Or is it just making pretty things so designers can say “look at the pretty things I made” and put photos of said pretty things in their portfolios with lots of pious discussions of urbanism? When I hear ‘playful’, my mind usually jumps to ‘insipid’. I’d love for someone to convince me that I’m wrong to be so cynical about this playfulness stuff.


The Sellotape Problem, design and the indie web

For those who haven’t been aware, my site tommorris.org, where you are hopefully reading this, is part of a plucky little group of Internet folk who raise the banner of the “indie web”. Sadly, I haven’t been working on the code behind my site as much as I’d like recently. Last week, I was racing away on a work project. And there’s always the time I spend working on a little encyclopedia you may have heard of.

One of the more complex issues the indie web faces is a non-technical problem. The technical problems are easy enough to work out. Good people can differ on preferred approaches there. Some will like Activity Streams, some will like microformats, some will like PubSubHubbub, some will like schema.org, some weirdos like me will even like the Semantic Web stack with technologies like RDFa. This reflects technical differences and underlying philosophical differences (I see the web as containing graph-like data, and thus needing a graph-like data model; others do not find value in such an approach).

But, for users—in this case both readers and writers—the particular brands of techno-duct tape that hold the whole contraption together shouldn’t matter too much. There are some big problems besides the technical ones. In fact, much bigger problems we’re going to have to solve in order to build our own homebrewed alternative to centralised social networks like Twitter and Facebook.

That problem is that the centralised social networks don’t rule the web just because they are big evil silos who hoard all your content and so on (true though that is). No, the far bigger problem is that they own the genre. Before Flickr, everyone knew what a photograph was. Before YouTube, everyone knew what a video was. Before Wikipedia, everyone knew what an encyclopedia entry was. Before blogging… not really. I mean, we are familiar with what writing is, but not necessarily in the form it takes online. People still don’t get that it is sort of a mixture of citizen journalism and diary writing, but different because it is on the web. Photography and video have settled on the web in a modified form: naff hipster camera filters weren’t necessarily widely popular before Instagram, three-minute video diaries weren’t necessarily popular pre-YouTube. But writing on the web has been a maze of exploration. From fairly masturbatory hypertext fiction that has been subject to lots of excessive academic theorising, to things like the humble FAQ, to wikis (of which Wikipedia is actually a very unrepresentative example), to blogs and microblogs, how we write online has changed pretty dramatically.

It’s perfectly easy to start a microblog. Hey, so long as your blogging system doesn’t require anything weird like obligatory titles for posts, there’s no particular reason why you can’t just start microblogging on your own site. And per the POSSE principle, syndicate it out on to Twitter and so on. The problem is the interpretation given to a microblog post. If I write a short snappy silly post on my personal site, people give it a seriousness that it doesn’t seek or deserve. If it were on Twitter in an admittedly compressed form, it would never have got linked on Hacker News because it is… just a tweet. But it is just a microblog post. The long post I made about Neo4J a day or so later didn’t get any such consideration.
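For what it’s worth, the POSSE plumbing is the easy part. A rough sketch follows; the endpoint URL, the note format and the authorisation header are placeholders for illustration, not the actual code behind this site.

```javascript
// POSSE sketch: Publish on your Own Site, Syndicate Elsewhere.
// Endpoint URLs and credentials are placeholders.
async function postToOwnSite(note) {
  // The canonical copy lives on your own site.
  const res = await fetch("https://example.com/notes", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(note)
  });
  return res.json(); // assume the site replies with { url: "..." }
}

async function syndicateToTwitter(text, originalUrl) {
  // Stand-in for an OAuth-signed call to Twitter's statuses/update endpoint.
  await fetch("https://api.twitter.com/1.1/statuses/update.json", {
    method: "POST",
    headers: { Authorization: "OAuth <signed parameters go here>" },
    body: new URLSearchParams({ status: `${text} ${originalUrl}` })
  });
}

async function publishNote(text) {
  const post = await postToOwnSite({ content: text, kind: "note" });
  await syndicateToTwitter(text, post.url);
  return post.url;
}
```

The plumbing isn’t the hard part, though. The interpretation is.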

We have a Sellotape problem here. Or a Google-as-a-verb problem. Twitter has defined microblogging just like Sellotape defined sticky tape, Xerox defined photocopying, Hoover (and now Dyson) defined vacuum cleaners, and “to Google” has become the verb you use to talk about doing a web search. We struggle to find the terminology to talk about it outside of Twitter: identi.ca—now status.net—used to call it a “dent”, and Facebook call it a “note”. We’ve used “posts”, “notes”, “statuses”, “updates”, “microblog post” and so on. The name isn’t the issue, or isn’t the complete issue. The issue is we don’t have a way yet of describing, displaying and understanding what one of these little posts is.

Finding a language to talk about these independent tweets is hard but not that important. I don’t particularly care about whether we call them updates or notes or statuses or even “tweet-like posts”. What is far more important is that we have ways of signifying to people that a short status or tweet-like post is in fact that. It is more important that the reader understands that they are looking at something that should be mentally pigeonholed into the same category as tweets or Facebook status updates rather than as great philosophical monologues on the state of the human condition.

My usual source for design inspiration, namely books, fails me here. Books—or at least, the sort of books that people who go mushy about typography (I’m guilty of that) tend to like—aren’t usually in the business of pointing out their own frivolity and lack of seriousness. I mean, there’s always Comic Sans and Chalkboard, but I don’t see anyone suddenly writing their site all in Comic Sans. And that doesn’t send the correct message either: it’d be like a stand up comic wearing a clown outfit on the basis that the clown outfit represents humour. It’s more complicated than that.

The answer may lie in kitsch and camp. Quite what that even means in web design, I’m struggling to work out. Bette Midler songs? What exactly is there to ludicrously send-up? Well, there’s 8-bit gaming, perhaps? The Super Mario theme tune as web site. We live in an ultra-modernist Helvetica world, and finding a playful but subtle way of expressing the frivolity and fun of something like Twitter in the context of everyone having their own little self-published independent microblogs is hard.

The challenge isn’t just building the technical infrastructure for the independent and decentralised web. The bigger challenge is building a shared design language for that new frontier. It will be fun though.


I think in the last month or so, I’ve worked out something important about how I think. I go in cycles between wordy things and visual things. It’s almost like seasons: I’ll go for a long time in word mode and then suddenly switch and I’ll be in image mode.

I can trace it through my background. I sort of never thought of myself as good at any of that design stuff (still don’t, really). I once took an after-school art class, and completely surprised myself by finding that, with some attention, I could actually draw better than I thought I could. We were paired up and had to draw each other. I had to sit and draw a girl, and spent about an hour intently focussed on the task of sketching, methodically and carefully, barely even seeing the person but just seeing the textures and light.

And then I kind of got thrown into heavy word mode at school. I balanced this with doing a photography course, which eventually led me to art school. Which I quit, and went off and did philosophy: lots of words, few pictures. And I bounce back and forth between the two. This may explain in part why I quit my Ph.D: word overload. I haven’t been reading as much as I used to. I’m kind of worded out. (Writing doesn’t count: that’s just reflexive at this point.)

I think part of what makes building Ferocity quite good fun has been that at certain points, I’ve hit a technical stumbling block, so I close my text editor, check whatever I’m working on into Git, then find a design task that needs work. And the result is… not bad. I’m no whiz with CSS. I can crash about and get something nice, but I’m basically doing with CSS what newbie programmers do with code: it’s some combination of copy-and-paste programming and cargo cult programming. I’m not proud of this fact.

Well, having realised that I’m now into visual season, I’m going to ride that particular ride for a while. I’m reading up on typography to try and understand it better. I’ve started reading Thinking With Type and then plan to move on to Detail in Typography.

I’m so frequently having to plonk pixels in the right place, whether for my own projects or for other people’s work, that I may as well learn to do it well.


For a non-designer, I spend quite a chunk of my time lining up other people’s pixels.


Retina

I had to pop down to the Apple Store today. Somehow, a weird crack has appeared in my laptop’s LED screen (I didn’t drop it, ‘onest!). I got there half an hour or so before meeting my appointed Genius (I was his last appointment before he ended his shift and went off on a ten-day holiday, neatly missing the iPhone 5 launch, a move that certainly puts him at least in the smarty-pants league) and had a little chance to play with the shiny new Retina display MacBook Pros. I had a go with them last time too.

And I had a look at this little beta site. Blimey. The text looks pretty darn gorgeous. The only thing that looks a bit crap is the logo image. As I’m going to be visiting the Apple Store a few times in the next week, I’ve decided I may as well get the graphics looking nice on the Retinabooks. I’ve just created a 2x-sized logo graphic and checked it into Git. Tomorrow, I’ll spend a few minutes implementing it in the markup: either using superacidjax’s clear_eyes, a Rails gem that gives you a nice easy way of doing Retina graphics, or just using Retina.js.
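The trick behind both of those libraries is simple enough. Here is a rough sketch of the general idea, not the gem or Retina.js itself, assuming each image has an @2x sibling sitting next to it on the server:

```javascript
// Rough sketch of the @2x swap that libraries like Retina.js automate.
// Assumes "logo.png" has a "logo@2x.png" sibling on the server.
if (window.devicePixelRatio > 1) {
  document.querySelectorAll("img").forEach(img => {
    const highRes = img.src.replace(/\.(png|jpe?g)$/i, "@2x.$1");
    const probe = new Image();
    probe.onload = () => { img.src = highRes; }; // only swap if the 2x file exists
    probe.src = highRes;
  });
}
```

Serve double the pixels to double-density screens and the logo stops looking fuzzy.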

Either way, I’m actually pretty impressed and excited by retina displays. I’d rather like it if Apple could spend more time doing things like Retina displays and less time on silly gimmicks, patent lawsuits and the like. The whole 72dpi thing has always been a bit shit. Perhaps it is my eyesight, which has always been pretty fucking good, or perhaps it is the formal training in photography, but I actually notice how shit things look. I’ll look at images in a book and see the dots on the page and say “no, look, that reproduction looks like shit”, and other people will think I’m crazy and I’ll get a fucking loupe out and show them that, no, really, it really does look like shit.

And as per this XKCD, I’ve always been rather surprised that people seem to think big monitors are impressive.

Apple have done a stunningly good job educating people on this. Yeah, you think your 60” 1080p TV is impressive? Whack a 1080p video up on a Retinabook and you’ve got more than enough space on the side for Twitter. I’ve long known I want much better displays, but Apple are now selling this to computer buyers. Give it two or three years and we’ll see it permeate the rest of the industry. And a good thing too. It’ll be nice to forget the bad old days when you could look closely at a screen and see dots.


The danger of design thinking

Seyi Ogunyemi has an interesting post about writing as design. Go read it.

Now, I have a story.

When I went to secondary school, they taught us “food technology”. Not home economics. Food technology. We had two different women who taught us food technology: one was a 50-something lady who liked to say ‘scone’ to rhyme with ‘cone’ rather than to rhyme with ‘gone’. Eventually she was replaced by a blonde lady who many of the lads thought was exceptionally attractive and who was subject to numerous jokes about “I’d take her buns out of the oven any day” etc. etc.

I tell you this because that is about all I remember from the years spent in the food technology classroom. Food technology was one of the tracks of design and technology. You could do textile design, graphic design, food technology, electronics or plain old design-and-technology-type design: woodwork and all that. Food was treated as a design process, and we’d have to “design” meals in order to fit into a “designed” menu, to match a “design brief” of an imagined restaurant.

Quite what the point of this was eludes me. Cooking and food preparation seems like a pretty useful skill that everyone ought to have. If you are particularly good at it, perhaps you have a career awaiting you in the food business, working in restaurants or designing prepared meals to sell in supermarkets.

Coming up with the range of sandwiches for sale in a national supermarket chain would require considerable design skills: knowing what the customer wants and dealing with the difficulties and economics of large-scale manufacturing and distribution. I can see that as a design challenge, and an innovative solution to a design challenge could potentially save the company money, increase the quality of the product, and maybe the savings could be passed on to the consumer. There’s a whole lot of design thinking going on there. And if you want to teach kids about design thinking, getting them to contemplate something like mass food manufacturing is a good way to ensure they aren’t just thinking of design as being about pixels and typefaces or ink on paper or sewing bits of cloth together, or whatever the medium is that they need to design in.

But, what the design thinking misses is that it is essentially a secondary thing here. When I wake up and want to prepare myself some breakfast, it’s not a “design” issue. It’s cracking an egg, whisking it, frying it and pouring some ketchup on it. Food comes first, design comes second. In a school environment, there’s absolutely no reason why we should be asking children to “design” food when they haven’t yet worked out how to boil an egg or bake a loaf of bread. If I haven’t even got the most basic skills and competence at handling the “medium”, thinking about how to “design” in the “medium” is ridiculous (and, really, calling food a medium seems on a very basic level to be a category mistake, given the important biological role that eating plays compared to, say, Photoshop).

A similar thing occurs in sex education. When I was in school, sex education was delivered primarily as part of biology. Remember being a teenager: the primary questions one had about sex couldn’t be answered by writing labels on photocopied diagrams of penises and vaginas. Hey, you’ve never been told about the importance of being open and communicative with your partner, and perhaps your school made gay or lesbian relationships basically invisible (thank you, Section 28!), and perhaps you don’t know enough about safe sex… but at least you know where to locate the vas deferens on a diagram. A huge chunk of the problem with education comes from trying to fit these practical skills into the curriculum box. When practical education goes wrong, it is treated as academic: knowing how to cook and knowing how to have sex in a responsible and safe way aren’t academic skills, they aren’t things you should get marks for on GCSEs, and they shouldn’t be loaded into the “science” box or the “technology” box or the “art and design” box. It’s a category mistake in the same way as people who think that moral philosophy is about how to be moral rather than critical discussions of the concept of morality (cue the usual complaint about the distinct lack of saintliness of certain professors of moral philosophy).

Design thinking is about making decisions, making tradeoffs and trying to find ingenious solutions so you can reduce the tension between competing goals. But if you don’t know what the possible things you can decide are, where the tradeoffs are made, the nature of the goals, the solutions that have gone before, you can’t do design thinking. Design thinking is a luxury of those who already know how to do the thing in question. I’d like to think I can write reasonably well, perhaps to the point where I can appreciate and use a reasonably wide range of literary devices. Certain words can be used to lighten the mood, or to build up tension, or to gently mock the people who use them, or as a form of reappropriation. A particular speciality of mine is surreal introductions to throw people off their guard a bit.1 Obviously, I’m pretty well-trained on the whole expressing-righteous-anger-and-bile front.

Is writing a form of design thinking? Sure. Do I think of it as design thinking? Not really, but I’m not going to deny it if someone like Seyi challenges me on it. Is it a good way to teach people to think of it as such? Hell no.

The ability to “design words” is something you get when you get good. When you start writing, you shouldn’t—in my opinion anyway—worry too much about what effect you are trying to bring about. You don’t need emotional engineering, you need to just pootle around in your writingmobile, exploring how the thing works, shifting gears and experimenting. You need to burn through words and paragraphs, really just following your natural intuition. Then you get to the ruthless sentence massacre stage that people like Stephen King talk about: cutting out all the crap, self-editing hard. Eventually, you can start thinking of your words as design, but first think of them as words damnit.


"Om nom nom, this design tastes as magical and revolutionary as an iPad."

To have people thinking about “designing” words before they are able to really write is as silly as expecting people to “design” pizzas before they have been taught, well, the basic grammar: to grate the mozzarella, to knead the dough, to slice the peppers and salami and so on.

I suppose I should elaborate this as some kind of theory. Okay then. Here’s a first grasp towards one. Heidegger and Gadamer have written of the hermeneutic circle: this is basically an acknowledgment that when you are trying to understand something, you bring yourself and your ‘world’ to understanding the world constructed in the thing you are trying to understand. This takes place at multiple levels. Think of trying to understand a religious text: you come at it with certain prejudgments, and you can’t really take it at face value. Each time you re-read it, you understand a little more. You understand a small aspect of the work, which informs your understanding of the larger work it is contained in. As you understand the larger work, it helps you make sense of the confusing bits of the smaller aspects.

This kind of dynamic, this cycle of understanding and involvement recurs through all sorts of things. Particularly well-written drama is a cycle between you understanding the characters and then understanding yourself through the characters. There’s a cycle of interpretation: you understand the dialogue and the plot through what it is the character is representing, and you understand the point of the character through the dialogue and the plot.

There is something similar going on with design: you need the basic competence to do design, and you need design to understand the point of the basic competence. You need to try and solve some design-like problems, but thinking about it all as design means you miss developing the basic competence. All the design thinking in the world can’t let you transcend the limitations of the medium you are working in, and you need to learn the medium, not some generic idea of “design thinking”. Design thinking risks becoming all consuming at the expense of the actual reality derived from practice.

I once heard a woman on the train saying to someone “If you can design a dress, you can design a house”. As someone whose grandfather spent years studying to become an architect, I felt like standing up and saying “no, you can’t”. My grandfather designed prisons: ensuring murderers can’t roam the streets is rather a different design challenge than getting on the front cover of Vogue or GQ. However much shared “design thinking” exists between the fashion designer and the architect, the Home Office won’t be recruiting the next prison architect from the catwalks of the London Fashion Week.

  1. I never bought into the whole “you need a beginning, a middle and an end” nonsense. Because, “duh, obvious”. Unless the work is infinite, it will begin and end at some point. Often the beginning is some variation on “Well, I’m going to discuss X”. Then the middle comes and it consists of them discussing X. And then the end is “Well, I’ve discussed X.” I think this is completely redundant. Imagine a romantic scene in a movie, and the lead informs their partner “I’m about to kiss you.” Then he kisses them. Then he says “In conclusion, I have just kissed you.” This is why introductions are so much more fun if they are a bit more like introductions in real life: messy and imperfect.


Why WebFonts matter

I want to share a font that changed my life.

This is a paragraph from this page on Malayalam Wikipedia. The article is about Simone de Beauvoir. (You could tell, right?)

I listen in on the web design community. It’s often very interesting, but can be a little bit narrow. When WebFonts first came out, the discussions were about type foundries and putting DRM on type and how WebFonts would never work because of the lack of copy protection.

And, yes, typography designers will now have to work out what their business models are. They may follow the music industry and try to sue everyone into oblivion. Because that has worked real well.

But that doesn’t matter. The Web as universal archive is so much more important than whether or not existing industries can continue making money. Napster may have pissed off the music industry, but it helped build an enormous library of human creativity.

Designers look at the web and see that it needs civilizing, it needs design. It needs beauty. It’s been designed–if you dare use that word–by philistine programmers who spend fourteen hours a day staring at white text against a black background in some godforsaken text editor like Emacs or Vim. They never went to art school and they prefer reading Perl manuals to reading Keats. They probably use Android, not iOS.

They are right (although I did briefly go to art school). The web is ugly. And WebFonts might not help. The type foundries may or may not jump on to WebFonts. The DRM schemes may or may not happen. And it doesn’t matter.

The reason WebFonts are vitally important is because of the key role of typography. Typography exists to make things readable. And things currently aren’t readable for hundreds of millions of people around the world, because there are many, many languages that don’t have fonts. There are 35.9 million people who speak Malayalam. Up until recently, they couldn’t use the web in their own language. At the end of the fiscal year 2010, Apple had sold 73.5 million iPhones.

Malayalam fonts and input methods mean Malayalam now has a blogging community; it now has a vibrant Wikipedia and Wikisource. The Malayalam Wikipedia community have been distributing Wikipedia and Wikisource on CD to schools in India: WebFonts mean that the CD they distribute need only contain HTML, CSS and JavaScript, which means that it can work on any computer that has a web browser on it.

What’s more important to the world: that smartphone users in the West have a “beautiful user experience” with pretty typography, or that the “rest of the world” as we so frequently call them can actually read and write on the Web? How you answer that question will tell you how important WebFonts will be for you. For me, everyone being able to have the chance to participate in the World Wide Web is far more important than making sure the privileged few have a more magical user experience.

WebFonts are important because language requires type, and access to one’s own language is about as profound a social justice issue as you can find. As the early Wittgenstein said, the limits of my language are the limits of my world. If you can’t type or read your language online, your world is not part of the World Wide Web. That needs fixing.


I'm not an experience-seeking user, I'm a meaning-seeking human person

After an evening of cynicism last night, reading a bloody awful article by a pompous twit, and travelling on bloody slow trains, and then logging on to Twitter and seeing a bunch of bloody fools debating things they are completely ignorant of without even a modicum of philosophical charity, I found something which restored my trust in the human race: psd’s talk at Ignite London. It combines giving naughty link-breaking, data-sunsetting corporate types a spank for misbehaviour with an admiration for I Spy books. I had I Spy books as a kid, although mine were products of the late 80s/early 90s and had the Michelin Man, though not in nearly as intrusively corporate a way as Paul’s slides of current-day I Spy suggest. Do forgive me: I’m going to do one of those free-associative, meditative riffing sessions that you can do on blogs.

The sort of things Paul talks about underlie a lot of the things I get excited about on the web: having technology as a way for people to establish an educational, interactional feeling with the world around them, to hack the world, to hack their context, to have the web of linked data as another layer on top of the world. The ‘web of things’ idea pushes that too far in the direction of designed objects (or spimes or blogjects or whatever the current buzzword is), and the way we talk about data and datasets and APIs makes it all too tied to services provided by big organisations. There’s definitely some co-opting of hackerdom going on here that I can’t quite put my finger on, and I don’t like it. But that’s another rant.

I’ve been hearing about ‘gamification’ for a while and it irritates me a lot. Gamification gets all the design blogs a-tweeting and is a lovely refrain used at TED and so on, but to me it all looks like “the aesthetic stage” from Kierkegaard applied to technology. That is, turning things into games and novelties in order to mask the underlying valuelessness of these tasks. Where does that get you? A manic switching between refrains. To use a technological analogy, this week it is Flickr, next week it is TwitPic, the week after it is Instagram. No commitment, just frantic switching based on fad and fashion. Our lives are then driven by the desire to avoid boredom. But one eventually runs out of novelties. The fight against boredom becomes harder and harder and harder until eventually you have to give up the fight. There’s a personal cost to living life as one long game of boredom-avoidance, but there’s also a social cost. You live life only for yourself, to avoid your boredom, and do nothing for anybody else. Technology becomes just a way for you to get pleasure rather than a way for you to contribute to something bigger than yourself.

In Kierkegaard’s Either/Or, the alternative to this aesthetic life was typified by marriage. You can’t gamify marriage, right? You commit yourself for life. You don’t get a Foursquare badge if you remember your anniversary. The alternative to aestheticism and boredom is an ethical commitment. (And, for Kierkegaard anyway, ultimately a religious commitment.1) And I think the same holds true for the web: you can gamify everything, make everything into Foursquare. Or you can do something deeper and build intentional, self-directed communities of people who want to try and do something meaningful. Gamification means you get a goofy badge on your Foursquare profile when you check into however many karaoke bars. A script fires off on a server somewhere and a bit changes in a database, you get a quick dopamine hit because an ironic badge appears on your iPhone. Congratulations, your life is now complete. There’s got to be more to life and technology than this. If I had to come up with a name for this alternative to gamification that I’m grasping for, it would be something like ‘meaning-making’.

Gamification turns everything into a novelty and a game (duh). Meaning-making turns the trivial into something you make a commitment to for the long haul; it turns the things we do on the web into a much more significant and meaningful part of our lives.

In as much as technology can help promote this kind of meaning-making, that’s the sort of technology I’m interested in. If I’m on my deathbed, will I regret the fact that I haven’t collected all the badges on Foursquare? Will I pine for more exciting and delightful user experiences? That’s the ultimate test. You want a design challenge? Design things people won’t regret doing when they are on their deathbed and design things people will wish they did more of when they are on their deathbed. Design things that one’s relatives will look back on in fifty years and express sympathy for. Again, when you are dead, will your kids give a shit about your Foursquare badges?

A long time ago, I read a story online about a young guy who got killed in a road accident. I think he was on a bike and got hit by a car while riding home from work. He was a PHP programmer and ran an open source CMS project. There was a huge outpouring of grief and support from people who knew the guy online, from other people who contributed to the project. A few people clubbed together to help pay for two of the developers to fly up to Canada to visit his family and attend the funeral. They met the guy’s mother and she asked them to explain what it is that he was involved in. They explained, and in the report they e-mailed back to the project, they said that the family eventually understood what was going on, and it brought them great comfort to know that the project that their son had started had produced something that was being used by individuals and businesses all over the world. This is open source: it wasn’t paid for. He was working at a local garage, hacking on this project in between pumping petrol. But there was meaning there. A community of people who got together and collaborated on something. It wasn’t perfect, but it was meaningful for him and for other people online. That’s pretty awesome. And it’s far more interesting to me to enable more people to do things like this than it is to, I dunno, gamify brands with social media or whatever.

This is why I’m sceptical about gamification: there’s enough fucking pointless distractions in life already, we don’t need more of them, however beautiful the user experiences are. But what we do need more of is people making a commitment to doing something meaningful and building a shared pool of common value.

And while we may not be able to build technologies that are equivalent in terms of meaning-making as, say, the importance of family or friendship or some important political commitment like fighting for justice, we should at least bloody well try. Technology may not give us another Nelson Mandela, but I’m sure with all the combined talent I see at hack days and BarCamps and so on, we can do something far more meaningful than Google Maps hacks and designing delightful user experiences in order to sell more blue jeans or whatever the current equivalent of blue jeans is (smartphone apps?).

The sort of projects I try to get involved in have at least seeds of the sort of meaning-making I care about.

Take something like Open Plaques, where there are plenty of people who spend their weekends travelling the towns and cities in this country finding blue memorial plaques, photographing them and publishing those photos with a CC license and listing them in a collaborative database. No, you don’t get badges. You don’t get stickers and we don’t pop up a goofy icon on your Facebook wall when you’ve done twenty of them. But you do get the satisfaction of joining with a community of people who are directed towards a shared meaningful goal. You can take away this lovely, accurate database of free information, free data, free knowledge, whatever you want to call it. All beautifully illustrated by volunteers. No gamification or fancy user experience design will replicate the feeling of being part of a welcoming community who are driven by the desire to build something useful and meaningful without a profit motive.

The same is true with things like Wikipedia and Wikimedia Commons. Ten, fifteen years ago, if you were carrying around a camera in your backpack, it was probably to take tourist snaps or drunken photos on hen nights. Today, you are carrying around a device which lets you document the world publicly and collaboratively. A while back I heard Jimmy Wales discussing what makes Wikipedia work and he said he rejected the term ‘crowdsourcing’ because the people who write Wikipedia aren’t a ‘crowd’ of people whose role is to be a source of material for Wikipedia: they are all individual people with families and friends and aspirations and ideas, and writing for Wikipedia was a part of that. As Wales put it: they aren’t a crowd, they are just lots of really sweet people.

What could potentially lead us into more meaning-making rather than experience-seeking is the cognitive surplus that Clay Shirky refers to. The possibilities present in getting people to stop watching TV and to start doing something meaningful are far more exciting to me than any amount of gamification or user experience masturbation, but I suspect that’s because I’m not a designer. I can see how designers would get very excited about gamification because it means they get to design radically new stuff. They get to crack open the workplace, rip out horrible management systems and replace them with video games. Again, not interested. The majority of things which they think need to be gamified either shouldn’t be, because they would lose something important in the process, or they are so dumb to start with that they need to be destroyed, not gamified. The answer to stupid management shit at big companies isn’t to turn it into a game, it’s to stop it altogether and replace the management structure with something significantly less pathological.

Similarly, I listen to all these people talking about social media. Initially it sounded pretty interesting: there was this democratic process waiting in the wings that was going to swoop in and make the world more transparent and democratic and give us the odd free handjob too. Now, five years down the line and all we seem to be talking about is brands and how they can leverage social media and all that. Not at all interested. I couldn’t give a shit what the Internet is going to do to L’Oreal or Snickers or Sony or Kleenex or The Gap. They aren’t people. They don’t seek meaning, they seek to sell more blue jeans or whatever. I give far more of a shit what the Internet is doing for the gay kid in Iran or the geeky kid in rural Nebraska or a homeless guy blogging from the local library than what it is doing for some advertising agency douchebag in Madison Avenue.

One important tool in the box of meaning-making is consensual decision making and collaboration. There’s a reason it has been difficult for projects like Ubuntu to improve the user experience of Linux. There’s a reason why editing Wikipedia requires you to know a rather strange wiki syntax (and a whole load of strange social conventions and policies - you know, when you post something and someone reverts it with the message “WP:V WP:NPOV WP:N WP:SPS!”, that’s a sort of magic code for “you don’t understand Wikipedia yet!” See WP:WTF…). The reason is that those things, however sucky they are, are the result of communities coming together and building consensus through collaboration. The result may be suboptimal, but that’s just the way it is.

Without any gamification, there are thousands of people across the world who have stepped up to do something that has some meaning: build an operating system that they can give away for free. Write an encyclopedia they can give away for free. All the gamification and fancy user experience design in the world won’t find you people who are willing to take up a second job’s worth of work to get involved in meaningful community projects. On Wikipedia, I see people who stay up for hours and hours reverting vandalism and helping complete strangers with no thought of remuneration.

It may seem corny, and it’s certainly not nearly as big of an ethical commitment as the sort Kierkegaard envisioned, but this kind of commitment is something I think we should strive towards doing, and helping others to do. And I think it is completely at odds with gamification, which seeks to basically turn us all into cogs in some kind of bizarre Skinner-style experiment. We hit the button not because we are getting something meaningful out of it, but because we get the occasional brain tickle of a badge or get to climb up the leaderboard or we get seventeen ‘likes’ or RTs or whatever. Gamification seems to be about turning these sometimes useful participation techniques into an end in themselves.

Plenty of the things which make meaning-making projects great are things any good user experience designer would immediately pick up and grumble about and want to design away. Again, contributing to the Linux kernel is hard work. Wikipedia has that weird-ass syntax and all those wacky policy abbreviations. Said UX designer will really moan about these and come up with elaborate schemes to get rid of them. And said communities of meaning will listen politely. And carry on regardless. Grandma will still have a difficult time editing Wikipedia.

When I listen to user experience designers, I can definitely sympathise with what they are trying to do: the world is broken in some fundamental ways, and it is certainly a good thing there are people out there trying to fix that. But some of them go way too far and think that something like “delight” or that “eyes lighting up” moment is the most important thing. If that is all technology is about, we could do it a lot more easily by just hooking people up to some kind of dopamine machine: give us all our very own Nozickian experience machine and let us live the rest of our lives tripped out on pleasure drugs. I read an article a while back that reduced business management to basically working out how to give employees dopamine hits. Never mind their desire for self-actualization, never mind doing something meaningful. Never mind that the vast majority of people opt for reality, warts and all, over Nozick’s experience machine—the real world has meaning.

The failure of meaning-making communities to value user experience will seem pretty bloody annoying, if only to designers. There are downsides to this. It sucks that grandma can’t edit Wikipedia. It sucks that Linux still has a learning curve. Meaning-making requires commitment. It can be hard work. It won’t be a super-duper, beautiful, delightful user experience. It’ll have rough edges. But that’s real life.

A meaningful life is not a beautiful user experience. A meaningful life is lived by persons, not users. But the positive side of that is that these are engaged, meaning-seeking, real human beings, rather than users seeking delightful experiences.

That’s the choice we need to make: are technologists and designers here to enable people to do meaningful things in their lives in community with their fellow human beings or are they here as an elaborate dopamine delivery system, basically drug dealers for users? If it is the latter, I’m really not interested. We should embrace the former: because although it is rough and ready, there’s something much more noble about helping our fellow humans do something meaningful than simply seeing them as characters in a video game.


This post is now on Hacker News, and Kevin Marks has written it up on the Tummelvision blog.

  1. This is one thing I disagree with Kierkegaard very strongly on. But not for any high-falutin’ existentialist reasons. I just don’t believe in God, and more importantly, I don’t believe in the possibility of teleological suspension of the ethical, which makes the step to the religious stage of existence rather harder! I’m not even sure I’m in the ethical. It could all be a trick of my mind, to make me feel like I’m some kind of super-refined aesthete. Or it could be rank hypocrisy. But one important thing to note here is that the aesthetic, ethical and religious stages or spheres of existence, for Kierkegaard, are internal states. The analogies he uses don’t necessarily map onto the spheres. So, you don’t have to be the dandy-about-town, seducing women and checking into Foursquare afterwards to be in the aesthetic. If you are married, that doesn’t mean you are in the ethical stage. Nor does being overtly religious or, rather, pious, mean you are in the religious stage. Indeed, the whole point of Kierkegaard’s final writings, translated into English as the Attack Upon Christendom is that Danish Lutheranism was outwardly religious but not inwardly in a true sense.


Infographics are porn without the happy ending

Consider this image:

It’s taken from this article on Mashable about one year of the iPad.

This is what curl -I has to say about it:

The important line from that, for those who don’t really do the Unix command line, is “Content-Length”. This is how large the file is in bytes. 200,939 bytes.

Now, to show the complete superfluousness of infographics, I have expressed the same information in another format, namely plain text (Internet MIME type: text/plain) encoded in ASCII.

Here it is:

The iPad has been out a year.

Analysts thought it would sell 3.3 million units. It sold 14.8 million.

Oppenheimer & Co. predicts tablet shipments will grow from 15.1 million in 2011 to 115 million in 2014.

Apple has 90% of the market share.

Five simple sentences that almost anybody could understand. The wc utility on my computer tells me it is 245 bytes.

These 245 bytes of English text/plain transmit exactly the same amount of information as the 200,939 byte JPEG image, and do so without making me want to kick someone in the dick.
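If you want to reproduce that comparison without leaving the browser (or Node), a HEAD request gives you the same Content-Length header that curl -I shows. A quick sketch; the image URL here is a placeholder, not the actual Mashable graphic:

```javascript
// Compare the size of an infographic with its plain-text equivalent.
const text = [
  "The iPad has been out a year.",
  "Analysts thought it would sell 3.3 million units. It sold 14.8 million.",
  "Oppenheimer & Co. predicts tablet shipments will grow from 15.1 million in 2011 to 115 million in 2014.",
  "Apple has 90% of the market share."
].join("\n");

const textBytes = new TextEncoder().encode(text).length;

fetch("https://example.com/ipad-infographic.jpg", { method: "HEAD" }).then(res => {
  const imageBytes = Number(res.headers.get("content-length"));
  console.log(`image: ${imageBytes} bytes, text: ${textBytes} bytes`);
  console.log(`the image is roughly ${Math.round(imageBytes / textBytes)} times bigger`);
});
```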

If you wanted to catalogue the shit-eating complacency and pretentiousness of Web 2.0, infographics would be right up there with the damn TED conference and people who put “rockstar” on their business card.

Did someone really sit down one day and think “you know, unless we have the market share of the iPad illustrated as a pie chart shaped as an apple, people will think this statistic is too dry”? The story of the iPad is an interesting one: much, much more interesting than can be displayed in three factoids hastily put together in a crappy infographic. You don’t need an infographic to tell the story of a computer that is the size and form of a magazine. You need a writer.

Everyone keeps telling me that infographics are fine, and that I’m just getting stuck in Sturgeon’s Law. I keep hearing infographics designers turn up at design events talking about the awesomeness of infographics. But in my day to day life, I can’t remember ever seeing a good infographic. That is, I can’t remember ever seeing an infographic that made it worth the page taking even half a second longer to load.

Unlike words, infographics are unreadable on small screen devices. Infographics make information less accessible for blind people and others with visual impairments. Christ, I have near-perfect 20-20 vision and I struggle to read some of the goddamn too-hipster-by-half typefaces even the better infographic designers use. If you make an infographic, you are basically saying fuck you to blind people, fuck you to the Googlebot and often fuck you to people with colour-blindness. And you are definitely saying fuck you to people on slow connections. If you are paying £4 a megabyte to get data in Paris (yeah, I hate you too, Orange), putting an infographic where text could do the job isn’t just a giant fuck you but a waste of actual money. And by the time you notice, you can’t complain. If you are out in India and your only connection to the WWW is a phone we Westerners called shitty and threw away about three years ago, the infographic is completely inaccessible to you.

And if you are trying to help people understand information–a wholly laudable goal!–cutting off the poor, the blind and those on shitty connections is a bad way of doing it. The first step to understanding information is making it available. And text/plain or text/html is a much better way of doing it than wrapping it in a poncy graphic. At the very least, if you all still think infographics are still worth doing, bloody well work out how to make them accessible and provide text fallbacks. Or stop making infographics and work out how to produce mixed graphic/text layouts. Just because you’ve worked out an awesome ripple effect for that pie chart doesn’t exempt you from accessible design principles like progressive enhancement.

Take this infographic. If you were actually trying to get information across, you could turn most of it into a web page, and then put the graph at the bottom as an SVG. There are plenty of ways you could make it look nice. An ‘infographic’ pretty much has to be an image–in this case, a JPEG (again, seriously? Did nobody teach infographics designers that line art and text is better as a PNG than a JPEG?). But if people could do away with the whole silly infographics fetish and just produce information, that information could sit quite happily in web pages, with the occasional image when necessary. Those web pages are a lot more accessible, have much smaller file sizes and have the ability to include the sort of metadata around them to make them indexable by Google, consumable by blind people and much more.

Now, I’ll grant you one thing. Some things can only be displayed graphically. Here’s an example:

If every infographic were to disappear and be replaced by a picture of a kitten (or better yet, a picture of one of my kittens), the world would probably be a better place. And it probably wouldn’t be any less informed. It’s like the lottery: you don’t actually improve your odds of winning much by buying a ticket. Similarly, you are about as likely to learn useful information about the iPad (or whatever the topic, really) by looking at a picture of my kitten as you are by looking at an infographic. The point of most infographics isn’t to actually convey information: they only convey how much cooler than you the designer is.

Infographics are what happens when Nathan Barley thinks he can do statistics. Let’s be honest: the only audience for them is other self-facilitating new media nodes. Please, make it stop. There is no excuse.


Who on earth thought that putting SecureCode / Verified by Visa on the web page used to top up a mobile phone account was a good idea? It is a usability nightmare when you are using it on a desktop PC, let alone on a mobile device like the iPod touch or iPad.