tommorris.org

Discussing software, the web, politics, sexuality and the unending supply of human stupidity.


technology


No, I'm not going to download your bullshit app

How we used to read the news, back in the era of the Web:

  1. Go to newspaper website.
  2. Click on story.
  3. Read.

How we read the news in the era of fucking stupid pointless iPhone apps:

  1. Go to website.
  2. Be told you aren’t allowed to read the website.
  3. Be redirected to an App Store.
  4. Download the app. (This may involve typing in a password. Which may involve shuffling over to your password manager app to find your password.)
  5. Wait while a multi-megabyte file downloads over your temperamental, expensive 3G connection.
  6. Open the app up.
  7. Familiarise yourself with an interface that has cryptic, weird touch affordances that aren’t actually revealed to the user and behave ever so slightly differently from every other similar app.
  8. Struggle as the badly-implemented statefulness gives you a spinning loading wheel (on iOS) or flashing progress bar (on Android) because you had the audacity to use your mobile device on a slow or unreliable connection.
  9. Attempt to find the story you wanted to read using a layout and information architecture that’s completely different from the layout and information architecture of the website that you’ve grown familiar with, because some arsehole decided that the process of reading the electronic equivalent of a newspaper needs to be “disrupted” because he’s been reading far too much Seth Godin or some other bullshit.
  10. Realise that the app shows you different things depending on whether it’s in landscape or portrait mode. Now you can look like an utter nob on the Tube rotating your iPad around so that you can zoom further into the Page 3 stunna’s tits.
  11. Not be able to share the story with your friends because it’s not a page on the web with a Uniform Resource Identifier. Because why do you need universal addressability when you’ve got shiny spinny touchy magical things to rub your sweaty greasy fingers all over?
  12. Take time to download updated binary files the next time the application is updated in the App Store, which will provide you with “new functionality”, even though there is no fucking functionality you actually want other than reading the fucking news.
  13. If you are on Android, be sure to install some anti-adware software in case the app comes with some delightful bit of creepy privacy-intruding out-of-app advertising.
  14. Give up, go to newsagent, buy paper edition, throw smartphone off a fucking cliff and start a letterbomb campaign against all the idiots who thought that turning newspapers into “apps” was a good idea.

In the “web vs. apps” war, I think you can infer which side I’m on. I wouldn’t download a BBC app or an NPR app for my computer. Why would I want one on my phone? Do I buy a separate radio to listen to different stations? No. The functionality is the same, the only thing that differs is the content. Apps ought to provide some actual functionality, not just blobs of content wrapped up in binary files.

Now available in a French version


The release of the iPhone 5 seems to have set off more Internet debate about smartphones. I’m completely uninterested.

I have a smartphone: a Samsung Galaxy S2. It makes phone calls. It has some neat applications. The Gmail app on Android is superb. But beyond that… it’s a phone.

People debate smartphones as fetish objects. On Facebook, Robert Scoble said that just holding the new iPhone was enough to sell him on it. There’s nothing wrong with a few fetish objects or things that look nice. I don’t spend a lot of time just holding my phone. I spend time either using my phone or having my phone in my pocket. I just can’t understand spending hours waxing rhapsodic or worrying about this stuff. I mean, someone gave me a very nice bottle of eau de toilette a while back, and I enjoy both the design of the bottle and wearing it, but not to the point where I’m going to go and argue about the bottle designs of different brands of fragrance and express disappointment if a particular perfume manufacturer fails to innovate sufficiently.

People are overthinking this shit. They are phones. Unlike the Windows v. Mac v. Linux fights of yesteryear, it isn’t like anyone actually uses these things for anything important.


Tech journalists: take my tech test

It’s a recurring theme in the argument about journalism: that journalists don’t know what they are talking about. With the magical powers of science, I want to see whether that’s actually true. Below is a series of questions I have come up with to test it. And by science, I mean a hastily constructed pop quiz.

Here’s the deal. If you are a technology journalist, please answer truthfully. I know all journalists are truthful and honest—I’ve been watching the Leveson Inquiry. See how many you can get right.

If you get less right than you think you should, consider whether you should be writing about technology.

Above is a sample of some code written in a programming language that was introduced in the last decade.

Please identify the name of the programming language and the broader family of programming languages that it is in.

If you can, please identify the name of the creator of the language.

Programming languages broadly fall into two camps: dynamically typed and statically typed. Please identify whether this is a dynamic or a static language.

Above is a sample of some code written in a different programming language.

Please identify the name of the programming language.

If you wished to produce a game to sell on the App Store for iPhones/iPads, which of the two languages you have identified above would be more suitable to build such a game in given the constraints placed on developers in the iOS ecosystem?

2001:0DB8:AC10:FE01:0000:0000:0000:0000

Above is a string that identifies something. Please identify what it is for.

WEP, WPA and WPA2 are types of what?

FAT32, ext4 and HFS+ are types of what?

You have probably heard of NoSQL. Please choose the odd one out: Redis, Riak, CouchDB, MariaDB, MongoDB, eXist.

I was going to ask a complicated question, but it’s now past 2am and the question I was going to write involved me reading assembly code, and I took the executive decision that I couldn’t be arsed.

Anyway, all of the questions above are on topics that have been covered or mentioned at least once on either TechCrunch or ZDNet. Even if you have no plans to write about those topics, they are things you presumably need to at least vaguely understand in order to read the writing of other tech journalists.

Before you say “ah, but writing about technology doesn’t mean you have to be a geek”, let me ask you this: would you read what a music journalist has to say if they can’t identify the bands that are discussed in their own newspapers and on the websites they write for? What about a motoring journalist who had no idea what an axle was? A politics journalist who couldn’t tell you what a party whip does? A journalist covering the financial markets who has no idea who sets the interest rates? A wine reviewer who doesn’t know whether chianti is red or white? A science journalist who doesn’t understand the difference between an element and a compound? A religion writer who was a bit shaky on the difference between Protestantism and Catholicism? If not those, why do we accept journalists writing about technology who couldn’t tell you what a compiler is?


EasyTether... is actually easy, and works

I’ve finally got USB tethering working between Mac and Android. I followed these instructions from Ask Different (the Stack Exchange site for Apple and Mac-related questions). You have to install a piece of software called EasyTether on your phone, and then carefully follow the instructions in the app, which include installing drivers on your computer. It takes about 10 minutes.

But if you do that… it actually works. I’ve set up 3G connections before on Linux, for instance, which have required me to write AT strings and so on. (Which, you know, why? It’s 2012, for fuck’s sake.)
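For anyone who never had the pleasure, this is roughly the sort of thing a 3G dongle on Linux used to demand: a sketch of a wvdial.conf, with the modem device, APN and credentials made up purely for illustration.

    ; Everything here is illustrative - substitute your own device and APN.
    [Dialer Defaults]
    Modem = /dev/ttyUSB0
    Baud = 460800
    Init1 = ATZ
    Init2 = AT+CGDCONT=1,"IP","internet"
    Phone = *99#
    Username = dummy
    Password = dummy
    Stupid Mode = 1

Compared with that, ten minutes of installing an app and a driver is positively civilised.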

So why bother with USB tethering, given that any decent Android phone has a portable hotspot mode which basically lets your phone rebroadcast its 3G signal as a wifi hotspot?

Two reasons come to mind.

Firstly, battery usage. You don’t need the wifi running on either your phone or your computer. Less battery usage is obviously good.

But the far bigger reason is that you actually get a better connection. One thing I’ve noticed with both MiFi dongles and with the Android portable hotspot is that when you dip in and out of a mobile signal area, it’s very slow to reconnect. You spend a lot of time in TCP/IP-free limbo. This never used to be the case with GPRS: you’d get very quick reconnection, obviously at an unacceptably low speed.1

I’m writing this on the train home, and I’m getting service in areas that I wouldn’t when using portable hotspot. Portable internet that isn’t infuriating is good. That I have about an extra hour of battery life on my laptop is a nice bonus.

  1. I’d like to reiterate a fundamental point: speed is one of the least important aspects of broadband connections. Reliability, latency, usage caps and so on are far more important than speed, depending on the application you are using. For pottering about on the web, downloading a few MP3s, what’s the damn point of having super-duper-ultra fast broadband? You can give me fifty megs a second, but if I can’t afford to use more than a gigabyte of data a month, it’s basically a toy.


I'm not an experience-seeking user, I'm a meaning-seeking human person

After an evening of cynicism last night, reading a bloody awful article by a pompous twit, and travelling on bloody slow trains, and then logging on to Twitter and seeing a bunch of bloody fools debating things they are completely ignorant of without even a modicum of philosophical charity, I found something which restored my trust in the human race: psd’s talk at Ignite London. It combines giving naughty link-breaking, data-sunsetting corporate types a spank for misbehaviour with an admiration for I Spy books. I had I Spy books as a kid, although mine were products of the late 80s/early 90s and featured the Michelin Man, though not in nearly as intrusively corporate a way as Paul’s slides of current-day I Spy suggest. Do forgive me: I’m going to do one of those free-associative, meditative riffing sessions that you can do on blogs.

The sort of things Paul talks about underlie a lot of the things I get excited about on the web: having technology as a way for people to establish an educational, interactive relationship with the world around them, to hack the world, to hack their context, to have the web of linked data as another layer on top of the world. The ‘web of things’ idea pushes that too far in the direction of designed objects (or spimes or blogjects or whatever the current buzzword is), and the way we talk about data and datasets and APIs makes it all too tied to services provided by big organisations. There’s definitely some co-opting of hackerdom going on here that I can’t quite put my finger on, and I don’t like it. But that’s another rant.

I’ve been hearing about ‘gamification’ for a while and it irritates me a lot. Gamification gets all the design blogs a-tweeting and is a lovely refrain used at TED and so on, but to me it all looks like “the aesthetic stage” from Kierkegaard applied to technology. That is, turning things into games and novelties in order to mask the underlying valuelessness of these tasks. Where does that get you? A manic switching between refrains. To use a technological analogy, this week it is Flickr, next week it is TwitPic, the week after it is Instagram. No commitment, just frantic switching based on fad and fashion. Our lives are then driven by the desire to avoid boredom. But one eventually runs out of novelties. The fight against boredom becomes harder and harder and harder until eventually you have to give up the fight. There’s a personal cost to living life as one long game of boredom-avoidance, but there’s also a social cost. You live life only for yourself, to avoid your boredom, and do nothing for anybody else. Technology becomes just a way for you to get pleasure rather than a way for you to contribute to something bigger than yourself.

In Kierkegaard’s Either/Or, the alternative to this aesthetic life was typified by marriage. You can’t gamify marriage, right? You commit yourself for life. You don’t get a Foursquare badge if you remember your anniversary. The alternative to aestheticism and boredom is an ethical commitment. (And, for Kierkegaard anyway, ultimately a religious commitment.1) And I think the same holds true for the web: you can gamify everything, make everything into Foursquare. Or you can do something deeper and build intentional, self-directed communities of people who want to try and do something meaningful. Gamification means you get a goofy badge on your Foursquare profile when you check into however many karaoke bars. A script fires off on a server somewhere and a bit changes in a database, you get a quick dopamine hit because an ironic badge appears on your iPhone. Congratulations, your life is now complete. There’s got to be more to life and technology than this. If I had to come up with a name for this alternative to gamification that I’m grasping for, it would be something like ‘meaning-making’.

Gamification turns everything into a novelty and a game (duh). Meaning-making turns the trivial into something you make a commitment to for the long haul; it turns the things we do on the web into a much more significant and meaningful part of our lives.

In as much as technology can help promote this kind of meaning-making, that’s the sort of technology I’m interested in. If I’m on my deathbed, will I regret the fact that I haven’t collected all the badges on Foursquare? Will I pine for more exciting and delightful user experiences? That’s the ultimate test. You want a design challenge? Design things people won’t regret doing when they are on their deathbed and design things people will wish they did more of when they are on their deathbed. Design things that one’s relatives will look back on in fifty years and express sympathy for. Again, when you are dead, will your kids give a shit about your Foursquare badges?

A long time ago, I read a story online about a young guy who got killed in a road accident. I think he was on a bike and got hit by a car while riding home from work. He was a PHP programmer and ran an open source CMS project. There was a huge outpouring of grief and support from people who knew the guy online, from other people who contributed to the project. A few people clubbed together to help pay for two of the developers to fly up to Canada to visit his family and attend the funeral. They met the guy’s mother and she asked them to explain what it is that he was involved in. They explained, and in the report they e-mailed back to the project, they said that the family eventually understood what was going on, and it brought them great comfort to know that the project that their son had started had produced something that was being used by individuals and businesses all over the world. This is open source: it wasn’t paid for. He was working at a local garage, hacking on this project in between pumping petrol. But there was meaning there. A community of people who got together and collaborated on something. It wasn’t perfect, but it was meaningful for him and for other people online. That’s pretty awesome. And it’s far more interesting to me to enable more people to do things like this than it is to, I dunno, gamify brands with social media or whatever.

This is why I’m sceptical about gamification: there’s enough fucking pointless distractions in life already, we don’t need more of them, however beautiful the user experiences are. But what we do need more of is people making a commitment to doing something meaningful and building a shared pool of common value.

And while we may not be able to build technologies that are equivalent in terms of meaning-making to, say, the importance of family or friendship or some important political commitment like fighting for justice, we should at least bloody well try. Technology may not give us another Nelson Mandela, but I’m sure with all the combined talent I see at hack days and BarCamps and so on, we can do something far more meaningful than Google Maps hacks and designing delightful user experiences in order to sell more blue jeans or whatever the current equivalent of blue jeans is (smartphone apps?).

The sort of projects I try to get involved in have at least seeds of the sort of meaning-making I care about.

Take something like Open Plaques, where there are plenty of people who spend their weekends travelling the towns and cities in this country finding blue memorial plaques, photographing them and publishing those photos with a CC license and listing them in a collaborative database. No, you don’t get badges. You don’t get stickers and we don’t pop up a goofy icon on your Facebook wall when you’ve done twenty of them. But you do get the satisfaction of joining with a community of people who are directed towards a shared meaningful goal. You can take away this lovely, accurate database of free information, free data, free knowledge, whatever you want to call it. All beautifully illustrated by volunteers. No gamification or fancy user experience design will replicate the feeling of being part of a welcoming community who are driven by the desire to build something useful and meaningful without a profit motive.

The same is true with things like Wikipedia and Wikimedia Commons. Ten, fifteen years ago, if you were carrying around a camera in your backpack, it was probably to take tourist snaps or drunken photos on hen nights. Today, you are carrying around a device which lets you document the world publicly and collaboratively. A while back I heard Jimmy Wales discussing what makes Wikipedia work and he said he rejected the term ‘crowdsourcing’ because the people who write Wikipedia aren’t a ‘crowd’ of people whose role is to be a source of material for Wikipedia: they are all individual people with families and friends and aspirations and ideas, and writing for Wikipedia was a part of that. As Wales put it: they aren’t a crowd, they are just lots of really sweet people.

What could potentially lead us into more meaning-making rather than experience-seeking is the cognitive surplus that Clay Shirky refers to. The possibilities present in getting people to stop watching TV and to start doing something meaningful are far more exciting to me than any amount of gamification or user experience masturbation, but I suspect that’s because I’m not a designer. I can see how designers would get very excited about gamification because it means they get to design radically new stuff. They get to crack open the workplace, rip out horrible management systems and replace them with video games. Again, not interested. The majority of things which they think need to be gamified either shouldn’t be, because they would lose something important in the process, or they are so dumb to start with that they need to be destroyed, not gamified. The answer to stupid management shit at big companies isn’t to turn it into a game, it’s to stop it altogether and replace the management structure with something significantly less pathological.

Similarly, I listen to all these people talking about social media. Initially it sounded pretty interesting: there was this democratic process waiting in the wings that was going to swoop in and make the world more transparent and democratic and give us the odd free handjob too. Now, five years down the line, all we seem to be talking about is brands and how they can leverage social media and all that. Not at all interested. I couldn’t give a shit what the Internet is going to do to L’Oreal or Snickers or Sony or Kleenex or The Gap. They aren’t people. They don’t seek meaning, they seek to sell more blue jeans or whatever. I give far more of a shit what the Internet is doing for the gay kid in Iran or the geeky kid in rural Nebraska or a homeless guy blogging from the local library than what it is doing for some advertising agency douchebag on Madison Avenue.

One important tool in the box of meaning-making is consensual decision making and collaboration. There’s a reason it has been difficult for projects like Ubuntu to improve the user experience of Linux. There’s a reason why editing Wikipedia requires you to know a rather strange wiki syntax (and a whole load of strange social conventions and policies - you know, when you post something and someone reverts it with the message “WP:V WP:NPOV WP:N WP:SPS!”, that’s a sort of magic code for “you don’t understand Wikipedia yet!” See WP:WTF…). The reason is that those things, however sucky they are, are the result of communities coming together and building consensus through collaboration. The result may be suboptimal, but that’s just the way it is.

Without any gamification, there are thousands of people across the world who have stepped up to do something that has some meaning: build an operating system that they can give away for free. Write an encyclopedia they can give away for free. All the gamification and fancy user experience design in the world won’t find you people who are willing to take up a second job’s worth of work to get involved in meaningful community projects. On Wikipedia, I see people who stay up for hours and hours reverting vandalism and helping complete strangers with no thought of remuneration.

It may seem corny, and it’s certainly not nearly as big of an ethical commitment as the sort Kierkegaard envisioned, but this kind of commitment is something I think we should strive towards doing, and helping others to do. And I think it is completely at odds with gamification, which seeks to basically turn us all into cogs in some kind of bizarre Skinner-style experiment. We hit the button not because we are getting something meaningful out of it, but because we get the occasional brain tickle of a badge or get to climb up the leaderboard or we get seventeen ‘likes’ or RTs or whatever. Gamification seems to be about turning these sometimes useful participation techniques into an end in themselves.

Plenty of the things which make meaning-making projects great are things any good user experience designer would immediately pick up and grumble about and want to design away. Again, contributing to the Linux kernel is hard work. Wikipedia has that weird-ass syntax and all those wacky policy abbreviations. Said UX designer will really moan about these and come up with elaborate schemes to get rid of them. And said communities of meaning will listen politely. And carry on regardless. Grandma will still have a difficult time editing Wikipedia.

When I listen to user experience designers, I can definitely sympathise with what they are trying to do: the world is broken in some fundamental ways, and it is certainly a good thing there are people out there trying to fix that. But some of them go way too far and think that something like “delight” or that “eyes lighting up” moment is the most important thing. If that is all technology is about, we could do that a lot easier by just hooking people up to some kind of dopamine machine. Technology might as well give us all our very own Nozickian experience machine and let us live the rest of our lives tripped out on pleasure drugs. I read an article a while back that reduced business management to basically working out how to give employees dopamine hits. Never mind their desire for self-actualization, never mind doing something meaningful. Never mind that the vast majority of people opt for reality, warts and all, over Nozick’s experience machine—the real world has meaning.

The failure of meaning-making communities to value user experience will seem pretty bloody annoying, if only to designers. There are downsides to this. It sucks that grandma can’t edit Wikipedia. It sucks that Linux still has a learning curve. Meaning-making requires commitment. It can be hard work. It won’t be a super-duper, beautiful, delightful user experience. It’ll have rough edges. But that’s real life.

A meaningful life is not a beautiful user experience. A meaningful life is lived by persons, not users. But the positive side of that is that these are engaged, meaning-seeking, real human beings, rather than users seeking delightful experiences.

That’s the choice we need to make: are technologists and designers here to enable people to do meaningful things in their lives in community with their fellow human beings or are they here as an elaborate dopamine delivery system, basically drug dealers for users? If it is the latter, I’m really not interested. We should embrace the former: because although it is rough and ready, there’s something much more noble about helping our fellow humans do something meaningful than simply seeing them as characters in a video game.


This post is now on Hacker News, and Kevin Marks has written it up on the Tummelvision blog.

  1. This is one thing I disagree with Kierkegaard very strongly on. But not for any high-falutin’ existentialist reasons. I just don’t believe in God, and more importantly, I don’t believe in the possibility of teleological suspension of the ethical, which makes the step to the religious stage of existence rather harder! I’m not even sure I’m in the ethical. It could all be a trick of my mind, to make me feel like I’m some kind of super-refined aesthete. Or it could be rank hypocrisy. But one important thing to note here is that the aesthetic, ethical and religious stages or spheres of existence, for Kierkegaard, are internal states. The analogies he uses don’t necessarily map onto the spheres. So, you don’t have to be the dandy-about-town, seducing women and checking into Foursquare afterwards to be in the aesthetic. If you are married, that doesn’t mean you are in the ethical stage. Nor does being overtly religious or, rather, pious, mean you are in the religious stage. Indeed, the whole point of Kierkegaard’s final writings, translated into English as the Attack Upon Christendom is that Danish Lutheranism was outwardly religious but not inwardly in a true sense.


HOWTO: Build an HTML 5 website

Everyone is going on about how they are making “HTML 5 sites” and going on and on about how HTML 5 is giving them a hard-on or something equally exciting.

So, I’ll show you how you join this amazing club.

Open up your text editor and find some HTML file or template.

Look for the bit right at the top. It is called a DOCTYPE. It’ll look something like this:1

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML+RDFa 1.0//EN"
    "http://www.w3.org/MarkUp/DTD/xhtml-rdfa-1.dtd">

Now, delete all that and replace it with:

<!DOCTYPE html>

Save the file and push it out onto the web.

Congratulations, you are now using HTML 5.

Give yourself a big pat on the back. Listen to some cutting edge spacey techno or something. ‘Cos you are living in the future, man. Your techno-halo is so bright, I need to put on two pairs of shades indoors.

You are now officially signed up in the fight against SilverFlash and a minion in Steve Jobs’ campaign for the open web or something. (Because embrace-and-extend is so much nicer when it comes from Apple and Google than when it comes from Microsoft and Adobe.)

You can also go to your boss and justify a huge champagne and coke-fuelled party with hookers and everything because you are now fully buzzword compliant. You can get venture capitalists and TechCrunch and other people who wouldn’t know a DTD from an STD2 to give your huge, manly testicles a thorough tonguebath – sadly, only rhetorically – because you are smart and hip enough to be using HTML 5. Pow! Bam! Shazam! You are like a cross between Nathan Barley and Rambo!

Or, you know, you could actually learn what HTML 5 is. Let me give you a clue: it is quite a lot like HTML 4. That’s part of the philosophy of the damn thing: it is continuous with what you are already doing rather than a radical shift! It is that cliché: evolution not revolution. It’s like the difference between OS X Leopard and Snow Leopard.

Once you realise this important truth, you can drop the buzzwords, and just quietly educate yourself on some of the quite nifty new things you get to do on the web, get your rather excitable colleagues to calm down before they faint in pre-orgasmic excitement, and maybe try and nudge the community at large into realising that HTML 5 is a few new bits and bobs they are adding to HTML, not some hybrid of Jesus and Vannevar Bush riding down on a pterodactyl/unicorn hybrid giving out ultratight Fleshlights to anyone who slings angle-brackets so they can prepare for the giant fight between HTML 5, evil browser plugins and mobile app stores.3

You can adopt HTML 5 really quite slowly: if your site sucks now, making it “HTML 5” won’t make it not suck. Even better, don’t start with HTML 5. Start with CSS 3: the nature of CSS is that it is much easier to fiddle with a stylesheet, add a few things like media queries and so on.
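For instance, a media query is just a few lines bolted onto the stylesheet you already have. A sketch along these lines, where the 480px breakpoint and the #sidebar and #content selectors are made up for illustration rather than anything your site necessarily has:

    /* On narrow screens, drop the sidebar and let the text breathe. */
    @media screen and (max-width: 480px) {
      #sidebar { display: none; }
      #content { width: auto; float: none; }
      body { font-size: 110%; }
    }

Browsers that don’t understand the media query simply ignore the whole block, so nothing breaks for anyone else.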

Be patient and don’t rush into this. Include only technologies that improve your site and the experience of using it. Not because some fucking bullshit web design blog you found on Reddit is jabbering on about how it is the most awesomest thing ever invented since someone discovered you could have sex while eating sliced bread or some other crap like that. It’s not. It’s an evolutionary step from existing HTML on the web that gives you a few shiny new things that might make life easier.
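To show quite how unrevolutionary those shiny new things are, here is a minimal sketch of an HTML 5 page using a couple of the new structural elements; the title and text are placeholders, obviously:

    <!DOCTYPE html>
    <html>
      <head>
        <meta charset="utf-8">
        <title>My weblog</title>
      </head>
      <body>
        <article>
          <header>
            <h1>A blog post</h1>
          </header>
          <section>
            <p>Some actual content. Note the startling absence of revolution.</p>
          </section>
        </article>
      </body>
    </html>

Browsers that have never heard of article and section mostly just treat them as unknown elements you can style (older versions of Internet Explorer need a small script shim), which is exactly why you can adopt this stuff as gradually as you like.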

Now calm down. I’ve just washed my clothes and I don’t want you jizzing all over them when you discover the joys of the section element.

  1. Yours will be much more boring. It won’t have cool shit like RDFa in it because you suck.

  2. To be fair, DTDs and STDs share a scary resemblance in lots of ways. You can prevent the transmission of DTDs by adopting RELAX NG for all your XML schema validation needs.

  3. Again, the whole native vs. web thing is fucking stupid. The only reason it is happening is because people seem to think that everything needs to be an app. You know, if the thing is more like a web page, you put it on the web. If it is more like a desktop application, you put it in an app. Content? Web. Functionality? App. This also resolves all the stupid nonsense about app store approvals. Why have we reached a situation where people are putting content in an app? You know, people are downloading blobs of compiled Objective-C object code that contain satirical political cartoons. Then they are complaining when Apple ban the ‘app’. What the fuck is that all about? Put that shit on the web. Apple can do what they want to apps, but why let them tell you what you put in your content? Let them approve functionality, not content.

    There was a time many moons ago when you had to download a Windows application – actually, you had to have a Windows application sent to you on a CD-ROM – in order to order groceries from Tesco. This is the app world we live in today, and it is totally idiotic. Apps are things like Vim or Firefox or Unreal Tournament 2004 or iTunes or The Gimp or Final Cut Pro. If you wouldn’t download a Windows or Mac app to read Wired Magazine, why are you downloading a damn iOS app?

    What is so stupid about this is that while Apple and Android and whatnot train everyone up into using app stores, what’s the reaction of plenty of people in the open source community: don’t worry, the web will do it. (Or worse: we’ll make an open web app store!) But it’s bullshit. The web is a pretty damn retarded application platform. I mean, it is okay in a pinch, but I’m not betting on a decent Ajax version of Vim, Half-Life 2 or Adobe Illustrator any day soon. And why would I want to use Google Docs when I’ve got thirty years of hard work by Donald Knuth and Leslie Lamport sitting there ready to churn out absolutely awesome pixel-perfect print documents from my damn command line? Plain text, Vim and Git (or Emacs and Mercurial or some other combination thereof) will beat the socks off whatever cloud vapour is out there.

    You do actually sometimes need native code on actual hardware, not seventeen layers of JavaScript indirection bouncing back and forth between a server that doesn’t respond half the time and a browser that’s filled with security holes and memory leaks. Why do I want this when I have a command line here that does the job quicker and easier and works when I’m in a fucking train tunnel? And don’t even think about saying “Google Gears”.


LazyWeb idea: Read My Docs

Here’s an idea that came to me after reading about all the different teams and infrastructure on the Ubuntu Wiki:

Read My Docs

A web service to connect people willing to proof-read and provide constructive feedback on open source software documentation. Projects could post announcements of newly written or radically revised documentation including manuals, tutorials, guides, man pages, READMEs etc. Each announcement would have a comment page, and links back to issue trackers and/or version control systems so that contributors could submit back either comments, issues straight into the issue tracker or even diffs/git pull requests/hg patch queue requests etc. (this is made easier by the use of distributed version control systems).

Each announcement would be marked up to specify the intended audience of the documentation: end user, developer, expert, absolute newbie etc. In addition, if announcements were specific to particular language or technology communities, these could be noted using tags so that one could drill down and find those things.
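To make the shape of that concrete, an announcement record might carry fields along these lines. Every name, URL and value below is invented purely as an illustration of the idea, not a spec:

    {
      "project": "ExampleCMS",
      "title": "Rewritten installation guide",
      "doc_url": "http://docs.example.org/examplecms/install/",
      "audience": "absolute newbie",
      "tags": ["ruby", "rails", "deployment"],
      "issue_tracker": "http://bugs.example.org/examplecms",
      "repository": "git://git.example.org/examplecms.git"
    }

Whether the tags come from the project itself or from the package repository that announced it is a detail; the point is that they are machine-readable enough to filter on.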

Language communities where packages are released through a central repository such as RubyGems, PyPI, Haskell’s Hackage etc. could have newly released packages automatically announced to the site so that developers with a specific interest could keep an eye on the quality of documentation and improve it.

A community could perhaps build around such a site and they could build up good practices that could be documented, leading to a positive spiral of better and better documentation. Some kind of intelligent game mechanic could potentially be applied so that instead of people rushing around cities checking into venues on Foursquare, they would get goofy badges and points and mayorships and leaderboards and so on for doing something useful like writing better documentation.

The end result? A small army of documentation fairies who would improve open source documentation across a wide range of projects, languages, communities and Linux distributions without having to join any of those communities. And hopefully a fun way for people who aren’t programmers to ease the documentation burden from the people who’d much rather be writing code.


Handwriting: still useful

I’ve never understood people who deem skills obsolete long before they actually are. A while back, someone started a wiki listing obsolete skills. Some skills truly are obsolete. But most of the time, just as with technology, when someone says a particular skill is obsolete, there’s usually a pretty good chance that you will still need to do it. Programming in FORTH may be less commercially useful these days than programming in Java or Python, but the programmer who knows FORTH has a valuable skill even if he doesn’t find himself using it very often. (I’ve got an RPN calculator app installed on my iPod touch…)

Being able to use a typewriter is one of those things. Everyone tells me that typewriters became obsolete in the 1970s. Strange. I was born in the mid-80s and still remember that in 1995, I was using a manual typewriter for a school report even though we had a PC. And the skills I learned using a manual typewriter – namely, the ability to touch-type – are pretty useful now I’m chucking around Ruby code in Vim or academic citations in LyX.1

I’m used to people telling me that typewriters are obsolete technology, and skills like being able to change dot-matrix printer ribbons or operating a rotary phone are now obsolete. But I never expected handwriting would ever end up in the same category.

Most schools still include conventional handwriting instruction in their primary-grade curriculum, but today that amounts to just over an hour a week, according to Zaner-Bloser Inc., one of the nation’s largest handwriting-curriculum publishers. Even at institutions that make it a strong priority, such as the private Brearley School in New York City, “some parents say, ‘I can’t believe you are wasting a minute on this,’” says Linda Boldt, the school’s head of learning skills.

Parents are finding it strange that schools are spending time teaching children how to actually write? What the fuck is that all about?

I’ve got an iPad. I’ve had a Palm Pilot in the past. I’ve got a laptop. And I still write by hand a hell of a lot. Why? Because it is fast. And it is especially quick if you want to jot down something other than linear text. If you want to draw a diagram, doing it in a notepad is a hell of a lot less painful than doing it on almost any computing device I’ve ever used.

This is partly my objection to most smartphones: when I use a laptop or desktop computer, my fingers can keep up with my brain. I can just about do similarly when I’m writing longhand. If I’m using an iPhone/iPod touch or a Blackberry or whatever, it takes a bloody long time to take notes. I guess if you can’t touch type on a full-size keyboard, being forced to use a smudgy little iPhone keyboard isn’t that big of a step down.

So, imagine, we don’t teach kids how to write by hand. How do they answer exams in school and in university? For many subjects, undergraduate and postgraduate, you still have to do pen-and-paper exams.

What if you want to become a reporter? If you are in Afghanistan reporting on the war, you may not have the chance to take an iPad with you.

What if you simply want to leave a note on the fridge reminding your family members or flatmates to buy some milk when they next go to the shops? Oh yeah, I’m just going to log in to my computer and tap it out in a word processor and send it to the damn laser printer hoping that some idiot hasn’t left it without paper or toner? I like computers more than most, but even I’d just grab a post-it note in that situation. I like pen and paper for the same reason I prefer using Vim over Microsoft Office: it is simple, powerful and failure-resistant.

It is the old tale of the reluctant geeks again: while everyone else breathlessly adopts technology very quickly, those of us who know the technologies in unhealthy amounts of detail are a fair bit more conservative about changing systems that work. And, well, handwriting actually works pretty well. The pundits have gotten equally breathless about e-books, but I expect I’ll still be renewing my library card to borrow physical books for the next decade or three.

As with e-books, there may come a time when technology has far outpaced the need for writing by hand. 2010 is not that time. The fact that we have to resort to neuroscience to justify teaching basic writing skills is absolutely pathetic.2

  1. In fact, compared to the bad joke that is most inkjet printers, I’d rather be using a manual typewriter.

  2. Ray Tallis’ article Neurotrash feels an appropriate link here.


Walt Mossberg travels to Paris with iPad instead of laptop.

Bully for Walt Mossberg. The tech media really is getting rather ego-centric...


Techno-bandages don't work

I just read an article in The Guardian about "the future of reading" and it cited all the usual sources: Nicholas Carr and lots of neuroscience studies and blah blah. That is all fine and good, but there is something that really got up my nose about the article: the idea that there is some kind of technological bandage for people not reading as much or for as long as they "ought" to be.

The idea that the ADHD crowd with their seven hundred browser tabs open will suddenly all go out and buy Kindles or iPads and the problem will just be solved like that is a joke. I have both an iPad (yes, I gave in) and an e-ink based reader (like a Kindle but without Amazon's DRM layer). I still have a stack of unread books on my shelves and desk that would probably reach up to at least my thigh, and a few thousand pages of downloaded papers and articles, manuals, documentation and miscellanea across Papers, Stanza, GoodReader, iBooks and on the micro SD card for my e-reader.

New technology doesn't add more hours to the day. Nor does it change the working patterns of the world around us. It doesn't make a 300 page tome any less lengthy (although it does let me store more books than I could ever read on a card the size of one of my smaller fingernails). It doesn't make the email I get any less idiotic. It doesn't make Twitter or Reddit or Google Reader any less seductive. Oh yeah, all three of those are on the damn iPad, along with a stash of games and music and videos. What exactly is going to entice me to tap GoodReader rather than reading Twitter or playing Sniper Strike again?

And these technologies aren't going to just magically turn people into closer and more careful critical readers. I can see it on Oprah: before getting my Kindle, I was an unliterary fool who just about managed to get through an article in the gossip rags about the latest Big Brother winner, and now I'm the next D. F. Strauss. No, we are kidding ourselves.

Neither the Internet nor any particular technology will save our souls or our attention spans. Nor will they finance the news industry or save the music industry from impending failure. Anyone who is betting on the iPad solving all of society's woes (or indeed saving their own behinds) is just deluding themselves. Technobandages may exist for some things, but there isn't a special technobandage that will fix any perceived shortfall in literary reading.

And, you know what, it isn't all bad news. When the U.S. National Endowment for the Arts released the 'Reading At Risk' report back in 2004, it was widely reported. I was going to cite it here but - you know what? It is out of date! I didn't see anywhere near as much press reaction when in 2009 the NEA released the latest version of the report, 'Reading on the Rise' which reported that the number of adults in the U.S. who read literature has increased by seven percent between 2002 and 2008 (it is climbing back up towards what it was in 1992), and the largest increase by age group has been among 18-24 year olds - you know, the group that always gets branded rather negatively as the 'Facebook generation', despite the fact that they were probably up in their bedroom reading large quantities of written material online while their parents were downstairs in the living room watching some crap TV programme. In fact, although not statistically significant, the only group to have dropped in the survey are the 45-54 age group.

Interestingly, the report also notes that "most online readers also report reading books". The Internet isn't crowding out book reading: "For adults who read online articles, essays, or blogs, the book-reading rate is 77 percent". This is no surprise: television has fucked up far more people's lives than the Internet ever could. Worrying about the Internet rather than television is a little bit like worrying about the trace amounts of carcinogens in your coffee when you smoke forty a day.

Was this rise in American literary reading all because of the Kindle or the iPad? Well, it can't be because of the iPad because it wasn't even out when the surveys were being done. The Kindle and other similar e-ink devices? Maybe. But I think it is much more likely that other factors than technology have been the cause. I'm sure us techno-types would like to think that we've brought about some kind of literary renaissance, but - you never know - people may have actually made a positive change all by themselves without having to use the Kindle as some kind of literary nicotine patch.



Keep analysts and VCs out of politics

Okay, first read Keep developers out of politics, please and then read Keep stereotypes of software developers out of politics, please.

I'd suggest, given the arc of the thread thus far, that it would be far more useful to keep the existing political class out of politics. That means lawyers, that means money-men and that sure as fuck means bankers. Software developers are ill-suited to politics? Perhaps I have too much self-interest in perpetuating this stereotype, but software developers are, by and large, the group of easily identifiable people I know that are least likely to fall into the 'greedy asshole' category.

And greedy assholes are the problem. To understand the problem with politics, don't look so much at ideology, look at greed. You want to understand the bailouts? Greed. Want to understand the lack of a decent regulatory framework around the financial sector that led to worldwide economic collapse? Greed. The response to Deepwater Horizon? Greed. You go to Westminster or Washington and you'll find fucktons of greedy assholes being fed bullshit ideological lies sweetened with bullshit pay-offs by other greedy assholes. And, if those people were in the IT industry, they'd be the suits. Or, these days, they'd be the assholes in suits who don't wear ties to look cool and trendy.

However crazy Richard Stallman gets, I'd much rather have him serving in US politics than some fucking Gartner analyst. He may be nutty and do embarrassing and impractical things, but the Richard Stallmans of this world are far more genuine and non-asshole-ish - even when they are being assholes! At least they are being assholes for a good cause, rather than just being assholes to further their own greedy self-interest. Richard Stallman is being an asshole so that you can have a free and open source copy of Emacs. The fucking legislature are being assholes so that they can get a handjob from some big business lobbyist in exchange for a vague and unfulfilled promise to bring jobs.

And, to be honest, anyone who has done web stuff knows about governance. If you are building social systems, you know what huge differences very small changes can bring. Write the copy slightly differently and you get a huge reduction in trolls. Think Slashdot's karma system. Think of the process of trying hard to build niche community sites that are filled with good content rather than assholes. While the social media people and their corporate overlords are happy to pull a Wordpress or phpBB thing off the shelf and stick it up, we know that sometimes you have to put some effort in if you want to get the reward out. Again, think Hacker News or Stack Overflow. The economics and political science crowd have discovered this kind of thing recently with all the behavioural economics "nudge" stuff.

Given Douglas Adams' very simple rule that the people who are best qualified to rule are the people least desiring of power, I think software developers are a pretty good fit. We've got a public relations man in No. 10 at the moment. Yes, David Cameron is formerly of the PR industry. Compare: software developers are paid to tell you the truth. And we try to not hesitate in that task. If you ask me what I think of a particular database or programming language, I'm not going to beat around the bush. David Cameron is a PR man. A man whose former profession is nothing more than the task of lying for money.

This isn't personal or partisan against Cameron, but how can I believe anything he says when his former profession is lying for money? How could I believe Blair with the omnipresent Alastair Campbell whispering in his ear? Our leaders are either turning into celebs - Schwarzenegger - or being surrounded by such a huge layer of PR bullshit as to insulate them from reality, and to free them from the petty demands of truth and reality. This is no grand or original observation: it is simply cold, hard fact. And what are the results? Crap. Government seems to think that issuing press releases is making policy, giving press conferences is implementing policy - and the actual policy itself is being written behind the scenes by lobbyists.

The governments of the Western world have brought us the Digital Economy Bill, the Digital Millennium Copyright Act, crazy fucking libel laws that allow assholes to persecute our fellow geek brothers and sisters (the Singh v. British Chiropractic Association case) for attempting to bring scientific understanding to these idiots. Our governments are ripping up funding for basic academic research and replacing it with bullshit like the Research Excellence Framework.

And rather than stopping to realise the craziness of all this, they seek to impose it on everyone through international bodies: through the IMF, through the WTO, WIPO, the UN, through bullshit trade agreements like ACTA.

Our governments have padded their own nests in creating voting systems that are totally unfair, outdated and more suited to a time when it was only the lord of the manor who could vote and not the servants or peasants around him. They've promised change alright - for decades. Do we believe them? Occasionally. We are then promptly disappointed.

Now we have governments who, to fix the problems caused by their asshole friends in the banking industry, are taking it out on the worst off in society.

My question is just this: where are you going to find more concern about this? At some hoity-toity analysts and VCs conference like Web 2.0 or LeWeb or at a hackers conference? Just what have our industry's money-men done about this? Sweet fuck-all, it seems. Silicon Valley, and its offshoots, is filled with a rampant and totally fucking stupid technolibertarianism. They're all off in Ayn Rand la-la land, believing that if we can just privatise the roads, that'll sort everything out. Yeah, assholes. Yes, yes, you can go on about boring things like the Internet being created by the US government under DARPA, and the public funding given to CERN, and how much of all this is public infrastructure, about the key role that universities have been playing in fostering startup cultures (MIT, Stanford, Cambridge etc.). You'll just get back a load of Ron Paul propaganda and exhortations to read Atlas Shrugged.

But, come back down to earth. Software engineers seem to be much less assholish than that. There's a reason why we try and keep things like BarCamp free and low-cost. Because we're not all rich assholes. The Web 2.0 Summit costs four thousand dollars. BarCamps are free. The BarCamp crowd mostly aren't rich assholes. They're people who are just trying to do useful and fun things with the skills they've got.

What makes software development slightly different is that it is a relatively creative act - not the only creative act (some people, for some reason, seem to think that if we say building software is a creative act, we are saying that it is the only creative act; I do not know where this stereotype comes from, but people have made a lot of hay out of it), but a creative act. Those who are involved in it get to spend time solving relatively interesting problems and are reasonably well paid for it. Admittedly, you have to spend all day in front of a computer. But you aren't spending most of that time answering e-mails from assholes.

I can't imagine why anyone in their right mind would want to exclude software developers from politics. They are genuinely some of the least assholeish people I know. Oh, wait, I do know. Certain existing powers would much rather have our government made up of easily-bribed, industry-lobbyist-fed assholes who will preserve their comfy status quo and bend over backwards the next time Goldman Sachs wants to run our economies into the ground and walk away with a giant fucking bailout.

Software developers: we may not be pretty or be dynamic and exciting speakers, but we try hard not to be assholes. That is a huge asset in a world run by assholes.


E-learning: yet another fad, and guess who is paying?

I hate to bang on about this, but it is important: while doing my Master's degree, I never had a single PowerPoint presentation. Lecturers made do with distinctly analogue technology: paper handouts, blackboards/whiteboards, vocal cords and books. Before that, I only had one lecturer who used PowerPoint. I am no great fan of PowerPoint.

And I'm sceptical of most e-learning projects. I've written before on Learning Objects, and most e-learning projects seem to be driven by trying to push what is currently popular, fashionable or on the bleeding edge into the classroom, regardless of whether it actually will benefit learners. Most e-learning is technology for technology's sake. Digital whiteboards? Fine, except they are harder to read for people with vision problems. I remember once having to play a quiz using remote controls - very much like a gameshow. Except half the remote controls didn't work. It would have been a lot easier just to give out a scrap of paper, write the answers on it, then swap test papers with the person next to you and mark the questions.

Most technology in the classroom falls into two clear categories: pointless or really pointless. With all the people pushing technology as a magic fix for all that ails education, plenty of us geeks are a bit more reticent about it. For me, it is pretty much a necessary precondition for any useful learning on non-technical subjects to set my computer aside and read a damn book on the subject. In fact, it is often sufficient for technical stuff too - a few hours and a good O'Reilly manual is sometimes more than enough.

Which puts a story I saw yesterday in The Argus, Brighton's local paper, into perspective. Davison Church of England High School for Girls, a secondary school in Worthing, is going to require pupils in year 9 (that is, 13-14 year olds) to purchase an Apple iPad as part of its new e-learning project, which will begin in September. This e-learning project is so well tested that the school is requiring parents to spend three hundred pounds on a gadget that hasn't even been released yet.

The announcement and sale of early release Apple products has long been said to bring with it a "reality distortion field" - why, Apple's critics charge, would people be willing to purchase the iPod Shuffle, a device that until recently didn't even tell you what song was playing, without the presence of some kind of fanatical craziness reminiscent of the religious enthusiasms of revival-tent preachers? Apple customers are - the critics say - more like Scientologists or Objectivists or those goofy people who take Dan Brown novels or the rantings of Glenn Beck a bit too seriously - it is just a computer after all. As an Apple customer who has no end of problems with his hardware, and who would much rather be in the free world of Linux and GNU but is held back by the pragmatics of the world, I disagree with the idea that all Apple customers are driven by such brazen religiosity. Many of us have a slight aesthetic preference towards shiny objects but manage to keep it under control when faced with more practical concerns of everyday life - like not pissing money away recklessly on anything vaguely shiny and magical-sounding.

Surely, though, an e-learning project needs some thought, some testing. Before you require parents to go and spend a few hundred pounds on a gadget that will form an essential part of the curriculum, it might be useful to actually get your hands on one and test it out - see whether or not it does the job you intend it to do. It is sad to see that the reality distortion field extends beyond the fawning media, the obsessive blogosphere, the cheering and hollering convention centres and out to the humble C-of-E secondary schools in Worthing.

This is very sad, but it also points to a lurking travesty: the fact that schools are absolutely failing to teach their charges that technology is a tool to empower them, to liberate them and to have fun. It is not about delivering a stream of learning objects with all the soul of a Chicken McNugget. Technology isn't just some gadget you use to coerce Facebook-addicted pupils into giving a shit about GCSE Geography or, worse, some fakakta Business Studies 'diploma' - basically what we used to call a GNVQ or a VCE rebadged for about the seventeenth time this decade to try and persuade people that there is actually parity between academic and vocational qualifications, even though there isn't.

Schools love technology, but not to the point where it turns their pupils into little hackers - because little hackers don't fit with the control implicit in the school ethos. They are the ones who moan about wanting to install Linux on everything and point out your inadequacy. They will generally take absolutely no shit and follow a course of enlightened absenteeism - if they aren't learning something useful, they may just stop turning up.

Schools should be teaching children to tinker, to hack, to throw code together hastily and make it do something cool. This is easier than ever these days thanks to virtualization. What happened to the idealism that gave us the BBC Micro? All swallowed up and replaced with proprietary, locked-in entertainment devices - and expensive ones at that. Imagine if you gave every kid a netbook instead and taught them Python. If you can't afford a netbook, give every kid a Xen instance. That will teach them more than all the e-learning initiatives you can brainstorm. If you want to do technology in the classroom, get rid of your bullshit ICT classes. Stop telling yourself that teaching people how to use Microsoft Word is teaching them an important life skill. Teaching kids to hack and tinker would give them a useful life skill; an expensive, untested, "cutting edge" e-learning project just teaches them even more dependency on hardware and software vendors.


Smartphones are just netbooks for people who like to blather on the phone rather than shut the fuck up and type

I just wanted to register my violent agreement with Jeff Atwood's post about netbooks. It responds to a post saying that netbooks are just poor smartphones. I disagree. Smartphones are just bad netbooks.

A smartphone has a keyboard: but it is a lame keyboard that you can barely use to write an e-mail, let alone a novel or a few hundred lines of C++.

A smartphone has the ability to make voice calls: netbooks have Skype, which is basically a legacy emulator for the phone layer. If you want to communicate, use e-mail or IM. They are like voice phones but significantly less annoying.

A smartphone can run any software you like: so long as it has been pre-approved by Apple or some shady cabal of profit-addicted mobile providers. A netbook lacks this significant limitation. Debian, Ubuntu, Fedora, Suse, Mandriva, OS X (Hackintosh), Windows - pick your poison. They all taste pretty good compared to selling your soul to the phone companies. A previous generation of hackers did everything they could to fight against the phone companies - Google 'phreaking' if you don't believe me - now what do we do? Let them choose what software we use.

You can write code for your smartphone: buy yourself a Mac, learn Objective-C, learn Xcode. I can write code for my netbook: I open up Vim and start writing Python or Ruby or Scala or C or Java or LOLCODE or Brainfuck or Scheme or Smalltalk or JavaScript or whatever the fuck else I want to write. What am I targeting? Oh, an x86 machine with a keyboard, mouse and a Linux OS. That's easy. Did I mention that I can write code on the machine that I'm running it on? If I don't like it, I open up the text editor, change it and recompile.

You can read mail on your smartphone: but I get to use mutt on my netbook. This sucks significantly less than your mail client.

Your smartphone is multiuser in the sense that multiple people can use it. My netbook is multiuser in the sense that I can set up an account on it for family members to use, and they can't go poking through my e-mail. (Apparently, the iPad is 'multi-user' in that you can play drag-and-drop chess on it like you can with the Microsoft Surface. Great. Can you let other people use it without them being able to read your mail?)

Call me when your smartphone runs bash, vim and git. Call me when I can write my dissertation on your iPhone. In fact, don't call me. Write me out an e-mail on that teeny-weeny keyboard and I'll send it back and make fun of all the times your iPhone has turned 'reading' (as in books) into 'Reading' (as in Berkshire town, home of the annual rock festival and the UK home for all the big tech companies like Intel, Microsoft, Oracle et al.).

Netbooks are everything that makes computing awesome in a smaller package. I want to take one everywhere: someone needs to start making holsters for these babies.


A programming language dictates possibility, not fate

I just read on Twitter that a speaker at Future of Web Apps said that Python was "slow". This later became "slow at threading". Now, this may be a perfectly reasonable statement - the Python users I know tell me that Python sucks at threading. Of course, it probably doesn't suck as much as Ruby does at threading! Heh.

I thought I'd better set the sprinkler system going pre-emptively. When we discuss the performance characteristics of software, people get very hung up on language and ignore everything else.

Last year, I wrote a script to extract data out of a whole bunch of HTML files I had downloaded. I used Ruby with Hpricot. I had about 2.7GB of HTML files to go through and I wanted to extract the contents of the h1 element and a span with a particular class name. Hpricot was pretty slow back then, and I hadn't yet learned Nokogiri, which I now know. I cooked up a Ruby script in about fifteen minutes that did what I thought I wanted. I ran some tests and worked out that it was going to take four days for the script to run, because each iteration was going to take five or six seconds. This was on a two-point-something GHz dual-core MacBook.
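
For the curious, the slow version was roughly in this spirit - a sketch rather than the original script, with a made-up class name standing in for whichever span it actually was:

    # A sketch of the slow approach: fully parse every file with Hpricot,
    # then pull out two nodes. 'span.byline' is a stand-in for whatever the
    # real class name was.
    require 'rubygems'
    require 'hpricot'

    Dir.glob('html/**/*.html') do |path|
      doc   = Hpricot(File.read(path))
      title = doc.at('h1')
      extra = doc.at('span.byline')
      puts [title && title.inner_text, extra && extra.inner_text].join("\t")
    end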

I rewrote the code in Java. I could have rewritten it in C, I guess, but I'm a Java weenie and I love my garbage collector. The Java version used java.io.BufferedReader to read through the files, loading each line into a String and using java.lang.String's startsWith method to see whether the line started with the HTML elements I cared about. Once it had read all the lines that mattered, it closed the file, moved on, and spat the matching strings out to System.out. This is one pretty tight for-loop. The code took about 25 minutes to write - plus a few minutes to go and get a library off the Internet - and about 8 minutes to run. It turned out that I had made a goof-up in one of the variable names, so after those eight minutes had run, I changed the code, recompiled it and it worked.

Now, at this point, there are two possible reactions.

If you are a programmer you go "No fucking shit. You wrote some slow code and it was slow. You wrote some fast code and it was quick. What are you, some kind of beautiful and unique snowflake?" Well, actually, that is what a very cynical programmer would say. Most programmers would say "Okay, that makes sense."

If you are a tech blogger who has learned their story-finding skills from TechCrunch, you now say "BREAKING NEWS: Ruby is slower than Java. Stop the presses!"

Feel free to substitute Python for Ruby or C for Java or whatever. The same story will be told over and over again. You don't believe me? Remember all the hoopla when Twitter changed their message queue over from Starling to Kestrel. A sane technical decision from the Twitter team turned into an absolutely farcical pissing contest.

To show you how ludicrous these comparisons are, I wrote three scripts - one in Ruby, one in Java and one in C. All three do the same thing: print out "Hello World" 100,000 times.
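
For what it's worth, the Ruby version of this extremely silly benchmark is a one-liner along these lines:

    # hello.rb - run with: time ruby hello.rb
    100_000.times { puts "Hello World" }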

Running 'time' on the compiled C script gave me these results - real: 0m0.462s, user: 0m0.046s, sys: 0m0.132s.

Running 'time' on the interpreted Ruby script gave me these results - real: 0m0.664s, user: 0m0.178s, sys: 0m0.155s.

Running 'time' on the compiled Java gave me these results - real: 0m1.358s, user: 0m0.809s, sys: 0m0.360s.

Not surprisingly, C is faster than Ruby. And Ruby is faster than Java, also not surprisingly (the JVM takes time to start up - to do what, exactly? Print 100,000 Hello Worlds?). Now, please go and build all your web applications in C. It is obviously faster than Ruby and Java. Speed is the only thing that matters, remember. And the only thing that dictates speed is what language you choose. What algorithms you use? No consequence at all. Libraries? Nuh-uh. It is not like different types of software have different performance characteristics, or that choosing a programming language (and compiler or interpreter or whatever) is a complex decision made up of many factors. No, much better to trust what some goofball with an arbitrary 'time' output says. Or better, just trust some goofball on a tech industry blog who barely knows what HTML stands for. Arrington and pals know a lot more about programming than your programmers do, remember.

Or you could do what sane people do: use the right tool for the job, and test out speed claims in something vaguely approaching a scientific manner. And maybe get your programmers to decide what programming language to use, rather than the technical press or the bloggers. Base your language and technical decisions on your own problems. The fact that my code took eight minutes when I wrote it in Java, using the sort of constructs Java gives me, compared to doing it in Ruby with the sort of approach I take to writing Ruby code, is pretty irrelevant (hint: I could have written it in Ruby to run a lot faster - I just happen to know Java's IO libraries slightly better than I know Ruby's). The problem you are facing may have absolutely nothing to do with file IO or HTML parsing or whatever. Just turn off the chatterbox and do what is right for your own situation based on actual good reasons rather than whatever the current hype on the blogs is.
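
If you are wondering what "a lot faster in Ruby" might look like: the trick is the same one the Java version used - treat the files as plain text and never build a parse tree at all. A sketch, with hypothetical prefixes:

    # Line-scanning instead of HTML parsing: read each file line by line and
    # keep only the lines that start with the markup we care about. The
    # prefixes here are made up - adjust them to the actual markup.
    WANTED = ['<h1', '<span class="byline"']

    Dir.glob('html/**/*.html') do |path|
      File.foreach(path) do |line|
        stripped = line.lstrip
        puts stripped if WANTED.any? { |prefix| stripped.start_with?(prefix) }
      end
    end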


Hack your context and free yourself from the bounds of digital irritation

Over the last few days, I've been working on what I am calling 'context hacks'. Context hacks are hacks that detect human presence in particular contexts in a latent way and enable that contextual information to be shared and reused. A context is, to put it very broadly, a relational state between an agent and their surroundings. Location is one of the important contexts: as is activity, state of mind, connectedness to gadgets and a whole load of other things.

Basically, by context hacking, I want to free this kind of locational and state information from the commercial enterprises that are currently using it for their own purposes.

Why context-hack? Because geolocation is expensive and explicit. Take the two popular location-based services Foursquare and Gowalla: both are built around device-based check-in applications. To use Foursquare, you need to have an iPhone (or a Blackberry, Android or Palm Pre) - a smartphone that costs a few hundred dollars to buy and a few hundred more dollars each year to own. But more than that, sites like Foursquare require you to explicitly check in with your location. You have to say "hey, I'm at the Barbican Arts Centre right now". Except when you actually are at the Barbican Arts Centre (or wherever), you might want to get on with the things you went there for: to enjoy the exhibits, to peruse the library, to wonder at the combined beauty and ugliness of the concrete, to sit in the restaurant with your friends. You don't want to be sitting there opening up Foursquare, waiting for the 3G networks to work, hoping that the GPS is good enough to penetrate all that concrete, and then worrying about privacy and whether or not stalkers are going to find you through Facebook.

Context hacking is finding a way of doing this better. The location-based services use mass market devices like iPhones. They are built around devices. If you have an iPhone, it obviously has GPS. But if you've got a Mac running 10.6, your computer knows roughly where it is using technology called Core Location. Core Location uses nearby wifi spots and a service called Skyhook to work out where you are. This is also built into browsers: if you use Firefox in an area where Skyhook works, and the support is built into your browser and/or OS, then when you go to, say, Google Maps, it'll start where you are. I carry a device around with me called a MiFi which provides pay-as-you-go 3G service to me anywhere in the UK, but broadcasts it over a private cloud of wifi. What most people don't know about the MiFi is that it also has GPS built in: it broadcasts its location to the devices that use it. If I use an iPod touch with the MiFi, when I use the built-in Google Maps application, it goes to where I am. Clever or what?

But all these things require user intervention. Foursquare requires you to check-in. W3C Geolocation-based apps require you to actually go to a web page - sorry, a web app - pages are old-school, remember. I have got enough technology on me that the technology should be able to work out where the hell I am and what I am doing.

That raises another problem: if it does that, I want to control it. I don't want Google to know my contexts. I don't want Facebook or Yahoo or Twitter or Foursquare to control my contextual information. I want to control it. I'm not a megalomaniac or anything: I just want my data.

The closest I have found to this is FireEagle. I go on and on about FireEagle like an eager fanboy, but there is an important thing that FireEagle gets right that the others do not: FireEagle is just a location broker, just like your e-mail provider is just an e-mail provider. They provide the plumbing, someone else builds the services on top. XMPP is plumbing. RSS is plumbing. What people build on top is up to them. Now, this is not to deny the usefulness of sites like Foursquare. You can have something like Foursquare, but under my mental framework of contextual computing it doesn't own your context - you share your context with it if you want to.

FireEagle is fine, then. It does a lot of what I want from contextual computing. But there is one thing it could do slightly better: it could be hosted on tommorris.org rather than on fireeagle.yahoo.net. Not because I have any particular problem with Yahoo - rather, I trust myself with my contextual information more than I trust anyone else. Just like I trust myself with my blog more than I trust anyone else. While I may suck at software - as Dave Winer says, we all build shitty software - the software I build is shitty in a way I like, rather than shitty in a way someone else likes!

Eventually, I would like to host my own context server. Now, a context server is like a very private blog. You put your location information on there, and you can tightly control who gets to use it. What would be nice is if we could build a network of these context servers which could talk to each other. My context server would be able to tell other people where I am. A distributed network of context nodes. Of course, context servers wouldn't need to be anything more than HTTP servers. HTTP gives us all we need for context: the context data is in a machine-readable format (or a handful thereof: XML, JSON, RDF, and any other acronym that fits). We provide access to it over authenticated HTTPS. So you might call tommorris.org/location and, if you've got the relevant permissions, you'd get back my location. We could work the details out later.
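
To give a sense of how little is actually needed, here is a minimal sketch of such an endpoint, assuming Sinatra and HTTP Basic auth - the credentials, the path and the hard-coded location are placeholders, not anything that actually runs on tommorris.org:

    # A toy context server: one authenticated endpoint that returns the
    # owner's current location as JSON. Everything here is a placeholder.
    require 'rubygems'
    require 'sinatra'
    require 'json'

    use Rack::Auth::Basic, 'Context' do |username, password|
      username == 'friend' && password == ENV['CONTEXT_TOKEN']
    end

    get '/location' do
      content_type :json
      # In real life this would come out of whatever store the context
      # hacks write into, not a hard-coded hash.
      { 'place' => 'London', 'updated_at' => Time.now.utc.to_s }.to_json
    end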

But before we build this kind of thing, we need to start collecting data. And because context needs to be about people rather than devices, we need to come up with many, many more ways of collecting clues as to context. This is what I have been working on: location hacking, context hacking. I've been doing it with the existing location-based services - specifically FireEagle and Foursquare. I've done FireEagle hacking in the past, so I am using the existing tools I have built - specifically a little command-line updater I wrote in Ruby called 'fe'.

I have built three tiny little context hacks recently that I want to share with you. I am working on more, and trying to come up with new ones.

One I built today, in about six lines of Ruby, is very simple indeed: location_ipod.rb. All it does is sit on a machine I have in my house and, every ten minutes, attempt to ping my iPod touch. If it finds it, it tells FireEagle that I am at home. That is all it needs to do. If it can't find my iPod touch, that doesn't mean I am not at home; but if it does find my iPod, it means I am. I rarely even walk the dog without taking my iPod with me.
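
In spirit, it is no more complicated than this sketch - the hostname and the exact way it talks to the 'fe' updater are illustrative rather than the real thing:

    # Every ten minutes, ping the iPod touch on the home wifi; if it
    # answers, tell FireEagle I am at home via the 'fe' command-line
    # updater. Hostname and 'fe' invocation are illustrative.
    loop do
      at_home = system('ping -c 1 ipod-touch.local > /dev/null 2>&1')
      system('fe home') if at_home
      sleep 600  # ten minutes
    end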

This is useful for me as when I go out, I often update my location using interactive apps like Foursquare and Sparrow. But when I get home, I often forget to update my FireEagle status to say I am home again. As my blog uses my FireEagle status to put my location on my blogposts when I am not at home, having this reset my status is important.

The next hack I wrote is called simply 'train'. It is a little script I can schedule when I know I'm going on a train journey to or from London. If I decide, for instance, that I am catching the 0845 to London, I go to my computer and type echo "train_out" | at 08:45. Inside the train_out script there are more at jobs which fire off my location at each of the stops all the way to London. The train_return script does the same in reverse, going from London back home. It schedules all the stops from London, then also schedules a FireEagle update saying I'm at home, set to fire about an hour after I get back to the station - that last mile can take a variable amount of time, since buses, taxis, lifts and walking all vary in duration. As I am going up to London a lot this week, I am planning to be, err, rail-testing the scripts quite hard.
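
train_out itself is nothing clever - in spirit it is just a loop over the stops, scheduling an at job for each one. A sketch with made-up stations and offsets:

    # For each stop, schedule an 'at' job that fires off a FireEagle update
    # via the 'fe' command-line updater (assuming 'fe' takes a place name
    # as its argument). Stations and minute offsets are made up - the real
    # thing follows the actual timetable.
    STOPS = [['Worthing', 5], ['Shoreham-by-Sea', 15], ['Brighton', 30],
             ['East Croydon', 70], ['London Victoria', 90]]

    STOPS.each do |station, minutes|
      IO.popen("at now + #{minutes} minutes", 'w') do |job|
        job.puts "fe '#{station}'"
      end
    end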

There is a class of hacks that are based around hacking context information out of other people's systems. I have an Oyster card. I plan to write a script to extract data from their website about Tube and bus journeys I take in London. I can then update my location accordingly using that data.

I found a good one the other day. Waterstone's is Britain's largest chain of booksellers - sort of like Borders or Barnes and Noble in the States. They have a loyalty card scheme, and I have one. I got it after purchasing a large, hardback copy of the complete works of Plato, when they told me I'd get a few pounds off the book as it is so expensive. I recently bought an e-book reader from them too, and got a load of points for that. On the Waterstone's website, you can get details of all your recent purchases. I've written a scraper to get this information out. I need to check how quickly this information gets updated - if the delay isn't too long, I can use it as another source of context. The scraper just uses Celerity, the really nice JRuby abstraction on top of HtmlUnit, the Java library that lets you write testing scripts that prod a fast, headless browser.
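
The scraping side of it is not much more than this kind of thing - a sketch only, run under JRuby; the URLs, form field names and page structure are guesses, not Waterstone's actual markup:

    # Drive a headless browser through the login form with Celerity, then
    # save the purchase-history page for parsing. The URLs and field names
    # below are guesses, not the real site structure.
    require 'rubygems'
    require 'celerity'

    browser = Celerity::Browser.new
    browser.goto 'https://www.waterstones.com/signin'
    browser.text_field(:name, 'email').set ENV['WATERSTONES_EMAIL']
    browser.text_field(:name, 'password').set ENV['WATERSTONES_PASSWORD']
    browser.button(:name, 'signin').click

    browser.goto 'https://www.waterstones.com/account/purchases'
    # The branch name is buried somewhere in the purchase history; pull it
    # out with Nokogiri (or whatever) and map branch names to locations.
    File.open('purchases.html', 'w') { |f| f << browser.html }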

At this point, I need to explain why this is useful. This may seem like a totally goofy way of tracking context. It is. But it is goofy and useful. This uses up no batteries at all. No mobile phones. No 3G signals or wifi hotspots. I buy book. Computer lists purchase on website. Another computer takes that purchase, parses the shop identifier string and infers context from it.

As a sole measure of working out context, tracking Waterstone's loyalty card use is useless. But the point of it is that it is one measure out of many. Sources of context are diverse: you can and should infer it from hundreds of sources, rather than relying on your smartphone's GPS signal. Put all those layers of context together and build up a personal database. This is why it needs to be on your server - a server you pay for each month, rather than some ad-ridden monstrosity where Google takes it all and sells it off to people whose only interest is selling you legal consulting services for when you get mesothelioma, or a bigger penis, or whatever.

What can you then do with this contextual information once you've collected it - whether it is from GPS or from goofy loyalty card hacks? Well, know thyself, for one. Write scripts that look at the information and derive useful suggestions from it. Imagine this: you write a script that works out, based on all your context data, how to spend less money on transportation. Or maybe a script that looks at the sort of restaurants you eat at in London and suggests similar restaurants you might like in Manhattan. I say 'scripts', because these would be like Greasemonkey scripts - little scripts that use a common API to your contextual information, and that you can read the source code of before applying. You aren't trusting some roach-motel-owning gangster with your data - if someone wants to use your context data, he writes a script, you run it and it gives you the information. Your data isn't just the filler for AdWords, your data is yours, and anyone who wants to run their hands through it needs to appeal to your best interests, not theirs.

Why come at this from a community angle rather than a start-up angle? Because it works. Look at blogging - we have a shared understanding of blogging, and anyone can do it on their own. We can aggregate them together, but it is decentralized. Anyone can install blog software and run their own. If they don't like what is on offer, they can hack at it. WordPress and Movable Type are both free now. If you don't want the hassle, you can use a hosted service. There is an interplay between what is available on the hosted providers (WordPress.com, Typepad, Tumblr, Posterous, Blogspot, LiveJournal) and the things people do on their own sites. Compare that with the 'attention' space that was hot a few years ago. What happened to that? Completely fizzled out. Google now own attention, and nobody does anything the least bit interesting with it. Yawn. Attention Trust, APML - err, it all failed. (It was also philosophically misguided - the idea that you share 'attention profiles' rather than the actual attention data was stupid, as I pointed out.) We could make sure that doesn't happen with context.

What now, then? I suggested on Twitter recently that we ought to have a hack day for this kind of thing. I am going to look into it - ask a few people. If you have a venue in London that wouldn't mind having a bunch of people who care about this stuff coming along, do get in touch. We could build scripts that collect context data, maybe build context servers either as separate stand alone apps or as plugins to stuff like blogging software, and build scripts that do useful things with that context data. Work out what people want from contexts, and flesh out a vocabulary for talking about context. And have some fun.


Google have acquired AppJet, which means they've acquired EtherPad, and the EtherPad team are going to start working on Wave. This is great news for them, but not such good news for their users and customers: they announced that they would be shutting down EtherPad. This is very sad indeed. EtherPad is Wave for people who actually need to get shit done. We used it really successfully for drafting things like blog posts and e-mails for BarCampLondon 7. I've tested it as a way to draft Citizendium articles collaboratively. Simon Willison used it at a recent BarCamp in a birds-of-a-feather session on non-relational databases and NoSQL, so that people could collaboratively edit a list of all the different non-relational database servers that are coming out - really successful.

I was very sad to see the announcement that it was disappearing, as were many others in the comments and on Twitter. The good news? They've listened to the users! They are keeping the site going until... they open source EtherPad. Woohoo! This is the best outcome of all: we will be able to run our own EtherPads locally, on our own servers - which will be great for things like RailsCamp where we are offline. I'm already on Google Wave - it's a fancy toy to show how clever browser-based apps are. EtherPad is the real deal.