Discussing software, the web, politics, sexuality and the unending supply of human stupidity.


A gentle ethical defence of doing nothing

I’m going to tell you about an amazing new alternative therapy that you should consider. It’s called procrasteopathy, and its treatment modality is exactly what the name suggests: procrastination. For whatever condition you believe you have, the treatment is simply to find something else to be getting on with. Licensed procrasteopaths work with patients to find exactly what sort of procrastination will fit best with their lifestyles. Unlike boring, reductive Western medicine, practised by boring, reductive colonialists, procrasteopathy targets its treatment specifically at you as a whole person. Procrasteopaths treat the whole you, not just your disease.

It is for this reason that procrasteopathy is so hard to test: because each treatment is customised based on a person’s life—work, relationships, diet, values, education, family, attitude, mood, clothing choices, most recently seen movies, mobile phone tariff, preferred pizza toppings—it is very hard to evaluate in a clinical trial setting. Cold-hearted skeptics like to suggest that, as with homeopathy, procrasteopathy is a form of placebo, and thus indistinguishable from the placebo control group used in a clinical trial. But this only shows that they are indeed shit-breathing smug wankstains of no significance, a pox on our intellectual community, and they like to eat babies at dawn.

Furthermore, skeptics and boringly orthodox pharmacologists and medics are likely to point out that the use and advocacy of procrasteopathy is unethical. There are a great many diseases, they say, where non-treatment causes massive harm to the patient and—in the case of infectious conditions—risks the lives of other people.

If one is limited to a boringly narrow, scientistic and epistemologically fundamentalist view of medical practice, the non-treatment of serious clinical conditions may seem irresponsible and possibly even depraved, but this misses important ethical benefits of procrasteopathy, such as:

  1. Holism. Procrasteopathy does not attempt to use narrow categories of biological science to treat patients. It approaches patients as whole units, preferring to treat the whole person rather than simply the symptoms. In fact, the symptoms are thought of as a positive and important part of the patient, and something to be embraced rather than attacked.
  2. Non-intrusiveness. While biological medicine seems to be improving, there are still many drugs which cause terrible side effects. Procrasteopathy comes with no side effects, except the warm feeling of not doing anything.
  3. Respect for patients’ wishes. If a patient says that the procrastination prescribed for them is not something they enjoy, the procrasteopath is strongly encouraged to reformulate the treatment plan to include a different mode of procrastination better suited to the patient’s life and values.
  4. Professionalism. Procrasteopathy is attempting to become professionalised, with a Society of Procrasteopaths being set up to provide professional training and regulation; the Society intends to register with the Complementary & Natural Healthcare Council.

Now, these individual factors on their own may seem trivial compared to the suffering and death that would be brought about by widespread use of procrasteopathy to treat serious medical conditions. The memory of thousands of children dying from easily preventable diseases, only a few decades ago before the advent of widespread infant vaccination, may seem nightmarish; more recently, the disastrous deaths of tens of thousands of AIDS patients before the availability of highly active anti-retroviral therapies may crush what little spirit of hope you have left. But a fair-minded observer must weigh such woeful clinical outcomes against the positive ethical benefits of procrasteopathy.

If you find this kind of reasoning compelling, I strongly recommend a paper published by Levy et al. in the Journal of Bioethical Inquiry titled A Gentle Ethical Defence of Homeopathy (PDF preprint). I’m sure the philosophically informed reader will find the reasoning that Levy et al. present to demonstrate the ethical acceptability of homeopathy on utilitarian grounds to be as compelling a defence when applied to procrasteopathy.

“There has been a tendency to think that everything Xenophon says must be true, because he had not the wits to think of anything untrue. This is a very invalid line of argument. A stupid man’s report of what a clever man says is never accurate, because he unconsciously translates what he hears into something that he can understand.”
—Bertrand Russell in A History of Western Philosophy

Stating my opinion on State

What is an opinion? What is the point in having opinions? I would suggest that your answer to these questions is very much dependent on who you are and what you value in society.

But let me answer just for myself. The value of an opinion depends on whether it is informed, whether you are a reasonable person in command of the relevant facts, and whether you are aware of your susceptibility to erroneous thinking processes and capable of overriding them. The opinion you come to would ideally be structured. That is, it would have some kind of factual premises, some set of reasonable procedures used to reach certain conclusions, and the areas where you have had to settle for subjective feelings or emotions spelled out in a way that lets people see how you reached your opinion. You expect that the holder of an opinion can justify that opinion in some fashion by appealing to the premises, the reasoning procedures and so on.

Perhaps my opinion on the subject of opinions is based on my personal and educational background—undergraduate and postgraduate degrees in philosophy. But I’m not dogmatic about this: I don’t think all opinions need to follow from some kind of pure reason or hew to the truth conditions of logical positivism. We can say intelligent and informed things about the subjective realm, about art and music and our emotions and personal experiences. Even with those, we can aspire towards understanding, to providing reasons and arguments, even if those reasons are subjective or presuppose some view not shared by others.

If that is close to your understanding of the nature of opinion, let me congratulate you on being a member of a proud philosophical tradition stretching back to the ancient Greeks. I have bad news for you and for your intellectual ancestors—Socrates, Hobbes, Descartes, Hypatia, Hume, Darwin, Russell—or whoever you pick for your hand from the grand deck of intellectual Top Trumps cards. All of you are out of touch with the modern world of business, advertising and consumerism. And if you are out of touch with those, then by extension you are out of touch with their offspring: technology, media and the intersection of those things—social media.

It is with this background that I tested out State, a relatively new social media/technology startup based in London that is seeking to build a “global opinion network”, where the user can “have [his or her] opinion counted and see where [they] stand relative to others”. State has $14 million in funding, according to TechCrunch, and intriguingly has professional bullshit peddler Deepak Chopra on their board of advisors according to GigaOm.

I have been giving it a try. It should be a perfect fit: I have opinions. More than that, I’m a loudmouthed, grumbly person who likes sharing my opinions with only minimal solicitation. Sounds like my sort of service. I joined partly because a former colleague of mine had just started working there and encouraged me to give it a try.

State is indeed a very interesting service, though not because I think it will be either popular or important. I don’t think it will be either of those things. But as a perfect encapsulation of exactly what the future holds for social media and society, you couldn’t do much better.

When one joins State, one is encouraged to find topics and to state one’s opinion on them in the form of single-word ‘opinions’, like these:

  • Lady Gaga: amazing.
  • David Cameron: bastard.
  • Tom Daley: phwoar.
  • Jedward: annoying.
  • UKIP: wankers.

You get the drift.

In fairness to State, you can then attach a comment to your statement, to qualify or expand on it. But the primary index of your opinions is this single-word expressive grunt: awesome, amazeballs, fab, OMG, fail, omnom. The designers of State looked at Twitter, decided it was not short-form enough, and so have stripped from it any content besides the hashtags.

Of course, my predictions of what will become popular are very fallible, and though I personally do not see State having much success, it could end up being the next big thing. If it does, it won’t be long until people from the worlds of marketing, business and media swarm on to it, demand some kind of API to extract opinions from the State platform and have them displayed in executive summary form on a Big Data-powered dashboard platform. Engineers will scurry around so that senior figures in consumer-facing industries who have a stake in public opinion will be able to see an algorithmic summary of what exactly the interconnected plebiscite thinks of their brands, their celebrity representatives, their preened political spokesmen, all helpfully quantified into a stock ticker-style ‘metric’.

The helpful grunts from social media will be put through “sentiment analysis” and the opinions of the consumer will lead to a happier, better world where marketers can slice us, dice us, mix together our opinions with our demographic data, quantify whether our preferences satisfy key performance indicators and lots of other important measures.

Opinions in this new world of social media aren’t opinions: they are signalling grunts for marketers. Are you doing better than your competitors? Count up the positive grunts and the negative grunts, calculate the balance of grunts and see if you are getting more grunts than the other guys. The consumer has so much choice on where exactly to post their grunt: on Twitter (with a hashtag, perhaps), on Facebook (by liking posts and pages), on Google+ (if that still exists) and finally now on State. As a system of grunt aggregation, State is impeccable.
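
Since we are talking about counting, here is a minimal sketch in Python of the sort of ‘balance of grunts’ metric I imagine such a dashboard computing. Everything in it is hypothetical: the word lists, the function name and the sample data are mine for illustration, not anything State or Twitter actually exposes.

    # A toy "grunt metric": count positive grunts, count negative grunts,
    # report the balance. Word lists and data are made up for illustration.
    from collections import Counter

    POSITIVE = {"awesome", "amazeballs", "fab", "omg", "omnom", "amazing"}
    NEGATIVE = {"fail", "annoying", "bastard", "wankers"}

    def grunt_balance(grunts):
        """Return positive grunts minus negative grunts for one brand."""
        counts = Counter(grunt.lower() for grunt in grunts)
        positive = sum(n for word, n in counts.items() if word in POSITIVE)
        negative = sum(n for word, n in counts.items() if word in NEGATIVE)
        return positive - negative

    print(grunt_balance(["amazing", "fab", "fail"]))  # 1: beating the other guys

A real system would dress this up with fuzzier sentiment analysis, but the epistemology is the same: grunts in, number out.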

Where it falls down is on that boring rational philosophical stuff I started with. In the epistemology of State and many similar social media sites, opinions don’t have supporting reasons. They don’t derive from any confrontation with evidence or experience. They don’t allow for refutation or reformulation or revision. You can refute an argument; you can’t refute a grunt. Ambivalence leads to confusion: thinking a politician is a vicious, dastardly shitbag but admiring his Machiavellian success doesn’t translate easily into a simple aye or nay vote for that person.

It’s quite telling that for an opinion platform, I am actually unable to express my opinion of State on their own platform but have to resort to constructing paragraphs of prose and posting them on my own website. But then, I’d like to think my opinion on this topic has reached the point where it is no longer a grunt but some kind of at least vaguely sophisticated take on the place of a piece of technology in society.

Of course, in the consumerist zeitgeist, complex thought is rather embarrassing. If a consultant tells you something is impossible or unethical or complicated, you just sack them and hire a bullshit-peddling yes-man to tell you that everything will be fine, and that pigs will fly so long as they practice positive thinking. Why bother weighing up a complex interlocking argument when you can grunt an opinion about a hashtagged blipvert or whatever it is some advertising creative has come up with this week?

If you want to grunt about things, I highly recommend State. As grunt publication and aggregation platforms go, it is exquisite—wonderfully designed, superbly executed, beautifully illustrated and rather addictive. If you want to express something more like an opinion and less like a grunt, you might want to read a writing manual and start a blog, as well as prepare for being ignored by the decision-makers in society because they’ve collectively decided that grunting is more important than well-considered opinions.

The implied argument fallacy

If you spend any time arguing with people, you will have come across the concept of logical fallacies. I’m sure you know of a few of them: ad hominem fallacies, wherein an argument is rejected because of a speaker’s personal attributes; the genetic fallacy, where a thing’s origin is used as an argument against the value or worth of that thing. There are plenty more, and there are now numerous websites detailing said fallacies including YourLogicalFallacyIs1, and, of course, Wikipedia.

I’d like to present a new fallacy, or at least a new expression of a couple of other fallacies: the implied argument fallacy.

The implied argument fallacy is treating spoken or written content that does not purport to present a formal argument as if it did, usually so that one can go on to argue that this supposed formal argument contains fallacious reasoning.

An example:
  • A: Have you seen the latest thing that unnamed anti-gay preacher just wrote?
  • B: Yeah. What an asshole.

At this point, a third party, C—who has helpfully read some websites about logical fallacies—might pop up and say something like this:

  • C: Oh, look at B. Unable to present an argument rebutting the views presented by the preacher that A mentions, so has to resort to ad hominem name calling and abuse. This only goes to show the intellectual poverty of B’s case.

What C is presuming is that B was presenting a formal argument, and perhaps also that language is a tool used primarily or even exclusively for the transmission of formal arguments as one might find in a philosophy textbook. This is in spite of considerable evidence from sociology, linguistics and the experience of most people in everyday life that language is used for many other purposes than simply expressing strict logical deductive argument.

There are a number of things wrong with C’s statement, but all seem to stem from one root cause: C’s presumption that B is attempting to present a coherent, extensive argument.

The implied argument fallacy is used to essentially straw man points other people make. Here’s another example from the same topic:

  • Opponents of gay marriage are on the wrong side of history.

Our good friend ‘C’ might object at this point and say that a form of appeal to popularity—or rather, appeal to future unpopularity—is being played here. Except it isn’t. The argument that one’s political opponents are on the wrong side of history isn’t an argument that they are wrong because they are on the wrong side of history. Rather, it is a claim that, politically, they aren’t on to a winner by continuing to advocate their views. There is no argument being made about the rightness or otherwise of gay marriage, only about the value of continued advocacy against it.

‘C’ might go on to say that since this argument (which isn’t an argument, or is an argument but for a different conclusion) is fallacious (which it isn’t, because it isn’t an argument, or isn’t an argument for the same conclusion), the whole case for gay marriage is undermined. It isn’t. The neat thing about the implied argument fallacy is that it can enable the person using it to avoid the actual arguments presented. Pretend that lots of things which aren’t actually arguments are, point out the flaws in the non-arguments, then avoid dealing with the actual arguments.

The implied argument fallacy is in some ways unavoidable. There will always be edge cases between what counts as an argument and what doesn’t. Sorting out what counts as a formal argument from what doesn’t requires application of the principle of charity and a non-reductive reading of one’s opponents’ views. Which may not be something that best fits opening up Reddit or Twitter and blindly pasting in YourLogicalFallacyIs links.

Almost all people discussing almost all contentious topics will present things which are not arguments. Pretending they are arguments, then finding fault with them as arguments, is to misunderstand the nature of language, rhetoric and argument alike.

  1. Which itself subtly misleads people on the nature of the ad hominem fallacy. Not all personal abuse is an ad hominem fallacy. It’s only an ad hominem fallacy if you are saying that some point is wrong because the person presenting that point is bad or evil in some way. Just because you are engaging in personal abuse of someone doesn’t mean that you are engaging in fallacious ad hominem reasoning.

Keep your identity, even if it makes Paul Graham think you are dumb

Recently, I saw that Paul Graham’s post from 2009, Keep Your Identity Small, got linked on Hacker News again. It is an interesting post but I think Graham is wrong.

Graham starts by saying that he thinks that both politics and religion lead to “uniquely useless discussions”, because the participants in them need only have “strongly held beliefs” rather than expertise or knowledge.

No thread about Javascript will grow as fast as one about religion, because people feel they have to be over some threshold of expertise to post comments about that. But on religion everyone’s an expert.

Then it struck me: this is the problem with politics too. Politics, like religion, is a topic where there’s no threshold of expertise for expressing an opinion. All you need is strong convictions.

I’d question this. Firstly, it depends on context. If you are in a political philosophy seminar or a theology or philosophy of religion seminar in a university, then the fact that other people are basing their arguments on knowledge that you do not share may get you to shut up. Certainly, if a political argument depends heavily on some distinction that is only apparent to an expert versed in, say, the details of John Rawls’ work, I may not have a particularly good chance of answering it unless I can summon up the time and energy to plough through Rawls’ arguments. If you don’t know the details of Rawls’ argument, you’ll just look like a fool.

Do religion and politics have something in common that explains this similarity? One possible explanation is that they deal with questions that have no definite answers, so there’s no back pressure on people’s opinions. Since no one can be proven wrong, every opinion is equally valid, and sensing this, everyone lets fly with theirs.

But this isn’t true. There are certainly some political questions that have definite answers, like how much a new government policy will cost.

I think Graham is right to argue that the claim that there are no right answers in politics or religion is wrong, but I think the argument he gives for that conclusion is pretty weak. Yes, there is a right answer to the question of the fiscal impact of a particular government policy. But that doesn’t actually settle the moral or indeed political aspect of the question.

I’ll illustrate why I think Graham has got it wrong with this example. In the United Kingdom, the National Health Service spends a certain amount of public money each year on health services including pharmaceutical drugs. They do this by following the recommendations of the National Institute for Health and Clinical Excellence (NICE), who evaluate drugs and other interventions on a cost-benefit analysis: roughly, how much do they cost for each quality-adjusted life year (QALY)? A new and experimental cancer medication that extends someone’s life by a maximum of about a year but costs £50,000 will be a harder sell than an infant vaccination that costs a few pounds and saves, say, 10% of infants from developing a life-threatening illness, because overall, the number of QALYs that are purchased with a cheap vaccination far exceeds the number of QALYs that an expensive cancer medication buys.
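
To make the arithmetic concrete, here is a back-of-envelope comparison in Python. The numbers are illustrative only: the £5 dose price and the 30 QALYs per spared infant are my assumptions for the sake of the example, not NICE figures.

    # Rough cost-per-QALY arithmetic with made-up but plausible numbers.
    def cost_per_qaly(cost_per_patient, qalys_gained_per_patient):
        """Cost of buying one quality-adjusted life year."""
        return cost_per_patient / qalys_gained_per_patient

    # Experimental cancer drug: £50,000 per patient, at most ~1 extra year.
    print(cost_per_qaly(50_000, 1.0))   # 50000.0 pounds per QALY

    # Infant vaccination: say £5 per dose, with 10% of infants spared an
    # illness that would otherwise cost them (assume) 30 quality-adjusted years.
    print(cost_per_qaly(5, 0.10 * 30))  # ~1.67 pounds per QALY

On these numbers the vaccination buys a QALY for tens of thousands of times less money, which is the whole point of the NICE-style calculation.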

If you were to contact NICE, or check their website, I’m sure that you can get detailed costings for all the various drugs they evaluate. There are facts here, and Graham’s argument applies. People can be wrong about the factual matters that NICE evaluate. A newspaper might say that a particular cancer treatment was rejected by NICE when it wasn’t. Or they might over- or under-state the cost-per-QALY evaluation.

Those are factual errors. But that’s not particularly interesting. People can and will make those kinds of factual mistakes regardless of their political beliefs. The fervent opponent of NICE-style cost-benefit analysis-based health service provision may make a typo when they copy a figure from the NICE website into an article. That means they’ve made a factual mistake. It is a fallacy to say that their political opinion is invalidated because they made a minor, typo-caused factual error.

No, I think the reason that Graham is right to reject the thesis that “no answers in politics/religion are wrong” is simply because of logic. If someone accepts some fundamental assumption about, say, the values they think drive politics, but their lower-level beliefs are inconsistent with their higher-level ideological preferences, they are “wrong” in some important way. If you have an ideological preference for a certain form of individual liberty that you place above all other values, but you then make an argument for some grossly tyrannical policy of government intrusion, you are failing to adhere to your most basic values. I can’t tell you that your basic values are wrong, but I can tell you that you fail to adhere to them in practice.

Wilfrid Sellars said that “[t]he aim of philosophy, abstractly formulated, is to understand how things in the broadest possible sense of the term hang together in the broadest possible sense of the term”. Those things include values. We can criticise how people’s stated values fail to coherently “hang together”.

I know the value of this because other people pointed out that my political views didn’t “hang together” well because my basic beliefs were in conflict with specific policies I thought vital to the flourishing of a just society. My basic beliefs were leading me to absurd conclusions, so I changed my basic beliefs. Because of this measure of how well one’s beliefs hang together, answers can be righter and wronger, even if they aren’t necessarily right or wrong.

I’ll be the first to admit that this doesn’t always work out quite so well in practice as in theory. Most political conversations do not go much beyond the abusive shouting stage, but some do. And the same is true with religious conversations: I studied philosophy and religion as an undergraduate and did have respectful, productive and interesting conversations about religion with people whose religious beliefs I fundamentally disagreed with.

Graham ignores one reason why both religious and political questions are things on which most people feel entitled to have an opinion: they affect everybody. Almost everybody lives in a society. Everybody who lives in a society will be governed under some political system. The political system they live under will affect some aspect of their lives—financial, social or personal. Ergo, most people have a pretty good reason to care about politics. You can opt out of caring about politics, but you can’t opt out of the effects of politics.

And the same is true for religion: many religions make a universal claim on humanity. If the evangelicals are right—and I don’t think they are—then I am bound to go to hell as a filthy atheist sodomite who neither believes in Christ nor follows the rules of the Bible. I am not particularly concerned about this possibility, but it is what many people believe seriously and in earnest. The content of their beliefs applies to everybody, regardless of whether you share said beliefs. If you are going to tell me what I have to believe (and the exact details of what I am and am not allowed to do with my genitals in the company of other consenting adults), I have a right to critically evaluate those claims and dissent from them if I find them wanting.

Politics and religion affect people in the world whether or not said people choose to participate in the political system or in religious communities: there is a pretty simple reason why people might have opinions about things that affect them.

But Graham has a theory about why politics and religion are such disagreeable topics for people: identity. It is worth quoting Graham at some length here.

But the more precise political questions suffer the same fate as the vaguer ones. I think what religion and politics have in common is that they become part of people’s identity, and people can never have a fruitful argument about something that’s part of their identity.

By definition they’re partisan. Which topics engage people’s identity depends on the people, not the topic. For example, a discussion about a battle that included citizens of one or more of the countries involved would probably degenerate into a political argument. But a discussion today about a battle that took place in the Bronze Age probably wouldn’t. No one would know what side to be on. So it’s not politics that’s the source of the trouble, but identity. When people say a discussion has degenerated into a religious war, what they really mean is that it has started to be driven mostly by people’s identities.

Because the point at which this happens depends on the people rather than the topic, it’s a mistake to conclude that because a question tends to provoke religious wars, it must have no answer. For example, the question of the relative merits of programming languages often degenerates into a religious war, because so many programmers identify as X programmers or Y programmers. This sometimes leads people to conclude the question must be unanswerable—that all languages are equally good. Obviously that’s false: anything else people make can be well or badly designed; why should this be uniquely impossible for programming languages? And indeed, you can have a fruitful discussion about the relative merits of programming languages, so long as you exclude people who respond from identity.

More generally, you can have a fruitful discussion about a topic only if it doesn’t engage the identities of any of the participants. What makes politics and religion such minefields is that they engage so many people’s identities. But you could in principle have a useful conversation about them with some people.

And there are other topics that might seem harmless, like the relative merits of Ford and Chevy pickup trucks, that you couldn’t safely talk about with others.

The most intriguing thing about this theory, if it’s right, is that it explains not merely which kinds of discussions to avoid, but how to have better ideas. If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible.

Most people reading this will already be fairly tolerant. But there is a step beyond thinking of yourself as x but tolerating y: not even to consider yourself an x. The more labels you have for yourself, the dumber they make you.

As I said above, I don’t think that having a stake in the game makes it impossible for you to discuss things rationally with people. Socratic questioning is one way that we can learn from each other. When I meet someone who, say, opposes gay marriage, I’ll try and draw out their reasons for doing so in a Socratic manner and question them to see if there are inconsistencies in their position. I don’t think I’m ever going to agree with their position (opposing my own right to get married is a bit of a stretch) but I do learn interesting things about why they are motivated to hold a particular position. The point is that to do that kind of Socratic questioning, you need to be able to set aside your own point of view and just look at their position on its own terms and see if it makes sense.

I also think that Graham’s prescription is deeply wrong. I am a 28-year-old man who works in London, and commutes in from suburbia every day. I’m white, I’ve got a BA and MA in Philosophy, I’m British, I’m a software developer, I’m gay, I’m a vegetarian, I’m cisgendered, I’m a Wikipedia administrator. Notice: all of these are just simple statements of fact. If Paul Graham’s statement is correct—that the more labels one has, the dumber they make you—then presumably, dropping a few of those labels would make me less dumb. But which ones? I mean, most of those have some kind of complementary set of other options. If I were not a vegetarian, I’d have some other dietary preference: a vegan, perhaps, or an omnivore. If I weren’t British, I would be some other nationality. If I weren’t gay, I’d have some other sexual orientation.

We are thrown into this world with a stack of attributes we didn’t really choose, and once the dice have been rolled, quite a few of them cannot be changed. I can’t change the country of my birth; I can change the country of my citizenship. I can’t change my sexuality (others have tried, with little success). I can’t just wake up and decide that the historical circumstances I find myself in do not apply to me. You can’t opt out of history.

The idea that the only way to not be “dumb” is to have no identity runs flat into the practice of actually being a living, breathing human being. We can’t escape our identities quite so easily. If you happen to be in a majority group, it’s a lot easier not to care about your identity than if you are in an oppressed or minority group. If you’ve never been the target of racial discrimination, you could conclude (a) that race isn’t an important factor in life and in how people see you, or (b) that you’ve just happened to luck out and been born into a privileged group while others haven’t. Yeah, being white isn’t a big deal, because you don’t end up being Trayvon Martin if you are white. Nobody tries to deny you voting rights or marriage rights.

And if you happen to want to take one of those maligned labels and reclaim it, to defend it as good and beautiful and something to be proud of? Well, fuck you, dumbo. That just proves you are stupid for buying into labels.

Because your labels make you stupid. Because only by explicitly disclaiming any part of your identity that might make you “biased” will you not be “dumb”.

Never mind that being in a persecuted or minority group might give you some experiential knowledge of what it’s like to be separate from mainstream society. No, let’s just dismiss that as some kind of postmodernist claptrap.

The problem with Graham’s prescription is that it says to anyone whose identity might actually be an issue in society that they are unable to think rationally about that aspect of their self. Back when women were trying to get the vote, people were saying exactly the same thing. How can women think clearly about whether or not they should be allowed to vote—they are just women, after all. And they can’t really think clearly like us dudes, and so if they are asking for the right to vote, they are obviously biased by their self-interest. But us guys, we aren’t biased. So, ladies, don’t worry your silly little heads about voting, and don’t apply any labels to yourself, because that’ll just make you stupid.

Graham’s idea of being smart, having good ideas, whatever one wishes to call it (the opposite of the “dumb” which he refers to) is some kind of disinterested neutrality. Good luck finding that in reality: we all have backgrounds, we all have histories, we all have identities. We bring with us a “pre-understanding” of the world shaped by our own backgrounds and circumstances; we line up this background pre-understanding with the world and a conversation plays out between the two. There isn’t a “view from nowhere”. We are not our labels, but our labels serve a purpose—to build a shared community, to fight for justice and equality. We give that up, and for what? To get us an unattainable view from nowhere? No thanks.

I’ll continue being myself, labels and all, even if it makes Paul Graham think that I’m dumb. Trying to be the human equivalent of a database row full of NULL values seems both impractical and undesirable to me.

Philosopher Colin McGinn is in some sticky trouble. You see, he has this nasty habit of making dirty jokes to research assistants. Said nasty habit has led to him getting in some very public trouble by emailing his research assistant with a bad joke about handjobs. She then went public and the proverbial shit has started flying, leading to his resignation. McGinn has decided to dig himself in deeper by writing a response.

[Image: Captain Picard saying ‘What the fuck is this shit?’]

Full context and discussion at Feminist Philosophers, New APPS (and another thread), and Metafilter. The more one looks into it, the more fucked up each piece looks.

But FauxPhilNews wins the prize for puncturing with one little pinprick the pomposity of McGinn’s response. (Oh, I said “pinprick”! How naughty and subversive! Perhaps I should have finished my Ph.D and become a professor…)

Speaking of philosophy, there’s some drama going on regarding Plantinga and Thomas Nagel who are accused of trading book reviews.

Audio of an interview with H.L.A. Hart.

There’s lots of fun going on in philosophyland over Thomas Nagel’s new book and various things related to it. As a recovering Plantingan, I feel a certain duty to point to it. A little reading list:

I’m very glad that I do not have to have an opinion on this kind of thing anymore. Not that I don’t have one.

Thomas Nagel has a review of Alvin Plantinga’s new book. If you wish to understand what I was working on before leaving my Ph.D programme, I recommend it.

Explaining philosophy for social justice warriors and/or trolls

Meet blackwomanvalues.

blackwomanvalues is a black woman. You can tell, right?

blackwomanvalues is apparently transethnic, which is Internet-speak for “I can be whatever the fuck I want, and if you don’t agree, you are just privileged”. Think transgender, except on racial and ethnic lines. There is a high probability of it being a troll. That’s fine by me. If they aren’t a troll, fine, but even if they are, I’m going to unpick a few things they say, because, well, they are interesting and they make some arguments that are quite commonly used by people who probably haven’t studied philosophy very much.

So, here’s a few choice quotes from blackwomanvalues justifying why they identify as a black woman.

It is important to note that race is a subjectively fabricated concept, with no scientifically verifiable cultural or physical characteristics shared universally within any group. Regardless of what you may perceive, there is no definitive formula for the acceptance and identification within a racial group- for objectively, they don’t exist.

Please. If you buy in to this kind of argument, you are conflating different things. That something is socially constructed doesn’t mean it isn’t real in some important sense. Money is socially constructed. It’s pretty arbitrary that when I’m in the United Kingdom I can exchange bits of paper with the Queen’s face on them for goods and services, while in the United States, I use money with pictures of Lincoln and Franklin and Washington on them. Are there some “scientifically verifiable cultural or physical characteristics shared universally” by money and not by non-money? Well, don’t you dare say “a special type of pigment”, because coins are money too, and so are these funny plastic credit and debit cards we carry around. The key thing that distinguishes money from non-money is that money can be used as a method for exchanging value. That function is granted to it by linguistic and social means.

Is Barack Obama scientifically different ten minutes after being elected President from how he was ten minutes before? No, but there is an important social distinction: he is now the President.

Saying that because something doesn’t scientifically exist, in the sense of being undetectable by laboratory methods, it therefore doesn’t objectively exist is ridiculous—and it misunderstands the way that science operates at multiple layers of explanation. If you honestly buy into this argument and still get pissy when someone steals a twenty-dollar bill from your wallet, congratulations: you are a hypocrite.

And here’s another thing.

In this case, the pre-englightenment philosopher Rene Descartes statement “Cogito ergo sum”, “I think, therefore I am”, is an important contributing factor to my identification, aided with internal feelings of belonging and similarity.

Descartes’ statement of the cogito, or Descartes’ cogito argument. Oh, fuck, I’m gonna have to explain this, aren’t I?

Imagine you are a 17th-century philosopher. You set yourself the task of doubting all things. Methodological doubt is your challenge: you want to try to doubt everything and see how far you can take it. Are you sitting at a computer reading something? Well, yes, obviously, but what if it were not true? How can you know it isn’t true? Descartes got right back to the very basics. If you wish to doubt that the external world exists, that’s fine. Perhaps you are having your mind manipulated by an evil demon; these days, you are hooked up in some ghastly contraption with some neuroscientists electrically stimulating your brain. All very Matrix-like. But let’s go one further: what if you didn’t exist at all? You may be a brain in a vat or a victim of the evil demon, but at least you are thinking. There is something, whether it’s a brain or a wibbly-wobbly soul thing or a computer process, and it’s got some kind of consciousness and some intentionality. You know that you are thinking, and you can think about things, like the fact that you can think. Cogito ergo sum: I think, therefore I am. This isn’t a license to believe anything; it’s a response to skepticism about one’s very own existence.

The translation of sum to “I am” is problematic: in English we rather naturally prefer not to use the sentence “I am” alone, preferring an expansion into sentences of the form “I am x”: think of sentences like “I am a vegan”, “I am watching TV”, “I am six foot tall”, “I am gay” and “I am a citizen of the world”.1 Saying “I exist” would be a much better translation, as it is less likely to cause the sort of confusion that sum has.

The problem with using the cogito to justify anything beyond the philosophically basic task of proving that there is a subject is that it leads you to obviously false conclusions. If you think cogito-style reasoning can justify “I think, therefore I’m a black woman in a white guy’s body” (or some other similarly absurd value), then you can substitute anything in the place of x in the sentence “I think, therefore I’m x”. You may want to try and mitigate this problem by switching it to “I think I’m x, therefore I’m x”. This doesn’t work either.2 We have situations where we make mistakes. I think I’m looking at a crooked stick, but it’s not crooked. Visual illusions exist. To argue from the certainty implicit in a cogito argument to the justification of anything you happen to think is to make yourself epistemically infallible, that is to say your beliefs can never be wrong. Whether or not you find the cogito convincing, concluding your own infallibility on the basis that one thinks is a conclusion so obviously absurd that one must have made a mistake in one’s understanding.
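
For the formally inclined, the structure of the mistake can be put in symbols. This is my notation, not anything from the original: read Bφ as “I think/believe that φ”.

    % Descartes' actual move: from having a thought, infer a thinker.
    B\varphi \vdash \exists x\,\mathrm{Thinks}(x)

    % The inflated version: from thinking that phi, infer phi itself.
    B\varphi \vdash \varphi \quad \text{(for arbitrary } \varphi\text{)}

    % The inflated schema collapses: believe both phi and its negation,
    % and it licenses a contradiction.
    B\varphi,\; B\neg\varphi \vdash \varphi \wedge \neg\varphi

The first line is all the cogito gives you; the second is what the troll needs; the third is why nobody should grant it.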

So, yeah, two bad arguments. They may have been given by a troll in this case, but they are used quite often. And they probably ought not to be. I shall now be on tenterhooks, waiting, desperately, for someone to point out that I’m exercising my “educated privilege”.

  1. A counter-example: someone says “who is going to London tomorrow?”, and you respond “I am”. It sort of proves my point though, because the x in “I am x” is pretty much implicit by dint of being an answer to a question. Once you account for the pragmatics of the way the sentence is being used, it expands quite naturally to “I am going to London tomorrow”, which is a sentence of the form “I am x”.

  2. Plus, that doesn’t actually work. The whole point of the cogito is that you are deriving existence from the fact that you have some mental content. Any rhetorical force of appealing to the Cartesian insight is lost. A modified cogito where you attempt to conclude that you exist and have some property x because you think is arbitrary (for what values of x is that kind of argument satisfactory? What happens if you have two logically incompatible values of x, like x and not-x? The argument proves them both). And a super-duper modified cogito such that you can plausibly get the content of the conclusion from the content of the premise fails because (a) it isn’t really a cogito any more and (b) it commits you to epistemic infallibilism, which I think we have very good reasons to reject.

Of marriage privatization, libertarians and ahistoricism

I hope you’ve all been watching the gay marriage stuff in the media. I’ve sent off my response to the consultation, and I hope you will send yours too.

It’s all jolly good fun and japes, having consultations and voting over whether or not to grant equal rights. And, of course, the Church and the Campaign for Marriage and so on are looking like utterly despicable fools.

Anyway, in amongst all this, if you go out on to the wilds of the Interwebs, or occasionally amongst political commentators and academics, you might get wind of the libertarian reaction to marriage equality, which can be summed up with the slogan “the state shouldn’t be in the marriage business”.

If you haven’t come across this argument before, here are some examples: Michael Kinsley makes it from a non-libertarian perspective, and David Boaz from a libertarian one. I know Cass Sunstein has advanced a similar argument. For a broad overview, see Wikipedia, which labels it “marriage privatization”, which I guess is as good a term as any.

Incidentally, Michael Sandel often uses it as an example when illustrating Aristotelian moral theory, and it’s a good example for that. According to the Guardian, Sandel doesn’t endorse the argument, instead thinking that the state has a duty to promote virtue, and letting same-sex partners marry does that.

What ought we make of people like Kinsley and Boaz? Obviously, they aren’t homophobes or bigots. As Boaz points out, he would vote for same-sex marriage at a state level, while believing that marriage ought to be a function that isn’t handled by the state. Nothing in the argument commits you to the view that gay people are second-class citizens or any other overtly homophobic view. And, well, as Stephen Fry might say, that’s nice. I’d rather live in a world where people are having a polite debate about whether or not the state should be in the marriage business than in a world where they are denying gay people their rights and dignity. So, yeah, that’s nice.

Intellectually, I don’t know whether I agree with the argument, because ethically, I’m rather unsure about my fundamental ethical starting points. When I was a full-on libertarian, I’d have an easy, off-the-shelf answer to these kinds of problems. (Then, of course, I finished puberty and realized that perhaps the world was more complicated than libertarian writers made it out to be.)

There’s definitely intellectual merit to the argument, and some practical merit too. If widely accepted, it’d solve a shit ton of problems: it’d obviously mean there wouldn’t be any inequality between heterosexual and homosexual people over marriage, it’d also mean that similar kinds of unions could be available for polyamorous/polygamous people, and there would be no downside to not being married. That is, the state wouldn’t really be able to offer some benefit only to those who are married. There’d be no state-level discrimination between a bunch of people living in a commune and a straight monogamous couple that are currently married.

There is, of course, an Aristotelian critique of this kind of thing, and I’d point interested readers towards communitarian critiques of liberalism—Sandel, MacIntyre, Etzioni, etc. If you have the full-on libertarian blinders on like I used to, you’ll just dismiss that kind of moral reasoning out of hand. But I’m not really going to discuss that much, because frankly that’s not the primary objection I have to it (I haven’t read enough communitarian moral theory to know whether or not I endorse that approach). The Aristotelian objection can be stated rather snarkily like this: “Oh, you want to privatize marriage? You know that marriage is rather a different kind of thing from a telecommunications company, right?”

There are practical objections to marriage privatization: if marriage were privatized, the state would still be doing a bunch of functions that it currently does for married people—pension provision and regulation, access to healthcare services, access to private records, regulations on banking for joint accounts, and other benefits or services provided to married people differently from non-married people. Without marriage or with privatized marriage, the state would have to decide how to provide those services and under what conditions: simply saying that the state is out of the marriage business doesn’t mean the state doesn’t have to decide which types of marriage-like unions are deserving of special status. There’s a whole barrel of worms there, some of which can only be answered with Yet More Libertarianism. (Remember, in libertarian logic, the answer to problems with libertarianism is always more libertarianism, the answer to market failures is even freer markets.)

But my concern isn’t even the practical ones, although those are tough. The issue I have is a very simple political one.

Even if this view is correct, and even if it’s convincing, it’s completely irrelevant. It isn’t a viable political alternative to the status quo. However compelling free market marriage or “getting the state out of the marriage business” is, it isn’t going to happen. It’s taken a boatload of hard work since the 1960s to convince people that gay people deserve rights, and we are actually on the cusp of getting marriage equality… but the suggestion is that we—actual human beings who care about gay rights—shouldn’t be bothering with that, and should instead be fighting for marriage privatization.

Instead of having to convince the heterosexual majority of equal rights, we need to persuade them that they need to stop being married altogether and start having denationalized contracts or whatever one might call these non-state-endorsed cluster of marriage-esque things.

Because you know what they’ll say? Yeah, go fuck yourself. Okay, they might be a bit more polite. Either way, politically, it’s impossible. Intellectually, it’s an interesting thing to discuss, but politically it’s a no-hoper.

This is one of the issues with libertarian argument: it is often ahistorical; it just derives policy from a bunch of a priori commitments. Which is fine, but we aren’t ahistorical: we are real, existing people in a particular region of space and time, with historical backgrounds, with real interests in this world. Marriage privatization might be lovely, but given that there are real, existing gay people who are being put at a disadvantage now by not having marriage equality, “hey, there’s a wonderful libertarian solution to this” sounds good except that it isn’t actually a solution, it’s just rhetoric.

To suggest to gay people who are fighting for marriage equality to stop and instead fight for marriage privatization is asking a historically marginalized group of people to give up the fight for a practical real-world change that can improve their lives—our lives—now in order to fight for a pie-in-the-sky libertarian policy proposal that has absolutely no hope of ever going anywhere.

Perhaps in 50 years’ time, people will have come around to marriage privatization and we’ll have formally disestablished the Church of England, and then we’ll just have a world of rational actors wandering around freely entering into contracts with one another, stopping only momentarily from servicing their rational self-interest in order to offer a moment’s thanks to Ayn Rand. Maybe in a libertarian society, gay people will be treated with exactly the same liberty as everybody else. Great.1 But we don’t live in Libertopia; we live in this world, in this reality, with this government. And marriage equality makes that reality less awful by making marriage recognize gay relationships and gay love as equal.

You want marriage privatization? Convince the existing married straight people. Make it a real, live political option, then we’ll talk. But until that point, don’t expect gay people to give up on the fight for marriage equality in order to support marriage privatization.

Postscript If you wish to see a good example of a “marriage privatization” advocate who has grappled with the issues well, try Russell Blackford. Blackford seems to understand that intellectual assent to the privatization argument isn’t enough, and that it isn’t some kind of Solomonic third way in the gay marriage debate. The problem with the marriage privatization argument isn’t that it’s wrong or a bad approach; it’s that rushing towards it now is done at the expense of real-world steps that can increase equality (like, say, full marriage equality for same-sex couples).

  1. Or maybe they won’t. There’s nothing to stop a libertarian society deciding that hating gay people is just fine, so long as it is an uncoerced free market of opinion that decides as much. Just as in a libertarian society, it would be perfectly fine and dandy for everyone to decide they hated black people and not provide them with any goods or services.

Euthanasia, competency, and paternalism

I’ve been working on euthanasia related articles on Wikipedia recently, specifically Dignity in Dying. Mostly, I’ve just been adding in historical information and so on, hoping that I can serve the role of a philosophically reasonably well-informed editor rather than being a “POV warrior”.

But I cannot help but step in and express an opinion.

Most of those who oppose a change in the law on euthanasia (sorry, “assisted dying”) in Britain seem to accept that ‘passive’ euthanasia is acceptable. There aren’t people out there thumping their chests too much about the grand injustices of persistent vegetative state cases like Airedale NHS Trust v. Bland or the Terri Schiavo debacle, nor is there much opposition to people making advance directives, living wills or DNR orders. The bad old days of medical paternalism are supposedly gone, and patients are allowed to refuse treatment ahead of time if they are competent (under the Mental Capacity Act 2005) to give informed consent. There is also a fairly uniform consensus that analgesic care that hastens inevitable death is acceptable under the principle of double effect.

The criticism made of attempts to reform the law in the United Kingdom is that such attempts do not provide adequate safeguards against coercion: the money-grubbing buggers want their rich old aunt’s inheritance and so coerce her into ‘voluntary’ euthanasia. Or there is the more subtle “don’t want to be a burden” coercion, where people do not wish to die but feel a “duty to die”.

But surely the same argument works against advance directives, living wills and DNR orders. The doctrine of double effect provides convenient moral cover for the doctor: he isn’t killing, he is easing pain, and death is merely a side effect. But said double effect dodge doesn’t work for the coercive money-grubbing inheritors or the subtle coercers. Surely, if there is a problem with active euthanasia, a similar problem exists for a variety of forms of passive ‘euthanasia’.

In the greedy inheritors scenario, they might encourage rich auntie to drink herself into oblivion, or smoke cigarettes or take up dangerous sports in her old age. And the good old principle of double effect applies to that too: they are only encouraging her to enjoy the hedonistic joys of excessive alcohol consumption, the liver cancer is just an undesirable side effect.

If we can accept that a patient is competent enough to instruct a doctor to shorten their life passively, with the doctrine of double effect as an ethical fig leaf, then why can we not accept that they are competent enough to instruct a doctor to shorten their life actively in a similar scenario?

There may still be reasons to oppose voluntary euthanasia, but there seems to be a rather large inconsistency here in the case for the opposition.

A bit more: I should expand on this. My argument thus far is that if we accept someone can be competent to consent to passive euthanasia, they should be considered competent to consent to active euthanasia; and that objections to the idea that people could be competent to consent to active euthanasia mostly (I’m not going to rule out all possible objections a priori) also apply to the possibility of competently consenting to passive euthanasia.

Given this, might we still have grounds to object to active euthanasia? I’d say it’s possible. I can’t think of any good grounds, but my argument leaves space for the possibility of grounds to object to active euthanasia in spite of the possibility of competent consent.

You can see this simply by analogy: imagine a man turned up at the doctor’s office asking to have his genitals removed for a reason that isn’t considered medically valid:1 both his testicles and penis. The doctor could of course challenge his ability to consent to such a procedure: “he must be mad!” But upon closer inspection, he finds out that the patient is competent. He perhaps uses some sci-fi MacGuffin device and finds out that the patient really is completely competent and able to consent.

But he may still have very good reason not to remove the man’s genitals. It would be dangerous and medically pointless, the man would lose function, it would go outside the scope of medical practice, and so on. The argument I’ve made doesn’t mean allowing active, voluntary euthanasia is right (although I lean towards that view), but simply that one particular type of argument opposed to it seems very flawed.

  1. There are of course medically valid reasons to do so.

On missing links

Creationists frequently like to shout about “missing links” in the fossil record. I came up with an analogy today which I think explains exactly why you shouldn’t get trapped in this bad piece of thinking.

If creationists were philosophers rather than dull-witted fools, we’d dignify it with the word ‘paradox’. In fact, it has some similarity in form to Zeno’s paradox.

But here’s the analogy.

Someone says to you: “there are missing links between New York and Los Angeles”.

So you get out a map, find somewhere roughly between the two cities, and explain that you’ve found the missing link:

“Tulsa, Oklahoma!”

You have now duplicated the problem: you’ve gone from having one missing link to having two: there’s a missing link between Tulsa and New York and another missing link between Tulsa and LA.

So you then suggest Knoxville, Tennessee and Albuquerque, New Mexico.

You’ve now got four missing links.

You can keep going. Flagstaff, Arizona. Amarillo, Texas. Memphis, Tennessee. Lynchburg, Virginia.

Each time you find a missing link, you’ve created more missing links. Eventually, you won’t be able to reach Los Angeles because there’s a missing link between Manhattan and Hoboken, and your creationist friend pulls up some ad hoc objection when you say “the Lincoln Tunnel”. By playing the game of ‘missing link’ finding, you have already lost; and needlessly, because we have enough links in the chain to know that evolution happens.
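
If you would like to watch the regress happen, here is a small Python sketch of it. It is my illustration, with placeholder city names: naming a midpoint inside every gap doubles the number of gaps each round.

    # Simulate the missing-link regress: filling every gap with a named
    # midpoint turns n gaps into 2n gaps. City names are placeholders.
    route = ["New York", "Los Angeles"]  # one gap to start

    def fill_every_gap(route, round_number):
        """Insert a named midpoint into every gap along the route."""
        filled = [route[0]]
        for i, city in enumerate(route[1:]):
            filled.append(f"midpoint-{round_number}-{i}")
            filled.append(city)
        return filled

    for round_number in range(1, 5):
        route = fill_every_gap(route, round_number)
        print(f"round {round_number}: {len(route) - 1} missing links")
    # round 1: 2 ... round 2: 4 ... round 3: 8 ... round 4: 16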

Missing links are a curious thing: the more missing links you find, the more you still have to find. In Zeno’s paradox, Achilles must reach an infinite number of points between him and the tortoise, therefore he can never overtake the tortoise. The evidential demands of the creationist are never satisfiable, just as the atlas maker can never satisfy someone who wants to know about all the missing links. Eventually you have to get down to a certain level of “yes, it’s there, deal with it”. You can drive to Los Angeles despite Zeno-style objections.

And once you’ve piled up enough evidence to show that evolution is possible (and I think that has been done), missing link objections become purely theoretical objections. They make for good rhetoric. “There are missing links!” is a pretty nifty soundbite. Sadly, some people seem to think the set of good arguments is exactly equal to the set of good soundbites.

Evolution proceeds by a smooth progression with hundreds of thousands of intermediaries. It is just like our drive from New York to Los Angeles: there may be stretches at the beginning and end of the trip with lots of settlements and geographical features we can point to in order to say “here’s a link between these two cities”, and there will be large chunks of the route (like I-40, passing through Texas, New Mexico, Arizona and eventually through the Mojave Desert) where you are uninterrupted by towns or hamlets for miles and miles. And then suddenly you get down into San Bernardino and all your missing links are there: streets, cities, blocks, things with names. Despite the creationist telling you about all these missing links, you can sit on a beach in Malibu to relax after your long drive from New York, even if you can’t give a distinct name to each piece of sand you drove past in Arizona or every drop of water you saw in the Mississippi River.

(Postscript: Google Maps tells me the best way is to go via Columbus, Ohio and Indianapolis, then through to Springfield, Missouri. I had mentally planned a more southern route: Charlottesville, Virginia, then Knoxville, through Arkansas, then on to Tulsa. I used the analogy of driving across the U.S. because one day I’d love to drive a camper van across America for a few months.)

I'm not an experience-seeking user, I'm a meaning-seeking human person

After an evening of cynicism last night – reading a bloody awful article by a pompous twit, travelling on bloody slow trains, then logging on to Twitter and seeing a bunch of bloody fools debating things they are completely ignorant of without even a modicum of philosophical charity – I found something which restored my trust in the human race: psd’s talk at Ignite London. It combines giving naughty link-breaking, data-sunsetting corporate types a spank for misbehaviour with an admiration for I Spy books. I had I Spy books as a kid, although mine were products of the late 80s/early 90s and featured the Michelin Man, though not in nearly as intrusively corporate a way as Paul’s slides of present-day I Spy books suggest. Do forgive me: I’m going to do one of those free-associative, meditative riffing sessions that you can do on blogs.

The sort of things Paul talks about underlie a lot of the things I get excited about on the web: having technology as a way for people to establish an educational, interactional relationship with the world around them, to hack the world, to hack their context, to have the web of linked data as another layer on top of the world. The ‘web of things’ idea pushes that too far in the direction of designed objects (or spimes or blogjects or whatever the current buzzword is), and the way we talk about data and datasets and APIs makes it all too tied to services provided by big organisations. There’s definitely some co-opting of hackerdom going on here that I can’t quite put my finger on, and I don’t like it. But that’s another rant.

I’ve been hearing about ‘gamification’ for a while and it irritates me a lot. Gamification gets all the design blogs a-tweeting and is a lovely refrain used at TED and so on, but to me it all looks like “the aesthetic stage” from Kierkegaard applied to technology. That is, turning things into games and novelties in order to mask the underlying valuelessness of these tasks. Where does that get you? A manic switching between refrains. To use a technological analogy, this week it is Flickr, next week it is TwitPic, the week after it is Instagram. No commitment, just frantic switching based on fad and fashion. Our lives are then driven by the desire to avoid boredom. But one eventually runs out of novelties. The fight against boredom becomes harder and harder and harder until eventually you have to give up the fight. There’s a personal cost to living life as one long game of boredom-avoidance, but there’s also a social cost. You live life only for yourself, to avoid your boredom, and do nothing for anybody else. Technology becomes just a way for you to get pleasure rather than a way for you to contribute to something bigger than yourself.

In Kierkegaard’s Either/Or, the alternative to this aesthetic life was typified by marriage. You can’t gamify marriage, right? You commit yourself for life. You don’t get a Foursquare badge if you remember your anniversary. The alternative to aestheticism and boredom is an ethical commitment. (And, for Kierkegaard anyway, ultimately a religious commitment.1) And I think the same holds true for the web: you can gamify everything, make everything into Foursquare. Or you can do something deeper and build intentional, self-directed communities of people who want to try and do something meaningful. Gamification means you get a goofy badge on your Foursquare profile when you check into however many karaoke bars. A script fires off on a server somewhere and a bit changes in a database, you get a quick dopamine hit because an ironic badge appears on your iPhone. Congratulations, your life is now complete. There’s got to be more to life and technology than this. If I had to come up with a name for this alternative to gamification that I’m grasping for, it would be something like ‘meaning-making’.

Gamification turns everything into a novelty and a game (duh). Meaning-making turns the trivial into something you make a commitment to for the long haul; it turns the things we do on the web into a much more significant and meaningful part of our lives.

Inasmuch as technology can help promote this kind of meaning-making, that’s the sort of technology I’m interested in. If I’m on my deathbed, will I regret the fact that I haven’t collected all the badges on Foursquare? Will I pine for more exciting and delightful user experiences? That’s the ultimate test. You want a design challenge? Design things people won’t regret doing when they are on their deathbed, and design things people will wish they had done more of when they are on their deathbed. Design things that one’s relatives will look back on in fifty years and express sympathy for. Again, when you are dead, will your kids give a shit about your Foursquare badges?

A long time ago, I read a story online about a young guy who got killed in a road accident. I think he was on a bike and got hit by a car while riding home from work. He was a PHP programmer and ran an open source CMS project. There was a huge outpouring of grief and support from people who knew the guy online, from other people who contributed to the project. A few people clubbed together to help pay for two of the developers to fly up to Canada to visit his family and attend the funeral. They met the guy’s mother, and she asked them to explain what it was that he had been involved in. They explained, and in the report they e-mailed back to the project, they said that the family eventually understood what was going on, and that it brought them great comfort to know that the project their son had started had produced something being used by individuals and businesses all over the world. This is open source: it wasn’t paid for. He was working at a local garage, hacking on this project in between pumping petrol. But there was meaning there. A community of people who got together and collaborated on something. It wasn’t perfect, but it was meaningful for him and for other people online. That’s pretty awesome. And it’s far more interesting to me to enable more people to do things like this than it is to, I dunno, gamify brands with social media or whatever.

This is why I’m sceptical about gamification: there are enough fucking pointless distractions in life already; we don’t need more of them, however beautiful the user experiences are. What we do need more of is people making a commitment to doing something meaningful and building a shared pool of common value.

And while we may not be able to build technologies whose meaning-making matches, say, the importance of family or friendship or some important political commitment like fighting for justice, we should at least bloody well try. Technology may not give us another Nelson Mandela, but I’m sure that with all the combined talent I see at hack days and BarCamps and so on, we can do something far more meaningful than Google Maps hacks and designing delightful user experiences in order to sell more blue jeans or whatever the current equivalent of blue jeans is (smartphone apps?).

The sort of projects I try to get involved in have at least seeds of the sort of meaning-making I care about.

Take something like Open Plaques: plenty of people spend their weekends travelling around the towns and cities of this country finding blue memorial plaques, photographing them, publishing those photos with a CC license and listing them in a collaborative database. No, you don’t get badges. You don’t get stickers, and we don’t pop up a goofy icon on your Facebook wall when you’ve done twenty of them. But you do get the satisfaction of joining with a community of people who are directed towards a shared meaningful goal. You can take away this lovely, accurate database of free information, free data, free knowledge, whatever you want to call it. All beautifully illustrated by volunteers. No gamification or fancy user experience design will replicate the feeling of being part of a welcoming community driven by the desire to build something useful and meaningful without a profit motive.

The same is true with things like Wikipedia and Wikimedia Commons. Ten, fifteen years ago, if you were carrying around a camera in your backpack, it was probably to take tourist snaps or drunken photos on hen nights. Today, you are carrying around a device which lets you document the world publicly and collaboratively. A while back I heard Jimmy Wales discussing what makes Wikipedia work and he said he rejected the term ‘crowdsourcing’ because the people who write Wikipedia aren’t a ‘crowd’ of people whose role is to be a source of material for Wikipedia: they are all individual people with families and friends and aspirations and ideas, and writing for Wikipedia was a part of that. As Wales put it: they aren’t a crowd, they are just lots of really sweet people.

What could potentially lead us into more meaning-making rather than experience-seeking is the cognitive surplus that Clay Shirky refers to. The possibilities present in getting people to stop watching TV and to start doing something meaningful are far more exciting to me than any amount of gamification or user experience masturbation, but I suspect that’s because I’m not a designer. I can see how designers would get very excited about gamification because it means they get to design radically new stuff. They get to crack open the workplace, rip out horrible management systems and replace them with video games. Again, not interested. The majority of things which they think need to be gamified either shouldn’t be, because they would lose something important in the process, or they are so dumb to start with that they need to be destroyed, not gamified. The answer to stupid management shit at big companies isn’t to turn it into a game, it’s to stop it altogether and replace the management structure with something significantly less pathological.

Similarly, I listen to all these people talking about social media. Initially it sounded pretty interesting: there was this democratic process waiting in the wings that was going to swoop in and make the world more transparent and democratic, and give us the odd free handjob too. Now, five years down the line, all we seem to be talking about is brands and how they can leverage social media and all that. Not at all interested. I couldn’t give a shit what the Internet is going to do to L’Oreal or Snickers or Sony or Kleenex or The Gap. They aren’t people. They don’t seek meaning; they seek to sell more blue jeans or whatever. I give far more of a shit what the Internet is doing for the gay kid in Iran or the geeky kid in rural Nebraska or a homeless guy blogging from the local library than what it is doing for some advertising agency douchebag on Madison Avenue.

One important tool in the box of meaning-making is consensual decision making and collaboration. There’s a reason it has been difficult for projects like Ubuntu to improve the user experience of Linux. There’s a reason why editing Wikipedia requires you to know a rather strange wiki syntax (and a whole load of strange social conventions and policies – you know, when you post something and someone reverts it with the message “WP:V WP:NPOV WP:N WP:SPS!”, that’s a sort of magic code for “you don’t understand Wikipedia yet!” See WP:WTF…). The reason is that those things, however sucky they are, are the result of communities coming together and building consensus through collaboration. The result may be suboptimal, but that’s just the way it is.

Without any gamification, there are thousands of people across the world who have stepped up to do something that has some meaning: build an operating system that they can give away for free. Write an encyclopedia they can give away for free. All the gamification and fancy user experience design in the world won’t find you people who are willing to take up a second job’s worth of work to get involved in meaningful community projects. On Wikipedia, I see people who stay up for hours and hours reverting vandalism and helping complete strangers with no thought of remuneration.

It may seem corny, and it’s certainly not nearly as big an ethical commitment as the sort Kierkegaard envisioned, but this kind of commitment is something I think we should strive towards, and help others towards too. And I think it is completely at odds with gamification, which basically seeks to turn us all into cogs in some kind of bizarre Skinner-style experiment. We hit the button not because we are getting something meaningful out of it, but because we get the occasional brain tickle of a badge, or get to climb up the leaderboard, or get seventeen ‘likes’ or RTs or whatever. Gamification seems to be about turning these sometimes useful participation techniques into an end in themselves.

Plenty of the things which make meaning-making projects great are things any good user experience designer would immediately pick up and grumble about and want to design away. Again, contributing to the Linux kernel is hard work. Wikipedia has that weird-ass syntax and all those wacky policy abbreviations. Said UX designer will really moan about these and come up with elaborate schemes to get rid of them. And said communities of meaning will listen politely. And carry on regardless. Grandma will still have a difficult time editing Wikipedia.

When I listen to user experience designers, I can definitely sympathise with what they are trying to do: the world is broken in some fundamental ways, and it is certainly a good thing there are people out there trying to fix that. But some of them go way too far and think that something like “delight” or that “eyes lighting up” moment is the most important thing. If that is all technology is about, we could achieve it much more easily by just hooking people up to some kind of dopamine machine: technology could give us all our very own Nozickian experience machine and let us live the rest of our lives tripped out on pleasure drugs. I read an article a while back that reduced business management to basically working out how to give employees dopamine hits. Never mind their desire for self-actualization; never mind doing something meaningful. Never mind that the vast majority of people opt for reality, warts and all, over Nozick’s experience machine – the real world has meaning.

The failure of meaning-making communities to value user experience will seem pretty bloody annoying, if only to designers. There are downsides to this. It sucks that grandma can’t edit Wikipedia. It sucks that Linux still has a learning curve. Meaning-making requires commitment. It can be hard work. It won’t be a super-duper, beautiful, delightful user experience. It’ll have rough edges. But that’s real life.

A meaningful life is not a beautiful user experience. A meaningful life is lived by persons, not users. The upside is that these persons are engaged, meaning-seeking, real human beings, rather than users seeking delightful experiences.

That’s the choice we need to make: are technologists and designers here to enable people to do meaningful things in their lives in community with their fellow human beings or are they here as an elaborate dopamine delivery system, basically drug dealers for users? If it is the latter, I’m really not interested. We should embrace the former: because although it is rough and ready, there’s something much more noble about helping our fellow humans do something meaningful than simply seeing them as characters in a video game.

This post is now on Hacker News, and Kevin Marks has written it up on the Tummelvision blog.

  1. This is one thing I disagree with Kierkegaard very strongly on. But not for any high-falutin’ existentialist reasons: I just don’t believe in God, and more importantly, I don’t believe in the possibility of a teleological suspension of the ethical, which makes the step to the religious stage of existence rather harder! I’m not even sure I’m in the ethical. It could all be a trick of my mind, to make me feel like I’m some kind of super-refined aesthete. Or it could be rank hypocrisy. But one important thing to note here is that the aesthetic, ethical and religious stages or spheres of existence are, for Kierkegaard, internal states. The analogies he uses don’t necessarily map onto the spheres. So, you don’t have to be the dandy-about-town, seducing women and checking into Foursquare afterwards, to be in the aesthetic. If you are married, that doesn’t mean you are in the ethical stage. Nor does being overtly religious or, rather, pious mean you are in the religious stage. Indeed, the whole point of Kierkegaard’s final writings, translated into English as the Attack Upon Christendom, is that Danish Lutheranism was outwardly religious but not inwardly religious in any true sense.

Massimo Pigliucci has written an excellent review of Sam Harris’ book on morality.

On morality

John Lawrence Aspden has an interesting post on morality. As someone who has studied moral theory a bit (not as much as other areas of philosophy like metaphysics and epistemology), I thought I might be able to enlighten the discussion a little.

I read the other day that thousands of years of philosophical thought had produced three ethical schools, and that they were called utilitarian ethics, deontological ethics, and virtue ethics.

I hadn’t heard that there were three answers. Ten minutes of research seems to indicate that they can be characterized as: ‘act for best consequences’, ‘follow rules’, and ‘be virtuous of character’?

There are at least a few others missing: divine command theory – do what God says; intuitionism – do what your intuitions tell you. There are probably a few more out there in the books and journals, but that covers most of the theories. And they are all a bit more complicated than these potted summaries suggest.

Let’s start with consequentialism, the family of theories of which utilitarianism is a member. John dismisses these:

Which leaves only ‘act for best consequences’, but of course, we need to say who is to judge the best consequences. If the judger is me, then surely that’s the definition of evil? If the judger is some sort of average of everyone, then it defines a sort of altruism. I don’t like either of those.

This is an interesting objection, but it isn’t a persuasive one. To see why, you have to see what utilitarians actually say. Bentham reckoned that a utilitarian calculus could be constructed such that you could calculate the effects of an action on everyone. Such a calculus would be an objective, shared, non-contextual tool that lets you measure the consequences of actions in the world. As with mathematics, from which the name of the calculus derives, two people should be able to weigh up the same consequences and come to the same conclusion. A better wording of the utilitarian principle is “act for the greatest good of the greatest number”.

Take the classic easy case for the utilitarian: you are walking along in a brand new suit with some very expensive shoes and you see a small child struggling in a pond. You immediately see that she is quite likely to drown, but the water is shallow. You can quite easily step into the pond and save her from an untimely death but it is quite likely that you will ruin your shoes and maybe your clothes too. Peter Singer and others have argued that it would be very, very wrong for you to not save the child. A child’s life is worth more than even the most expensive clothes and shoes.

The utilitarian will say that you can calculate the costs to yourself – the cost of replacing the shoes and clothes, the coldness of the water against your skin, maybe the cost of missing whatever you are on your way to, or being late to it, or arriving in waterlogged shoes and spoiled clothes, perhaps having to make a witness statement to the police, or using up a small amount of your phone’s battery to call an ambulance – and weigh them against the rewards you get – the feeling of satisfaction, the shiny medal given to you by the mayor, the gratitude of the parents, possible reciprocal effects (the idea that your actions might inspire others to act in a morally brave or helpful way) – and the obvious good effects for others: for the child, for the child’s family and friends, for society overall.

If the utilitarians are right, the utilitarian calculus could be done by anyone. That’s not to say that everyone in your place would act the same way. It’s not saying that if you were to substitute another person for the suited passer-by, they would come to the same decision, but rather that another person would be able to do the same calculation of costs and benefits and come to the same conclusion if presented with the same facts.
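To make the “same facts, same conclusion” point concrete, here is a minimal sketch in Python. The utility figures are entirely invented for illustration – Bentham supplied no such price list – and the helper function is mine, not anyone’s canonical calculus; the only thing the sketch shows is that the calculation is a pure function of its inputs, so anyone fed the same figures reaches the same verdict:

    # A toy felicific calculus. The numbers are purely illustrative
    # assumptions, not Bentham's; the point is determinism: same
    # inputs, same verdict, whoever runs the calculation.

    def net_utility(costs: dict[str, float], benefits: dict[str, float]) -> float:
        """Sum the benefits and subtract the costs; positive means 'act'."""
        return sum(benefits.values()) - sum(costs.values())

    # Hypothetical figures for the drowning-child case.
    costs = {
        "ruined suit and shoes": 500.0,
        "cold, wet discomfort": 10.0,
        "being late": 20.0,
    }
    benefits = {
        "child's life": 1_000_000.0,  # dwarfs everything else
        "relief of family and society": 10_000.0,
        "rescuer's satisfaction": 50.0,
    }

    if net_utility(costs, benefits) > 0:
        print("Wade in and save the child.")
    else:
        print("Walk on by.")  # no remotely sane numbers get you here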

The utilitarian is not ignorant of the fact that people will come to different conclusions: they are saying that, with maximal knowledge, people would ideally act for the best consequences. You can subscribe to an ethical theory without believing that you’ll always obey it. In fact, that seems to be something of a feature of moral acts: for something to be a moral act, you actually need to make some kind of effort. One of the reasons I’m not a moral vegetarian is that it isn’t something where I am working against my own inclinations: I’m a vegetarian because I don’t like meat; the moral objections I could have to eating meat are not my primary reason for not eating meat.1,2

Let’s deal with another example: if I find myself on a Saturday morning lying in bed, I may be rather enjoying myself doing exactly nothing. But just down the road are considerable quantities of litter. It would promote the general good if I were to spend my morning removing the litter from the side of the road, and it would come at little cost to me: I wouldn’t be able to lie in bed and read Twitter (I might also risk getting hit by a car while walking on the road, or encounter some vicious snake on the verge – not that there are many in rural Sussex). But I would get some exercise, perhaps feel good for helping others, and others would benefit aesthetically from the lack of litter. I would reduce the cost to the council of cleaning litter from the verges (and thus perhaps reduce by a negligible amount the taxes we all have to pay), and I would perhaps increase the value of properties in the area. I might even save some poor little animals from dying a horrific death inside an empty crisp packet or something equally good.

That’s all well and good, but I still don’t feel an obligation to do this, and piling on more benefits or reducing the costs even further does not change that. That something can be utilitarian-good yet still impose no obligation on me to do it seems to suggest that morality might be a little more complicated than this.

There are some other problems too. Take the little girl in the pond example again. I’m walking along, and before acting, am I supposed to weigh up these costs and benefits to determine whether it is the right thing to do? That takes time and energy. And I’m supposed to do this cost-benefit analysis before I act or, I guess, don’t act. Some utilitarians have responded to this and other critiques3 by suggesting that we instead apply the utilitarian calculus only to rules: act according to rules that, as a whole, cause more good than harm. The problem with this is that to satisfy the intuitions that led one to utilitarian ethics in the first place – namely, that of judging acts by consequences – one still needs to give some kind of opt-out clause for acts that do not fall under some rule, or where applying the rule leads to terrible consequences. In (what I believe to be good and just) legal systems, laws are there to serve justice, rather than the institutions of justice being there simply to serve arbitrary or badly-written laws.

Let’s move on then. What does John say about deontological ethics?

‘Follow rules’ seems at best silly and at worst evil. If you’ve made up your own list of rules, then again, you need some way of working out what’s on the list. If you’re following someone else’s rules, then they had the same problem, plus you’ve now got to worry that they might be trying to get you to act in their interests, plus their rules might have been corrupted in the process of being transmitted from their head to yours.

Absolutely right. The question is what rules you use. When people say ‘deontological ethics’, they don’t tend to mean that the rule is simply “follow rules”; rather, they are pointing you at a specific set of rules. Whose rules? Well, Immanuel Kant’s. Or – if you believe Wikipedia – Ayn Rand’s. But in this case, I’d rather not worry about Ayn Rand. So, instead, we worry about Kant.

What’s he say then? “Always act according to that maxim whose universality as a law you can at the same time will”.

That’s not quite the same as saying “follow rules”: it says to follow only those rules that can be made universal. Kant asks you to imagine a possible world where everyone acts in the way you are considering acting, and to consider whether any “contradictions or irrationalities” arise.

This isn’t simply following rules; it is following rules that have a specific property – that of being capable of being followed universally.

To show a problem with this, consider a rule that most Kantians, including Kant, seem to think is justified: do not lie. The obvious objection – and I claim no originality here – is that of the Gestapo turning up at your door and asking if you have seen a particular Jewish person they are trying to round up. You are in fact sheltering them. Surely, if you are protecting someone from being sent to a concentration camp, a little white lie to the Gestapo is not a breach of one’s moral duty? It is, says the orthodox Kantian, but you are allowed not to tell the whole truth. The example I heard was that you are allowed to say “oh, I saw that person going to the shops”: earlier in the day they had gone to the shops and you saw them walking up the street. You are telling the truth, but in such a way as to misdirect.

If this seems legalistic and unsatisfying, you are right: it is legalistic and unsatisfying. You can reframe it, though: “you should not lie unless the consequences of telling the truth will lead to grossly unjust outcomes for another”. Congratulations, you’ve just become a Kantian utilitarian, which makes about as much sense as a redheaded blonde or an abstinent sexaholic.

The problem, it seems, is that neither of these theories answers to our moral intuitions very well. That may be because our moral intuitions are actually inconsistent: we ought to do cost-benefit analysis for something like the much-maligned National Institute for Clinical Excellence,4 and yet no amount of irrelevant utilitarian hand-waving gives you the right to feed Christians to the lions in the Colosseum or innocent men to electric chairs in Texas. And, err, good luck meshing the two together.

That brings us to virtue ethics. John says:

‘Be virtuous of character’ seems vacuous. How are you supposed to decide what virtue is?

Here, I am going to refer to the Stanford Encyclopedia of Philosophy. The entry on virtue ethics is quite good.

The important thing about virtue ethics is that it isn’t about actions or rules so much as it is about persons. John is a Clojure programmer, so I’ll give him an analogy: some programming languages can be swapped for one another quite easily – you can go from Python to Ruby without much trouble – but the difference between, say, Forth and Lisp is huge. Virtue ethics is trying to answer the meta-ethical question by reformulating it as a question about how to live a flourishing life: it says the fundamental unit of ethical reflection isn’t actions or rules but what enables people to flourish. The very question that virtue ethics is trying to address is different: it is trying to push one to reflect differently on moral questions.

I’ll leave it there and I’d like to encourage John to look deeper: there is so much more to ethics than the potted summaries, and the questions of ethics require a lot of reflection to answer well.

  1. The pious and chaste wannabe saint would wish for a complete lack of sexual desire (or perhaps a helpful friend to tie his legs together with rope when he felt lust), but would find it hard to be a saint as he would no longer be struggling morally against his own desires.

  2. As an aside: I am agnostic as to the moral argument about eating meat. I do find the environmental argument against meat eating reasonably convincing, and I do think there is a major problem with cruelty to animals. I cannot work out whether, if the meat industry were reformed such that cruelty to animals were no longer an issue, I would be for or against eating meat. On a personal level, this is not a concern, as I do not feel the need to eat meat.

  3. The objection I am thinking of is the age-old one of the utilitarian seeming committed to executing an innocent man for a crime he didn’t commit, in order to appease the vengeful bloodlust of the mob, if the cost-benefit analysis ends up being in favour of the execution.

  4. Woe unto the government for fucking that up, by the way. NICE is a necessary evil because the alternative to having someone do a rational cost-benefit analysis on drugs is we decide on the basis of who shouts loudest. Which is self-evidently a much worse way of doing it for fucking fuck’s sake.

Unofficial press release not on behalf of BarCampLondon8

I speak only on my own behalf in affirming the following common-sense truths derived from a study of the metaphysics of causation. This is, of course, not a commentary on current events, just a few personal observations that may or may not be related to what little I understand of the works of David Hume and Donald Davidson.

  1. Events happen in a temporal sequence.
  2. That one event (or, indeed, intention, state of affairs or whatever other metaphysical machinery you choose to use to represent events) preceded or accompanied another does not make it a cause of that event.
  3. It is possible – indeed, quite normal – for people to be mistaken in attributing causal relations.
  4. This is not to deny that the cause-effect relationship is often complicated. Many events taken together can form a complex causal relation.
  5. Given (4), it is not necessarily easy to separate out one particular event from a long causal sequence and specify that this is the primary cause for a particular effect. Though it may be logically possible to do counterfactual reasoning, the results of such reasoning are open to doubt and can change quite dramatically if presented with further information.1
  6. Things are often more complicated than they initially seem.
  7. Things are often much more complicated than they seem to media and public relations folk.
  8. I shall stop imitating Wittgenstein’s Tractatus Logico-Philosophicus. It really doesn’t suit me.

What’s this about? Well, reading this press release was one of many events in a not-necessarily-causal sequence that led up to me writing this post.

Ha ha. I’m just serious.

We never would have known religion had evolved without you lecturing us about it

Jesse Bering has a really interesting article about evolution and suicide, but it includes a tired canard about the evolution of religion:

I notice a similar reactionary confusion, incidentally, among “New Atheists” who bleat and huff in a Dawkinsian manner whenever they hear mention of the empirically demonstrable fact that religion is adaptive, something I’ll save for another day.

Irrelevant? Check.

Wide-ranging statement about a vaguely defined group? Check.

Conflation of descriptive and normative claims? Check.

Atheists don’t “bleat and huff” about the fact that religion is adaptive. Mostly, they believe it is adaptive because, well, there must be some reason why everyone believes all sorts of dubious metaphysical claims. But pointing out that a belief is adaptive is as irrelevant as responding to a discussion of the price of carrots with the bloody obvious statement that, you know, humans eat carrots. Yes, yes, we know that. If it weren’t true, we wouldn’t be talking about the issue in a normative capacity at all; and in any case, it’s not the important point.

Humans are greedy. Humans have a capacity to murder and rape people. Humans often do startlingly dumb things that somehow work out to their advantage. That we can explain why humans do something doesn’t mean we have a reason why humans ought to do something. I can give you some pretty good reasons why politicians lie, misrepresent and spin things – something along the lines of many of them being power-hungry arseholes who are perfectly willing to fuck the citizens over so long as they can remain in power. That is not a justification, that is an explanation.

And sensible people can tell the difference. One’s desire to rape a sexually attractive young lady in a short skirt may be explained by the quantity of alcohol one has just consumed. Such an explanation is not a justification.

Science-minded atheists are perfectly aware that religious beliefs – in fact, our cognitive functions as a whole – are the product of natural selection, and that many of our mental capacities serve some evolutionarily adaptive purpose. There’s no shortage of naturalistic theories of religion: religion as a system of social bonding, as in-group/out-group morality, as a side-effect of the evolutionary advantage of seeing agency (or rather the disadvantage of not seeing agency and being wrong). People have been proposing them for centuries, and some even have the advantage of empirical backing. As a mere philosopher-in-training, I struggle to keep up with all the different competing explanations: it’d certainly be nice if we got a conclusion to this discussion sometime soon.

But the fact is that such discussions are irrelevant if your interest is in the truth value of a particular religion. If you believe religion is factually wrong and perhaps morally unhelpful – as many atheists do – should you just say “oh, it’s alright, religious people are acting according to some instinct which, you know, has a history of evolutionary advantage”? That doesn’t justify the wacky beliefs or their purported negative social consequences. In fact, it is irrelevant: if we have a positive epistemic duty to “know well” as well as moral duties to “act well”, the fact that we have an instinct to act badly or (in the opinion of the atheist and/or anti-theist) to know badly doesn’t excuse us from those duties.

The facts about how we evolved our religious and epistemic faculties may be very relevant as a motivating factor (and they certainly are interesting and worth researching), but the normative claims of the “new atheists” (which are the same normative claims as those of many of the old atheists, mind) are separate from the descriptive claims of those who seek an evolutionary explanation of religion. Some of the new atheists do play that game: Dawkins tries to in The God Delusion, and Dennett’s book Breaking the Spell is all about trying to understand the purpose and role of religion.

We already have people working on these topics: psychologists of religion, sociologists of religion, anthropologists and so on. But their work doesn’t mean we don’t also have philosophers arguing about whether or not particular religious claims and arguments are true. I very much doubt that shoving people into MRI machines to scan their brains while they pray is going to tell us whether the ontological argument is sound.

This is why, for instance, I find it utterly wrong-headed when, in a debate or discussion or a newspaper article, you get Dawkins or Hitchens (etc.) lined up next to one of the many people who have worked in either a scholarly or popular capacity on questions of the origin and evolutionary status of religious beliefs (Robert Wright, Robert Winston et al.), as if the latter are somehow a mid-point or a moderating force between atheism and theism. There are two separate questions: “(how) did religion evolve?” and “are the claims of religion true?” Having someone who claims to be an expert on the former talk about their explanations of religion doesn’t answer the latter question. In fact, it seems mostly to distract from it.

The fact that journalists and popular commentators conflate these two questions and see the answer to one as being anything other than minimally relevant to the other irritates me greatly. But then journalists trying to do the job of philosophers irritates me quite a lot too because they really suck at it sometimes. I mean, really suck at it.

Kwame Anthony Appiah’s review of Sam Harris’ new book in the New York Times is worth reading.

Marilynne Robinson’s review? Not so much.