Overcompensating for one’s biases

I’ve noticed that I sometimes give more credence than I should to arguments that go against my political convictions. This is related to the middle-ground fallacy but isn’t exactly the same thing, because I can even be willing to go past the 50:50 point.

I’m very much on the Left of the political spectrum so my natural bias is to believe the arguments of liberals/progressives/whatever term you prefer-ists. However, in the last few years I’ve read so much about cognitive biases and moral psychology and so on that I’ve developed a new bias: the desire to be unbiased.

Basically, I now tend to think that it is always more virtuous to be able to see both sides of an argument. And indeed, it generally is good to be able to do so! The problem comes when I overcompensate for my leftist bias.

I think this matches up with my hipster-ish tendencies with respect to music and other forms of media. I think it’s cool to like a band that others don’t know about because it requires an investment of time on my part to discover that band. Once everyone else knows about them, it’s no longer as cool because I can no longer efficiently signal my general interest in music (unless I commit the faux pas of saying how much I liked them before they were cool).

In the context of politics, I might be trying to signal to others that I’m not a totally biased lefty or I might be counter-signalling that I’m such a lefty that I can see things from the perspective of the Right and still be on the Left myself.

Honestly though, it’s probably just me trying to signal that I’m not prone to the same biases as everyone else. So yeah, I’m just being really lame!

 


Smoking as a Costly Signal

  1. Smoking as a Costly Signal

    Signalling is the idea that in a population of individuals, some will send out signals as a sign of their quality (e.g. fitness as a mate) and some will receive signals to appraise potential mates. Senders benefit from having their signals accepted, whereas receivers benefit from accepting signals from fit individuals. Say there is a poor signal and a good signal (i.e. the latter has a better chance of being accepted). Senders will want to send the good signal if they can. Costly signalling implies that they have to pay a greater cost for sending the better signal. Importantly, lower-quality individuals pay a greater cost than higher-quality individuals. Therefore there will tend to be an efficient separation, such that receivers can be fairly confident that by accepting a costly signal they are accepting a higher-quality mate. (A toy sketch of this separation condition follows at the end of this post.)

    In the animal kingdom, the peacock’s tail is a great example of costly signalling. The larger the tail, the more difficult it is for the peacock to fly. It seems surprising, then, that peacocks would evolve to have these cumbersome tails: surely they aren’t adaptive? Amotz Zahavi proposed the Handicap Principle to explain this: a male peacock with a large tail is signalling to peahens that he is so fit that he can still evade predators even with the handicap of a large tail (apologies for the terminology).

    Yesterday I asked myself why the dramatic increase in the price of cigarettes (at least in my part of the world) seemed to coincide with a dramatic decrease in their ‘coolness’. I thought that if I looked at the phenomenon from a costly signalling perspective, then surely an increase in the cost of the signal should allow higher-quality senders to more effectively separate themselves from lower-quality ones.

    A quick Google search returned the paper linked above. This doesn’t match my intuition for where I’m from, but Dewitte (writing from Belgium) claims that adolescent smoking has become more popular and that the reason for this is, in fact, costly signalling. Adolescents who are healthier (at least before they start smoking!) can signal their fitness by adopting the habit, effectively saying “I’m so healthy that I can afford to smoke”. Note the similarity to the peacocks. Although everyone can choose to smoke, less healthy people (say, someone with asthma) pay a higher cost by smoking, such that receivers can reliably infer that anyone who smokes doesn’t have underlying health problems (and is therefore more attractive).
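
And here is the toy sketch promised above. It’s my own illustration with made-up numbers, not anything from Dewitte’s paper: a sender signals when the benefit of having the signal accepted outweighs their personal cost, and separation happens when that cost sits below the benefit for high-quality senders but above it for low-quality ones.

```python
# Toy model of costly signalling; all parameter values are invented for illustration.

def will_signal(benefit: float, cost: float) -> bool:
    """A sender sends the costly signal when acceptance is worth more than the cost."""
    return benefit > cost

BENEFIT = 10.0              # payoff from having the signal accepted (e.g. attracting a mate)
COST_HIGH_QUALITY = 4.0     # fit/healthy senders pay a relatively small cost
COST_LOW_QUALITY = 15.0     # unfit senders pay a much larger cost for the same signal

print("High-quality sender signals:", will_signal(BENEFIT, COST_HIGH_QUALITY))  # True
print("Low-quality sender signals: ", will_signal(BENEFIT, COST_LOW_QUALITY))   # False

# Only high-quality senders find the signal worth sending, so a receiver who accepts
# anyone bearing it can be confident of getting a high-quality mate. Raising the cost
# for everyone preserves the separation as long as it stays below BENEFIT for the fit
# and above it for the unfit -- which was my intuition about cigarette prices.
```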

 

On Being a Nerd

In a certain subset of my friend group, the term ‘nerd’ is unanimously considered a term of endearment. This happens to be my own default so it came as a surprise when I was recently reminded that this is not the opinion of most people.

Last week one of the people in that aforementioned subset recounted how, after he had described an acquaintance as a ‘nerd’, they confronted him and said that they were quite hurt by his comment. My friend was rather taken aback (as I would be in that situation) and explained that, to him, calling someone a nerd is virtually the highest compliment he can pay.

Why the difference in opinions? Is it that the two parties have different conceptions of what being a nerd actually entails or do they agree on the definition but differ in terms of the value they place on those traits?

I can’t imagine that their definitions of a nerd would be grossly different. If they were presented with ten people and had to pick out the nerd then I’d wager that they would both look for the same features and would almost certainly pick out the same person. It’s a stereotype for a reason after all.

It seems much more likely that my friend simply has a rosier opinion of the actual traits of nerds. Why might this be?

Well, what makes a nerd? In general, a nerd is someone who places a higher value on being intelligent (and on being seen to be intelligent) than most people do. They also tend to prioritise their own interests (which are typically of a cerebral nature) over the attainment of popularity amongst their peers. This results in the stereotypical lack of social skills.

So if my friend calls me a nerd I take it as a compliment because I think “Hey, he must think that I’m smart and that I’m independent enough to do my own thing”. Whereas when my friend called their acquaintance a nerd, that person thought something along the lines of “Why does he think I’m not popular?”

Thankfully this opinion seems to be on the wane. There’s no doubt that society as a whole is more accepting of nerdiness and everything that goes with it nowadays than, say, a decade ago. Part of this shift can be attributed to the success of the Silicon Valley enterprises run by people who would have been mocked as computer geeks when they were in school.

Not only are these people becoming incredibly wealthy, they are also influencing people’s conceptions of what it means to be cool. We can now draw a Venn diagram showing an intersection between nerdiness and coolness and not expect people to scoff at it.

Additionally, pop culture has more positive depictions of nerds nowadays. Granted, shows like The Big Bang Theory still portray nerds as socially inept and often seem to be laughing at them rather than with them, but at least they don’t treat them as boring. That’s a big change from Ross Geller in Friends, who consistently put everyone to sleep as soon as he started talking about his passion for dinosaurs.

And if you really want proof of how the nerd’s philosophy has permeated throughout pop culture, just listen to Big Sean in the song Wanna Be Cool where he basically raps a nerd credo:

Rocking pink Polo’s, shit ain’t even fit me,
Looking for the inspiration that’s already in me,
All the confidence I was trying to buy myself,
If you don’t like me, fuck it, I’ll be by myself,
Spent all this time for you to say I’m fine,
I really should have spent it trying to find myself.


On Crimes and Cognitive Biases

In my previous post, I touched on how we often prefer a smaller reward now to a larger one later (depending on the size of the rewards and the interval between them, of course). After writing it, I went back to reading Steven Pinker’s The Better Angels of Our Nature. It was as Pinker was discussing historical changes in our criminal justice system that I made a connection between the two topics which I felt could make for an interesting blog post.

The question that popped into my mind was this: if our brains aren’t very good at picturing a far-future reward, are they similarly poor at picturing a far-future punishment?

Consider a potential criminal. One of the reasons we have prison sentences is deterrence. However, if potential criminals don’t see a 30-year sentence as 20 percent worse than a 25-year sentence (as we might predict for a rational actor), then the longer sentence is not 20 percent more effective as a deterrent.

Furthermore, a crime serious enough to warrant a 30-year sentence could also be more rewarding for the potential criminal, so it could actually be rational to commit the more rewarding crime and accept the longer punishment. Crime pays, kids!
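
Here is a back-of-the-envelope sketch of that point. I’m assuming, purely for illustration, that each year of prison is discounted hyperbolically, so a year that is t years away only feels like 1/(1 + kt) of a year now; the value of k is arbitrary.

```python
# Toy calculation: how much worse does a 30-year sentence *feel* than a 25-year one
# if future years are hyperbolically discounted? (k is an arbitrary illustrative value.)

def felt_severity(years: int, k: float = 0.5) -> float:
    """Sum of hyperbolically discounted weights, one per year of the sentence."""
    return sum(1 / (1 + k * t) for t in range(1, years + 1))

s25, s30 = felt_severity(25), felt_severity(30)
print(f"Felt severity of 25 years: {s25:.2f}")
print(f"Felt severity of 30 years: {s30:.2f}")
print(f"Ratio: {s30 / s25:.2f}")   # about 1.07 with k = 0.5, not the 'rational' 1.20
```

Under this crude model the extra five years, all of them far in the future, add only about 7 percent to the felt severity rather than 20, which is the gap between how sentence lengths look on paper and how a present-biased person actually weighs them.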

I guess some people could take this as evidence that we need even longer sentences to override that bias, but that’s not the view I would take. There are many problems associated with increasing the length of a prison sentence, such as cost-effectiveness (older people are less likely to commit crimes, so the extra years of incarceration prevent little crime) and, of course, the danger of sentencing an innocent person for even longer.

Pinker references an essay entitled On Crimes and Punishments by the 18th-century Italian economist Cesare Beccaria, in which Beccaria makes the point that the promptness and certainty of a punishment are more important than its severity.

We now have strong evidence to prove Beccaria right. An effective criminal justice system is one in which criminals expect that both they and their competitors will be punished fairly and predictably by the state because this dissuades them from taking matters into their own hands.

It follows, then, that instead of consistently trying to portray themselves as “tough on crime” by boasting of their willingness to introduce longer sentences, politicians should realise that a longer sentence is not a proportionally more effective deterrent.

 

 

Propagating Urges

The New York Times recently ran a piece entitled The Happiness Code which I came across because several people I follow on Facebook posted a link to it, including one of the interviewees. In the feature, the writer Jennifer Kahn describes her experience at a “self-help workshop” that promotes rationality as a way of improving one’s life.

The organisation that runs the workshops, the Center for Applied Rationality (CFAR), is one I’ve been familiar with for a while. One of its founders, Julia Galef, runs the excellent Rationally Speaking Podcast, which I recommend everyone check out.

I’m not going to discuss the article in depth; I’d recommend you read it for yourself instead. I do want to talk about one of the techniques mentioned in it though, that of ‘propagating urges’.

Before discussing what exactly the technique involves, let’s look at what it intends to solve. I was about to write “Have you ever found yourself doing something that you know is a waste of time but lacking the motivation to do what you know you should do?” but I guess I should really be asking how many times a day you experience that. Not because I think you’re lazy but because that feeling is a universal human experience.

This is the result of a phenomenon known as hyperbolic discounting. In other words, we place greater value on rewards that we don’t have to wait long for than on rewards that are far in the future. This is not simply a matter of assigning a lower probability to the latter; rather, the waiting itself subtracts from how much we value the reward. The major downside is that our present self makes decisions that our future selves will regret.
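
For anyone who wants to see the shape of the bias, the textbook toy formula values a reward of size A delayed by D units of time at A / (1 + kD), where k measures impatience. The numbers in the sketch below are entirely made up (they’re not from the NYT piece or from CFAR), but they show the characteristic preference reversal that hyperbolic discounting produces:

```python
# Hyperbolic discounting sketch: subjective value = amount / (1 + k * delay_in_days).
# k is an arbitrary impatience parameter chosen for illustration.

def subjective_value(amount: float, delay_days: float, k: float = 0.1) -> float:
    return amount / (1 + k * delay_days)

# Today, a reward of 50 right now beats 100 in a month...
print(subjective_value(50, 0), ">", subjective_value(100, 30))     # 50.0 > 25.0

# ...but push both options a year into the future and the preference flips,
# even though the two are still exactly 30 days apart.
print(subjective_value(50, 365), "<", subjective_value(100, 395))  # ~1.33 < ~2.47
```

The same pair of options gets ranked differently depending on how far away they are, which is exactly the present-self versus future-selves conflict described above.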

If you’re like me then you’ll also have found yourself wishing for some easy way to jolt you out of this reverie whenever you find yourself in it, to give yourself a mental kick in the backside if you will.

This is where CFAR and their ‘propagating urges’ come in. The idea is simple: when you’re doing whatever it is that you really want to be doing, reinforce that with a fist-pump and by exclaiming “YES!” like an athlete celebrating a victory. They also recommend that you picture yourself succeeding in the goal that you’re working towards (again this sounds similar to what a lot of athletes do at the advice of their sports psychologists).

For example, Kahn describes how she gets trapped checking her e-mails until it becomes too late to go to the gym. Personally, I often find it difficult to find the time and motivation to write a blog post because, unlike, say, a project for my course, there are no ramifications if I fail to write one. CFAR would advise me to start making the aforementioned “victory gesture” whenever I actually do sit down to write a post and when I finish it, so that my brain gets a pleasant sensation and grows to look forward to the writing instead of shying away from it.

In essence, instead of trying to force my brain to value the far-future reward more than the near-future one (which conflicts with our tendency for hyperbolic discounting), I should train my brain to develop urges to work on the steps towards that far-future goal. Those urges will (hopefully) then propagate down the temporal chain until I achieve my objective.

To be honest, the idea of the victory gesture is a bit much for me. I’m willing to try it but I’m also going to experiment to see if there are any alternatives I can use. However, it’s important that they be equally fast because the whole idea is that the action and the reward have to be close in time to one another so that our brains learn to associate them.

Whose opinions can we trust?

I know I’m not the only person who finds it incredibly difficult to state definitively which side of a debate they fall on. Sure, there are certain issues which I don’t have to deliberate over for prolonged periods before feeling that my position is solid (marriage equality comes to mind), but my intuition is that those are the exception.

I generally take the view that if I were to spend a week researching as much as I could about a particular topic, then my opinion about that topic would almost certainly change. I don’t necessarily mean that I would end up flipping from one side of the issue to the total opposite; rather, I think of it in a more Bayesian sense, i.e. the probability I’d assign to each side being correct would shift along a continuum in one direction or another.
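
To make that shift concrete with some entirely invented numbers: suppose I currently give position A a 70% chance of being right, and my week of reading turns up evidence that is twice as likely if position B is true. A quick Bayesian update moves me, but not all the way across:

```python
# Minimal Bayesian update on a two-sided question; the numbers are invented.
prior_a = 0.70                    # my current credence in position A
likelihood_ratio_b_over_a = 2.0   # the new evidence is twice as likely if B is true

posterior_a = prior_a / (prior_a + (1 - prior_a) * likelihood_ratio_b_over_a)
print(f"Credence in A after the evidence: {posterior_a:.2f}")   # 0.54
```

I’d still be leaning towards A, just less confidently; that kind of movement along the continuum, rather than a wholesale flip, is what I have in mind.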

Take abortion. I consider myself to be pro-choice but not “anti-pro-life”. In other words, I find it possible to put myself in the shoes of someone who is pro-life. I can think to myself, “Okay, if I had had the same life experience as this person then I’m sure their arguments would seem far more logical and mine less so”.

Does that mean that I’m just pro-choice because of my life experience? I’d like to think that I hold whatever view I hold because I’ve considered all of the arguments for and against and have come to the rational conclusion. Having said that, most of my peers (at least the ones who seem to have given it sufficient thought) are pro-choice. That makes it incredibly easy for me to hold this set of views rather than the alternative. Whenever that is the case, one should examine one’s beliefs more closely.

If all of your opinions are the same as those of your friends then there are two possibilities: either you have favoured one set of views because you want to fit in (consciously or unconsciously) or else you have evaluated all of the arguments and have just happened to come to the exact same conclusions. The latter outcome might apply to some of your views but the chance of it applying to all of them is low indeed.

On the other hand, that argument seems to imply that one should give one’s own opinions more weight than the combined opinions of a whole host of people. Is that too hubristic? I criticised people for adopting a set of views based on what other people believe but perhaps they are just modest and trust the wisdom of the crowd over anything they could come up with themselves. The idea that “the average opinion of humanity will be a better and less biased guide to the truth than my own judgement” has been referred to as “philosophical majoritarianism”.

A question then: should my own views get special epistemic privilege just because they’re mine? Or should I believe in the average opinion of everyone else?

Damn the hubris, I’ll bite the bullet and say that yes, I trust my own views more than the average opinion of everyone else. This avoids the middle-ground fallacy, i.e. the belief that the truth always lies in the middle of two extremes. If Person A claims that child abuse is morally good and Person B claims that child abuse is morally evil, then screw the middle ground, I’m siding with Person B.

This reminds me of a story recounted by Richard Feynman, the great physicist. Feynman was asked to help the Board of Education decide which science books to include in their curriculum. There was one book in particular that he rubbished but which was approved by the board because the average rating given by 65 engineers at a particular company indicated that the book was great.

Feynman’s response to this was “I didn’t doubt that the company had some pretty good engineers, but to take sixty-five engineers is to take a wide range of ability–and to necessarily include some pretty poor guys! It was once again the problem of averaging the length of the emperor’s nose, or the ratings on a book with nothing between the covers. It would have been far better to have the company decide who their better engineers were, and to have them look at the book. I couldn’t claim that I was smarter than sixty-five other guys–but the average of sixty five other guys, certainly!”

Having said that, I’m sure that Feynman would not have made that claim about every other subject under the sun. In the case of evaluating a science book it does seem reasonable to trust the opinion of a Nobel Prize winner over and above that of the average engineer. But we surely wouldn’t trust Feynman’s opinion about the Israel-Palestine conflict more than the average diplomat’s, or his opinion about the works of Homer more than the average Classics professor’s.

This suggests that we should adopt the philosophical majoritarianism approach whenever we know less than the average person, but not when we know more. However, we could surely improve our opinions further by taking not the average of all humanity but the average of the experts in that field. As Feynman said, the company should first have selected the best of their engineers.
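
A quick simulation makes the point (this is my own toy setup with invented numbers, not anything from Feynman): averaging cancels random noise, but it does nothing about the shared bias of raters who barely engaged with the material.

```python
# Toy simulation: averaging all 65 raters vs averaging only the careful ones.
# Every number here is invented purely to illustrate the point about averaging.
import random

random.seed(0)
TRUE_QUALITY = 2.0   # the book is actually poor (on a 0-10 scale)

# 15 careful raters: unbiased but noisy estimates of the true quality.
careful = [random.gauss(TRUE_QUALITY, 1.0) for _ in range(15)]

# 50 careless raters: they barely looked at the book and default to "seems fine".
careless = [random.gauss(7.0, 1.0) for _ in range(50)]

everyone = careful + careless
print(f"Average of all 65 raters:  {sum(everyone) / len(everyone):.1f}")   # roughly 5.8
print(f"Average of careful raters: {sum(careful) / len(careful):.1f}")     # roughly 2.0
```

Picking out the raters who actually did the work, as Feynman wanted the company to do with its engineers, removes the bias; adding more careless raters to the average never will.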

I think it is fair to say that the opinions of people who are considered to be experts in their fields should be given higher probabilities of being correct, when those opinions pertain to the field that they are experts in. For example, I listen to what Richard Dawkins has to say about evolutionary biology but I care less for his thoughts on social issues.

Ideally these well informed people should also desire to have beliefs that match reality (rather than being tied to one ideology) and they should be trustworthy.

The idea of having beliefs that match reality is best seen in the scientific community. A particularly good example comes from a tale told by the aforementioned Richard Dawkins:

“I have previously told the story of a respected elder statesman of the Zoology Department at Oxford when I was an undergraduate. For years he had passionately believed, and taught, that the Golgi Apparatus (a microscopic feature of the interior of cells) was not real… Every Monday afternoon it was the custom for the whole department to listen to a research talk by a visiting lecturer. One Monday, the visitor was an American cell biologist who presented completely convincing evidence that the Golgi Apparatus was real. At the end of the lecture, the old man strode to the front of the hall, shook the American by the hand and said – with passion – ‘My dear fellow, I wish to thank you. I have been wrong these fifteen years.’”

One finds it more difficult to imagine such Road to Damascus conversions in other fields, although they do happen. For instance, Elizabeth Warren, the ideal presidential candidate of many liberals, actually used to be a Republican voter. That very fact makes me rate Elizabeth Warren’s opinion higher than many other people’s because it proves that she has past experience of being able to set ideology aside and think for herself.

I submit then that a good heuristic for finding people who want a map that matches the territory is to look for those who have demonstrated their ability to update their opinions.

I don’t want to give the impression that people who flip-flop from one side of an issue to another are the most trustworthy people of all. It’s not as if I’d trust someone who thinks creationism and evolution both have points in their favour more than I’d trust someone who has resolutely supported evolution for decades. Perhaps that’s because evolution has so much evidence behind it that there is little need for me to factor opinions into the calculus.

The scenario could be different if considering something like raising the minimum wage. When I combine my lack of knowledge about economics with my observation that the experts in the field are divided then I think I would trust someone who says that both sides have valid points more than I would trust the economist who maintains that only their own side is reasonable.

This can probably be most succinctly codified as “Trust those who base their opinions on evidence”.

Is there a similar rule of thumb that we could use to decide who is trustworthy? An obvious solution is to trust someone who is trusted by someone we already trust, but this only pushes the problem back a step: whom do we trust in the first place? We could also easily find ourselves stuck in an echo chamber where we only expose ourselves to views that confirm those we already hold (see confirmation bias). I’ve made a conscious effort in the past to read articles by people on the opposite side of the spectrum (and to follow them on Twitter), but that isn’t quite enough because I’m still likely to read their writing with a more critical eye than I would writing that confirms my preconceptions.

To quote Terry Pratchett (specifically his character Lord Vetinari): “What people think they want is news, but what they really crave is olds…Not news but olds, telling people that what they think they already know is true”.

Perhaps the best answer is to trust people who we have found using the previous heuristic i.e. trust people with experience of updating their opinions and trust the people they trust. I’m not totally satisfied with this, but it’s a start.

I hope to revisit this topic soon to expand upon my thoughts but I’m just going to post this now anyway as I’m trying to avoid obsessing over my projects. Finished is better than perfect.

 

Don’t toe the party epistemic line

I remember one occasion during my first year in university when I was talking to a few people who were a couple of years older than me and for whom I had quite a lot of respect. Perhaps more importantly, they were popular members of a social group that I wanted to be a part of, a factor which influences a lot more of our decision making than most of us care to admit.

During this chat, one of them mentioned how she liked and respected people more if they had strongly held beliefs which they would unapologetically defend.

I want to talk about this claim. Is it good to have strongly held beliefs and to defend them staunchly in the face of criticism? I’m going to focus on the first half of this question in this post and will tackle the second part sometime in the near future.

_______________________________________________________

I might as well set my stall out early. My opinion is that it is not a good idea to have strongly held beliefs. In fact, I would go so far as to say that one should be very, very careful about letting oneself get too attached to any kind of belief without at least thinking about it first (of course that is far easier said than done).

Paul Graham had this idea long before I did and wrote a very interesting blog post entitled Keep Your Identity Small, in which he made the point that we cannot think rationally and objectively about our most deeply held convictions. He concludes:

“If people can’t think clearly about anything that has become part of their identity, then all other things being equal, the best plan is to let as few things into your identity as possible”

One of the most obvious examples of where labelling yourself with a certain identity can compromise your ability to think clearly is, of course, politics. I mean, it really doesn’t make any sense that one’s opinion on taxation should determine one’s stance on the Israeli-Palestinian conflict, or vice versa, but when someone joins a particular political party that is almost invariably the result. They no longer have the same thought-space available to them and are forced into toeing the party epistemic line.

Put simply, if you consider yourself to be a Label, then you no longer have to think so hard about various ideas. Instead, you can take the irrational shortcut of just asking yourself “What would other Labels do?”

What is the problem with this? Personally, I want my beliefs to be true and that demands that I follow the evidence.  That would be much more difficult if I was consistently asking myself “What would other Democrats/Republicans/Atheists/Theists etc. do?”

It is important to state that I don’t take Graham’s essay to mean that I should have no identity at all. Instead, I have resolved to cultivate an identity that supports me in my efforts to have beliefs that are true. If identity can have such powerful effects, then it would be wasteful to not try to turn these effects to one’s advantage.

For example, because I think of myself as someone with a growth mindset, whenever I find myself faced with a challenge I can cheat and use the shortcut “What would other people with growth mindsets do?”, and that prompts me to the realisation that I can improve and overcome. This is an example of an identity which can really only have beneficial effects, as it doesn’t tie my beliefs up with those of other people. In other words, it does not limit my thought-space.

Of course, you shouldn’t just assume that my approach is the best one.
Don’t toe the party epistemic line!