Tunnelling

This post arose as a result of an email discussion with David Deutsch about quantum tunnelling. I will explain roughly what tunnelling is, the relevant parts of quantum mechanics, and something about why tunnelling happens.

Potential energy and tunnelling

If you throw a stone in the air, then it starts with some particular speed, it then slows down in the vertical direction until it stops and then falls to the ground. When the stone is moving slowly it doesn’t have much kinetic energy. So where does all the kinetic energy go? Doesn’t this violate conservation of energy? No. An object that is higher off the ground has a different kind of energy: the energy it would gain by falling to the ground. More precisely the system consisting of the Earth and the stone has this energy. When a set of systems has some energy by virtue of its configuration, i.e. – the stone being above the ground rather than resting on it, it is said to have potential energy.

To gain potential energy, an object has to lose some other kind of energy, e.g. – kinetic energy. So if you roll a stone up a hill it can gain as much potential energy as you impart to it. If you roll a stone up a hill with some specific amount of energy it can get to a certain height on the hill. If you don’t give it enough energy it can’t get over the hill. But suppose there is a place on the other side of the hill with the same potential energy. It doesn’t break conservation of energy for the stone to be on the other side, but that doesn’t happen because the stone would have to go through states in which it gained more energy than it had originally.

However, a system that can undergo quantum interference in a particular potential can do something a bit like appearing on the other side of the hill. This is called quantum tunnelling. You have a potential in a particular region. You have a particle with energy less than the energy required to “go over” the potential. But the particle goes through the region in which the potential is “too large” to a place on the other side where it has the same energy it had originally. This is called tunnelling because it superficially looks a bit like the particle digs a tunnel through the hill instead of climbing over it. Tunnelling only happens in quantum mechanics, not in classical physics.
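The textbook rectangular-barrier result makes this concrete: a particle whose energy is below the barrier height still has a non-zero probability of being transmitted. Here is a minimal sketch in Python of that standard formula, in natural units (ħ = m = 1); the specific numbers are illustrative, not tied to any experiment:

```python
import math

def transmission(E, V0, L, m=1.0, hbar=1.0):
    """Transmission probability through a rectangular barrier of height V0 and
    width L for a particle of energy E < V0 (the standard textbook formula)."""
    assert 0 < E < V0
    k = math.sqrt(2 * m * (V0 - E)) / hbar  # decay constant inside the barrier
    return 1.0 / (1.0 + (V0 ** 2 * math.sinh(k * L) ** 2) / (4 * E * (V0 - E)))

# A particle with only half the barrier's energy still gets through sometimes:
T = transmission(E=0.5, V0=1.0, L=1.0)   # ≈ 0.42, not zero
```

Widening the barrier doesn’t drive the transmission to zero; it just makes it exponentially smaller, which is why the technologies that exploit tunnelling are so sensitive to barrier thickness.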

Locality in quantum mechanics

Before I can discuss tunnelling in more detail I will say something about the context of the explanation. Quantum mechanics is used by many physicists to make accurate predictions about the results of experiments. However, most physicists are unwilling to accept its implications and this has led to a bizarre controversy about the “interpretation” of quantum mechanics. Interpretation in this context means “say absolutely anything you want about quantum mechanics without paying attention to whether what you say makes sense, explains experimental results, or is even compatible with other physical theories.” That’s not what people say it means, but in practice they act exactly as if it were.

For example, lots of physicists say quantum mechanics is non-local. In reality, the equations of motion for quantum systems, such as the Schrödinger equation and the Heisenberg equation, are entirely local. That is, the equations say the value of the wave function or of observables in some region is a function of the stuff in and around that region, not of stuff arbitrarily far away. The main experiment that is usually deemed to demonstrate non-locality, the EPR experiment, has an entirely local explanation. And since the explanation of tunnelling is partly a result of quantum mechanics being local, I shall explain why quantum mechanics is local, despite the standard line that it is non-local.

Quantum mechanics says that all of the stuff you see around you exists in multiple versions. There is a version of me that is sitting one millimetre to the right of where I am sitting now. I can’t interact with that version of me and so can’t exchange information with him. These versions are sorted into layers of versions of systems, each of which approximately obeys the laws of classical physics. In one layer I am sitting in my current position and the version of the chair that I am sitting on has a specific pattern of pressure on it. There is another version of me sitting one millimetre to the right on another version of the chair with a slightly different pattern of pressure on it. Each version of me goes with some specific version of the chair, and with some specific version of the other systems I’m interacting with, like my computer. A specific layer is called a universe and the collection of all the layers is called the multiverse. (This should not be confused with other uses of the word multiverse, like the string theory multiverse. The quantum multiverse is necessary to explain the results of experiments; whether the string theory multiverse can be tested at all is still unclear.)

In carefully arranged experiments it is possible to tell that different versions of a system existed, since you can’t explain the results without them. One example is the EPR experiment that is commonly said to be non-local. I will explain a simplified version of the experiment and sketch the local explanation. For more detail see this and this. It is possible to set up two systems with the following properties. (1) Each system has two observables A and B, both of which have two possible values, and the probability of getting either value is 50% for both observables. (2) If you measure the same observable on both particles, e.g. – you measure A on particle 1 and A on particle 2, you get the same result. (3) If you measure A on one particle and B on the other, then when you compare the results you find that they match 50% of the time. So whether the results match depends on whether you do the same measurement on both systems. And this remains true if you move the systems far apart, do the measurements and then compare the results. If there was only one version of each system, how would you explain this result? The correlations can’t be established before the measurement: if the systems were correlated beforehand, whether the results match wouldn’t depend on whether you measure the same observable on both. Nor can they be established after it. So they would have to be established during the measurement, no matter how far apart the measurements are, even if there is no time for information to get from one system to the other during the measurement. Hence the misconception that the correlations are non-local. In reality, there are multiple versions of each system after the measurement, and the correlations are established locally afterward, when the results are compared.
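The three properties above can be checked directly against the quantum formalism. A minimal sketch with NumPy, taking the two systems to be qubits in a Bell state; the choice of A and B as the Pauli Z and X observables is an illustrative assumption, one concrete setup with the right statistics:

```python
import numpy as np

# Bell state |Φ+> = (|00> + |11>) / √2 for the pair of systems.
phi = np.zeros(4)
phi[0] = phi[3] = 1 / np.sqrt(2)

# Assumed for illustration: A and B are the Pauli Z and X observables,
# each with two possible outcomes (+1 and -1), each 50/50 on this state.
A = np.array([[1.0, 0.0], [0.0, -1.0]])
B = np.array([[0.0, 1.0], [1.0, 0.0]])

def p_match(O1, O2, state=phi):
    """Probability the two ±1-valued outcomes agree: (1 + <O1 ⊗ O2>) / 2."""
    return (1 + state @ np.kron(O1, O2) @ state) / 2

p_same = p_match(A, A)   # same observable on both particles
p_diff = p_match(A, B)   # different observables on the two particles
```

`p_same` comes out as 1 and `p_diff` as 0.5: measuring the same observable on both particles always gives matching outcomes, while measuring different observables gives a match only half the time — exactly properties (2) and (3).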

Versions and instances

The above account of the implications of quantum mechanics is usually called the many worlds interpretation, and physicists act as if it is an optional add-on. It is not an optional add-on. It is the only explanation of what quantum mechanics is saying about the world and it is the only explanation of many experiments, not just EPR type experiments. I’m going to give a slightly fuller account of quantum mechanics before I continue. The existence of multiple distinct versions of each system is only one feature of what quantum mechanics says about how the world works.

Suppose that you measure the position of a particle along a particular line with some accuracy x. There is a mathematical operator that describes all of the different versions of the particle measured with that accuracy: it’s called an observable. Let’s call the observable in question D for distance. So D might have values x, 2x… You would find that each of those versions has a probability predicted by quantum mechanics. A particular version of the system after the measurement will have a sharp value of D. If you were to measure an observable M that gives the momentum of the version of the particle to some accuracy p, you would find that if you make the measurement accurate enough there would be more than one value with a non-negligible probability. When an observable has only one value with non-negligible probability it is said to be sharp; otherwise it is unsharp. But before you do the measurement of M there is no single fact of the matter about what value M has. What is more, there are experiments you could do to make the different M values interact with one another. As a result it doesn’t make sense to call the things with different values of M distinct versions. I will use the term instance to refer to some set of aspects of a system that could become distinct versions, whether or not they actually do. All versions of a system are instances, but not vice versa. You could also set up the particle so that neither D nor M is sharp but rather both of them have non-negligible probabilities for multiple different possible values. Experiments in which multiple instances of a single particle interact are often called single particle interference experiments. And tunnelling is an example of interference.
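The trade-off between a sharp D and a sharp M can be seen numerically: the momentum content of a state is the Fourier transform of its position-space wave function, so squeezing one spreads the other. A minimal sketch with NumPy, using a Gaussian wave packet on a grid (the grid sizes are arbitrary choices):

```python
import numpy as np

# A Gaussian wave packet sampled on a grid.
N, box = 1024, 40.0
x = np.linspace(-box / 2, box / 2, N, endpoint=False)
dx = x[1] - x[0]

sigma_x = 0.5                                   # the sharper D (position) is made...
psi = np.exp(-x ** 2 / (4 * sigma_x ** 2))
psi /= np.sqrt(np.sum(np.abs(psi) ** 2) * dx)   # normalise

# The momentum content is the Fourier transform of the position wave function.
k = 2 * np.pi * np.fft.fftfreq(N, d=dx)
pk = np.abs(np.fft.fft(psi)) ** 2
pk /= pk.sum()

# ...the wider the spread of M (momentum) becomes.
sigma_k = np.sqrt(np.sum(pk * k ** 2) - np.sum(pk * k) ** 2)
```

For a Gaussian the product sigma_x · sigma_k comes out at the Heisenberg minimum of 1/2, so halving sigma_x doubles the momentum spread.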

Tunnelling

David Deutsch wrote a tweet:

‘Quantum tunnelling’ is very badly named. It’s mountaineering: In some universes the system has enough energy to cross the barrier. Period.

In a quantum tunnelling experiment, we send instances of a particle toward a potential (the barrier) where the particle has a negligible probability of having an energy above the barrier. But the particle has a non-negligible probability of getting through the barrier. And if you measure the energy of the particle on the other side of the barrier, you find that it is lower than the barrier energy with high probability. This is not a purely theoretical issue: tunnelling is used in technologies like tunnel diodes and scanning tunnelling microscopy. Since tunnelling is well known precisely because particles with energy less than the barrier get through it, it seemed wrong to say that the particle ends up on the other side by having energy above the barrier energy. So David Deutsch’s tweet was wrong, and is contradicted by both theory and experiment.

What’s actually happening in tunnelling is not that instances of the particle with energy above the barrier get through. Rather, instances of the particle with energy below the barrier go into the barrier because there is no way for the particle to get information about the barrier without interacting with it. Some instances of the particle have to go part way into the barrier for the probability of the particle being reflected to be affected by the barrier. And if instances of the particle go into the barrier then a thin enough barrier will let some of them through. In addition some instances of the particle will be propagating forward through the barrier and others backward and this will have an effect on the probability of reflection, so tunnelling is also an interference effect.

The maths explaining why tunnelling works goes like this. I reproduce the equations for a sum of quantum states of the particle with a particular energy. I construct a state in which the particle has a high probability of being on one side of a potential barrier, a high probability of being far from the barrier, and is propagating toward the barrier. I explain that the wave packet acts as we would expect a wave packet propagating from some distance far from the barrier to act. I argue that the distribution of energies does not change significantly as the instances of the particle propagate toward the barrier. I then argue that the amplitude for instances of the particle above the barrier is not significant. See tunnelling4. So, in conclusion, particles don’t tunnel through a potential by having energy higher than the potential.
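One step in that argument — that the amplitude for instances of the particle above the barrier is negligible — can be illustrated with a simple tail estimate. For a Gaussian momentum distribution (natural units ħ = m = 1; the specific numbers are assumptions for illustration), the probability mass at energies above the barrier is:

```python
import math

def prob_above_barrier(k0, sigma_k, V0, m=1.0, hbar=1.0):
    """Probability mass of a Gaussian momentum distribution (mean k0, standard
    deviation sigma_k) whose kinetic energy (hbar*k)^2 / 2m exceeds V0."""
    kc = math.sqrt(2 * m * V0) / hbar   # momentum needed to go "over" the barrier
    s = sigma_k * math.sqrt(2)
    # Two Gaussian tails: k > kc and k < -kc.
    return 0.5 * math.erfc((kc - k0) / s) + 0.5 * math.erfc((kc + k0) / s)

# Mean energy half the barrier height, narrow momentum spread:
p = prob_above_barrier(k0=1.0, sigma_k=0.05, V0=1.0)
```

For a narrow packet with mean energy at half the barrier height, `p` is far below 10⁻⁶ — much too small to account for typical transmission probabilities, consistent with the conclusion that particles don’t get through by having energy above the potential.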

UPDATE I made a mistake in this post that changes its conclusions.

Common preference vs critical preference

Popper came up with the idea of a critical preference and I am going to contrast this with common preferences. In Section III of the Introduction to “After the Open Society”, Popper writes:

For we can discuss our various competing assertions, our conjectures, critically; and the result of our critical discussion is that we find out why some among the competing conjectures are better than others.

Accordingly I agree with the optimists when they say that our knowledge can grow and can progress; for we can sometimes justify the verdict of our critical discussions when it ranks certain conjectures higher than others.

This idea is wrong, as first pointed out by Elliot Temple. Critical discussion does not rank ideas. A critical discussion attempts to solve a problem and either accepts a particular proposal as a solution to that problem or rejects it. There is nothing in between. Sometimes you might attach a number to various proposed solutions and choose the one with the highest number, or the lowest, or in some other way related to the number. For example, you might compare cars to see which uses the least amount of petrol when driving in a city. You may list the cars in order of city miles per litre of petrol. But you actually choose the one that has the highest number of miles per unit of petrol and reject the rest. Whatever way you list the cars, you accept one and reject the rest. So the idea of a critical preference as a ranking of ideas after a critical discussion is wrong. When people think they are ranking ideas they are actually doing something like munging together lots of different contexts in which different ideas are useful, or they are choosing according to some unstated and uncriticised criterion. This means that criticisms that might improve your ideas are ignored or are not generated at all, which is bad.
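The car example can be made concrete. Ordering by a number is just presentation; the decision itself accepts one option and rejects the rest. A toy sketch in Python (the cars and figures are made up):

```python
# Hypothetical fuel-economy figures (city miles per litre, made up):
cars = {"hatchback": 9.1, "saloon": 7.4, "estate": 6.8}

# Listing the cars in order of the number is just presentation...
ranked = sorted(cars, key=cars.get, reverse=True)

# ...the decision itself is binary: one proposal accepted, the rest rejected.
accepted = ranked[0]
rejected = set(ranked[1:])
```

The intermediate ranking carries no extra decision-theoretic weight: the second-best and worst cars end up with exactly the same status — rejected.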

A common preference is an idea that all of the people involved in a dispute prefer to their original position. To get a common preference you must answer all of the criticisms of the idea that is adopted. If you don’t, then it doesn’t mean anything to say that the adopted solution is preferred to the other proposals, since they all have the same status: they have all been rejected. You don’t rank the suggestions. Each suggestion is either accepted or rejected.

Turing Test Fail

Some people are claiming that some program passed the Turing Test:

No computer had ever previously passed the Turing Test, which requires 30 per cent of human interrogators to be duped during a series of five-minute keyboard conversations, organisers from the University of Reading said.

That’s not the Turing Test. The original test required that a computer should be able to fool an interrogator into thinking it is a man. The original test is flawed because it attempts to replace the need to explain how stuff works with just looking at behaviour. There has been no large increase in our knowledge about the software required to create new explanatory knowledge. The ability to create such knowledge is what sets people apart from everything else. The best existing theory of knowledge, critical rationalism, is ignored by almost everybody, which means that people in the field are using false ideas to decide what problems to work on. So how are they going to succeed?

But the problem with the Turing Test has been made even worse by relaxing the terms of the test so much that it no longer conveys any useful information. Five minutes is not a long time, and so it would be difficult to push the computer to think about any particular issue.

There is an additional problem: people often over-interpret stuff in a bid to be nice. For example, the people at the Gorilla Foundation claim they have ‘talked to’ a gorilla named Koko using American Sign Language. If you look at a transcript, it looks like they are searching desperately for a meaningful interpretation of gibberish. The people conducting the Turing Test may be cutting the computer slack it doesn’t deserve.

More bad stuff about Sherlock

While I’m complaining about Sherlock I should point out some other flaws in the show. First, Sherlock is described as a sociopath and given various other psychiatric labels. All of those labels are insults and that’s all there is to them. He doesn’t like pointlessly hanging around with other people? This is good. You shouldn’t waste your time and the time of other people by hanging around with them if you aren’t getting any benefit from it. Describing such behaviour in pseudo-medical jargon is just a way of covering up moral disagreement rather than having an open, rational discussion about it.

Second complaint: the treatment of drugs. Sherlock acts angry when he doesn’t have access to cigarettes. He also seems to imagine that alcohol makes people feel good (Season 3, Episode 2). Both ideas are rubbish. Nicotine and alcohol can’t reprogram your brain and so can’t make you do anything or make you feel good. All the sensations they produce can be reinterpreted.

Sherlock, emotions and rationality

Many people have a false model of how emotions work and in particular a false model of the relationship between emotions and rationality.

To see this model in action, you need only watch programs like Sherlock, in which the hero is supposedly rational. The hero is said not to have any emotions, except when he is doing something stupid. So when Sherlock wants a cigarette he acts like an idiot, not just in the sense of wanting to poison himself in a way that could shorten his life and leave him dying of lung cancer, but also in the sense of being angry with people who get in the way of his smoking. In Episode 2 of Season 3, Sherlock gives a terrible best man speech in which he claims that Watson saved his life in more than one way. In other words, if Watson were not his friend he would be miserable, or something like that. And we are told that Sherlock isn’t good at dealing with emotions. So the model is that rationality and emotions are antithetical.

The way most people deal with emotions is that they have some particular interpretation of those emotions that they never bother to question. It doesn’t matter what the interpretation is, because absolutely any such unquestioned rule is bound to break down and lead to disaster. And by disaster I don’t mean driving your car off a cliff; I mean chronically failing to solve problems. An emotion is just a kind of sensation, so any interpretation of that sensation that you don’t criticise is going to be wrong almost all the time. Imagine if you stopped moving every time you saw something red because you thought it was a red traffic light and you will have some impression of just how bone-headed this idea really is. Actually, I’m understating the problem. Imagine every time you saw a red light, you decided to chain yourself to the first person you saw when the red light was on. That’s basically what people do when they get married, as Godwin pointed out:

But the evil of marriage as it is practised in European countries lies deeper than this. The habit is, for a thoughtless and romantic youth of each sex to come together, to see each other for a few times and under circumstances full of delusion, and then to vow to each other eternal attachment. What is the consequence of this? In almost every instance they find themselves deceived. They are reduced to make the best of an irretrievable mistake. They are presented with the strongest imaginable temptation to become the dupes of falsehood. They are led to conceive it their wisest policy to shut their eyes upon realities, happy if by any perversion of intellect they can persuade themselves that they were right in their first crude opinion of their companion. The institution of marriage is a system of fraud; and men who carefully mislead their judgments in the daily affair of their life, must always have a crippled judgment in every other concern. We ought to dismiss our mistake as soon as it is detected; but we are taught to cherish it. We ought to be incessant in our search after virtue and worth; but we are taught to check our enquiry, and shut our eyes upon the most attractive and admirable objects.

Sherlock acts like an idiot when he deals with emotion because the writers have no model for dealing with emotions other than turning off all their critical faculties and enacting a ritual that has nothing to do with rationality or reality.

There is a common saying that you can’t criticise an emotion. This is sort of true, but only because emotions are so lacking in any worthwhile content that they aren’t worth criticising if you divorce them from the context in which you are having them. If you look at them in context, you can often criticise the package deal of which they are a part: a set of emotions, preferences and ideas about how the world works or should work. For example, if you feel happy when you’re with somebody you might think you should have sex and get married and that sort of thing. The way you ought to think goes a bit more like this: “Why am I happy when I am with this person? She looks attractive, she smells nice and she cooks me nice food. To get the same services I can buy air freshener, porn and a cook book. I don’t need to get married or even have sex with this person.” Why don’t people do this? Part of the problem is other bad ideas, like the idea that dealing morally with other people requires mutual sacrifice, so you should sacrifice stuff by getting married and agreeing to do stuff you dislike with your spouse. But it’s kinda difficult to criticise that idea without first realising that your emotions are just sensations and that they should be treated as part of a wider context.

Justificationism vs ancap and rationality

The socialist anarchopac claims to have an argument against anarchocapitalism (ancap). I think this argument is flawed, but I doubt that many ancaps could reply to it properly.

Anarchopac states that according to ancap all property gained by coercion is illegitimate. So if a thief buys a phone with money stolen from many victims, they collectively own the phone. All government property has been created by taxation, which is theft according to ancap, so the money used to fund that property is owned collectively by all of the people from whom it was stolen. Many corporations get government subsidies and that money too comes from taxation, so their property too is owned collectively.

There are a few problems with this argument. The first is that if a thief steals money from people, then he owes them the money, not something they would not have chosen to buy. If we were to view taxation as theft, then what the government would owe people is the money it stole, not the goods or services it bought with the proceeds of that theft. Moreover, if a thief steals money from people and buys a phone with it, the victims may or may not own the phone, or the proceeds from its sale, or the money the thief stole from them, depending on what the law awards them as compensation and what other claims are made on the thief’s assets. This would be true even under ancap, since the protection agency employed by an individual might only look for assets worth more than some lower bound, or something like that. I can see no reason why such a policy would be illegal.

The second problem is that it is not at all clear that taxation is theft. Most taxpayers still think that government is good and necessary, and many are enthusiastic about it. They want the government to take their money. Is the government stealing from them? I don’t think so. The trouble is that you have to pay taxes to the government regardless of whether you support its policies. If you dislike the government’s policy on the environment, you can’t refuse to pay for that particular policy. Rather, you get to vote for one party or another every four years or so, and occasionally it may happen that a government is toppled by a vote of no confidence or something like that between elections. The rest of the time you are free to say what you like (in the West), but the government can ignore or insult you and there is nothing you can do about it. Now, just so I’m not misunderstood, having the vote is better than not having it. It is sometimes possible to persuade enough people at an election that the government is doing something bad or stupid. But I would prefer to change the way government works in the direction of allowing individuals to withdraw financial and practical support from the government piecemeal, on a much shorter timescale than every four years. I think that is the good substance of the ancap position. Taxation is bad financing; it is not theft.

Many ancaps might agree that government and corporate property is not legitimate. But seeing as everybody uses goods and services provided by the government I don’t think that it would be possible to disentangle what property is legitimate and what property is not legitimate. Some positions are morally worse than others to be sure. Campaigning for government support of X is worse than taking money from the government that happens to be available for doing X. If you think it would be better for X to be paid for by non-tax means but you take the money anyway I don’t see that as bad provided that you don’t compromise what you want or say that it’s good for X to be paid for by taxation. The money will be spent anyway so why not take it? The proviso may be difficult to meet, but if you’re willing to walk away if you can’t meet it, then that’s okay.

I think this is an instance of a much more general problem. People often say that some position is rational or not rational. (1) Sometimes what a person means by saying a position is irrational is that there are known criticisms of it and so people shouldn’t hold it. (2) But they also use irrational to mean that an idea has not been justified: not shown to be true or probably true. (1) is a possible standard, (2) is not. Justification is impossible because the conclusion of an argument is only true if its premises are true. So if you have to show something is true then you have to justify the premises, which requires another argument with more premises, which have to be justified, and so you get an infinite regress. So justification is a bad standard. What you can do instead is look for criticisms of your ideas: problems they fail to solve, such as inconsistencies with other ideas or with experimental data. You can then propose replacements for the criticised ideas and so make progress by solving problems. (See Realism and the Aim of Science by Karl Popper, especially Chapter I, Sections 1 and 2, and The Beginning of Infinity by David Deutsch for more details.) What you should do is look for problems and try to solve them, not justify ideas.

Looking for problems requires having the means to spot them and to take action to change the way we do things. Liberal democracy has some means to do this, but ancaps have suggested means that may allow us to do better. So if people decide ancap is a good idea, what should they do about the current distribution of property? The best gloss I can come up with on legitimacy is the following: a particular action is legitimate if there are no unrefuted criticisms of it. There is no way to justify an action or an idea about what action to take. If there is a clear problem with some particular action and there is a way to fix it that has no surviving criticisms, you should fix it. Otherwise you should just admit that you don’t know how things would be if people hadn’t made the mistakes they made in the past, rather than trying to undo things when you don’t know how to do so without doing bad stuff.

For example, Apple has sued Samsung, who allegedly copied their iPad designs or something like that, but the government has also pursued an antitrust case against Apple. What would have happened without those cases? I don’t know, nor does anybody else. Apple lost money from the antitrust action. But would people who bought a Samsung pad thing have bought an iPad? Did Apple actually lose money because of what Samsung allegedly did, and if not, weren’t they just shaking down Samsung? What would Apple have done with the money they had to spend on the antitrust case? How could you even go about finding out what opportunities Apple gained or lost, or how to price them, in either case? And let’s say Apple has come off worse. Whatever improvements they would have made, the resources for making them have already been used and can’t be recaptured. The damage can’t be undone. Apple and Samsung should just be left alone to trade.

There may be some very clear cases where somebody has been screwed and it is possible to make restitution. If the government has seized some property (e.g. – eminent domain or civil asset forfeiture) and it hasn’t ruined the property in question, then it should return the property. Otherwise all we should do is sell off government property and let the market sort the rest out. I don’t think the government has much chance of getting the price of its assets right, so there’s not a lot of point in worrying about that.

Joshua Greene: Bad Scientist and Amoralist

Joshua Greene is an associate professor of psychology at Harvard who works on what he calls moral cognition:

My lab studies moral judgment and decision-making, primarily using behavioral experiments and functional neuroimaging (fMRI). The goal of our research is to understand how moral judgments are shaped by automatic processes (such as emotional “gut reactions”) and controlled cognitive processes (such as reasoning and self-control).

So in other words, he’s not studying moral philosophy, right?

Rationalist philosophers such as Plato and Kant conceived of mature moral judgment as a rational enterprise, as a matter of appreciating abstract reasons that in themselves provide direction and motivation. In contrast to these philosophers, “sentimentalist” philosophers such as David Hume and Adam Smith argued that emotions are the primary basis for moral judgment. I believe that emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.

Emotion and reason both play a “critical” role? If they are both important, that implies there is some standard by which they are important. What’s the standard? He doesn’t say. I think this is because he doesn’t realise that he has raised the issue. Greene isn’t interested in moral philosophy. That is, he isn’t interested in how to make decisions and improve how he makes decisions. It seems likely from the article that he doesn’t think there is such a thing as an objectively better or worse way to make a decision: he is an amoralist. Greene continues:

More specifically, I have proposed a “dual-process” theory of moral judgment according to which characteristically deontological moral judgments (judgments associated with concerns for “rights” and “duties”) are driven by automatic emotional responses, while characteristically utilitarian or consequentialist moral judgments (judgments aimed at promoting the “greater good”) are driven by more controlled cognitive processes. 

Both deontology and utilitarianism are bad ways to think about moral philosophy. I hold neither of them, so I don’t fit into the little boxes he uses uncritically. Deontology holds that there are rules you have to obey to be moral; utilitarianism holds that acting morally consists of calculating the greatest good according to some standard. Neither of them accounts for the growth of knowledge. Any rule you could come up with may turn out to be flawed or irrelevant in the light of some new explanation or problem, so deontology is not worth much since it doesn’t explain how to make such decisions. Utilitarianism has the same problem, since you have to assume some standard to make it work, and if the standard is unclear or flawed then utilitarianism won’t help you make the decision. For example, does the rule or standard “thou shalt not kill” apply to turning off a life support machine for a brain dead patient? Or can you do the utility calculation for that problem if you don’t understand whether the patient still counts as alive?

 If I’m right, the tension between deontological and consequentialist moral philosophies reflects an underlying tension between dissociable systems in the brain. Many of my experiments employ moral dilemmas, adapted from the philosophical literature, that are designed to exploit this tension and reveal its psychological and neural underpinnings.

The “dilemmas” Greene discusses tell us nothing about anything except the sort of mess you can get into when you fail to refute bad philosophy:

My main line of experimental research began as an attempt to understand the “Trolley Problem,”…

 First, we have the switch dilemma:  A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one?   Most people say “Yes.”

Then we have the footbridge dilemma:  Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley.  Is that morally permissible?  Most people say “No.”

These two cases create a puzzle for moral philosophers:  What makes it okay to sacrifice one person to save five others in the switch case but not in the footbridge case?  There is also a psychological puzzle here:  How does everyone know (or “know”) that it’s okay to turn the trolley but not okay to push the man off the footbridge?

The appropriate answer to both problems is to say that the question is ill-posed. In a real situation there would be many relevant details that would help solve the problem or you would just lack the knowledge to make a good decision. If you know enough to stop the trolley you should do that. And if you don’t have an idea about how to stop it, the appropriate thing to do would be to wait and see instead of doing something that might cause trouble for people who have more knowledge who are trying to fix the problem. And the idea of throwing somebody off a bridge into the trolley’s path is silly. That person might have better ideas about how to solve the problem than you do so if you want to do something you should work together rather than have a fight that ends with one or both of you being run over. So if you actually think about the trolley problem as you would a real problem in your life you wouldn’t be tempted to say stupid stuff like “I would flick a switch that controls something I don’t understand.” Saying you would flick the switch would be like saying that if you’re on a plane and somebody has chest pain you should cut open his chest to do open heart surgery despite the fact that you know nothing about hearts or surgery and you don’t know the cause of the pain.

Instead of taking this line Greene talks all sorts of elaborate piffle about brain systems for deontology and utilitarianism. Those systems don’t exist. Some people who think badly about moral philosophy might think along those lines and their brains might light up in some particular way when they do so, but so what? Greene is trying to do science without thinking about the relevant explanations. He takes some canned dilemma from incompetent philosophers and treats what they say about it as gospel. What if somebody thinks or suspects that the trolley dilemma is a load of old tosh but doesn’t say so, and Greene puts that person in an MRI? Won’t that pollute his results?

Greene brings up another problem:

Consider the crying baby dilemma:  It’s war time, and you are hiding in a basement with several other people. The enemy soldiers are outside. Your baby starts to cry loudly, and if nothing is done the soldiers will find you and kill you, your baby, and everyone else in the basement. The only way to prevent this from happening is to cover your baby’s mouth, but if you do this the baby will smother to death.  Is it morally permissible to do this?

So the only choices are to let the baby cry or kill him? Really? If somebody tries to shut up his baby under such circumstances and accidentally smothers him do I think he should be held legally liable? I don’t know, it depends on some of the details of the context.

Greene’s work is bad science and bad philosophy. This junk reflects poorly on Harvard, who hired Greene, and on Penguin, who are publishing a book he wrote.

Tanya on selfishness and altruism

Tanya has a blog post about selfishness and altruism, which is almost entirely wrong. We can see the beginning of the problem right at the start of the post:

In ethics we talk of the difference between ‘selfishness’ and ‘altruism’, and although it is frequently acknowledged that these terms are very elusive, we are to a great extent dependent on them for moral discussion.

On the surface of it, a straightforward reading of the dictionary has the case set out plainly.

self·ish adjective \ˈsel-fish\: having or showing concern only for yourself and not for the needs or feelings of other people

al·tru·ism noun \ˈal-trü-ˌi-zəm\ : feelings and behaviour that show a desire to help other people and a lack of selfishness

An interesting thing to note here, which Tanya doesn’t note, is that these definitions are part of a more general idea about how the world and morality work. That idea is wrong in a way that was pointed out by Ayn Rand, as I shall discuss below. Tanya continues:

However, there’s a sense in which these definitions can only be read plainly if we were discussing, say, animal subjects–subjects which have a clearly defined ‘self’ to which they can clearly be concerned with benefiting or not within a given activity. Animals have this as they are strictly programmed by evolution. They have a ‘self’ (using the term loosely) amounting to a biological entity looking to survive and replicate, as a biological entity. This sets out clear boundaries in which activities such as eating when hungry or finding shelter are self-preserving, while activities such as grooming another or helping another eat when they are hungry are other-orientated.

This is all wrong. Animals enact a program in their genes. They can’t create new explanatory knowledge. As a result of this they can’t create knowledge about their place in the world, what they are doing and why, and whether it could be improved. An animal doesn’t have a self any more than a cute computer game character does. One way we can tell this is that people have made attempts to teach animals language and have failed to teach them anything beyond the simplest rudiments. For some examples see Kanzi: The Ape on the Brink of a Human Mind by Sue Savage-Rumbaugh and Are Dolphins Really Smart? by Justin Gregg, neither of whom necessarily agrees with my interpretation of what they wrote. See also The Beginning of Infinity, Chapter 16, especially around p. 407.

Tanya then writes about humans:

Humans are a different kind of creature, we are programmed by evolution only in a weak sense, and we are in a much stricter sense programmed by values. We have selves that are mental, which exist in self-selected ideas, and as such are malleable. We can learn, we can develop preferences, and we can change our minds. What is at one point unselfish can become selfish merely by shifts in our ideas. And therefore, there’s a sense in which we can never become less selfish in the pursuit of altruism. If a person is motivated by ‘wanting to win the football match’ this notion may entail ‘wanting to motivate the team’, ‘wanting to play your best’, or ‘wanting to give the crowd a good time.’ Each action can be interpreted fairly as both selfish and altruistic. Although each can be seen in a sense as generous, they are not idly or purposelessly giving, they serve some desire of the self. This veil of selfishness to our actions continues to apply to most human intentions, even charity.

What about people? Ideas can’t flow as easily from one person to another as they can within a single person’s mind. In particular, you can only judge or act on ideas in your own mind, not on those in the mind of another person. If you want to judge another person’s ideas you have to learn them first. Furthermore, since you have different knowledge than they do, you have different opportunities and can make use of different stuff. Some people might be good at drawing, others at programming, others at physics and so on. So it makes sense to say that you are different from other people and that those differences often imply that you can benefit from different deals. So it makes sense to say that a human being can act in his own interest, which can be different from that of another person, but it doesn’t make sense to say this about an animal. Tanya continues:

The lines between altruism and selfishness are blurred by our existence through the mental in which ‘us’ and ‘the outside world’ can be intimately intertwined by values. In the extreme, selfishness can manifest itself in form of dying for a cause or a loved one.

This is very confused. What people commonly call selfishness involves lumping together two very different ideas, as pointed out by Ayn Rand. Idea 1 (good): thinking about what is in your self-interest and trying to enact it. Idea 2 (bad): being willing to do absolutely anything that seems to benefit you according to some ideas you happen to hold at the moment. Lumping those two things together is what Ayn Rand called a package deal.

The same set of common ideas lumps together two very different ideas under the heading altruism: another package deal. Idea 1 (good): you sometimes help people when it benefits you too. Idea 2 (bad): you have an obligation to help people even if, by doing so, you are cutting your own throat. Indeed, cutting your own throat is good and if you’re not doing it then you suck and other people should cut your throat for you.

One way that bad ideas survive: people accept them uncritically by using common words without stopping to question the ideas behind them. When Rand says that she uses the term selfishness for the reason that makes some people afraid to use it, the positive way of interpreting this is that she is bringing these bad ideas out in the open so that we can kill them. Rand replaced those ideas by saying that rational people don’t have conflicts of interest. So you can act in your own interest without screwing over other people.

The bad ideas in the definitions Tanya gave have leaked into her discussion of morality. Tanya claims you can sacrifice yourself for a cause or a loved one and that this is selfish. This is misleading. Let’s suppose you go off to fight in the British army in World War II. If you’re rational your objective is not to die. Rather, you are taking a risk of dying because the consequences of losing the war are worse than the risk of death that you’re taking on. If you get conquered by Nazi Germany it’s virtually impossible to act in your rational self-interest without being murdered. By contrast, if you’re fighting in the Soviet army and you take the risk required to steal food from kulaks, then you’re just an idiot – you are literally throwing away your life for nothing, for less than nothing. By living the idea that productive people can be thrown under the bus, you will reduce production. And you have made it more difficult for any productive person to cooperate with you, which will make it more difficult for you to improve. And in any case, if the authorities decide you should be murdered, then how can you say no given the ideas you hold? What argument can you give that you should not be killed for the good of others? The British soldier may be acting in his rational self-interest; the Soviet soldier is not.

Under an Objectivist perceptive, this renders talk of altruism redundant, with altruism standing out only in instances in which coercive pressures precede generosity. But it is not evident that, if this be so, by the same token talk of ‘selfishness’ shouldn’t becomes obsolete. If we find ourselves in a situation where one can say ‘I changed my mind drastically from being selfish to being selfish’, the term is hardly descriptive.

This is not a very accurate depiction of Rand’s position. Rand doesn’t consider talk of altruism redundant: she thought it was a good idea to criticise the bad content associated with it. Tanya’s misunderstanding illustrates a weakness of discussing issues in terms of the definitions of words rather than in terms of explanations, as criticised by Popper: see The Open Society and its Enemies, Volume 2, Chapter 11, Section II. Rand may be partly to blame for this problem in the interpretation of her work since she sometimes laid stress on definitions. Tanya has taken this bad habit and run with it.

But then it sounds like Tanya is about to turn the corner and get something right:

But of course, all of this misses the point. Practically speaking when people speak of ‘selfishness’ being good or bad, or ‘altruism’ being good or bad, they’re really being used as umbrella terms for some handy rules of thumb to help guide us in selecting and reviewing our values.

Tanya then goes on to give examples of what she considers good and bad ideas under each term. Each term is sometimes used to invoke good moral ideas and sometimes used to invoke bad moral ideas. This often leads to a lack of clarity about moral issues when people don’t discuss the ideas. The right thing to do if some issue is unclear is to clarify it by discussing the ideas, not the terminology. Tanya then writes:

To decide between values we require much more sophisticated ideas to guide us.

Presumably it would be a good idea to start discussing the ideas she lists under her definitions, but she doesn’t. By contrast, there are some philosophers who don’t stop just when things are getting interesting. Ayn Rand wrote two novels and several non-fiction books, which have a lot of good and substantive and sophisticated moral content. Karl Popper also had good moral ideas, like the idea that you should take ideas seriously and discuss them instead of discussing terminology. See also some of the posts on the blog you’re reading right now, like this one, and Elliot Temple’s blog. Sadly, people often ignore such content.

Against Conspiracy Theories

I have made a YouTube video about why conspiracy theories suck.

Punishment vs responsibility

Many people seem to think that if somebody is responsible for doing something bad, like murder, that legitimises punishing him – that is, harming him in a way that has nothing to do with self defence, defence of others, or defence of property. (I am using the word punishment to denote this idea. You can substitute another term for punishment if you would prefer other terminology.) The standard attitude is that people in prison are bad and so we should make them suffer because it is good for them to suffer. And so if people get raped in prison that’s just too bad.

But there is a problem with punishing somebody because he is responsible. If a person is responsible for something he did that means he could have chosen to do otherwise. He could have created knowledge about a better way to live and so avoided doing the bad thing he did. But if that’s true, why punish him? Why not help him to learn a better way of living and then let him get on with his life? Punishing a criminal won’t make the punisher better off except insofar as it satisfies a desire to hurt the criminal. Punishment won’t make the criminal more productive. Only better ideas about how to live could do that. So punishment is a dead end that benefits nobody.

There is another problem too. The criminal is responsible because he should have realised there are better ways of behaving and found out about them. The non-criminal ways of behaving would enable him to cooperate with other people for mutual benefit. So one principle that should have guided his actions is that he should have been looking for a way to cooperate with other people for mutual benefit. But that principle can’t be reconciled with punishment, which involves the punisher acting in a way that he knows can’t lead to mutual benefit.

A few disclaimers. I am not saying that prisons should be shut down, although they are to some extent devoted to punishment. Some of the people locked up in prison should be locked up to stop them from preying on others. Nor do I think we should necessarily go out of our way to make prison pleasant. But we should also not go out of our way to make prison horrible and destructive.