Justificationism vs ancap and rationality

The socialist anarchopac claims to have an argument against anarchocapitalism (ancap). I think this argument is flawed, but I doubt that many ancaps could reply to it properly.

Anarchopac states that according to ancap all property gained by coercion is illegitimate. So if a thief buys a phone with money stolen from many victims, the victims collectively own the phone. All government property has been created by taxation, which according to ancap is theft, so the money used to fund that property is owned collectively by all of the people from whom it was stolen. Many corporations receive government subsidies, and that money too comes from taxation, so their property too is owned collectively.

There are a few problems with this argument. The first is that if a thief steals money from people, then he owes them money, not something they would not have chosen to buy. If we were to view taxation as theft, then what the government would owe people is the money it stole, not the goods or services it bought with the proceeds of that theft. Relatedly, if a thief steals money from people and buys a phone with it, the victims may or may not end up owning the phone, or the proceeds from its sale, or the money the thief stole from them, depending on what the law awards them as compensation and what other claims are made on the thief’s assets. This would be true even under ancap, since the protection agency employed by an individual might only pursue assets worth more than some lower bound or something like that. I can see no reason why such a policy would be illegal.

The second problem is that it is not at all clear that taxation is theft. Most taxpayers still think that government is good and necessary, and many are enthusiastic about it. They want the government to take their money. Is the government stealing from them? I don’t think so. The trouble is that you have to pay taxes to the government regardless of whether you support its policies. If you dislike the government’s policy on the environment, you can’t refuse to pay for that particular policy. Rather, you get to vote for one party or another every four years or so, and occasionally it may happen that a government is toppled between elections by a vote of no confidence or something like that. The rest of the time you are free to say what you like (in the West), but the government can ignore or insult you and there is nothing you can do about it. Now, just so I’m not misunderstood, having the vote is better than not having it. It is sometimes possible, at an election, to persuade enough people that the government is doing something bad or stupid. But I would prefer to change the way government works in the direction of allowing individuals to withdraw financial and practical support from the government piecemeal and on a much shorter timescale than every four years. I think that is the good substance of the ancap position. Taxation is bad financing; it is not theft.

Many ancaps might agree that government and corporate property is not legitimate. But seeing as everybody uses goods and services provided by the government, I don’t think it would be possible to disentangle which property is legitimate and which is not. Some positions are morally worse than others, to be sure. Campaigning for government support of X is worse than taking money from the government that happens to be available for doing X. If you think it would be better for X to be paid for by non-tax means but you take the money anyway, I don’t see that as bad, provided that you don’t compromise what you want or say that it’s good for X to be paid for by taxation. The money will be spent anyway, so why not take it? The proviso may be difficult to meet, but if you’re willing to walk away when you can’t meet it, then that’s okay.

I think this is an instance of a much more general problem. People often say that some position is rational or not rational. (1) Sometimes what a person means by saying a position is irrational is that there are known criticisms of it and so people shouldn’t hold it. (2) But they also use “rational” to mean that an idea has been justified: shown to be true or probably true. (1) is possible, (2) is not. Justification is impossible because an argument only establishes its conclusion if its premises are true. So if you have to show something is true, then you have to justify the premises, which requires another argument with more premises, which have to be justified in turn, and so you get an infinite regress. So justification is a bad standard. What you can do instead is look for criticisms of your ideas: problems they fail to solve, such as inconsistencies with other ideas or with experimental data. You can then propose replacements for the criticised ideas and so make progress by solving problems. (See Realism and The Aim of Science by Karl Popper, especially Chapter I, Sections 1 and 2, and The Beginning of Infinity by David Deutsch for more details.) What you should do is look for problems and try to solve them, not justify ideas.

Looking for problems requires having the means to spot them and to take action to change the way we do things. Liberal democracy has some means to do this, but ancaps have suggested means that may allow us to do better. So if people decide ancap is a good idea, what should they do about the current distribution of property? The best gloss I can come up with on legitimacy is the following: a particular action is legitimate if there are no unrefuted criticisms of it. There is no way to justify an action or an idea about what action to take. If there is a clear problem with some particular action and there is a way to fix it that has no surviving criticisms, you should fix it. Otherwise you should just admit that you don’t know how things would be if people hadn’t made the mistakes they made in the past, rather than trying to undo things when you don’t know how to do so without doing bad stuff.

For example, Apple has sued Samsung, who allegedly copied their iPad designs or something like that, but the government has also pursued an antitrust case against Apple. What would have happened without those cases? I don’t know, nor does anybody else. Apple lost money from the antitrust action. But would people who bought a Samsung pad thing have bought an iPad? Did Apple actually lose money because of what Samsung allegedly did, and if not, weren’t they just shaking down Samsung? What would Apple have done with the money they had to spend on the antitrust case? How could you even go about finding out what opportunities Apple gained or lost, or how to price them, in either case? And let’s say Apple has come off worse. Whatever improvements they would have made, the resources for making them have already been used and can’t be recaptured. The damage can’t be undone. Apple and Samsung should just be left alone to trade.

There may be some very clear cases where somebody has been screwed and it is possible to make restitution. If the government has seized some property (e.g. eminent domain or civil asset forfeiture) and it hasn’t ruined the property in question, then it should return the property. Otherwise all we should do is sell off government property and let the market sort the rest out. I don’t think the government has much chance of getting the price of its assets right, so there’s not a lot of point in worrying about that.

Joshua Greene: Bad Scientist and Amoralist

Joshua Greene is an associate professor of psychology at Harvard who works on what he calls moral cognition:

My lab studies moral judgment and decision-making, primarily using behavioral experiments and functional neuroimaging (fMRI). The goal of our research is to understand how moral judgments are shaped by automatic processes (such as emotional “gut reactions”) and controlled cognitive processes (such as reasoning and self-control).

So in other words, he’s not studying moral philosophy, right?

Rationalist philosophers such as Plato and Kant conceived of mature moral judgment as a rational enterprise, as a matter of appreciating abstract reasons that in themselves provide direction and motivation. In contrast to these philosophers, “sentimentalist” philosophers such as David Hume and Adam Smith argued that emotions are the primary basis for moral judgment. I believe that emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.

Emotion and reason both play a “critical” role? If they are important, that implies that there is some standard by which they are important. What’s the standard? He doesn’t say. I think this is because he doesn’t realise that he has raised this issue. Greene isn’t interested in moral philosophy. That is, he isn’t interested in how to make decisions and how to improve the way he makes them. It seems likely from the article that he doesn’t think there is such a thing as an objectively better or worse way to make a decision: he is an amoralist. Greene continues:

More specifically, I have proposed a “dual-process” theory of moral judgment according to which characteristically deontological moral judgments (judgments associated with concerns for “rights” and “duties”) are driven by automatic emotional responses, while characteristically utilitarian or consequentialist moral judgments (judgments aimed at promoting the “greater good”) are driven by more controlled cognitive processes. 

Both deontology and utilitarianism are bad ways to think about moral philosophy. I hold neither of them, so I don’t fit into the little boxes he uses uncritically. Deontology holds that there are rules you have to obey to be moral; utilitarianism holds that acting morally consists of calculating the greatest good according to some standard. Neither of them accounts for the growth of knowledge. Any rule you could come up with may turn out to be flawed or irrelevant in the light of some new explanation or problem, so deontology is not worth much, since it doesn’t explain how to make decisions in such cases. Utilitarianism has the same problem: you have to assume some standard to make it work, and if the standard is unclear or flawed then utilitarianism won’t help you make the decision. For example, does the rule or standard “thou shalt not kill” apply to turning off a life support machine for a brain-dead patient? Or can you do the utility calculation for that problem if you don’t understand whether the patient still counts as alive?

 If I’m right, the tension between deontological and consequentialist moral philosophies reflects an underlying tension between dissociable systems in the brain. Many of my experiments employ moral dilemmas, adapted from the philosophical literature, that are designed to exploit this tension and reveal its psychological and neural underpinnings.

The “dilemmas” Greene discusses tell us nothing about anything except the sort of mess you can get into when you fail to refute bad philosophy:

My main line of experimental research began as an attempt to understand the “Trolley Problem,”…

 First, we have the switch dilemma:  A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one?   Most people say “Yes.”

Then we have the footbridge dilemma:  Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley.  Is that morally permissible?  Most people say “No.”

These two cases create a puzzle for moral philosophers:  What makes it okay to sacrifice one person to save five others in the switch case but not in the footbridge case?  There is also a psychological puzzle here:  How does everyone know (or “know”) that it’s okay to turn the trolley but not okay to push the man off the footbridge?

The appropriate answer to both problems is to say that the question is ill-posed. In a real situation there would be many relevant details that would help solve the problem, or you would simply lack the knowledge to make a good decision. If you know enough to stop the trolley, you should do that. And if you don’t have an idea about how to stop it, the appropriate thing to do would be to wait and see instead of doing something that might cause trouble for people with more knowledge who are trying to fix the problem. And the idea of throwing somebody off a bridge into the trolley’s path is silly. That person might have better ideas about how to solve the problem than you do, so if you want to do something you should work together rather than have a fight that ends with one or both of you being run over. So if you actually think about the trolley problem as you would a real problem in your life, you wouldn’t be tempted to say stupid stuff like “I would flick a switch that controls something I don’t understand.” Saying you would flick the switch would be like saying that if you’re on a plane and somebody has chest pain, you should cut open his chest and do open-heart surgery despite the fact that you know nothing about hearts or surgery and don’t know the cause of the pain.

Instead of taking this line, Greene talks all sorts of elaborate piffle about brain systems for deontology and utilitarianism. Those systems don’t exist. Some people who think badly about moral philosophy might think along those lines, and their brains might light up in some particular way when they do so, but so what? Greene is trying to do science without thinking about the relevant explanations. He has taken some canned dilemmas from incompetent philosophers and treats what they say about them as gospel. What if somebody thinks or suspects that the trolley dilemma is a load of old tosh but doesn’t say so, and Greene puts that person in an MRI? Won’t that pollute his results?

Greene brings up another problem:

Consider the crying baby dilemma:  It’s war time, and you are hiding in a basement with several other people. The enemy soldiers are outside. Your baby starts to cry loudly, and if nothing is done the soldiers will find you and kill you, your baby, and everyone else in the basement. The only way to prevent this from happening is to cover your baby’s mouth, but if you do this the baby will smother to death.  Is it morally permissible to do this?

So the only choices are to let the baby cry or kill him? Really? If somebody tried to shut up his baby under such circumstances and accidentally smothered him, should he be held legally liable? I don’t know; it depends on the details of the context.

Greene’s work is bad science and bad philosophy. This junk reflects poorly on Harvard, who hired Greene, and on Penguin, who are publishing a book he wrote.