Joshua Greene: Bad Scientist and Amoralist

Joshua Greene is an associate professor of psychology at Harvard who works on what he calls moral cognition:

My lab studies moral judgment and decision-making, primarily using behavioral experiments and functional neuroimaging (fMRI). The goal of our research is to understand how moral judgments are shaped by automatic processes (such as emotional “gut reactions”) and controlled cognitive processes (such as reasoning and self-control).

So in other words, he’s not studying moral philosophy, right?

Rationalist philosophers such as Plato and Kant conceived of mature moral judgment as a rational enterprise, as a matter of appreciating abstract reasons that in themselves provide direction and motivation. In contrast to these philosophers, “sentimentalist” philosophers such as David Hume and Adam Smith argued that emotions are the primary basis for moral judgment. I believe that emotion and reason both play critical roles in moral judgment and that their respective influences have been widely misunderstood.

Emotion and reason both play a “critical” role? To call them important implies that there is some standard by which they are important. What is that standard? He doesn’t say, and I think that’s because he doesn’t realise he has raised the issue. Greene isn’t interested in moral philosophy, that is, in how to make decisions and how to improve the way he makes them. It seems likely from the article that he doesn’t think there is such a thing as an objectively better or worse way to make a decision: he is an amoralist. Greene continues:

More specifically, I have proposed a “dual-process” theory of moral judgment according to which characteristically deontological moral judgments (judgments associated with concerns for “rights” and “duties”) are driven by automatic emotional responses, while characteristically utilitarian or consequentialist moral judgments (judgments aimed at promoting the “greater good”) are driven by more controlled cognitive processes. 

Both deontology and utilitarianism are bad ways to think about moral philosophy. I hold neither of them, so I don’t fit into the little boxes he uses uncritically. Deontology holds that there are rules you must obey to be moral; utilitarianism holds that acting morally consists of calculating the greatest good according to some standard. Neither accounts for the growth of knowledge. Any rule you could come up with may turn out to be flawed or irrelevant in the light of some new explanation or problem, so deontology is not worth much: it doesn’t explain how to make such decisions. Utilitarianism has the same problem: you have to assume some standard to make it work, so if the standard is unclear or flawed then utilitarianism won’t help you make that decision. For example, does the rule or standard “thou shalt not kill” apply to turning off a life support machine for a brain-dead patient? And can you do the utility calculation for that problem if you don’t understand whether the patient still counts as alive?

If I’m right, the tension between deontological and consequentialist moral philosophies reflects an underlying tension between dissociable systems in the brain. Many of my experiments employ moral dilemmas, adapted from the philosophical literature, that are designed to exploit this tension and reveal its psychological and neural underpinnings.

The “dilemmas” Greene discusses tell us nothing about anything except the sort of mess you can get into when you fail to refute bad philosophy:

My main line of experimental research began as an attempt to understand the “Trolley Problem,”…

First, we have the switch dilemma: A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. You can save these five people by diverting the trolley onto a different set of tracks, one that has only one person on it, but if you do this that person will be killed. Is it morally permissible to turn the trolley and thus prevent five deaths at the cost of one? Most people say “Yes.”

Then we have the footbridge dilemma: Once again, the trolley is headed for five people. You are standing next to a large man on a footbridge spanning the tracks. The only way to save the five people is to push this man off the footbridge and into the path of the trolley. Is that morally permissible? Most people say “No.”

These two cases create a puzzle for moral philosophers:  What makes it okay to sacrifice one person to save five others in the switch case but not in the footbridge case?  There is also a psychological puzzle here:  How does everyone know (or “know”) that it’s okay to turn the trolley but not okay to push the man off the footbridge?

The appropriate answer to both problems is to say that the question is ill-posed. In a real situation there would be many relevant details that would help solve the problem or you would just lack the knowledge to make a good decision. If you know enough to stop the trolley you should do that. And if you don’t have an idea about how to stop it, the appropriate thing to do would be to wait and see instead of doing something that might cause trouble for people who have more knowledge who are trying to fix the problem. And the idea of throwing somebody off a bridge into the trolley’s path is silly. That person might have better ideas about how to solve the problem than you do so if you want to do something you should work together rather than have a fight that ends with one or both of you being run over. So if you actually think about the trolley problem as you would a real problem in your life you wouldn’t be tempted to say stupid stuff like “I would flick a switch that controls something I don’t understand.” Saying you would flick the switch would be like saying that if you’re on a plane and somebody has chest pain you should cut open his chest to do open heart surgery despite the fact that you know nothing about hearts or surgery and you don’t know the cause of the pain.

Instead of taking this line, Greene talks all sorts of elaborate piffle about brain systems for deontology and utilitarianism. Those systems don’t exist. Some people who think badly about moral philosophy might think along those lines, and their brains might light up in some particular way when they do so, but so what? Greene is trying to do science without thinking about the relevant explanations. He takes canned dilemmas from incompetent philosophers and treats what they say about them as gospel. What if somebody thinks or suspects that the trolley dilemma is a load of old tosh but doesn’t say so, and Greene puts that person in an fMRI scanner? Won’t that pollute his results?

Greene brings up another problem:

Consider the crying baby dilemma:  It’s war time, and you are hiding in a basement with several other people. The enemy soldiers are outside. Your baby starts to cry loudly, and if nothing is done the soldiers will find you and kill you, your baby, and everyone else in the basement. The only way to prevent this from happening is to cover your baby’s mouth, but if you do this the baby will smother to death.  Is it morally permissible to do this?

So the only choices are to let the baby cry or kill him? Really? If somebody tried to quiet his baby under such circumstances and accidentally smothered him, would I think he should be held legally liable? I don’t know; it depends on the details of the context.

Greene’s work is bad science and bad philosophy. This junk reflects poorly on Harvard, which hired Greene, and on Penguin, which is publishing a book he wrote.

About conjecturesandrefutations
My name is Alan Forrester. I am interested in science and philosophy: especially David Deutsch, Ayn Rand, Karl Popper and William Godwin.
