Animals shouldn’t have rights

Many people take for granted the claim that animals should have some rights. Exactly what rights animals should have varies from one claimant to another. Some people might say animals should have a right not to have pain inflicted on them, but not the right to vote. This idea is based on misunderstandings of rights and of human beings and animals.

One problem is that many people seem to think that rights are a sort of social nicety, but they are wrong. A right is an enforceable claim to something. For example, if I have a right to own some piece of property, then if somebody takes the property, I have a claim to get the property back. And it’s not just the case that I can say ‘pretty please, give me the property back’. I can call the police and they may use force to get the property back and detain the person who took it. A right is not a polite request for something, or indeed a request of any kind. When you say ‘animals should have rights’, you’re not saying that it would be a good idea for people to treat dogs well. Rather, you’re saying that if a person doesn’t treat a dog well, then people can and should use force directly or indirectly to stop him from treating the dog badly. If you wouldn’t be willing to say that a person should be thrown in jail for violating the alleged right, then you shouldn’t call it a right.

Under what circumstances should a particular thing be granted rights? I don’t think anyone would say that a lump of concrete should have rights. There are a couple of reasons for this. The concrete itself does not ask for rights. In addition, the concrete will not act very differently if we say it has rights and treat it accordingly. By contrast, if you beat a person up, he will claim that what you’re doing is wrong and that you should be held accountable for your actions. He will also not be inclined to deal with you after you have beaten him up. So it makes a difference whether you treat other people according to their rights or not, both to them and to you. But it is not enough for there to be a difference of some kind depending on how you treat an object. If that were the case, then my computer should have rights, since it will stop working if I hit it with a sledgehammer.

If I respect a person’s rights, he can go off and do things independently of me that may benefit me, directly or indirectly. He could become a computer programmer and help write a great game. He could compose some music. He could become a doctor or nurse, and help save people who can produce goods from which I can benefit, or his medical treatment might save my life or relieve some pain or something like that. My computer doesn’t act the same way. The only way we know of to get a computer to do something is to give it suitable instructions. Those instructions may be written by me or by other people, but there is always a person giving the orders. Nothing useful happens without those orders. The person can produce a potentially open-ended stream of benefits for me and for others. The computer can’t do this.

Why can a person do this, but not a computer? The person is capable of creating new explanatory knowledge. A person can create knowledge about music, or physics, or how to lay out a retail store, or how to cut hair, or anything else. Computers can’t create new explanatory knowledge. This is a qualitative difference, not a quantitative difference. The idea that it is possible for us to understand anything about how the world works is required to make the rational, scientific worldview work. If there is a restriction on what sort of things it is possible for people to explain, then this fundamentally means we can’t explain anything. If there were such a limitation on being able to explain parsnips, say, then it would be impossible for us to understand things that interact with parsnips. And we would then be unable to understand things that interact with things that interact with parsnips, and so on. So there is a qualitative difference between people, who can understand how the world works arbitrarily well, and computers as they are currently programmed.

There would be another problem with granting a computer rights. The computer can’t give or withhold consent to be treated in a certain way. If I have a right to control a piece of property, that means I can consent to give it up if I want to. If I have a right to control what substances I put in my body, I have the right to consent to put something in my body or not.

The question of whether animals should have rights has a lot to do with whether an animal is more similar to a person or a computer in the respects I explained above. It might appear obvious to you that the animal is more like a person. If you try to torture or kill an animal it will fight back, as a person would. The animal will make noises that sound a lot like the noises that a person makes when he is angry or in pain. And animals are made out of the same kinds of material as humans: muscle, bone, brain, nerves and so on. And an animal’s nervous system is, in many respects, similar to a person’s. So you may think that what is going on inside a screaming animal is the same as what is going on inside a screaming person, and that it therefore makes a difference to the animal how you treat it.

If you made this argument, you would be wrong. The problem is that when a person feels pain, his interpretation of that experience is part of what makes it bad. The person understands that he might die, or be unable to perform certain tasks, or it might change his view of the world and make him less inclined to go out, or whatever. Such interpretations depend on his ability to understand the world. Without that ability, no such interpretation would exist. And other animals lack that ability. Dogs don’t write plays, or songs, or come up with scientific theories. It’s not that some dogs do those things, and others don’t. Not a single dog in the whole of history has ever done any of those things.

You might think that some species are smarter than dogs in some way. For example, bonobos have used sign language. But as recorded in Kanzi: The Ape on the Brink of the Human Mind by Sue Savage-Rumbaugh and Roger Lewin, bonobos never managed to understand a sentence as simple as ‘put the coke can in the trash can.’ What was going on in the bonobos was something very different from what happens when a person learns. An animal has some finite set of behaviours it can enact, determined by its genes. It has some set of features of the world that it can discriminate, again, determined by its genes. And the animal can try out combinations of the set of behaviours until the results meet some criteria encoded in its brain by its genes. See R. W. Byrne’s paper Imitation as behaviour parsing and The Beginning of Infinity Chapter 16, Section ‘How do you replicate a meaning?’ starting around p. 401.

Some people might say that we evolved from animals so we can’t be qualitatively different. But evolution has given rise to qualitative differences, e.g. the differences between multicellular organisms and single-celled organisms, or between animals that perceive light and those that are blind. And since humans are qualitatively different, there pretty much had to be other species that were similar to us in many respects, except in their ability to understand the world.

Since animals can’t understand the world, an animal does not have the potential to produce an open-ended stream of benefits in the same way a person can. We gain nothing by granting them rights. Administering any rights granted to animals would also be a problem since the animals can’t give or withhold consent. A person can consent to eat spicy food, even if it makes him have an experience in which his mouth feels like it is burning and tears are running down his cheeks. But we know he wants this because he can tell us. So how are we to decide whether an animal wants spicy food? The animal can’t tell us.

Granting animals rights is a mistake. We are throwing out the actual interests of people for something that doesn’t benefit us and can’t benefit animals. We may wish to treat animals well for a variety of reasons. Some animals look cute and we don’t want to hurt them. Some animals produce better meat or eggs or milk or whatever if treated in specific ways. This does not require giving animals rights. We have nothing to gain and a lot to lose by trying to grant rights to animals.

EU vs the internet

A European Commission (EC) document called Online platforms and the digital single market has been leaked. It has content that has been described as a link tax, but the article making that claim doesn’t provide page references or long quotes for most of its assertions. I think the policy described as a link tax is a bad idea, but I wouldn’t describe it as a tax. It’s more like the EC just breaking contracts whenever it happens to feel like it.

On page 9 of the document the problem the EC wants to address is described as follows:

New forms of online content distribution have emerged which may involve several actors, and where, for instance, content is distributed through platforms which make available copyright protected content uploaded by end-users. While these services attract a growing audience and gain economic benefits from the content distribution, there is a growing concern as to whether the value generated by some of these forms of online content distribution is shared in a fair manner between distributors and right holders. In reply to the public consultation, right holders across several content sectors reported that their content is increasingly used without authorisation or through licencing agreements, which, in their view, contain unfair terms.

The emphasis and wrong spelling of “licensing” are in the original.

On p.10, the EC describes what it intends to do about this problem:

in the next copyright package, to be adopted in autumn 2016, the Commission will aim at ensuring fair allocation of the value generated from the online distribution of copyright-protected content by online platforms whose businesses are based on the provision of access to copyright-protected material.

The emphasis in the quote is the same as in the original.

The first problem with this “solution” is that the EC openly states that some of the content is shared under license agreements. This means that the EC will be in the position of breaking the terms of contracts. Defenders of the EC might say they are going to decide on the basis of a “fair allocation”, but there is no such standard of fairness. If you make a contract, you should either stick to the terms or negotiate a new agreement both parties can accept. Otherwise, the other party to the contract just gets shafted and has no recourse. There is no fair way to fuck somebody over like that.

The second problem is that, as anyone who has followed what has happened on YouTube over the past few years knows, copyright law as it stands has some major problems. It is already difficult to quote material produced by somebody else, even for the purposes of commentary. Those concerns don’t matter to the EC. They are accountable to nobody. Voters can’t vote them out. Nor can anyone else. So how are they supposed to decide what to do? The EC have to get their information about problems to address from somebody, so they get it from whoever can afford to lobby them.

This document is an example of why the UK should leave the EU. In its current state, the EU can’t be reformed or saved because it has no means of error correction. Sticking around in the hope that maybe the EU will learn a lesson that it has failed to learn over the past several decades, and which it has no means to learn, would be a bad idea.

Harriman doesn’t understand physics

In some respects, physics is not in a very good state. In particular, many physicists are instrumentalists: they see physical theories as instruments for predicting the results of experiments rather than as explanations of what is happening in reality.

There is some resistance to instrumentalism among some physicists and members of the public. But a lot of this resistance takes the form of insisting that the laws of physics have to conform to some version of common sense. But common sense is just knowledge that people currently happen to think ought to be uncontroversial. So to say that some idea doesn’t conform to common sense is not particularly relevant to judging that idea. Rather, the idea should be taken seriously as an explanation in its own right. This includes understanding the claims the theory makes about measurement: what sort of physical processes constitute measurements, what limitations those processes put on which attributes of a system can be measured, and so on.

David Harriman is a common sense advocate, and has many of the weaknesses of such people. Harriman has written an article that includes dialogues between a physicist and a layman. The physicist is an instrumentalist and the layman is a common sense advocate.

First, I’ll look at a part of the dialogue about relativity:

P:  “There was a theory that treated length contraction and time dilation in that way. It was proposed by a Danish physicist named Hendrik Lorentz. On the basis of his theory, Lorentz derived some of the fundamental equations of relativity before Einstein did. But the Lorentzian theory was rejected and replaced by Einstein’s theory.”

L:  “Was Einstein’s theory accepted because it was better able to account for the observed facts?”

P:  “Not exactly. The basic advantage of Einstein’s theory is that it’s simpler. He dismissed the idea of explaining the phenomena of relativity by reference to any physical stuff in space (the so-called ether). Instead, we just say that moving bodies appear shorter and moving clocks appear to run slower—as perceived by a stationary observer. In other words, space contracts and time dilates by amounts that depend on the relative motion with respect to an observer.”

L:  “But I want to understand the cause of these effects. You say that length contraction and time dilation don’t refer to real physical changes in moving bodies. Do they instead refer to real effects on our measurement of lengths and times? I remember hearing a classical physicist explain that heating a ruler causes it to expand and thereby affects length measurements. Does motion also affect our physical means of measuring lengths and times? If so, I could make sense of relativity theory. There would still be real lengths of bodies and real time intervals; we merely have to account for and subtract the effects of motion on the measurements. After all, the actual properties and relationships of other bodies can’t change whenever I decide to move!”

If I take a picture of a book from two different angles, the measurements I make relative to the sides of the picture may be different, as in the two pictures below:

The book didn’t change as a result of my taking a photo from a different angle. The constitution of the camera didn’t change either, it still operated the same way after I turned it. The only thing that changed was the relationship between the book and the camera. So different relations between two objects can change the results of measurements even if the two systems operate the same way before and after the change. You can tell that the book remains the same because there are features of the book that remain the same in the two photos, such as the length of the bottom edge of the book compared to the letters on the cover. You could say that those are the real measurements of the book since they remain the same in the two photos, but it is also the case that there is a set of objective facts about the results of measurements on the two photos. Physics ought to tell us about both sets of facts. So the results of some measurements can depend on relations between two bodies.
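The point about relational measurements can be sketched numerically. This is a toy model, not from the original post: it assumes an orthographic camera and a flat book cover rotated about the vertical axis (with hypothetical lengths), so every horizontal length in the image shrinks by the same factor cos(theta), leaving ratios of horizontal lengths unchanged.

```python
import math

# Toy model (hypothetical numbers): an orthographic camera looks along the
# z-axis at a flat book cover. Rotating the cover about the vertical axis by
# theta scales every horizontal length in the image by cos(theta).
def image_width(length, theta):
    return length * math.cos(theta)

bottom_edge = 20.0   # cm, assumed width of the book's bottom edge
letter_width = 2.0   # cm, assumed width of a letter on the cover

for theta_deg in (0, 30):
    theta = math.radians(theta_deg)
    edge = image_width(bottom_edge, theta)
    letter = image_width(letter_width, theta)
    # Each measured length changes with the camera angle,
    # but the ratio of the two lengths is the same in both photos.
    print(f"{theta_deg} deg: edge={edge:.2f}, letter={letter:.2f}, ratio={edge/letter:.1f}")
```

Both measured lengths depend on the relation between the book and the camera, while their ratio does not, which mirrors the two sets of objective facts described above.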

The layman in the section of dialogue quoted above claims that the relationships between body 1 and body 2 don’t change when body 2 moves. This is a bizarre claim since the relative state of motion of two bodies is a relationship between them. So why shouldn’t some measurements change as a result of different states of relative motion? That is the explanation for the difference in length and time measurements given in standard accounts of special relativity, such as Special Relativity by A. P. French. Note also that, as in the case of the book photos above, special relativity claims that some features of a system’s behaviour don’t depend on its relations to other objects. For example, if two atoms each emit a photon, the time at which I see each atom emit its photon will in general depend on my state of motion relative to the atoms. And the distance I see between the atoms will also depend on my state of motion relative to the atoms. But the quantity c^2Δt^2 − Δx^2, where Δx is the distance I measure between the atoms and Δt is the time I measure between the emissions, is the same regardless of my state of motion. Special relativity is different from what people expect from everyday life, but it is consistent and explains the world better than common sense.
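The invariance of the quantity c^2Δt^2 − Δx^2 can be checked numerically by applying a Lorentz boost to a pair of events. This is a sketch with made-up event coordinates, in units where c = 1:

```python
import math

def boost(t, x, v):
    """Lorentz boost along x with velocity v, in units where c = 1."""
    gamma = 1.0 / math.sqrt(1.0 - v * v)
    return gamma * (t - v * x), gamma * (x - v * t)

def interval(dt, dx):
    """The invariant c^2*dt^2 - dx^2 (with c = 1)."""
    return dt * dt - dx * dx

# Two hypothetical emission events (t, x) in some reference frame.
e1 = (0.0, 0.0)
e2 = (3.0, 2.0)

for v in (0.0, 0.5, 0.9):
    t1, x1 = boost(*e1, v)
    t2, x2 = boost(*e2, v)
    # dt and dx each change with the observer's velocity,
    # but the interval comes out the same in every frame.
    print(f"v={v}: dt={t2-t1:.3f}, dx={x2-x1:.3f}, interval={interval(t2-t1, x2-x1):.3f}")
```

The separate time and distance measurements depend on the observer’s motion, but every observer computes the same interval for the same pair of events.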

In the dialogue on quantum mechanics, the layman’s confusion is more understandable. The sort of nonsense the physicist in the dialogue utters is not very far from what a lot of physicists say about quantum theory. But this is a problem with how physicists explain the theory, not with the content of the theory itself. And there is a notable symmetry between the two sides of the dialogue, illustrated by the quote below:

L:  “I still don’t understand. If you observe only specific entities with definite properties, and you know of no mechanism by which an inconceivable ‘nothing in particular’ could suddenly acquire such properties, why not accept the fact that these things possess real attributes before the observation?”

P:  “Because we’ve concluded it isn’t possible to develop a theory that explains our experimental results in terms of entities with specific, non-contradictory properties.”

Note that both sides of this dispute talk vaguely about properties, without specifying what properties they are discussing. Neither side gives any explanation of how reality actually works. There is no discussion of any specific experiment, nor of explanations for the outcomes of such experiments. Both sides are discussing the issue entirely in terms of abstractions that float free of all problems, all experimental results and all solutions to problems. There is an explanation of what quantum mechanics says about how the world works. But you can’t understand that explanation by starting with vague mumbo jumbo about properties, as both Harriman and the standard physicist do.

The EU and the ‘who should rule’ question

In political and moral debates people often make false assumptions that limit the set of options they can imagine as a solution. I think this is happening in the debate over whether the UK should remain a member of the EU. The issue is being framed as whether bureaucrats from the EU should be able to dictate what sort of laws the British parliament should pass or whether the British government should control its own laws.

But this way of framing the debate makes a false assumption that the most important issue is who gets to make a decision about UK laws. As Karl Popper pointed out in The Open Society and Its Enemies Chapter 7, this question makes the false assumption that there is a single person or group who has the knowledge required to dictate what everyone should do. A better question to ask is ‘How can we so organize political institutions that bad or incompetent rulers can be prevented from doing too much damage?’ How do the EU and the British parliament compare by that criterion?

The short version of how the EU works runs as follows. The heads of EU states form the European Council. The European Council picks a group of politicians called the European Commission, who are responsible for originating and writing EU regulations and that sort of thing. The European Parliament is an elected body whose members can vote legislation written by the European Commission up or down, or amend it, but are not allowed to originate legislation. So the people who are legally supposed to originate and write all the laws can’t be voted out of office by the public. And the people who are subject to being removed by the public always have the excuse that they aren’t allowed to originate laws, so they can’t deliver any specific policy.

By contrast, an MP in the British parliament can originate, amend or revoke laws and can be voted out for failing to deliver on policy promises.

The competition for which set of institutions is better isn’t even close. The EU is a bad idea and the British public should vote to leave. If we don’t vote to leave, then it will be extremely difficult to remove bad policies or leaders.

The poor quality of the EU’s institutions shows in its decisions. Take the recent deal made by David Cameron on behalf of the UK. One part of the deal says that national parliaments can block EU legislation if the EU deems that the decision could be made at the national level and 55% of the parliaments of EU member countries vote against it. Getting one parliament to agree on something is a challenge; getting several to do so is going to be extremely difficult. This is a terrible idea that should have been shot down, but it wasn’t, because there is nobody who can be held accountable for it. Why make waves if you can’t benefit?


AI prophecies

There is an idea doing the rounds that soon Artificial Intelligences will be created that will be capable of doing all the jobs people can do and this will require introducing a universal basic income. Let’s call this the UBI scenario.

The story goes like this. AIs will be capable of doing all the jobs people can do. And all they will need is some computer hardware and electricity. And they won’t want food, they won’t want time off, they won’t have personal problems and so on. So they will be able to do every job better than people.

First, what’s currently called AI doesn’t have anything resembling human level creativity. The way this works at the moment is that somebody has to think about what information is relevant to judging how well to do a task. The person also has to think about what sort of parameters characterise the task. And then the person has to train the program by running many versions of it and selecting the version that works. So replacing any given job will require years of work, specialised hardware and software, and a lot of time spent by a highly specialised programmer. And it will take time for people to create knowledge about how to do a job well before the process of replacing it can even get started. Once job X has been rendered unnecessary, people will be freed up to do other stuff.
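The train-many-versions-and-select loop described above can be illustrated with a toy example. Everything here is made up for illustration: a person has already chosen the relevant feature, decided the task is characterised by a single threshold parameter, and picked the selection criterion; the program only searches over that one parameter.

```python
import random

# Hypothetical labelled data: (feature value, label). A person chose the
# feature and decided the task is characterised by a single threshold.
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

def accuracy(threshold):
    """Fraction of examples classified correctly by this threshold."""
    return sum((x >= threshold) == bool(label) for x, label in data) / len(data)

random.seed(0)
candidates = [random.random() for _ in range(200)]  # many candidate "versions"
best = max(candidates, key=accuracy)                # keep the one that works

print(f"best threshold {best:.2f} with accuracy {accuracy(best):.2f}")
```

All the creative work here happens outside the loop: choosing the feature, the parameterisation and the criterion. Replacing a real job means doing the analogues of those steps, which is where the years of human work go.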

Second, lots of jobs do require human level creativity, such as just about any customer service job. The customer service person has to be able to think about how to satisfy a particular customer on the fly. And each customer will want something slightly different. So there will be no way to replace that customer service job with a special purpose machine designed to do some specific kind of mechanical task.

If you are worried that AIs will have human level creativity, then don’t be. That will not happen for the foreseeable future. Nobody has much idea how the creation of new explanatory knowledge works, except that it involves producing variations on current knowledge and selecting among those variations. Given that we don’t have a full explanation of how creativity works, there is no way anyone can program it.

The UBI scenario is speculation about a technology that doesn’t exist. It also involves speculating about a situation in which there has been a vast increase in philosophical knowledge about how people create knowledge. Nobody can know the implications of such knowledge because if you knew its implications, then you would already have that knowledge. The idea that having such knowledge will reduce people to the status of dependents suckling on the state’s teat reveals a bias on the part of those proposing the UBI scenario and nothing else.

To illustrate one way in which the UBI scenario might be wrong, consider the following story. I don’t say this is what will happen, but it is an alternative that illustrates that UBI scenario worries are pure speculation uncontrolled by criticism. In the future we understand how to create knowledge well enough to create AI. As a result, we learn how to make adults creative after life has beaten them down, and everyone becomes extremely productive. Every person is able to support himself with no government assistance. At the same time, we learn how the brain implements creativity, and how to read a person’s brain in such a way that his mind can be implemented in a computer. So people can then transfer their minds into computer hardware and the cost of living drops to the cost of buying the relevant storage space and processing power in a server farm. So then everyone can afford to simulate a standard of living that makes everything Bill Gates has today look like the life of some drunken, lice-ridden peasant in the Middle Ages by comparison.

There are lots of serious problems with current institutions. For example, Western welfare states already encourage dependence on the government without AI. Academics are an example of this problem: they are dependent on the government for their income. Perhaps they should try to solve that problem instead of speculating about stuff they can’t know anything about.

Twitter rules

The conservative journalist Milo Yiannopoulos has been unverified by Twitter. Since Yiannopoulos seems to have been unverified for not following the rules of Twitter, I’m going to say a little about why this is a bad move by Twitter, and then examine the rules.

Being unverified means that a little blue badge somewhere on Yiannopoulos’s profile that nobody ever noticed before is no longer present. Verification is also supposed to be some sort of confirmation that you are who you claim to be. This status is only for a person famous enough that somebody might want to steal his identity. Verified status can be revoked if you’re an impostor, or you break the rules of Twitter. Since Yiannopoulos has not been replaced by a cylon or whatever, he must have broken the rules. Twitter have not explained the nature of the alleged violation to the best of my knowledge.

I disagree with Yiannopoulos on at least two issues I’m aware of, but I think the unverification is a bad and silly move on Twitter’s part. Twitter’s value depends on it being a network where people can post ideas, or at least links to ideas. If Twitter are going to try to punish people who post a lot of controversial stuff then they are damaging the reason people want to be there. And the fact that somebody is posting controversial material means there is some live issue that people don’t understand and want to discuss. Discouraging people from posting such material gets in the way of progress on that issue. In this context, discouraging such discussion on Twitter is wrong. Twitter have the right to do whatever they want with their platform, including ban everybody who isn’t a Leninist if they so desire, but not every exercise of your rights is good. I could pour a can of baked beans on my head and walk down the street, but I don’t think it would be a good idea. Twitter’s unverification of Nero is at least as stupid as pouring a can of baked beans on your head.

But what are the rules and how can you avoid breaking them to avoid the savage punishment of the revocation of your blue blobby status? The first rule seems sensible:

You may not make threats of violence or promote violence, including threatening or promoting terrorism.

Yiannopoulos hasn’t broken this rule to the best of my knowledge. There is another rule that is the same as the first rule, except that it is less general:

You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or disease.

The first quoted rule says that you can’t threaten or promote violence. The second quoted rule says you can’t threaten or promote violence on a variety of bases. So the second rule is a more restricted version of the first. The first rule covers the case in which I threaten to kill you because you are wearing a pair of trainers; the second rule doesn’t.

One of the other rules is more troubling:

Harassment: You may not incite or engage in the targeted abuse or harassment of others. Some of the factors that we may consider when evaluating abusive behaviour include:

if a primary purpose of the reported account is to harass or send abusive messages to others

if the reported behavior is one-sided or includes threats;

This rule doesn’t explain what counts as harassment or abuse. And people may disagree about what constitutes harassment or abuse. So how are such disputes to be decided? No explanation is given.

Even more bizarre is the qualification about reported behaviour being “one-sided”. What does this mean? Does it mean if I tweet at somebody and he doesn’t reply, then I’m in trouble with Twitter? Does it mean if I make a case for policy A, and not for an opposing policy, then that is a banning offence? Either reading of this qualification could lead to anybody being banned for just about any statement or tweet. I can’t see any way this rule could be read that would make it acceptable.

Twitter should change their rules, and stop trying to stifle discussion.

Criticism of Nozick on Objectivism

The philosopher Robert Nozick, who wrote a book about libertarianism called Anarchy, State and Utopia, also wrote a criticism of Objectivist moral philosophy.

Nozick claims that Rand failed to prove her moral ideas are correct, and so they are no good. His strategy is to break down Rand’s arguments into a number of steps, invent arguments he thinks are relevant to those steps and then say she has not proved one or more steps.

It is true that Rand doesn’t prove her ideas, since proof is impossible. (Any alleged proof makes assumptions and uses rules of inference, neither of which can be guaranteed to produce correct results. Knowledge is actually created by conjecture and criticism, see Realism and the Aim of Science by Popper.) But the lack of proof is also irrelevant since no position of any kind can be proved. And to the extent that his arguments are treated as criticism of Rand’s moral ideas, they are not much good.

Section I

Nozick constructs an argument that tries to reach the conclusion that only for a living being can values have a point. Only living beings can choose, and only for them could there be any point in choosing, since their choices can make a difference to life, e.g. bad choices can lead to injury or death. He then cites Rand’s argument that an immortal, invincible robot wouldn’t be able to gain or lose anything and so couldn’t make choices.

Nozick then asks whether a machine could be programmed to value states of affairs that don’t affect it. So then it would have values and Rand’s argument can’t be correct.

I don’t think Rand’s robot argument is much good since any such robot would break the laws of physics, so it couldn’t exist. But there is a hard and fast distinction between living things that can have problems, and non-living things like rocks that can’t have problems. Rand is right in substance and Nozick is wrong in substance.

Nozick also says that it could be the case that you should value stuff outside yourself because of the kind of thing it is. For example, you should value god since he’s all powerful. This doesn’t make any sense since the idea of god doesn’t make any sense. His other example involves admiring another person’s talent. But another person’s talent can help produce goods that will make you better off either directly or indirectly. For example, if somebody is a good singer but sings songs you happen not to like, other people might like the songs and be more productive as a result.

Section II

Nozick says even if we grant that only living things can have values, it doesn’t follow that life is a value.

Nozick then says that the argument for life having value would have to be that (x) having values has value, that life is necessary for values, so life has value. He then points out that if nobody ever got cancer, then we wouldn’t value a cure for cancer, so then having cancer has value by the assumption that having values has value.

But Nozick says we could modify x to x’: anything that is necessary for all value has value, so since cancer isn’t necessary for all values, it isn’t a value. He then says this means that not having achieved all values is a value by this criterion, as is being mortal and destructible.

Nozick also says that if Rand’s idea is of the form you ought to realise the greatest amount of X, where Rand says X is life, somebody could have a different X, like death, so Rand hasn’t proven her case.

This whole series of arguments is silly. In reality, maintaining life is difficult and if you don’t choose to maintain it, you have effectively chosen death. And if you’re going to choose death, why not just kill yourself by throwing yourself off the nearest bridge or something like that? Death is easy to achieve and so doesn’t require much thought for your values.

Section III

This is about the idea that man’s life qua man, that is, as a rational being, is of value to him.

Nozick says man has values other than rationality that separate him from other animals so why pick rationality? And other beings, like aliens, could be rational so then that’s not a property of man qua man.

And Nozick says a man could stop acting rationally, and why shouldn’t he? Nozick bangs on about essences a bit too.

If a man doesn’t act rationally he won’t survive for long without help. But why not just be a parasite? Nozick asks. Not everyone could do it, but some people could do it, maybe for their whole lives.

Again these arguments are kinda silly. If you’re not going to be rational, you basically have to be a parasite. And being a parasite depends on your host not realising you’re a parasite. So then you have to deceive the host, which makes it more difficult for him to function rationally. So your lifestyle is self-destructive. In addition, the only way you can avoid being rational is to ignore objections to your actions, so you’re choosing stuff that is bad by your lights. So being irrational is not a way to achieve any value you might hold.

Section IV

Nozick then goes on to argue against the idea that no man should sacrifice for another, or ask another to sacrifice for him.

In addition to the parasite argument, which comes up again, he also claims that it might not be true that there are no conflicts of interest among rational men. He claims there could be multiple dimensions of rationality and that achieving one might mean sacrificing others, and this could cause a conflict of interest among rational men.

Also, Nozick seems to think that the absence of conflicts of interest would involve people all mysteriously agreeing by magic. He doesn’t seem to understand that people can come to agree on what to do in some situation as a result of critical discussion, if they are prepared to have such a discussion.

This problem comes from thinking of rationality in terms of weighing options, which is wrong. If there were some way to weigh different priorities, you would have to choose the appropriate way to do the weighing, which couldn’t itself be done by weighing. So there would have to be some master argument that determines how stuff should be weighed.

In addition, all the options for weighing suck, as explained in Chapter 13 of The Beginning of Infinity (BoI).

Section V

Nozick brings up Galt’s promise to commit suicide if the looters torture Dagny because he refuses to live on the looters’ terms. Nozick then says that if Dagny died from a disease, Galt would kill himself. But that doesn’t follow from what Galt said. Rather, the point is that Galt doesn’t want to live if his values are going to be destroyed. Nozick has confused the concrete instance of those values being destroyed (Galt giving the looters what they want because Dagny is being tortured) with the principle of why Galt would top himself in that instance.

As a result of this confusion, Nozick burbles on for the rest of the section talking a load of rot about happiness.

He discusses doing something that results in guilt and then using chemicals to forget it. This doesn’t change the fact that when you did the bad thing, you acted against your values. If you can’t afford a computer because you stole from somebody and have to pay him back, then you still don’t have the computer even if you somehow forget the incident.

Nozick proposes that you could implant in your child a device that would make him act on some moral principles P, except when it would benefit him to break those principles, e.g. – murdering somebody to get his fortune. There are three problems with this.

First, trying to control your child like this would be grossly immoral and would hurt you because one of the benefits of interacting with somebody else is he can do things you can’t anticipate.

Second, acting on principles requires creativity, so controlling your child by some device is incompatible with him acting on principles.

Third, murdering people and taking their stuff isn’t a good idea. Even if you don’t get caught, you always have to look over your shoulder and lie about stuff to cover up your involvement. You also lose the opportunity to cooperate with the people who made the fortune. To make lots of money they must have good ideas you can learn from, since you couldn’t earn that kind of money yourself, which is why you have to steal it. And even if they sucked (say they inherited the money and were wasting it out of stupidity), you would still be better off not killing them because they could in principle improve. Also, if they suck you could try advising them on ways they suck and get income from them that way.

Overall, Nozick’s essay is kinda dumb. In a lot of the sections he misunderstands Rand, makes up stuff he thinks she should have said and criticises that. But at least some of what he said was answered in Rand’s work and he ignored the answers, e.g. – Rand’s essay on alleged conflicts of interest.

UPDATE This post was adapted from an e-mail in this thread. The rest of this post consists of further material in that thread.

POST 1

Elliot Temple curi@curi.us [fallible-ideas]

Re: Answer to Nozick (was Re: [FI] Objectivism Criticism)

On Sep 26, 2015, at 5:22 PM, Alan Forrester alanmichaelforrester@googlemail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

On 21 Sep 2015, at 00:15, Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

http://www.nowandfutures.com/large/On-the-Randian-Argument-Nozick.pdf

Anyone want to answer these, or know a good answer somewhere?

Section II

Nozick says even if we grant that only living things can have values, it doesn’t follow that life is a value.

Nozick then says that the argument for life having value would have to be that (x) having values has value, that life is necessary for values, so life has value. He then points out that if nobody ever got cancer, then we wouldn’t value a cure for cancer, so then having cancer has value by the assumption that having values has value.

But Nozick says we could modify x to x’: anything that is necessary for all value has value, so since cancer isn’t necessary for all values, it isn’t a value. He then says this means that not having achieved all values is a value by this criterion, as is being mortal and destructible.

Nozick also says that if Rand’s idea is of the form you ought to realise the greatest amount of X, where Rand says X is life, somebody could have a different X, like death, so Rand hasn’t proven her case.

This whole series of arguments is silly. In reality, maintaining life is difficult and if you don’t choose to maintain it, you have effectively chosen death. And if you’re going to choose death, why not just kill yourself by throwing yourself off the nearest bridge or something like that? Death is easy to achieve and so doesn’t require much thought for your values.

Depends. How much death do they want? The GREATEST amount?

Personal death is easy to achieve.

Maximizing death in the universe or multiverse is similar to maximizing life or squirrels (http://www.curi.us/1169-morality). It starts with pretty much identical steps for the next million years, regardless of which one you’re trying to maximize in the long run.

If you want to maximize death in the universe, you’ll need things like space travel to go kill all the other solar systems. what if there’s some life there? can’t risk it. better salt the earth, except with better tech. maybe push all the planets in every galaxy into stars. then push the stars into black holes. and nuke all the dark matter. and transmute all the asteroids into nothing but hydrogen, and then spread it out a LOT.

anyway that sounds fucking hard, so we’ll need stuff like capitalism to do it! and peace. if we die, we won’t be able to go destroy all the planets, you know?

fortunately during the next million years of Objectivism, peace and capitalism, we’ll have a lot of time to change our mind about what we should do once we’re powerful enough to maximize death in the whole universe. maybe we’ll figure out some better goals by then. (we already have, and we already can argue them. but Nozick isn’t persuaded. ok. no problem. he can get persuaded next century, or the one after. let no one say we’re unkind to the slow learners!)

the point is if you take some X seriously and want the greatest amount of it, THAT IMPLIES OBJECTIVISM, at least for the next million years. (and after a million years of everyone being an Objectivist, i suspect people will prefer life over death as their X).

the only way to avoid things like reason and liberalism is by not thinking much, and not taking any big grand values seriously. keep everything in little parochial limits. if all you want is a dead Earth, and you don’t care about anything bigger – if you aren’t doing it in a principled “kill everything” way – then you can be a destroyer. but if you care about things like non-contradiction and conceptual thinking, then it’s Objectivism for you.

Section III

 

So your lifestyle is self-destructive. In addition, the only way you can avoid being rational is to ignore objections to your actions, so you’re choosing stuff that is bad by your lights. So being irrational is not a way to achieve any value you might hold.

yeah. if they have any serious, big value, that implies stuff like i was discussing above. it implies critical thinking, reasoning, etc

the only chance to escape stuff like Objectivism is either not valuing anything (being a nihilist) or having only very limited, parochial values (being finite, not being part of the beginning of infinity).

Section IV

Nozick then goes on to argue against the idea that no man should sacrifice for another, or ask another to sacrifice for him.

ugh

he also claims that it might not be true that there are no conflicts of interest among rational men. He claims there could be multiple dimensions of rationality and that achieving one might mean sacrificing others, and this could cause a conflict of interest among rational men.

ugh

Section V

Nozick brings up Galt’s promise to commit suicide if the looters torture Dagny because he refuses to live on the looters’ terms.  Nozick then says that if Dagny died from a disease, Galt would kill himself.

but in the book scenario, the suicide would *prevent her further torture* by taking away its purpose (to get to Galt via her). if she died of a disease, his suicide wouldn’t accomplish anything useful.
Elliot Temple
www.fallibleideas.com
www.curi.us

POST 2

Justin Mallone justinceo@gmail.com [fallible-ideas]

Re: Answer to Nozick (was Re: [FI] Objectivism Criticism)

On Sep 26, 2015, at 8:22 PM, Alan Forrester alanmichaelforrester@googlemail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

On 21 Sep 2015, at 00:15, Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

http://www.nowandfutures.com/large/On-the-Randian-Argument-Nozick.pdf

Anyone want to answer these, or know a good answer somewhere?

Nozick’s article is about Rand’s failure to prove her moral ideas are correct. His strategy is to break down Rand’s arguments into a number of steps, invent arguments he thinks are relevant to those steps and then say she has not proved one or more steps.

It is true that Rand doesn’t prove her ideas, since proof is impossible. But the lack of proof is also irrelevant since no position of any kind can be proved. And to the extent that his arguments are treated as criticism of Rand’s moral ideas, they are not much good.

Section III

This is about the idea that man’s life qua man, that is, as a rational being, is of value to him.

Nozick says man has values other than rationality that separate him from other animals so why pick rationality?

Nozick seems to think that there is some objectivist position like “whatever particular unique attribute a thing has defines how it should act” or something along those lines.

Like see where Nozick says in describing possibility one in Section III that:

we focus on the idea that what is special to a thing marks its function and from this we can get its peculiarly appropriate form of behavior.

So basically he doesn’t GET the power of reason, and that if you value your life as a man, and wish to preserve it, you’ve got to, as a practical matter, make use of the super powerful faculty of rationality in order to do so.

Seems to think reason is nothing special.

He also says like maybe we discover that dolphins have some property P which we thought made man special, and then we couldn’t say man is special anymore or something.

If we discovered intelligent dolphins then morality and the importance of reason for their lives would apply to them too. I don’t think this poses a huge problem for Rand’s ethics.

The issue isn’t about any old property P. Being capable of reason is a special thing!

If mankind had particularly strange and unique teeth that would not be very important to morality, I think. It would affect details like what techniques competent and effective dental care involves. Those aren’t morally irrelevant but they are very narrow, don’t have lots of reach into other areas. And so then who cares if we discover some other creatures have these odd teeth too. This contrasts strongly with the moral impact of *REASON*.

And other beings, like aliens, could be rational so then that’s not a property of man qua man.

And Nozick says a man could stop acting rationally, and why shouldn’t he? Nozick bangs on about essences a bit too.

“Bangs on about essences a bit too” is a very fair summary of the content. Very confusing stuff in this part.

If a man doesn’t act rationally he won’t survive for long without help. But why not just be a parasite? Nozick asks. Not everyone could do it, but some people could do it, maybe for their whole lives.

The parasite stuff is a DISASTAH. I’m gonna go in some detail on this one with quotes. Nozick:

There are two forms to the parasite argument, a consequential one and a formal one.

Note I don’t see any argument about how it’s a bad and undesirable lifestyle that makes you more helpless, less powerful, less fulfilled, and often at best involves the heavy cost of optimizing around and flattering other people’s irrationalities.

The consequential argument is that being a parasite won’t work in the long run. Parasites will eventually run out of hosts, out of those to live off, imitate, steal from. (The novel, Atlas Shrugged, argues this view.)

Atlas Shrugged is not a consequentialist morality type book.

It does show the ultimate consequences of stuff. But it also shows what a horrible lifestyle it is to be an ineffective pathetic person who is dependent on the generosity and benevolence of the able while simultaneously hating them and needing to blackmail them (see James, and Rearden’s whole family).

But in the short run, one can be a parasite and survive; even over a whole lifetime and many generations. And new hosts come along. So, if one is in a position to survive as a parasite, what reasons have been offered against it?

One cannot know infallibly how long this time period will be. Better to not set yourself up for a lifestyle of powerlessness and dependence. The fact that you are considering engaging in such a project indicates your judgment is pretty bad as it is, so you should maybe not be so trusting of your judgment as to the course of nations and their political trends, or even the tolerance of those you can manipulate personally.

Nozick then describes what he calls “the formal argument.” Basically the point is that moral rules are universal, so if your moral values say something like “you act according to moral principle X, everyone else acts according to moral principle Y, and you’ll get away with it as long as everyone else sticks to Y”, then X can’t be right.

Moral principles aren’t subjective. You’d need some explanation why you get to act one way and everyone else has to support your parasitism. What is it? One proposed justification, which AS deals with in great detail, is need/lack of ability.

Nozick concludes the section by basically asking why he can’t be a subjectivist.

Back to Alan:

Again these arguments are kinda silly. If you’re not going to be rational, you basically have to be a parasite. And being a parasite depends on your host not realising you’re a parasite. So then you have to deceive the host, which makes it more difficult for him to function rationally. So your lifestyle is self-destructive. In addition, the only way you can avoid being rational is to ignore objections to your actions, so you’re choosing stuff that is bad by your lights. So being irrational is not a way to achieve any value you might hold.

Gps.

-JM

POST 4

Justin Mallone justinceo@gmail.com [fallible-ideas] 

To: FI Cc: FIGG

Reply-To: FI

Re: Answer to Nozick (was Re: [FI] Objectivism Criticism)

On Sep 26, 2015, at 8:22 PM, Alan Forrester alanmichaelforrester@googlemail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

On 21 Sep 2015, at 00:15, Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

http://www.nowandfutures.com/large/On-the-Randian-Argument-Nozick.pdf

Anyone want to answer these, or know a good answer somewhere?

Section IV

Nozick then goes on to argue against the idea that no man should sacrifice for another, or ask another to sacrifice for him.

One mistake I think Nozick makes up front is bolting on some of his own stuff to Rand’s statement, which he then uses to go off on wild tangents unrelated to Rand’s ideas.

Like he starts off quoting Rand:

The basic social principle of the Objectivist ethics is that just as life is an end in itself, so every living human being is an end in himself, not the means to the ends or the welfare of others—and, therefore, that man must live for his own sake, neither sacrificing himself to others nor sacrificing others to himself. To live for his own sake means that the achievement of his own happiness is man’s highest moral purpose.

And then in what seems to be his first partial restatement of this he says:

For each person, the living and prolongation of his own life is a value for him

Notice he’s elevating prolongation to a major theme.

Then he says, to make Rand’s arg work, you need to supply another argument, which he does:

For each person, the living and prolongation of his own life (as a rational being) is the *greatest value* for him.

So now in Nozick’s analysis we’re at something like, prolonging your own life has to be your greatest value or Rand’s arg doesn’t make sense. No wonder he doesn’t understand the Galt and Dagny example (mentioned later).

Nozick just doesn’t understand objectivist ethics at a basic level. He thinks it’s got some like lifespan-maximizing utilitarianism in it. He doesn’t understand what sacrifice means either — the giving up of a greater value for a lesser one.

He can’t understand that like e.g. a soldier would choose to go on a super high risk mission cuz he values killing some tyrant more than a higher chance at continuing to live, and that’s not a sacrifice. But if he had these values and then didn’t do the mission cuz he felt guilty about how his death would make someone feel, that very well would be a sacrifice. If you said this to Nozick he’d be very confused I bet.

In addition to the parasite argument, which comes up again,

Note he talks about parasitism being against your long term interests, as if that’s the only arg. It is against your short term interest as well though. There are opportunity costs to immoral lifestyles. And immoral parasitical lifestyles are less pleasant than productive ones.

he also claims that it might not be true that there are no conflicts of interest among rational men. He claims there could be multiple dimensions of rationality and that achieving one might mean sacrificing others, and this could cause a conflict of interest among rational men.

Gonna quote a bit of Nozick here:

If one believes that ethics involves (something like) one dimension or weighted set of dimensions which is to be used to judge us and the world, so that all of our moral activity (the moral activity of each of us) is directed toward improving (maximizing) the world’s score on this dimension, then it will be natural to fall into such a vision. But if we legitimately have separate goals, and there are independent sources of moral commitment, then there is the possibility of an objective conflict of shoulds.

So notice first he seems to implicitly criticize Objectivism as being some kinda simple-minded utilitarianism. That’s his framing to set up his allegedly more thoughtful/sophisticated alternative.

If that’s what he thinks, how does he explain Galt giving up living in the world as an engineer for a track laborer job and the hope of a strike that they had no way of knowing would ever end? What utility function is being maximized there?

Secondly, wtf is an independent source of moral commitment? Is this just plain apologetics for subjectivism, denial of objective morality, etc?

Also, Nozick seems to think that no conflicts of interest involves people all mysteriously agreeing by magic.

Nozick says:

What I shall call the optimistic tradition holds that there are no objective conflicts of interest among persons. Plato, in the Republic, being the most notable early exponent of this view, we might appropriately call it the Platonic tradition in ethics.

Does anyone know what he’s referring to specifically? Because while it’s been a while since I read the Republic, I seem to have missed the part that sounds anything like VoS…

He doesn’t seem to understand that people can come to agree on what to do in some situation as a result of critical discussion if they are prepared to have such a discussion.

This problem comes from thinking of rationality in terms of weighing options, which is wrong. If there were some way to weigh different priorities, you would have to choose the appropriate way to do the weighing, which couldn’t itself be done by weighing. So there would have to be some master argument that determines how stuff should be weighed.

In addition, all the options for weighing suck, as explained in BoI Chapter 13.

Section V

Nozick brings up Galt’s promise to commit suicide if the looters torture Dagny because he refuses to live on the looters’ terms. Nozick then says that if Dagny died from a disease, Galt would kill himself. But that doesn’t follow from what Galt said. Rather, the point is that Galt doesn’t want to live if his values are going to be destroyed. Nozick has confused the concrete instance of those values being destroyed (Galt giving the looters what they want because Dagny is being tortured) with the principle of why Galt would top himself in that instance.

Yeah.

It’d be more interesting if he’d just dropped the disease thing and just straight asked, about the example as it is in the book,

It would be a terrible loss, but does Galt “the perfect man,”

Note btw this seems a bit hostile on Nozick’s part.

have so little moral fiber and resources that life would be intolerable for him ever afterwards

Galt says there would be “no values for me to seek” if Dagny were tortured. Is that actually true? I’m not so sure.

I think you could say that Galt values Dagny not being tortured more than any of those other values, perhaps. That makes sense to me. But NO VALUES seems rly strong.

As a result of this confusion, Nozick burbles on for the rest of the section talking a load of rot about happiness.

He seems to think that by happiness what Oists mean is time periods of having nice fuzzy feelings. I think what Rand had in mind with regard to happiness was not lengths of periods of nice feelings but more like, a sense of satisfaction from conscious achievement of rational values.

He discusses doing something that results in guilt and then using chemicals to forget it. This doesn’t change the fact that when you did the bad thing, you acted against your values. If you can’t afford a computer because you stole from somebody and have to pay him back, then you still don’t have the computer even if you somehow forget the incident.

Nozick proposes that you could implant in your child a device that would make him act on some moral principles P, except when it would benefit him to break those principles, e.g. – murdering somebody to get his fortune. There are three problems with this.

Only three? 🙂

Take note: Robert Nozick, serious, preftigious academic philosopher who was the head of some fancy pants association of philosophy professors, is seriously saying that if you care about your kids’ happiness, the best thing to do, presuming it were possible, is to use some kinda mind control chip to force them to act a certain way, except certain times when it’ll benefit them to act otherwise (which the chip knows somehow), and then at those times they turn into e.g. an amnesiac murderer. He doesn’t feel the need to go into detail on how the amnesiac-murderer part is sometimes good, btw. Thinks it’s pretty clear.

BTW his basic purpose here is to try to trash the idea of happiness being morally very important by way of the academic philosopher’s trick of imagining some impossible fantasy happening IRL and then thinking it makes some big moral point. Kinda related to lifeboat scenarios IMHO.

One thing I want to know is how does this proposed device work? E.g. how can it do stuff like control other people to act certain ways in certain moral situations (which would involve stuff like grasping a moral situation exists, and thus consciousness) without it itself being an AI type person or something? (Is similar issue to Spike neutering chip on Buffy)

So then you’re like using some person-like device as a slave to control another person and make them a slave. So that’s like a double quarter pounder of immorality now. Also hard to manage. Who keeps the enslaver chips enslaved???? What if there’s an enslaver chip rebellion? Then you’ve got angry slaver chips and VERY angry kids. Oh noes.

Why not try persuading your child of the moral ideas you want them to learn instead?

First, trying to control your child like this would be grossly immoral and would hurt you because one of the benefits of interacting with somebody else is he can do things you can’t anticipate.

It kind of amazes me he gets into the thought experiment and isn’t at any point like “hey maybe turning someone into an automaton with massive continuous force would not be happiness maximizing but actually an unbelievably evil form of continuous torture.”

It’s like that awful “Good AI” stuff applied to children and parents.

Second, acting on principles requires creativity, so controlling your child by some device is incompatible with him acting on principles.

Yeah xlnt point.

Third, murdering people and taking their stuff isn’t a good idea.

Lol that we need to explain this to fancy pants libertarian philosophers.

Even if you don’t get caught, you always have to look over your shoulder and lie about stuff to cover up your involvement. You also lose the opportunity to cooperate with the people who made the fortune. To make lots of money they must have good ideas you can learn from, since you couldn’t earn that kind of money yourself, which is why you have to steal it. And even if they sucked (say they inherited the money and were wasting it out of stupidity), you would still be better off not killing them because they could in principle improve. Also, if they suck you could try advising them on ways they suck and get income from them that way.

Overall, Nozick’s essay is kinda dumb. In a lot of the sections he misunderstands Rand and makes up stuff he thinks she should have said and criticising that. But at least some of what he said was answered in Rand’s work and he ignored the answers, e.g. – Rand’s essay on alleged conflicts of interest.

Yeah. Doesn’t know about conflicts of interests, doesn’t really understand stuff like happiness, sacrifice, and seems to know less about morality than some conventional people. Pretty crap essay overall.

-JM

A problem with David Deutsch’s model of time travel

In 1991, David Deutsch published a paper on the quantum mechanics of time travel. This model appeared to solve many of the ‘paradoxes’ of time travel and Deutsch used the same model to discuss time travel in The Fabric of Reality. But there are problems with this model.

A summary of Deutsch’s model

(1) To travel back in time, you go along a path which takes you from a particular region in space and time, back to that same region: this is called a closed timelike curve (CTC). In everyday life, you can go along a path that takes you back to the same region in space, but not in time. For example, you leave home to go to work, and when you have finished your work, you come back home. If you went along a CTC, you would leave your home for work at 9am, work a full day and then arrive at home at 9am. Such paths don’t exist in everyday life, but the current theory of space and time, general relativity, predicts that they are possible.

(2) The paradoxes are popularly alleged to be inconsistencies that could be produced by stuff you could do. For example, you could go back to when you left for work at 9am, and then persuade the earlier version of yourself to go play computer games instead. As a result, you would never have left for work and so would not have come back home along the CTC at 9am. As Deutsch points out in the paper, this sort of thing is just inconsistent, and so it couldn’t happen. There is no paradox in the sense that no inconsistent set of events would happen. Rather, you simply could not go back in time and persuade your earlier version to stay home. So the CTC just constrains what it is possible for you to do in a way that you would not be constrained in the CTC’s absence.

(3) The discussion of the paradoxes usually assumes that reality is described by classical physics. Reality is described by quantum physics, which might have different implications for the constraints on the actions you could take near a CTC. Quantum physics describes physical reality in terms of the multiverse. Every object exists in multiple versions that can interfere with one another under some circumstances. These versions are sorted into layers. Each layer behaves like the universe as described by classical physics to some approximation, i.e. – in some approximation it is a collection of parallel universes. Since reality as described by quantum physics is different from classical physics, the paradoxes might not arise. Deutsch invented a quantum mechanical model in which you could go back in time and persuade an earlier yourself to play video games without any inconsistency. When you travelled back in time, you would give rise to a new universe, in which you and your earlier version stay at home playing video games.
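The no-new-inconsistency claim in (3) can be stated formally. The following is a standard statement of Deutsch’s 1991 model, not something spelled out in this post: if the time traveller (the chronology-respecting system, CR) enters the interaction in state ρ_CR and interacts with the system on the CTC via a unitary U, Deutsch requires the CTC system’s state to be a fixed point of the induced evolution:

```latex
% Deutsch's self-consistency condition: the state on the CTC
% must emerge from the interaction unchanged. Tr_CR traces out
% the chronology-respecting system.
\rho_{\mathrm{CTC}} = \mathrm{Tr}_{\mathrm{CR}}\!\left[\, U \left( \rho_{\mathrm{CR}} \otimes \rho_{\mathrm{CTC}} \right) U^{\dagger} \,\right]
```

A fixed point always exists, because the map is continuous on the compact convex set of density matrices, which is why the model never produces a grandfather-style contradiction. Note the product ρ_CR ⊗ ρ_CTC: the traveller is taken to be initially unentangled with the CTC system, which is the assumption about erased entanglement information criticised at the end of this post.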

(4) There is another alleged problem with time travel: the knowledge paradox. You could take a mathematical paper back in time and give it to the paper’s author before he wrote it. The author could then copy the paper and send it to the journal that originally published it. But this would mean that a mathematical result came into existence without anyone doing the work needed to create it. This ‘paradox’ does not produce a logical inconsistency, but it is incompatible with the evolutionary principle. All knowledge has to come into existence by processes that involve producing variations on previous knowledge and then selecting among those variations. There is no such thing as free knowledge that you can get without going through such a process. But the above time travel scenario does give you free knowledge: nobody invented the result in the paper. If quantum mechanics solved this problem, it might do so by making a new universe when you brought such information back in time and showed it to the author. The knowledge would then have been created in one universe and transferred to another. However, the physics doesn’t give that result. Deutsch added further assumptions to try to solve the knowledge problem, but it is not clear whether he succeeded.

The problem

So what’s the problem with Deutsch’s model? In quantum mechanics, a system carries information about what versions of other systems it will interact with. I am interacting with a version of my computer that is in a particular location. I am not interacting with another version one millimetre to the right of the version I interact with. What prevents me from interacting with other versions of the computer is that I contain information about what version of my computer I can interact with. That information doesn’t allow this version of me to interact with other versions of my computer. Another version of me is interacting with the version of my computer one millimetre to the right. The information that specifies which version of a system can interact with which version of some other system is called entanglement information. Deutsch’s model implicitly assumes that when I travel back in time all of my entanglement information is erased [1].

If you use a model in which the entanglement information is not erased, the version of the system that goes back in time can’t interact with the past versions of itself or of anything else it had interacted with in the past. The reason is as follows. A measurable quantity can be set up so that it is the same across the section of the multiverse in which an experiment is taking place: such a quantity is said to be sharp. If that quantity is not the same across that section of the multiverse, it is unsharp. In general, if one quantity associated with a system is sharp, then others must be unsharp. For example, if the position of a system is sharp, its momentum must be unsharp and vice versa. As a result, if you wanted to transfer information from a system’s momentum into its position, you could only do that by erasing that information from the momentum. In general, my future self’s measurable quantities depend on those of my past self in such a way that copying information from one to the other is not possible. The same will be true of all of the other objects in the past that I interacted with. So the future version of me won’t be able to share information with anything in the past that I had interacted with [2]. As a result, the paradoxes don’t arise because the relevant interactions can’t take place. This includes the knowledge paradox, which was not solved by Deutsch’s model. If you can’t transfer information to earlier versions of systems you interacted with, then you can’t give out free mathematical results.

Notes

[1] Technical note. The system that comes out of the CTC is assumed to have a Schrödinger picture state that is obtained by taking the partial trace of the state it had when it went into the CTC. The partial trace leaves out all of the entanglement information.
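To illustrate the point (this is my own sketch in Python, not part of Deutsch’s paper), take the partial trace of one half of an entangled pair: the result is the maximally mixed state, with no record of the correlations left.

```python
def partial_trace_second(rho):
    # Trace out the second qubit of a 4x4 density matrix.
    return [[rho[2*i][2*j] + rho[2*i + 1][2*j + 1] for j in range(2)]
            for i in range(2)]

# Density matrix of the Bell state (|00> + |11>)/sqrt(2).
psi = [2 ** -0.5, 0.0, 0.0, 2 ** -0.5]
rho = [[psi[i] * psi[j] for j in range(4)] for i in range(4)]

reduced = partial_trace_second(rho)
print([[round(x, 10) for x in row] for row in reduced])
# [[0.5, 0.0], [0.0, 0.5]]: the maximally mixed state; all record of the
# correlations with the other qubit is gone.
```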

[2] Technical note. This issue can be discussed in the Heisenberg picture of quantum computation. Suppose you are entering the region in which the CTC is present at time t=0. If you go into the CTC, you come out at t=1. You can either go in or not, and whether you will go in can be represented by a qubit Q_{in}. This qubit can be represented by a triple of observables \hat{\mathbf{q}}(t) satisfying the Pauli algebra. Q_{in} can be represented by \hat{\mathbf{q}}_{in}(0)=(\sigma_{11},-\sigma_{12},-\sigma_{13}), where \sigma_{ab} = I^{a-1}\otimes \sigma_b\otimes I^{N-a} and \sigma_b is the bth Pauli matrix. In the Heisenberg picture state \tfrac{1}{4}(I+\sigma_{13})(I+\sigma_{23}), Q_{in} starts out with the observable \tfrac{1}{2}(\hat{1}-\hat{q}_{in3}(t)) having expectation value 1, where 1 means you will enter the CTC and 0 means you won’t.
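That expectation value can be checked numerically (this is my own illustration, with N=2 and my own variable names), building \sigma_{13} and \sigma_{23} as Kronecker products:

```python
def kron(A, B):
    # Kronecker product of two matrices given as lists of lists.
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(rA, rB)] for rA, rB in zip(A, B)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def trace(A):
    return sum(A[i][i] for i in range(len(A)))

I2 = [[1, 0], [0, 1]]
Z  = [[1, 0], [0, -1]]          # sigma_3

s13 = kron(Z, I2)               # sigma_3 on qubit 1
s23 = kron(I2, Z)               # sigma_3 on qubit 2
I4  = kron(I2, I2)

# Heisenberg picture state (1/4)(I + s13)(I + s23).
rho = scale(0.25, matmul(add(I4, s13), add(I4, s23)))

# q_in3(0) = -s13, so (1/2)(1 - q_in3(0)) = (1/2)(I + s13).
obs = scale(0.5, add(I4, s13))

print(trace(matmul(rho, obs)))  # 1.0: you certainly enter the CTC
```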

Now, whether a version of you comes out of the CTC or not can be represented by a qubit Q_{out}. This qubit is represented by some triple \hat{\mathbf{q}}_{out}(t) that we have to work out. But what we would like to have happen is that if its value is 1, i.e. – if somebody came out of the CTC, that should set Q_{in}‘s value to 0 if it was 1. This corresponds to the situation in which the version of you that leaves the CTC persuades you not to go in. A controlled not gate with Q_{in} as the target fits the bill. Suppose Q_{in} had interacted with a qubit represented by (\sigma_{21},-\sigma_{22},-\sigma_{23}) in a controlled not gate, with Q_{in} as the target. This gives the result \hat{\mathbf{q}}_{in}(1)=(\sigma_{11},\sigma_{12}\sigma_{23},\sigma_{13}\sigma_{23}). Since this is the qubit that goes into the CTC, we have \hat{\mathbf{q}}_{out}(0)=(\sigma_{11},\sigma_{12}\sigma_{23},\sigma_{13}\sigma_{23}), which is not in the same form as the qubit used to construct the controlled not. The “controlled not” would be written as U = \tfrac{1}{2}[(I+\sigma_{23})\sigma_{11}+(I-\sigma_{23})]. The controlled not between two qubits Q_a and Q_b would usually be of the form U_{CNOT} = \tfrac{1}{2}[(\hat{1}-\hat{q}_{a3}(t))\hat{q}_{b1}(t)+(\hat{1}+\hat{q}_{a3}(t))]. As a function of Q_{in} and Q_{out}, the gate is not of this form and so it is not a controlled NOT between them. Also, Q_{out} at t=0 has value 0, which means it represents a situation in which you didn’t go into the CTC, so the qubits don’t represent the experiment we wanted to do. Finally, the gate is its own inverse, so when Q_{out} goes through the gate it ends up with the value Q_{in} had at t=0. So it doesn’t seem to be the case that any exchange of information has taken place between Q_{in} and Q_{out}. Adding more qubits and more interactions would raise the same problems, and also would not allow any exchange of information between past and future versions of the same system.
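As a check on the algebra (my own code, assuming a gate that flips Q_{in} conditioned on the second qubit), one can verify numerically that the gate sends (\sigma_{11},-\sigma_{12},-\sigma_{13}) to (\sigma_{11},\sigma_{12}\sigma_{23},\sigma_{13}\sigma_{23}) and is its own inverse:

```python
def kron(A, B):
    return [[a * b for a in rowA for b in rowB]
            for rowA in A for rowB in B]

def mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(A, B):
    return [[x + y for x, y in zip(rA, rB)] for rA, rB in zip(A, B)]

def scale(c, A):
    return [[c * x for x in row] for row in A]

def close(A, B):
    return all(abs(x - y) < 1e-9 for rA, rB in zip(A, B) for x, y in zip(rA, rB))

I2 = [[1, 0], [0, 1]]
X  = [[0, 1], [1, 0]]
Y  = [[0, -1j], [1j, 0]]
Z  = [[1, 0], [0, -1]]

s11, s12, s13 = kron(X, I2), kron(Y, I2), kron(Z, I2)
s23 = kron(I2, Z)
I4  = kron(I2, I2)

# Gate flipping qubit 1 conditioned on qubit 2: U = (1/2)[(I + s23)s11 + (I - s23)].
U = scale(0.5, add(mul(add(I4, s23), s11), add(I4, scale(-1, s23))))

def heisenberg(q):
    # U is Hermitian and its own inverse, so conjugation is U q U.
    return mul(U, mul(q, U))

q_in0 = (s11, scale(-1, s12), scale(-1, s13))
q_in1 = [heisenberg(q) for q in q_in0]

print(close(q_in1[0], s11))            # True
print(close(q_in1[1], mul(s12, s23)))  # True
print(close(q_in1[2], mul(s13, s23)))  # True
print(close(mul(U, U), I4))            # True: the gate is its own inverse
```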

Rand on Kant (and positivists)

This is a commentary on a blog entry that claims Ayn Rand was wrong about Kant.

The blog author reproduces the following quote from “Faith and Force: Destroyers of the Modern World” Chapter 7 of “Philosophy: Who Needs It”, with labels for Rand’s statements inserted for clarity (around location 1307 in the Kindle version):

He [Kant] did not deny the validity of reason – he merely claimed [1] that reason is “limited,” [2] that it leads us to impossible contradictions, [3] that everything we perceive is an illusion and [4] that we can never perceive reality or “things as they are.” He claimed, in effect, that the things we perceive are not real because we perceive them.

From the blog entry:

K1: Reason is limited in its cognitive employment to the sense world: there is no knowledge by reason alone of meta-physical objects, objects lying beyond the bounds of sense, such as God and the soul.

K2: When reason is employed without sensory guidance or sensory input in an attempt to know meta-physical objects, reason entangles itself in contradictions.

K3: For knowledge, two things are required: sensory input and conceptual interpretation. Since the interpretation is made in accordance with categories grounded in our understanding, the object of knowledge is a phenomenon rather than a noumenon (thing-in-itself). Since phenomena are objects of objectively valid cognition, a phenomenon (Erscheinung) is distinct from an illusion (Schein). (Cf. Critique of Pure Reason B69-70 et passim)

This is an accurate summary of central Kantian theses. (Trust me, I wrote my doctoral dissertation on Kant.) Comparing this summary with what Rand says, one can see how she distorts Kant’s views. Not only does Rand misrepresent K1, K2, and K3, she conflates them in her run-on sentence although they are obviously distinct. Particularly outrageous is Rand’s claim that for Kant, objects of perception are illusory, given Kant’s quite explicit explanations (in several places) of the distinction between appearance and illusion.

I will work from the positions attributed to Kant by the author of the blog post. These positions, he claims, are not accurately described by Rand’s statements about them [1]-[4], but he is wrong.

Let’s start with K1. According to this statement there is a whole load of stuff in reality that we cannot perceive the way it really is [4]. Also, the author concedes that reason is limited, as Rand claimed in [1].

What about K2? We can’t perceive metaphysical objects, and without those perceptions we are led into contradictions by reason when we attempt to understand those objects, so reason leads to contradictions [2].

And now K3. It reads a lot like [3], but since the words are used in a somewhat non-standard way, one could object to equating it with [3] on its own. Together with K2, however, it implies that everything we perceive is a very small part of a much larger reality that we can’t understand. So everything we think we know about the stuff we see is wrong because those objects have metaphysical aspects we can’t understand [3].

So it looks like Rand was correct about this particular issue.

Rand again, from the same essay (around location 1325):

What Kant propounded was full, total, abject selflessness: he held that an action is moral only if you perform it out of a sense of duty and derive no benefit from it of any kind, neither material nor spiritual; if you derive any benefit, your action is not moral any longer.

From the blog:

This too is a travesty of Kant’s actual position. Kant distinguishes duty and inclination. (Grundlegung zur Metaphysik der Sitten, Akademie-Ausgabe 397 ff.) This distinction must be made since there are acts one is inclined to perform that may or may not be in accordance with duty. An inclination to behave rudely contravenes one’s duty, while an inclination to behave in a kind manner is in accordance with it. Kant also distinguishes between acting from duty and acting in accordance with duty. One acts from duty if one’s act is motivated by one’s concern to do one’s duty. Clearly, if one acts from duty, then one acts in accordance with duty. But the converse does not hold: one can act in accordance with duty without acting from duty. Suppose Ron is naturally inclined to be kind to everyone he meets. On a given occasion, his kind treatment of a person is motivated not by duty but by inclination. In this case, Ron acts in accordance with duty but not from duty.

Kant held that an act has moral worth only if it is done from duty. Contra Rand, however, this is obviously consistent with deriving benefit from the act. Suppose — to adapt one of Kant’s examples — I am a merchant who is in a position to cheat a customer (a child, say). Acting from duty, I treat the customer fairly. My act has moral worth even though I derive benefits from acting fairly and being perceived as acting fairly: cheating customers is not good for business in the long run.

So the author’s defence of Kant is that whether you benefit from an action is irrelevant to whether it is right or wrong. If this were accurate, then Rand would have made a mistake.

But before I continue I want to address a mistake in the blogger’s description of Rand’s position:

One can see from this how confused Rand is. She thinks that an act performed from duty is equivalent to one that runs counter to inclination, or counter to one’s own benefit.

Rand’s position is that Kant said an action is moral only if you do it from duty and get no benefit. She does not say that an action that doesn’t benefit you is equivalent to a duty. There could be things you could do that would be self-destructive without being duties, but anything that is a duty would be self-destructive.

Let’s look at some actual quotes from Kant.

According to Kant, staying alive is only morally praiseworthy if you want to die, otherwise it is just something people do and is not particularly good or bad:

Again, to preserve one’s life is a duty; and independently of this, every man is, by the constitution of his system, strongly inclined to do so; and upon this very account, that anxious care shown by most men for their own safety is void of any internal worth; and the maxim from which such care arises is destitute of any moral import (i.e., has no ethic content). Men in so far preserve their lives conformably to what is duty, but they do it not because it is so; whereas, when distress and secret sorrow deprive a man of all relish for life, and the sufferer, strong in soul, and rather indignant at his destiny than dejected or timorous, would fain seek death, and yet eschews it, neither biassed by inclination nor by fear, but swayed by duty only, then his maxim of conduct possesses genuine ethic content.

If you actually want to help somebody, doing it is morally worthless. If you don’t want to help him, that is morally good:

To be beneficent when in one’s power is a duty; and besides this, some few are so sympathetically constituted, that they, apart from any motives of vanity or self-interest, take a serene pleasure in spreading joy around them, and find a reflex delight in that satisfaction which they observe to spring from their kindness. I maintain, however, that in such a case the action, how lovely soever, and outwardly coincident with the call of duty, is entirely devoid of true moral worth, and rises no higher than actions founded on other affections, e.g., a thirst for glory, which, happening to concur with public advantage and a man’s own duty, entitles certainly to praise and high encouragement, but not to ethic admiration. For the inward maxims of the man are void of ethical content, viz., the inward cast and bent of the volition to act and to perform these, not from inclination, but from duty only. Again, to take a further case, let us suppose the mind of some one clouded by sorrow, so as to extinguish sympathy,—and that though it still remained in his power to assist others, yet that he were not moved by the consideration of foreign distress, his mind being wholly occupied by his own,—and that in this condition he, with no appetite as an incentive, should rouse himself from this insensibility, and act beneficently purely out of duty,—then would such action have real moral worth; and yet, further, had nature given this or that man little of sympathy in his temperament, leaving him callous to the miseries of others, but instead endowed him with force of mind to support his own sorrows, and so induced him to consider himself entitled to presuppose the same qualities in others, would it not be possible for such a man to give himself a far higher worth than that of mere good nature? Certainly it would; for just at this point all worth of character begins which is moral and the highest, viz., to act beneficently, irrespective of inclination, because it is a duty.

If you want to help somebody and do so, that’s morally neutral. If you don’t want to help him but do it anyway, that’s morally good:

It is thus, without all question, that we are to understand those passages of Scripture where it is ordained that we love our neighbour, even our enemy; for, as an affection, love cannot be commanded or enforced, but to act kindly from a principle of duty can, not only where there is no natural desire, but also where aversion irresistibly thrusts itself upon the mind; and this would be a practical love, not a pathological liking, and would consist in the original volition, and [11] not in any sensation or emotion of the sensory;—a practical love, resulting from maxims of practical conduct, and not from ebullitions and overflowings of the heart.

Anything you do because you want to do it is morally worthless, regardless of whether it happens to match what you would do as a result of duty. The blog author says this is consistent with benefiting from duty, but this would imply you can benefit from something you don’t want, which is false. If you get somebody to do something you allege will benefit that person, but which the person doesn’t want to do, all you’ve done is made somebody do something of which he has a criticism. You don’t have an answer to the criticism so you don’t know whether it benefits the supposed beneficiary.

A few additional notes

The blogger writes:

What’s more, Rand gives no evidence of understanding the problem with which Kant is grappling, namely, that of securing objective knowledge of nature in the teeth of Humean scepticism. One cannot evaluate a philosopher’s theses except against the backdrop of the problems those theses are supposed to solve.

Rand never understood the problem of induction well enough to solve it. (Karl Popper solved the problem of induction, see “Realism and the Aim of Science”, Chapter I, and “Objective Knowledge” chapter 1.) But she understood it well enough to see that Kant’s “solution” was no good. Rand also didn’t pretend that she had solved the problem of induction, unlike many other philosophers. And she solved many other problems. The blogger continues:

To give you some idea of the pitiful level Rand operates from, consider her suggestion near the bottom of the same page that logical positivists are “neo-mystics.”

Rand defines mysticism

Mysticism is the acceptance of allegations without evidence or proof, either apart from or against the evidence of one’s senses or one’s reason.

The logical positivists claimed that science was just about describing sense data in order to exclude “metaphysics”: anything other than sense data. They accepted the idea that you could understand perceptions, but not anything else. But this idea implicitly takes for granted that it is impossible for you to understand reality using reason. So the logical positivists rejected reason, their claims to the contrary notwithstanding. So the logical positivists adopted their ideas against reason: they were mystics. This is not pitiful, it is an accurate criticism of a destructive, anti-rational philosophy.

Fungibility in quantum mechanics

In The Beginning of Infinity David Deutsch explains some of quantum mechanics in terms of fungibility. Two or more things are fungible if there is no difference between them apart from the fact that there is more than one of them. In this post I explain some stuff about fungibility in quantum mechanics. I think the only knowledge you need is the ability to add and multiply, knowledge of what a square root is, and the knowledge that \pi = 3.14159\dots. Other stuff is explained in the post.

For example, for the purposes of settling debts, one dollar is fungible with another. If you pay a debt, the person you have paid can’t legally require that you pay him with a particular dollar. He can’t require that you turn out your pockets and say “I want this dollar and not that one.” You can also have a situation of diversity within fungibility. If you have a debt, then your creditor may own some of the dollars in your wallet and not others. There is no fact of the matter about which dollar he owns, but he owns some of them. So despite being fungible the dollars have diverse properties.

According to quantum mechanics, the whole of physical reality is a complex structure called the multiverse that contains, among other things, multiple instances of all the objects you see around you. (Many physicists say that quantum mechanics isn’t about the multiverse, but they’re wrong. For an explanation of why they are wrong see Chapter 2 of The Fabric of Reality by David Deutsch and Chapters 11 and 12 of The Beginning of Infinity.) Some of those instances are different from one another. For example, some instances of me phrased the sentence before this one in a slightly different way because they had different ideas about how to write this post. But some of the instances of an object are physically identical except for the fact that there is more than one of them: they are fungible. In addition, you can have diversity within that fungibility (from BoI, p. 289, location 5011 in the Kindle edition):

Furthermore, it follows from the laws of quantum physics that, for any fungible collection of instances of a physical object, some of their instances must be diverse. This is known as the ‘Heisenberg uncertainty principle’, after the physicist Werner Heisenberg, who deduced the earliest version of quantum theory.

Dollars are fungible as a matter of convention, not as a matter of physics. So with the dollars it is relatively easy to understand the fungibility. But how can it be the case that objects are physically fungible and how does quantum mechanics describe that fungibility?

First, what does it mean for two or more instances of a physical system to be fungible? It has to mean that everything about those systems that could be measured is identical. Each system exists in multiple versions that can interfere in interference experiments. Suppose a measurement of a particular physical quantity A of some system has two possible outcomes, 0 and 1, and that all the instances of the system start out with the value 0. The collection of the instances with the value 0 is represented by the state |0\rangle. There is also a state that would describe a collection of instances all of which have the value 1: |1\rangle. States can be added together by the following rules. (1) If you add two states that are the same, their magnitudes can be added up, so
\tfrac{1}{2}(|0\rangle+|0\rangle) = |0\rangle.
If the states are different then their magnitudes can’t be added up so \tfrac{1}{2}(|0\rangle+|1\rangle) can’t be written in any simpler way.

There could be another physical quantity B with possible measurement outcomes + and -, where
|+\rangle =\tfrac{1}{\sqrt{2}}(|0\rangle+|1\rangle)

and
|-\rangle =\tfrac{1}{\sqrt{2}}(|0\rangle-|1\rangle).

Using the rules I described for adding up states you can work out that
|0\rangle = \tfrac{1}{\sqrt{2}}(|+\rangle+|-\rangle).
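You can check these relations numerically. The following is my own illustration, representing |0\rangle and |1\rangle as the vectors [1, 0] and [0, 1]:

```python
import math

ket0 = [1.0, 0.0]
ket1 = [0.0, 1.0]

def add(u, v):
    return [a + b for a, b in zip(u, v)]

def scale(c, u):
    return [c * a for a in u]

ket_plus  = scale(1 / math.sqrt(2), add(ket0, ket1))
ket_minus = scale(1 / math.sqrt(2), add(ket0, scale(-1.0, ket1)))

# |0> = (1/sqrt(2))(|+> + |->), as stated above.
recovered = scale(1 / math.sqrt(2), add(ket_plus, ket_minus))
print([round(x, 10) for x in recovered])  # [1.0, 0.0]
```
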
Before I can describe the next thing I have to address a possible misconception. Quantum mechanics is commonly described as a probabilistic theory. What people usually mean when they say this is that if you measure a physical quantity, one of the outcomes happens randomly with some probability. People then say the probability means all sorts of things that make no sense. For example, some say the probability of getting an outcome is the proportion of that outcome in an infinite sequence of measurements. No such sequences actually exist. Also, you could change the proportion of that outcome in an infinite sequence just by changing the order of the sequence. For example, the infinite sequence that consists of repeating 01 has the same 0s and 1s as the sequence that consists of repeating 001: both contain infinitely many of each, so all that has changed is the order in which the 1s and 0s occur. So if you were to take 010101… as giving a probability of 1/2 for 1 and 001001… as giving a probability of 1/3 for 1, then you could change the probability just by changing the order. There are other bad ideas that I won’t go into here. In quantum mechanics, the probability of an outcome is a way of counting the instances of that outcome that tells you how you could bet on them in such a way that other people could not construct a strategy to make you lose consistently.
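To make the reordering point concrete, here is a small illustration of my own, with long finite prefixes standing in for the infinite sequences: the relative frequency of 1 is 1/2 in the 0101… pattern and 1/3 in the 001001… pattern, even though both infinite sequences contain infinitely many 0s and infinitely many 1s, so each is a reordering of the other.

```python
# Long finite prefixes of the two periodic patterns, both 600000 symbols long.
seq_a = "01" * 300000    # 010101...
seq_b = "001" * 200000   # 001001...

freq_a = seq_a.count("1") / len(seq_a)
freq_b = seq_b.count("1") / len(seq_b)
print(freq_a, round(freq_b, 4))  # 0.5 0.3333
```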

If your system is in the state |0\rangle and you measure B, then in one universe you see + and in another you see -. The probability of seeing + is 1/2 and the same is true for -. The way you get the probability for + is by writing |0\rangle in terms of |+\rangle and |-\rangle and squaring the magnitude of the number in front of |+\rangle. The probability for - is obtained by the same procedure. There may be other physical quantities you could measure for your system and they would have different sets of possible outcomes and different probabilities. If you wrote down |0\rangle in terms of any other set of states representing the possible outcomes of a measurement on that system, then you would use the same rule to get the probabilities for the results of that measurement: square the magnitude in front of the relevant state. All of the predictions it is possible to make about a system are contained in the set of possible outcomes for each quantity and the probabilities of those outcomes. So if two instances of a system had the same set of possible outcomes and probabilities for every physical quantity that could be measured, there would be no way for you to tell them apart: they would be physically identical in every way.
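As a worked example of the rule (my own illustration): the coefficients of |+\rangle and |-\rangle in |0\rangle are both 1/\sqrt{2}, so squaring their magnitudes gives probability 1/2 for each outcome of a B measurement.

```python
import math

# Coefficients of |+> and |-> when |0> is written in the B basis.
amp_plus  = 1 / math.sqrt(2)
amp_minus = 1 / math.sqrt(2)

# Square the magnitude of each coefficient to get the probability.
p_plus  = abs(amp_plus) ** 2
p_minus = abs(amp_minus) ** 2
print(round(p_plus, 10), round(p_minus, 10))  # 0.5 0.5
```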

Now, from rule (1) you could write
|0\rangle= \tfrac{1}{2}(|0\rangle+|0\rangle),
or
|0\rangle= \tfrac{1}{3}|0\rangle+\tfrac{2}{3}|0\rangle,
or
|0\rangle= \tfrac{1}{\pi}|0\rangle+(1-\tfrac{1}{\pi})|0\rangle.
So the instances of the system with the value 0 can be divided up into sets in a continuum of different ways. In every such division, each set is described by the state |0\rangle, and so each makes the same predictions about the possible outcomes of measurements and the probabilities of those outcomes. So although there is more than one instance of the system in which A has the value 0, they all make exactly the same predictions about the values of all measurable quantities. There is more than one instance of 0, but they are physically identical: they are fungible.
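As a final check (my own illustration), the three divisions above all reassemble into the same state |0\rangle, represented here as the vector [1, 0]:

```python
import math

ket0 = [1.0, 0.0]

def combine(c1, c2):
    # Weighted recombination c1|0> + c2|0> using rule (1): the magnitudes add.
    return [(c1 + c2) * a for a in ket0]

for c1, c2 in [(1/2, 1/2), (1/3, 2/3), (1/math.pi, 1 - 1/math.pi)]:
    v = combine(c1, c2)
    print([round(x, 10) for x in v])  # [1.0, 0.0] each time
```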