Criticism of Nozick on Objectivism

The philosopher Robert Nozick, who wrote a book about libertarianism called Anarchy, State and Utopia, also wrote a criticism of Objectivist moral philosophy.

Nozick claims that Rand failed to prove her moral ideas are correct, and so they are no good. His strategy is to break down Rand’s arguments into a number of steps, invent arguments he thinks are relevant to those steps and then say she has not proved one or more steps.

It is true that Rand doesn’t prove her ideas, since proof is impossible. (Any alleged proof makes assumptions and uses rules of inference, neither of which can be guaranteed to produce correct results. Knowledge is actually created by conjecture and criticism, see Realism and the Aim of Science by Popper.) But the lack of proof is also irrelevant since no position of any kind can be proved. And to the extent that his arguments are treated as criticism of Rand’s moral ideas, they are not much good.

Section I

Nozick constructs an argument intended to reach the conclusion that only a living being can have values with a point. Only living beings can choose. And only for them could there be any point in choosing, since their choices can make a difference to their lives, e.g. bad choices can lead to injury or death. He then cites Rand's argument that an immortal, invincible robot wouldn't be able to gain or lose anything and so couldn't make choices.

Nozick then asks whether a machine could be programmed to value states of affairs that don’t affect it. So then it would have values and Rand’s argument can’t be correct.

I don’t think Rand’s robot argument is much good since any such robot would break the laws of physics, so it couldn’t exist. But there is a hard and fast distinction between living things that can have problems, and non-living things like rocks that can’t have problems. Rand is right in substance and Nozick is wrong in substance.

Nozick also says that it could be the case that you should value stuff outside yourself because of the kind of thing it is. For example, you should value god since he's all powerful. This doesn't make any sense since the idea of god doesn't make any sense. His other example involves admiring another person's talent. But if another person has some talent, that talent helps produce goods that will make you better off either directly or indirectly. For example, if somebody is a good singer but sings songs you happen not to like, other people might like the songs and be more productive as a result.

Section II

Nozick says even if we grant that only living things can have values, it doesn’t follow that life is a value.

Nozick then says that the argument for life having value would have to be that (x) having values has value, that life is necessary for values, so life has value. He then points out that if nobody ever got cancer, then we wouldn’t value a cure for cancer, so then having cancer has value by the assumption that having values has value.

But Nozick says we could modify x to x’: anything that is necessary for all value has value, so since cancer isn’t necessary for all values, it isn’t a value. He then says this means that not having achieved all values is a value by this criterion, as is being mortal and destructible.

Nozick also says that if Rand’s idea is of the form you ought to realise the greatest amount of X, where Rand says X is life, somebody could have a different X, like death, so Rand hasn’t proven her case.

This whole series of arguments is silly. In reality, maintaining life is difficult, and if you don't choose to maintain it, you have effectively chosen death. And if you're going to choose death, why not just kill yourself by throwing yourself off the nearest bridge or something like that? Death is easy to achieve and so doesn't require much thought about your values.

Section III

This is about the idea that man’s life qua man, that is, as a rational being, is of value to him.

Nozick says man has values other than rationality that separate him from other animals so why pick rationality? And other beings, like aliens, could be rational so then that’s not a property of man qua man.

And Nozick says a man could stop acting rationally, and why shouldn’t he? Nozick bangs on about essences a bit too.

If a man doesn’t act rationally he won’t survive for long without help. But why not just be a parasite? Nozick asks. Not everyone could do it, but some people could do it, maybe for their whole lives.

Again these arguments are kinda silly. If you’re not going to be rational, you basically have to be a parasite. And being a parasite depends on your host not realising you’re a parasite. So then you have to deceive the host, which makes it more difficult for him to function rationally. So your lifestyle is self-destructive. In addition, the only way you can avoid being rational is to ignore objections to your actions, so you’re choosing stuff that is bad by your lights. So being irrational is not a way to achieve any value you might hold.

Section IV

Nozick then goes on to argue against the idea that no man should sacrifice for another, or ask another to sacrifice for him.

In addition to the parasite argument, which comes up again, he also claims that it might not be true that there are no conflicts of interest among rational men. He claims there could be multiple dimensions of rationality and that achieving one might mean sacrificing others, and this could cause a conflict of interest among rational men.

Also, Nozick seems to think that no conflicts of interest involves people all mysteriously agreeing by magic. He doesn’t seem to understand that people can come to agree on what to do in some situation as a result of critical discussion if they are prepared to have such a discussion.

This is a problem of thinking of rationality in terms of weighing options, which is wrong. If there were some way to weigh different priorities, you would have to choose the appropriate way to do the weighing, which couldn’t be done by weighing. So then there would have to be some master argument that determines how stuff should be weighed.

In addition, all the options for weighing suck, as explained in BoI Chapter 13.

Section V

Nozick brings up Galt’s promise to commit suicide if the looters torture Dagny because he refuses to live on the looters’ terms. Nozick then says that if Dagny died from a disease, Galt would kill himself. But that doesn’t follow from what he said. Rather, the problem is that Galt doesn’t want to live if his values are going to be destroyed. Nozick has confused the concrete instance of those values being destroyed, Galt giving the looters what they want because Dagny is being tortured, with the principle of why Galt would top himself in that instance.

As a result of this confusion, Nozick burbles on for the rest of the section talking a load of rot about happiness.

He discusses doing something that results in guilt and then using chemicals to forget it. This doesn’t change the fact that when you did the bad thing, you acted against your values. If you can’t afford a computer because you stole from somebody and have to pay him back, then you still don’t have the computer even if you somehow forget the incident.

Nozick proposes that you could implant in your child a device that would make him act on some moral principles P, except when it would benefit him to break those principles, e.g. murdering somebody to get his fortune. There are three problems with this.

First, trying to control your child like this would be grossly immoral and would hurt you because one of the benefits of interacting with somebody else is he can do things you can’t anticipate.

Second, acting on principles requires creativity, so controlling your child by some device is incompatible with him acting on principles.

Third, murdering people and taking their stuff isn’t a good idea. Even if you don’t get caught, you always have to look over your shoulder and lie about stuff to cover up your involvement. And you also lose the opportunity to cooperate with the people who made the fortune. To make lots of money they must have good ideas you can learn from, since if you can’t earn that much yourself, you have to steal it. And even if they sucked (say they inherited the money and were wasting it out of stupidity), you would still be better off not killing them because they could in principle improve. Also, if they suck you could try advising them on ways they suck and get income from them that way.

Overall, Nozick’s essay is kinda dumb. In a lot of the sections he misunderstands Rand, makes up stuff he thinks she should have said and criticises that. But at least some of what he said was answered in Rand’s work and he ignored the answers, e.g. Rand’s essay on alleged conflicts of interest.

UPDATE This post was adapted from an e-mail in this thread. The rest of this post consists of further material in that thread.

POST 1

Elliot Temple curi@curi.us [fallible-ideas]

Re: Answer to Nozick (was Re: [FI] Objectivism Criticism)

On Sep 26, 2015, at 5:22 PM, Alan Forrester alanmichaelforrester@googlemail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

On 21 Sep 2015, at 00:15, Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

http://www.nowandfutures.com/large/On-the-Randian-Argument-Nozick.pdf

Anyone want to answer these, or know a good answer somewhere?

Section II

Nozick says even if we grant that only living things can have values, it doesn’t follow that life is a value.

Nozick then says that the argument for life having value would have to be that (x) having values has value, that life is necessary for values, so life has value. He then points out that if nobody ever got cancer, then we wouldn’t value a cure for cancer, so then having cancer has value by the assumption that having values has value.

But Nozick says we could modify x to x’: anything that is necessary for all value has value, so since cancer isn’t necessary for all values, it isn’t a value. He then says this means that not having achieved all values is a value by this criterion, as is being mortal and destructible.

Nozick also says that if Rand’s idea is of the form you ought to realise the greatest amount of X, where Rand says X is life, somebody could have a different X, like death, so Rand hasn’t proven her case.

This whole series of arguments is silly. In reality, maintaining life is difficult and if you don’t choose to maintain it, you have effectively chosen death. And if you’re going to choose death, why not just kill yourself by throwing yourself off the nearest bridge or something like that? Death is easy to achieve and so doesn’t require much thought for your values.

Depends. How much death do they want? The GREATEST amount?

Personal death is easy to achieve.

Maximizing death in the universe or multiverse is similar to maximizing life or squirrels (http://www.curi.us/1169-morality). It starts with pretty much identical steps for the next million years, regardless of which one you’re trying to maximize in the long run.

If you want to maximize death in the universe, you’ll need things like space travel to go kill all the other solar systems. what if there’s some life there? can’t risk it. better salt the earth, except with better tech. maybe push all the planets in every galaxy into stars. then push the stars into black holes. and nuke all the dark matter. and transmute all the asteroids into nothing but hydrogen, and then spread it out a LOT.

anyway that sounds fucking hard, so we’ll need stuff like capitalism to do it! and peace. if we die, we won’t be able to go destroy all the planets, you know?

fortunately during the next million years of Objectivism, peace and capitalism, we’ll have a lot of time to change our mind about what we should do once we’re powerful enough to maximize death in the whole universe. maybe we’ll figure out some better goals by then. (we already have, and we already can argue them. but Nozick isn’t persuaded. ok. no problem. he can get persuaded next century, or the one after. let no one say we’re unkind to the slow learners!)

the point is if you take some X seriously and want the greatest amount of it, THAT IMPLIES OBJECTIVISM, at least for the next million years. (and after a million years of everyone being an Objectivist, i suspect people will prefer life over death as their X).

the only way to avoid things like reason and liberalism is by not thinking much, and not taking any big grand values seriously. keep everything in little parochial limits. if all you want is a dead Earth, and you don’t care about anything bigger – if you aren’t doing it in a principled “kill everything” way – then you can be a destroyer. but if you care about things like non-contradiction and conceptual thinking, then it’s Objectivism for you.

Section III

 

So your lifestyle is self-destructive. In addition, the only way you can avoid being rational is to ignore objections to your actions, so you’re choosing stuff that is bad by your lights. So being irrational is not a way to achieve any value you might hold.

yeah. if they have any serious, big value, that implies stuff like i was discussing above. it implies critical thinking, reasoning, etc

the only chance to escape stuff like Objectivism is either not valuing anything (being a nihilist) or having only very limited, parochial values (being finite, not being part of the beginning of infinity).

Section IV

Nozick then goes on to argue against the idea that no man should sacrifice for another, or ask another to sacrifice for him.

ugh

he also claims that it might not be true that there are no conflicts of interest among rational men. He claims there could be multiple dimensions of rationality and that achieving one might mean sacrificing others, and this could cause a conflict of interest among rational men.

ugh

Section V

Nozick brings up Galt’s promise to commit suicide if the looters torture Dagny because he refuses to live on the looters’ terms.  Nozick then says that if Dagny died from a disease, Galt would kill himself.

but in the book scenario, the suicide would *prevent her further torture* by taking away its purpose (to get to Galt via her). if she died of a disease, his suicide wouldn’t accomplish anything useful.
Elliot Temple
www.fallibleideas.com
www.curi.us

POST 2

Justin Mallone justinceo@gmail.com [fallible-ideas]

Re: Answer to Nozick (was Re: [FI] Objectivism Criticism)

On Sep 26, 2015, at 8:22 PM, Alan Forrester alanmichaelforrester@googlemail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

On 21 Sep 2015, at 00:15, Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

http://www.nowandfutures.com/large/On-the-Randian-Argument-Nozick.pdf

Anyone want to answer these, or know a good answer somewhere?

Nozick’s article is about Rand’s failure to prove her moral ideas are correct. His strategy is to break down Rand’s arguments into a number of steps, invent arguments he thinks are relevant to those steps and then say she has not proved one or more steps.

It is true that Rand doesn’t prove her ideas, since proof is impossible. But the lack of proof is also irrelevant since no position of any kind can be proved. And to the extent that his arguments are treated as criticism of Rand’s moral ideas, they are not much good.

Section III

This is about the idea that man’s life qua man, that is, as a rational being, is of value to him.

Nozick says man has values other than rationality that separate him from other animals so why pick rationality?

Nozick seems to think that there is some objectivist position like  “whatever particular unique attribute a thing has defines how it should act” or something along those lines.

Like see where Nozick says in describing possibility one in Section III that:

we focus on the idea that what is special to a thing marks its function and from this we can get its peculiarly appropriate form of behavior.

So basically he doesn’t GET the power of reason, and that if you value your life as a man, and wish to preserve it, you’ve got to, as a practical matter, make use of the super powerful faculty of rationality in order to do so.

Seems to think reason is nothing special.

He also says like maybe we discover that dolphins have some property P which we thought made man special, and then we couldn’t say man is special anymore or something.

If we discovered intelligent dolphins then morality and the importance of reason for their life  would apply to them too. I don’t think this poses a huge problem for Rand’s ethics.

The issue isn’t about any old property P. Being capable of reason is a special thing!

If mankind had particularly strange and unique teeth, that would not be very important to morality, I think. It would affect details like what techniques competent and effective dental care involves. Those aren’t morally irrelevant but they are very narrow, and don’t have lots of reach into other areas. So then who cares if we discover some other creatures have these odd teeth too? This contrasts strongly with the moral impact of *REASON*.

And other beings, like aliens, could be rational so then that’s not a property of man qua man.

And Nozick says a man could stop acting rationally, and why shouldn’t he? Nozick bangs on about essences a bit too.

“Bangs on about essences a bit too” is a very fair summary of the content. Very confusing stuff in this part.

If a man doesn’t act rationally he won’t survive for long without help. But why not just be a parasite? Nozick asks. Not everyone could do it, but some people could do it, maybe for their whole lives.

The parasite stuff is a DISASTAH. I’m gonna go in some detail on this one with quotes. Nozick:

There are two forms to the parasite argument, a consequential one and a formal one.

Note I don’t see any argument about how it’s a bad and undesirable lifestyle that makes you more helpless, less powerful, less fulfilled, and often at best involves the heavy cost of optimizing around and flattering other people’s irrationalities.

The consequential argument is that being a parasite won’t work in the long run. Parasites will eventually run out of hosts, out of those to live off, imitate, steal from. (The novel, Atlas Shrugged, argues this view.)

Atlas Shrugged is not a consequentialist morality type book.

It does show the ultimate consequences of stuff. But it also shows what a horrible lifestyle it is to be an ineffective pathetic person who is dependent on the generosity and benevolence of the able while simultaneously hating them and needing to blackmail them (see James, and Rearden’s whole family).

But in the short run, one can be a parasite and survive; even over a whole lifetime and many generations. And new hosts come along. So, if one is in a position to survive as a parasite, what reasons have been offered against it?

One cannot know infallibly how long this time period will be. Better to not set yourself up for a lifestyle of powerlessness and dependence. The fact that you are considering engaging in such a project indicates your judgment is pretty bad as it is, so you should maybe not be so trusting of your judgment as to the course of nations and their political trends, or even the tolerance of those you can manipulate personally.

Nozick then describes what he calls “the formal argument.” Basically the point is, moral rules are universal, so if your moral values say something like “you act according to moral principle X, everyone else acts according to moral principle Y, and you’ll get away with it as long as everyone else sticks to Y,” then X can’t be right.

Moral principles aren’t subjective. You’d need some explanation why you get to act one way and everyone else has to support your parasitism. What is it? One proposed justification, which AS deals with in great detail, is need/lack of ability.

Nozick concludes the section by basically asking why he can’t be a subjectivist.

Back to Alan:

Again these arguments are kinda silly. If you’re not going to be rational, you basically have to be a parasite. And being a parasite depends on your host not realising you’re a parasite. So then you have to deceive the host, which makes it more difficult for him to function rationally. So your lifestyle is self-destructive. In addition, the only way you can avoid being rational is to ignore objections to your actions, so you’re choosing stuff that is bad by your lights. So being irrational is not a way to achieve any value you might hold.

Gps.

-JM

POST 4

Justin Mallone justinceo@gmail.com [fallible-ideas] 

To: FI Cc: FIGG

Reply-To: FI

Re: Answer to Nozick (was Re: [FI] Objectivism Criticism)

On Sep 26, 2015, at 8:22 PM, Alan Forrester alanmichaelforrester@googlemail.com [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

On 21 Sep 2015, at 00:15, Elliot Temple curi@curi.us [fallible-ideas] <fallible-ideas@yahoogroups.com> wrote:

http://www.nowandfutures.com/large/On-the-Randian-Argument-Nozick.pdf

Anyone want to answer these, or know a good answer somewhere?

Section IV

Nozick then goes on to argue against the idea that no man should sacrifice for another, or ask another to sacrifice for him.

One mistake I think Nozick makes up front is bolting on some of his own stuff to Rand’s statement, which he then uses to go off on wild tangents unrelated to Rand’s ideas.

Like he starts off quoting Rand:

The basic social principle of the Objectivist ethics is that just as life is an end in itself, so every living human being is an end in himself, not the means to the ends or the welfare of others—and, therefore, that man must live for his own sake, neither sacrificing himself to others nor sacrificing others to himself. To live for his own sake means that the achievement of his own happiness is man’s highest moral purpose.

And then in what seems to be his first partial restatement of this he says:

For each person, the living and prolongation of his own life is a value for him

Notice he’s elevating prolongation to a major theme.

Then he says, to make Rand’s arg work, you need to supply another argument, which he does:

For each person, the living and prolongation of his own life (as a rational being) is the *greatest value* for him.

So now in Nozick’s analysis we’re at something like: prolonging your own life has to be your greatest value or Rand’s arg doesn’t make sense. No wonder he doesn’t understand the Galt and Dagny example (mentioned later).

Nozick just doesn’t understand objectivist ethics at a basic level. He thinks it’s got some like lifespan-maximizing utilitarianism in it. He doesn’t understand what sacrifice means either — the giving up of a greater value for a lesser one.

He can’t understand that e.g. a soldier would choose to go on a super high risk mission cuz he values killing some tyrant more than a higher chance at continuing to live, and that’s not a sacrifice. But if he had those values and then didn’t do the mission cuz he felt guilty about how his death would make someone feel, that very well would be a sacrifice. If you said this to Nozick he’d be very confused, I bet.

In addition to the parasite argument, which comes up again,

Note he talks about parasitism being against your long term interests, as if that’s the only arg. It is against your short term interest as well though. There are opportunity costs to immoral lifestyles. And immoral parasitical lifestyles are less pleasant than productive ones.

he also claims that it might not be true that there are no conflicts of interest among rational men. He claims there could be multiple dimensions of rationality and that achieving one might mean sacrificing others, and this could cause a conflict of interest among rational men.

Gonna quote a bit of Nozick here:

If one believes that ethics involves (something like) one dimension or weighted set of dimensions which is to be used to judge us and the world, so that all of our moral activity (the moral activity of each of us) is directed toward improving (maximizing) the world’s score on this dimension, then it will be natural to fall into such a vision. But if we legitimately have separate goals, and there are independent sources of moral commitment, then there is the possibility of an objective conflict of shoulds.

So notice first he seems to implicitly criticize Objectivism as being some kinda simple-minded utilitarianism. That’s his framing to set up his allegedly more thoughtful/sophisticated alternative.

If that’s what he thinks, how does he explain Galt giving up living in the world as an engineer for a track laborer job and the hope of a strike which they had no anticipation would end? What utility function is being maximized there?

Secondly, wtf is an independent source of moral commitment? Is this just plain apologetics for subjectivism, denial of objective morality, etc?

Also, Nozick seems to think that no conflicts of interest involves people all mysteriously agreeing by magic.

Nozick says:

What I shall call the optimistic tradition holds that there are no objective conflicts of interest among persons. Plato, in the Republic, being the most notable early exponent of this view, we might appropriately call it the Platonic tradition in ethics.

Does anyone know what he’s referring to specifically? Because while it’s been a while since I read the Republic, I seem to have missed the part that sounds anything like VoS…..

He doesn’t seem to understand that people can come to agree on what to do in some situation as a result of critical discussion if they are prepared to have such a discussion.

This is a problem of thinking of rationality in terms of weighing options, which is wrong. If there was some way to weigh different priorities, you would have to choose the appropriate way to do the weighing, which couldn’t be done by weighing. So then there would have to be some master argument that determines how stuff should be weighed.

In addition, all the options for weighing suck, as explained in BoI Chapter 13.

Section V

Nozick brings up Galt’s promise to commit suicide if the looters torture Dagny because he refuses to live on the looters’ terms.  Nozick then says that if Dagny died from a disease, Galt would kill himself. But that doesn’t follow from what he said. Rather, the problem is that Galt doesn’t want to live if his values are going to be destroyed. Nozick has confused the concrete instance of those values being destroyed, Galt giving the looters what they want because Dagny is being tortured, with the principle of why Galt would top himself in that instance.

Yeah.

It’d be more interesting if he’d dropped the disease thing and straight up asked, about the example as it is in the book:

It would be a terrible loss, but does Galt “the perfect man,”

Note btw this seems a bit hostile on Nozick’s part.

have so little moral fiber and resources that life would be intolerable for him ever afterwards

Galt says there would be “no values for me to seek” if Dagny were tortured. Is that actually true? I’m not so sure.

I think you could say that Galt values Dagny not being tortured more than any of those other values, perhaps. That makes sense to me. But NO VALUES seems rly strong.

As a result of this confusion, Nozick burbles on for the rest of the section talking a load of rot about happiness.

He seems to think that by happiness what Oists mean is time periods of having nice fuzzy feelings. I think what Rand had in mind with regard to happiness was not lengths of period of nice feelings but more like, sense of satisfaction from conscious achievement of rational values.

He discusses doing something that results in guilt and then using chemicals to forget it. This doesn’t change the fact that when you did the bad thing, you acted against your values. If you can’t afford a computer because you stole from somebody and have to pay him back, then you still don’t have the computer even if you somehow forget the incident.

Nozick proposes that you could implant in you child a device that would make him act on some moral principles P, except when it would benefit him to break those principles, e.g. – murdering somebody to get his fortune. There are three problems with this.

Only three? 🙂

Take note: Robert Nozick, serious, preftigious academic philosopher who was the head of some fancy pants association of philosophy professors, is seriously saying that if you care about your kids’ happiness, the best thing to do, presuming it were possible, is to use some kinda mind control chip to force them to act a certain way, except at certain times when it’ll benefit them to act otherwise (which the chip somehow knows), and then at those times they turn into e.g. an amnesiac murderer. He doesn’t feel the need to get into detail on how the amnesiac-murderer part is sometimes good, btw. Thinks it’s pretty clear.

BTW his basic purpose here is to try and trash the idea of happiness being morally very important by way of the academic philosopher’s trick of imagining some impossible fantasy happening IRL and then thinking it makes some big moral point. Kinda related to lifeboat scenarios IMHO.

One thing I want to know is how this proposed device works. E.g. how can it do stuff like control other people to act certain ways in certain moral situations (which would involve stuff like grasping that a moral situation exists, and thus consciousness) without itself being an AI-type person or something? (It’s a similar issue to the Spike neutering chip on Buffy.)

So then you’re like using some person-like device as a slave to control another person and make them a slave. So that’s like a double quarter pounder of immorality now. Also hard to manage. Who keeps the enslaver chips enslaved???? What if there’s an enslaver chip rebellion? Then you’ve got angry slaver chips and VERY angry kids. Oh noes.

Why not try persuading your child of the moral ideas you want them to learn instead?

First, trying to control your child like this would be grossly immoral and would hurt you because one of the benefits of interacting with somebody else is he can do things you can’t anticipate.

It kind of amazes me he gets into the thought experiment and isn’t at any point like “hey maybe turning someone into an automaton with massive continuous force would not be happiness maximizing but actually an unbelievably evil form of continuous torture.”

It’s like that awful “Good AI” stuff applied to children and parents.

Second, acting on principles requires creativity, so controlling your child by some device is incompatible with him acting on principles.

Yeah xlnt point.

Third, murdering people and taking their stuff isn’t a good idea.

Lol that we need to explain this to fancy pants libertarian philosophers.

Even if you don’t get caught, you always have to look over your shoulder and lie about stuff to cover up your involvement. And you also lose the opportunity to cooperate with the people who made the fortune. To make lots of money they must have good ideas you can learn from, since if you can’t earn that much yourself, you have to steal it. And even if they sucked (say they inherited the money and were wasting it out of stupidity), you would still be better off not killing them because they could in principle improve. Also, if they suck you could try advising them on ways they suck and get income from them that way.

Overall, Nozick’s essay is kinda dumb. In a lot of the sections he misunderstands Rand and makes up stuff he thinks she should have said and criticising that. But at least some of what he said was answered in Rand’s work and he ignored the answers, e.g. – Rand’s essay on alleged conflicts of interest.

Yeah. Doesn’t know about conflicts of interest, doesn’t really understand stuff like happiness and sacrifice, and seems to know less about morality than some conventional people. Pretty crap essay overall.

-JM

About conjecturesandrefutations
My name is Alan Forrester. I am interested in science and philosophy: especially David Deutsch, Ayn Rand, Karl Popper and William Godwin.

