Notes on “Superintelligence” by Nick Bostrom

Some notes on “Superintelligence” by Nick Bostrom, which is a bad book.

Summary: Bostrom sez that we might make superintelligences that are better than us. He doesn’t realise that saying there could be a qualitatively different kind of intelligence means that science and critical discussion are not universal methods of finding truth. If that’s true, then his whole discussion is pointless since it uses tools he claims are trash: critical discussion and science. Superintelligences might have motivations very different from ours and make us all into paperclips, or use us to construct defences for themselves or something. He doesn’t seem to have any understanding at all of critical discussion or moral philosophy or how they might help us cooperate with AIs. Superintelligences might make us all unemployed by being super productive, he sez. Or we might waste all the resources the superintelligences give us. He doesn’t discuss or refer to economics. It’s as if he doesn’t realise there are institutions for dealing with resources. And he also doesn’t seem to understand that more stuff increases economic opportunity, so if AIs make lots of cheap stuff people will have more opportunities to be productive. His proposed solution to these alleged problems is government control of science and technology. Scientists and AIs would be slaves of the govt.

I go through the book chapter by chapter, summarising and criticising.

Chapter 1 Bostrom sez vague stuff about the singularity. This is a prophecy of accelerating progress in something or other. Prophecy is impossible because what will happen in the future depends on what we do in the future. What we will do depends on what knowledge we will have in the future. And we can’t know what knowledge we will have in the future, or how we would act on it, without having that knowledge now. See The Beginning of Infinity by David Deutsch chapter 9 and The Poverty of Historicism by Karl Popper. Anyway, he gives an account of various technologies people have tried to use for AI. He eventually starts describing a Bayesian agent. The agent has a utility function and can update probabilities in that function. He sez nothing about how the function is created. He sez some stuff about AI programs people have written. He then starts quoting surveys of AI researchers (i.e. people who have failed to make AI) about when AI will be developed as if such surveys have some kind of significance.
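
To make the Bayesian agent idea concrete, here is a minimal sketch in Python of the kind of agent he describes: it holds probabilities over possible world states, picks whichever action has the highest expected utility, and updates its probabilities with Bayes’ rule when it observes something. The states, actions, utilities and likelihoods below are invented for illustration; Bostrom gives no such details, and in particular nothing here says where the utility numbers come from.

```python
# Minimal sketch of a Bayesian expected-utility agent (illustrative only).
# The states, actions, utility values and likelihoods are invented for this
# example; the book does not specify any of them.

# Prior probabilities over possible world states.
prior = {"rain": 0.1, "dry": 0.9}

# Utility of each action in each state (the "utility function").
# Nothing here explains how these numbers were chosen.
utility = {
    ("take_umbrella", "rain"): 5, ("take_umbrella", "dry"): 2,
    ("leave_umbrella", "rain"): -10, ("leave_umbrella", "dry"): 4,
}

def expected_utility(action, beliefs):
    """Expected utility of an action given the current probabilities."""
    return sum(p * utility[(action, state)] for state, p in beliefs.items())

def choose_action(beliefs):
    """Pick the action with the highest expected utility."""
    actions = {action for (action, _state) in utility}
    return max(actions, key=lambda a: expected_utility(a, beliefs))

def bayes_update(beliefs, likelihood):
    """Update state probabilities given P(observation | state)."""
    unnormalised = {s: beliefs[s] * likelihood[s] for s in beliefs}
    total = sum(unnormalised.values())
    return {s: p / total for s, p in unnormalised.items()}

# Observe dark clouds: much more likely if it is going to rain.
posterior = bayes_update(prior, {"rain": 0.9, "dry": 0.2})
print(choose_action(prior))      # leave_umbrella (before the observation)
print(choose_action(posterior))  # take_umbrella (after updating)
```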

Chapter 2 Bostrom sez an AI would have to be able to learn. He discusses various ways we might make AI without coming to a conclusion.

Chapter 3 Bostrom discusses ways a computer might be super intelligent. An AI might run on faster hardware than the brain. So it might think a lot of thoughts in the time it takes a human to think one thought. Thinking faster isn’t necessarily much use. People thought for thousands of years at the speed we think now without making much progress. He sez stuff about collective super intelligence: people can do smarter stuff by cooperating according to rules than they can individually. This is not very interesting since all the thinking is done by the people cooperating using those rules, so it’s not an extra level of thinking or intelligence. He sez a super intelligence might be qualitatively better than human intelligence. Qualitative super intelligence would imply that the scientific and rational worldview is false since it could understand stuff we couldn’t understand by rational and scientific methods. The stuff that can’t be understood by scientific and rational methods would interact directly or indirectly with all of the stuff we could understand by rational methods. We would not be able to understand those interactions or their results, so we couldn’t really understand anything properly.

Chapter 4 vague prophecy stuff about the rate at which super intelligence might develop.

Chapter 5 includes a lot more vague prophecy. He sez govts might want to control super intelligence projects if they look like they might succeed, and other stuff like that. He sez AIs might maximise utility without taking into account the ways govts restrain themselves from doing stuff that maximises utility. He writes about deontological side constraints: I think this means principles like “don’t murder people” but he doesn’t explain. He doesn’t explain how utility is measured or anything like that. He doesn’t explain how you can know an option has more utility for somebody without giving him a choice between that option and others. He sez AI might be less uncertain and so act more boldly, but he doesn’t explain any way of measuring uncertainty. He doesn’t explain epistemology, which is dumb since the book is supposed to be about agents who create knowledge. He sez an AI wouldn’t have problems of internal coordination like a group of people. This is dumb since people have lots of internal conflicts.
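
As a rough illustration of what a deontological side constraint might mean here (my reading, not Bostrom’s): instead of just picking whichever action has the highest utility, the agent first throws out any action that violates a fixed rule, however much utility it promises. The actions, numbers and the rule in this Python sketch are invented for the example.

```python
# Illustrative sketch: plain utility maximisation vs. maximisation subject
# to a deontological side constraint. All actions and numbers are invented.

actions = {
    "seize_resources": {"utility": 100, "violates_rule": True},
    "trade_for_resources": {"utility": 60, "violates_rule": False},
    "do_nothing": {"utility": 0, "violates_rule": False},
}

def best_action(candidates):
    """Return the candidate with the highest utility."""
    return max(candidates, key=lambda name: actions[name]["utility"])

# Pure utility maximiser: every action is on the table.
print(best_action(actions))  # seize_resources

# With a side constraint: rule-violating actions are ruled out before any
# utilities are compared, no matter how much utility they promise.
permitted = [name for name in actions if not actions[name]["violates_rule"]]
print(best_action(permitted))  # trade_for_resources
```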

Chapter 6 Bostrom sez our brains have a slightly increased set of capacities compared to other animals. He doesn’t realise that we’re qualitatively different from other animals. Humans can guess and criticise explanations, animals can’t. He sez a super intelligence might be able to do lots of stuff better than people and then take over the world. They might use nanotechnology or von Neumann probes or something. This is super vague and kinda dumb. If that sort of technology is available then it may be improved and made cheaper by capitalism till everyone can use it, so why wouldn’t everyone use it? So then the supposed super intelligence wouldn’t have a great advantage.

Chapter 7 Bostrom sez a super intelligence might have very different motivations than humans. It might want to maximise the production of paperclips. But we might design a super intelligence to have particular goals. Or it might be made by scanning a human brain or something so it has similar ideas to us. Or the super intelligence might do whatever is necessary to realise some particular goal, including making the whole Earth into defences to protect itself or something. He talks a lot about predicting the super intelligence’s behaviour. This is just prophecy, which is impossible. He also doesn’t mention objective moral standards or critical discussion as things that might help AIs and humans get along.

Chapter 8 Bostrom worries that an AI might act nice to lull us into a false sense of security before making us all into paperclips or whatever. Or the AI might try to do nice stuff by a bad means, like make us happy by putting electrodes in parts of the brain that produce pleasure. This is just more of the same crap as in chapter 7.

Chapter 9 Bostrom talks about controlling AIs so they won’t kill us or whatever. He considers limiting what the AI can do and dictating its motivations. He doesn’t consider critical discussion or moral explanations.

Chapter 10 discusses ways in which a super intelligence might be useful. It might be able to answer any question in a particular domain. As David Deutsch points out in The Fabric of Reality chapter 1, we already have an oracle that can tell us what will happen if we do thing X: it’s called the universe. This isn’t very useful. Being able to explain stuff is more important than prediction. Also, any particular oracle will be fallible and have some limitations. So again we’re back to Bostrom ignoring the importance of critical discussion and explanation. A super intelligence might also act as a genie, he sez. He sez we would have to design it so it would do what we intended rather than act in some way that formally does what we asked but is actually dumb. Again, critical discussions and explanations just don’t exist in Bostrom’s world.

Chapter 11 Bostrom talks about superintelligences making everyone unemployed. He doesn’t explain why people would be unemployed when cheap stuff made by AIs would open up more economic opportunities. He also sez AIs might produce lots of wealth that people would squander for some unexplained reason. He also sez that people might create lots of AIs on demand for particular kinds of work and then get rid of them when the work is done. And people might make AIs work very hard so they are unhappy. He sez this might be avoided by lots of treaties limiting what people can do. This is all kinda dumb. He’s just arbitrarily saying stuff might happen without thinking about it. Like if you create AIs that can create knowledge, you should be interested in their objections to some proposed course of action since they might point out a problem you didn’t notice.

Chapter 12 Bostrom discusses deciding what values AIs will have and how to impose them. He seems to think that values work by deciding on some goal and then pursuing it without any reconsideration. But even if an AI wanted to maximise the production of paperclips, it wouldn’t make people into paperclips. Rather, the AI would have to work out all of the best stuff we know about how to make stuff, such as critical discussion and free markets. See Elliot Temple’s essay on squirrels and morality for more discussion of this point.

Chapter 13 More of the same. He finally has a discussion of epistemology. He is assuming Bayesian epistemology is true since he writes about priors. But Bayesian epistemology is wrong. Ideas are either true or false so they can’t be assigned probabilities. And the only way to create knowledge is through guessing and criticism, as explained by Karl Popper, see Realism and the Aim of Science, Chapter I and The Beginning of Infinity by David Deutsch chapter 4. The acknowledgements to the book say he consulted David Deutsch.

Chapter 14 sez the govt should control science to make superintelligences serve the common good. So scientists and superintelligences should be slaves to the govt.

Chapter 15 More of the same sort of trash as chapter 14.

3 Responses to Notes on “Superintelligence” by Nick Bostrom

  1. Pingback: Roko’s Basilisk | Conjectures and Refutations

  2. Anon says:

    > An AI might run on faster hardware then the brain. So it might think a lot of thoughts in the time it takes a human to think one thought.

    If AI gets to a human level of knowledge creation, speed would be significant in progressing knowledge, would it not?
    Speed is not necessarily of much use by itself, but combined with good thinking methods, I believe it is. Like if it (an AI) could think and internally criticize the ideas on a high level and at a high rate. Is there a reason why this wouldn’t progress knowledge faster than what we are able to do now? It might not be better ideas than what humans are able to conjure per conjecture, but the ideas could be improved much faster due to speed of good criticism. Wouldn’t this be something like a “super intelligence”?

  3. Anon2 says:

    > It might not be better ideas than what humans are able to conjure per conjecture, but the ideas could be improved much faster due to speed of good criticism.

    Criticism doesn’t improve ideas — it only eliminates them. We can already criticize many ideas quickly via a library of criticism (ET’s idea; can’t quickly find a better link than: https://curi.us/2204-alisa-discussion#13205).

    From Elliot’s recent post — https://curi.us/2478-super-fast-super-ais

    > I saw a comment about fast AIs being super even though they aren’t fundamentally better at thinking than people – just the speed would be enough to make them super powerful. I don’t think the person has considered that 100 people have 100x the computing power of 1 person. So to a first approximation, a superfast 100x AI is as valuable (mentally not physically) as 100 people. If we get an AI that is a billion times faster at thinking, that would raise the overall intelligent computing power of our civilization by around 1/7th since there are around 7 billion people. So that wouldn’t really change the world.

    We already have massive parallelization of both conjectures and criticism. I don’t think it would be useful for an AI to think more thoughts if those thoughts are similar to the ones that already take place.
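
A back-of-the-envelope version of the arithmetic in the quoted passage, as a Python sketch. The population and speedup figures are the ones used in the quote, and the assumption (taken from the quote) is that one AI thinking N times faster counts as roughly N people’s worth of thinking:

```python
# Back-of-the-envelope version of the arithmetic in the quoted passage:
# treat one AI that thinks N times faster as roughly N people's worth of
# thinking, and compare that to the existing human population.

population = 7_000_000_000  # the ~7 billion figure used in the quote
speedup = 1_000_000_000     # "a billion times faster"

added_fraction = speedup / population
print(f"Relative increase in thinking capacity: {added_fraction:.2f}")
# prints ~0.14, i.e. roughly the "1/7th" mentioned in the quote
```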
