The goals misconception

In a tweet directed at David Deutsch, and some other people, a tweeter asks:

What would a decision-making agent base its decisions on if not entrenched goals?

I am going to take it that a goal means some specific fixed objective. A goal involves taking some specific kind of action. ‘Choose the best thing to do today’ is not a goal since it makes no specific claim about what you should do. ‘Go to see a film at a cinema today’ is a goal since it makes a specific claim about what you should do.

The answer to this question is that a rational person making a decision will not base his decision on anything. From a conventional point of view, this sounds ridiculous, but that conventional point of view is wrong.

Making a decision involves creating knowledge about what to do next. As such, before you can understand decision making you have to understand epistemology (the theory of knowledge) more generally. I’m not going to explain the whole of epistemology, but I will outline epistemology, explain how it is relevant and point you to where you can learn more.

Philosophers often say that knowledge is justified true belief. Justification is a process that allegedly shows an idea is good or true or something like that. People who believe in justification might hedge a bit and say it shows an idea is probably a good idea, or probably better than the alternatives. This sounds superficially like a reasonable position: who would want to act on an idea that hasn’t been shown to be true or good? The idea that you need to have a goal to make a decision assumes that it is possible and necessary to justify your decision.

The apparent reasonableness of this position is spoiled by the fact that justification is impossible and unnecessary. The correct alternative is to focus on solving problems, not on justifying your decisions. Saying you’re going to solve problems puts you in a different position from pursuing a goal. There is no fixed standard by which you judge every decision. Rather, you look for problems with your current options and try to solve those problems. If some goal you thought was good turns out to be problematic, you can and should discard or modify it.

Problems with justification

To understand the problems with justification, you have to understand something about how arguments work. Some arguments are informal and are not really candidates to prove anything. The fact that people are prepared to make informal arguments, and sometimes to take such arguments seriously, is a problem for the idea of justification, since such arguments aren’t justified. But even formal arguments don’t allow justification. Any formal argument starts with some assumptions and rules for getting conclusions from those assumptions. If the premises are true, and the rules reflect those that hold in reality, then the conclusion is true.

Here is an example of a formal argument. If I am in the House of Commons and the House of Commons is in London, then I am in London. I am in the House of Commons, so I am in London. The rule being used is that if place A is contained in place B and object C is in place A, then it is also in place B. The assumptions are that I am in the House of Commons and the House of Commons is in London. Now, to show that the conclusion is true you have two options:

  1. Say that the rules and the assumptions are correct by fiat.
  2. Show that the assumptions and rules are correct.

Option 1 has the problem that if you admit it, then anyone can claim to prove anything by declaring his assumptions true by fiat. You could claim the Earth was created 6,000 years ago by declaring the Bible true by fiat. You might object that we can’t take the Bible as true by fiat because it makes ridiculous claims about some guy turning water into wine and the like – but then you’ve adopted option 2. You might then say that you can see whether I’m in the House of Commons, but you can’t see a lot of the stuff in the Bible because it’s abstract. There are two problems with this. First, to properly understand whether you can see me in the House of Commons you have to understand the physics of eyes, and that involves abstractions. Second, the rule itself is an abstraction you can’t see. So if you reject anything you can’t see, you must reject the rule and the argument falls apart.

Option 2 leads to a different problem. If you’re going to show the rules and assumptions are correct you need to make another argument that justifies them. And that argument will have assumptions and rules that have to be justified. And then you have to make more arguments to justify the rules and assumptions of your new argument. And you have to keep repeating this process indefinitely, so you can never actually justify anything.

Saying that justification can make do with showing your conclusion is probably correct doesn’t solve this problem. Your conclusion is only probably correct if the assumptions and rules are probably correct, which leads to the same problem. In addition, there is no such thing as a theory that is probably correct: your ideas are either right or wrong. Probabilities apply to events, not to theories, and can only be obtained from theories, such as quantum mechanics, that are themselves either right or wrong. There are other problems with assigning probabilities to theories: see The Beginning of Infinity by David Deutsch, Chapter 13.

When a person thinks he has justified his ideas, in reality he has made assumptions and used rules that he has not justified. Those rules and assumptions could be wrong. Any viable epistemology has to take account of the fact that any idea you hold could be wrong. That includes ideas that you think are certainly correct. People often think an idea is obviously correct when it turns out to be flawed on closer inspection, such as the idea that justification is necessary and desirable.

The alternative to justification and goals

So if you don’t justify your ideas, including your decisions, how do you create knowledge rationally? You start with a problem. You guess solutions to the problem. You criticise the guessed solutions until only one is left and you don’t know of any criticisms of it despite looking for them. The surviving idea is the solution to that problem. You then move on to a new problem. This approach was first pointed out by Karl Popper and developed further by David Deutsch and Elliot Temple: it’s called critical rationalism.

For more details on critical rationalism, see Objective Knowledge Chapter 1 by Popper, Realism and the Aim of Science Chapter I by Popper, ‘On the Sources of Knowledge and of Ignorance’ in Conjectures and Refutations by Popper, Chapters 3 and 7 of The Fabric of Reality by David Deutsch, most of The Beginning of Infinity by David Deutsch and Critical Preferences and Strong Arguments by Elliot Temple.

So how do you apply this to making decisions? Your decision making has to start with a problem you’re trying to solve. You might be trying to decide what to have for breakfast. You then look for solutions to the problem. You could have cereal, or boiled eggs or whatever. Then you look for problems with the options. You might not have enough time to make and eat boiled eggs, so you pick cereal. So then you’ve solved the problem by picking cereal.

But your breakfast decision could go very differently. You might find that when you wake up you’re not hungry. So then you might think eating is pointless and you decide not to have breakfast at all. So you had a goal when you started the problem: the goal of having breakfast. And you ditched that goal because you had a criticism of it. If you had looked on having breakfast as a goal you must fulfil, you would have missed the option of not eating breakfast. So thinking of decision making in terms of goals is an obstacle to making rational decisions.

The misconception that you need to have goals is one of many misconceptions that can get in the way of making rational decisions. And you can’t expect to get rid of all your misconceptions without rational discussion, so you may want to read Fallible Ideas and contribute to the e-mail list linked on that page.

Paying for water

A tweeter asks David Deutsch about paying for water:

@DavidDeutschOxf Random Q: Us paying for water on a planet that is two-thirds water is, at this point, a lack of creativity or knowledge?

Some replies pointed out that you’re paying to have water without toxic stuff in it, or in convenient forms like bottled water. But the system of paying for water itself is an example of creativity and knowledge at work.

There are lots of choices about how to make and distribute water for drinking and other purposes.

Suppose that the only water around was salty. To get drinkable water you would have to remove the salt. This is often done by distilling the water: you make the water evaporate, collect the evaporated water and condense it so that it turns back into liquid. But there are lots of possible ways of distilling water.

In some places, such as the UK, lots of non-salty water falls out of the sky. So there is non-salty water around for people to use. But the UK could produce more drinking water by desalination.

And when people use water they often render it undrinkable and useless for other purposes, e.g. – they pee in water that is in a toilet. So if you want to use the water again it has to be treated.

And for some applications of water, you want water that is prepared in a more complicated way than drinking water. In chemistry experiments, people often want very pure water with no additives. But tap water often has chemicals in it, e.g. – fluorides. So tap water is no good for some chemistry experiments.

And water can be delivered to the consumer in lots of ways. You can get it in bottles or out of a tap. Or you can take tap water and put it into a machine that purifies the water.

So how do you make a choice among all those options? And how do people decide what delivery options to offer, what purity of water to offer and so on? Pricing is a way of helping people make such decisions.

You can exchange money for a very wide variety of goods. Anything that people are willing to offer in trade can be traded for money in most circumstances in advanced industrial societies. Money is a medium of exchange: it is a good you acquire so you can use it to acquire other goods. This means that you can deal with anyone who has a good you want. If there was no medium of exchange you would have to have some specific good on hand that your trading partner wanted. Without money, if Bob wants milk and Jim wants a chicken but all Bob has is corn, Bob would have to go trade the corn for a chicken before he could get the milk. Instead of doing this, Bob can just give Jim money and Jim can buy his own chicken. Money is a creative solution to the problem that it is difficult to arrange for the wants of trading partners to coincide without a medium of exchange.

If you’re choosing among different ways of getting water you can look at the cost of the different options to decide among them. If you’re running a chemistry experiment, you might decide that having pure water is worth the cost of buying a device for purifying tap water: you prefer the water purifier to the other stuff you could buy with the money you allocate to it. If you’re just making tea, you might decide the water purifier isn’t worth the cost: you prefer the other stuff you could buy with the money to the water purifier. If you’re going out cycling you might be willing to pay for bottled water so you can have it in a convenient container. But you might not pay for bottled water if you are at home.

And if people are trying to choose among different ways of supplying water, they can look at whether people are willing to pay enough to make it worthwhile. If non-salty water falls out of the sky in the UK, it might not be a good idea to build a desalination plant here: people aren’t willing to pay enough, since they prefer the other goods they can buy with the same money. In other places, rain water doesn’t provide what people need for drinking, farming and so on. So people will pay for desalination because they prefer more water to the other stuff they could get with that money.

Some books you could read to understand more about economics include Economics in One Lesson and Time Will Run Back, both by Henry Hazlitt. For a much longer and deeper explanation, see Capitalism by George Reisman.

Why should you learn physics?

In a comment, Elliot Temple asked questions about when and why people should learn physics:

what’s the point of learning about physics? who should learn about it and why? should everyone learn about physics? how should someone decide if they should learn some physics, and which physics, and how to learn it?

There are several reasons people might want to learn about physics.

(1) Learning physics involves figuring stuff out, which can be fun. It’s like trying to solve a very complicated puzzle. One difference from trying to solve a puzzle invented by a person is that for lots of physics problems nobody knows the answer. There are some puzzles invented by people for which nobody knows the answer, and there are computer games in which a program generates a puzzle. But even in cases like that, the rules for generating the puzzle are known and written down in the text of the program. The laws of physics are not known or written down in many cases.

(2) You can want to learn physics for technological reasons. The laws of physics rule out some ways of solving problems. For example, you can’t travel faster than light so technology that requires faster than light travel won’t work.

(3) You can want to learn some physics for philosophical reasons. There are philosophical disputes about stuff like whether it is possible to understand the world, and physics is relevant to those disputes. A person is a physical object, so a person can’t know X if learning X requires breaking the laws of physics. In The Beginning of Infinity, David Deutsch argues that all problems that are worth solving can be solved. This rules out some bad ideas people have about people being unable to understand how the world works because our brains evolved by natural selection only to solve some problems: see BoI Chapter 3, starting at about p. 53.

You can comment on the above or explain reasons I left out in the comments below.

The Right Kind of Light

The physicist Seth Lloyd said that “Almost anything becomes a quantum computer if you shine the right kind of light on it.”

This is related to computational gates. A computational gate is an operation that takes a fixed finite number of bits as input and gives a fixed finite number of bits as output after a fixed finite amount of time. The not gate takes one bit as input and changes its value from 1 to 0 or from 0 to 1. A controlled not gate takes two bits as input and flips the second bit if the first bit is 1 and leaves it alone otherwise, so it changes the bits as follows:

(0,0) → (0,0),
(0,1) → (0,1),
(1,0) → (1,1),
(1,1) → (1,0).
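The gate behaviour above can be sketched in a few lines of code. This is my own illustration (the function names are made up, not from the original quote):

```python
def not_gate(b):
    """NOT gate: flips a single bit (1 -> 0, 0 -> 1)."""
    return 1 - b

def cnot(control, target):
    """Controlled-NOT gate: flips the target bit exactly when the control bit is 1."""
    return (control, target ^ control)

# The four input pairs reproduce the table above.
for pair in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(pair, "->", cnot(*pair))
```

Running this prints the same four mappings listed above.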

Classical computers like the laptop I am using to write this post can do any classical computation. A classical computation takes bits with some definite set of values as input and changes them to produce some bits as output.

A quantum computer uses qubits – the closest quantum mechanical equivalent of a classical bit. A qubit need not have only a single value; it can have multiple values at the same time: the qubit exists in different versions that have different values. Those versions can undergo a process called interference that pushes them back into the same state in a way that depends on what happened to each of them while they were different. If you have a set of qubits you can prepare them in all of the possible values of the bits at the same time. You can then do computations on all of the possible states of the qubits and combine the results to get solutions to problems that would be solved far more slowly by a single computation on a single set of values.
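Interference can be illustrated numerically. Here is a small sketch of my own using the standard Hadamard gate (not named in the text above): it splits a qubit with a definite value into two versions at once, and applying it again makes those versions interfere back into a single definite value.

```python
import numpy as np

# A qubit state is a 2-component complex vector: amplitudes for the values 0 and 1.
zero = np.array([1, 0], dtype=complex)

# The Hadamard gate splits a definite value into two versions at once.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

split = H @ zero          # both values present, each with probability 1/2
recombined = H @ split    # interference pushes the versions back to the value 0

print(np.abs(split) ** 2)       # probabilities [0.5, 0.5]
print(np.abs(recombined) ** 2)  # probabilities [1, 0]
```

The second application doesn’t just reshuffle probabilities: the minus sign in the gate makes the two versions cancel in one component and reinforce in the other, which is exactly the interference described above.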

It is possible to construct a quantum computer that can do any computation another quantum computer can do – a universal quantum computer. A universal quantum computer would be able to simulate any finite physical system if you give it enough qubits and enough time.

And it is possible to do any computation the universal quantum computer could do by combining computational gates that act on qubits instead of bits. This might not sound too impressive, since you might need really huge gates to do big computations. But in reality you can do all possible computations to any accuracy you like by composing gates from a particular small set of gates. Any possible gate for a single qubit can be described by a set of three numbers, all in the range [0, 2π]. The set of single qubit gates and the controlled not gate form a universal set of gates.
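To give a concrete sense of what “three numbers describe any single-qubit gate” means, here is one standard parametrisation (up to an overall phase, which has no physical effect). This is my own sketch; the names and angles are illustrative:

```python
import numpy as np

def single_qubit_gate(theta, phi, lam):
    """A general single-qubit gate parametrised by three angles, up to an overall phase."""
    return np.array([
        [np.cos(theta / 2), -np.exp(1j * lam) * np.sin(theta / 2)],
        [np.exp(1j * phi) * np.sin(theta / 2),
         np.exp(1j * (phi + lam)) * np.cos(theta / 2)],
    ])

# The controlled-not gate acting on two qubits (a 4-dimensional state space).
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

# Every quantum gate must be unitary: applying the gate and then its
# conjugate transpose gives back the identity.
U = single_qubit_gate(0.3, 1.1, 2.0)
print(np.allclose(U.conj().T @ U, np.eye(2)))     # True
print(np.allclose(CNOT.conj().T @ CNOT, np.eye(4)))  # True
```

Together, gates of the form `single_qubit_gate(...)` plus `CNOT` are a universal set in the sense described above: any quantum computation can be approximated to any accuracy by composing them.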

An atom can be isolated in various ways, e.g. – putting the atom in a specially chosen magnetic field. The atom’s outer electron can be moved between its lowest possible state and the next highest energy state by shining light of the right energy on it: the energy of each photon has to match the energy difference between the states. By shining the light at a controlled intensity for a controlled amount of time you can give the electron a controlled probability of moving from one level to another. You can also control the interference properties of the different versions of the electron. This allows you to do any single qubit gate on the electron by treating the energy level it is on as a qubit. You can also get atoms to interact by sending light signals between them, and in particular you can do a controlled not gate. So by shining the right kind of light on atoms you can make a universal quantum computer. And having two or more possible states for an electron is a common property of atoms. “Almost anything becomes a quantum computer if you shine the right kind of light on it.”

This sounds very complicated. You might think that all of the work and information is stored in the apparatus for manipulating the qubits. This is the wrong way of looking at the issue. That equipment is needed to set up the atoms to do a computation, but it won’t do any computation without the atoms. A large part of your ordinary desktop or laptop computer is not doing computation either. Some of the equipment provides ways to put information into the computer, e.g. – the keyboard. Other parts supply power to the computer or cool parts that get hot. But without the chips that do the computation, this equipment can’t do much for you. The same is true for the quantum computer. You can shift some of the storage of information out of the qubits into the surrounding apparatus, but you can’t do any quantum computation without the atoms.

Bad Spectator article saying Brexit is better than Trump

The Spectator published a bad editorial called Why Trump’s victory isn’t like Brexit. The article claims that:

[Brexit] was an argument about encouraging more trade, lowering tariffs, restoring sovereignty, reducing net immigration — all ideas which voters proved very capable of understanding.

The author continues:

Donald Trump has no similar agenda. He offers emotion, but not much beyond that. He dislikes trade, and global capitalism in general. His immigration policy has amounted to a bizarre threat to ban Muslims from entering the country and build a wall between the United States and Mexico. At any other time, these policies would have disqualified him from the office — but this year Americans were not looking for solutions. Trumpism was about stopping Hillary Clinton from becoming president and sticking two fingers up to the machine. And beyond that, it is not about very much.

Trump has a website full of policies. The Spectator doesn’t mention these let alone criticise them. This is very bad journalism and very bad writing. Trump’s presidential campaign website is on the first page of Google hits when you search “Donald Trump”. If you read the website then you find he has substantive policies on a lot of issues.

The immigration part of Trump’s platform includes stuff like deporting criminal illegal aliens, detaining anybody caught entering illegally until they can be deported, reforming legal immigration to serve American interests and lots of other stuff including building a wall on the Mexican border. The website also lists problems that these changes are supposed to address.

The healthcare part of his platform is also substantive. He wants to repeal Obamacare and replace it with health savings accounts. He wants to allow competition between insurers across state lines. Again, the site lists problems that these changes will address.

The trade part of Trump’s platform is also substantive. It lists policies and the problems that Trump thinks they will solve. A direct quote:

Use every lawful presidential power to remedy trade disputes if China does not stop its illegal activities, including its theft of American trade secrets – including the application of tariffs consistent with Section 201 and 301 of the Trade Act of 1974 and Section 232 of the Trade Expansion Act of 1962.

This looks like a policy he has thought about. There are lots of people who object to free trade and lots of economists who can’t answer their objections. For an example, see Vox Day’s discussion with such an economist. Whether Vox Day is right or wrong, in the light of a performance like this by an economist it is not surprising that a lot of people don’t agree with free trade.

Trump has also proposed a tax plan, proposed to repeal anti-fossil-fuel policies and proposed many other policies.

To the extent that Trump is wrong, the Spectator’s editorial won’t convince anybody to reject his bad policies because it doesn’t explain any substantive points of disagreement. The article doesn’t even refer to another article or a book with arguments against Trump’s policies. Whoever wrote this article needs to learn how to argue.

Notes on “Superintelligence” by Nick Bostrom

Some notes on “Superintelligence” by Nick Bostrom, which is a bad book.

Summary: Bostrom sez that we might make superintelligences that are better than us. He doesn’t realise that saying there could be a qualitatively different kind of intelligence means that science and critical discussion are not universal methods of finding truth. If that’s true, then his whole discussion is pointless, since it uses tools he claims are trash: critical discussion and science. Superintelligences might have motivations very different from ours and make us all into paperclips, or use us to construct defences for themselves or something. He doesn’t seem to have any understanding at all of critical discussion or moral philosophy or how they might help us cooperate with AIs. Superintelligences might make us all unemployed by being super productive, he sez. Or we might waste all the resources the superintelligences give us. He doesn’t discuss or refer to economics. It’s as if he doesn’t realise there are institutions for dealing with resources. And he also doesn’t seem to understand that more stuff increases economic opportunity, so if AIs make lots of cheap stuff people will have more opportunities to be productive. His proposed solution to these alleged problems is government control of science and technology. Scientists and AIs would be slaves of the govt.

I go through the book chapter by chapter, summarising and criticising.

Chapter 1 Bostrom sez vague stuff about the singularity. This is a prophecy of accelerating progress in something or other. Prophecy is impossible because what will happen in the future depends on what we do in the future. What we will do depends on what knowledge we will have in the future. And we can’t know what knowledge we will have in the future, or how we would act on it, without having the knowledge now. See The Beginning of Infinity by David Deutsch, Chapter 9, and The Poverty of Historicism by Karl Popper. Anyway, he gives an account of various technologies people have tried to use for AI. He eventually starts describing a Bayesian agent. The agent has a utility function and can update probabilities in that function. He sez nothing about how the function is created. He sez some stuff about AI programs people have written. He then starts quoting surveys of AI researchers (i.e. – people who have failed to make AI) about when AI will be developed, as if such surveys have some kind of significance.

Chapter 2 Bostrom sez an AI would have to be able to learn. He discusses various ways we might make AI without coming to a conclusion.

Chapter 3 Bostrom discusses ways a computer might be super intelligent. An AI might run on faster hardware than the brain. So it might think a lot of thoughts in the time it takes a human to think one thought. But thinking faster isn’t necessarily much use: people thought for thousands of years at the speed we think now without making much progress. He sez stuff about collective super intelligence: people can do smarter stuff by cooperating according to rules than they can individually. This is not very interesting, since all the thinking is done by the people cooperating using those rules, so it’s not an extra level of thinking or intelligence. He sez a super intelligence might be qualitatively better than human intelligence. Qualitative super intelligence would imply that the scientific and rational worldview is false, since such an intelligence could understand stuff we couldn’t understand by rational and scientific methods. The stuff that can’t be understood by scientific and rational methods would interact directly or indirectly with all of the stuff we could understand by rational methods. We would not be able to understand those interactions or their results, so we couldn’t really understand anything properly.

Chapter 4 vague prophecy stuff about the rate at which super intelligence might develop.

Chapter 5 includes a lot more vague prophecy. He sez govts might want to control super intelligence projects if they look like they might succeed, and other stuff like that. He sez AIs might maximise utility without taking into account the ways govts restrain themselves from doing stuff that maximises utility. He writes about deontological side constraints: I think this means principles like “don’t murder people” but he doesn’t explain. He doesn’t explain how utility is measured or anything like that. He doesn’t explain how you can know an option has more utility for somebody without giving him a choice between that option and others. He sez AI might be less uncertain and so act more boldly, but he doesn’t explain any way of measuring uncertainty. He doesn’t explain epistemology, which is dumb since the book is supposed to be about agents who create knowledge. He sez an AI wouldn’t have problems of internal coordination like a group of people. This is dumb since individual people have lots of internal conflicts.

Chapter 6 Bostrom sez our brains have a slightly increased set of capacities compared to other animals. He doesn’t realise that we’re qualitatively different from other animals: humans can guess and criticise explanations, animals can’t. He sez a super intelligence might be able to do lots of stuff better than people and then take over the world. It might use nanotechnology or von Neumann probes or something. This is super vague and kinda dumb. If that sort of technology is available then it may be improved and made cheaper by capitalism till everyone can use it, so why wouldn’t everyone use it? Then the supposed super intelligence wouldn’t have a great advantage.

Chapter 7 Bostrom sez a super intelligence might have very different motivations than humans. It might want to maximise the production of paperclips. But we might design a super intelligence to have particular goals. Or it might be made by scanning a human brain or something so it has similar ideas to us. Or the super intelligence might do whatever is necessary to realise some particular goal, including making the whole Earth into defences to protect itself or something. He talks a lot about predicting the super intelligence’s behaviour. This is just prophecy, which is impossible. He also doesn’t mention objective moral standards or critical discussion as things that might help AIs and humans get along.

Chapter 8 Bostrom worries that an AI might act nice to lull us into a false sense of security before making us all into paperclips or whatever. Or the AI might try to do nice stuff by a bad means, like make you happy by putting electrodes in parts of the brain that produce pleasure. This is just more of the same crap as in chapter 7.

Chapter 9 Bostrom talks about controlling AIs so they won’t kill us or whatever. He considers limiting what the AI can do and dictating its motivations. He doesn’t consider critical discussion or moral explanations.

Chapter 10 discusses ways in which a super intelligence might be useful. It might be able to answer any question in a particular domain. As David Deutsch points out in The Fabric of Reality chapter 1, we already have an oracle that can tell us what will happen if we do thing X: it’s called the universe. This isn’t very useful. Being able to explain stuff is more important than prediction. Also, any particular oracle will be fallible and have some limitations. So again we’re back to Bostrom ignoring the importance of critical discussion and explanation. A super intelligence might also act as a genie, he sez. He sez we would have to design it so it would do what we intended rather than act in some way that formally does what we asked but is actually dumb. Again, critical discussions and explanations just don’t exist in Bostrom’s world.

Chapter 11 Bostrom talks about superintelligences making everyone unemployed. He doesn’t explain why people would be unemployed when cheap stuff made by AIs would open up more economic opportunities. He also sez AIs might produce lots of wealth that people would squander for some unexplained reason. He also sez that people might create lots of AIs on demand for particular kinds of work and then get rid of them when the work is done. And people might make AIs work very hard so they are unhappy. He sez this might be avoided by lots of treaties limiting what people can do. This is all kinda dumb. He’s just arbitrarily saying stuff might happen without thinking about it. If you create AIs that can create knowledge, you should be interested in their objections to some proposed course of action, since they might point out a problem you didn’t notice.

Chapter 12 Bostrom discusses deciding what values AIs will have and how to impose them. He seems to think that values work by deciding on some goal and then pursuing it without any reconsideration. But even if an AI wanted to maximise the production of paperclips, it wouldn’t make people into paperclips. Rather, the AI would have to work out all of the best stuff we know about how to make stuff, such as critical discussion and free markets. See Elliot Temple’s essay on squirrels and morality for more discussion of this point.

Chapter 13 More of the same. He finally has a discussion of epistemology. He is assuming Bayesian epistemology is true since he writes about priors. But Bayesian epistemology is wrong. Ideas are either true or false so they can’t be assigned probabilities. And the only way to create knowledge is through guessing and criticism, as explained by Karl Popper, see Realism and the Aim of Science, Chapter I and The Beginning of Infinity by David Deutsch chapter 4. The acknowledgements to the book say he consulted David Deutsch.

Chapter 14 sez the govt should control science to make superintelligences serve the common good. So scientists and superintelligences should be slaves to the govt.

Chapter 15 More of the same sort of trash as chapter 14.

Why are atoms stable in quantum mechanics?

In a previous post I explained why atoms are unstable in classical physics. This post is about why atoms are stable in quantum mechanics.

Summary Atoms in quantum mechanics don’t suffer from the same radiation problem as atoms in classical mechanics. A quantum system exists in many instances that can interfere with one another on a small scale. As a result, on an atomic scale an electron doesn’t have a trajectory and so it can’t be said to accelerate and it doesn’t radiate. In addition, when the probability of finding an electron is highly peaked at a particular location, quantum mechanics makes the instances spread out. The potential produced by the nucleus pulls the electron instances toward the nucleus. Atoms can be stable because the spreading out produced by quantum mechanics and the attraction produced by the potential balance out.

In classical mechanics, an electron’s orbit around the nucleus is unstable because the electron emits the energy it would need to stay in orbit as light. And the electron does this because it is accelerating. To be able to say the electron is accelerating, it has to have a trajectory – a line it travels along. Then if the line changes direction or the electron speeds up along the line you can say it is accelerating. In quantum mechanics, systems sometimes don’t have trajectories.

Absence of microscopic trajectories in quantum mechanics

In quantum mechanics, particles are described very differently from how they are described in classical mechanics. Particles are more complicated than they look. Each particle exists as multiple instances. These instances are copies in the sense that they all obey the same rules. They are instances of a specific particle in the sense that they only interact with other instances of that particle. Sometimes two instances of a particle are different: they have different locations or different momenta or different values of some other measurable quantity. Sometimes these instances are all fungible – there is literally no detectable physical difference between them. Two instances of the same particle can become different and then become fungible again in a way that depends on what happened to the different versions of the particle: this process is called quantum interference.

Now suppose you have an electron in empty space near some point Pstart. Consider a point Pfinal that some instances of the electron will reach later. How do those instances get there? First instances of the electron spread out from Pstart in all directions. Some instances go to points intermediate between Pstart and Pfinal: P1 and P2. Then some instances of the electron spread out from P1 and P2 in all directions. Some of those instances end up at Pfinal. Figure 1 shows this process with the little domes over the intermediate points indicating the instances moving in all possible directions. There is no explanation of how the electron moves that refers to just one trajectory. And none of the instances individually change direction either. At each point there is some instance coming in from any given direction and another instance leaving in the same direction. And all of the instances of the electron at a given point are fungible, so you can’t tell whether the one that left in a given direction came in from that direction or not. So there is no trajectory and no acceleration.

[image: electronpropagation]

Figure 1 Instances of the electron become different and then come back together.
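The spreading-and-recombining process in Figure 1 can be sketched as a toy sum over routes. This is only a minimal illustration, not a real simulation: the route lengths and the wavelength are made-up numbers, and each segment of a route just contributes a complex phase. The point is that the instances travelling different routes interfere when their amplitudes are added.

```python
import cmath

def segment_amplitude(length, wavelength=1.0):
    """Phase picked up along one straight segment of a route."""
    k = 2 * cmath.pi / wavelength
    return cmath.exp(1j * k * length)

def route_amplitude(lengths):
    """Amplitude for a route made of several segments."""
    amp = 1 + 0j
    for length in lengths:
        amp *= segment_amplitude(length)
    return amp

# Two routes from Pstart to Pfinal via the intermediate points P1 and P2
# (the segment lengths here are invented for the example).
route_via_P1 = route_amplitude([1.0, 1.0])   # total length 2.0
route_via_P2 = route_amplitude([1.3, 1.2])   # total length 2.5

total = route_via_P1 + route_via_P2
p_interfering = abs(total) ** 2                               # instances interfere
p_no_interference = abs(route_via_P1) ** 2 + abs(route_via_P2) ** 2

print(p_interfering, p_no_interference)
```

With these made-up numbers the two routes differ by half a wavelength, so the instances cancel at Pfinal even though each route on its own would deliver the electron there – there is no single trajectory the electron takes.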

Now to deal with some objections you might have.

You may be thinking that people can measure where things are and this seems incompatible with there being lots of instances of the electron in different places. Quantum mechanics deals with this problem in the following way. When you do a measurement, the instances of the electron are divided up into sets. When you see some particular outcome of the measurement, the result means something like ‘this electron is between 5mm and 7mm from the corner of your desk.’ There are multiple sets of instances of the electron that give different measurement results like ‘this electron is between 0mm and 5mm from the corner of your desk’ or whatever. When you do the measurement, your instances and the instances of the measuring instrument are also divided into sets. Each of those sets acts as a record of some particular measurement result. For example, if you are detecting the electron with an instrument with a dial, there is a set of instances for each distinguishable position of the dial.

Why don’t you see multiple instances of yourself interfering in everyday life? Multiple instances of you do interfere in everyday life. They just interfere on a very small scale because it is difficult to arrange interference on a large scale. The reason it is difficult to arrange interference on a large scale is that large differences between instances can be recorded by measuring instruments and other interactions, e.g. – air molecules and light bouncing off your body. That measurement process changes the recorded instances. The only way to undo the change so the instances can become fungible again is to undo the transfer of information about the differences. You would have to track down all the light and air molecules and so on and arrange to exactly undo their interaction with you. This cannot be done with current technology, so you don’t undergo large-scale quantum interference. As a result, the different instances of you don’t interfere with one another. The different instances of the objects you see around you don’t interfere with each other either. Rather, the instances form independent layers where each layer approximately obeys the laws of classical physics: parallel universes. For more explanation of quantum mechanics see The Fabric of Reality by David Deutsch, especially Chapter 2. For more on quantum mechanics and fungibility see The Beginning of Infinity by David Deutsch, Chapter 11, and my post on fungibility.

The electron can have something that looks a bit like a trajectory. The electron can have more instances in some places than in others. The number of instances at different positions can be represented by a curve, like this (Figure 2):

[image: electroncurve]

Figure 2 A graph of number of instances with distance along some line for an electron.

If you look at a section of the curve and find the area under the curve in that section, that tells you the probability of finding the electron in that region. In Figure 3, the red region has a larger area than the green region, so the probability of finding the electron between the two red lines is larger than the probability of finding it between the two green lines:

[image: electroncurveint]

Figure 3 A graph of the area under the curve in two different regions of the curve.
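The area-under-the-curve idea can be checked numerically. This is a sketch with an assumed example curve – a Gaussian-shaped distribution, which is not necessarily what a real electron’s curve looks like – and a simple trapezoid-rule integration:

```python
import math

def curve(x):
    """An assumed example curve, normalised so the total area is 1."""
    return math.exp(-x * x) / math.sqrt(math.pi)

def area(a, b, n=10000):
    """Approximate the area under the curve between a and b (trapezoid rule)."""
    h = (b - a) / n
    total = 0.5 * (curve(a) + curve(b)) + sum(curve(a + i * h) for i in range(1, n))
    return total * h

near_peak = area(-0.5, 0.5)   # a region around the peak
far_out = area(2.0, 3.0)      # a region out in the tail

print(near_peak, far_out)
```

The region around the peak encloses far more area than an equally wide region in the tail, so the electron is far more likely to be found near the peak.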

I said that there is a number of instances, but that quantity varies continuously, and the only way to know anything about it is by calculating or measuring probabilities.

If you look at a wide enough section of the curve, then the probability of finding the electron there will be close to 1. The curve changes continuously over time, so the peak can move to different places, and that can look a bit like a trajectory:

[image: electroncurvemotion]

Figure 4 The curve for the electron moves around, and so the region where there is a large probability of finding the electron moves around. This is the closest thing to a trajectory in quantum mechanics.

For electrons on a large enough scale, and for large objects like a person or car, the trajectory approximation is very accurate. Things move by lumps of high probability moving from one place to another. But the scale of a single atom is small enough that the trajectory approximation doesn’t work.

Stability of atoms

The absence of trajectories by itself doesn’t explain the stability of atoms. It just explains why the problem of radiating accelerating charges doesn’t occur. To understand why atoms are stable, let’s go back to the electron. For the next bit we have to know a little about how the number-of-instances curve changes over time. The simple version goes a bit like this:

the rate of change of the curve over time = -(curvature of the curve + the potential the electron is in).

The rate of change of the curve near a point is its slope. If the curve is very curvy, then the slope changes a lot. So the curvature is the rate of change of the slope – the rate of change of the rate of change of the curve. Figure 5 illustrates this: the lines near the curvy part show a large change of slope, and the lines in the less curvy part show a smaller change of slope.

[image: electroncurvecurvature]

Figure 5 The blue lines change gradient a lot over a small region, so that region has high curvature. The green lines don’t change gradient much and so the region with the green lines doesn’t have much curvature.

The rate of change of the curve over time = -curvature, so near a high peak the curvature is high and the curve gets flatter over time because it decreases at that point. Away from the peak the curvature is smaller and so the curve tends to get flatter more slowly over time. So the curvature term tends to flatten out the curve.
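The flattening effect of the curvature term can be seen in a crude numerical sketch. To keep things simple this uses a real-valued, diffusion-style update as a stand-in for the full complex quantum evolution, with no potential and with made-up grid spacing and time step, so it only illustrates the flattening tendency described above:

```python
import math

# Grid and time-step parameters (chosen so the update is numerically stable).
n, dx, dt, steps = 101, 0.1, 0.002, 200
xs = [(i - n // 2) * dx for i in range(n)]
psi = [math.exp(-4 * x * x) for x in xs]   # a peaked starting curve

peak_before = max(psi)
for _ in range(steps):
    new = psi[:]
    for i in range(1, n - 1):
        # Discrete second difference: the curvature of the curve at point i.
        curvature = (psi[i - 1] - 2 * psi[i] + psi[i + 1]) / dx**2
        new[i] = psi[i] + dt * curvature   # the curvature term flattens the peak
    psi = new
peak_after = max(psi)

print(peak_before, peak_after)
```

After the evolution the peak is lower and the curve is more spread out, which is the flattening described above.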

What about the potential? The potential is negative, as explained in the comments on the previous post. So the curve tends to get larger where the potential is strong: near the nucleus. The electron can be in a stable state that doesn’t change much over time if the flattening caused by the curvature term and the peaking caused by the potential match one another. In this interaction, the electron and proton are recording one another’s positions, so their instances are divided up in such a way that the electron and proton stick together.
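The balancing act can be illustrated with a standard textbook example. This sketch uses the harmonic oscillator ground state, psi(x) = exp(-x²/2) in units where the constants are 1, as a stand-in for an atomic state – it is not the hydrogen atom itself, but it shows the same balance: the curvature term and the potential term combine to give the same multiple of the curve everywhere, so the shape of the curve never changes.

```python
import math

def psi(x):
    """Harmonic oscillator ground state (units with all constants set to 1)."""
    return math.exp(-x * x / 2)

def balance_ratio(x, h=1e-4):
    """(-(1/2) * curvature + (1/2) * x^2 * psi) / psi, with curvature
    estimated by a finite difference."""
    curvature = (psi(x - h) - 2 * psi(x) + psi(x + h)) / h**2
    return (-0.5 * curvature + 0.5 * x * x * psi(x)) / psi(x)

# The ratio comes out the same at every point: the terms balance,
# so the curve keeps its shape over time.
ratios = [balance_ratio(x) for x in (-1.0, -0.3, 0.5, 1.2)]
print(ratios)
```

The ratio is the same constant at every point checked, which is what it means for the spreading and the attraction to balance out.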

That’s why atoms are stable in quantum physics.