February 1, 2016
There is an idea doing the rounds that Artificial Intelligences will soon be created that are capable of doing all the jobs people can do, and that this will require introducing a universal basic income. Let’s call this the UBI scenario.
The story goes like this. AIs will be capable of doing all the jobs people can do. And all they will need is some computer hardware and electricity. And they won’t want food, they won’t want time off, they won’t have personal problems and so on. So they will be able to do every job better than people.
First, what’s currently called AI doesn’t have anything resembling human level creativity. The way this works at the moment is that somebody has to think about what information is relevant to judging how well a task is being done. The person also has to think about what sort of parameters characterise the task. And then the person has to train the program by running many versions of it and selecting the version that works. So replacing any given job will require years of work, specialised hardware and software, and a lot of time spent by a highly specialised programmer. The fact that job X has been rendered unnecessary will free people up to do other jobs. And it will take time for people to create knowledge about how to do those new jobs well before the process of replacing them can even get started.
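The workflow described above can be sketched in miniature. This is a toy Python illustration only: the "job" (flagging expensive orders for review), the feature, and the numbers are all invented for the example. The point is that a person, not the program, decides what information matters and what parameter characterises the task; the program merely tries many versions and keeps the best.

```python
import random

random.seed(0)

# Toy "job": flag orders over some spending level for manual review.
# A person has already decided which information is relevant (the order
# total) and which parameter characterises the task (a cutoff threshold).
labeled = [(total, total > 120)  # hand-labelled ground truth
           for total in (random.uniform(0, 200) for _ in range(200))]

def accuracy(cutoff, data):
    """Fraction of orders this candidate 'version' classifies correctly."""
    return sum((total > cutoff) == flag for total, flag in data) / len(data)

# "Training": run many versions of the program (different cutoffs)
# and select the version that works best on the labelled examples.
candidates = [c * 10 for c in range(21)]  # cutoffs 0, 10, ..., 200
best = max(candidates, key=lambda c: accuracy(c, labeled))

print(best, round(accuracy(best, labeled), 2))
```

Nothing creative happens inside the loop: the space of "versions" was fixed in advance by a human, and the selection criterion was too.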
Second, lots of jobs do require human level creativity, such as just about any customer service job. The customer service person has to be able to think about how to satisfy a particular customer on the fly. And each customer will want something slightly different. So there will be no way to replace that customer service job with a special purpose machine designed to do some specific kind of mechanical task.
If you are worried that AIs will have human level creativity, then don’t: that will not happen in the foreseeable future. Nobody has much idea how the creation of new explanatory knowledge works, except that it involves producing variations on current knowledge and selecting among those variations. And since we don’t have a full explanation of how creativity works, there is no way anyone can program it.
The UBI scenario is speculation about a technology that doesn’t exist. It also involves speculating about a situation in which there has been a vast increase in philosophical knowledge about how people create knowledge. Nobody can know the implications of such knowledge, because if you knew its implications, then you would already have that knowledge. The idea that having such knowledge will reduce people to the status of dependents suckling on the state’s teat reveals a bias on the part of those proposing the UBI scenario and nothing else.
To illustrate one way in which the UBI scenario might be wrong, consider the following story. I don’t say this is what will happen, but it is an alternative that illustrates that UBI scenario worries are pure speculation uncontrolled by criticism. In the future we understand how to create knowledge well enough to create AI. As a result, we learn how to make adults creative again after life has beaten them down, and everyone becomes extremely productive. Every person is able to support himself with no government assistance. At the same time, we learn how the brain implements creativity, and how to read a person’s brain in such a way that their mind can be implemented in a computer. People can then transfer their minds into computer hardware, and the cost of living drops to the cost of buying the relevant storage space and processing power in a server farm. Everyone can then afford to simulate a standard of living that makes everything Bill Gates has today look, by comparison, like the life of some drunken, lice-ridden peasant in the Middle Ages.
There are lots of serious problems with current institutions. For example, Western welfare states already encourage dependence on the government without AI. Academics are an example of this problem: they are dependent on the government for their income. Perhaps they should try to solve that problem instead of speculating about stuff they can’t know anything about.