The history of human intelligence, from the chimpanzee to, say, a string theorist, is a very short one in terms of the Earth’s lifespan. But just wait, says philosopher and author Nick Bostrom. The next chapter – artificial intelligence that reaches human levels of learning – will arrive faster than you might think, perhaps within the next 30 or 40 years.
At that point, Bostrom says, humans will no longer need to do the inventing, as machines will take over. And when that superintelligence is awakened, he says, we’d better hope it’s on our side.
To Bostrom, intelligence is power, and supreme intelligence – whether it comes from a human or a machine – will always win out. So while machine intelligence could potentially develop cures for diseases or create livable ecosystems in outer space, he says, its pursuit of optimization could also be driven by interests and goals that have no connection with what humans consider meaningful.
Bostrom calls for the work being done in machine intelligence to be accompanied, and even preceded, by training machines to learn what humans value, predict what we’d approve of and be motivated to act accordingly. Otherwise, he says, our future may be determined by the preferences of artificial intelligence, whatever those preferences may be.
Whether you’re enthusiastic or skeptical about developments in machine intelligence, Bostrom’s perspective is worth heeding. At the very least, we should question whether we can control our own creation without doing some serious work before it’s complete.