Why We Won't Build Sentient AI in Our Lifetime
Many well-known tech personalities and intellectuals are rather afraid of the rise of General AI. In this context, Artificial General Intelligence (AGI) means a machine that can perform any arbitrary intellectual task as well as a human can. This is opposed to Narrow AI, where technology achieves (super)human-level performance in one specific task: playing chess, driving a car, accountancy, surgery.
The commonly held fear is that if General AI is achieved, the machine will be able to figure out how to improve itself and quickly surpass human intelligence, becoming an artificial superintelligence (ASI). That is something we humans couldn't possibly control, and so we would end up as subordinates to our robot masters... or so the argument goes.
Are we actually building general AI?
No, I don't think we are. At least the research I am familiar with does not point in this direction. We are currently building Narrow AI, and I think we would need to do something radically different if we wanted to build sentient, truly General AI.
Some people argue that these Narrow AI technologies will somehow add up until sentient AI emerges. But this is misguided: even if we solved Narrow AI a million times over, that would not add up to General AI.
Every AI system built today has a goal prescribed by an engineer - an objective function - that determines the task that the system should be good at doing. It is highly unlikely that any such objective function would give rise to something we'd call General AI. Humans are unique in that they can set their own objectives, and decide what's a useful thing to do in any context. We're nowhere near understanding how that would happen in a machine.
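To make this concrete, here is a minimal, hypothetical sketch in PyTorch (not taken from any particular system) of what "a goal prescribed by an engineer" looks like in practice: the engineer writes down the objective function, and training does nothing but descend it.

```python
# Hypothetical toy example: the engineer prescribes the objective.
import torch
import torch.nn as nn

model = nn.Linear(784, 10)                 # a toy digit classifier
objective = nn.CrossEntropyLoss()          # the engineer-chosen objective function
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

# Random tensors standing in for a real batch of 32 images and labels.
x, y = torch.randn(32, 784), torch.randint(0, 10, (32,))

loss = objective(model(x), y)   # distance from the prescribed goal
loss.backward()                 # learning = minimising *this* objective...
optimizer.step()                # ...so the system only ever gets good at this one task
```

Nothing in this loop lets the system ask whether digit classification is a worthwhile goal; that choice was made, once and for all, by the engineer.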
What are all those guys building instead?
AI researchers at universities, Google, Facebook and Baidu are building generic building blocks for Narrow AI. They are ‘general’ inasmuch as you can use them in a large number of Narrow AI tasks. For example, once you have a good computer vision system that can understand and represent the 3D world around it, you can specialise it to an array of problems: augmented reality, pedestrian detection, object classification, visual information retrieval, face recognition, robot control. In machine learning, this specialisation across tasks is called transfer learning, something that today's deep neural networks are indeed very good at.
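As an illustration, here is a hedged sketch of transfer learning with PyTorch and torchvision (the two-class "pedestrian" head is an invented example, not a real benchmark): a pretrained backbone supplies the generic visual representation, and only a small task-specific head is trained.

```python
# Sketch: reuse a generic vision building block for a specific narrow task.
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(pretrained=True)  # generic visual features from ImageNet

# Freeze the general-purpose representation...
for param in backbone.parameters():
    param.requires_grad = False

# ...and specialise it, e.g. with a hypothetical two-class pedestrian head.
backbone.fc = nn.Linear(backbone.fc.in_features, 2)
# Only the new head gets trained; the building block itself is reused as-is.
```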
These building blocks for vision, speech and natural language are very powerful and transformative. They are a bit like microprocessors or PCs: general-purpose workhorses that people can specialise to perform various kinds of work. However, there is still a lot of work to do in computer vision, and even solving vision does not equate to solving intelligence.
Can we build general AI?
I don't know. Like Andrew Ng, I think asking this is akin to asking whether we will succeed in colonising planets outside the solar system. My answer is a vague maybe, but it's probably too early to ask.
The root of the problem is that we don't even know how to test whether we have succeeded. The Turing test is not really a test of General AI; it is a Narrow AI task, just a pretty hard one. You can think of it as a partially observed control problem with a well-defined goal: manipulate people into thinking you are a human.
If simple mobile games like Candy Crush can cause dopamine addiction, why couldn't a chat robot make you fall in love with it?
Machine learning can figure complex things out if it has enough data and if you can tell it what the goal or cost function is. But it is not magic; it's just a new paradigm of engineering.
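To show how well-defined the Turing test's goal really is, here is a toy, entirely hypothetical sketch that frames it as a cost function: the reward is simply how often judges are fooled (the stub judge below just guesses at random).

```python
import random

def turing_reward(verdict: str) -> float:
    """The well-defined goal: +1 whenever a judge believes the bot is human."""
    return 1.0 if verdict == "human" else 0.0

class StubJudge:
    """Placeholder for a human interrogator; a real judge would converse."""
    def verdict(self, transcript: str) -> str:
        return random.choice(["human", "machine"])

def evaluate(transcript: str, judges) -> float:
    """Average reward over a panel of judges: what a chatbot would optimise."""
    return sum(turing_reward(j.verdict(transcript)) for j in judges) / len(judges)

print(evaluate("I'm definitely a person!", [StubJudge() for _ in range(100)]))
```

Once the goal can be written down like this, it is optimisable, and that is exactly what makes it a Narrow AI task rather than a test of general intelligence.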
Should we aim to build general AI?
That's an interesting question. DeepMind cofounder Demis Hassabis’ motto is "solve intelligence and then use it to help society solve everything else". Let's forget for a minute that we don't know what solving intelligence means and look at this statement from the perspective of how AI is going to create value for society.
Good machine learning talent is in short supply today. So if you are an AI researcher or entrepreneur, how are you going to create the most value? Do you focus on creating AI technologies that solve specific narrow problems really well (surgery, education, transport, etc.), or work towards the super-goal of solving general AI? The former is likely to see results sooner, and could still generate solutions that transfer to other areas and industries.
At the same time, Narrow AI deployments require a lot of traditional engineering to work alongside general-purpose machine learning tools, and will do so for a long time. Even though the vision and control algorithms can be adapted to a new context, the reality is that they are still very specialised and fragile. To apply them to a hard task, you will need an AI engineer to specify the overall goal of the system and to encode problem-specific constraints and rules. Surgery, for example, is a very complex task, and it is impractical to expect an end-to-end machine learning system to learn to perform operations by trial and error the way it might learn to play Atari games.
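Here is a hypothetical sketch of what that division of labour looks like (every name and threshold below is invented for illustration): the perception model is learned, but the overall goal and the safety rules around it are written by an engineer.

```python
def pedestrian_probability(camera_frame) -> float:
    """Placeholder for a learned vision model, the reusable building block."""
    return 0.97   # stub standing in for a real network's output

def plan_action(camera_frame, speed_mps: float) -> str:
    """Hand-written control logic wrapping the learned component."""
    if pedestrian_probability(camera_frame) > 0.5:   # engineer-chosen threshold
        return "emergency_brake"
    if speed_mps > 13.9:                             # hard-coded limit (~50 km/h)
        return "slow_down"
    return "continue"

print(plan_action(camera_frame=None, speed_mps=16.0))
```

The learned part can be retrained or swapped out, but the goal of the system and its constraints live in ordinary, hand-written code.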