Solving Intelligence vs Hastening the Arrival of the Next AI Winter
I have always struggled a bit to grasp DeepMind's motto and mission statement: to solve intelligence. Well, that is in fact only step 1; step 2 is using it to solve everything else. What does solving intelligence mean? Do I have to believe in it? And more importantly: how do I use this mission statement to guide my choice of the next topic to work on when I have just finished some piece of work?
I recently had a conversation with a friend about this very topic, and it suddenly occurred to me that there is a simple reformulation of the 'solve intelligence' mission statement that I can actually identify with:
Accelerate our Progress towards the Next AI Winter
At the end of this machine/deep learning hype cycle, one of two scenarios could occur:
- winter scenario: we have exploited the current state of AI/ML to its limits, discovered the boundaries of the tasks we can easily and feasibly solve with this new technology, and agreed that human-level general intelligence is not within those boundaries. As a result, we have characterised what makes humans intelligent a bit better and developed very powerful and valuable technologies along the way, Nature has published a couple more of DeepMind's papers, but research-wise an AI winter is likely to set in. AI will no doubt continue to be useful for industry, but some of the research community will scale back and search for the next breakthrough idea or component.
- holy shit scenario (also known as the Eureka moment): we really do solve AI in a way that is clearly and generally recognised as artificial intelligence, and some form of singularity happens. Intelligence is a case of 'I recognise it when I see it', so it's hard to predict what shape this scenario will take.
Now the thing is, we can describe the winter scenario much more accurately than the holy shit scenario. To get there we need to prove - not in the mathematical sense - that the current technologies have reached their limits, bar some predictable, incremental progress.
Formulating our goal this way also gives clearer prescriptions for what we should work on next: we need to find and characterise a few situations that human intelligence can solve but which appear to be clearly beyond the reach of machine learning-based systems.
This indirect approach can be just as productive as focussing on 'solving AI'. After all, one of the two scenarios will happen, so if we are effective at speeding up progress towards one of them, we are probably also accelerating our progress towards the other one, should that be the one that ends up happening.
It's like proof by contradiction, not necessarily pessimism
This was not meant as a negative comment on DeepMind or anyone else who tries to solve intelligence. On the contrary, it is a reformulation of DeepMind's goal in a way I can actually relate to and start thinking about. Whether or not you believe we will ultimately solve intelligence, I think it is a lot easier to reason about how and why an AI winter would happen than to characterise how general AI suddenly emerges. It's like using an indirect proof: you don't actually have to believe the negative premise.
In conclusion: as a machine learning researcher, maybe you should be focussed on driving yourself (and all your colleagues) out of your intellectually stimulating jobs as quickly as you can. Let the next winter come sooner rather than later!