What is Deep Learning and how does it work?
Imagine a world where Facebook automatically finds and tags friends in your photos; where Skype translates spoken conversations in real time; or where Google DeepMind’s AlphaGo computer program masters the ancient game of Go. It may not be as far off as you think.
Welcome to the world of machine learning and deep-neural networks, or more simply, ‘deep learning’.
Deep learning is a subset of machine learning in which computer algorithms learn and improve from data on their own. One family of such algorithms is the ‘neural network’. Inspired by the nerve cells (neurons) that make up the human brain, a neural network comprises layers of interconnected nodes, also termed ‘neurons’, with each layer feeding into the next; the more layers, the ‘deeper’ the network. These algorithms are the foundation of deep learning and play a central part in image recognition, and in robotic vision.
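The layered structure described above can be sketched in a few lines of NumPy. This is a minimal illustration, not a trained model: the weights are random placeholders, and the layer sizes are arbitrary choices for the example.

```python
import numpy as np

def relu(x):
    # Non-linear activation: each neuron responds only to positive input.
    return np.maximum(0.0, x)

def forward(x, layers):
    # Each layer is a (weights, biases) pair; the output of one layer
    # becomes the input to the next. Stacking more layers makes the
    # network "deeper".
    for W, b in layers:
        x = relu(W @ x + b)
    return x

# Toy network: 3 inputs -> 4 hidden neurons -> 2 outputs.
# These random weights are illustrative placeholders, not learned values.
rng = np.random.default_rng(0)
layers = [
    (rng.standard_normal((4, 3)), np.zeros(4)),
    (rng.standard_normal((2, 4)), np.zeros(2)),
]
output = forward(np.array([1.0, 0.5, -0.2]), layers)
print(output.shape)  # (2,)
```

In a real system, training consists of adjusting those weight matrices so the outputs match labelled examples, which is where datasets like ImageNet come in.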
Riddled with seemingly insurmountable problems, neural networks were largely ignored by machine learning researchers until a confluence of events thrust them back to the forefront of the field.
“What gave neural networks the biggest leg-up,” explains CI Ian Reid, “was the advent of a mammoth amount of labelled data.”
In 2007, a pair of computer scientists at Stanford University and Princeton University launched ImageNet, a database of millions of labelled images from the Internet. Today, ImageNet provides neural networks with about 10 million images across 1,000 different labels. This helped lay the foundation for neural networks to become a central tool of robot vision, and steady gains have been made since.
“A big step has been the emergence of convolutional neural networks,” Ian said.
“As with traditional neural networks, convolutional counterparts are made of layers of weighted neurons.”
These neurons are modelled not solely on the workings of the brain but also on the visual system itself.
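The core idea behind a convolutional layer can be shown with a hand-rolled sketch: a single small filter of weights slides across an image, reusing the same weights at every position. The filter below is a made-up vertical-edge detector, loosely analogous to the edge-sensitive cells found in the early visual system; it is an illustration, not any particular network's learned filter.

```python
import numpy as np

def conv2d(image, kernel):
    # Slide one small weighted filter over the image. The same weights
    # are reused at every position (weight sharing), which is what
    # distinguishes a convolutional layer from a fully connected one.
    kh, kw = kernel.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A simple vertical-edge filter (illustrative, not learned).
edge_kernel = np.array([[1.0, -1.0],
                        [1.0, -1.0]])

image = np.zeros((4, 4))
image[:, 2:] = 1.0  # left half dark, right half bright
response = conv2d(image, edge_kernel)
print(response)  # strongest response where the dark-bright boundary sits
```

In a trained convolutional network the filter weights are learned from data rather than written by hand, and many such filters are stacked in layers, each detecting progressively more complex patterns.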
As networks get deeper and researchers unravel the secrets of the human brain on which they are modelled, the networks will become ever more nuanced and sophisticated.
“As we learn more about the algorithms coded in the human brain and the tricks evolution has given us to help us understand images,” says Director Peter Corke, “we will be reverse-engineering the brain and borrowing these tricks.”