
Visual Learning

Overview

Visual learning has enormous potential to solve previously impossible problems in machine perception. The recent deep learning breakthrough in the machine learning community has allowed researchers not only to address new visual learning problems but also to solve old ones. The success of deep learning is generally attributed to the vast computational resources now available and to large annotated datasets containing millions of images. In spite of the excitement generated by these developments, there is still a limited understanding of how deep learning works, which raises questions about the convergence, stability and robustness of such models. This program addresses important challenges in deep learning, including effective transfer learning, the role of probabilistic graphical models in deep learning, and efficient training and inference algorithms. Answering these questions will allow us to design and implement robust visual learning systems that help our robots fully understand the environment around them.
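To make the transfer-learning challenge above concrete, here is a minimal, illustrative sketch (not Centre code) of the standard fine-tuning recipe: a network pretrained on ImageNet is reused as a frozen feature extractor and only a new task-specific head is trained. The class count, hyperparameters and use of ResNet-18 are placeholder assumptions, and a recent torchvision is assumed.

```python
# Minimal transfer-learning sketch (illustrative only, not Centre code):
# reuse an ImageNet-pretrained backbone and fine-tune a new head
# for a hypothetical robot-specific classification task.
import torch
import torch.nn as nn
import torchvision.models as models

NUM_TASK_CLASSES = 10  # hypothetical number of robot-task classes

# Pretrained feature extractor (assumes torchvision >= 0.13).
backbone = models.resnet18(weights="IMAGENET1K_V1")

# Freeze the pretrained features ...
for p in backbone.parameters():
    p.requires_grad = False

# ... and replace the final layer with a trainable, task-specific head.
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_TASK_CLASSES)

optimizer = torch.optim.SGD(backbone.fc.parameters(), lr=1e-2, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One fine-tuning step on a batch from the target (robot) domain."""
    logits = backbone(images)
    loss = criterion(logits, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Only the new head is updated here; unfreezing later layers of the backbone is a common variation when more target-domain data is available.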

People


Gustavo Carneiro
Vijay Kumar
Chunhua Shen
Ian Reid (Deputy Director; Semantic Representations Program Leader; University of Adelaide Node Leader and Chief Investigator)
Basura Fernando
Jian “Edison” Guo
Yan Zuo
Zhibin Liao
Rafael Felix
Benjamin Meyer
Bohan Zhuang
Ben Harwood
Tong Shen

Projects


VL1: Fundamental Deep Learning


Ongoing

Vijay Kumar, Gustavo Carneiro, Chunhua Shen, Ian Reid, Basura Fernando, Jian “Edison” Guo, Ben Harwood, Yan Zuo, Zhibin Liao, Rafael Felix

It is essential that the Centre remain active at the forefront of current machine learning techniques. This project aims to explore, develop and exploit novel network architectures; to develop detection and instance-level/pixel-level annotation for thousands of classes and for open sets of classes; to develop learning models that are efficient, weakly supervised, online-trained, unsupervised and/or zero-shot; and to pursue active learning with and from temporal data.

vijay.kumar@roboticvision.org
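As a hedged illustration of one VL1 theme, zero-shot recognition, the sketch below maps image features and per-class semantic/attribute vectors into a shared space, so that classes unseen during training can still be recognised by nearest-neighbour matching. All names, dimensions and the linear compatibility model are assumptions for illustration, not Centre code.

```python
# Minimal zero-shot recognition sketch (illustrative only, not Centre code).
import torch
import torch.nn as nn
import torch.nn.functional as F

FEAT_DIM, SEM_DIM = 512, 300   # hypothetical image-feature and attribute sizes

class CompatibilityModel(nn.Module):
    """Learns a linear map from image features to the semantic space."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(FEAT_DIM, SEM_DIM)

    def scores(self, image_feats, class_embeddings):
        # Cosine similarity between projected images and class embeddings.
        z = F.normalize(self.proj(image_feats), dim=-1)
        c = F.normalize(class_embeddings, dim=-1)
        return z @ c.t()                      # (batch, num_classes)

model = CompatibilityModel()

def predict(image_feats, unseen_class_embeddings):
    """Assign each image to the nearest unseen class in semantic space."""
    with torch.no_grad():
        return model.scores(image_feats, unseen_class_embeddings).argmax(dim=1)
```

At test time the class-embedding matrix is simply swapped for the unseen classes, which is what allows recognition without any labelled images of those classes.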

VL2: Learning for Robotic Vision


Ongoing

Chunhua Shen, Gustavo Carneiro, Vijay Kumar, Ian Reid, Benjamin Meyer, Bohan Zhuang, Ben Harwood, Tong Shen

This project focuses on learning that is specific to robotic vision tasks with resource constraints (embedded vision systems). Topics include: video segmentation (image segmentation for video, applied to static scenes with a moving camera and to general scenes with unknown motion); deep learning suitable for deployment on storage- and power-constrained embedded systems (e.g. COTSbot); fast, approximate and asymmetrically computed inference; robust inference (via understanding failure modes); unsupervised learning; online and lifelong learning for robotic vision; and “any-time” algorithms.

chunhua.shen@roboticvision.org
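One VL2 theme, “any-time” inference under resource constraints, can be sketched with an early-exit network: a cheap auxiliary classifier lets a robot stop computation as soon as its prediction is confident enough. The architecture, confidence threshold and class count below are illustrative assumptions, not Centre code.

```python
# Minimal "any-time" / early-exit inference sketch (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10               # hypothetical task size

class EarlyExitNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stage1 = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(8))
        self.exit1 = nn.Linear(16 * 8 * 8, NUM_CLASSES)   # cheap early head
        self.stage2 = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                                    nn.AdaptiveAvgPool2d(4))
        self.exit2 = nn.Linear(32 * 4 * 4, NUM_CLASSES)   # full-depth head

    def forward(self, x, confidence_threshold=0.9):
        # Decision shown for a single image at a time.
        h = self.stage1(x)
        p1 = F.softmax(self.exit1(h.flatten(1)), dim=1)
        if p1.max().item() >= confidence_threshold:       # confident: stop early
            return p1
        h = self.stage2(h)                                 # otherwise keep computing
        return F.softmax(self.exit2(h.flatten(1)), dim=1)
```

Lowering the threshold trades accuracy for latency and energy, which is the essential knob for an any-time algorithm on an embedded platform.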
