Learning

Robots that learn and improve

Overview


Visual Learning

Visual learning has enormous potential to solve previously intractable problems in machine perception. Recent breakthroughs in deep learning have allowed researchers not only to address new visual learning problems but also to solve long-standing ones. The success of deep learning is generally attributed to vast computational resources and large annotated datasets containing millions of images. Despite the excitement these developments have generated, how deep learning works is still poorly understood, which raises questions about the convergence, stability, and robustness of such models. This program addresses important open challenges in deep learning, such as effective transfer learning, the role of probabilistic graphical models in deep learning, and efficient training and inference algorithms. Answering these questions will allow us to design and implement robust visual learning systems that help our robots fully understand the environment around them.
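To make one of these challenges concrete, here is a minimal transfer-learning sketch in PyTorch: reuse a backbone pretrained on a large annotated dataset and fine-tune only a new task-specific head. The ResNet-18 backbone, the `num_classes` value, and the dummy batch are illustrative assumptions, not the program's actual method.

```python
# Minimal transfer-learning sketch (PyTorch): freeze an ImageNet-pretrained
# backbone and train only a small task-specific head. `num_classes` is a
# placeholder for a hypothetical downstream robotic-vision task.
import torch
import torch.nn as nn
from torchvision import models

num_classes = 10  # hypothetical downstream task

backbone = models.resnet18(pretrained=True)
for param in backbone.parameters():
    param.requires_grad = False  # freeze the pretrained features

# Replace the classifier with a freshly initialised, trainable head.
backbone.fc = nn.Linear(backbone.fc.in_features, num_classes)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, num_classes, (8,))
loss = criterion(backbone(images), labels)
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

Freezing the backbone keeps the optimisation small and data-efficient; with more labelled data, later backbone layers could be unfrozen and fine-tuned as well.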

People


Gustavo Carneiro
Chunhua Shen
Tom Drummond
Rafael Felix
Tong Shen
Vladimir Nekrasov
Ming Cai
Benjamin Meyer
Ben Harwood
Yan Zuo
Gil Avraham
Luis Guerra
Adrian Johnston

Projects


Learning


2018 onwards

Gustavo Carneiro, Chunhua Shen, Tom Drummond, Rafael Felix, Tong Shen, Vladimir Nekrasov, Ming Cai, Benjamin Meyer, Ben Harwood, Yan Zuo, Gil Avraham, Luis Guerra, Adrian Johnston

This program explores the enormous potential of visual learning to solve previously intractable problems in machine perception. Recent breakthroughs from the machine learning community have allowed researchers to address new visual learning problems as well as to solve long-standing ones. The program tackles important challenges in deep learning, such as effective transfer learning, the role of probabilistic graphical models in deep learning, and efficient training and inference algorithms. Answering these questions will allow us to design and implement robust visual learning systems that help robots understand the environment around them.
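One concrete reading of "efficient training and inference" is knowledge distillation: training a compact student network to mimic a larger teacher so that deployment is cheap. The sketch below uses toy models and dummy data as stand-ins; it illustrates a standard technique, not necessarily this project's approach.

```python
# Knowledge-distillation sketch (PyTorch): train a compact student to match
# a larger teacher's softened predictions, a common route to efficient
# inference. The models and data here are toy placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Sequential(nn.Linear(128, 256), nn.ReLU(), nn.Linear(256, 10))
student = nn.Sequential(nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 10))

T = 4.0  # temperature that softens the teacher's distribution
optimizer = torch.optim.SGD(student.parameters(), lr=0.1)

x = torch.randn(16, 128)  # dummy input batch
with torch.no_grad():
    soft_targets = F.softmax(teacher(x) / T, dim=1)

student_log_probs = F.log_softmax(student(x) / T, dim=1)
# KL divergence between softened distributions, scaled by T^2 as in
# Hinton et al.'s distillation formulation.
loss = F.kl_div(student_log_probs, soft_targets, reduction="batchmean") * T * T
optimizer.zero_grad()
loss.backward()
optimizer.step()
```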

gustavo.carneiro@adelaide.edu.au

Demonstrator: Social Robots


2017 onwards

Belinda Ward, Sue Keay, Suman Bista, Nicole Robinson, Gavin Suddrey

Australia spends $115b annually on health and 13% of our workforce is deployed in the healthcare sector, yet many patients receive less than an hour a day of contact with other people. Many residents in aged care facilities receive no visitors at all, and our ageing population is growing, with 3.7m people over 65 in Australia currently. Social robots can provide a source of interaction, or can be used to check on human wellbeing and to encourage positive health outcomes. For example, they might assist in the control of infectious diseases or encourage patients to follow medical advice.

Social robots are an emerging and disruptive innovation, and one ideally suited to showcasing robotic vision capabilities. Social robots are currently limited in their application because they do not move around or navigate well and have very limited understanding of their environment. A demonstrator showing a social robot seamlessly moving from room to room and making sensible decisions will showcase the advantages of applying robotic vision to the challenge of robots operating in unstructured environments and under uncertainty. To our knowledge, no one is at the stage of demonstrating even our current level of capability in social robotics.

We will demonstrate a social robot entering a room in a hospital or aged care facility, identifying the roles and activities of the people present, and interacting appropriately with them. The robot will be able to enter an unfamiliar room and understand whether the occupant is asleep, eating, reading, etc.
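A toy sketch of the perception step described above: classifying an occupant's activity from a single camera frame. The activity label set, the image path, and the classification head are hypothetical placeholders (a real demonstrator would fine-tune on labelled activity data and fuse this with navigation and interaction).

```python
# Toy activity-recognition sketch (PyTorch): label what a room's occupant
# is doing from one camera frame. The ACTIVITIES label set and the image
# path are hypothetical placeholders.
import torch
from torchvision import models, transforms
from PIL import Image

ACTIVITIES = ["asleep", "eating", "reading", "watching_tv", "other"]

model = models.resnet18(pretrained=True)
model.fc = torch.nn.Linear(model.fc.in_features, len(ACTIVITIES))
# In practice, model.fc would be fine-tuned on labelled activity data.
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

frame = Image.open("room_camera_frame.jpg").convert("RGB")  # placeholder path
with torch.no_grad():
    logits = model(preprocess(frame).unsqueeze(0))
print("Predicted activity:", ACTIVITIES[logits.argmax(dim=1).item()])
```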

br.ward@qut.edu.au

Previous Project: VL2: Learning for Robotic Vision


- 2017

Chunhua Shen, Gustavo Carneiro, Vijay Kumar, Ian Reid, Chao Ma, Benjamin Meyer, Tong Shen, Hui Li, Yuchao Jiang, Bohan Zhuang

Learning specific to robotic vision tasks under resource constraints (embedded vision systems). Topics included: video segmentation (image segmentation for video, applied both to static scenes with a moving camera and to general scenes with unknown motion); deep learning suitable for deployment on storage- and power-constrained embedded systems (e.g. COTSbot); fast, approximate, and asymmetrically computed inference; robust inference (via understanding failure modes); unsupervised learning; online and lifelong learning for robotic vision; and "any-time" algorithms.
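To illustrate the "any-time" idea from that list: a network with intermediate exit heads can return its best prediction so far whenever the time budget expires, which suits robots that must act on a deadline. The architecture and budget below are toy assumptions, not the project's actual model.

```python
# "Any-time" inference sketch (PyTorch): a network with early-exit heads
# returns its best prediction so far when the time budget runs out.
# The architecture and budget are toy assumptions for illustration.
import time
import torch
import torch.nn as nn

class AnytimeNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.stages = nn.ModuleList(
            [nn.Sequential(nn.Linear(64, 64), nn.ReLU()) for _ in range(4)]
        )
        # One lightweight classifier after every stage.
        self.exits = nn.ModuleList(
            [nn.Linear(64, num_classes) for _ in range(4)]
        )

    def forward(self, x, budget_s=0.01):
        deadline = time.monotonic() + budget_s
        prediction = None
        for stage, exit_head in zip(self.stages, self.exits):
            x = stage(x)
            prediction = exit_head(x)  # best answer available so far
            if time.monotonic() > deadline:
                break  # budget exhausted: return the latest exit
        return prediction

net = AnytimeNet().eval()
with torch.no_grad():
    logits = net(torch.randn(1, 64), budget_s=0.005)
print(logits.argmax(dim=1))
```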

chunhua.shen@roboticvision.org

Previous project: VL1: Fundamental Deep Learning


July 2016 - 2017

Vijay Kumar, Gustavo Carneiro, Chunhua Shen, Ian Reid, Basura Fernando, Jian “Edison” Guo, Ben Harwood, Yan Zuo, Rafael Felix, Adrian Johnston

It is essential that the Centre stay at the forefront of current machine learning techniques. This project set out to explore, develop, and exploit novel network architectures; to develop detection and instance-level/pixel-level annotation for thousands of classes and for open sets of classes; to develop learning models that are efficient, weakly supervised, trained online, unsupervised, and/or zero-shot; and to pursue active learning with and from temporal data.
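One of the listed directions, zero-shot learning, in its simplest embedding-matching form: project image features into a semantic space and assign the nearest class embedding, so classes with no training images can still be recognised. The dimensions and random class embeddings below are made-up placeholders (real systems use attribute vectors or word embeddings), not VL1's actual model.

```python
# Zero-shot learning sketch (PyTorch): map image features into a semantic
# embedding space and classify by nearest class embedding, so classes with
# no training images can still be recognised. All dimensions and the class
# embeddings here are made-up placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

feat_dim, sem_dim, num_classes = 512, 300, 15

# Learned projection from visual features to the semantic space.
projector = nn.Linear(feat_dim, sem_dim)

# Per-class semantic embeddings (e.g. word vectors); random stand-ins here.
class_embeddings = F.normalize(torch.randn(num_classes, sem_dim), dim=1)

image_features = torch.randn(4, feat_dim)            # dummy CNN features
projected = F.normalize(projector(image_features), dim=1)

# Cosine similarity to every class embedding, including unseen classes.
scores = projected @ class_embeddings.t()
print("Predicted classes:", scores.argmax(dim=1).tolist())
```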

vijay.kumar@adelaide.edu.au
