Sensing
Robots that see in all conditions
The Robust Vision program develops robotic vision algorithms and novel vision hardware to enable robots to see and act in all viewing conditions. The program is developing a suite of algorithms that enable robots to perceive their environments, and consequently act purposefully, under the enormous range of possible environmental conditions, including low light, rain, snow, ice, sleet, fog, smoke, dust, wind, glare and heat. It is also developing innovative sensing hardware, such as hyperspectral cameras, to facilitate robot operation under challenging viewing conditions such as low light or partial obscuration.
The key question we are addressing is: how can innovations in existing computer vision and robotic vision techniques, and in vision sensing hardware, enable robots to perform well under the wide range of challenging conditions encountered by robots and applied computer vision technology in the real world?
Michael Milford
Chief Investigator, Project Leader (Self-driving cars Demonstrator)
Queensland University of Technology
Chuong Nguyen
Research Affiliate
CSIRO DATA61, Canberra
Hongdong Li
Chief Investigator
Australian National University
Chunhua Shen
Chief Investigator
University of Adelaide
Jonathan Roberts
Chief Investigator
Queensland University of Technology
Peter Corke
Centre Director, Chief Investigator, QUT Node Leader, Project Leader (Manipulation Demonstrator)
Queensland University of Technology
Niko Sünderhauf
Chief Investigator, Project Leader (Robotic Vision Evaluation & Benchmarking)
Queensland University of Technology
Jürgen “Juxi” Leitner
Research Fellow, Project Leader (Manipulation and Vision)
Queensland University of Technology
Sean McMahon
PhD Graduate, Queensland University of Technology
Brisbane, Australia
Sourav Garg
Research Affiliate
Queensland University of Technology
James Mount
PhD researcher
Queensland University of Technology
Dorian Tsai
PhD researcher
Queensland University of Technology
Dan Richards
Former PhD researcher
Brisbane, Australia
Timo Stoffregen
PhD researcher
Monash University
Medhani Menikdiwela
PhD researcher
Australian National University
James Sergeant
Former PhD researcher
Dorabot, Australia
2018 onwards
Michael Milford, Sourav Garg, James Mount, Steve Martin
This demonstrator is focused on developing a suite of mini autonomous vehicles that can be replicated across our Centre and used at each of the four Centre nodes. They will enable researchers to demonstrate and develop their research. These platforms will engage public, industry and government with a range of technology demonstrations around self-driving car technology and associated ethical and technological problems.
- 2017
Michael Milford, Hongdong Li, Jonathan Roberts, Chunhua Shen, Jürgen “Juxi” Leitner, Chuong Nguyen, Niko Sünderhauf, Sourav Garg, Sean McMahon, James Mount, James Sergeant, Medhani Menikdiwela
Robust robotic visual recognition for adverse conditions will develop algorithms that solve the fundamental robotic tasks of place recognition and object recognition under challenging environmental conditions, including darkness, weather, adverse atmospheric conditions and seasonal change, and translate them into applications in industry. This project is relatively mature and is therefore pushing more heavily towards industry outcomes and engagement than some of the other projects. The key question we are addressing is: how can existing computer vision and robotic vision techniques be extended to perform well under the wide range of challenging conditions encountered by robots and applied computer vision technology in the real world?
August 2016 - 2017
Chuong Nguyen, Peter Corke, Michael Milford, Dan Richards, Dorian Tsai, James Mount, Timo Stoffregen
Novel visual sensing for robotic operation in adverse conditions will advance the performance of robot vision algorithms by using new conventional low-light cameras, as well as rotational filters, hyperspectral cameras and thermal cameras, to improve robot autonomy under any viewing condition. We are developing new algorithms that exploit the particular advantages of these innovative cameras to set new performance benchmarks in tasks such as scene understanding, place recognition and object recognition. Hardware-software solutions that can deal with these corner cases (reflections, transparency and low light conditions) will be applicable to all Centre projects involving visual sensing and can be used by those projects to robustify their systems for performance under adverse conditions. The key question we are addressing is: how can existing and new specialised vision hardware be exploited and developed, in conjunction with new algorithms, to enable robotic vision systems to operate well under the wide range of challenging conditions encountered by robots and applied computer vision technology in the real world?
Australian Centre for Robotic Vision
2 George Street, Brisbane QLD 4001
+61 7 3138 7549