Algorithms & Architecture
Robots that are fast and low cost

Overview

This program aims to create advanced algorithms and techniques that allow computer vision to run in real time on robotic systems deployed in large-scale, real-world applications using distributed sensing and computation resources, and to provide efficient, unified software platforms for developing and deploying real-time robot visual SLAM algorithms and techniques in real-world environments. The program comprises three research projects (AA1, AA2, and AA3), each addressing a significant aspect of robotic vision research, development, and application. AA1 (VOS) will provide a common, distributed computational platform that takes advantage of distributed sensing and computational capabilities to solve large, complex robotic problems. AA2 (ACRV-SLAM) focuses on developing robot vision algorithms in the areas of robust vision, real-time vision, and semantic vision, and on integrating them into a single SLAM-centred robot navigation framework. The framework will be demonstrated in real-world robot applications including autonomous underwater vehicles (AUVs), unmanned aerial vehicles (UAVs, or flying robots), and ground-based autonomous vehicles. AA3 (SIMUL) aims to provide a photorealistic graphics simulation environment to facilitate and accelerate the development of advanced robot vision algorithms and systems.

People


Hongdong Li
Tom Drummond
Viorela Ila
Peter Corke
  • Centre Director, Chief Investigator, QUT Node Leader, AA3 Project Leader
  • Queensland University of Technology
Vincent Lui
William Chamberlain
Steve Martin
Richard Hartley
Ian Reid
  • Deputy Director, Semantic Representations Program Leader, University of Adelaide Node Leader and Chief Investigator
  • University of Adelaide
Andrew Davison
Frank Dellaert
Marc Pollefeys
Yasir Latif
Feras Dayoub
Mina Henein
Thanuja Dharmasiri
Andrew Spek
Yi “Joey” Zhou
Sean McMahon
John Skinner
Niko Sünderhauf
Trung Pham

Projects


AA1: VOS-distributed robotic vision


Ongoing

Tom Drummond, Peter Corke, Vincent Lui, William Chamberlain, Steve Martin

The goal of AA1 is to create a Vision Operating System (VOS) that provides a framework for bringing together multiple sensing and computational resources to solve complex robotic vision problems. This will enable robots to make use of external sensing resources (e.g. CCTV cameras in the environment, or sensors mounted on other robots) as well as computational resources, either attached to those sensors or provided as a large computing resource within the network. Such a framework enables novel solutions in which the various resources are combined collaboratively to solve complex localisation, navigation, scene understanding, and planning problems.

tom.drummond@roboticvision.org
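
A minimal illustrative sketch of this resource-discovery idea is given below, written as a simple in-process Python registry; the class names, capability strings, and node names are hypothetical placeholders, not the actual VOS interfaces.

    from dataclasses import dataclass, field

    # Hypothetical sketch only, not the VOS API: a registry through which a robot
    # discovers external sensing resources (e.g. a fixed CCTV camera) and compute
    # nodes that advertise the capability it needs.

    @dataclass
    class Resource:
        name: str
        kind: str                     # "camera" or "compute"
        capabilities: set = field(default_factory=set)

    class VisionResourceRegistry:
        def __init__(self):
            self._resources = []

        def register(self, resource: Resource) -> None:
            self._resources.append(resource)

        def find(self, kind: str, capability: str) -> list:
            return [r for r in self._resources
                    if r.kind == kind and capability in r.capabilities]

    registry = VisionResourceRegistry()
    registry.register(Resource("ceiling_cam_3", "camera", {"rgb_stream"}))
    registry.register(Resource("edge_gpu_1", "compute", {"person_detection"}))

    # A robot entering the space asks for an overhead view and a node able to run
    # a detector on it, then combines the two to solve its task.
    camera = registry.find("camera", "rgb_stream")[0]
    detector = registry.find("compute", "person_detection")[0]
    print("using", camera.name, "with", detector.name)

In a real deployment the registry would be a networked service and the resources would stream data between nodes rather than live in a single process.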

AA2: ACRV SLAM Framework


Ongoing

Viorela Ila, Richard Hartley, Hongdong Li, Tom Drummond, Ian Reid, Andrew Davison, Frank Dellaert, Marc Pollefeys, Yasir Latif, Vincent Lui, Feras Dayoub, Mina Henein, Thanuja Dharmasiri, Andrew Spek, Yi “Joey” Zhou, Sean McMahon

This project will develop novel SLAM algorithms that can perform in challenging environments (large-scale, dynamic, dense, non-rigid). ACRV-SLAM is a common framework that integrates efficient implementations of the proposed algorithms, with the goals of facilitating their distribution to the robotics community and industrial partners and of producing high-quality demonstrators.

viorela.ila@roboticvision.org
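
As a deliberately simplified stand-in for the kind of optimisation at the core of SLAM (not the project's own algorithms), the sketch below estimates a one-dimensional chain of robot poses from noisy odometry and a single loop-closure measurement by linear least squares, assuming only NumPy is available.

    import numpy as np

    # Toy 1-D pose graph: four odometry constraints between consecutive poses and
    # one loop closure in which pose 4 re-observes pose 0. Each constraint says
    # x_j - x_i should equal the measured displacement.
    num_poses = 5
    odometry = [1.0, 1.0, 1.0, 1.0]    # measured displacement from pose i to i+1
    loop_closure = (0, 4, 3.9)         # pose 4 measures its offset from pose 0

    rows, rhs = [], []

    def add_constraint(i, j, measurement):
        row = np.zeros(num_poses)
        row[j] += 1.0
        row[i] -= 1.0
        rows.append(row)
        rhs.append(measurement)

    for i, d in enumerate(odometry):
        add_constraint(i, i + 1, d)
    add_constraint(*loop_closure)

    # Anchor the first pose at the origin to remove the gauge freedom.
    anchor = np.zeros(num_poses)
    anchor[0] = 1.0
    rows.append(anchor)
    rhs.append(0.0)

    J = np.vstack(rows)
    r = np.array(rhs)
    poses, *_ = np.linalg.lstsq(J, r, rcond=None)
    print("optimised poses:", np.round(poses, 3))

Real SLAM adds rotations, three-dimensional geometry, robust cost functions, and real-time constraints, but the underlying idea of jointly satisfying many noisy constraints is the same.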

AA3: Computer graphics simulation for robotic vision


Ongoing

Peter Corke, John Skinner, Steve Martin, Niko Sünderhauf, Trung Pham

The performance of a robotic vision system depends on the initial state of the robot and the world it perceives, as well as on lighting conditions, unforeseen distractors (transient moving objects), and unrepeatable sensor noise. A consequence is that no robotic vision experiment can ever be exactly repeated, and the performance of different algorithms cannot be rigorously and quantitatively compared. For machine learning applications, a critical bottleneck is the limited amount of real-world image data that can be captured and labelled for training and testing purposes. This project investigates the potential of photorealistic graphical simulation, based on state-of-the-art game-engine technology, to address both of these challenges.

peter.corke@roboticvision.org
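
The following hypothetical sketch makes the repeatability point concrete (the Simulator class is an invented placeholder, not the project's game-engine pipeline): a seeded simulator returns exactly the same frames, together with pixel-perfect ground-truth labels, on every run.

    import random

    class Simulator:
        """Stand-in for a photorealistic simulator; a fixed seed makes trials repeatable."""

        def __init__(self, seed: int):
            self.rng = random.Random(seed)

        def render_frame(self):
            # Placeholders for a rendered RGB image and its ground-truth segmentation.
            rgb = [[self.rng.random() for _ in range(4)] for _ in range(4)]
            labels = [[self.rng.randint(0, 2) for _ in range(4)] for _ in range(4)]
            return rgb, labels

    def collect_dataset(seed: int, num_frames: int):
        sim = Simulator(seed)
        return [sim.render_frame() for _ in range(num_frames)]

    # Two runs with the same seed produce identical images and labels, so different
    # algorithms can be compared on exactly the same data, which no physical
    # experiment can guarantee.
    assert collect_dataset(seed=7, num_frames=3) == collect_dataset(seed=7, num_frames=3)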

Australian Centre for Robotic Vision
2 George Street, Brisbane QLD 4001
+61 7 3138 7549