Technology: Algorithms & Architecture

This program develops advanced algorithms and techniques that allow robotic systems deployed in large-scale, real-world environments to perform computer vision in real time.


Research leader: Tom Drummond (CI)

Research team: Peter Corke (CI), Vincent Lui (RF), Steven Martin (RE), Will Chamberlain (PhD)

Project aim: This project is creating a Vision Operating System that provides a framework for bringing multiple sensing and computational resources together to solve complex robotic vision problems. The system will enable robots to draw on external sensing resources, for instance CCTV cameras in the environment or sensors mounted on other robots, as well as external computational resources, from processors on other sensors to large computing resources within the network. Such a framework enables novel solutions in which these resources are combined in different ways to solve complex localisation, navigation, understanding and planning problems.
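The report does not describe the framework's interface, but the architectural idea of matching tasks to a pool of heterogeneous sensing and compute resources can be sketched in miniature. Every name below is hypothetical, chosen for illustration only; this is not the project's actual API.

```python
# Hypothetical sketch of a resource broker for a Vision Operating System.
# All class, method and resource names are illustrative, not the project's API.

class Resource:
    def __init__(self, name, kind, capabilities):
        self.name = name                      # e.g. "cctv-lobby"
        self.kind = kind                      # "sensor" or "compute"
        self.capabilities = set(capabilities)

class VisionOSBroker:
    """Matches robotic-vision tasks to available sensing/compute resources."""
    def __init__(self):
        self.resources = []

    def register(self, resource):
        self.resources.append(resource)

    def find(self, required_capabilities):
        """Return all resources providing every required capability."""
        needed = set(required_capabilities)
        return [r for r in self.resources if needed <= r.capabilities]

broker = VisionOSBroker()
broker.register(Resource("cctv-lobby", "sensor", ["rgb", "fixed-view"]))
broker.register(Resource("drone-cam", "sensor", ["rgb", "mobile"]))
broker.register(Resource("edge-gpu", "compute", ["detection", "slam"]))

# A localisation task needing any RGB view plus SLAM computation:
cams = broker.find(["rgb"])
slam_nodes = broker.find(["slam"])
print([r.name for r in cams])        # both cameras provide "rgb"
print([r.name for r in slam_nodes])  # only the edge GPU provides "slam"
```

The point of such a layer is that the same localisation task can be served by a ceiling camera in one deployment and a drone-mounted camera in another, without changing the task description.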


Research leader: Viorela Ila (CI)

Research team: Matt Dunbabin (CI), Tom Drummond (CI), Richard Hartley (CI), Hongdong Li (CI), Ian Reid (CI), Laurent Kneip (AI), Feras Dayoub (RF), Yasir Latif (RF), Vincent Lui (RF), Mina Henein (PhD), Andrew Spek (PhD), Sean McMahon (PhD), Jun Zhang (PhD)

Project aim: This project is developing novel simultaneous localisation and mapping (SLAM) algorithms that can perform in challenging large-scale, dynamic, dense and non-rigid environments. In particular, it focuses on developing and integrating robot vision algorithms from robust vision, real-time vision and semantic vision areas, into a single SLAM-centred robot navigation framework. The framework will be demonstrated in real-world robot applications including autonomous underwater vehicles (AUVs), unmanned aerial vehicles (UAVs) and ground-based autonomous vehicles.
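The core of SLAM can be illustrated with a deliberately tiny example. The sketch below solves a one-dimensional pose graph by linear least squares: odometry accumulates drift along a chain of poses, and a loop-closure measurement back to the start redistributes that error over the whole trajectory. This is a toy baseline, not the project's framework, which handles full 6-DoF poses, dense maps and dynamic scenes.

```python
# Minimal 1D pose-graph SLAM sketch (illustrative only).
import numpy as np

# Pose x0 is fixed at the origin; unknowns are x1, x2, x3.
# Each constraint (i, j, z) measures x_j - x_i = z.
odometry = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0)]  # successive steps
loop_closure = [(0, 3, 3.3)]                        # re-observing the start

rows, rhs = [], []
for i, j, z in odometry + loop_closure:
    row = [0.0, 0.0, 0.0]
    if j > 0:
        row[j - 1] += 1.0
    if i > 0:
        row[i - 1] -= 1.0
    rows.append(row)
    rhs.append(z)

# Linear least squares spreads the loop-closure error over the trajectory:
# each step is stretched equally, giving x = [1.075, 2.15, 3.225].
x, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
print(x)
```

Real SLAM back-ends solve the same kind of least-squares problem, but over rotations and translations with nonlinear measurement models, so the linearisation and optimisation are repeated iteratively.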


Research leader: Peter Corke (CI)

Research team: Niko Sünderhauf (CI), Trung Thanh Pham (RF), Steven Martin (RE), John Skinner (PhD)

Project aim: A robotic vision system’s performance depends on a number of highly variable factors, including the robot’s initial state, the world it perceives, lighting conditions, unforeseen distractors (like moving objects) and unrepeatable sensor noise. A consequence of these factors is that we cannot faithfully repeat a robotic vision experiment, nor can we rigorously and quantitatively compare the performance of different algorithms. A further critical bottleneck for machine learning applications is the limited amount of real-world image data that can be captured and labelled for both training and testing purposes.

To address these challenges, the Computer graphics simulation for robotic vision project is investigating the potential of photo-realistic graphics simulation based on state-of-the-art game-engine technology.

▴ Final sparse 3D reconstruction from multi-camera underwater image acquisition, with the magnitude of the estimation uncertainty color-coded (violet = high uncertainty, red = low uncertainty).

▴ Two orders of magnitude faster covariance recovery in the bundle adjustment algorithm.

Key Results in 2017

ANU researchers developed a new method for globally-optimal image-based camera re-localisation.

This method can be applied to the vehicle localisation problem in autonomous driving, and is also useful for robot localisation, camera tracking and spatial computation in virtual and augmented reality (VR/AR) applications. Dylan Campbell (ANU PhD Researcher), Lars Petersson, Laurent Kneip and CI Hongdong Li received the Marr Prize Honorable Mention award for their paper “Globally-Optimal Inlier Set Maximisation for Simultaneous Camera Pose and Feature Correspondence” at the 2017 IEEE International Conference on Computer Vision (ICCV) held in Venice, Italy. The Marr Prize is considered one of the top honours for a computer vision researcher.

Additionally, the team achieved first place and a “Best Algorithm Award” in the Non-Rigid Structure from Motion (NRSFM) Challenge held at the Conference on Computer Vision and Pattern Recognition (CVPR).

They also developed a successful computer vision algorithm for monocular non-rigid dynamic scene 3D reconstruction and presented the paper “Monocular Dense 3D Reconstruction of a Complex Dynamic Scene from Two Perspective Frames” at the IEEE International Conference on Computer Vision (ICCV).

Klemen Istenic, a visiting PhD researcher from the University of Girona, won first prize in the Student Poster Competition at the OCEANS conference in Aberdeen, UK. The paper “Mission-time 3D Reconstruction with Quality Estimation” was a collaboration between RF Viorela Ila at ANU and colleagues at the University of Girona.

The paper “Fast Incremental Bundle Adjustment with Covariance Recovery” by RF Viorela Ila, Lukas Polok (Apple Inc), Marek Solony (Brno University of Technology) and Klemen Istenic (University of Girona) was presented at the International Conference on 3D Vision (3DV), Qingdao, China, 2017. The paper won the Best Paper Honorable Mention at the conference.
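What "covariance recovery" computes can be shown in miniature: once bundle adjustment converges, the information matrix is JᵀJ (with J the residual Jacobian), and each variable's marginal uncertainty is the corresponding diagonal block of its inverse. The paper's contribution is recovering those blocks incrementally and efficiently; the sketch below, with a made-up toy Jacobian, is only the naive dense baseline that such methods accelerate.

```python
# Naive covariance recovery from a toy bundle-adjustment Jacobian
# (illustrative baseline only, not the paper's incremental algorithm).
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((30, 6))   # toy Jacobian: 30 residuals, 6 parameters

information = J.T @ J              # Gauss-Newton information matrix
covariance = np.linalg.inv(information)

# Per-parameter marginal standard deviations, e.g. for color-coding a
# reconstruction by its estimation uncertainty:
sigmas = np.sqrt(np.diag(covariance))
```

For realistic problems this dense inversion is far too expensive, which is why efficient recovery of just the needed covariance blocks matters.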

The article “SLAM++: A Highly Efficient and Temporally Scalable Incremental SLAM Framework” by RF Viorela Ila, Lukas Polok (Apple Inc), Marek Solony (Brno University of Technology) and Pavel Svoboda (Brno University of Technology) was accepted for publication in the International Journal of Robotics Research (IJRR).