2020 Annual Report

This project aimed to develop new standardised benchmark tasks and evaluation metrics for robotic scene understanding. To aid future research in robotic vision, the project team developed and published a comprehensive software suite that allows users to control robots in both simulation and reality, and to evaluate their performance on a variety of relevant tasks.

Project Leaders

Niko Sünderhauf

Queensland University of Technology

Associate Professor Niko Sünderhauf is a Chief Investigator at the Centre, where he leads the Robotic Vision Evaluation and Benchmarking project. As a member of the Executive Committee, Niko leads the Visual Learning and Understanding program at the QUT Centre for Robotics. Niko conducts research in robotic vision, at the intersection of robotics, computer vision, and machine learning. His research interests focus on scene understanding and how robots can learn to perform complex tasks that require navigation and interaction with objects, the environment, and humans.

Associate Professor Sünderhauf is co-chair of the IEEE Robotics and Automation Society Technical Committee on Robotic Perception and regularly organises workshops at leading robotics and computer vision conferences. He is a member of the editorial board of the International Journal of Robotics Research (IJRR), and was an Associate Editor for the IEEE Robotics and Automation Letters journal (RA-L) from 2015 to 2019. Niko also served as an Associate Editor for the IEEE International Conference on Robotics and Automation (ICRA) in 2018 and 2020.

In his role as an educator at QUT, Niko enjoys teaching Introduction to Robotics (EGB339), Mechatronics Design 3 (EGH419), and Digital Signals and Image Processing (EGH444) to undergraduate students in the Electrical Engineering degree.

Niko received his PhD from Chemnitz University of Technology, Germany in 2012. In his thesis, Niko focused on robust factor graph-based models for robotic localisation and mapping, as well as general probabilistic estimation problems, and developed the mathematical concepts of Switchable Constraints. After two years as a Research Fellow in Chemnitz, Niko joined QUT as a Research Fellow in March 2014, before being appointed to a tenured Lecturer position in 2017.

 

Visit Profile

Feras Dayoub

Queensland University of Technology

Feras Dayoub is a Senior Lecturer and a Chief Investigator with the Centre at QUT. He is the co-lead of the project on benchmarking and evaluation of robotic vision systems at the ACRV. Feras is deeply interested in the reliable deployment of machine learning and computer vision on mobile robots in challenging environments. From 2016 to 2019, Feras was a Research Fellow with the Centre, based at QUT. From 2012 to 2016, he was a Post-Doctoral Research Fellow with the robotics group at QUT, where he worked with various types of robots: agricultural robots as part of the Queensland DAF Agricultural Robotics Program at QUT; Autonomous Underwater Vehicles (AUVs) as the computer vision lead on the COTSBot project; Unmanned Aerial Vehicles (UAVs) as part of a project on assisted autonomy during the inspection of power infrastructure; and mobile service robots as a research fellow on an Australian Research Council Discovery Project on lifelong robotic navigation using visual perception. Feras obtained his PhD in 2012 from the Lincoln Centre for Autonomous Systems (L-CAS), UK.

Visit Profile

Team Members

David Hall

Queensland University of Technology

David is a research fellow with the ACRV whose long-term goal is to see robots able to cope with the unpredictable real world.

He began this journey with his PhD on adaptable systems for autonomous weed species recognition as a part of the Strategic Investment in Farm Robotics (SIFR) team. Since April 2018 he has worked as part of the robotic vision challenge group within the ACRV and QUT Centre for Robotics, designing challenges, benchmarks, and evaluation measures that assist emerging areas of robotic vision research.

As a part of the robotic vision challenge group, he has assisted in defining the field of probabilistic object detection (PrOD), creating the probability-based detection quality (PDQ) evaluation measure, developing a PrOD robotic vision challenge, and developing a scene understanding robotic vision challenge. He now looks forward to solving these problems and giving the world robust and adaptable robotic vision systems.

Visit Profile

Haoyang Zhang

Queensland University of Technology

Haoyang joined the Centre as a Research Fellow in December 2018 after completing his PhD at ANU and Data61 CSIRO in July 2018. During his PhD, Haoyang worked predominantly with Associate Professor Xuming He on visual object detection and segmentation. His research interests include computer vision and its application to robots. He is now working on the Centre's Robotic Vision Evaluation & Benchmarking project, which will develop new standardised benchmark tasks, evaluation metrics, and competitions for robotic vision.

Visit Profile

Suman Bista

Queensland University of Technology

Suman Bista joined the Centre in 2017 as a Research Fellow based at QUT. He worked on visual navigation and recognition for the Pepper humanoid robot and was supervised by Centre Director, Professor Peter Corke. His research interests include visual navigation, visual learning, robotic vision and optimisation. In August 2019, Suman joined the Centre’s Robotic Vision Evaluation and Benchmarking project, continuing as a Research Fellow in this role. Suman completed his PhD, titled “Indoor Navigation of Mobile Robots based on Visual Memory and Image-Based Visual Servoing”, in 2016 with the Lagadic Group, INRIA Rennes Bretagne Atlantique, Rennes, France, under the supervision of Dr Francois Chaumette and Dr Paolo Robuffo Giordano.

He also holds a Master's in Computer Vision from the University of Burgundy, France (2013) and a Bachelor's in Electronics & Communication Engineering from Pulchowk Campus, Institute of Engineering, Tribhuvan University, Nepal (2009).

Visit Profile

Rohan Smith

Queensland University of Technology

Rohan received his Bachelor of Mechatronics from QUT in 2016. He worked on QUT’s Robotronica event in 2015 and was part of the team of postdoctoral research fellows, PhD researchers and undergraduate students working on the winning entry to the Amazon Robotics Challenge in 2017.

Rohan has been working as a research engineer at QUT on the Centre’s Robotic Vision Evaluation and Benchmarking project since mid-2018. He is establishing and maintaining multiple mobile robot platforms for use by Centre researchers. Rohan hopes to make it easier for researchers to develop better ways for robots to interact with the real world.

Visit Profile

Ben Talbot

Queensland University of Technology

Ben is a roboticist working at the intersection of novel research and the engineering required to bring advances in robotics to the real world. He relishes the challenges that arise in translating research to real-world applications, often drawing inspiration from our understanding of human cognitive processes when creating solutions. His research interests include cognitive robotics, artificial intelligence, and real-world applications of robotic systems. He currently holds the position of Research Fellow at QUT, working in both the Australian Centre for Robotic Vision and the QUT Centre for Robotics.

With over 8 years of experience in robotics, Ben has worked on projects with a blend of both research and engineering outcomes. His PhD research was the core of an ARC Discovery Project (DP140103216) on using human cues in built environments for robot navigation. Since then he has been working on facilitating the translation of novel research to the real world with the Evaluation and Benchmarking project in the Centre. This work created BenchBot: software that allows researchers to explore the performance of their novel research in photorealistic 3D simulation and on real robot platforms, with only a few lines of code. Ben is passionate about creating work that strengthens the robotics community, which has resulted in high-quality open-source software like BenchBot and OpenSeqSLAM2.0.

Visit Profile

Project Aim

This project aimed to develop new standardised benchmark tasks and evaluation metrics for robotic scene understanding. The goal for 2020 was to create a new, annual robotic scene understanding competition to be organised at a leading computer vision or robotics conference. The aim was to recreate for robotic vision the positive effects that competitions have had on advances in computer vision and deep learning. We furthermore wanted to continue running our Probabilistic Object Detection challenge created in 2019, and to release the BenchBot application programming interface (API), a software suite allowing easy robotic vision evaluation in simulation and reality.


Key Results

The project team released a new robotic vision challenge in 2020, the first for Scene Understanding (Semantic SLAM and Scene Change Detection). The task in this challenge is to explore an unknown indoor environment and build a detailed map containing all the objects in the environment. The challenge requires a robot to map apartments, office spaces or cluttered kitchen environments. In collaboration with Google AI, Nvidia, Facebook AI Research, and other international partner universities, we will present this new challenge at the Conference on Computer Vision and Pattern Recognition (CVPR), the leading computer vision conference, in 2021.

Our Scene Understanding benchmark challenge builds on our new BenchBot API, a software suite that allows users to control robots in simulation as well as in reality and evaluate them for important robotic vision tasks. BenchBot provides a simple software interface to receive sensor data (including RGB and depth images) from a robot, and send motion commands to the robot. With only a few lines of Python code, the user can successfully control a robot based on its sensor feedback. Importantly, exactly the same code can be executed on a simulated robot in a high-fidelity simulation environment, and on a real robot in our lab. Users provide their code in a Docker container and BenchBot handles the execution on either the simulated or real robot platform.
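The sense-then-act control loop described above can be sketched as follows. This is an illustrative stand-in, not the actual BenchBot API: the `DemoRobot` class and its method names are hypothetical stubs that mimic the pattern of receiving observations from a backend (simulated or real) and sending motion commands back.

```python
# Sketch of the control-loop pattern BenchBot enables: read sensor
# observations, decide an action, send a motion command. DemoRobot is
# a hypothetical stub standing in for the simulated or real backend.

class DemoRobot:
    """Minimal stand-in for a robot backend (simulated or real)."""

    def __init__(self, goal_distance=3.0):
        self.position = 0.0
        self.goal_distance = goal_distance

    def observations(self):
        # The real system returns rich sensor data (RGB and depth
        # images); here we expose only the distance left to a goal.
        return {"distance_to_goal": self.goal_distance - self.position}

    def send_motion(self, forward):
        # The real system forwards motion commands to the simulator
        # or physical robot; here we simply integrate the motion.
        self.position += forward


def run_agent(robot, step_size=1.0):
    """Drive towards the goal using only sensor feedback.

    Because the agent talks to the robot only through observations()
    and send_motion(), the same code runs unchanged against any
    backend that offers this interface.
    """
    steps = 0
    while robot.observations()["distance_to_goal"] > 0:
        remaining = robot.observations()["distance_to_goal"]
        robot.send_motion(min(step_size, remaining))
        steps += 1
    return steps


steps = run_agent(DemoRobot(goal_distance=3.0))
print(steps)  # three 1.0 m steps reach the goal
```

The key design point, mirrored in BenchBot itself, is that the agent depends only on the sensor/command interface, so swapping the simulated backend for a real robot requires no change to the user's code.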

We also ran the third iteration of the Probabilistic Object Detection challenge and organised successful workshops at the International Conference on Robotics and Automation (ICRA) and the European Conference on Computer Vision (ECCV).

The team also developed an improved object detector, reviewed existing semantic SLAM algorithms, and ran them as baselines on the new challenge dataset.