2020 Annual Report

This robotic workstation demonstrator showcased the Centre’s capability in vision-enabled robotic grasping in a clear and compelling way to a general audience. The everyday tasks it replicated included picking up objects, placing objects, and exchanging objects with a person.

Project Leader

Peter Corke

Queensland University of Technology

Peter Corke is a Distinguished Professor at the Queensland University of Technology (QUT), director of the ARC Centre of Excellence for Robotic Vision, and director of the QUT Centre for Robotics. He is a Fellow of the IEEE, the Australian Academy of Science and the Australian Academy of Technology and Engineering. He is a co-founder of the Journal of Field Robotics, a member of the editorial board of the Springer STAR series, a former Editor-in-Chief of the IEEE Robotics and Automation Magazine, and a former member of the editorial board of the International Journal of Robotics Research. He has over 500 publications in the field, an h-index of 77 and over 30,000 citations. He has held visiting positions at the University of Pennsylvania, the University of Illinois at Urbana-Champaign, the Carnegie Mellon University Robotics Institute, and Oxford University. Peter created the MATLAB and Python Toolboxes for Robotics and Machine Vision, is the author of the popular textbook “Robotics, Vision & Control”, created the Robot Academy repository of open online lessons, and was named the 2017 Australian University Teacher of the Year by the Australian Government’s Department of Education and Training. In 2020 he was awarded the prestigious IEEE Robotics and Automation Society George Saridis Leadership Award in Robotics and Automation. He is Chief Scientist for Dorabot and an advisor to LYRO Robotics. His interests include the visual control of robots, the application of robots to problems such as large-scale environmental monitoring and agriculture, internet-based approaches to teaching at scale, open-source software development, and writing.

Project Aim

For robots, grasping and manipulation are hard. One focus of the Centre’s research was enabling robots to master the manipulation of everyday objects in realistic, unstructured and dynamic settings. To achieve this, a robot must be able to integrate what it sees with how it moves. The result will be a new generation of robots that can operate effectively in “messy” human environments and be versatile enough to handle new tasks, new objects and new environments.

Specifically, the aim of the project in 2020 was to train the robot to receive an item from a person, hand an item to a person, and implement the demonstrations on a mobile manipulation platform.


Key Results

In 2020, the project team implemented the infrastructure for a table-top manipulation demonstrator intended to be installed in a public space and run safely and largely unsupervised, by ensuring that the system is robust to failure. Developments included: a Linux system service to control the life-cycle of the demonstrator software and required services and hardware; watchdog services and automatic error recovery to ensure safety; a behaviour tree for overall system control; and an AR-tag based calibration process to allow users to change the workspace layout.
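The report does not reproduce the demonstrator’s control code; the following is a minimal plain-Python sketch of the behaviour-tree-with-watchdog pattern described above. The class names, timeout value and recovery callback are illustrative assumptions, not the Centre’s implementation.

```python
# Minimal behaviour-tree sketch with a watchdog wrapper for automatic error
# recovery. All names and values are illustrative, not the Centre's code.
import time
from enum import Enum


class Status(Enum):
    SUCCESS = 1
    FAILURE = 2
    RUNNING = 3


class Node:
    def tick(self) -> Status:
        raise NotImplementedError


class Sequence(Node):
    """Tick children in order; stop at the first child that does not succeed."""
    def __init__(self, children):
        self.children = children

    def tick(self) -> Status:
        for child in self.children:
            status = child.tick()
            if status != Status.SUCCESS:
                return status
        return Status.SUCCESS


class Watchdog(Node):
    """Wrap a child node; run a recovery callback if it fails or overruns."""
    def __init__(self, child, timeout_s, recover):
        self.child = child
        self.timeout_s = timeout_s
        self.recover = recover  # e.g. retract the arm to a known-safe pose

    def tick(self) -> Status:
        start = time.monotonic()
        status = self.child.tick()
        if status == Status.FAILURE or time.monotonic() - start > self.timeout_s:
            self.recover()
            return Status.FAILURE
        return status


# Toy usage: a single behaviour guarded by a watchdog
class Wave(Node):
    def tick(self) -> Status:
        print("waving to the visitor")
        return Status.SUCCESS


tree = Watchdog(Sequence([Wave()]), timeout_s=5.0,
                recover=lambda: print("moving to safe pose"))
print(tree.tick())
```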

In addition to the robustness aspects of the system, the team also created an interactive demonstrator experience that showcases Centre research results and includes the following: a spoken call to action with different feedback based on the proximity of detected faces; a pick-and-place demo; a compliant-control demo in which the robot opens and closes valves; and a hand-over demo in which the robot passes an object to the user.
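As a small illustration of the proximity-based feedback, the sketch below chooses a spoken prompt from the distance to the nearest detected face. The thresholds and messages are placeholders assumed for the example, not the values used on the demonstrator.

```python
# Illustrative only: choose a spoken prompt from the proximity of the nearest
# detected face. Thresholds and messages are placeholders, not Centre values.
def call_to_action(face_distances_m):
    """Return a prompt string given distances (metres) to detected faces."""
    if not face_distances_m:
        return None  # nobody around: stay quiet
    nearest = min(face_distances_m)
    if nearest < 1.0:
        return "Hi! Pick a demo on the screen and I'll show you what I can do."
    if nearest < 3.0:
        return "Come closer and try the pick-and-place demo!"
    return "Hello over there! Come and meet the robot."


print(call_to_action([2.1, 4.0]))  # -> mid-range invitation
```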

The demonstrator was implemented as a behaviour tree with demo switching and robust error-recovery capabilities. User interaction, for selecting and engaging with demos, is achieved through a simple web interface.
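The web interface itself is not documented in this report; the following Flask sketch shows one way such a demo-selection interface could look. The route names, demo list and the shared `requested` hand-off to the behaviour tree are assumptions made for illustration.

```python
# A minimal Flask sketch of a demo-selection web interface. The real
# demonstrator's interface and endpoints are not documented here; these
# routes and the `requested` hand-off are illustrative assumptions.
from flask import Flask, jsonify

app = Flask(__name__)

DEMOS = ["pick_and_place", "compliant_valves", "hand_over"]
requested = {"demo": None}  # read by the behaviour tree on its next tick


@app.route("/demos")
def list_demos():
    # Let the web page show the available demos
    return jsonify(DEMOS)


@app.route("/select/<name>", methods=["POST"])
def select_demo(name):
    # Record the visitor's choice; the behaviour tree switches to it
    if name not in DEMOS:
        return jsonify(error="unknown demo"), 404
    requested["demo"] = name
    return jsonify(ok=True, demo=name)


if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8080)
```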

The demonstrator spent six weeks off-site at three different locations: the ARM Hub, The Cube at QUT, and the World of Drones & Robotics Congress 2020. During these installations the demonstrator operated reliably without constant human supervision and interacted with many hundreds of visitors.

In the final months of 2020, the team continued to add capabilities to the robot, including: a more powerful object recognition system (based on the Centre’s RefineNet-lite), integration of the Centre’s Vision & Language capability, and a new language-to-action interface.
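The Vision & Language and language-to-action components are not described in detail here. As a toy illustration only, the sketch below grounds a typed command against detected object labels using simple keyword matching; the Centre’s actual system is far richer, and every name here is a placeholder.

```python
# Toy sketch of a language-to-action mapping: match a typed command against
# detected object labels and emit an action request. Placeholder names only.
def command_to_action(command, detected_objects):
    """Return (action, target) or None if the command cannot be grounded."""
    words = command.lower().split()
    verbs = {"pick": "pick_up", "grab": "pick_up", "give": "hand_over",
             "hand": "hand_over", "place": "place", "put": "place"}
    action = next((verbs[w] for w in words if w in verbs), None)
    target = next((obj for obj in detected_objects
                   if obj.lower() in command.lower()), None)
    if action and target:
        return action, target
    return None


print(command_to_action("please hand me the banana",
                        ["banana", "mug", "block"]))
# -> ('hand_over', 'banana')
```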

The project team also deployed a version of the demonstrator at the Monash node and open-sourced a software package to interface with the Franka Emika Panda robot.
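The open-sourced interface package is not named in this report. As a related illustration only, the Robotics Toolbox for Python mentioned in the Project Leader profile ships a Franka Emika Panda model that can be used to compute the robot’s kinematics offline.

```python
# Illustration only: the open-sourced Panda interface package is not named in
# this report. The Robotics Toolbox for Python bundles a Panda model that can
# be used for offline kinematic calculations.
import roboticstoolbox as rtb

panda = rtb.models.Panda()    # Panda kinematic model bundled with the toolbox
print(panda)                  # summary of links and joints

Te = panda.fkine(panda.qr)    # forward kinematics at the "ready" configuration
print(Te)                     # end-effector pose as an SE(3) transform

J = panda.jacob0(panda.qr)    # 6x7 manipulator Jacobian in the world frame
print(J.shape)
```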