2019 Annual Report

Project Aim

Creating robots that see is core to the Centre’s mission. The best way to demonstrate that a robot can see is for it to perform a useful everyday action that requires hand-eye coordination.

This demonstrator takes the form of a robotic workstation where we can showcase the Centre’s capability in vision-enabled robotic grasping in a clear and compelling way to a general audience. The everyday tasks it aims to replicate are picking up everyday objects, placing objects, and exchanging objects with a person. Key focuses for 2020 are having the robot receive an item from a person and hand an item to a person, and implementing the demonstrations on a mobile manipulation platform.

For robots, grasping and manipulation are hard. The focus of our research is for robots to master manipulation in unstructured and dynamic environments that reflect the unpredictability of the real world. To achieve this, they need to be able to integrate what they see with how they move. The result will be robots that can operate more effectively and are robust enough to handle new tasks, new objects and new environments.


Key Results

The project team implemented a generalised interface to the Franka-Emika Panda robot that allows for position control and velocity control with compliance and joint limits. The implementation has enabled our research students to more easily access the advanced capabilities of this robot. The code has been open sourced to the global research community and has also been installed on Panda robots at the Centre’s Monash node.
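As a rough illustration of one safeguard such an interface provides, the sketch below shows how a desired joint-velocity command could be scaled so that no joint exceeds its velocity limit. This is a minimal sketch under assumed names and illustrative limit values; it is not the open-sourced code itself.

import numpy as np

# Illustrative per-joint velocity limits (rad/s) for a 7-DoF arm; a real
# interface would read these from the robot description rather than
# hard-coding them.
VELOCITY_LIMITS = np.array([2.1, 2.1, 2.1, 2.1, 2.6, 2.6, 2.6])

def clamp_joint_velocities(qd_desired, limits=VELOCITY_LIMITS):
    """Uniformly scale a joint-velocity command so no joint exceeds its
    limit, preserving the direction of motion."""
    qd_desired = np.asarray(qd_desired, dtype=float)
    scale = np.max(np.abs(qd_desired) / limits)
    return qd_desired / scale if scale > 1.0 else qd_desired

if __name__ == "__main__":
    # A command that would over-speed joint 1 is scaled back uniformly.
    print(clamp_joint_velocities([3.0, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0]))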

Within the Centre’s Manipulation & Vision project, PhD Researcher Doug Morrison has developed a generative grasping convolutional neural network (GG-CNN) which predicts a pixel-wise grasp quality that can be deployed in closed-loop grasping scenarios. The network has achieved excellent results in grasping, particularly in cluttered scenes, with an 84% grasp success rate on a set of previously unseen objects and 94% on household items. His paper with co-authors Centre Director Peter Corke and Research Fellow Juxi Leitner, “Learning robust, real-time, reactive robotic grasping”, was published in The International Journal of Robotics Research.

The project team ported GG-CNN to the Centre’s CloudVis service, our cloud computer vision platform which makes some of the most recent developments in computer vision available for general use. This allows the grasp planner to run on a low-end computer without a GPU, which is important in making the demonstrator easy to run on any computer.
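As a minimal sketch of how pixel-wise maps of this kind are typically reduced to a single grasp, the snippet below selects the highest-quality pixel and decodes the grasp angle from cos/sin-encoded angle maps. The function name and exact post-processing are assumptions and may differ from the released implementation.

import numpy as np

def best_grasp_from_maps(quality, cos2t, sin2t, width):
    """Return an image-space grasp (row, col, angle, gripper width) from
    GG-CNN-style output maps. The angle is encoded as cos(2*theta) and
    sin(2*theta) so that it wraps correctly."""
    row, col = np.unravel_index(np.argmax(quality), quality.shape)
    angle = 0.5 * np.arctan2(sin2t[row, col], cos2t[row, col])
    return row, col, angle, width[row, col]

if __name__ == "__main__":
    # Toy 4x4 "network outputs" just to exercise the function.
    rng = np.random.default_rng(0)
    q, c, s, w = (rng.random((4, 4)) for _ in range(4))
    print(best_grasp_from_maps(q, c, s, w))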

A tabletop demonstrator for GG-CNN was created that allows an unskilled user to command the robot to pick up and bin everyday objects placed on the tabletop. This demonstrator uses a Franka-Emika Panda arm with visual input from an end-effector-mounted RGB-D camera. The demonstration is an exemplar of many real-world applications that require grasping complex objects in cluttered environments.
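The demonstrator logic can be pictured as a simple sense-plan-act loop. The sketch below is a hypothetical outline of that loop; get_depth_image, plan_grasp, execute_grasp and drop_in_bin are placeholder names standing in for the camera, grasp planner and arm interfaces, not the demonstrator’s actual API.

# Hypothetical pick-and-bin loop for the tabletop demonstrator.
def pick_and_bin(get_depth_image, plan_grasp, execute_grasp, drop_in_bin,
                 quality_threshold=0.5, max_attempts=20):
    for _ in range(max_attempts):
        depth = get_depth_image()           # sense: end-effector RGB-D camera
        grasp, quality = plan_grasp(depth)  # plan: e.g. a GG-CNN / CloudVis call
        if quality < quality_threshold:
            break                           # nothing graspable left on the table
        if execute_grasp(grasp):            # act: closed-loop grasp attempt
            drop_in_bin()

if __name__ == "__main__":
    # Dummy stand-ins so the loop can be exercised without hardware.
    pick_and_bin(get_depth_image=lambda: None,
                 plan_grasp=lambda depth: ((0, 0, 0.0, 0.05), 0.9),
                 execute_grasp=lambda grasp: True,
                 drop_in_bin=lambda: print("object binned"),
                 max_attempts=2)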

Research Engineer Gavin Suddrey developed a touchscreen-driven demonstrator front end that manages a complete Robot Operating System startup and shutdown and provides a touch-based Graphical User Interface (GUI) for individual demonstrator applications. This means that the demonstrations can be run anywhere and anytime without requiring detailed knowledge of robotic software. The applications are modular and can be installed into and updated within the demonstrator framework by end-users across the Centre.
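One way such a modular framework can work is for each application to register a small descriptor that the front end lists and launches from the touchscreen. The sketch below illustrates that pattern; the DemoApp fields and register function are hypothetical, not the framework’s actual interface.

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class DemoApp:
    name: str                   # label shown on the touchscreen GUI
    description: str            # one-line blurb for the operator
    launch: Callable[[], None]  # starts the app's nodes and logic

REGISTRY: Dict[str, DemoApp] = {}

def register(app: DemoApp) -> None:
    """Installed applications register themselves so the GUI can list
    and launch them without code changes to the front end."""
    REGISTRY[app.name] = app

if __name__ == "__main__":
    register(DemoApp("Tabletop grasping", "Pick up and bin everyday objects",
                     launch=lambda: print("launching tabletop grasping demo")))
    for app in REGISTRY.values():
        print(app.name, "-", app.description)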

A prototype tabletop demonstrator for valve turning was also created. It changes the state of a user-selected valve (from open to closed, or closed to open) using compliant motion guided by input from an end-effector-mounted RGB-D camera.


Activity Plan for 2020

  • Open source the motion control software developed for the Franka-Emika Panda Robot.
  • Integrate a comprehensive tabletop manipulation demonstrator that incorporates technologies from across the Centre, from vision-based grasp planning to interaction with people through gesture and language.
  • Deploy the demonstrator interface on the mobile manipulation platform and demonstrate the ability to move objects between shelves and tables.