2019 Annual Report

For our final year of operation, the Centre has streamlined its 2020 activity plan around three key strategic priorities: Culture, Engage and Science.

CULTURE

We are creating a vibrant, energetic, future-focused and collaborative robotic vision community that is developing knowledge leaders for both industry and academia.

STRATEGIC OBJECTIVES

  • Establish and maintain an exciting, high-energy collaborative atmosphere that supports world-class research.
  • Develop the next generation of robotic vision experts through effective recruitment and retention.
  • Ensure the Centre functions as a cohesive organisation of interactive, collaborative and highly effective research teams.

KEY TASKS

  • Maintain a full complement of Research Fellows.
  • Aim for 90 per cent retention of PhD enrolments.
  • Implement initiatives and monitor progress towards the Centre’s Gender KPI targets.
  • Provide knowledge leadership training to all early career researchers to assist with their career and professional development.
  • Hold the Centre’s final annual symposium – RoboVis – to celebrate the Centre’s achievements.
  • Hold a Research Project Planning Day to finalise research project milestones in the Centre’s final year.
  • Undertake two final project review rounds ensuring project milestones are completed by the Centre’s end date.
  • Hold fortnightly Centre Executive Committee meetings and three Centre Advisory Board Meetings.

ENGAGE

We engage with communities about the potential of robotic vision technologies by sharing our expertise and providing access to robotic vision resources.

STRATEGIC OBJECTIVES

  • Engage with people about robotic vision technologies and the impact these will have on society and the way we work.
  • Identify and engage with key stakeholders on the potential applications of robotic vision.
  • Establish vibrant national and international robotic vision communities.
  • Increase inclusive robotic vision educational opportunities.
  • Connect research organisations, governments, industry and the private sector to build critical mass in robotic vision.
  • Demonstrate how robotic vision can solve challenging problems in innovative ways and help transform the world.

KEY TASKS

  • Share our achievements and research discoveries through news stories via a range of traditional, online and social media platforms.
  • Host visits and tours of Centre laboratories to educate key groups (e.g. schools, government and industry) about robotic vision.
  • Partner with industry to secure a future for the Robotic Vision Summer School after the Centre ends.
  • Investigate the creation of a Robotic Vision Association as a vehicle to continue the research network and collaboration established by the Centre beyond our end date.

SCIENCE

We are leading the world in transformational research in the new field of robotic vision.

STRATEGIC OBJECTIVES

  • Create robots that see and understand their environment.
  • Deliver internationally recognised research in robotic vision.
  • Create and implement projects based on collaboration and innovation that enhance research outcomes.

KEY TASKS

Research Project: Scene Understanding

  • Incorporate ideas from the Learning research project on representing uncertainty in deep models to enable fusion of local dense geometric maps (a minimal fusion sketch follows this list).
  • Integrate dynamic models of motion into the project’s Object-based SLAM system, and use this for effective planning in the face of dynamic change.
  • Develop two open-source demonstrators of the project’s geometric and semantic scene understanding capability:
      1. object-based SLAM in a dynamic environment; and
      2. end-to-end self-trained visual odometry and dense mapping SLAM.
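
The first task above concerns fusing local dense geometric maps while respecting the uncertainty a deep model attaches to them. A minimal sketch of one standard approach, inverse-variance weighted fusion of aligned depth maps, is given below; the function and variable names are illustrative assumptions, not the project’s implementation.

```python
import numpy as np

def fuse_depth_maps(depth_a, var_a, depth_b, var_b):
    """Fuse two aligned dense depth maps using inverse-variance weighting.

    Each map carries a per-pixel uncertainty (variance), e.g. predicted by a
    deep network. Pixels with lower variance contribute more to the fused map.
    """
    w_a = 1.0 / var_a
    w_b = 1.0 / var_b
    fused_depth = (w_a * depth_a + w_b * depth_b) / (w_a + w_b)
    fused_var = 1.0 / (w_a + w_b)   # fused uncertainty shrinks as evidence accumulates
    return fused_depth, fused_var

# Toy example: two noisy 4x4 depth maps with different confidence levels
depth_a = np.full((4, 4), 2.0)
depth_b = np.full((4, 4), 2.4)
var_a = np.full((4, 4), 0.01)   # confident measurement
var_b = np.full((4, 4), 0.09)   # less confident measurement
fused, fused_var = fuse_depth_maps(depth_a, var_a, depth_b, var_b)
```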

Research Project: Vision and Language

  • Develop a robust, state-of-the-art model for vision-and-language navigation and for our new task REVERIE (Remote Embodied Visual Referring Expression in Real Indoor Environments).
  • Develop a demonstrator on the robotic arm located at the Centre’s University of Adelaide node. The team has previously demonstrated V2L technology on the Pepper robot; this demonstrator will extend that work, with the aim of enabling a robotic arm to follow novel natural language instructions.
  • Develop technology that enables a robot to identify the information it needs in order to specify, and then complete, its task. Moving from VQA into Visual Dialogue will provide the capability to ask questions that seek the information necessary to complete a task, and to recognise when enough information has been gathered and an action should be taken, as sketched below.
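
The move from VQA to Visual Dialogue amounts to a loop in which the robot asks clarifying questions until it is confident enough to act. The sketch below illustrates that control flow only; the interfaces (vqa_model, ask_user, plan_action) and the confidence threshold are hypothetical placeholders, not Centre code.

```python
# Hypothetical control loop for an instruction-following robot that asks
# clarifying questions until it is confident enough to act.
CONFIDENCE_THRESHOLD = 0.9   # assumed value for illustration

def resolve_and_act(instruction, observation, vqa_model, ask_user, plan_action):
    """Gather missing information through dialogue, then execute the task."""
    context = {"instruction": instruction}
    for _ in range(5):                         # cap the number of clarifying questions
        target, confidence = vqa_model.ground(instruction, observation, context)
        if confidence >= CONFIDENCE_THRESHOLD:
            return plan_action(target)         # enough information has been gathered
        question = vqa_model.generate_question(instruction, observation, context)
        context[question] = ask_user(question) # e.g. "Which cup, the red or the blue one?"
    raise RuntimeError("Could not resolve the instruction within the dialogue budget")
```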

Research Project: Manipulation and Vision

  • Propose a dataset of 2,000+ 3D object models, methodically generated to be diverse in shape complexity and grasping difficulty, intended as a universal benchmarking tool for robotic grasping research.
  • Develop a mobile manipulator or prototype ‘robotic butler’ that can perform a range of real-world tasks (primarily domestic chores).
  • Develop a system that will allow a robot to interact with humans, handing over a range of everyday objects.
  • Develop methods for robots to successfully (and correctly) place objects on a flat surface such as a table or shelf, for example placing a bottle of water upright rather than on its side. This forms part of the project’s grasping with intent work; a minimal placement sketch follows this list.
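
For the placement task in the last bullet, one minimal geometric building block is computing the rotation that aligns an object’s known upright axis with the support surface normal. The sketch below assumes the object’s canonical "up" direction is available from its model or estimated pose; it is an illustration, not the project’s method.

```python
import numpy as np

def upright_placement_rotation(object_up_axis, surface_normal):
    """Rotation matrix aligning a known object 'up' axis with the surface normal.

    Assumes the object's canonical upright direction (e.g. a bottle's long axis)
    is available from its model or estimated pose.
    """
    a = object_up_axis / np.linalg.norm(object_up_axis)
    b = surface_normal / np.linalg.norm(surface_normal)
    v = np.cross(a, b)
    c = np.dot(a, b)
    if np.isclose(c, -1.0):            # axes opposite: rotate 180 deg about any perpendicular
        perp = np.eye(3)[np.argmin(np.abs(a))]
        v = np.cross(a, perp)
        v /= np.linalg.norm(v)
        return 2.0 * np.outer(v, v) - np.eye(3)
    vx = np.array([[0, -v[2], v[1]],
                   [v[2], 0, -v[0]],
                   [-v[1], v[0], 0]])   # skew-symmetric cross-product matrix
    return np.eye(3) + vx + vx @ vx / (1.0 + c)   # Rodrigues formula rotating a onto b
```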

Research Project: Robots, Humans and Action

  • Model interaction between humans and objects (initially in images and then extended to video).
  • Forecast human pose in order to (a) predict object interaction and (b) anticipate activity within a known context.
  • Automatically learn activities as grammars/state machines from video that can be understood by a robot.
  • Demonstrate real-time 3D pose estimation and tracking of human joints on video, and render the human skeleton from an arbitrary viewpoint (a minimal projection sketch follows this list).
  • Deliver a project demonstrator showing monitoring and understanding of a person assembling a piece of IKEA furniture.
  • Demonstrate robot replication of a sequence of human actions to complete a simple task.
  • Demonstrate human-robot cooperation in completing a task, where the robot monitors progress, predicts future human actions and provides guidance, hands over the next tool or part required, or holds a part in place while the human performs an action, with the ability to recover from unexpected actions.
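
For the 3D pose estimation and rendering task above, the "arbitrary viewpoint" step reduces to projecting the estimated 3D joints through a virtual pinhole camera. The sketch below shows that projection; the joint indices and bone pairs are illustrative and do not correspond to the project’s skeleton model.

```python
import numpy as np

# Illustrative bone connectivity, e.g. a hip -> knee -> ankle chain.
BONES = [(0, 1), (1, 2), (2, 3)]

def project_joints(joints_3d, R, t, K):
    """Project Nx3 world-frame joint positions into image pixels for camera (R, t, K)."""
    cam = (R @ joints_3d.T + t.reshape(3, 1)).T   # world -> camera frame
    uv = (K @ cam.T).T                            # pinhole projection
    return uv[:, :2] / uv[:, 2:3]                 # normalise by depth

def skeleton_segments(joints_2d):
    """Return one 2D line segment per bone, ready to draw over the rendered view."""
    return [(joints_2d[i], joints_2d[j]) for i, j in BONES]
```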

Research Project: Fast Visual Motion Control

  • Develop a suite of metrics and problems to demonstrate the robustness of real-world operation of geometric SLAM algorithms.
  • Develop a suite of algorithms for VO/VIO/SLAM based on these geometries. Distribute open-source code and demonstrate this code on aerial vehicles integrated with the ArduPilot software community.
  • Further develop the filter solutions for event/frame cameras, as sketched below. Develop open-source code for high-quality image reconstruction, optic flow reconstruction, and robust feature tracking algorithms.
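
The event/frame filtering task can be illustrated with a greatly simplified discrete-time complementary filter: events integrate high-rate brightness changes while frames anchor the absolute intensity. The contrast threshold and gain below are placeholder values, and the code is a sketch of the general idea rather than the Centre’s filter.

```python
import numpy as np

CONTRAST_THRESHOLD = 0.1   # assumed log-intensity change encoded by a single event
GAIN = 2.0                 # assumed strength of the pull towards the latest frame

def update_with_event(log_image, x, y, polarity):
    """Integrate a single event (+1/-1 polarity) at pixel (x, y)."""
    log_image[y, x] += polarity * CONTRAST_THRESHOLD
    return log_image

def update_with_frame(log_image, log_frame, dt):
    """Blend the event-integrated estimate towards the latest camera frame."""
    alpha = 1.0 - np.exp(-GAIN * dt)
    return (1.0 - alpha) * log_image + alpha * log_frame
```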

Research Project: Robotic Vision Evaluation and Benchmarking

Robotic Vision Challenge

  • Establish a new competition and research challenge that focuses on robotic Scene Understanding and Semantic Object-based Mapping and SLAM.
  • Task competition participants to develop algorithms that enable a robot to explore an indoor environment, such as an apartment or office, and create a map that contains all the objects in the environment. The competition will enable researchers from around the world to compete and compare their algorithms.
  • Introduce the new competition to the research community at the IEEE International Conference on Robotics and Automation (ICRA).

BenchBot

  • Publicly release BenchBot, our new software framework that makes it easy to evaluate robotic vision algorithms in simulation and on real robots.
  • BenchBot is a core enabling component of the new Robotic Vision competition on robotic Scene Understanding and Semantic Object-based Mapping and SLAM to be presented at ICRA.
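
The intent of BenchBot, running the same algorithm against a common interface whether the back-end is a simulator or a real robot, can be illustrated with the hypothetical evaluation loop below. The class and method names are placeholders and do not reflect the actual BenchBot API.

```python
class SemanticMappingAgent:
    """Builds an object-level map from the observations a back-end provides."""

    def __init__(self):
        self.object_map = []        # detected objects with estimated 3D positions

    def act(self, observations):
        # Detect objects in the RGB-D observation, lift them to 3D using the
        # depth image and robot pose, then choose the next exploration action.
        # (Detection and mapping details are omitted in this sketch.)
        return {"action": "explore_next"}

def run_episode(backend, agent):
    """Run one evaluation episode and return the submitted object map for scoring."""
    observations = backend.reset()
    while not backend.episode_done():
        action = agent.act(observations)
        observations = backend.step(action)
    return backend.score(agent.object_map)
```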

Research Project: Learning

  • Learning Lightweight Neural Networks
  • Deep Declarative Networks
  • Generalised Zero-Shot Learning
      • Uncertainty for discovering/classifying new objects
      • Adapt metric learning for (G)ZSL, removing the need to build a classifier with a fixed number of outputs (see the sketch after this list)
      • Anomaly/novelty detection – failure prediction and performance measure
  • Semantic segmentation
      • Deployment of the efficient models on real-world robots
      • Uncertainty combined with ZSL for semantic segmentation
  • Effective Deep Learning Training
      • Bayesian meta-learning and meta-learning regularisation
      • Model compression (binary weights, information bottleneck to estimate model compressibility)
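
The metric-learning approach to (G)ZSL noted above replaces a fixed-output classifier with nearest-neighbour search against class prototypes in a shared embedding space, so unseen classes can be added simply by inserting their prototypes. A minimal sketch, with assumed inputs, is given below.

```python
import numpy as np

def nearest_prototype_classify(image_embedding, class_prototypes):
    """Assign an image to the class whose semantic prototype is closest.

    class_prototypes maps class name -> embedding vector (e.g. derived from
    attributes or word vectors). Because classification is nearest-neighbour
    search in the embedding space, unseen classes can be added by inserting
    their prototypes, with no fixed-size classifier to retrain.
    """
    names = list(class_prototypes.keys())
    protos = np.stack([class_prototypes[n] for n in names])
    protos = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    query = image_embedding / np.linalg.norm(image_embedding)
    scores = protos @ query                       # cosine similarity to each prototype
    return names[int(np.argmax(scores))], float(scores.max())
```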

Demonstrator Project: Self-Driving Cars

  • Continue to support Centre Research Projects as a key demonstrator for testing and applying research discoveries.

Demonstrator Project: Manipulation

  • Open-source the motion control software developed for the Franka Emika Panda robot.
  • Integrate a comprehensive table-top manipulation demonstrator that incorporates technologies from across the Centre, from vision-based grasp planning to interaction with people through gesture and language.
  • Deploy the demonstrator interface on the mobile manipulation platform and demonstrate the ability to move objects between shelves and tables.