
Robots that see to act and act to see

Overview


Vision & Action

The Vision and Action program will further our understanding of how robots visually interact with their environments. How a robotic system moves is integral to how it should process vision in order to inform future motion control and decision making. The program considers three key aspects of this scientific question: first, how to use vision as the primary sensor to manipulate objects in the real world; second, how to visually navigate in complex dynamic environments; and finally, how to maximise a robot's understanding of the environment through motion of the camera. In the first, the camera motion is separate from the robotic manipulation task; in the second, the camera is attached to a moving robot and its motion is the task; and in the third, the camera motion is in addition to the other robotic actuation undertaken for the task. Solving these challenges will make a substantive contribution to the effective deployment of robotic vision in real-world applications.

Visual control of robots will allow the manipulation of real objects and will push the boundaries of speed, coordination and complexity.
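To make the control problem concrete, the classical machinery behind this kind of vision-driven motion is image-based visual servoing (IBVS), in which a camera velocity is computed directly from the error between observed and desired image features, v = -λ L⁺ (s - s*). The sketch below is a minimal NumPy illustration of that textbook control law, not the program's implementation; the feature positions, depths and gain are made-up example values.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Standard 2x6 interaction (image Jacobian) matrix for one
    normalised image point (x, y) at depth Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) that drives the
    observed features towards their desired image positions."""
    error = (features - desired).ravel()          # stacked feature error
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    return -gain * np.linalg.pinv(L) @ error      # v = -lambda * pinv(L) @ e

# Four observed point features vs. their desired positions
# (normalised image coordinates); point depths are assumed known here.
s = np.array([[0.12, 0.05], [-0.10, 0.06], [-0.11, -0.09], [0.13, -0.08]])
s_star = np.array([[0.10, 0.10], [-0.10, 0.10], [-0.10, -0.10], [0.10, -0.10]])
Z = np.array([1.0, 1.0, 1.0, 1.0])

print(ibvs_velocity(s, s_star, Z))
```

In practice the point depths Z must be estimated rather than assumed, and the gain and feature set are tuned per task; handling that estimation robustly is part of what makes real-world visual control hard.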

People


  • Robert Mahony (Australian National University): ANU Node Leader and Chief Investigator, Vision & Action Program Leader
  • Peter Corke (Queensland University of Technology): Centre Director, Chief Investigator, QUT Node Leader, AA3 Project Leader
  • Jürgen “Juxi” Leitner (Queensland University of Technology): Research Fellow, VA1 Project Leader
  • Chunhua Shen
  • Francois Chaumette
  • Fangyi Zhang
  • Adam Tow
  • Chris Lehnert
  • Jonathan Roberts
  • Jochen Trumpf
  • Sean O’Brien
  • Dorian Tsai
  • Alex Martin

Projects


VA1: Learning Robust Hand-eye Coordination for Grasping in Novel Environments


Ongoing

Jürgen “Juxi” Leitner, Robert Mahony, Peter Corke, Chunhua Shen, Francois Chaumette, Fangyi Zhang, Adam Tow, Chris Lehnert

Hand-eye coordination in complex visual environments involves developing robust motion control of robotic platforms, based on vision data, that can deal with the variation and complexity encountered in real-world tasks. This project aims to go beyond engineered visual features and engineered environments to develop demonstrator systems that allow manipulation of real-world objects like capsicums, cups, pens and tools. A key aspect of the VA1 project is robustness: developing systems and architectures that can deal with a wide variety of operating conditions and can adapt easily to new tasks, new objects and new environments. In 2017 the Centre’s work on enabling robots to pick a variety of objects in complex environments will continue. A highlight will be participation in the 2017 Amazon Robotics Challenge in July. In addition, the aim is to push beyond the state of the art in warehouse automation and shelf picking, facilitated by robot learning from visual demonstrations by human operators.

juxi.leitner@roboticvision.org
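As an illustration of the perceive-then-act loop such a hand-eye system runs, here is a minimal sketch: detect a grasp point in the image, back-project it to a 3D target in the camera frame, and hand that target to the motion controller. `detect_grasp` is a hypothetical placeholder for a learned grasp detector, and the pinhole intrinsics are assumed example values; this is not the VA1 codebase.

```python
import numpy as np

def detect_grasp(rgb, depth):
    """Hypothetical stand-in for a learned grasp detector: returns a
    pixel location, a gripper angle and a quality score. A real system
    would run a trained network over the RGB-D input here."""
    quality = np.random.rand(*depth.shape)    # placeholder quality map
    v, u = np.unravel_index(np.argmax(quality), quality.shape)
    return (u, v), 0.0, quality[v, u]         # angle fixed at 0 for the sketch

def pixel_to_camera(u, v, z, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a pixel with depth z (metres) to a 3D point in the
    camera frame, using assumed pinhole intrinsics."""
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

# Stand-in camera frames; a real loop would re-capture and re-detect
# continuously so the grasp adapts as the scene or the estimate changes.
rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 0.6)

(u, v), angle, score = detect_grasp(rgb, depth)
target = pixel_to_camera(u, v, depth[v, u])
print(f"grasp pixel ({u}, {v}), camera-frame target {target}, score {score:.2f}")
```

Running detection inside the control loop, rather than once up front, is one way to buy robustness: errors in calibration or object pose can be corrected by fresh visual feedback instead of accumulating.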

VA2: High Performance Visual Control and Navigation in Complex Cluttered Environments


Ongoing

Robert Mahony, Peter Corke, Jonathan Roberts, Jochen Trumpf, Sean O’Brien, Dorian Tsai, Alex Martin

Efficient and effective manipulation in a complex cluttered environment involves planning ahead and the ability to move quickly and safely. The project will consider real-world control scenarios where there are multiple options for achieving the desired task, including motions that interact with the environment; for example, moving a glass out of the way in order to reach a bottle, or picking up a sequence of items from a cluttered workspace in an order learned by the algorithm rather than predetermined by the engineer. Achieving high performance on such tasks involves two separate capabilities:

  • An integrated control and sensing system that allows the robot to move quickly and robustly through a cluttered environment; in particular, algorithms for high-speed obstacle detection and avoidance control strategies must be developed.
  • An integrated decision and planning capability that allows the robot to determine (and execute) an effective and efficient solution to a complex task involving multiple components.

The defining aspect of this project will be to develop solutions to these capabilities that exploit vision sensing in a fundamental manner. Thus, the obstacle avoidance and motion control system will be vision-based, with additional sensor modalities as appropriate for given applications. Similarly, the decision and planning capability will be vision-based and will exploit semantic information and other cues as appropriate.

rob.mahony@roboticvision.org
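As a toy example of the vision-based obstacle avoidance ingredient described above, the sketch below computes reactive steering directly from a forward-facing depth image: forward speed drops as the nearest obstacle approaches, and the robot yaws towards the half of the image with more free space. This is a deliberately simple stand-in under assumed thresholds and camera geometry, not the VA2 algorithm.

```python
import numpy as np

def avoidance_velocity(depth, v_max=1.0, safe_dist=1.0, min_dist=0.3):
    """Toy reactive policy from a single depth image (metres):
    returns (forward speed, yaw rate)."""
    nearest = depth.min()
    # Scale forward speed from 0 (at min_dist) up to v_max (at safe_dist).
    forward = v_max * np.clip((nearest - min_dist) / (safe_dist - min_dist),
                              0.0, 1.0)
    # Yaw towards whichever image half has more free space (larger mean
    # depth); steer harder the more the obstacle forces us to slow down.
    left, right = np.hsplit(depth, 2)
    yaw_rate = 0.5 * np.sign(left.mean() - right.mean()) * (1.0 - forward / v_max)
    return forward, yaw_rate

depth = np.full((480, 640), 3.0)   # open scene 3 m away...
depth[:, 400:] = 0.8               # ...with a simulated obstacle on the right
print(avoidance_velocity(depth))   # slows down and yaws left (positive yaw)
```

A real system would replace this with high-rate obstacle detection and avoidance control fused with the planning layer; the point of the sketch is only that both speed and steering can be derived from the image itself.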

Amazon Picking Challenge


2016

Jürgen “Juxi” Leitner, Niko Sünderhauf, Chris Lehnert, Chris McCool, Steve Martin, Trung Pham, Adam Tow, Liao ‘Leo’ Wu, Fangyi Zhang, Ben Upcroft, Peter Corke

Centre researchers had the chance to show off their research on the world stage at the 2016 Amazon Picking Challenge. A team from the Australian Centre for Robotic Vision was one of only 16 teams from around the world selected to compete. Amazon is already leading the world in logistics robotics following its purchase of the warehousing automation company Kiva Systems for USD$775 million in 2012. Amazon uses more than 30,000 robots in its global network of fulfilment (distribution) centres, but its robots have limited capability. Despite improvements in vision and manipulation capability, robots are still no match for humans when it comes to identifying and picking things from shelves, hence the origin of the challenge. Can robots automate picking in an unstructured environment?

The 2016 challenge was held in conjunction with RoboCup 2016 in Leipzig, Germany. Centre Research Fellow Dr Juxi Leitner led the team. “We saw the Amazon Picking Challenge as a very interesting problem for us. It allowed us to bring two areas of research that we think belong together – robotics and computer vision – to create something larger than the sum of those two parts,” Juxi said. Juxi and members of the Centre team saw the Amazon Challenge as a chance to really advance the Centre’s mission of creating robots that see and understand a task in the real-world environment of the warehouse. “Our hope was to take what we learnt from this and start applying it to other areas we’re trying to solve in the Centre, like agricultural, infrastructure and medical applications,” Juxi said.

The team found out in early 2016 it had been selected, and only had about five months to get a system up and running for the competition. “We had to build a lot and come up with a system in that time frame that really worked,” said Dr Niko Sünderhauf, a Research Fellow with the Centre. There were a lot of challenges to overcome in a short amount of time. The team was using a Baxter robotic platform to compete. “To run the Amazon Picking Challenge and the whole code on the system, we had multiple computers,” Juxi said. “There is one in Baxter, and there are two other computers, including one that just runs the visual perception. So, there was quite a lot of infrastructure and it took actually quite some time to make all those systems work with each other.” Eighteen members from the Centre were part of the team, with six of them travelling to Germany for the competition.

Leading up to the event, teams were given a list of items that they might be asked to pick during the competition. “You’ve got no idea on the day what items you will actually be asked to pick out of the shelf, you have no idea what arrangement those items will be in, and what the lighting conditions will be like when you get there,” said Adam Tow, a Centre PhD student who was a member of the team.

The challenge itself was broken into two days of competition. The first day was the stowing task, where teams were asked to pick objects out of a tote and place them in a shelf. The Centre team was able to pick three items out of the tote, successfully placing two in the shelf. The second day involved the picking task, with the Centre team placing sixth overall with a robust solution to the vision problem that held up regardless of lighting conditions, something the other teams found challenging. “We picked four objects from the shelf and we put them into the tray. It was really exciting to see the robot do what it was supposed to do,” Juxi said.

“I think for all of us, it was a giant learning curve,” Adam said. “We were absolutely stoked with how we went on the day of the picking task, finishing as high as we did with the calibre of teams that were there.” For team members, the strong showing validated not only their hard work, but their research as well. “From a scientific point of view, the system that we built, we will continue to work on,” Juxi said. “This will be the baseline for our future research, especially focusing on manipulation, agriculture, and cluttered environments like shelves.” That research and hard work is continuing in 2017, with the Centre selected to compete in the Amazon Robotics Challenge, being held in conjunction with RoboCup 2017 in Nagoya, Japan at the end of July.

juxi.leitner@roboticvision.org

Australian Centre for Robotic Vision
2 George Street Brisbane, 4001
+61 7 3138 7549