Robots that see to act and act to see
The Vision and Action program will further the understanding of how robots visually interact with their environments. How a robotic system moves is integral to how it processes vision in order to inform future motion control and decision making. This project considers three key aspects of this scientific question: firstly, how to use vision as the primary sensor to manipulate objects in the real world; secondly, how to visually navigate in complex dynamic environments; and finally, how to maximise a robot's understanding of the environment through motion of the camera. In the first, the camera motion is separate from the robotic manipulation task; in the second, the camera is attached to a moving robot and its motion is the task; and in the third, the camera motion is in addition to the other robotic actuation undertaken for the task. Solving these challenges will make a substantive contribution to the effective deployment of robotic vision in real-world applications.
Vision control of robots will allow the manipulation of real objects and will push the boundaries of speed, coordination and complexity.
Jürgen “Juxi” Leitner, Robert Mahony, Chunhua Shen, Peter Corke, Francois Chaumette, Chris Lehnert, Fangyi Zhang, Zheyu Zhuang, Douglas Morrison, Sean McMahon, Norton Kelly-Boxall, Adam Tow
Hand-eye coordination in complex visual environments involves developing robust motion control of robotic platforms based on vision data that is capable of dealing with the variation and complexities encountered in real-world tasks. This project aims to go beyond engineered visual features and engineered environments to develop demonstrator systems that allow manipulation of real-world objects like capsicums, cups, pens and tools. A key aspect of the VA1 project is robustness: to develop systems and architectures that can deal with a wide variety of operating conditions and can adapt easily to new tasks, new objects and new environments. Robots moving in complex environments must manage the interaction between their state, the objects they interact with to perform a given task, and the general environment. The aim is to create more adaptable, robust solutions by building on previous experiences. These might be experiences previously encountered by the robot itself, e.g. during exploration or previous task executions, but might also be transferred from other systems or even human demonstrations. This project considers using some of the recent advances in deep learning to build robust visual perception systems capable of modelling complex and changing environments. The project's scientific goals involve understanding: 1) how to scope learning algorithms and architectures for hand-eye coordination tasks in complex visual environments; 2) how to interface the resulting robust visual perception system into a control framework and provide analysis of the system performance; 3) how to transfer working architectures from simulation to the real world, and from task to task, efficiently and effectively. Our goal is to be the world leader in the research field of visually guided robotic manipulation.
Robert Mahony, Peter Corke, Jochen Trumpf, Jonathan Roberts, Jean-Luc Stevens, Sean O’Brien, Peter Kujala, Cedric Scheerlinck, Alex Martin, Dorian Tsai
Complex cluttered environments pose significant challenges for visual control of robotic systems. This project concerns the development of robust, vision-based control algorithms that are capable of delivering high-performance (super-human) control. Both manipulation and navigation tasks are of interest, and demonstrators in both fields will be considered. The project will focus on enabling robust and safe motion control that provides obstacle avoidance, and an integrated decision-planning system that allows direct physical interaction with the environment where appropriate, for example moving objects in order to achieve a task more efficiently. Efficient and effective manipulation in a complex cluttered environment involves planning ahead and the ability to move quickly and safely. The project will consider real-world control scenarios where there are multiple options to achieve the desired task, including motions that interact with the environment; for example, moving a glass out of the way in order to reach a bottle, or picking up a sequence of items from a cluttered workspace in an order learned by the algorithm rather than predetermined by the engineer. Achieving high performance on such tasks involves two separate capabilities: 1) an integrated control and sensing system that allows the robot to move quickly and robustly through a cluttered environment; in particular, algorithms for high-speed obstacle detection and avoidance control strategies must be developed; 2) an integrated decision and planning capability that allows the robot to determine (and execute) effective and efficient solutions to a complex task involving multiple components. The defining aspect of this project will be to develop solutions to these capabilities that exploit vision sensing in a fundamental manner.
Thus, the obstacle avoidance and motion control system is vision based, with additional sensor modalities as appropriate for given applications. Similarly, the decision and planning capability will also be vision based and will exploit semantic information and other cues as appropriate.
Jürgen “Juxi” Leitner, Niko Sünderhauf, Chris Lehnert, Chris McCool, Steve Martin, Trung Pham, Adam Tow, Liao ‘Leo’ Wu, Fangyi Zhang, Ben Upcroft, Peter Corke
Centre researchers had the chance to show off their research on the world stage at the 2016 Amazon Picking Challenge. A team from the Australian Centre for Robotic Vision was one of only 16 teams from around the world selected to compete. Amazon is already leading the world in logistics robotics following its USD$775 million purchase of the warehousing automation company Kiva Systems in 2012. Amazon uses more than 30,000 robots in its global network of fulfilment (distribution) centres, but its robots have limited capability. Despite improvements in vision and manipulation capability, robots are still no match for humans when it comes to identifying and picking things from shelves, hence the origin of the challenge. Can robots automate picking in an unstructured environment? The 2016 challenge was held in conjunction with RoboCup 2016 in Leipzig, Germany. Centre Research Fellow Dr Juxi Leitner led the team. “We saw the Amazon Picking Challenge as a very interesting problem for us. It allowed us to bring two areas of research that we think belong together – robotics and computer vision – to create something larger than the sum of those two parts,” Juxi said. Juxi and members of the Centre team saw the Amazon Challenge as a chance to really advance the Centre’s mission of creating robots that see and understand a task in the real-world environment of the warehouse. “Our hope was to take what we learnt from this and start applying it to other areas we’re trying to solve in the Centre, like agricultural, infrastructure and medical applications,” Juxi said. The team found out in early 2016 it was selected, and had only about five months to get a system up and running for the competition. “We had to build a lot and come up with a system in that time frame that really worked,” said Dr Niko Sünderhauf, a Research Fellow with the Centre. There were a lot of challenges to overcome in a short amount of time. The team was using a Baxter robotic platform to compete.
“To run the Amazon Picking Challenge and the whole code on the system, we had multiple computers,” Juxi said. “There is one in Baxter, and there are two other computers, including one that just runs the visual perception. So, there was quite a lot of infrastructure and it actually took quite some time to make all those systems work with each other.” Eighteen members from the Centre were part of the team, with six of them travelling to Germany for the competition. Leading up to the event, teams were given a list of items that they might be asked to pick during the competition. “You’ve got no idea on the day what items you will actually be asked to pick out of the shelf, you have no idea what arrangement those items will be in, and what the lighting conditions will be like when you get there,” said Adam Tow, a Centre PhD student who was a member of the team. The challenge itself was broken into two days of competitions. The first day was the stowing task, where teams were asked to pick objects out of a tote and place them in a shelf. The Centre team was able to pick three items out of the tote, successfully placing two in the shelf. The second day involved the picking task, with the Centre team placing sixth overall with a robust solution to the vision problem, which worked regardless of lighting conditions, something the other teams found challenging. “We picked four objects from the shelf and we put them into the tray. It was really exciting to see the robot do what it was supposed to do,” Juxi said. “I think for all of us, it was a giant learning curve,” Adam said. “We were absolutely stoked with how we went on the day of the picking task, finishing as high as we did with the calibre of teams that were there.” For team members, the strong showing validated not only their hard work, but their research as well. “From a scientific point of view, the system that we built, we will continue to work on,” Juxi said.
“This will be the baseline for our future research, especially focusing on manipulation, agriculture, and cluttered environments like shelves.” That research and hard work is continuing in 2017, with the Centre selected to compete in the Amazon Robotics Challenge, held in conjunction with RoboCup 2017 in Nagoya, Japan, at the end of July.
Australian Centre for Robotic Vision
2 George Street, Brisbane QLD 4001
+61 7 3138 7549