3 Oct 2017: The race is on to solve one of the most complex challenges in robotics: creating a robot capable of detecting and analysing multiple threats to the Great Barrier Reef, including early warning signs of coral bleaching, stress and pollution.
The key to protecting the Reef, and the 64,000 jobs and $6.4 billion it contributes to the Australian economy each year, lies in cracking the multiple challenges of robotic vision in a constantly changing underwater environment. The Australian Centre for Robotic Vision’s pioneering researchers are evolving their ground-breaking prototype reef robot, COTSbot, launched last year to visually recognise and record populations of the reef-eating crown-of-thorns starfish. They have now set their sights even higher and are determined to create an even more sophisticated patrol robot, RangerBot.
Creating a more complex robot presents even greater challenges, and at the heart of that complexity is working out how to help it “see” the underwater environment so that it recognises objects and records them accordingly. The robot needs to find its way in three dimensions through a massively complicated open environment full of fast-moving organic shapes, currents and living creatures, all while processing in real time on just 20 watts of power.
In 2016, Australian Centre for Robotic Vision researchers Matthew Dunbabin and Feras Dayoub successfully launched a world-first prototype reef robot, called COTSbot. COTSbot was programmed to visually recognise and record populations of crown-of-thorns starfish (COTS), a reef-eating coral predator that can reach plague proportions. It also had an automated system that could inject the detected starfish with a solution to control their numbers.
“To find crown-of-thorns starfish in a reef environment is incredibly difficult,” Dunbabin explains. “If a robot is looking for a chair, for instance, that chair will probably have a lot of context around it – in a room with a floor, flat walls and things. But if you put the chair in a forest, no part of that forest will ever look the same as any other. That’s the problem we have on coral reefs. We had to reduce that problem down to something that’s manageable.”
“We generated models of the starfish by training the robot on hundreds of thousands of images collected from many reefs under different lighting and visibility conditions. These models allow the robot to quickly and robustly detect the starfish in new, previously unvisited locations on the reef.”
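The varied lighting and visibility conditions Dunbabin describes are what make a detector robust in new locations; in practice such variation is often also simulated by augmenting training images. A minimal illustrative sketch of brightness and contrast augmentation in Python (an assumption for illustration, not the Centre's actual pipeline):

```python
import numpy as np

def augment_lighting(image, brightness=0.0, contrast=1.0):
    """Simulate a different lighting/visibility condition by scaling
    and shifting pixel values, then clipping back to the valid range.

    image: float array with values in [0, 1]
    brightness: additive shift; contrast: multiplicative scale
    """
    out = image * contrast + brightness
    return np.clip(out, 0.0, 1.0)

# Generate nine lighting variants of one (stand-in) reef photo
rng = np.random.default_rng(0)
image = rng.random((64, 64, 3))
variants = [
    augment_lighting(image, brightness=b, contrast=c)
    for b in (-0.2, 0.0, 0.2)
    for c in (0.8, 1.0, 1.2)
]
```

Each variant keeps the same starfish geometry while changing its apparent illumination, which is the property a detector needs to learn to ignore.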
“The aim of RangerBot is to make a more versatile version of COTSbot,” says Dunbabin. “The way to do this, and to reduce its cost significantly, is by making it operate entirely using computer vision. That’s what we’ve been focusing on: developing new and efficient algorithms for underwater navigation and obstacle avoidance. Obviously, we’ve already got the real-time COTS detection framework as a great foundation to work from.”
The primary problems, according to one of the Centre’s researchers, Hongdong Li, are the constantly changing nature of the reef and the interplay of light and water.
“What we’re trying to do is to give the RangerBot self-location vision and navigation ability using stereo vision,” he says. “The light underwater changes all the time plus there are fish swimming around. There’s sea grass and plenty of other objects which RangerBot needs to identify and process. COTSbot is able to see as a result of being trained on hundreds of thousands of images of starfish under different lighting and visibility conditions. RangerBot needs to recognise multiple objects which takes robotic vision to a whole new level.”
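The stereo vision Li mentions recovers depth from the horizontal disparity between two camera views: the closer an object, the further apart it appears in the left and right images. A simplified sketch of the standard relationship, assuming an ideal rectified stereo pair (the focal length and baseline values below are made up for illustration):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth in metres of a point seen by a rectified stereo pair.

    depth = f * B / d  -- larger disparity means the point is closer.
    disparity_px: pixel offset between left and right images
    focal_px: camera focal length in pixels
    baseline_m: distance between the two cameras in metres
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a visible point")
    return focal_px * baseline_m / disparity_px

# e.g. a 700 px focal length, 12 cm baseline, 42 px disparity
d = depth_from_disparity(42, focal_px=700, baseline_m=0.12)
```

Underwater, refraction, turbidity and shifting light make matching the same point between the two images far harder than this formula suggests, which is part of why Li describes it as a whole new level of difficulty.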
And RangerBot’s vision isn’t the only problem. Robots that navigate their way around homes and factories operate primarily on a two-dimensional plane, which means they can calculate their whereabouts using a relatively simple calculation, Li says.
“For an underwater robot, there are actually six dimensions to navigate,” Li says. “RangerBot needs to deal with the roll, pitch and yaw angles as well as the X, Y, and Z values.”
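The six values Li lists are the standard six-degree-of-freedom pose: three rotation angles and three translations, usually composed into a single rigid-body transform. A minimal sketch with NumPy (an illustration of the maths, not RangerBot’s actual code; the Z-Y-X rotation order is an assumption):

```python
import numpy as np

def pose_matrix(roll, pitch, yaw, x, y, z):
    """4x4 homogeneous transform from a 6-DOF pose (angles in radians)."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])   # roll
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])   # pitch
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])   # yaw
    T = np.eye(4)
    T[:3, :3] = Rz @ Ry @ Rx   # apply roll, then pitch, then yaw
    T[:3, 3] = (x, y, z)
    return T

# A pure 2 m vertical translation moves a point without rotating it
p = np.array([1.0, 0.0, 0.0, 1.0])
moved = pose_matrix(0, 0, 0, 0, 0, -2.0) @ p
```

Estimating all six values continuously from camera images alone, while currents push the robot around, is the core of the navigation problem Li describes.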
“Plus, unlike a robot in a factory environment where the objects to be navigated are often man-made and regular in shape, on the Great Barrier Reef, RangerBot will have to deal with odd-shaped objects. That means we also have to reconstruct fully irregular shapes and build them into the image learning and we need to create a robot capable of detecting these unusual shapes.”
Despite the complexities, the Centre intends to test five RangerBot prototypes in the next 12 months.
Media Contact: LJ Loch 0488038555 or firstname.lastname@example.org
About The Australian Centre for Robotic Vision
The Australian Centre for Robotic Vision has received $25.6 million in funding over seven years to form the largest collaborative group of its kind, generating internationally impactful science and new technologies that will transform important Australian industries and provide solutions to some of the hard challenges facing Australia and the globe.
Formed in 2014, the Australian Centre for Robotic Vision is the world’s first research centre specialising in robotic vision. Its researchers are on a mission to give robots the ability to see and understand, for the sustainable wellbeing of people and the environments we live in.
The Australian Centre for Robotic Vision has assembled an interdisciplinary research team from four leading Australian research universities: QUT, The University of Adelaide (UoA), The Australian National University (ANU), and Monash University, as well as CSIRO’s Data61 and overseas universities and research organisations including INRIA Rennes Bretagne, Georgia Institute of Technology, Imperial College London, the Swiss Federal Institute of Technology Zurich, and the University of Oxford.
Australian Centre for Robotic Vision
2 George Street, Brisbane QLD 4001
+61 7 3138 7549