Pepper visual navigation

A complete image-based topological navigation system for Pepper, based on the teach-and-repeat paradigm, has been developed. In the teaching phase, an operator leads Pepper around the space by holding the robot’s hand. Images of the environment are captured as the robot moves and stored as a teaching sequence. From this sequence, a small subset of images, known as the image memory, is selected. The image memory consists of reference images organized in an adjacency graph, which forms the topological representation of the environment.
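The memory-building step can be sketched as follows. This is a minimal illustration, not the actual Pepper pipeline: toy 2-D feature vectors stand in for real images, a distance threshold stands in for a proper image-similarity measure, and the graph simply chains consecutive references (a full system would also add loop-closure edges).

```python
import math

def select_image_memory(teach_sequence, threshold=0.5):
    """Keep a frame as a new reference image whenever it differs
    enough from the last stored reference. Here 'difference' is the
    Euclidean distance between toy feature vectors; a real system
    would compare image features instead."""
    memory = [teach_sequence[0]]
    for frame in teach_sequence[1:]:
        if math.dist(frame, memory[-1]) >= threshold:
            memory.append(frame)
    return memory

def build_adjacency_graph(memory):
    """Connect consecutive reference images with bidirectional edges,
    yielding the topological representation of the taught route."""
    adjacency = {i: [] for i in range(len(memory))}
    for i in range(len(memory) - 1):
        adjacency[i].append(i + 1)
        adjacency[i + 1].append(i)
    return adjacency

# A short teaching sequence of toy feature vectors.
teach = [(0.0, 0.0), (0.1, 0.0), (0.6, 0.0), (0.7, 0.1), (1.3, 0.2)]
memory = select_image_memory(teach, threshold=0.5)
adjacency = build_adjacency_graph(memory)
# Only 3 of the 5 frames are distinct enough to become references.
```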

During navigation, A* search finds the optimal path through the topological graph from the current location to the destination. The robot localizes within the graph by comparing the current view from its RGB camera against nearby reference images, and an Image-Based Visual Servoing (IBVS) control scheme drives the robot toward the next reference image along the planned path.
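The planning step is standard A* over the reference-image graph. The sketch below assumes each node carries a rough 2-D position (e.g. logged odometry from the teaching run) used for both edge cost and heuristic; the node names and positions are illustrative only.

```python
import heapq

def astar(graph, positions, start, goal):
    """A* over a topological graph. Edge cost and heuristic are both
    straight-line distance between the nodes' rough positions, so the
    heuristic is admissible and the returned path is optimal."""
    def dist(a, b):
        (x1, y1), (x2, y2) = positions[a], positions[b]
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5

    open_set = [(dist(start, goal), start, [start])]  # (f, node, path)
    g_cost = {start: 0.0}
    while open_set:
        _, node, path = heapq.heappop(open_set)
        if node == goal:
            return path
        for nbr in graph[node]:
            new_g = g_cost[node] + dist(node, nbr)
            if new_g < g_cost.get(nbr, float("inf")):
                g_cost[nbr] = new_g
                f = new_g + dist(nbr, goal)
                heapq.heappush(open_set, (f, nbr, path + [nbr]))
    return None  # goal unreachable

# Toy graph: two routes from A to D; the route through B is shorter.
node_pos = {"A": (0, 0), "B": (1, 0), "C": (0, 2), "D": (2, 0)}
topo_graph = {"A": ["B", "C"], "B": ["A", "D"],
              "C": ["A", "D"], "D": ["B", "C"]}
path = astar(topo_graph, node_pos, "A", "D")  # -> ["A", "B", "D"]
```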

Depth images update a local egocentric occupancy grid, and a second IBVS controller steers the robot toward local free space to avoid obstacles. The outputs of the two IBVS controllers are fused into the final velocity command for the robot. The core navigation module runs entirely onboard, without any external computing resources, and the approach requires neither a global metric map nor expensive sensors. We demonstrate real-time navigation with the Pepper robot in an indoor open-plan office environment without accurate metric mapping and localization.
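One simple way to realize such a fusion is a convex blend of the two velocity commands, weighted by how cluttered the local grid is. The weighting law below is an illustrative assumption, not Pepper's actual control law; the `(vx, vy, wz)` command layout matches the robot's holonomic base.

```python
def fuse_commands(vs_cmd, free_space_cmd, obstacle_weight):
    """Blend the visual-servoing command with the free-space command.
    obstacle_weight in [0, 1] would grow as obstacles get closer in
    the occupancy grid: 0 -> pure visual servoing, 1 -> pure avoidance.
    (Illustrative scheme, not the system's actual fusion rule.)"""
    w = max(0.0, min(1.0, obstacle_weight))
    return tuple((1 - w) * v + w * f for v, f in zip(vs_cmd, free_space_cmd))

# Visual servoing wants to drive straight ahead; the free-space
# controller wants to veer left around an obstacle. Commands are
# (vx, vy, wz): forward, lateral, and rotational velocity.
cmd = fuse_commands((0.3, 0.0, 0.0), (0.1, 0.2, 0.4), obstacle_weight=0.5)
# -> (0.2, 0.1, 0.2)
```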
