
Event

05 October 2018

Human-robot cooperation and collaboration in manipulation: advancements and challenges

Overview


Abstract:

Humans display collaborative manipulation in many tasks, such as handovers and the manipulation of objects too large and heavy for a single person. At the same time, robots are increasingly present in spaces shared with people, where collaborative skills can complement human workers’ capabilities and increase efficiency, e.g., lifting heavy loads, working directly alongside humans with no safety caging, assisting humans in complex and tedious tasks, and combining the benefits of fully manual assembly and fully automated manufacturing lines. To this end, forces, compliance, prediction and learning are critical for cooperative manipulation to succeed and for the interaction to be safe. Many current projects study these key aspects of manipulation (e.g., H2020 CogIMon, H2020 SARAFun, H2020 Handy), confirming that robotic manipulation, particularly in tasks shared with humans, is a highly relevant topic with deep resonance across society (towards Industry 4.0).

During this workshop, we will focus on how to achieve safe and efficient human-robot collaboration in manipulation tasks and discuss key questions such as: What makes human collaboration so successful, and how can it be transferred and replicated in robots? What is still missing for robotic manipulation to become truly collaborative, without separation and safety fencing? What are the main approaches to collaborative manipulation? Complementing humans in a shared workspace to accomplish a task more effectively raises several challenges. The robot should be intuitive and safe, both in its hardware and in its actions. It should recover and learn from errors. Based on a model of the task, it should coordinate its actions with its teammate’s through communication (e.g., motion, speech). In addition, humans and robots can take on different responsibilities: humans as supervisors providing information, instructions and decisions; humans and robots as peers working together at the same level to achieve a common goal; or robots as assistants where humans lead.

We aim to bring together academic researchers and industrial experts in key areas for collaborative grasping and manipulation, such as perception, control, learning, human studies and safety. We aspire to identify recent developments in these research areas, both in theory and in applications, discussing recent achievements, debating underlying assumptions, and identifying challenges for future progress.

 

List of invited speakers:

Andrea M. Zanchettin, Politecnico Milano (Italy)

Sami Haddadin, Hannover University (Germany)

Anthony Remazeilles, Tecnalia (Spain)

Michael Mistry, University of Edinburgh (UK)

Sylvain Calinon, IDIAP/EPFL (CH)

Yukie Nagai, National Institute of Information and Communications Technology (NICT) (JP)

This workshop will cover, but will not be limited to:

  • Collaborative manipulation;
  • Learning;
  • Learning for manipulation;
  • Human studies on grasping and manipulation;
  • Bi-manual manipulation for human-robot collaboration;
  • Multimodal human-robot interaction for collaboration;
  • Verbal, nonverbal, and co-verbal human-robot interaction for collaboration;
  • Safe human-robot collaboration;
  • Role of uncertainty in manipulation;
  • Uncertainty in human-robot interaction;
  • Intent reading and understanding;
  • Role of force and compliance in collaborative tasks;
  • Role of prediction in human and robotic manipulation.

Program


Time            Description

8:55 - 9:00     Introduction
9:00 - 9:25     Anthony Remazeilles
9:25 - 9:50     Saeed Abdolshah
9:50 - 10:15    Paolo Rocco
10:15 - 11:45   Poster Session / Coffee Break
11:45 - 12:10   Michael Mistry
12:10 - 12:35   Sylvain Calinon
12:35 - 13:00   Yukie Nagai
13:00 - 13:10   Closing remarks

Speakers


Paolo Rocco

Title: 
The digitalisation of the worker in the Factory 4.0
 
Abstract: 
Future manufacturing paradigms will require advanced flexibility and cognitive capabilities to respond to the increasing need for mass customisation. Production environments will be populated by humans and robots sharing the same workspace. Despite the advancements in technology, however, today’s collaborative robots are only able to operate safely next to human workers.
The proliferation of low-cost surveillance cameras, as well as wearable devices such as smart gloves or AR headsets, is gradually introducing the possibility of collecting data from human workers. Their position on the factory floor, together with the sequence of activities they perform, can help automation understand and reason about human behavior, thus putting human beings back at the centre of industrial production, aided by tools such as collaborative robots. This talk presents the latest developments on robotic systems that are not just capable of sharing their workspace with humans, but can ultimately assist them and anticipate their actions in manufacturing tasks.

 

Bio:

Since 1996 Dr. Rocco has been with the Department of Electronics, Information and Bioengineering of Politecnico di Milano, where he is currently Full Professor in Systems and Control. He teaches Automatic Control to Mechanical and Aerospace engineering students and Control of Industrial Robots to Automation engineering students.

Since January 2013 he has been serving as Chair of the BSc and MSc Programs in Automation and Control Engineering at Politecnico di Milano.

Dr. Rocco has served on the executive boards of SIDRA, the national society of Italian Professors in Automatic Control, and of ANIPLA, the Italian National Association for Automation.

At present he serves on the Board of Directors of euRobotics, the association of all stakeholders in robotics in Europe and the private part of the PPP (Public Private Partnership) SPARC.

He is a co-founder of Smart Robots, a spin-off company of Politecnico di Milano.

Saeed Abdolshah

Bio:

At the Technical University of Munich, Saeed investigates physical human-robot interaction from a biomechanical point of view.

Anthony Remazeilles

Title: 
Observing human assembly: lessons learned
 
Abstract:
Among all possible human-robot collaborations in an industrial context, the teaching of assembly tasks is very appealing. Indeed, with a simplified and more human-friendly task definition interface, collaborative robots would be well equipped to provide fast reconfiguration and cope with evolving consumer needs: shorter product lifetimes and more variability, requiring more frequent reprogramming and more customization. Looking at human capabilities, it is obvious that humans are very skilled at learning and reproducing new tasks, and are able to optimize their actions to perform them faster and better. Humans should therefore be a source of inspiration for the next generation of collaborative robots. In that context, we have been studying the kinetic and kinematic behavior of humans performing assembly tasks; in particular, we compared human strategies in unimanual and bimanual tasks to gain first insights into how this affects their efficiency. Turning to the transfer of knowledge to the robot, we investigated how a human demonstration can be used to correctly adjust the set of generic manipulation skills the robot may be built with. First, we looked at how a demonstration can be segmented into a sequence of known skills, and then how each of them can be characterized to provide the information needed by the robotic system. This presentation will briefly cover these different points, which were studied during the European project SARAFun.
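
As a toy illustration of the segmentation step mentioned above (a generic sketch under simplified assumptions, not the actual SARAFun pipeline; the function names, the velocity threshold and the template format are all hypothetical), a demonstrated trajectory can be cut at near-zero-velocity instants and each segment matched against known skill templates:

```python
import numpy as np

def segment_demo(traj, dt=0.01, v_eps=0.02):
    """Cut a demonstrated trajectory (T, D) at near-zero-velocity samples,
    a common heuristic for boundaries between manipulation skills."""
    speed = np.linalg.norm(np.diff(traj, axis=0), axis=1) / dt
    pauses = np.where(speed < v_eps)[0]
    # Keep only the first index of each run of consecutive pause samples.
    cuts = [int(p) for i, p in enumerate(pauses)
            if i == 0 or p != pauses[i - 1] + 1]
    bounds = [0] + cuts + [len(traj) - 1]
    return [traj[a:b + 1] for a, b in zip(bounds[:-1], bounds[1:]) if b > a]

def classify(segment, templates):
    """Label a segment with the closest known skill template.
    templates: {skill_name: (50, D) array}, one prototype per skill."""
    idx = np.linspace(0, len(segment) - 1, 50).astype(int)
    resampled = segment[idx]
    return min(templates, key=lambda k: np.linalg.norm(resampled - templates[k]))
```

Each labelled segment can then be characterised further (e.g., approach pose, contact forces) to parameterise the corresponding robot skill.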
Bio: 
Anthony is currently a researcher at Tecnalia, Donostia, in the Spanish Basque Country. He belongs to the Assistive Technology group of the Health Division, where he works on technological solutions for assisting elderly and dependent people. Areas of interest: computer vision, image processing, visual servoing, surgical robotics and robotics.

Michael Mistry

Title:
Physical Human-Robot Interaction with Multi-Limbed and Legged Robots
Abstract:
I will present our recent work on an impedance control framework for multi-arm and multi-legged systems subject to contact and physical human interaction. We use projected operational space control to impose a Cartesian impedance behaviour in the task space, while optimising contact forces in the internal null space. We demonstrate how the controller allows us to estimate external interaction forces without direct contact force sensing, and additionally to estimate and compensate for modelling errors. We apply our controller to multiple manipulators grasping an object of unknown weight, as well as to a quadruped robot balancing on uneven terrain. In both cases, the robots are subject to physical human interaction and unknown disturbances.
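
A minimal sketch of the kind of controller described above, assuming a position-only task for brevity (this is a generic Cartesian impedance law with a null-space projection, not the speakers' actual implementation; the gains and dimensions are hypothetical):

```python
import numpy as np

def impedance_torques(x, dx, x_des, J, g_comp, tau_null=None,
                      K=np.diag([400.0, 400.0, 400.0]),
                      D=np.diag([40.0, 40.0, 40.0])):
    """Generic task-space impedance law with a null-space term.

    x, dx    : end-effector position and velocity (3,)
    x_des    : desired end-effector position (3,)
    J        : task Jacobian (3, n)
    g_comp   : gravity-compensation torques (n,)
    tau_null : optional joint torques shaping internal/contact forces (n,)
    """
    # Virtual spring-damper in task space: a human pushing the end-effector
    # feels a compliant restoring force towards x_des, not a stiff servo.
    f_task = K @ (x_des - x) - D @ dx
    tau = J.T @ f_task + g_comp

    if tau_null is not None:
        # Project the secondary objective (e.g., contact-force optimisation)
        # into the null space of the task, so it cannot perturb the
        # Cartesian impedance behaviour.
        N = np.eye(J.shape[1]) - np.linalg.pinv(J) @ J
        tau = tau + N @ tau_null
    return tau
```

The key design choice is the projector N: secondary torques are filtered through it so that optimising internal contact forces never changes the compliant behaviour the human experiences at the end-effector.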

 

Bio:

Michael currently works at the University of Edinburgh, UK. His research focuses on human motion and humanoid robotics.

Sylvain Calinon

Title: 
Challenges in extending learning from demonstration to collaborative skills

Abstract:
Many human-centered robot applications would benefit from the development of robots that could acquire new movements and skills from human demonstration, and that could reproduce these movements in new situations. From a machine learning perspective, the challenge is to acquire skills from only a few interactions, with strong generalization demands. This requires the development of intuitive active learning interfaces to acquire meaningful demonstrations, of models that can exploit the structure and geometry of the acquired data in an efficient way, and of adaptive controllers that can exploit the learned task variations and coordination patterns. The developed models need to serve several purposes (recognition, prediction, generation) and to be compatible with different learning strategies (imitation, emulation, exploration). For the reproduction of skills, these models need to be enriched with force and impedance information to enable human-robot collaboration and to generate safe and natural movements.

I will present an approach combining model predictive control, statistical learning and differential geometry to pursue this goal. I will illustrate the proposed approach with various applications, including robots that are close to us (human-robot collaboration, a robot for dressing assistance), part of us (prosthetic hand control from EMG and tactile sensing), or far from us (shared control of a bimanual robot in deep water).
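
As a toy illustration of the learning-from-demonstration pipeline sketched above (a generic example with synthetic data, not the speaker's actual task-parameterized models), one can fit a Gaussian mixture to time-indexed demonstrations and retrieve a smooth reference by Gaussian mixture regression (GMR):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic demonstrations: N noisy trajectories of T steps, 1-D output.
T, N = 100, 5
t = np.linspace(0, 1, T)
demos = [np.sin(2 * np.pi * t) + 0.05 * np.random.randn(T) for _ in range(N)]
data = np.column_stack([np.tile(t, N), np.concatenate(demos)])  # (N*T, 2)

# Joint model p(t, x) over time and output.
gmm = GaussianMixture(n_components=6, covariance_type='full').fit(data)

def gmr(t_q):
    """Condition the joint GMM on time to get E[x | t] (GMR)."""
    mu, S, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component for the query time t_q.
    h = np.array([w[k] * np.exp(-0.5 * (t_q - mu[k, 0])**2 / S[k, 0, 0])
                  / np.sqrt(S[k, 0, 0]) for k in range(len(w))])
    h /= h.sum()
    # Blend the conditional means of the components.
    x_k = [mu[k, 1] + S[k, 1, 0] / S[k, 0, 0] * (t_q - mu[k, 0])
           for k in range(len(w))]
    return float(h @ np.array(x_k))

reference = [gmr(ti) for ti in t]  # smooth reference for a controller
```

In practice such a reference, together with the learned covariances, can feed a model predictive or LQR controller that tracks tightly where the demonstrations agree and stays compliant where they vary.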

Bio:
Dr Sylvain Calinon is a Senior Researcher at the Idiap Research Institute (http://idiap.ch). He is also a lecturer at the Ecole Polytechnique Federale de Lausanne (EPFL) and an external collaborator at the Department of Advanced Robotics (ADVR), Italian Institute of Technology (IIT). From 2009 to 2014, he was a Team Leader at ADVR, IIT. From 2007 to 2009, he was a Postdoc at the Learning Algorithms and Systems Laboratory, EPFL, where he obtained his PhD in 2007. He is the author of 100+ publications at the crossroads of robot learning, adaptive control and human-robot interaction, with recognition including Best Paper Awards in the journal Intelligent Service Robotics (2017) and at IEEE Ro-Man’2007, as well as Best Paper Award Finalist at ICRA’2016, ICIRA’2015, IROS’2013 and Humanoids’2009. He currently serves as Associate Editor for IEEE Transactions on Robotics (T-RO), IEEE Robotics and Automation Letters (RA-L), Intelligent Service Robotics (Springer), and Frontiers in Robotics and AI.
Personal website: http://calinon.ch

Yukie Nagai

Title: 
Biologically-inspired cognitive architecture for human-robot collaboration

Abstract:

Human-robot collaboration requires various cognitive capabilities in robots. Examples of such capabilities include estimating the intentions of humans, making plans to jointly achieve a goal with them, and adapting to sudden changes in human behavior. In contrast to the great advances in the development of perceptual and motor systems, cognitive architectures for robots are still far less mature than those of humans and leave much room for further development.
My talk presents biologically-inspired cognitive architectures for robots. It is known that the human brain contains a special type of neuron that maps one’s own executed actions to the same actions performed by others, as if in a mirror. This neural group, called the mirror neuron system, plays a crucial role in recognizing the goals of others’ actions and in reading the intentions behind them, based on one’s own action experience. We designed recurrent neural networks that work like the mirror neuron system. The networks learn to predict the sensorimotor signals obtained through action experience. Because the sensory and motor signals are closely coupled during learning, sensory input alone, obtained from the observation of others’ actions, can recall the associated motor signals, which facilitates the prediction of the observed actions. I will show how neural networks simulating the function of the mirror neuron system enable a robot to estimate the intentions of others and to help them jointly achieve a goal.
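
A minimal sketch of this mirror-neuron-like predictive network (a generic illustration under simplified assumptions, not the speaker's actual architecture; the signal dimensions are hypothetical): an RNN is trained to predict the next sensorimotor frame from its own experience, and at observation time only the sensory part is fed in, so the motor part of the prediction acts as the recalled command.

```python
import torch
import torch.nn as nn

SENS, MOT = 8, 4                      # hypothetical signal dimensions
rnn = nn.RNN(SENS + MOT, 64, batch_first=True)
head = nn.Linear(64, SENS + MOT)      # predicts the next sensorimotor frame
opt = torch.optim.Adam([*rnn.parameters(), *head.parameters()], lr=1e-3)

def train_step(seq):
    """seq: (B, T, SENS+MOT), the robot's own action experience."""
    h, _ = rnn(seq[:, :-1])
    loss = nn.functional.mse_loss(head(h), seq[:, 1:])  # next-step prediction
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()

@torch.no_grad()
def recall_motor(sensory):
    """sensory: (B, T, SENS), observation of another agent's actions.
    No motor signal is available, so the motor slots are zero-padded and
    the motor part of the prediction is read out as the 'mirrored' command."""
    x = torch.cat([sensory, torch.zeros(*sensory.shape[:2], MOT)], dim=-1)
    h, _ = rnn(x)
    return head(h)[..., SENS:]
```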

Bio:

Her research investigates how human infants acquire social cognitive abilities through interaction with the environment, by means of a constructive approach. Her group designs computational models (e.g., neural networks and Bayesian models) with which robots learn to communicate with others, in order to reveal the neural mechanisms underlying cognitive abilities.

As a key mechanism for development, her group has proposed a computational theory based on predictive coding. It has been suggested that the human brain minimizes the prediction error between incoming sensory signals and top-down predictions by updating its internal model and/or by acting on the environment. In order to verify this theory and to understand to what extent it accounts for cognitive development, robots have been designed that learn to recognize the self, differentiate the self from others, imitate others, share intentions and emotional states with others, help others, and so on; these abilities appear at different ages in early infancy.
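
The predictive-coding principle can be sketched in a few lines (a toy illustration under simplified assumptions, not the group's actual model; the generative mapping and its weights are hypothetical): a generative function g predicts the sensory signal from an internal state, and the state is updated by gradient descent on the prediction error.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(16, 4))         # hypothetical generative weights

def g(mu):
    """Top-down prediction of the sensory signal from internal state mu."""
    return np.tanh(W @ mu)

def infer(s, steps=50, lr=0.1):
    """Minimise the prediction error ||s - g(mu)||^2 over mu."""
    mu = np.zeros(4)
    for _ in range(steps):
        err = s - g(mu)              # bottom-up prediction error
        # Gradient of the squared error through tanh: diag(1 - g^2) @ W.
        mu += lr * W.T @ ((1 - g(mu)**2) * err)
    return mu, err

s = rng.normal(size=16)              # incoming sensory signal
mu, residual = infer(s)              # state that best explains the signal
```

Updating mu corresponds to revising the internal model; the complementary route, acting on the environment so that s matches the prediction, is the other half of the error-minimisation loop described above.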

Furthermore, she has developed assistive systems for developmental disorders. People with autism spectrum disorder (ASD) are known to suffer from hyper- and/or hypo-sensitivity in perception, which is hypothesized to cause their difficulties in social communication. She has investigated the sensory and neural mechanisms underlying atypical perception by conducting computational cognitive experiments and by designing wearable simulators that allow typically developing people to experience the perceptual world of ASD. This approach contributes to a deeper understanding of the neural mechanisms underlying social cognitive development.

Organisers


Valerio Ortenzi, PhD

Queensland University of Technology,

Australian Centre for Robotic Vision, 2 George Street, Brisbane, QLD 4000, Australia.

Email: valerio.ortenzi@qut.edu.au

URL: https://research.qut.edu.au/ras/people/valerio-ortenzi/

Phone: +61 7 3138 2348

Marco Controzzi, PhD

The Biorobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, 56025 Pisa, Italy.

Phone: +39 050 883 460

Email: marco.controzzi@santannapisa.it

URL: http://www.santannapisa.it/en/personale/marco-controzzi

Naresh Marturi, PhD

Extreme Robotics Laboratory, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom.

Phone: +44 7741 803 831.

Email: n.marturi@bham.ac.uk

Yasemin Bekiroglu, PhD

Vicarious AI, 2 Union Square, Union City, CA 94587, USA.

Email: Yasemin@vicarious.com

Peter I. Corke, Professor

Queensland University of Technology, Australian Centre for Robotic Vision, 2 George Street, Brisbane, QLD 4000, Australia.

Phone: +61 7 3138 1794.

Email: peter.corke@qut.edu.au.

URL: https://research.qut.edu.au/ras/people/peter-corke/

Andrea Cherubini, PhD

LIRMM, 860 rue de St Priest, 34095 Montpellier cedex 5, France.

Phone: +33 (0) 467418689.

Email: cherubini@lirmm.fr.

URL: https://www.lirmm.fr/lirmm_eng/users/utilisateurs-lirmm/andrea-cherubini
