

05 October 2018

Proposed Workshop: Human-robot cooperation and collaboration in manipulation: advancements and challenges



Humans display collaborative manipulation in many tasks, such as handing over objects and manipulating objects too large or heavy for one person. At the same time, robots are increasingly present in spaces shared with people, where collaborative skills can complement human workers’ capabilities and increase efficiency, e.g., lifting heavy loads, working directly alongside humans without safety caging, assisting humans in complex and tedious tasks, and combining the benefits of fully manual assembly and fully automated manufacturing lines. To this end, force control, compliance, prediction and learning are critical for cooperative manipulation to succeed and for the interaction to be safe. Many current projects study these key aspects of manipulation (e.g., H2020 CogIMon, H2020 SARAFun, H2020 Handy), confirming that robotic manipulation, particularly in tasks shared with humans, is a highly relevant topic with deep resonance throughout society (towards Industry 4.0).


During this workshop, we will focus on how to achieve safe and efficient human-robot collaboration in manipulation tasks and discuss key questions such as: what makes human collaboration so successful, and how can it be transferred to robots? What is still missing for robotic manipulation to become truly collaborative, without separation or safety fencing? What are the main approaches to collaborative manipulation? Complementing humans in a shared workspace to accomplish a task more effectively involves several challenges. The robot should be intuitive and safe, through both its hardware and its actions. Robots should recover and learn from errors. Based on a model of the task, the robot should coordinate its actions with its teammate’s through communication (e.g., motion, speech). In addition, humans and robots can take on different roles: humans as supervisors providing information, instructions and decisions; humans and robots as peers working together at the same level to achieve a common goal; or robots as assistants with humans leading.


We aim to bring together academic researchers and industrial experts in key areas for collaborative grasping and manipulation, such as perception, control, learning, human studies and safety. We aspire to identify recent developments in these research areas, both in theory and in applications, discussing recent achievements, debating underlying assumptions, and identifying challenges for future progress.



List of invited speakers:

Andrea M. Zanchettin, Politecnico Milano (Italy)

Sami Haddadin, Leibniz Universität Hannover (Germany)

Anthony Remazeilles, Tecnalia (Spain)

Michael Mistry, University of Edinburgh (UK)

Sylvain Calinon, IDIAP/EPFL (CH)

Yukie Nagai, National Institute of Information and Communications Technology (NICT) (JP)



This workshop will cover topics including, but not limited to:

  • Collaborative manipulation;
  • Learning;
  • Learning for manipulation;
  • Human studies on grasping and manipulation;
  • Bi-manual manipulation for human-robot collaboration;
  • Multimodal human-robot interaction for collaboration;
  • Verbal, nonverbal, and co-verbal human-robot interaction for collaboration;
  • Safe human-robot collaboration;
  • Role of uncertainty in manipulation;
  • Uncertainty in human-robot interaction;
  • Intent reading and understanding;
  • Role of force and compliance in collaborative tasks;
  • Role of prediction in human and robotic manipulation.





8:55 - 9:00
9:00 - 9:30    Andrea Zanchettin
9:30 - 10:00   Sami Haddadin
10:00 - 10:30  Anthony Remazeilles
10:30 - 11:30  Poster Session / Coffee Break
11:30 - 12:00  Michael Mistry
12:00 - 12:30  Sylvain Calinon
12:30 - 13:00  Yukie Nagai
13:00 - 13:10  Closing remarks


Andrea M. Zanchettin


The digitalisation of the worker in the factory 4.0
Future manufacturing paradigms will require advanced flexibility and cognitive capabilities to respond to the increasing demand for mass customisation. Production environments will be populated by humans and robots sharing the same workspace. Despite advancements in technology, however, today’s collaborative robots are only able to operate safely next to human workers.
The proliferation of low-cost surveillance cameras, as well as wearable devices such as smart gloves or AR headsets, is gradually introducing the possibility of collecting data from human workers. Their position on the factory floor, together with the sequence of activities they perform, can help automation understand and reason about human behaviour, thus putting human beings back at the centre of industrial production, aided by tools such as collaborative robots. This talk presents the latest developments on robotic systems that are capable not just of sharing their workspace with humans, but of ultimately assisting and anticipating them in manufacturing tasks.



Andrea Zanchettin was born in Cremona (Italy) in 1983. In 2008 he received his Master of Science in Computer Science Engineering from Politecnico di Milano.
From June to December 2008, he was the beneficiary of a research grant in the research field “Control of mechanical systems with low resolution sensors”. In January 2009, he joined the PhD programme in Information Technology at Politecnico di Milano. During spring 2010, he spent a research stay at the Department of Automatic Control (Reglerteknik) at Lund University.
He obtained his PhD in Information Technology in 2012 from Politecnico di Milano, with a dissertation entitled “Human-centric behaviour of redundant manipulators under kinematic control”. From January 2012 until February 2014 he was a temporary research assistant at the Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB). From March 2014 to September 2016, he was a fixed-term assistant professor at DEIB. In September 2014, he received the Young Author Best Paper Award, sponsored by the IEEE RAS Italian Chapter (I-RAS). Since October 2016, he has been a tenure-track assistant professor at DEIB.
His research interests include mechatronic systems, automatic control and intelligent human-robot interaction. He has been a member of the IEEE Robotics and Automation Society since 2009.

Sami Haddadin


Leibniz Universität Hannover, Germany

Sami Haddadin’s research interests include human-robot interaction and interaction design, human motor control, safe robots, robot and mechatronic design, dexterous manipulation, mobile manipulation, nonlinear robot control, control of intrinsically elastic robots, human biomechanics and injury mechanics, real-time reflex and motion planning, real-time task planning and learning, nonlinear control and learning for variable impedance robots, and brain-controlled assistive robots.

Anthony Remazeilles


Tecnalia (Spain)

Anthony Remazeilles is currently a researcher at Tecnalia, Donostia, in the Spanish Basque Country. He belongs to the Assistive Technology group of the Health Division, where he works on the development of technological solutions for assisting elderly and dependent people. Areas of interest: computer vision, image processing, visual servoing, surgical robotics and robotics.

Michael Mistry


Michael currently works at the University of Edinburgh, UK. His research focuses on human motion and humanoid robotics.

Sylvain Calinon


Challenges in extending learning from demonstration to collaborative skills

Many human-centered robot applications would benefit from the development of robots that could acquire new movements and skills from human demonstration, and that could reproduce these movements in new situations. From a machine learning perspective, the challenge is to acquire skills from only a few interactions, with strong generalization demands. It requires the development of intuitive active learning interfaces to acquire meaningful demonstrations, the development of models that can exploit the structure and geometry of the acquired data in an efficient way, and the development of adaptive controllers that can exploit the learned task variations and coordination patterns. The developed models need to serve several purposes (recognition, prediction, generation) and be compatible with different learning strategies (imitation, emulation, exploration). For the reproduction of skills, these models need to be enriched with force and impedance information to enable human-robot collaboration and to generate safe and natural movements.

I will present an approach combining model predictive control, statistical learning and differential geometry to pursue this goal. I will illustrate the proposed approach with various applications, including robots that are close to us (human-robot collaboration, robots for dressing assistance), part of us (prosthetic hand control from EMG and tactile sensing), or far from us (shared control of a bimanual robot in deep water).

Dr Sylvain Calinon is a Senior Researcher at the Idiap Research Institute (http://idiap.ch). He is also a lecturer at the Ecole Polytechnique Federale de Lausanne (EPFL) and an external collaborator at the Department of Advanced Robotics (ADVR), Italian Institute of Technology (IIT). From 2009 to 2014, he was a Team Leader at ADVR, IIT. From 2007 to 2009, he was a Postdoc at the Learning Algorithms and Systems Laboratory, EPFL, where he obtained his PhD in 2007. He is the author of 100+ publications at the crossroads of robot learning, adaptive control and human-robot interaction, with recognition including Best Paper Awards in the journal Intelligent Service Robotics (2017) and at IEEE Ro-Man’2007, as well as Best Paper Award Finalist nominations at ICRA’2016, ICIRA’2015, IROS’2013 and Humanoids’2009. He currently serves as an Associate Editor for IEEE Transactions on Robotics (T-RO), IEEE Robotics and Automation Letters (RA-L), Intelligent Service Robotics (Springer), and Frontiers in Robotics and AI.
Personal website: http://calinon.ch

Yukie Nagai


Yukie Nagai’s research investigates how human infants acquire social cognitive abilities through interactions with the environment, by means of a constructive approach. Her group designs computational models (e.g., neural networks and Bayesian models) for robots to learn to communicate with others, in order to reveal the underlying neural mechanisms of cognitive abilities.

As a key mechanism for development, her group has proposed a computational theory based on predictive coding. It has been suggested that the human brain minimizes the prediction error between incoming sensory signals and top-down predictions by updating its internal model and/or acting on the environment. In order to verify this theory, and to understand to what extent it accounts for cognitive development, robots have been designed that learn to recognize the self, differentiate the self from others, imitate others, share intentions and emotional states with others, help others, and so on, abilities which appear at different ages in early infancy.
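The core predictive-coding loop described above can be illustrated with a minimal sketch. This is not Prof. Nagai's actual model, just an assumed toy setup: a linear internal model predicts sensory input from a latent cause, and the model is updated by gradient descent to reduce the prediction error, mirroring the idea that the brain adapts its internal model to minimize the mismatch with incoming signals.

```python
import numpy as np

# Toy predictive-coding sketch (illustrative assumption, not the speaker's model):
# the "world" generates sensory signals y = true_W @ x from a latent cause x;
# the agent's internal model W produces a top-down prediction y_hat = W @ x
# and is updated to minimize the prediction error e = y - y_hat.

rng = np.random.default_rng(0)
true_W = np.array([[2.0, -1.0],
                   [0.5,  1.5]])        # hidden mapping of the environment

W = np.zeros((2, 2))                    # internal model, initially naive
lr = 0.1                                # learning rate for the model update
for _ in range(500):
    x = rng.normal(size=2)              # latent cause / top-down state
    y = true_W @ x                      # incoming sensory signal
    y_hat = W @ x                       # top-down prediction
    e = y - y_hat                       # prediction error
    W += lr * np.outer(e, x)            # update the model to reduce the error

# After learning, the internal model closely matches the environment,
# so prediction errors have been driven near zero.
print(np.abs(W - true_W).max())
```

The same error signal could instead drive action on the environment (active inference); here only the model-update half of the loop is sketched.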

Furthermore, she has developed assistive systems for people with developmental disorders. People with autism spectrum disorder (ASD) are known to suffer from hyper- and/or hypo-sensitivity in perception, which is hypothesized to cause their difficulties in social communication. She has investigated the underlying sensory and neural mechanisms of atypical perception by conducting computational cognitive experiments, and has designed wearable simulators that allow typically developing people to experience the perceptual world of ASD. This approach contributes to a deeper understanding of the underlying neural mechanisms of social cognitive development.


Valerio Ortenzi, PhD


Queensland University of Technology,

Australian Centre for Robotic Vision, 2 George Street, Brisbane, QLD 4000, Australia.

Email: valerio.ortenzi@qut.edu.au

URL: https://research.qut.edu.au/ras/people/valerio-ortenzi/

Phone +61 7 3138 2348

Marco Controzzi, PhD


The Biorobotics Institute, Scuola Superiore Sant’Anna, viale Rinaldo Piaggio 34, 56025 Pontedera (Pisa), Italy.

Phone: +39 050 883 460

Email: marco.controzzi@santannapisa.it

URL: http://www.santannapisa.it/en/personale/marco-controzzi

Naresh Marturi, PhD


Extreme Robotics Laboratory, University of Birmingham, Edgbaston, Birmingham, B15 2TT, United Kingdom.

Phone: +44 7741 803 831.

Email: n.marturi@bham.ac.uk

Yasemin Bekiroglu, PhD


Vicarious AI, 2 Union Square, Union City, CA 94587, USA.

Email: Yasemin@vicarious.com

Peter I. Corke, Professor


Queensland University of Technology, Australian Centre for Robotic Vision, 2 George Street, Brisbane, QLD 4000, Australia.

Phone: +61 7 3138 1794.

Email: peter.corke@qut.edu.au.

URL: https://research.qut.edu.au/ras/people/peter-corke/

Andrea Cherubini, PhD


LIRMM, 860 rue de St Priest, 34095 Montpellier cedex 5, France.

Phone: +33 (0) 467418689.

Email: cherubini@lirmm.fr.

URL: https://www.lirmm.fr/lirmm_eng/users/utilisateurs-lirmm/andrea-cherubini
