

Workshop – Towards robust grasping and manipulation skills for humanoids




The ability to grasp and manipulate objects provides an essential means of interacting with the environment. Recent years have seen a proliferation of research projects applying robotic manipulation to real-world applications such as human-robot collaboration and industrial tasks. Despite this promising progress, robotic grasping and manipulation has yet to demonstrate the robustness and dexterity needed to be fully exploited in settings such as everyday life and industrial environments, especially when dealing with unstructured environments and with novelty and uncertainty in, e.g., object shape, pose, weight, and friction at contacts.

Studies of human grasping and manipulation have shown that sensory capabilities play a key role in the success of human manipulation, allowing better perception of the object and of the interaction with it, and revealing adaptation and control strategies, e.g., exploiting the environment and its constraints for more effective manipulation. Inspired by these findings, robotics research aiming to make object grasping and manipulation skills more robust has shown the importance of the effective use of sensory data (visual, tactile, proprioceptive) from the planning stage through to task completion. Various kinds of approaches have been proposed: data-driven and empirical approaches such as learning from experience and from human demonstration, analytic approaches such as manually modelling physical and dynamical constraints, and approaches based on hand design, such as under-actuated and soft hands.

In this workshop, we aim to bring together researchers and experts in key areas for grasping and manipulation, such as perception, control, learning, the design of hands and grippers, and studies analysing human manipulation skills. We aspire to identify recent developments in these research areas, both in theory and in applications, discussing recent achievements, debating underlying assumptions, and identifying challenges for future progress.

Topics of interest:

The workshop topics include (but are not limited to):

  • Perception-guided grasping and manipulation (vision, touch)
  • Grasp and manipulation planning
  • Learning for grasping and manipulation (e.g., from human demonstration, exploration)
  • Collaborative manipulation
  • Bi-manual manipulation
  • Visual, tactile servoing
  • Closed-loop grasping and manipulation
  • End-effector design (e.g., anthropomorphic, underactuated)
  • Human manipulation and grasping
  • Reactive control strategies for object manipulation
  • Deformable object manipulation
  • Multimodal interactive perception
  • Sensor fusion based on tactile, force and visual feedback
  • In-hand manipulation


Call for Papers:

We welcome the submission of two-page extended abstracts describing new or ongoing work. Final instructions for poster presentations and talks will be available on the workshop website after decision notifications have been sent. All abstracts will be accessible on the workshop website. Submissions should be in .pdf format. Please send submissions to valerio[dot]ortenzi[at]qut[dot]edu[dot]au with the subject line “Humanoids 2017 Workshop Submission”. For any questions or clarifications, please contact the organisers.


Important Dates:

Abstract submission deadline: October 25, 2017
Acceptance notification: November 5, 2017
Final materials due: November 10, 2017
Workshop date: November 15, 2017



Accepted Papers:

Object Modeling and Grasping Pipeline based on Superquadric Models, Giulia Vezzani, Ugo Pattacini and Lorenzo Natale, (pdf)

Towards Reactive and Robust Manipulation Tasks using Behavior Trees, Michele Colledanchise and Lorenzo Natale, (pdf)

Markerless visual servoing for humanoid robot platforms, Claudio Fantacci, Ugo Pattacini, Vadim Tikhanoff and Lorenzo Natale, (pdf)

Generative Perception for Robotic Grasping, Douglas Morrison, Peter Corke and Jürgen Leitner, (pdf)

Hierarchical Grasp Detection for Visually Challenging Environments, D. Morrison, N. Kelly-Boxall, S. Wade-McCue, P. Corke, and J. Leitner, (pdf)

A Framework for Bimanual Folding Assembly Under Uncertainties, Diogo Almeida and Yiannis Karayiannidis, (pdf)

Simulation of the underactuated Sake Robotics Gripper in V-REP, Simon-Konstantin Thiem, Svenja Stark, Daniel Tanneberg, Jan Peters and Elmar Rueckert, (pdf)


Program:

15th November




8:30 - 8:40


8:40 - 9:10

Invited Talk - Tamim Asfour

9:10 - 9:40

Invited Talk - Michael Beetz

9:40 - 10:00

Poster teasers

10:00 - 10:30

Coffee break & poster session

10:30 - 11:00

Invited Talk - Maximo Roa

11:00 - 11:30

Invited Talk - Fanny Ficuciello

11:30 - 12:00

Invited Talk - Robert Haschke

12:00 - 12:30

Invited Talk - Marco Controzzi

12:30 - 13:30


13:30 - 14:00

Invited Talk - Paolo Robuffo Giordano

14:00 – 14:30

Invited Talk - Matteo Bianchi

14:30 - 15:00

Invited Talk - Robert Platt Jr

15:00 - 15:30

Coffee break & poster session

15:30 - 16:00

Invited Talk - Gerhard Neumann

16:00 - 16:30

Invited Talk - Sami Haddadin

16:30 - 17:00

Panel discussion & closing

Invited Speakers

Dr. Robert Haschke


University of Bielefeld

Robert Haschke is currently heading the Robotics Group within the Neuroinformatics Group, striving to enrich the dexterous manipulation skills of our two bimanual robot setups through interactive learning. His fields of research include neural networks, cognitive bimanual robotics, grasping and manipulation with multi-fingered dexterous hands, tactile sensing, and software integration.


Tactile Sensors and Tactile Processing for Robust Robot Grasping

At Bielefeld University we have developed a variety of tactile sensors, ranging from large tactile arrays and 3D-shaped tactile fingertips to flexible fabric-based ones. In this talk, I will introduce the sensor designs, propose a ROS toolbox for tactile data processing and visualisation, and provide an overview of applications of tactile-based grasping and manipulation, including tactile servoing, tactile surface exploration, slip detection, and grasp stabilisation.
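The tactile servoing mentioned in the abstract can be illustrated, in a deliberately simplified form, as a proportional controller that drives the pressure-weighted contact centroid on a tactile array towards a target location on the sensor. The array size, contact patch, and gain below are all illustrative, not taken from the Bielefeld sensors:

```python
import numpy as np

def contact_centroid(pressure):
    """Pressure-weighted centroid of a tactile image, in taxel coordinates."""
    ys, xs = np.indices(pressure.shape)
    total = pressure.sum()
    return np.array([(ys * pressure).sum(), (xs * pressure).sum()]) / total

# Toy tactile reading: a 16x16 taxel array with an off-centre contact patch.
pressure = np.zeros((16, 16))
pressure[10:13, 4:7] = 1.0

target = np.array([7.5, 7.5])   # keep the contact at the sensor centre
gain = 0.5                      # proportional gain

# Commanded tangential fingertip motion, proportional to the centroid error.
error = target - contact_centroid(pressure)
velocity = gain * error
print(velocity)                 # → [-1.75  1.25]
```

In a real system this velocity would be mapped through the hand kinematics and combined with a normal-force controller to maintain contact while sliding.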

Professor Michael Beetz


University of Bremen

Michael Beetz is a Professor of Computer Science at the Faculty of Mathematics & Informatics of the University of Bremen and head of the Institute for Artificial Intelligence (IAI). Michael investigates AI-based control methods for robotic agents, with a focus on human-scale everyday manipulation tasks. With openEASE, a web-based knowledge service providing robot and human activity data, Michael aims to improve interoperability in robotics and to lower the barriers to robot programming. To this end, the IAI group provides most of its results as open-source software, primarily in the ROS software library. His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognitive perception.


Automated Models of Everyday Activity

Recently we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from achieving the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts. In other words, they are far from mastering everyday activities. Mastering everyday activities is an important step for robots to become the competent (co-)workers, assistants, and companions who are widely considered a necessity for dealing with the enormous challenges our aging society is facing.

Modern simulation-based game technologies give us for the first time the opportunity to acquire the commonsense and naive physics knowledge needed for the mastery of everyday activities in a comprehensive way.
In this talk I will describe AMEvA (Automated Models of Everyday Activities), a special-purpose knowledge acquisition, interpretation, and processing system for human everyday manipulation activity that can automatically
(1) create and simulate virtual human living and working environments (such as kitchens and apartments) with a scope, extent, level of detail, physics, and photo-realism that facilitates and promotes the natural and realistic execution of human everyday manipulation activities;
(2) record human manipulation activities performed in the respective virtual reality environment as well as their effects on the environment, and detect force-dynamic states and events;
(3) decompose and segment the recorded activity data into meaningful motions and categorise the motions according to action models used in cognitive science; and
(4) represent the interpreted activities symbolically in KnowRob using first-order time interval logic formulas linked to sub-symbolic data streams.


Dr. Maximo A. Roa


Deutsches Zentrum für Luft- und Raumfahrt (DLR)

Maximo A. Roa has worked since 2010 as Group Leader on Dexterous Manipulation and Planning in the Institute of Robotics and Mechatronics at DLR – German Aerospace Center. In 2015 he also joined Roboception, a DLR spin-off working on 3D perception solutions for robotics, as a Senior Expert on Grasping and Manipulation. Dr. Roa obtained his PhD in Robotics in 2009 at the Polytechnic University of Catalonia, and is a certified PMP. He previously worked for Hewlett Packard R&D as a Research Specialist. He currently serves as co-chair of the IEEE-RAS Technical Committee on Mobile Manipulation.


Action-perception loop for industrial robotic applications

Robot manipulators are increasingly entering small and medium-sized companies, thanks mainly to the falling cost of industrial manipulators and to better, easier-to-use interfaces for robot programming. However, the underlying problem of robustness in grasp and manipulation execution is still present in most applications today. This talk will present several use cases with different levels of action-perception loop closure, to encourage discussion on the missing components that will lead us to robust and reliable manipulation actions.


Dr. Fanny Ficuciello


Università degli Studi di Napoli, Federico II

Fanny Ficuciello received the Laurea degree magna cum laude in Mechanical Engineering from the University of Naples Federico II in 2007, and the Ph.D. degree in Computer and Automation Engineering from the same university in November 2010. She currently holds a postdoctoral position at the University of Naples Federico II. Her research activity is focused on biomechanical design and bio-aware control strategies for anthropomorphic artificial hands, grasping and manipulation with hand/arm and dual-arm robotic systems, human-robot interaction control, variable impedance control, and redundancy resolution strategies. More recently she has also been involved in surgical robotics research projects, as a member of the ICAROS center (Interdepartmental Center for Advances in Robotic Surgery) of the University of Naples Federico II. She has published more than 30 journal and conference papers and book chapters. She is the recipient of a National Grant within the “Programma STAR Linea 1”, under which she is the PI of the MUSHA project. Since 2008 she has been a member of the IEEE Robotics and Automation Society. She is involved in the organisation of international conferences and workshops, currently serves as an associate editor of the Journal of Intelligent Service Robotics, and is on the editorial board of prestigious conferences in the field of robotics.


Underactuated Artificial Hands: Design and Control in a Synergy-based Framework

A hot topic in robotic manipulation is how to merge learning and model-based strategies to provide autonomy to robotic systems. This is a fundamental problem for robotic manipulation: it touches different functional levels of the system, based on both visual and tactile perception, and also affects the mechanical design of the robot parts that move and interact with the environment. Anthropomorphism in the mechanical design encourages drawing learning techniques and control strategies from human motor behaviour. Artificial hands for robotics and prosthetics require enhanced manipulation skills to reproduce human abilities. This calls for the design of complex dexterous hands with advanced sensorimotor skills and human-like kinematics. Besides humanoid robots and prosthetic applications, other areas such as minimally invasive laparoscopic surgery could benefit from suitably designed hands able to enter the patient’s body through the trocar and replace the hands of the surgeon while equaling their dexterity and sensory ability. Owing to the presence of multiple degrees of freedom (DoFs), a synergistic approach inspired by human hand function can be adopted for design purposes. Indeed, ongoing research in the field aims to reproduce human abilities not only through anthropomorphic design but also by adopting human-inspired control strategies. Fanny Ficuciello’s contribution to this topic concerns synergy computation as well as mechanical design, control, and learning applied to robotic hands and surgical tools.

Professor Gerhard Neumann


University of Lincoln

Gerhard Neumann is a Professor of Robotics & Autonomous Systems in the College of Science at the University of Lincoln. His research interests include machine learning, robotics, reinforcement learning, imitation learning, and deep learning. Gerhard has authored more than 50 peer-reviewed papers, many of them in top-ranked machine learning and robotics journals and conferences such as NIPS, ICML, ICRA, IROS, JMLR, Machine Learning, and AURO. In Darmstadt, he is principal investigator of the EU H2020 project RoMaNS and has also acquired DFG funding. He has organised several workshops and serves on the senior program committee of several conferences.


Dr. Marco Controzzi


Scuola Superiore Sant'Anna

Marco Controzzi received the M.Sc. degree in mechanical engineering from the University of Pisa, Italy, in 2008, and the Ph.D. in Biorobotics from the Scuola Superiore Sant’Anna, Pisa, Italy, in 2013. He is currently Assistant Professor at The BioRobotics Institute, Scuola Superiore Sant’Anna, Pisa, Italy. He joined the ARTS-Lab of the Scuola Superiore Sant’Anna in 2006 as a Research Assistant, and since 2008 he has led the mechanical design of its robot hands. Marco Controzzi is founder of a spin-off company of the Scuola Superiore Sant’Anna: Prensilia Srl (www.prensilia.com).
His current research is mainly devoted to the design and development of advanced artificial devices aimed at improving the lives of people with disabilities. In particular, he is interested in the mechatronic and controllability issues of dexterous robotic upper limbs to be used as thought-controlled prostheses. These devices are advancing an emerging field of science that combines the principles of robot design with medicine. His personal objective is to provide individuals with artificial limbs that can effectively restore the natural function of the missing limb.
Marco Controzzi is also expanding his research interests to the emerging field of collaborative robotics. His long-term objective in this field is to open new roads towards a new generation of robotic systems able to cooperate with and support humans in a wide range of activities.


Exploiting artificial hands to efficiently cooperate with humans

The talk will cover the ongoing projects at the Biorobotics Institute toward the development of artificial hands and their use in the field of human-robot collaboration.

Dr. Matteo Bianchi


University of Pisa

Matteo Bianchi is an Assistant Professor at the University of Pisa – Department of Information Engineering – Centro di Ricerca “E. Piaggio”, and a Clinical Research Affiliate at the Mayo Clinic (Rochester, MN, USA). His research interests include haptic interface design, with applications in medical robotics and assistive/affective human-robot interaction; optimal sensing and control of human and robotic hands; human-inspired control for soft robots; and the psychophysics and mathematical modelling of the sense of touch and human manipulation.


Human grasping and manipulation: lessons learned for the design, control and sensing of robotic hands

Humans are able to intuitively exploit the shape of an object and environmental constraints to achieve stable grasps and perform dexterous manipulation. For this reason, the investigation of human behaviour can be a key enabling factor for effective design, control, and sensing of robotic hands. In this talk, I will first discuss human kinematic ability in exploiting environmental constraints for grasping. We formulate the hypothesis that this ability can be described in terms of a synergistic behaviour in the generation of hand postures, i.e., through a reduced set of commonly used kinematic patterns, in analogy with previous studies showing such behaviour in tasks such as grasping. We investigated this hypothesis in experiments with human participants, who were asked to grasp objects from a flat surface. We quantitatively characterised hand posture behaviour from a kinematic perspective, i.e., through the hand joint angles, both in pre-shaping and during the interaction with the environment. To determine the role of tactile feedback, we repeated the same experiments with subjects wearing a rigid shell on the fingertips to reduce cutaneous afferent inputs. Results show the persistence of at least two postural synergies in all the considered experimental conditions and phases. Tactile impairment does not significantly alter the first two synergies, and contact with the environment generates a change only in higher-order principal components. A good match also arises between the first synergy found in our analysis and the first synergy of grasping as quantified by previous work. This study is motivated by the interest in learning from the human example, extracting lessons that can be applied in robot design and control.
With this as motivation, I will finally discuss how human inspiration and minimalistic sensing can be used with soft adaptable robotic hands to implement tactile-based grasp primitives for human-robot interaction.
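Postural synergies of the kind analysed above are commonly extracted as the leading principal components of recorded hand joint-angle data. The sketch below is purely illustrative: it uses synthetic data (standing in for real glove or motion-capture recordings) with two planted latent synergies, and extracts them via an SVD-based PCA:

```python
import numpy as np

# Synthetic stand-in for recorded hand joint angles: n_samples x n_joints.
# Real data would come from a data glove or motion capture; here we plant
# two latent "synergies" mixed into 20 joints, plus small measurement noise.
rng = np.random.default_rng(0)
latent = rng.normal(size=(500, 2))
mixing = rng.normal(size=(2, 20))
angles = latent @ mixing + 0.05 * rng.normal(size=(500, 20))

# PCA via SVD of the mean-centred data matrix.
centred = angles - angles.mean(axis=0)
_, s, vt = np.linalg.svd(centred, full_matrices=False)
explained = s**2 / np.sum(s**2)     # fraction of variance per component

synergies = vt[:2]                  # first two postural synergies (2 x 20)
print(f"variance explained by 2 synergies: {explained[:2].sum():.2%}")

# Reconstruct postures from the two-synergy subspace alone.
recon = centred @ synergies.T @ synergies + angles.mean(axis=0)
err = np.sqrt(np.mean((recon - angles) ** 2))
print(f"RMS reconstruction error: {err:.3f} rad")
```

Analyses like the one in the talk compare such components across conditions (pre-shaping vs. contact, with and without tactile impairment) rather than on synthetic data.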

Prof. Dr.-Ing. Sami Haddadin


Leibniz Universität Hannover

Sami Haddadin's research interests include human-robot interaction and interaction design, human motor control, safe robots, robot and mechatronic design, dexterous manipulation, mobile manipulation, nonlinear robot control, control of intrinsically elastic robots, human biomechanics and injury mechanics, real-time reflex and motion planning, real-time task planning and learning, nonlinear control and learning for variable-impedance robots, and brain-controlled assistive robots.

Dr. Paolo Robuffo Giordano


Lagadic team, Irisa / Inria Rennes

Paolo Robuffo Giordano is a CNRS senior research scientist (DR2) in the Lagadic group at IRISA/Inria, Rennes, France. He holds a PhD degree in Systems Engineering obtained in 2008 at the University of Rome “La Sapienza”. From January 2007 to July 2007 and from November 2007 to October 2008, he was a research scientist at the Institute of Robotics and Mechatronics, German Aerospace Center (DLR), Germany, and from October 2008 to November 2012 he was a senior research scientist at the Max Planck Institute for Biological Cybernetics and scientific leader of the group “Human-Robot Interaction”. His scientific interests include motion control for mobile robots and mobile manipulators, visual control of robots, active sensing, bilateral teleoperation, shared control, multi-robot estimation and control, and aerial robotics.


Blending Human Assistance and Local Autonomy for Advanced Telemanipulation (pdf)

Current and future robotics applications are expected to address increasingly complex tasks in increasingly unstructured environments, in co-existence or co-operation with humans. Achieving full autonomy is clearly a “holy grail” for the robotics community; however, one could easily argue that true full autonomy is, in practice, out of reach for many years to come. The gap between the cognitive skills (e.g., perception, decision making, general “scene understanding”) of humans and those of today's most advanced robots is still huge. In most applications involving tasks in unstructured environments, uncertainty, and interaction with the physical world, human assistance is still necessary, and will probably remain so for decades to come.

These considerations motivate research into the (large) topic of shared control for complex robotic systems: on the one hand, empower robots with a large degree of autonomy, allowing them to operate effectively in non-trivial environments; on the other hand, keep human users in the loop so that they retain (partial) control of some aspects of the overall robot behaviour.

In this talk I will review several recent results on novel shared-control architectures meant to blend together diverse facets of robot autonomy (sensing, planning, control) to provide a human operator with an easy “interface” for commanding the robot at a high level. Applications to the control of a dual-arm manipulator system for remote telemanipulation will be illustrated.

Dr. Robert Platt Jr


Northeastern University

Robert Platt is an Assistant Professor of Computer Science at Northeastern University. Prior to coming to Northeastern, he was a Research Scientist at MIT and a technical lead at NASA Johnson Space Center, where he helped develop the control and autonomy subsystems for Robonaut 2, the first humanoid robot in space.


Robotic Manipulation Without Geometric Models

Most approaches to planning for robotic manipulation take a geometric description of the world and the objects in it as input. Unfortunately, despite successes in SLAM, estimating the geometry of the world from sensor data can be challenging. This is particularly true in open world scenarios where we have little prior information about the geometry or appearance of the objects to be handled. This is a problem because even small modelling errors can cause a grasp or manipulation operation to fail. In this talk, I will describe some recent work on approaches to robotic manipulation that eschew geometric models. Our recent results show that these methods excel on manipulation tasks involving novel objects presented in dense clutter.

Prof. Dr. Tamim Asfour


Karlsruhe Institute of Technology

Tamim Asfour is a full Professor at the Institute for Anthropomatics and Robotics, Karlsruhe Institute of Technology (KIT), where he holds the chair of Humanoid Robotics Systems and heads the High Performance Humanoid Technologies Lab (H2T). His current research interest is high-performance 24/7 humanoid robotics. Specifically, his research includes the engineering of humanoid robot systems; grasping and dexterous manipulation; learning from human observation and sensorimotor experience; and the mechano-informatics of humanoids, i.e., the synergetic integration of mechatronics, informatics, and artificial intelligence methods to create complete integrated humanoid robot systems. He is the developer of the ARMAR humanoid robot family and has led the humanoid research group at KIT since 2003.


Combining Model-based and Learning-based Approaches for Humanoid Grasping


Grasping has been studied since the beginning of robotics, and considerable progress has been achieved in the last decade. Nevertheless, we are still far away from humanoid robots able to grasp and manipulate any object in the real world. In this talk, I will present recent advances towards complete humanoid robot systems performing complex household tasks, including the robust execution of single-handed and bimanual grasping and manipulation tasks by integrating vision, force, control, machine learning, and AI techniques. In addition, I will present results on grasping familiar and unknown daily objects and maintenance tools by combining model-based and learning-based approaches.


Organisers

Dr. Valerio Ortenzi


Research Fellow, ARC Centre of Excellence for Robotic Vision

Email: valerio.ortenzi@qut.edu.au

Dr. Yasemin Bekiroglu


Vicarious AI, California, USA

Email: yasemin@vicarious.com

Dr. Yiannis Karayiannidis


Assistant Professor, Chalmers University of Technology; Researcher with the Centre for Autonomous Systems, KTH

Email: yiannis@chalmers.se

Dr. Edward Johns


Dyson Research Fellow, Department of Computing, Imperial College London, South Kensington Campus

Email: e.johns@imperial.ac.uk

Prof. Peter Corke


Distinguished Professor, EECS, Queensland University of Technology, Director, ARC Centre of Excellence for Robotic Vision

Email: peter.corke@qut.edu.au


Dr. Valerio Ortenzi, Research Fellow, ARC Centre of Excellence for Robotic Vision,
Queensland University of Technology
Gardens Point, S Block 1130-01, 2 George Street, Brisbane, QLD, 4000
Email: valerio.ortenzi@qut.edu.au

Australian Centre for Robotic Vision
2 George Street Brisbane, 4001
+61 7 3138 7549