8:50
Welcome and Introduction
Tamim Asfour, Karlsruhe Institute of Technology (KIT), Germany
Giorgio Metta, Istituto Italiano di Tecnologia, Italy
9:00
Joint research on human and humanoid cognition: nature or nurture?
Giulio Sandini - Istituto Italiano di Tecnologia (IIT), Italy
Robots will become ubiquitous in the future, entering our everyday life in different forms and shapes. In this scenario, an essential ability for a robot will be to interact with humans in a human-like manner and to cope safely with collaborators of different ages and technical skills. Human-like interaction is definitely not a new objective for artificial machines: it was one of the implicit long-term goals of cybernetics (in the ’60s and ’70s) and artificial intelligence (in the ’70s and ’80s), even before it entered the realm of robotics research with the concept of “embodied intelligence”.
Human-like human-robot interaction, as an essential part of robot cognition, requires a long list of “modules” (e.g. perceptual processes, learning modules, motor control, planning, etc.) and “mechanisms” (e.g. sensors, actuators, materials, etc.), and excellent research results are being produced in these individual areas. However, we still very much need to invest effort in the “cognitive glue” that can guide the development of the individual skills while keeping the “big picture” as a reference, with the ultimate goal of endowing a robot with the ability to anticipate the intentions of other robots and humans (an essential ability for safe operation). Providing the tools to address the “big picture” and implementing a few modules was the main inspiration of the RobotCub project, and the fact that the iCub is now seen as a (small) flagship of European robotics research, and that the number of “open source” robotics projects is booming, demonstrates that it was a good approach (at least a popular one). However, it may now be time to consider how this approach has evolved in recent years and whether new actions are required to adjust its course (or to change it completely). Certainly, a robotics community whose objective is to develop robots collaborating with non-expert humans and frail populations cannot reduce its effort on cognition research without renouncing that objective. Even in the presence of criticism of the “practical” achievements of the last 10 years, cognition is still an essential ingredient of human-like human-robot interaction.
During this talk I shall discuss some critical aspects of human-robot interaction and the approach we are following to identify and implement the basic components supporting “human-like” interaction by using a humanoid robot as a development tool. I shall also briefly discuss how this approach, even if built on a natural synergy between scientists interested in understanding human cognition and engineers working on cognitive robotics, needs to be nurtured to build the joint research environment appropriate for the overall field of cognition to advance (and to build safe robots along the way).
9:45
Cognition for robotics in H2020
Cécile Huet - European Commission, Robotics, DG Connect, EC
10:00
Xperience - Robots Bootstrapped through Learning from Experience
Rüdiger Dillmann - Karlsruhe Institute of Technology (KIT), Germany
Current research in embodied cognition builds on the idea that physical interaction with and exploration of the world allows an agent to acquire intrinsically grounded, cognitive representations which are better adapted to guiding behavior than human-crafted rules. Exploration and discriminative learning, however, are relatively slow processes. Humans are able to rapidly create new concepts and react to unanticipated situations using their experience. They use generative mechanisms, such as imagining and internal simulation, based on prior knowledge to predict the immediate future. Such generative mechanisms increase both the bandwidth and the speed of cognitive development; current artificial cognitive systems, however, do not yet use generative mechanisms in this way.
The Xperience project addresses this problem by structural bootstrapping, an idea taken from language acquisition research: knowledge about grammar allows a child to infer the meaning of an unknown word from its grammatical role together with the understood remainder of the sentence. Structural bootstrapping generalizes this idea to cognitive learning at large: if you know the structure of a process, the role of unknown actions and entities can be inferred from their location and use in that process. This approach will enable rapid generalization and allow agents to communicate effectively.
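The grammar analogy can be caricatured in a few lines (a toy sketch, not Xperience software; the action names, argument slots, and role labels below are invented for the example): once the structure of an action is known, an unknown entity's role follows from the slot it occupies.

```python
# Toy illustration of structural bootstrapping: known process structure
# (here, a table of action slots) constrains the role of a novel entity.
ROLE_BY_SLOT = {
    ("pour", 0): "container-source",
    ("pour", 1): "container-target",
    ("cut", 0): "tool",
    ("cut", 1): "cuttable-object",
}

def infer_roles(action, args, known):
    """Assign roles to unknown arguments from their slot in a known action."""
    inferred = {}
    for i, a in enumerate(args):
        if a not in known:
            inferred[a] = ROLE_BY_SLOT.get((action, i), "unknown")
    return inferred

# 'blicket' is a novel object; its slot in 'pour' reveals it is a container
print(infer_roles("pour", ["bottle", "blicket"], known={"bottle"}))
# {'blicket': 'container-target'}
```

The real project works with sensorimotor experience rather than a hand-written table, but the inference pattern — structure first, unknowns filled in by position — is the same.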
10:15
Building biologically and psychologically grounded anthropomorphic machines: The Distributed Adaptive Control theory of mind, brain and behaviour
Paul Verschure - Universitat Pompeu Fabra and ICREA, Spain
I will address the question of what the function of the brain is, and how this function can be emulated in social robots. The brain evolved to maintain a dynamic equilibrium between an organism and its environment through action. The fundamental question that such a brain has to solve, in order to deal with the how of action in a physical world, is: why (motivation), what (objects), where (space), and when (time), or the H4W problem. After the Cambrian explosion of about 560M years ago, one last factor became of great importance for survival, the who of other agents, and brains adapted to the H5W challenge. I propose that H5W defines the top-level objectives of the brain and will argue that brain and body evolved to provide specific solutions to it by relying on a layered control architecture that is captured in the Distributed Adaptive Control (DAC) theory of mind and brain. I will show how DAC addresses H5W through interactions across multiple layers of neuronal organization, suggesting a very specific structuring of the neuraxis which can be captured in robot control architectures. In explaining how the function of the brain is realized, I will show how the DAC theory provides very specific predictions that have been validated at the level of behaviour and of the neuronal substrate. I will in particular look at the generation of goal-oriented behaviour by humans and machines. These examples will show that robot-based models of mind and brain not only advance our understanding of ourselves and other animals but can also lead to novel technical solutions to complex applied problems.
10:30
Fundamental Cognitive Mechanisms for Humanoid Robots: The POETICON Contribution
Katerina Pastra -
Cognitive Systems Research Institute, Greece
» read abstract
« hide abstract
The POETICON project views cognitive humanoids as active interpreters: of verbal interaction and of sensorimotor experiences.
Thus, it combines experimental research and computational modeling to endow humanoids with the fundamental mechanisms needed to deal with unconstrained tasks and task-transfer needs.
In this talk, we will present the syntactic and semantic mechanisms that are being developed for humanoids, their biological basis and their computational realisation.
We will demonstrate how such mechanisms are being employed during visual processing and motor control and how they lead to incorporating language dynamically in the robot-seeing, robot-doing, robot-learning loop.
10:45
Break
11:15
AMARSi - Adaptive Modular Architectures for Rich Motor Skills
Jochen Steil - Bielefeld University, Germany
The recently finished FP7 project AMARSi unfolded from the hypothesis that only combined progress in mechanics, learning, and architecture will lift our currently carefully engineered and mostly pre-programmed robots towards the richness of motor behavior which we experience in biology and which is necessary for application in our daily life. The project thus subscribed to a multi-faceted approach: to investigate modularity in human motor control, to develop new compliant platforms including the quadruped Oncilla and the compliant humanoid COMAN, to provide new learning algorithms for motor primitives, and to develop model-driven engineering tools to glue these approaches together in advanced control architectures. The talk will highlight some of the results, which were published in more than 200 papers, as well as perspectives from this approach for further research. It will in particular emphasize routes to industrial innovation that are already manifest in a number of follow-up activities.
11:30
The role of audio-visual fusion in human-robot interaction
Radu Horaud - INRIA, France
Audio and visual perception play a complementary role in human interaction.
Perception enables people to communicate based on verbal (speech and language) and non-verbal (facial expressions, head movements, hand and body gesturing) communication.
These two modes of communication have a large degree of overlap, in particular in social contexts.
Moreover, the two modalities disambiguate each other whenever one of the modalities is weak, ambiguous, or corrupted by various perturbations.
Human-computer interaction (HCI) has attempted to address these issues, e.g., using smart, portable devices.
With these devices the human is in the loop: images and sounds are recorded purposively by the user,
such that the device microphones are only a few centimeters away from the user's mouth and the device camera faces the user; this is tethered interaction.
The robustness of current HCI systems degrades significantly when the microphones are located a few meters away from the users, or when the camera is not properly oriented towards the user.
Similarly, face detection and recognition work well only under limited conditions. Altogether,
the HCI paradigm cannot be easily extended to untethered interaction, e.g., in the presence of social events involving several people. We currently investigate human-robot interaction (HRI).
The main difference between HCI and HRI is that, while the former is user-controlled, the latter is implemented within the paradigm of an autonomous intelligent humanoid robot that purposively interacts with users,
the interaction being based on its on-board sensors and actuators and following a well-defined scenario. In the future such a robot should be able to understand non-verbal and verbal interactions between people,
analyze their intentions and their dialogue, extract information, and synthesize appropriate behaviors, e.g., wave to a person, turn its head towards the dominant speaker, nod, gesticulate, ask questions,
give advice, wait for instructions, etc. In this talk we will briefly describe a number of topics of crucial importance for HRI: audio-visual sound-source separation and localization in natural environments,
for example to detect and track moving speakers; inference of temporal models of verbal and non-verbal activities (diarization); continuous recognition of particular gestures and words; context recognition;
and multimodal dialogue. We will illustrate these ideas with findings from the EU STREP project HUMAVIPS (humavips.eu).
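To make the disambiguation argument concrete, here is a deliberately simple sketch (standard inverse-variance Gaussian fusion, not the HUMAVIPS algorithms; all numbers are invented): two noisy bearing estimates of a speaker, one from audio and one from vision, are combined so that whichever modality is currently weaker is automatically down-weighted.

```python
def fuse(mu_a, var_a, mu_v, var_v):
    """Inverse-variance (product-of-Gaussians) fusion of two estimates."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_v)
    mu = var * (mu_a / var_a + mu_v / var_v)
    return mu, var

# audio bearing: 30 deg, std 10 deg; visual bearing: 25 deg, std 2 deg
mu, var = fuse(30.0, 10.0**2, 25.0, 2.0**2)
# fused estimate: ~25.2 deg, with lower variance than either modality alone
```

If the speaker turns away from the camera, the visual variance grows and the same formula smoothly shifts weight to the audio estimate — the mutual-disambiguation behavior described above.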
11:45
Neural Dynamics: how a theoretical language of embodied cognition and dynamical systems thinking enables cognitive robotic behaviours
Yulia Sandamirskaya, Gregor Schöner - Ruhr-Universität Bochum, Germany
Every moment of our awake existence we are aware of our visual surroundings, capable of reaching for and handling objects, and able to chain simple actions into sequences to achieve goals. Considerable experimental and modelling work has gone into understanding the flexibility, robustness, and adaptivity of human cognitive behaviour. The Neural Dynamics project has committed to the mathematical and conceptual framework of Dynamic Field Theory to understand the principles of cognitive behaviour and to bring them to bear in generating cognitive behaviours in robots. Dynamic Field Theory has its origins in the sensory-motor domain, where it has been used to understand movement preparation, sensory-motor decisions, and motor memory. Since then, however, the framework has been extended to elements of visual cognition involved in scene representation and object recognition; sequences of cognitive or motor operations may also be understood in this framework, while all processes are endowed with the pervasive learning capabilities and flexibility needed to deal with real-world situations. The developed neural-dynamic language has been used to create artificial cognitive systems with a range of cognitive capabilities: on the perceptual side, object and pose recognition and the formation of actionable representations of scenes and action sequences; on the action side, temporal organisation of elementary behaviours, motivational dynamics, and simultaneous planning and action. In this presentation, I will especially highlight the pervasive learning and adaptivity of the neural-dynamic architectures, realised through neural-dynamic reinforcement learning, conditioning, associations, and habit formation.
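The core mathematical object of Dynamic Field Theory is an Amari-type neural field. The following minimal simulation is a generic textbook sketch with invented parameters, not project code: a localized input drives the field to form a self-stabilized activation peak, the elementary "decision" unit of the framework.

```python
import numpy as np

def simulate_field(steps=200, n=101, dt=1.0, tau=20.0, h=-5.0):
    """Minimal 1-D Amari dynamic neural field:
    tau * du/dt = -u + h + input + kernel * sigmoid(u)."""
    x = np.arange(n)
    u = np.full(n, h, dtype=float)                 # field at resting level h
    d = np.subtract.outer(x, x)
    # interaction kernel: local excitation, global inhibition
    w = 8.0 * np.exp(-d**2 / (2 * 4.0**2)) - 1.0
    s = 7.0 * np.exp(-(x - 50)**2 / (2 * 3.0**2))  # localized external input
    for _ in range(steps):
        f = 1.0 / (1.0 + np.exp(-u))               # sigmoidal output
        u += (dt / tau) * (-u + h + s + w @ f)
    return u

u = simulate_field()
# the field forms a suprathreshold peak around the input location (x = 50),
# while sites far from the input stay below threshold
```

A peak that sustains itself through lateral excitation is what DFT uses as working memory and decision making; the perceptual and motor capabilities listed above are built from coupled fields of this kind.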
12:00
Tactile sensing in manipulation: from humans to machine learning
Patrick van der Smagt - TU Munich, Germany
The EC project TACMAN investigates the use of tactile sensing in in-hand manipulation.
By identifying the relevant physical properties of human manipulation, we exploit machine-learning techniques to enable robots to do the same.
The goals and achievements of the project will be highlighted, focusing on methodologies where appropriate.
12:15
Whole-body compliant dynamical contacts in cognitive humanoids
Francesco Nori - Istituto Italiano di Tecnologia (IIT), Italy
The aim of CoDyCo is to advance the current control and cognitive understanding of robust, goal-directed whole-body motion interaction with multiple contacts. CoDyCo will go beyond traditional approaches by: (1) proposing methodologies for performing coordinated interaction tasks with complex systems; (2) combining planning and compliance to deal with predictable and unpredictable events and contacts; (3) validating theoretical advances in real-world interaction scenarios.
First, CoDyCo will advance the state of the art in the way robots coordinate physical interaction and physical mobility. Traditional industrial applications involve robots with limited mobility; consequently, interaction (e.g. manipulation) was treated separately from whole-body posture (e.g. balancing), assuming the robot to be firmly connected to the ground. Foreseen applications involve robots with greater autonomy and physical mobility. Within this novel context, physical interaction influences stability and balance. To allow robots to surpass the barriers between interaction and posture control, CoDyCo will be grounded in the principles governing whole-body coordination with contact dynamics.
Second, CoDyCo will go beyond traditional approaches in dealing with all perceptual and motor aspects of physical interaction, unpredictability included. Recent developments in compliant actuation and touch sensing allow safe and robust physical interaction under unexpected contact, including contact with humans. The next advancement for cognitive robots, however, is the ability not only to cope with unpredictable contact, but also to exploit predictable contact in ways that assist in goal achievement.
Third, the achievement of the project objectives will be validated in real-world scenarios with the iCub humanoid robot engaged in whole-body goal-directed tasks. The evaluations will show the iCub exploiting rigid supportive contacts, learning to compensate for compliant contacts, and utilizing assistive physical interaction.
12:30
Lunch
14:00
THE Hand Embodied - Science and Technology going hand in hand
Antonio Bicchi - University of Pisa, Italy
THE Hand Embodied was first conceived on a restaurant napkin where a scientist and an engineer were having lunch. It grew into a solid scientific vision, and later into a proposal, through deep interaction between researchers with backgrounds in medicine, neuroscience, physiology, psychology, physics, and several different hues of engineering. Its execution was supervised and helped by an outstanding panel of experts from neuroscience, computer science, and engineering. Results were positive beyond our own expectations. Although only academic or research partners took part in THE, quite a few real-world applications and even start-ups were spun off.
In this presentation, after illustrating these results, I will make the point that it is desirable that in the future the EU continue funding research projects that span multiple disciplines and integrate their methods and results, without too short-sighted a concern for immediate return on investment.
14:15
Control Issues in Human-Robot Collaboration: Results from the SAPHARI project
Marilena Vendittelli, Alessandro De Luca - Sapienza University of Rome, Italy
In this talk, some basic control issues related to physical Human-Robot Interaction (pHRI) will be presented, with solutions developed within the ongoing EU FP7 SAPHARI project. The framework is a hierarchical control architecture of consistent behaviors, organized in safety, coexistence, and active collaboration layers. Typical human-robot interaction tasks involve dynamically varying and uncertain environments. We thus require fast on-line collision avoidance methods driven by exteroceptive sensors (one or more RGB-D sensors) for coexistence; rapid detection of unavoidable physical contacts at the whole-body level, based only on proprioceptive sensing (using model-based residuals or motor currents), for safety; and the ability to distinguish between accidental collisions and intentional contacts (by suitable filtering of signals) for physical collaboration between human and robot. In this latter case, forces and common motion at the contact should be monitored and regulated. In particular, we will show results of contact force estimation at a generic location along the robot structure, without using tactile or force sensors. The estimated forces are used either in an impedance or in a force control scheme, both designed to impose the requested behavior at the contact point. While all these developments were made for fixed-base lightweight manipulators, their possible extension to humanoid robots and the open issues will also be highlighted.
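Proprioceptive contact detection of this kind is commonly built on generalized-momentum residuals. The following 1-DOF simulation is purely illustrative (invented inertia, damping, and gains; a standard momentum-observer scheme, not SAPHARI code): the observer reconstructs an external contact torque from the commanded torque and joint velocity alone, with no force/torque sensor.

```python
# Momentum-observer residual for a single joint:
#   true dynamics:  M*q_ddot = tau_motor + tau_ext - b*q_dot
#   residual:       r = K * (p - integral(tau_motor - b*q_dot + r)),  p = M*q_dot
# which gives dr/dt = K*(tau_ext - r), so r tracks the external torque.
M, b, K, dt = 1.0, 0.5, 50.0, 0.001   # inertia, damping, observer gain, step
q_dot, integral, r = 0.0, 0.0, 0.0
log = []
for k in range(2000):
    t = k * dt
    tau_motor = 1.0                      # commanded torque
    tau_ext = 3.0 if t >= 1.0 else 0.0   # unexpected contact after 1 s
    q_dot += dt * (tau_motor + tau_ext - b * q_dot) / M
    integral += dt * (tau_motor - b * q_dot + r)
    r = K * (M * q_dot - integral)       # residual: estimate of tau_ext
    log.append(r)
# r stays near 0 before the contact and converges to ~3.0 after it
```

Thresholding the (multi-joint vector of) residuals separates contact from free motion, and the time profile of the filtered signal is what allows accidental collisions to be told apart from intentional contact.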
14:30
Results from the CYBERLEGs system and perspectives for wearable robotics
Nicola Vitiello - Scuola Superiore Sant'Anna, Italy
The global scientific and technological goal of the CYBERLEGs project is the development of an artificial cognitive system for lower-limb functional replacement and assistance in activities of daily living for dysvascular trans-femoral amputees. CYBERLEGs will be a robotic system consisting of an active cognitive artificial leg for the functional replacement of the amputated limb and a wearable active orthosis for assisting the contralateral sound limb. CYBERLEGs will allow the amputee to walk forward and backward, go up and down stairs, and move from sit-to-stand and stand-to-sit with minimum cognitive and energetic effort. The control system of CYBERLEGs will be based on motor primitives as fundamental building blocks, thus endowing CYBERLEGs with semi-autonomous behavior for planning the motion of the prosthesis joints and the assistive action of the orthosis module. CYBERLEGs will be capable of high-level cognitive skills, interfaced to the amputee through bi-directional interaction. CYBERLEGs will be able ‘to understand’ the user's motor intentions smoothly and effectively, and to prevent the risk of falls, by means of a multi-sensory fusion algorithm based on: the observed motion of the amputee's body; the interaction force between CYBERLEGs and the amputee; and their interaction force with the ground. Finally, CYBERLEGs will be capable of closing the loop with the amputee: the amputee will receive feedback from CYBERLEGs which will enhance the perception of CYBERLEGs as a part of his/her own body.
The following major results achieved within the first two years of the CYBERLEGs project will be overviewed in this presentation: (i) development and experimental validation of an active pelvis exoskeleton for hip flexion-extension assistance; (ii) development of a control strategy based on adaptive oscillators for assisting the amputee's ground-level walking; (iii) a non-invasive human-machine interface allowing amputees to control a lower-limb prosthesis through a wearable sensory apparatus comprising IMUs and pressure-sensitive foot insoles; (iv) an algorithm for real-time detection of incipient falls, based on the use of adaptive oscillators.
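Points (ii) and (iv) rely on adaptive oscillators. A minimal sketch of an adaptive-frequency (Hopf) oscillator in the style of Righetti and Ijspeert follows, with invented parameters rather than the project's: coupled to a periodic "gait" signal through its tracking error, the oscillator both synchronises with the signal and learns its frequency.

```python
import numpy as np

# Adaptive Hopf oscillator: state (x, y) plus an adapted frequency omega.
dt, gamma, mu, eps = 0.001, 8.0, 1.0, 2.0
x, y, omega = 1.0, 0.0, 2 * np.pi * 0.8   # initial frequency guess: 0.8 Hz
omega_gait = 2 * np.pi * 1.0              # true cadence: 1 Hz
for k in range(60000):                    # 60 s of simulated time
    F = np.cos(omega_gait * k * dt) - x   # coupling: tracking error
    r2 = x * x + y * y
    dx = gamma * (mu - r2) * x - omega * y + eps * F
    dy = gamma * (mu - r2) * y + omega * x
    domega = -eps * F * y / max(np.sqrt(r2), 1e-9)   # frequency adaptation
    x, y, omega = x + dt * dx, y + dt * dy, omega + dt * domega
# omega converges towards the gait frequency (about 2*pi rad/s)
```

Once locked, the oscillator's phase provides a smooth, noise-robust estimate of where the wearer is in the gait cycle, which can time assistive torques or flag deviations (e.g. an incipient fall) when the measured signal departs from the learned rhythm.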
14:45
PacMan: Grasping in Unstructured Environments
Jeremy Wyatt - University of Birmingham, UK
In this talk I will give an overview of the aims, philosophy, and results of the PacMan project. The project is concerned with advancing methods for the manipulation of unfamiliar objects. I will describe the three legs of the project: work on compositional representations of objects; work on machine learning and planning for grasping under unfamiliar and uncertain conditions; and work on synergetic grasping, haptics, and control. The talk will include the latest results and future directions.
15:00
The power of social humanoids: lessons from child-robot interaction
Tony Belpaeme - Plymouth University, UK
In the last five years the ALIZ-E project team has worked to further the science and technology behind human-robot interaction, focusing specifically on letting small humanoids interact with children over an extended period of time. I will show how recent advances in sensing and AI, together with new insights into social human-robot interaction, allow us to build engaging child-robot interaction scenarios. To evaluate our systems we ran field studies in which Nao robots interacted with children aged 7-13. We evaluated the robot as an educational tool in school settings, and as a support tool for children hospitalised with diabetes. I will present results showing the potential of the robot, and will demonstrate that adaptation and personalisation are key to achieving positive outcomes. Finally, I will make a case for why Human-Robot Interaction might very well be the killer app that humanoid robotics is waiting for.
15:15
Break
15:45
From the COMAN Experience to the WALK-MAN Humanoid: Development of Compliant Humanoids with Enhanced Physical Interaction and Efficient Performance
Nikos Tsagarakis - Istituto Italiano di Tecnologia (IIT), Italy
Most of the humanoid robots developed in the past are powered by intrinsically stiff actuation systems and controlled by stiff position/velocity units coupled with highly geared, non-backdrivable transmissions. These robots are precise and highly repeatable when performing locomotion and manipulation tasks within well-defined environments under anticipated physical interactions. However, it has become clear that the ability of these robots to cope with disturbances and adapt to unpredicted physical interactions is severely limited. Moreover, these systems are rather inefficient. The talk will introduce ongoing work and results towards the development of humanoid robots with intrinsic and actively controlled compliance, starting from the COMAN humanoid and the experience gained with it, up to our most recent work on the WALK-MAN humanoid platform, currently under development.
16:00
KoroiBot - Developing Methods to improve humanoid walking capabilities
Katja Mombaur - University of Heidelberg, Germany
The goal of the KoroiBot project is to enhance the ability of humanoid robots to walk in a dynamic and versatile fashion in the way humans do. Humanoid robots of the 21st century are expected to support humans in many different environments, such as factories, households, disaster sites or space missions, to name just a few. However, one of the fundamental abilities that they still lack for these tasks is the ability to walk in a truly human-like fashion. This problem is not only linked to the present hardware, but also to a large extent to the control principles and the software used.
The KoroiBot project, which started in October 2013, addresses this problem by developing novel software techniques for increased walking performance and evaluating them on the partners' existing humanoids. The new software technologies are based on mathematical models and methods, in particular optimization, and on learning from biology, which will help to increase humanoid walking performance. Especially for humanoid robots, with their redundant DOFs, optimization is the way to make redundancy a benefit rather than a burden. Models are developed based on thorough experimental studies of human walking in many different scenarios, which are collected in the KoroiBot motion capture database. We study different walking tasks in this project: walking on level ground and uneven terrain, on balance bars and stepping-stone bridges, in interaction with other people, as well as multi-contact walking with additional hand supports and push recovery. This is only possible through a truly interdisciplinary approach combining expertise in humanoid robotics, mathematics, cognitive sciences, and biomechanical modeling.
Beyond the scope of humanoids, the methods can be useful for the design and control of exoskeletons and intelligent prostheses, in functional electrical stimulation of hemiplegic or paraplegic patients, in novel biped-based transportation systems, and in computer animation and games.
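The claim that optimization turns redundancy into a benefit can be illustrated with a line of linear algebra (a generic sketch, unrelated to KoroiBot's actual software; the Jacobian below is invented): the minimum-norm joint velocity realising a desired task-space velocity is the pseudoinverse solution, and the leftover null space is free for secondary objectives such as balance or posture.

```python
import numpy as np

# 2-D task, 3 joints -> one redundant degree of freedom
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.8]])
x_dot = np.array([0.1, -0.05])          # desired task-space velocity

# minimum-norm solution of J @ q_dot = x_dot
q_dot = np.linalg.pinv(J) @ x_dot
# null-space projector: motions here leave the task untouched
N = np.eye(3) - np.linalg.pinv(J) @ J
q_dot_2 = q_dot + N @ np.array([0.0, 0.0, 0.3])  # add a secondary motion
# both joint velocities realise the same task-space velocity
```

Full walking optimization is of course far richer (dynamics, contacts, time horizons), but this is the basic mechanism by which extra degrees of freedom become a resource rather than an ambiguity.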
16:15
Integrative Approach for the Emergence of Human-like Robot Locomotion (H2R project)
Diego Torricelli, José L. Pons - CSIC, Spain
In humans, walking emerges naturally from a hierarchical organization and combination of neural control and biomechanical mechanisms.
The implementation of complex robotic humanoids is of particular interest to neurophysiologists and bioengineers as a workbench for validating theories of biological motor control and cognitive behavior. The H2R project aims to integrate different bio-inspired control approaches in a compliant biped platform, to demonstrate that this integration results in improved locomotion stability and human-likeness in disturbed environments.
16:30
COMANOID: Multi-contact Collaborative Humanoids in Difficult Access Airliner Assembly Line Operation
Abdelrahman Kheddar - CNRS (France/Japan)
In this talk we briefly present a newly started H2020 RIA project dedicated to the use of humanoid robots with multi-contact motion technology in the context of airliner assembly. COMANOID investigates the deployment of robotic solutions in well-identified Airbus airliner assembly operations that are laborious or tedious for human workers and for which access is impossible for wheeled or rail-ported robotic platforms. As a solution to these constraints, a humanoid robot is proposed to achieve the described tasks in real use cases provided by Airbus Group. Recent developments in humanoid multi-contact planning and control suggest that this is a much more plausible solution than current alternatives such as a manipulator mounted on a multi-legged base. In particular, the project focuses on implementing a real-world humanoid robotics solution using the best of research and innovation. The main challenge will be to integrate current scientific and technological advances, including multi-contact planning and control; advanced visual-haptic servoing; RGB-D-based perception and localization; human-robot safety; and the operational efficiency of cobotic solutions in airliner assembly.
16:45
Panel Discussion
17:30
End