
{"id":34,"date":"2021-03-08T17:59:46","date_gmt":"2021-03-08T16:59:46","guid":{"rendered":"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/?page_id=34"},"modified":"2021-05-29T20:39:30","modified_gmt":"2021-05-29T18:39:30","slug":"invited-speakers","status":"publish","type":"page","link":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/program\/invited-speakers\/","title":{"rendered":"Invited Speakers"},"content":{"rendered":"\n<div style=\"height:75px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<section class=\"wp-block-mdbtheme-section mdbtheme-block-item  container py-2 undefined\">\n<hr class=\"wp-block-separator mdbtheme-block-item\"\/>\n\n\n\n<h2 class=\"wp-block-heading\">Invited Speakers<\/h2>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"220\" height=\"220\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/aloimonos.jpg\" alt=\"\" class=\"wp-image-151 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/aloimonos.jpg 220w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/aloimonos-150x150.jpg 150w\" sizes=\"auto, (max-width: 220px) 100vw, 220px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"http:\/\/prg.cs.umd.edu\/\">Yiannis Aloimonos<\/a><\/strong><br>University of Maryland, USA<br><\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: The theory of Therbligs: A compositional approach to incremental robot learning<\/p>\n\n\n\n<div>\n  <a 
href=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/icra-workshop-Yiannis-Aloimonos.pdf\" target=\"_blank\" style=\"font-weight: bold\" rel=\"noopener\">\n    Abstract (PDF)\n  <\/a>\n<\/div>\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"180\" height=\"180\" src=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/mbeetza.jpg\" alt=\"\" class=\"wp-image-377 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/mbeetza.jpg 180w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/mbeetza-150x150.jpg 150w\" sizes=\"auto, (max-width: 180px) 100vw, 180px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/ai.uni-bremen.de\/team\/michael_beetz\">Michael Beetz<\/a><\/strong><br>University of Bremen, Germany <br><\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: DTKR&amp;R &#8212; a simulation-based predictive modelling engine for<br>cognition-enabled robot manipulation<\/p>\n\n\n\n<!-- Collapse buttons -->\n<div>\n  <a data-toggle=\"collapse\" href=\"#collapseExampleBeetz\" aria-expanded=\"false\" aria-controls=\"collapseExampleBeetz\" style=\"font-weight: bold\">\n    Abstract \n  <\/a>\n<\/div>\n<!-- \/ Collapse buttons -->\n\n<!-- Collapsible element -->\n<div class=\"collapse\" id=\"collapseExampleBeetz\">\n  <div class=\"mt-3\">\n    <p>Recent years have seen an impressive progress of robot simulators and\nenvironments as fully developed software systems that provide\nsimulations as a substitute for 
real-world activity.  They are\nprimarily used for training modules of robot control programs, which\nare, after completing the learning process, deployed in real-world\nrobots. In contrast, simulation in (artificial) cognitive systems is a\ncore cognitive capability, which is assumed to provide a &#8220;small-scale\nmodel of external reality and of its own possible actions within its\nhead, it is able to try out various alternatives, conclude which is\nthe best of them, react to future situations before they arise,\nutilise the knowledge of past events in dealing with the present and\nfuture, and in every way to react in a much fuller, safer, and more\ncompetent manner to the emergencies which face it&#8221;  (Craik, The Nature of Explanation, 1943). This\nmeans that simulation can be considered an embodied, online\n<b>predictive modelling engine<\/b> that enables robots to\ncontextualize vague task requests such as &#8220;bring me the milk&#8221; into a\nconcrete body motion that achieves the implicit goal and avoids\nunwanted side effects. In this setting, a robot can run small-scale\nsimulation and rendering processes for different reasoning tasks all\nthe time and can continually compare simulation results with reality &#8211; it is a promising Sim2Real2Sim setup that has the potential to\ncreate much more powerful robot simulation engines.\n
We introduce\n<b>DTKR&amp;R<\/b>, a robot simulation framework that is currently\ndesigned and developed with this vision in mind.<\/p>\n  <\/div>\n<\/div>\n<!-- \/ Collapsible element -->\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"220\" height=\"220\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/cangelosi_2012.jpg\" alt=\"\" class=\"wp-image-154 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/cangelosi_2012.jpg 220w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/cangelosi_2012-150x150.jpg 150w\" sizes=\"auto, (max-width: 220px) 100vw, 220px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/www.research.manchester.ac.uk\/portal\/angelo.cangelosi.html\">Angelo Cangelosi<\/a><\/strong><br>The University of Manchester; United Kingdom<br><\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: Developmental Robotics for Language Learning, Trust and Theory of Mind<\/p>\n\n\n\n<!-- Collapse buttons -->\n<div>\n  <a data-toggle=\"collapse\" href=\"#collapseExampleCangelosi\" aria-expanded=\"false\" aria-controls=\"collapseExampleCangelosi\" style=\"font-weight: bold\">\n    Abstract \n  <\/a>\n<\/div>\n<!-- \/ Collapse buttons -->\n\n<!-- Collapsible element -->\n<div class=\"collapse\" id=\"collapseExampleCangelosi\">\n  <div class=\"mt-3\">\n    <p>Growing theoretical and experimental research on action and language processing and on number learning and gestures clearly demonstrates the role of embodiment in cognition and 
language processing. In psychology and neuroscience, this evidence constitutes the basis of embodied cognition, also known as grounded cognition (Pezzulo et al. 2012). In robotics and AI, these studies have important implications for the design of linguistic capabilities in cognitive agents and robots for human-robot collaboration, and have led to the new interdisciplinary approach of Developmental Robotics, as part of the wider Cognitive Robotics field (Cangelosi &amp; Schlesinger 2015; Cangelosi &amp; Asada 2021). During the talk we will present examples of developmental robotics models and experimental results from iCub experiments on the embodiment biases in early word acquisition and grammar learning (Morse et al. 2015; Morse &amp; Cangelosi 2017) and experiments on pointing gestures and finger counting for number learning (De La Cruz et al. 2014). We will then present a novel developmental robotics model, and experiments, on Theory of Mind and its use for autonomous trust behavior in robots (Vinanzi et al. 2019). 
The implications of such embodied approaches for embodied cognition in AI and the cognitive sciences, and for robot companion applications, will also be discussed.<\/p>\n  <\/div>\n<\/div>\n<!-- \/ Collapsible element -->\n\n\n\n<p><\/p>\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"450\" height=\"450\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/dyhsu_2.jpg\" alt=\"\" class=\"wp-image-155 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/dyhsu_2.jpg 450w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/dyhsu_2-300x300.jpg 300w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/dyhsu_2-150x150.jpg 150w\" sizes=\"auto, (max-width: 450px) 100vw, 450px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/www.comp.nus.edu.sg\/~dyhsu\/\">David Hsu<\/a><\/strong><br>National University of Singapore, Singapore <\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: Interactive Visual Grounding and Grasping in Clutter<\/p>\n\n\n\n<!-- Collapse buttons -->\n<div>\n  <a data-toggle=\"collapse\" href=\"#collapseExampleHsu\" aria-expanded=\"false\" aria-controls=\"collapseExampleHsu\" style=\"font-weight: bold\">\n    Abstract \n  <\/a>\n<\/div>\n<!-- \/ Collapse buttons -->\n\n<!-- Collapsible element -->\n<div class=\"collapse\" id=\"collapseExampleHsu\">\n  <div class=\"mt-3\">\n    <p>&#8220;Pass me the blue notebook right next to the coffee mug.&#8221; This is the spoken instruction to the robot, which is\n
faced with a pile of objects on the table. What would it take for the robot to succeed? It must understand natural language instructions, recognize objects and their spatial relationships visually, and most importantly, connect language understanding and visual perception with robot actions. One main challenge here is the inevitable ambiguity in human language and uncertainty in visual perception. In this talk, I will introduce INVIGORATE, a robot system that interacts with humans through natural language and grasps a specified object in clutter. By integrating model-based reasoning and data-driven deep learning, INVIGORATE takes one step towards a service robot that helps with household tasks at home.<\/p>\n  <\/div>\n<\/div>\n<!-- \/ Collapsible element -->\n\n\n\n<p><\/p>\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"200\" height=\"200\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/inamura.jpg\" alt=\"\" class=\"wp-image-157 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/inamura.jpg 200w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/inamura-150x150.jpg 150w\" sizes=\"auto, (max-width: 200px) 100vw, 200px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/www.nii.ac.jp\/en\/faculty\/informatics\/inamura_tetsunari\/\">Tetsunari Inamura<\/a><\/strong><br>National Institute of Informatics, Japan<\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: Cloud-based VR gamification towards learning explanation of the\n
daily-life activity<\/p>\n\n\n\n<p><\/p>\n\n\n\n<p><\/p>\n\n\n\n<!-- Collapse buttons -->\n<div>\n  <a data-toggle=\"collapse\" href=\"#collapseExampleInamura\" aria-expanded=\"false\" aria-controls=\"collapseExampleInamura\" style=\"font-weight: bold\">\n    Abstract \n  <\/a>\n<\/div>\n<!-- \/ Collapse buttons -->\n\n<!-- Collapsible element -->\n<div class=\"collapse\" id=\"collapseExampleInamura\">\n  <div class=\"mt-3\">\n    <p>In recent years, attempts to bridge the gap between natural language\nprocessing research and robotics research have accelerated. A typical\nexample is the visual navigation task, which learns the relationship\nbetween a sequence of visual information about an agent&#8217;s movement and\nthe sentences that describe its navigation. Researchers have proposed\nvarious machine learning models by sharing large open datasets on\nsuch navigation tasks. However, the main behaviors are often\ntwo-dimensional movements in a room or a city. Large datasets of\ncomplex behaviors such as assembling objects or physical and social\ninteractions with others are not easily available due to the enormous\ncost of building datasets. On the other hand, simulators in robotics\nresearch are becoming more and more important, and VR systems that\nallow humans to intervene in robot simulators have been proposed. In\nthis talk, I introduce an attempt at gamification using a VR system as\na mechanism to collect natural language expressions that correspond to\nsocial interactions and complex physical behaviors. I have developed\nthe SIGVerse system, which combines a robot simulator with a VR space\nwhere humans log in as avatars. Based on the VR system, I designed a\nrobot competition task in which humans and robots interact\nlinguistically and perform social and physical actions. This system\nenables us to collect interaction data while providing fun for the\ncompetitors and participants.\n
I will also introduce our recent attempt\nto accelerate HRI research during the coronavirus pandemic with the VR\nsystem.<\/p>\n  <\/div>\n<\/div>\n<!-- \/ Collapsible element -->\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"170\" height=\"170\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/ramirez-amaro.jpg\" alt=\"\" class=\"wp-image-153 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/ramirez-amaro.jpg 170w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/ramirez-amaro-150x150.jpg 150w\" sizes=\"auto, (max-width: 170px) 100vw, 170px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/www.chalmers.se\/en\/staff\/Pages\/karinne.aspx\">Karinne Ramirez-Amaro<\/a><\/strong><br>Chalmers University of Technology, Sweden<br><\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: Robots that Reason &#8211; A Semantic Reasoning Method for the Recognition of Human Activities<br><\/p>\n\n\n\n<!-- Collapse buttons -->\n<div>\n  <a data-toggle=\"collapse\" href=\"#collapseExampleRamirezAmaro\" aria-expanded=\"false\" aria-controls=\"collapseExampleRamirezAmaro\" style=\"font-weight: bold\">\n    Abstract \n  <\/a>\n<\/div>\n<!-- \/ Collapse buttons -->\n\n<!-- Collapsible element -->\n<div class=\"collapse\" id=\"collapseExampleRamirezAmaro\">\n  <div class=\"mt-3\">\n    <p>Autonomous robots are expected to learn new skills and to re-use past experiences in different situations as efficiently, intuitively, and reliably as possible.\n
Robots need to adapt to different sources of information, for example, videos, robot sensors, virtual reality, etc. Thus, to advance the understanding of human activities in robotics, learning methods that adapt to different sensors are needed. In this talk, I will introduce a novel learning method that generates compact and general semantic models to infer human activities. This learning method allows robots to obtain a higher-level understanding of a demonstrator\u2019s behavior via semantic representations. First, the low-level information is extracted from the sensory data; then a meaningful high-level semantic description is obtained by reasoning about the intended human behaviors. The introduced method has been assessed on different robots, e.g. the iCub, REEM-C, and TOMM, with different kinematic chains and dynamics. Furthermore, the robots use different perceptual modalities, under different constraints, in several scenarios ranging from making a sandwich to driving a car, across different domains (home-service and industrial scenarios). One important aspect of our approach is its scalability and adaptability toward new activities, which can be learned on-demand.\n
Overall, the presented compact and flexible solutions are suitable for tackling complex and challenging problems for autonomous robots.<\/p>\n  <\/div>\n<\/div>\n<!-- \/ Collapsible element -->\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"500\" height=\"500\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/Giulio-Sandini.jpg\" alt=\"\" class=\"wp-image-169 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/Giulio-Sandini.jpg 500w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/Giulio-Sandini-300x300.jpg 300w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/Giulio-Sandini-150x150.jpg 150w\" sizes=\"auto, (max-width: 500px) 100vw, 500px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/www.iit.it\/people\/giulio-sandini\">Giulio Sandini<\/a><\/strong><br>Istituto Italiano di Tecnologia, Italy<\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: TBA<br><strong>Abstract &#8211; <\/strong>TBA<\/p>\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"1024\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/photo-3-1024x1024.jpg\" alt=\"\"\n
class=\"wp-image-156 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/photo-3-1024x1024.jpg 1024w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/photo-3-300x300.jpg 300w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/photo-3-150x150.jpg 150w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/photo-3-768x768.jpg 768w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/photo-3.jpg 1280w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/www.cmpe.boun.edu.tr\/~emre\/\">Emre Ugur<\/a><\/strong><br>Bogazici University, Turkey<br><\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: Learning discrete representations from continuous self-supervised&nbsp;interactions: A neuro-symbolic robotics approach<\/p>\n\n\n\n<!-- Collapse buttons -->\n<div>\n  <a data-toggle=\"collapse\" href=\"#collapseExampleUgur\" aria-expanded=\"false\" aria-controls=\"collapseExampleUgur\" style=\"font-weight: bold\">\n    Abstract \n  <\/a>\n<\/div>\n<!-- \/ Collapse buttons -->\n\n<!-- Collapsible element -->\n<div class=\"collapse\" id=\"collapseExampleUgur\">\n  <div class=\"mt-3\">\n    <p>Interaction with the world requires processing low-level continuous sensorimotor representations whereas abstract reasoning requires the use of high-level symbolic representations. Truly intelligent robots are expected to form abstractions continually from their interactions with the world and use them on-the-fly for complex planning and reasoning in novel environments. In this talk, we address the challenging problem of autonomous discovery of discrete symbols and unsupervised learning of rules via a novel neuro-symbolic architecture. 
In this architecture, action-grounded categories are formed in the binary bottleneck layer of a predictive deep encoder-decoder network that processes the robot\u2019s image of the scene. To distill the knowledge represented by the neural network into rules and plans, PPDDL representations are formed from learned decision trees that replace the decoder functionality of the network. The discovered symbols are interpretable, formed incrementally, re-used to learn more complex symbols, and directly deployed by off-the-shelf planners to achieve manipulation tasks such as building towers from objects with different affordances.<\/p>\n  <\/div>\n<\/div>\n<!-- \/ Collapsible element -->\n<\/div><\/div>\n\n\n\n<div style=\"height:50px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-media-text mdbtheme-block-item  alignwide is-stacked-on-mobile is-vertically-aligned-top no-shadow\" style=\"grid-template-columns:23% auto\"><figure class=\"wp-block-media-text__media\"><img loading=\"lazy\" decoding=\"async\" width=\"720\" height=\"720\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/woergoetter.jpg\" alt=\"\" class=\"wp-image-152 size-full\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/woergoetter.jpg 720w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/woergoetter-300x300.jpg 300w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/woergoetter-150x150.jpg 150w\" sizes=\"auto, (max-width: 720px) 100vw, 720px\" \/><\/figure><div class=\"wp-block-media-text__content\">\n<p class=\"has-medium-font-size\"><strong><a href=\"https:\/\/alexandria.physik3.uni-goettingen.de\/cns-group\/\">Florentin W\u00f6rg\u00f6tter<\/a><\/strong><br>Georg-August University G\u00f6ttingen, Germany<\/p>\n\n\n\n<p><strong>Title of the Talk<\/strong>: How Humans Recognize Actions:\n
Behavioral and fMRI Experiments Support Robotic Action Grammar<\/p>\n\n\n\n<!-- Collapse buttons -->\n<div>\n  <a data-toggle=\"collapse\" href=\"#collapseExampleWoergoetter\" aria-expanded=\"false\" aria-controls=\"collapseExampleWoergoetter\" style=\"font-weight: bold\">\n    Abstract \n  <\/a>\n<\/div>\n<!-- \/ Collapse buttons -->\n\n<!-- Collapsible element -->\n<div class=\"collapse\" id=\"collapseExampleWoergoetter\">\n  <div class=\"mt-3\">\n    <p>Since about 2010, several groups (e.g. the groups of Aloimonos, Asfour, Kjellstr\u00f6m, and others) have advocated and used different but related grammar-like representations to encode actions for robots. Our representation is based on so-called Semantic Event Chains. It separates actions into temporal chunks defined by touching and untouching relations between objects (including the actor&#8217;s hand or other body parts), because these transition events are highly characteristic of different action types. Recently, we focused on the question of whether humans use the same &#8220;algorithm&#8221; to predict and recognize actions. Here, we first show a set of virtual reality experiments that support this notion. This was flanked by a second study using functional magnetic resonance imaging that shows how our brain &#8220;hooks on&#8221; to these transition events.\n
These results, thus, indicate that the SEC-framework may have direct explanatory value for human processing of action, too.<\/p>\n  <\/div>\n<\/div>\n<!-- \/ Collapsible element -->\n<\/div><\/div>\n\n\n\n<div style=\"height:100px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<hr class=\"wp-block-separator mdbtheme-block-item\"\/>\n\n\n\n<div class=\"wp-block-columns mdbtheme-block-item is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column mdbtheme-block-item is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"wp-block-image mdbtheme-block-item  no-shadow\"><figure class=\"aligncenter size-large is-resized\"><a href=\"http:\/\/www.ieee.org\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/ieee-mb-bk.png\" alt=\"\" class=\"wp-image-221\" width=\"142\" height=\"41\"\/><\/a><\/figure><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column mdbtheme-block-item is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"wp-block-image mdbtheme-block-item  no-shadow\"><figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.ieee-ras.org\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/ieee-ras-logo.png\" alt=\"\" class=\"wp-image-222\" width=\"150\" height=\"55\"\/><\/a><\/figure><\/div>\n\n\n\n<p><\/p>\n<\/div>\n\n\n\n<div class=\"wp-block-column mdbtheme-block-item is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"wp-block-image mdbtheme-block-item  no-shadow\"><figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.ieee-icra.org\"><img loading=\"lazy\" decoding=\"async\" src=\"http:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/icra_logo.png\" alt=\"\" class=\"wp-image-71\" width=\"150\" height=\"75\" 
srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/icra_logo.png 600w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/03\/icra_logo-300x150.png 300w\" sizes=\"auto, (max-width: 150px) 100vw, 150px\" \/><\/a><\/figure><\/div>\n\n\n\n<p><\/p>\n<\/div>\n<\/div>\n\n\n\n<hr class=\"wp-block-separator mdbtheme-block-item\"\/>\n\n\n\n<div class=\"wp-block-columns mdbtheme-block-item is-layout-flex wp-container-core-columns-is-layout-9d6595d7 wp-block-columns-is-layout-flex\">\n<div class=\"wp-block-column mdbtheme-block-item is-layout-flow wp-block-column-is-layout-flow\">\n<div class=\"wp-block-image mdbtheme-block-item  no-shadow\"><figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/www.oml-project.org\/\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/OML_Logo.png\" alt=\"\" class=\"wp-image-335\" width=\"141\" height=\"101\" srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/OML_Logo.png 562w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/OML_Logo-300x215.png 300w\" sizes=\"auto, (max-width: 141px) 100vw, 141px\" \/><\/a><\/figure><\/div>\n<\/div>\n\n\n\n<div class=\"wp-block-column mdbtheme-block-item is-layout-flow wp-block-column-is-layout-flow\">\n<div style=\"height:20px\" aria-hidden=\"true\" class=\"wp-block-spacer mdbtheme-block-item \"><\/div>\n\n\n\n<div class=\"wp-block-image mdbtheme-block-item  no-shadow\"><figure class=\"aligncenter size-large is-resized\"><a href=\"https:\/\/ellis.eu\/\"><img loading=\"lazy\" decoding=\"async\" src=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/1200px-Logo_of_ELLIS-1024x242.png\" alt=\"\" class=\"wp-image-336\" width=\"256\" height=\"61\" 
srcset=\"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/1200px-Logo_of_ELLIS-1024x242.png 1024w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/1200px-Logo_of_ELLIS-300x71.png 300w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/1200px-Logo_of_ELLIS-768x182.png 768w, https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/wp-content\/uploads\/2021\/05\/1200px-Logo_of_ELLIS.png 1200w\" sizes=\"auto, (max-width: 256px) 100vw, 256px\" \/><\/a><\/figure><\/div>\n\n\n\n<p><\/p>\n<\/div>\n<\/div>\n<\/section>\n\n\n\n<p><\/p>\n","protected":false},"excerpt":{"rendered":"","protected":false},"author":1,"featured_media":0,"parent":22,"menu_order":0,"comment_status":"closed","ping_status":"closed","template":"","meta":{"footnotes":""},"class_list":["post-34","page","type-page","status-publish","hentry"],"_links":{"self":[{"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/pages\/34","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/pages"}],"about":[{"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/types\/page"}],"author":[{"embeddable":true,"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/comments?post=34"}],"version-history":[{"count":58,"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/pages\/34\/revisions"}],"predecessor-version":[{"id":466,"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/pages\/34\/revisions\/466"}],"up":[{"embeddable":true,"href":"https:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/pages\/22"}],"wp:attachment":[{"href":"ht
tps:\/\/archive.iar.kit.edu\/workshops\/icra2021\/index.php\/wp-json\/wp\/v2\/media?parent=34"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}