Invited Talks
Wednesday, 02. April
09:10 - 09:50
Engineering Artificial Cognitive Systems
Colette Maloney, Cognitive Systems and Robotics Unit, European Commission, Luxembourg

In 2003, the EU launched an ambitious multi-disciplinary programme calling for research on artificial cognitive systems. The motivation was that artificial systems should be able to function effectively in circumstances not planned for explicitly at design time; this requires new engineering principles and approaches. The stated goal of the programme is therefore to create and develop a scientific foundation that bears on many domains of engineering science, and to demonstrate its impact through exemplary real-world applications. Since the first Call for Proposals under this programme, some 50 projects have been contributing to building such a foundation. They cover a broad range of pertinent issues and provide new theoretical insights about perception and understanding, action and interaction, and learning and representations; they also show how these insights can be put to practical use, for instance through innovative designs and implementations of robotic systems. This talk examines what we have learned so far and outlines future directions for research.

14:00 - 15:00
Towards Cognitive Interaction - Routes, Progress and Challenges
Helge Ritter, Faculty of Technology, Bielefeld University

We are rapidly pushing our abilities to create technical systems of unprecedented complexity. The interaction between humans and such systems raises new challenges, one of the foremost being how to make the guidance and use of such systems as easy and natural as the cooperation and communication we are accustomed to between humans. We argue that realizing this goal will require a basic understanding of how to synthesize the quality of cognitive interaction from more readily realizable constituents that cover substantial partial functions such as intelligent motion, attention, situated communication, and memory with learning. We point out some exemplary research questions and report on ongoing research that led to the Bielefeld-based research initiative "CITEC - Cognitive Interaction Technology", launched recently in the context of the German Excellence Initiative, along with the closely associated "Cognition and Robotics Lab" (CoR-Lab). Both bring together an interdisciplinary consortium of computer scientists, biologists, linguists, and psychologists aiming to elucidate the principles of cognitive interaction and to replicate them in technical systems.

Thursday, 03. April
09:00 - 10:00
Cognition - The Interaction of Brain, Body, and Environment
Rolf Pfeifer, Artificial Intelligence Laboratory, University of Zurich, Switzerland

Traditionally, robotics, artificial intelligence, and neuroscience have focused on the study of the control or neural system itself. Recently, there has been increasing interest in the notion of embodiment in all disciplines dealing with intelligent behavior, including psychology, philosophy, and linguistics. In this talk, I explore the far-reaching and often surprising implications of this concept. While embodiment has often been used in its trivial sense, i.e. "intelligence requires a body", there are deeper and more important consequences: cognition emerges from the interaction of brain, body, and environment, or more generally from the relation between physical and information (neural, control) processes. Often, morphology and materials can take over some of the functions normally attributed to control, a phenomenon called "morphological computation". It can be shown that through embodied interaction with the environment, in particular through sensory-motor coordination, information structure is induced in the sensory data, thus facilitating categorization, perception, and learning. A number of case studies are presented to illustrate the concepts introduced. I conclude with some speculations about potential lessons for robotics and cognitive science.
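
To make the claim about sensory-motor coordination concrete, here is a deliberately minimal toy simulation (an editorial illustration, not from the talk; all names are invented): an agent whose hand reaches for a seen object produces correlated visual and proprioceptive data, while random movement leaves the two channels statistically unrelated, giving downstream categorization and learning nothing to exploit.

    import numpy as np

    rng = np.random.default_rng(0)

    def cross_modal_correlation(coordinated: bool, trials: int = 10_000) -> float:
        """Correlation between a visual and a proprioceptive channel.

        On each trial an object appears at a random position; 'vision'
        reports the object position and proprioception reports the hand
        position, both with noise. Coordinated behavior means reaching
        for the object; uncoordinated means moving the hand at random.
        """
        obj = rng.uniform(-1.0, 1.0, trials)
        vision = obj + rng.normal(scale=0.1, size=trials)
        if coordinated:
            hand = obj + rng.normal(scale=0.1, size=trials)  # reach for the object
        else:
            hand = rng.uniform(-1.0, 1.0, trials)            # random movement
        proprio = hand + rng.normal(scale=0.1, size=trials)
        return float(np.corrcoef(vision, proprio)[0, 1])

    print(f"coordinated:   r = {cross_modal_correlation(True):.2f}")   # ~0.96
    print(f"uncoordinated: r = {cross_modal_correlation(False):.2f}")  # ~0.00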

14:30 - 15:30
Grounding Language in Action Representation
Mark Steedman, School of Informatics, University of Edinburgh, Scotland, UK

For both neuro-anatomical and psychological reasons, it has been argued for many years that language and planned action are related. I will discuss this relation and suggest a formalization related to AI planning formalisms, drawing on linear and combinatory logic. This formalism gives a direct logical representation for the Gibsonian notion of "affordance" in its relation to action representation. The relation is so direct that it raises an obvious question: since higher animals make certain kinds of plans, and planning seems to require a symbolic representation closely akin to language, why don't those animals possess a language faculty in the human sense of the term? I will show that the recursive concept of the mental state of others that underlies propositional attitudes provides almost all that is needed to generalize planning to fully lexicalized natural language grammar. The conclusion will be that the evolutionary development of language from planning may have been a relatively simple and inevitable process. A much harder question is how symbolic planning evolved from neurally embedded sensory-motor systems in the first place, how action concepts can be learned from sensory-motor data, and how such grounded action concepts might differ from the standard logicist assumptions usually made in symbolic planners.
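
As a rough illustration of the planning side of this connection (a generic STRIPS-style sketch, not Steedman's combinatory/linear-logic formalism; the action names are invented), an affordance can be read as an action schema that is applicable exactly when the current state supplies its preconditions, which the action then consumes, loosely echoing the resource sensitivity of linear logic:

    # An affordance as applicability: an action can fire exactly when the
    # state supplies its preconditions; applying it consumes them.
    ACTIONS = {
        # name: (preconditions, effects), both sets of atomic facts
        "grasp": (frozenset({"hand-empty", "apple-on-table"}),
                  frozenset({"holding-apple"})),
        "eat":   (frozenset({"holding-apple", "hungry"}),
                  frozenset({"hand-empty", "sated"})),
    }

    def plan(state: frozenset, goal: frozenset, depth: int = 5):
        """Depth-limited forward search; returns a list of action names."""
        if goal <= state:
            return []
        if depth == 0:
            return None
        for name, (pre, eff) in ACTIONS.items():
            if pre <= state:                  # the action is afforded here
                rest = plan((state - pre) | eff, goal, depth - 1)
                if rest is not None:
                    return [name] + rest
        return None

    start = frozenset({"hand-empty", "apple-on-table", "hungry"})
    print(plan(start, frozenset({"sated"})))  # ['grasp', 'eat']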

Friday, 04. April: Second Conference Day
09:00 - 10:00
HAL: Human Activity Language - A New Tool for Cognitive Systems
Yiannis Aloimonos, Computer Vision Laboratory, University of Maryland, USA

We propose a linguistic approach to modeling human activity. This approach can address several problems related to action interpretation within a single framework. The Human Activity Language (HAL) consists of kinetology, morphology, and syntax. Kinetology, the phonology of human movement, finds basic primitives for human motion (segmentation) and associates them with symbols (symbolization). The input is measurements of human movement in 3D (signals), as produced for example by motion-capture systems or derived from visual data. In this way, kinetology provides a non-arbitrary, grounded symbolic representation for human movement that allows synthesis, analysis, and symbolic manipulation.
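
A minimal sketch of what segmentation and symbolization might look like on a single joint-angle channel (an editorial simplification; actual kinetology operates on full 3D motion-capture data with a principled primitive inventory):

    import numpy as np

    def kinetologic_symbols(angle: np.ndarray, dt: float = 0.01) -> str:
        """Toy segmentation + symbolization of a 1-D joint-angle signal.

        Cuts the trajectory wherever the velocity changes sign, then labels
        each segment by direction of motion (U/D) and coarse duration (S/L).
        """
        vel = np.gradient(angle, dt)
        cuts = np.flatnonzero(np.diff(np.sign(vel)) != 0) + 1
        symbols = []
        for seg in np.split(np.arange(len(angle)), cuts):
            if len(seg) < 3:       # skip spurious slivers at the extrema
                continue
            direction = "U" if vel[seg].mean() > 0 else "D"  # up / down
            duration = "L" if len(seg) * dt > 0.4 else "S"   # long / short
            symbols.append(direction + duration)
        return " ".join(symbols)

    t = np.arange(0, 2, 0.01)
    print(kinetologic_symbols(np.sin(2 * np.pi * t)))  # e.g. "US DL UL DL US"
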
The morphology of a human action concerns the inference of the essential parts of the movement (morpho-kinetology) and of its structure (morpho-syntax). To learn the morphemes and their structure, we present a grammatical inference methodology and introduce a parallel learning algorithm that induces a grammar system representing a single action. In practice, morphology is concerned with the construction of a vocabulary of actions, a praxicon.
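
To give the flavor of grammatical inference over symbol strings, here is a toy digram-replacement scheme in the spirit of SEQUITUR (not the parallel grammar-system algorithm presented in the talk; the action names are invented): the most frequent adjacent pair of symbols is repeatedly replaced by a fresh nonterminal, so repeated movement patterns become reusable rules.

    from collections import Counter

    def induce_grammar(tokens: list[str]) -> dict[str, list[str]]:
        """Toy grammatical inference by repeated digram replacement."""
        rules: dict[str, list[str]] = {}
        seq = list(tokens)
        while True:
            pairs = Counter(zip(seq, seq[1:]))
            if not pairs:
                break
            (a, b), n = pairs.most_common(1)[0]
            if n < 2:              # no pair repeats; nothing left to compress
                break
            nt = f"R{len(rules)}"
            rules[nt] = [a, b]
            out, i = [], 0
            while i < len(seq):    # rewrite every occurrence of the pair
                if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                    out.append(nt)
                    i += 2
                else:
                    out.append(seq[i])
                    i += 1
            seq = out
        rules["S"] = seq
        return rules

    # Two repetitions of a reach-grasp-lift pattern:
    print(induce_grammar("reach grasp lift reach grasp lift".split()))
    # e.g. {'R0': ['reach', 'grasp'], 'R1': ['R0', 'lift'], 'S': ['R1', 'R1']}
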
The syntax of human activities involves the construction of sentences from action morphemes. A sentence may range from a single action morpheme (nuclear syntax) to a sequence of sets of morphemes. A single morpheme is decomposed into analogs of lexical categories: nouns, adjectives, verbs, and adverbs. Sets of morphemes represent simultaneous actions (parallel syntax), and a sequence of movements corresponds to the concatenation of activities (sequential syntax). Nuclear syntax, especially adverbs, is related to the motion-interpolation problem; parallel syntax addresses the slicing problem; and sequential syntax is proposed as an alternative approach to the transitioning problem.
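
The compositional layers can be pictured with a simple data structure (hypothetical names, for illustration only):

    # A HAL-style "sentence": a sequence (sequential syntax) of sets of
    # morphemes that are executed simultaneously (parallel syntax).
    activity = [
        {"walk"},           # nuclear syntax: a single action morpheme
        {"walk", "wave"},   # parallel syntax: wave while walking
        {"stop"},           # sequential syntax: concatenation over time
    ]

    # A single morpheme decomposes into analogs of lexical categories:
    morpheme = {
        "noun": "arm",        # the acting body part
        "verb": "reach",      # the movement itself
        "adjective": "bent",  # posture-like qualifier
        "adverb": "slowly",   # manner; ties into motion interpolation
    }
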
Consequences of the framework for surveillance, automatic video annotation, humanoid robotics, and cognitive science will be discussed throughout the talk, whose main theme is that the praxicon and its grammatical structure constitute a new tool for "meaning".