The main focus of our research is skill learning in embodied systems and the development of autonomous intelligent systems that can act in uncertain, unstructured environments. Towards this goal, we study different perceptual-motor learning tasks in simulation and on various robotic platforms (humanoids, manipulators, and mobile robots), using methods from robotics and machine learning, in particular imitation learning, reinforcement learning, and deep learning. We aim to combine data-driven approaches with model-based formulations by exploring the mathematical structure of intelligent systems acting in complex environments in terms of geometry, optimal control, and probability theory. Since human capabilities still far exceed those of artificial systems, we also draw insights for the development of intelligent systems from studies of human motor control, biomimetics, neuroscience, and psychology.

Current research topics:

  • Reinforcement learning
  • Imitation learning
  • Deep learning
  • Computational human motor control