Motivation

The Czech Institute of Informatics, Robotics and Cybernetics (CIIRC) is seeking to appoint a Postdoctoral or PhD Researcher to join the Robotics and Machine Perception group for a funded project called MIRRACLE (Multimodal representation of robotic actions and tasks applied in learning by demonstration). The project is led by CIIRC CTU in Prague (Mgr. Karla Stepanova, Ph.D., karlastepanova.cz) and focuses on learning multimodal representations of single actions and, building on this knowledge, learning an overall task representation. The resulting robotic plans will first be run in simulation and later on a real robot (Franka Emika or KUKA iiwa). The project combines visual processing (object recognition and tracking), robotic control and motion planning, language processing, symbol grounding, task and world representation, and scene understanding.

Project description:

Title: Multimodal representation of robotic actions and tasks applied in learning by demonstration

Project duration: 3 years (7/2021-06/2024)

Project webpage: http://imitrob.ciirc.cvut.cz/mirracle.php

Brief project description: In many robotic applications, robot actions (such as pick, place, push, touch, mount, etc.) are programmed by expert roboticists. These approaches often lack the required robustness and do not generalize to new tasks, environments, and embodiments. When we consider how humans solve dexterous manipulation tasks (e.g., assembling a bookshelf) and navigate their surroundings, we observe that many modalities (vision, proprioception, touch, etc.) are combined to form a robust reactive procedure (Shams & Seitz, 2008). Although there are recent attempts to incorporate two modalities in robotics (e.g., Lee, 2019), these still do not represent actions in a holistic, fully multimodal way, which restricts their applicability in more complex tasks and environments. An open research question is how to combine percepts that come in very different formats, are perceived with variable quality (e.g., vision is less reliable in a dark environment), and are temporally misaligned. Today's mapping algorithms cannot tackle all these challenges and need to be extended or replaced. New algorithms should be able to exploit prior knowledge about actions, modalities, and embodiment to make sense of new incoming percepts and continuously refine the estimate of the world state. In this project, we will focus on creating multimodal representations of robotic actions (e.g., push, pull, screw, touch). We will explore which representations of individual modalities enable the best generalization to new embodiments and environments (e.g., the distance between the end-effector and the object). We will incorporate prior knowledge about the uncertainty of individual domains. The proposed solution should enable the robot to learn individual new actions from observations (either simulated or real) and generalize them to new environments.
With the developed solution, we will target the application area of learning by demonstration, where we will show how our solution enables easier teaching of robots to perform new tasks (e.g., assembly and other industrial tasks) by automatically segmenting the demonstrated action sequence and mapping the segments to previously learned or programmed primitive actions. Verbal descriptions and gestures will also be incorporated.

Conditions:

Requirements: Python; knowledge of ROS is beneficial. Interest and experience in at least one of the following:

  • computer vision: pose estimation, object tracking, etc.
  • language acquisition and symbol grounding
  • generative neural networks (e.g., VAE), modality fusion
  • robotics: motion planning, motion primitives, learning by demonstration

Salary: Postdoc: 600 000 CZK/year + bonuses based on publications (approx. 24 000 EUR/year + bonuses)

PhD: 450 000 CZK/year + bonuses based on publications (approx. 18 000 EUR/year + bonuses)

Starting date: Anytime from July 2021

Duration: Minimum 1 year, maximum 3 years (until June 2024)

Application should contain:

  • CV
  • Postdoc candidates: 3 most relevant journal or top-conference publications; PhD candidates: any relevant publications or former projects
  • Motivation letter

You can apply via e-mail to: imitrob@ciirc.cvut.cz

Group and hosting university, equipment, and cooperating researchers:

Group webpage: http://imitrob.ciirc.cvut.cz

Equipment: During the project we will mainly use the available KUKA and Franka Emika robots, as well as the iCub robot at FEL CTU in Prague, HTC Vive sensors (including a tactile glove), and RGB-D RealSense cameras. Furthermore, our hosting institute provides a large computational cluster and storage capacity, the Testbed for Industry 4.0, several manipulator robots (KUKA, Franka, Motoman, ABB, etc.), mobile robots, Vicon optical tracking, augmented reality equipment, etc.

Hosting university: CIIRC CTU in Prague. The Czech Technical University in Prague (CTU) was founded in 1707, making it one of the oldest technical universities in Europe, and is currently the major technical university in the Czech Republic. Offering high-quality education and a long tradition of cutting-edge science and engineering, CTU counts approx. 1,700 members of academic staff, 18,500 students, 8 faculties, and 5 institutes.

One of the youngest of these, the Czech Institute of Informatics, Robotics, and Cybernetics (CIIRC CTU) (https://www.ciirc.cvut.cz/en/), where the project will be based, was founded in 2013 and began operating in a newly built facility in 2017. CIIRC CTU's research focuses on four basic pillars: industry, energy, smart cities, and healthy society, in both basic and applied research. Transferring technology from academia to practice is an important commitment for CIIRC CTU. The Institute currently has nearly 300 employees working in 8 research departments, complemented by Centers and Testbed units for Industry 4.0. CIIRC conducts excellent research in robotics, artificial intelligence, machine learning, optimization, automated reasoning, machine perception, computer vision, intelligent, distributed, and complex systems, automatic control, computer-aided manufacturing, bioinformatics, biomedicine, and assistive technologies. CIIRC CTU is a founding partner and coordinator of RICAIP, one of the largest running EU projects in the field of AI and Industry 4.0. CIIRC CTU hosts two ERC grant holders in AI and three large Excellent Teams projects. CIIRC's groups have obtained research funding from Amazon, Google, Facebook, Porsche, Skoda Auto, and other companies, and CIIRC teams regularly place high in competitions ranging from the Amazon Alexa Prize to autonomous car competitions and world championships in Automated Reasoning.

Cooperating researchers: Ivana Kruijff-Korbayova (DFKI, Talking Robots group), M. Hoffmann (FEL CTU, Humanoid Robots group), K. Mikolajczyk, Y. Demiris, I. Mansouri (task and motion planning), A. Cangelosi, etc.