
Intelligent Robotics


The Intelligent Robotics group was formed in 2012 when Professor Ville Kyrki joined the faculty. Our research interests include intelligent robotic systems and robotic vision, with a particular emphasis on developing methods and systems that cope with imperfect knowledge and uncertain sensing. Research topics include multi-modal estimation and control for robotics, manipulation under uncertainty, and learning and reasoning in robotics. A variety of mathematical models are applied to help robots make decisions and improve over time.

Research areas

a) Grasp planning under uncertainty

GOAL: Finding a sequence of actions that obtains the most useful information about an object's attributes, in order to maximize the probability of successfully performing a grasp. A probabilistic framework is applied to sensor-based grasping. Both simulations and real experiments are used to demonstrate the viability of the approach.
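As a rough illustration of the underlying idea (not the project's actual formulation), the sketch below maintains a discrete belief over object pose hypotheses, selects the sensing action expected to reduce uncertainty the most, and updates the belief from a noisy observation. The observation model, noise levels, and action names are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypotheses: possible lateral offsets of the object (cm), with a uniform prior.
    poses = np.linspace(-2.0, 2.0, 9)
    belief = np.full(len(poses), 1.0 / len(poses))

    def likelihood(obs, pose, noise):
        # Gaussian observation model: a sensor reading centered on the true pose.
        return np.exp(-0.5 * ((obs - pose) / noise) ** 2)

    def entropy(b):
        b = b[b > 0]
        return -np.sum(b * np.log(b))

    def expected_entropy_after(noise):
        # Expected posterior entropy if we sense with the given noise level,
        # approximating each hypothesis by its most likely observation.
        exp_h = 0.0
        for true_pose, p in zip(poses, belief):
            post = belief * likelihood(true_pose, poses, noise)
            post /= post.sum()
            exp_h += p * entropy(post)
        return exp_h

    # Pick the sensing action expected to leave the least uncertainty.
    actions = {"tactile probe": 0.2, "camera glance": 0.8}   # noise in cm
    best = min(actions, key=lambda a: expected_entropy_after(actions[a]))

    # Execute it against the (hidden) true pose and update the belief.
    true_pose = 1.0
    obs = true_pose + rng.normal(0.0, actions[best])
    belief *= likelihood(obs, poses, actions[best])
    belief /= belief.sum()
    print(f"chose '{best}'; belief peaks at {poses[belief.argmax()]:+.1f} cm")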

INVESTIGATORS: Ekaterina Nikandrova, Ville Kyrki.

b) Symbol grounding from uncertain measurements


GOAL: Learning concepts from experience. In order for a robot to perform tasks such as cleaning up a table after dinner, it has to be able to make decisions based on higher-level abstract concepts. However, in practice, robots observe things at a low abstraction level (for example, as noisy digital images). In this project, we automatically find abstract concepts that are relevant to the task at hand.
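As a toy illustration of grounding symbols in noisy measurements, the sketch below clusters low-level observations (invented 1-D "object size" readings) into discrete concepts with a simple k-means; the cluster indices then act as task-level symbols. The feature, data, and concept labels are assumptions for illustration only.

    import numpy as np

    rng = np.random.default_rng(1)

    # Noisy low-level observations of object size (cm) from two latent concepts.
    sizes = np.concatenate([rng.normal(5.0, 0.8, 50),     # e.g. cups
                            rng.normal(20.0, 2.0, 50)])   # e.g. plates

    def kmeans_1d(x, k=2, iters=20):
        centers = rng.choice(x, size=k, replace=False)
        for _ in range(iters):
            labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
            centers = np.array([x[labels == j].mean() if np.any(labels == j)
                                else centers[j] for j in range(k)])
        return centers, labels

    centers, labels = kmeans_1d(sizes)
    # The cluster indices now serve as discrete, task-relevant symbols
    # ("small", "large") grounded in the continuous, noisy measurements.
    print("concept prototypes (cm):", np.sort(centers).round(1))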

INVESTIGATORS: Joni Pajarinen, Ville Kyrki. The project is financed by the Academy of Finland (Suomen Akatemia).

c) Multi-robot communication through embodiment

MOTIVATION: Heterogeneous multi-robot systems hold promise for achieving robustness, by leveraging the complementary capabilities of different agents, and efficiency, by allowing sub-tasks to be completed by the most suitable agent. A key challenge is that the agent composition of current multi-robot systems must be fixed and pre-defined. Moreover, the coordination of heterogeneous multi-agent systems has not been considered in manipulation scenarios. In the RECONFIG project, we propose a reconfigurable and adaptive decentralized coordination framework for systems of multiple heterogeneous multi-DOF robots.

GOAL: One of the project's objectives is to develop a robot-to-robot symbolic-level communication system for exchanging object information in multi-robot collaboration scenarios. More specifically, we are developing visual observation methods that first detect other agents using knowledge of their motion state, and then identify target objects through motion-assisted segmentation of a 3-D point cloud. In other words, a robot-to-robot gesture detection system will support the implicit communication of robotic agents, allowing them to broadcast information about objects of interest in the robotic environment.
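A minimal sketch of the motion-assisted segmentation step is given below, assuming two aligned point-cloud frames with point-to-point correspondence (e.g. from an organized depth image); points that move more than a threshold between frames are segmented as the object of interest. The synthetic data and threshold are illustrative, not the project's implementation.

    import numpy as np

    rng = np.random.default_rng(2)

    # Two consecutive frames as (N, 3) arrays with point-to-point correspondence.
    background = rng.uniform(0.0, 1.0, size=(1000, 3))               # static scene (m)
    frame_a = background.copy()
    frame_b = background + rng.normal(0.0, 0.002, background.shape)  # sensor noise

    # Simulate a 50-point object displaced 5 cm between the frames.
    frame_b[:50] += np.array([0.05, 0.0, 0.0])

    # Segment the points whose displacement exceeds a motion threshold.
    displacement = np.linalg.norm(frame_b - frame_a, axis=1)
    moving = frame_b[displacement > 0.02]                            # 2 cm threshold
    print(f"segmented {len(moving)} moving points out of {len(frame_b)}")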

INVESTIGATORS: Polychronis Kondaxakis, Ville Kyrki. The project is funded by the EU FP7 project “Cognitive, Decentralized Coordination of Heterogeneous Multi-Robot Systems via Reconfigurable Task Planning” (RECONFIG).

d) Multi-modal robot programming by demonstration for in-contact tasks

MOTIVATION: Relatively low-cost, state-of-the-art, over-actuated industrial-quality robotic platforms endowed with significant proprioceptive capacity (e.g. force/torque sensing at each joint, wrist and hand) are making new application areas possible. Currently, however, exploiting such advances tends to entail high engineering costs, particularly for integration and development in production lines. In other words, these new robots require balanced coupling with comparably efficient and cost-effective software methods.

GOAL: This project aims to create programming interfaces for robotic systems that are natural and intuitive to human users. The robot will learn in-contact tasks directly from human demonstrators (Programming by Demonstration). By in-contact tasks we mean tasks in which success depends on skillfully distributing mechanical forces, in space and time, at the interface between the arm and the material, with or without tools (e.g. deburring weld lines or shaping clay pottery). The learning process will exploit the multiple sensory modalities typical of human communication (e.g. tactile/force, audio/speech and vision) in order to facilitate more natural and effective human-robot interaction. More broadly, we believe that robotic systems with enhanced proprioceptive capacities can be effective for transferring physical skills requiring non-trivial sensorimotor coordination from humans to robots.
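As a small illustration of one ingredient of such learning, the sketch below averages several demonstrated normal-force trajectories (already time-aligned to a common task phase) into a reference profile that a force controller could track; low variability across demonstrations hints where force accuracy is task-critical. Real systems use richer encodings (e.g. dynamic movement primitives); the data here is synthetic.

    import numpy as np

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 100)             # normalized task phase

    # Three noisy human demonstrations of a press-and-release force profile (N).
    demos = [10.0 * np.sin(np.pi * t) + rng.normal(0.0, 0.5, t.size)
             for _ in range(3)]

    reference = np.mean(demos, axis=0)         # mean force profile to track
    spread = np.std(demos, axis=0)             # low spread = force is task-critical

    peak = reference.argmax()
    print(f"peak reference force: {reference[peak]:.1f} N "
          f"(spread at peak: {spread[peak]:.2f} N)")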

SETUP: KUKA LWR 4+, BH8-282 BarrettHand (for further information, see section "Our robots" at the bottom of this page).

INVESTIGATORS: Alberto Montebelli, Ville Kyrki. The project is financed by the Academy of Finland (Suomen Akatemia).


Figure: Molding clay on a rotating pottery wheel, an example of an in-contact task.


Our robots

Kinova JACO robotic arm

JACO, Kinova’s Advanced Manipulator Arm, is a versatile 6-degree-of-freedom robotic arm that can lift objects of up to 1.5 kg and has a reach of 90 cm. The gripper consists of 3 individually controlled underactuated fingers, which allow a variety of objects to be handled easily and safely.

Nao humanoid robot

NAO is a programmable, 57-cm-tall humanoid robot developed by Aldebaran Robotics. Its body has 25 degrees of freedom. It is equipped with a sensor network that includes 2 cameras, 4 microphones, a sonar rangefinder, 2 IR emitters and receivers, an inertial board, 9 tactile sensors, and 8 pressure sensors.

KUKA LWR 4+

The KUKA LWR 4+ is a lightweight 7-degree-of-freedom serial robotic arm with a payload of 7 kg and a reach of 800 mm. The robot features integrated torque sensors at each joint, programmable active compliance, torque control, and gravity compensation. Its control cycle runs at up to 1 kHz.
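For intuition, a minimal sketch of the joint impedance control law behind such programmable active compliance is shown below: each joint is pulled toward its target by a virtual spring-damper, with gravity compensated. The gains and the zeroed gravity term are illustrative placeholders, not KUKA's actual controller or API.

    import numpy as np

    def impedance_torque(q, dq, q_des, K, D, g):
        # tau = K (q_des - q) - D dq + g(q), evaluated once per control cycle.
        return K @ (q_des - q) - D @ dq + g

    n = 7                                      # joints of the LWR 4+
    K = np.diag(np.full(n, 100.0))             # virtual stiffness (N·m/rad)
    D = np.diag(np.full(n, 10.0))              # damping (N·m·s/rad)
    q = np.zeros(n)                            # measured joint positions (rad)
    dq = np.zeros(n)                           # measured joint velocities (rad/s)
    q_des = np.full(n, 0.1)                    # target joint positions (rad)
    g = np.zeros(n)                            # placeholder gravity model g(q)

    tau = impedance_torque(q, dq, q_des, K, D, g)
    print("commanded joint torques (N·m):", tau.round(1))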

BH8-282 BarrettHand

The BH8-282 BarrettHand is a lightweight 3-fingered programmable grasper with a payload of 6 kg. Its fingers can be dynamically reconfigured, allowing selection among multiple grasping modalities. Each finger has two joints, one motor, and one torque sensor. Tactile sensors measure local pressure at the palm and fingertips.