SpacePartner Project Related Videos

2011 SpacePartner project summary video

This video summarizes the research project SpacePartner, which was active from 2008 until the end of 2011, i.e. four years. The video presents the research problem, the hypothesis, and the user experiments that were performed.

2011 SpacePartner project final demonstration at Aalto university

This video demonstrates the research done in the SpacePartner project, primarily how tasks can be communicated using affordances. The demonstration takes place in an a priori known environment, but with unexpected events present that require the robot to perform new task sequences. In addition, the WorkPartner robot's capability to perform cooperative tasks autonomously is demonstrated.

An astronaut and the WorkPartner robot work together to attach a module into its socket. The WorkPartner robot does most of the heavy work, while the human performs the tasks requiring cognitive capabilities and dexterous manipulation. The human communicates tasks with speech only.

2011 Indirect Human-Robot Task Communication Using Affordances - Unambiguous environment case

This video shows a participant performing a task-requesting user experiment in an environment where each action is associated with exactly one object, and vice versa. The participant can request tasks in two different ways: directly and indirectly. The idea of the experiment is to compare these two ways of requesting tasks with regard to task communication workload, usage preferences, and task communication times.

With a direct task request, the participant is required to state both the action and the object name, e.g. "reset radio". With an indirect task request, the participant can request a task using only the task-related object name. The robot has a database of the actions that can be performed with different objects, and it uses this database to infer the intended task request.
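As a rough illustration of this idea, the following Python sketch resolves direct and indirect requests against such an object-to-action database. The object and action names here are made up for the example and are not the ones used in the experiment:

```python
# Minimal sketch of task interpretation in an unambiguous environment:
# every object affords exactly one action, so the object name alone
# identifies the task. Object/action names are hypothetical.
AFFORDANCES = {
    "radio": ["reset"],
    "valve": ["close"],
    "lamp":  ["switch on"],
}

def interpret_request(utterance):
    """Map a recognized utterance to a full task request, or None."""
    words = utterance.lower().split()
    # Direct request: both action and object were spoken, e.g. "reset radio".
    for obj, actions in AFFORDANCES.items():
        for action in actions:
            if action in utterance.lower() and obj in words:
                return f"{action} {obj}"
    # Indirect request: object name only; unambiguous, so pick its one action.
    for word in words:
        if word in AFFORDANCES and len(AFFORDANCES[word]) == 1:
            return f"{AFFORDANCES[word][0]} {word}"
    return None  # could not interpret the request

print(interpret_request("radio"))        # -> "reset radio"
print(interpret_request("reset radio"))  # -> "reset radio"
```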

The speech recognition is done with the open-source CMU Sphinx 2 software. The user wears a headset for the speech interface. The robot is controlled with the GIM/MaCI middleware. Between tasks, the user calculates multiplications as a secondary task.

The user experiment result: indirect task communication decreased the human's task communication workload and the task communication times, and it was also the users' preferred way to communicate tasks.

This video is related to the paper "Indirect Human-Robot Task Communication Using Affordances" (http://vapahtaja.com/papers/Heikkila_Indirect_2011.pdf), which was presented at the 20th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). The paper won the best paper award at the conference.

2011 Indirect Human-Robot Task Communication Using Affordances - Ambiguous environment case

This video shows a participant performing a task-requesting user experiment in an environment where each action is usually related to several objects, and vice versa. The participant can always request a task using the task-related object and action names, e.g. "pickup wrench". If the participant does not remember this full task request, there are two different ways to complete it: direct and indirect. The idea of the experiment is to compare these two ways of requesting tasks with regard to task communication workload, usage preferences, and task communication times.

With a direct task request, the participant can ask the robot for the list of objects it knows (by saying "objects") or for the list of actions it can perform with a certain object (e.g. "actions with wrench"). The participant is still required to request the full task with both the action and the object name, e.g. "pickup wrench".

With an indirect task request, the participant can request a task using only the task-related object or action name. The robot predicts the most likely task request using knowledge of previous tasks; the assumption is that in 75% of the cases tasks are performed in recurring or a priori known sequences. If the prediction is correct, the participant confirms it by answering "yes". If the prediction is wrong, the participant answers "no" and the robot offers the second most likely task. If this is also incorrect, the robot gives the participant a list of possible objects and actions related to the request, somewhat similarly to the direct case.
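A minimal sketch of this confirm-or-fall-back dialogue, with a simple made-up bigram frequency model of task sequences standing in for the paper's actual prediction method:

```python
from collections import Counter

class TaskPredictor:
    def __init__(self):
        # Bigram counts: how often task B followed task A previously.
        self.bigrams = Counter()
        self.last_task = None

    def observe(self, task):
        if self.last_task is not None:
            self.bigrams[(self.last_task, task)] += 1
        self.last_task = task

    def ranked_candidates(self, partial, known_tasks):
        """Rank full tasks matching the partial request (an object or
        action name) by how often they followed the previous task."""
        matches = [t for t in known_tasks if partial in t.split()]
        matches.sort(key=lambda t: -self.bigrams[(self.last_task, t)])
        return matches

KNOWN_TASKS = ["pickup wrench", "give wrench", "pickup hammer"]

predictor = TaskPredictor()
# Pretend "give wrench" usually follows "pickup wrench".
predictor.observe("pickup wrench")
predictor.observe("give wrench")
predictor.last_task = "pickup wrench"

def request(partial, answers):
    """Simulate the dialogue: 'answers' are the user's yes/no replies."""
    candidates = predictor.ranked_candidates(partial, KNOWN_TASKS)
    for guess, answer in zip(candidates[:2], answers):
        print(f"Robot: did you mean '{guess}'?")
        if answer == "yes":
            return guess
    # Two wrong guesses: list the options, as in the direct case.
    print("Robot: possible tasks:", candidates)
    return None

request("wrench", ["no", "yes"])  # second prediction accepted
```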

The user experiment result: indirect task communication decreased the human's task communication workload and the task communication times, and it was also the users' preferred way to communicate tasks.

This video is related to the paper "Indirect Human-Robot Task Communication Using Affordances" (http://vapahtaja.com/papers/Heikkila_Indirect_2011.pdf), which was presented at the 20th IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN). The paper won the best paper award at the conference.

2010 Geological Astronaut-Robot Cooperative Exploration with Autonomous SpacePartner Robot

The idea of the test was to compare different speech-based task communication methods with the WorkPartner robot. This video demonstrates the different requests the test persons were able to communicate to the robot (a toy interpreter for this vocabulary is sketched after the list):

  • stop
  • Wopa
  • follow
  • analyse rock/analyse/rock
  • setup unit/setup/unit
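As a toy illustration only: the real system used a CMU Sphinx II speech grammar, while this hypothetical mapping of the alternative phrasings to robot commands simply shows how the shortened forms resolve to the same task:

```python
# Toy interpreter for the small command vocabulary used in the test.
# Command identifiers are made up for this sketch.
COMMANDS = {
    "stop": "STOP",
    "wopa": "ATTENTION",          # calling the robot by name
    "follow": "FOLLOW_ASTRONAUT",
    "analyse rock": "ANALYSE_ROCK",
    "analyse": "ANALYSE_ROCK",
    "rock": "ANALYSE_ROCK",
    "setup unit": "SETUP_UNIT",
    "setup": "SETUP_UNIT",
    "unit": "SETUP_UNIT",
}

def parse(utterance):
    """Return the command for a recognized phrase, or None."""
    return COMMANDS.get(utterance.strip().lower())

assert parse("Rock") == "ANALYSE_ROCK"
assert parse("setup unit") == "SETUP_UNIT"
```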

The robot's speed was limited to 15% of its actual maximum in order to guarantee the test persons' safety during the tests.

The dialogue in the video goes as follows:

  • Astronaut: Stop.
  • Robot: Stopping.
  • Astronaut: Wopa (the name of the robot).
  • Robot: Yes sire.
  • Astronaut: Follow.
  • Robot: Following the astronaut.
  • Astronaut: Rock. (Requesting the robot to analyse the rock).
  • Robot: Analysing the rock.
  • Astronaut: Setup (Requesting the robot to setup a measurement unit).
  • Robot: Setting up the unit.

The tests were done at Aalto University in May-June 2010. The speech recognition was done with CMU Sphinx II, pointing at objects was inferred from the human's centre of mass, and human localisation was done with a SICK laser rangefinder mounted on the robot's chest.
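A rough sketch of centroid-based human localisation from a single 2D laser scan, assuming the human shows up as the largest cluster of nearby range readings; the thresholds are illustrative, not the project's actual values:

```python
import math

def human_position(ranges, angle_min, angle_step, max_range=5.0, gap=0.3):
    """Rough human localisation from one 2D laser scan: keep readings
    closer than max_range, split them into clusters at range jumps
    larger than `gap` metres, and return the centroid of the largest
    cluster as the human's position estimate."""
    points = []
    for i, r in enumerate(ranges):
        if r < max_range:
            a = angle_min + i * angle_step
            points.append((r * math.cos(a), r * math.sin(a), r))
    clusters, current, prev_r = [], [], None
    for x, y, r in points:
        if prev_r is not None and abs(r - prev_r) > gap:
            clusters.append(current)
            current = []
        current.append((x, y))
        prev_r = r
    clusters.append(current)
    best = max(clusters, key=len)
    if not best:
        return None  # nothing within range
    cx = sum(p[0] for p in best) / len(best)
    cy = sum(p[1] for p in best) / len(best)
    return cx, cy
```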

2010 SpacePartner Robot Gravity Compensated Manipulators

The WorkPartner service robot with friction- and gravity-compensated manipulators. The motor controllers used are Elmo Whistle 5/60. There are five degrees of freedom (DOF) in each arm and two DOF in the body.
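As a worked illustration of what gravity compensation computes, below are the textbook gravity torque terms for a two-link planar arm; the masses and link dimensions are placeholders, not WorkPartner's actual parameters:

```python
import math

def gravity_torques(q1, q2, m1=2.0, m2=1.5, l1=0.4, lc1=0.2, lc2=0.2, g=9.81):
    """Gravity torque terms for a two-link planar arm, with joint angles
    measured from the horizontal. Feeding these torques to the joint
    motors cancels the arm's weight, so it feels weightless to move.
    Parameter values are illustrative only."""
    tau1 = (m1 * lc1 + m2 * l1) * g * math.cos(q1) \
           + m2 * lc2 * g * math.cos(q1 + q2)
    tau2 = m2 * lc2 * g * math.cos(q1 + q2)
    return tau1, tau2

# Example: a fully horizontal arm (q1 = q2 = 0) gives the largest load.
print(gravity_torques(0.0, 0.0))
```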

2009 SpacePartner Spatial Information Interface Test

Spatial Information Interface test. The user asks where an object is located, and the robot replies using speech, a virtual environment model (shown at the top left), and the robot's camera image (shown at the top right).

The ball is recognized using OpenCV ellipse fitting. The speech commands are recognized using the CMU Sphinx-2 speech recognition system. The virtual environment model uses the Open Dynamics Engine (ODE) based SimPartner software. The speech responses are generated using the Festival speech synthesis system.
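A minimal sketch of this kind of colour-segmentation-plus-ellipse-fitting pipeline, written against the modern OpenCV Python API (the project used the earlier C-based API); the colour threshold values are placeholders:

```python
import cv2

def find_ball(image_bgr):
    """Detect a roughly elliptical colour blob and fit an ellipse to it.
    The HSV threshold below is a placeholder for the ball's colour."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (20, 100, 100), (35, 255, 255))  # placeholder
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best = None
    for c in contours:
        if len(c) >= 5:  # cv2.fitEllipse needs at least 5 points
            (cx, cy), (w, h), angle = cv2.fitEllipse(c)
            if best is None or w * h > best[1][0] * best[1][1]:
                best = ((cx, cy), (w, h), angle)
    return best  # ((centre_x, centre_y), (width, height), angle) or None
```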

The scenario steps are basically as follows (a minimal sketch of the dialogue logic appears after the list):

1. The user gives a speech request: "Where is ITEM?". The ITEMs used here are ball, box, and hammer.

2. The robot responds with one of: a) "ITEM is in LOCATION." b) "I don't know where ITEM is." c) "I did not understand, can you repeat?"

3. The robot points out the ITEM in the camera image and in the virtual environment model.
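A minimal sketch of this where-is dialogue logic, with made-up item locations and a hypothetical recognition-confidence check (the real system combined Sphinx-2 recognition, the ODE world model, and Festival speech synthesis):

```python
# Item locations are made up for the example; "hammer" is a known item
# whose location is deliberately missing from the world model.
KNOWN_LOCATIONS = {"ball": "on the table", "box": "on the floor"}
KNOWN_ITEMS = {"ball", "box", "hammer"}

def respond(recognized_item, confidence, threshold=0.5):
    """Choose one of the three response types from the scenario steps."""
    if confidence < threshold or recognized_item not in KNOWN_ITEMS:
        return "I did not understand, can you repeat?"
    if recognized_item in KNOWN_LOCATIONS:
        return f"The {recognized_item} is {KNOWN_LOCATIONS[recognized_item]}."
    return f"I don't know where the {recognized_item} is."

print(respond("ball", 0.9))    # -> location answer
print(respond("hammer", 0.9))  # -> unknown location
print(respond("ball", 0.2))    # -> did not understand
```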

2008 SimPartner - Dynamic Real-time Rigid-body Mobile Robot Simulator

A simulated WorkPartner robot lifting a bar from one place to another.

Simulated with the ODE (Open Dynamics Engine) based SimPartner simulator. All the manipulator joint angles are PID controlled. The wheels are controlled simply by setting them to a fixed speed for the required period.
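As an illustration of the joint-angle control, a textbook PID loop; the gains and timestep are placeholders, not SimPartner's actual values:

```python
class PID:
    """Textbook PID controller for one joint angle."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, target, measured):
        error = target - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Each simulation step, the controller output would be applied as a
# joint torque (or velocity command) in the ODE world.
pid = PID(kp=40.0, ki=0.5, kd=2.0, dt=0.01)  # placeholder gains
torque = pid.update(target=1.0, measured=0.8)
```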