Neural sensorimotor processing and robotics: biology and technology
We have concentrated on sensory cues such as optical flow and stereo. These cues are used extensively in everyday tasks by organisms ranging from flies and other insects to primates. The geometric information available in these cues is necessary for tasks such as navigation and grasping. The same image cues are also useful for segmentation and tracking, and in the case of the optical flow cue, detected movement can additionally be used to direct attention (and processing) to changes in the environment.
A great deal of research has been done on optical flow and stereo vision in the computer vision community. Current computing power makes it possible to run many optical flow and stereo algorithms in real time, and hence to use them in robot control. We feel that this is where the challenge lies. The main problems are extracting information useful for control from the wealth of data available and dealing with the ambiguity and uncertainty that are inherent in interpreting visual information.
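As an illustration of the kind of stereo algorithm referred to above, the following is a minimal sketch of classical sum-of-squared-differences (SSD) block matching on rectified grayscale images. It is not the project's actual implementation; the function name and parameters are illustrative.

```python
import numpy as np

def disparity_ssd(left, right, block=3, max_disp=8):
    """Brute-force SSD block matching on rectified grayscale images.

    For each pixel in the left image, search up to `max_disp` pixels
    leftward in the right image and keep the shift with the lowest
    sum of squared differences over a `block`-sized window.
    """
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1]
                ssd = np.sum((patch.astype(np.float64) - cand) ** 2)
                if ssd < best:
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp
```

Real-time variants replace the inner loops with vectorized or hardware-accelerated matching, but the cost function is the same.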
Our solution, in line with biological systems, is to minimize the uncertainty by fusing information from several sensory cues. Another research theme is to estimate the uncertainty in stereo disparity and/or optical flow computations and to use it in formulating the control action.
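The cue-fusion idea can be sketched as inverse-variance weighting: independent estimates of the same quantity (say, depth from stereo and depth from flow) are combined so that the less uncertain cue dominates. The function name and the numbers in the usage note are illustrative, not taken from the project.

```python
import numpy as np

def fuse_estimates(values, variances):
    """Combine independent Gaussian estimates by inverse-variance weighting.

    The result is the minimum-variance unbiased combination of the cues;
    the fused variance is never larger than the smallest input variance.
    """
    values = np.asarray(values, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                      # precision of each cue
    fused = np.sum(w * values) / np.sum(w)   # precision-weighted mean
    fused_var = 1.0 / np.sum(w)              # fused variance
    return fused, fused_var
```

For example, fusing a stereo depth of 2.0 m (variance 0.04) with a flow-based depth of 2.4 m (variance 0.16) yields an estimate of 2.08 m with variance 0.032, closer to the more certain cue.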
To make use of the wealth of data available, one has to concentrate on the portion of the data that is useful for control. This is partly done here by choosing the particular visual cues and by focusing attention, as biological organisms do. The processing can be concentrated further by taking more lessons from biological systems, such as the fact that in the primate visual system a large proportion of higher-level motion-processing cells are tuned to the divergent optical flow fields generated by forward motion. While many of these heuristics can be hand-tailored to robotic systems, it seems that to utilize them fully they have to be learned from interaction with the environment, again as suggested by biological systems.
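The divergent-field heuristic mentioned above can be made concrete with a small sketch: the divergence of a dense flow field is positive for the expanding fields produced by forward ego-motion. This is an illustrative computation, not the project's detector.

```python
import numpy as np

def flow_divergence(u, v):
    """Divergence of a dense 2-D flow field (u, v): du/dx + dv/dy.

    Expanding (divergent) fields, as produced by forward ego-motion,
    give positive divergence; contracting fields give negative values.
    """
    du_dx = np.gradient(u, axis=1)  # horizontal rate of change of u
    dv_dy = np.gradient(v, axis=0)  # vertical rate of change of v
    return du_dx + dv_dy
```

A pure radial expansion about the image center, u = x - cx and v = y - cy, has divergence 2 everywhere, so even the mean divergence of a noisy field is a cheap cue for forward motion.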
We have implemented several optical flow, stereo vision, and reinforcement learning algorithms and tested them on "toy problems". The remaining project time will be used to test these implementations on real robotic platforms and to continue algorithmic development based on the results obtained. We also consider the problem of fusing sensory information across sensory modalities, the particular case being the fusion of visual information with range information from sonars and laser scanners.
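Cross-modal fusion of this kind can be sketched as a scalar Kalman measurement update: a depth estimate from vision is refined by a sonar or laser range reading, weighted by their respective uncertainties. The function and the numbers below are illustrative assumptions, not the project's actual filter.

```python
def kalman_update(x, P, z, R):
    """One scalar Kalman measurement update.

    x, P: prior estimate and its variance (e.g. depth from vision);
    z, R: new measurement and its variance (e.g. a sonar range reading).
    """
    K = P / (P + R)          # Kalman gain: trust z more when R is small
    x_new = x + K * (z - x)  # pull the estimate toward the measurement
    P_new = (1 - K) * P      # fused variance always shrinks
    return x_new, P_new
```

With a visual depth of 3.0 m (variance 0.5) and a sonar reading of 2.5 m (variance 0.5), the gain is 0.5 and the fused estimate is 2.75 m with variance 0.25; repeated updates across modalities keep shrinking the uncertainty.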
Academy of Finland