Robot Learning



Learning and adaptation are essential capabilities for robots operating in dynamic and partially unknown environments. The properties of such environments are not known in advance, so they cannot be modelled correctly beforehand, and the information the robot gathers about them is likely to contain noise and ambiguity. Mobile robot learning and adaptation is a research area that addresses the problems of moving robots in complex real-world environments. It comprises several difficult research problems: how to process sensor information, how a moving robot can localize itself, how to build a map of the environment under these conditions, how to plan paths to targets and follow them while avoiding unexpected obstacles, how to make decisions, and how to learn when the environment changes unexpectedly.

In our research we have focused on path planning and learning strategies for a robot working in hazardous environments under time constraints, where the main objective is to fulfill an assigned mission. This setting raises several new issues in robot learning and adaptation. In time-restricted, mission-oriented learning problems, exploration, environment modelling and path planning are not goals in themselves but are useful only insofar as they help to complete the mission. The robot should therefore choose a decision-making strategy that gains only knowledge that is useful during the mission time and that takes into account the hazards of exploring unknown areas. We test and evaluate our decision-making strategies with the Khepera mini-robot, manufactured by K-Team, in a model environment. The model environment lets us set up controlled experiments, change environmental conditions with little effort, gather and record large amounts of experimental data in a short time, verify the results, and generalize about path-planning and learning strategies. We have reported new strategies for mobile robot learning over repeated missions in the presence of hazards and changing environmental conditions. Currently we are testing strategies for time-critical mobile robot missions.
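
To make the idea of mission-oriented, hazard-aware path planning concrete, the sketch below shows a minimal grid-based planner in Python that weighs path length against an estimated hazard level and a penalty for entering unexplored cells, under a fixed step budget. It is only an illustration under assumed cost weights and grid representations, not the planner or the strategies reported in our publications; the function and parameter names (plan_path, hazard_weight, unknown_weight, time_budget) are hypothetical.

    # Illustrative sketch only: a grid planner trading off path length, hazard
    # risk, and the uncertainty of unexplored cells under a mission step budget.
    # The weights and grid representation are assumptions, not the lab's method.
    import heapq

    def plan_path(grid, start, goal, hazard, explored,
                  hazard_weight=5.0, unknown_weight=2.0, time_budget=50):
        """A*-style search on a 4-connected grid.

        grid[r][c]    : 1 if the cell is traversable, 0 if it is a known obstacle
        hazard[r][c]  : estimated hazard level in [0, 1]
        explored[r][c]: True if the cell has already been observed
        Returns a list of cells from start to goal, or None if no path is found
        within the step budget (one step is counted as one time unit).
        """
        rows, cols = len(grid), len(grid[0])

        def step_cost(cell):
            r, c = cell
            cost = 1.0 + hazard_weight * hazard[r][c]
            if not explored[r][c]:          # penalize entering unknown territory
                cost += unknown_weight
            return cost

        def heuristic(cell):                # Manhattan-distance lower bound
            return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

        frontier = [(heuristic(start), 0.0, 0, start, [start])]
        best = {}                           # cheapest cost seen per cell
        while frontier:
            _, cost, steps, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            if steps >= time_budget or best.get(cell, float("inf")) <= cost:
                continue
            best[cell] = cost
            r, c = cell
            for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc]:
                    nxt = (nr, nc)
                    ncost = cost + step_cost(nxt)
                    heapq.heappush(frontier, (ncost + heuristic(nxt), ncost,
                                              steps + 1, nxt, path + [nxt]))
        return None

    if __name__ == "__main__":
        free = [[1] * 5 for _ in range(5)]
        risk = [[0.0] * 5 for _ in range(5)]
        risk[2][2] = 0.9                    # one hazardous cell in the middle
        seen = [[True] * 5 for _ in range(5)]
        print(plan_path(free, (0, 0), (4, 4), risk, seen))

With these assumed weights the planner detours around the hazardous cell when the detour is cheap and avoids unexplored cells unless crossing them saves enough steps, which is the trade-off described above in simplified form.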

People

  • Maarja Kruusmaa
  • Kristo Heero, Cybernetica Ltd.
  • Jan Willemson, Institute of Computer Science, University of Tartu
  • Adam Eppendahl, University of Malaya
  • Yuri Gavshin, Institute of Computer Science, University of Tartu

Link to our publications