DFKI-LT - Curiosity-Driven Acquisition of Sensorimotor Concepts Using Memory-Based Active Learning
Proceedings of the 2008 IEEE International Conference on Robotics and Biomimetics
Operating in real-world environments, a robot must continuously learn from its experience to update and extend its knowledge. This paper focuses on the specific problem of how a robot can efficiently select information that is "interesting", driving the robot's "curiosity". It investigates the hypothesis that curiosity can be emulated through a combination of active learning and reinforcement learning with intrinsic and extrinsic rewards: intrinsic rewards quantify learning progress, providing a measure of the "interestingness" of observations, while extrinsic rewards direct learning through the robot's interactions with the environment and other agents. The paper describes the approach and experimental results obtained in simulated environments. The results indicate that both intrinsic and extrinsic rewards improve learning progress, measured as the number of training cycles needed to reach a goal. The approach extends previous work on curiosity-driven learning by including both intrinsic and extrinsic rewards and by considering more complex sensorimotor input.
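The abstract's core idea, an intrinsic reward derived from learning progress that is blended with an extrinsic task reward and used to actively select "interesting" observations, can be sketched as follows. This is a minimal illustration of the general technique, not the authors' implementation; the class name, the `beta` weighting parameter, and the per-region error bookkeeping are all illustrative assumptions.

```python
class CuriosityLearner:
    """Sketch of curiosity-driven reward shaping: intrinsic reward =
    learning progress (decrease in prediction error), combined with an
    extrinsic task reward. Illustrative only, not the paper's system."""

    def __init__(self, beta=0.5):
        self.beta = beta      # weight between intrinsic and extrinsic reward
        self.prev_error = {}  # last prediction error seen per sensorimotor region

    def intrinsic_reward(self, region, error):
        """Learning progress: how much the prediction error for this
        region decreased since it was last visited."""
        progress = self.prev_error.get(region, error) - error
        self.prev_error[region] = error
        return max(progress, 0.0)  # only reward improvement

    def total_reward(self, region, error, extrinsic):
        """Blend the intrinsic (curiosity) and extrinsic (task) rewards."""
        r_int = self.intrinsic_reward(region, error)
        return self.beta * r_int + (1.0 - self.beta) * extrinsic

    def select_region(self, candidates):
        """Active learning step: prefer the candidate region with the
        largest expected learning progress (the most 'interesting' one);
        unseen regions score infinitely high, favoring novelty."""
        # candidates: iterable of (region, current_error) pairs
        return max(candidates,
                   key=lambda rc: self.prev_error.get(rc[0], float("inf")) - rc[1])
```

A run of `intrinsic_reward` on a shrinking error yields a positive reward only while the learner is still improving; once the error plateaus, the intrinsic reward decays to zero and the extrinsic term dominates, which is one common way such blended-reward schemes are arranged.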