Abstract
This thesis presents a robot agent that learns to exploit objects in its
environment as tools, allowing it to solve problems that would otherwise be
impossible. Our agent learns by watching a single demonstration of tool use by
a teacher, and then by experimenting in the world with a variety of available
tools. Our approach emphasises learning tool use in a relational context,
and our agent generalises across objects and tasks to learn the spatial
and structural constraints that describe useful tools and how they should be
employed.
Two learning mechanisms achieve this: learning by explanation and learning
by trial-and-error. A form of explanation-based learning identifies the most
important sub-goals the teacher achieved by using the tool. The action model
constructed via this explanation is then refined through trial-and-error
experimentation and a novel Inductive Logic Programming (ILP) algorithm.