Record Details

New learning modes for sequential decision making

ScholarsArchive at Oregon State University

Field Value
Title New learning modes for sequential decision making
Names Judah, Kshitij (creator)
Fern, Alan P. (advisor)
Date Issued 2014-03-21 (iso8601)
Note Graduation date: 2014
Abstract This thesis considers the problem in which a teacher is interested in teaching action
policies to computer agents for sequential decision making. The vast majority of policy
learning algorithms offer teachers little flexibility in how policies are taught. In particular,
one of two learning modes is typically considered: 1) Imitation learning, where
the teacher demonstrates explicit action sequences to the learner, and 2) Reinforcement
learning, where the teacher designs a reward function for the learner to autonomously
optimize via practice. This is in sharp contrast to how humans teach other humans,
where many other learning modes are commonly used besides imitation and practice.
This thesis presents novel learning modes for teaching policies to computer agents, with
the eventual aim of allowing human teachers to teach computer agents more naturally
and efficiently.
Our first learning mode is inspired by how humans learn: through rounds of practice
followed by feedback from a teacher. We adopt this mode to create computer agents that
learn from several rounds of autonomous practice followed by critique feedback from a
teacher. Our results show that this mode of policy learning is more effective than pure
reinforcement learning, though important usability issues arise when used with human teachers.
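The practice-then-critique loop described above can be sketched in miniature. This is a toy illustration, not the thesis's actual algorithm: the agent practices by sampling actions, a hypothetical teacher marks each practiced action good or bad, and the critique updates action preferences.

```python
import random

def teacher_critique(actions, good_action):
    # Hypothetical teacher: +1 for the desired action, -1 for anything else.
    return [(a, 1 if a == good_action else -1) for a in actions]

def practice_then_critique(n_actions, good_action, n_rounds=20, seed=0):
    rng = random.Random(seed)
    prefs = [0.0] * n_actions  # action preferences (the "policy")
    for _ in range(n_rounds):
        # Autonomous practice: act mostly greedily, with some exploration.
        actions = [max(range(n_actions), key=lambda a: prefs[a])
                   if rng.random() > 0.3 else rng.randrange(n_actions)
                   for _ in range(5)]
        # Critique feedback from the teacher after the practice round.
        for a, label in teacher_critique(actions, good_action):
            prefs[a] += label
    return max(range(n_actions), key=lambda a: prefs[a])
```

Unlike pure reinforcement learning, the agent never needs a hand-designed reward function here; the teacher's post-round critique plays that role.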
Next we consider a learning mode where the computer agent can actively ask questions
to the teacher, which we call active imitation learning. We provide algorithms
for active imitation learning that are proven to require strictly less interaction with the
teacher than passive imitation learning. We also show that empirically active imitation learning algorithms are much more efficient than traditional passive imitation learning in terms of amount of interaction with the teacher.
Lastly, we introduce a novel imitation learning mode that allows a teacher to specify
shaping rewards to a computer agent in addition to demonstrations. Shaping rewards are
additional rewards supplied to an agent for accelerating policy learning via reinforcement
learning. We provide an algorithm to incorporate shaping rewards in imitation learning
and show that it learns from fewer demonstrations than pure imitation learning.
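A minimal sketch of combining the two teaching signals, under illustrative assumptions (tabular state-action scores, demonstrations as (state, action) pairs, shaping rewards as a dict, and simple additive weighting; the thesis's algorithm is not this):

```python
def shaped_imitation(demos, shaping, n_actions):
    """Combine demonstrations with teacher-specified shaping rewards.

    demos:   list of (state, action) pairs demonstrated by the teacher.
    shaping: dict mapping (state, action) to an extra reward signal.
    """
    scores = {}  # (state, action) -> learned preference
    for state, action in demos:
        scores[(state, action)] = scores.get((state, action), 0.0) + 1.0
    # Shaping rewards act as additional evidence about action quality,
    # so good actions can be preferred even in states never demonstrated.
    for sa, r in shaping.items():
        scores[sa] = scores.get(sa, 0.0) + r
    def policy(state):
        return max((scores.get((state, a), 0.0), a) for a in range(n_actions))[1]
    return policy
```

In this toy form, the shaping dict covers states the demonstrations never visit, which is why fewer demonstrations can suffice.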
We wrap up by presenting a prototype User-Initiated Learning (UIL) system that
allows an end user to demonstrate procedures containing optional steps and instruct the
system to autonomously learn to predict when the optional steps should be executed, and
remind the user if they forget. Our prototype supports user-initiated demonstration and
learning via a natural interface, and has a built-in automated machine learning engine
to automatically train and install a predictor for the requested prediction problem.
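The core prediction problem in the prototype can be caricatured as follows. This toy "learning engine" just memorizes majority outcomes per demonstration context; the actual UIL prototype's automated machine learning engine is more sophisticated, and all names here are hypothetical.

```python
def train_optional_step_predictor(demonstrations):
    """Learn when an optional step should fire, from user demonstrations.

    demonstrations: list of (context, did_optional_step) pairs, one per
    demonstrated execution of the procedure.
    """
    counts = {}
    for context, did_step in demonstrations:
        yes, no = counts.get(context, (0, 0))
        counts[context] = (yes + 1, no) if did_step else (yes, no + 1)
    def predict(context):
        yes, no = counts.get(context, (0, 0))
        return yes > no  # remind the user when the step is usually taken
    return predict
```

For example, trained on emails where the user attached a file whenever a document was mentioned, the predictor would fire (and remind the user) in that context but stay silent otherwise.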
Genre Thesis/Dissertation
Topic Sequential Decision Making
Identifier http://hdl.handle.net/1957/47464
