Motivation:

A few weeks ago I spent some time reflecting on McGeer’s passive dynamic walkers, which walk smoothly down an inclined plane without any digital computation, and wondered whether a reinforcement learning algorithm could discover similar gaits for flat terrain [1]. From a reinforcement learning perspective, Policy Gradients are perhaps the simplest approach to bipedal dynamics, yet these methods have demonstrated great effectiveness on continuous control tasks [2]. For this reason, I decided to start investigating the problem with Vanilla Policy Gradients.

A secondary motivation came from The Policy of Truth, an article in which Ben Recht, a professor of optimisation and machine learning at Berkeley, presented a harsh critique of Policy Gradient methods and of the use of stochastic policies in particular. While Policy Gradient methods definitely have important limitations, I argue below that the specific points raised by Ben Recht are non-issues. On the other hand, it must be noted that bipedal walking can’t be reduced to reinforcement learning. If anything, considering that McGeer’s cleverly designed walkers don’t do any computation at all, the embodiment of the bipedal walker must be taken seriously.

In order to get started on a simple problem¹, I decided to apply Policy Gradients to unicycles, which, like passive dynamic walkers, are dynamically similar to inverted pendulums.

The Policy Gradients formalism:

Trajectories, Policies and Rewards:

A trajectory (aka rollout) is a sequence of states and actions generated by a dynamical system:

\[ \tau = (s_0, a_0, s_1, a_1, \dots, s_T, a_T) \]

and a policy \( \pi_\theta(a_t \mid s_t) \) is a conditional distribution over an agent’s actions \( a_t \) given states \( s_t \). This conditional distribution is typically parametrised by a function approximator (e.g. a neural network) with parameters \( \theta \) and serves as a stochastic policy from which we can sample actions: \( a_t \sim \pi_\theta(a_t \mid s_t) \). For a continuous control problem, \( \pi_\theta \) might be a conditional Gaussian whose mean and variance are outputs of the network.

Now, the objective of Policy Gradients is to find a policy \( \pi_\theta \) that maximises the expected total reward after \( T \) time steps:

\[ \max_\theta \; \mathbb{E}\Big[\sum_{t=0}^{T} r(s_t, a_t)\Big] \]

Derivation of Policy Gradients:

Given the reward function \( r(s_t, a_t) \), our objective function is:

\[ J(\theta) = \mathbb{E}_{\tau \sim P_\theta(\tau)}\Big[\sum_{t=0}^{T} r(s_t, a_t)\Big] = \mathbb{E}_{\tau \sim P_\theta(\tau)}\big[R(\tau)\big] \]

where \( P_\theta(\tau) \) is the probability distribution over trajectories induced by \( \pi_\theta \):

\[ P_\theta(\tau) = p(s_0) \prod_{t=0}^{T-1} \pi_\theta(a_t \mid s_t)\, p(s_{t+1} \mid s_t, a_t) \]

Having defined \( J(\theta) \), the Policy Gradients update may be derived using the identity \( \nabla_\theta P_\theta(\tau) = P_\theta(\tau)\, \nabla_\theta \log P_\theta(\tau) \):

\[ \nabla_\theta J(\theta) = \nabla_\theta \int P_\theta(\tau)\, R(\tau)\, \mathrm{d}\tau = \int P_\theta(\tau)\, \nabla_\theta \log P_\theta(\tau)\, R(\tau)\, \mathrm{d}\tau = \mathbb{E}_{\tau \sim P_\theta(\tau)}\big[\nabla_\theta \log P_\theta(\tau)\, R(\tau)\big] \]

A reasonable Monte Carlo approximation to \( \nabla_\theta J(\theta) \) would be:

\[ \nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log P_\theta(\tau_i)\, R(\tau_i) \]

where each \( \tau_i \) denotes a distinct trajectory generated by running a simulator with policy \( \pi_\theta \). Crucially, this approximation holds regardless of the dynamics of the environment:

\[ \nabla_\theta \log P_\theta(\tau) = \nabla_\theta \Big[ \log p(s_0) + \sum_{t=0}^{T-1} \log \pi_\theta(a_t \mid s_t) + \sum_{t=0}^{T-1} \log p(s_{t+1} \mid s_t, a_t) \Big] = \sum_{t=0}^{T-1} \nabla_\theta \log \pi_\theta(a_t \mid s_t) \]

since the initial-state and transition terms don’t depend on \( \theta \).
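To make this concrete, here is a minimal numpy sketch (a hypothetical toy example, not code from my repository) of the score-function estimator for a one-step problem with a scalar Gaussian policy \( a \sim \mathcal{N}(\mu, 1) \) and reward \( r(a) = -a^2 \):

import numpy as np

def reinforce_gradient(mu, n_rollouts=10000, seed=0):
    """Monte Carlo estimate of d/dmu E_{a ~ N(mu,1)}[r(a)] via the score function."""
    np.random.seed(seed)
    actions = np.random.normal(mu, 1.0, size=n_rollouts)   # sample a_i ~ pi_mu
    rewards = -actions**2                                   # toy reward r(a) = -a^2
    scores = actions - mu                                   # d/dmu log N(a; mu, 1) = a - mu
    return np.mean(scores * rewards)                        # average of score * reward

## the true gradient of E[-a^2] = -(mu^2 + 1) is -2*mu, so this should print
## something close to -2.0 (up to Monte Carlo noise):
print(reinforce_gradient(1.0))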

What is Policy Gradients doing?

  1. Increasing the probability of paths with positive reward.
  2. Decreasing the probability of paths with negative reward.

But, what if rewards are strictly non-negative?

Reducing the variance of \( \nabla_\theta J(\theta) \) with baselines:

In environments where there are no negative rewards we may extract more signal from our data by using baselines. For concreteness, we may use a constant baseline:

\[ \nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log P_\theta(\tau_i)\, \big(R(\tau_i) - b\big), \qquad b = \frac{1}{N} \sum_{i=1}^{N} R(\tau_i) \]

or, even better, we may use a state-dependent baseline \( b(s_t) \) which estimates the expected return from the current state and is fitted during training by minimising \( \lVert b(s_t) - R_t \rVert^2 \) over all trajectories, where \( R_t = \sum_{t'=t}^{T} r(s_{t'}, a_{t'}) \) is the return to go. In this case \( b(s_t) \) is a value estimator, more often than not parametrised by a neural network, that allows the Policy Gradients model to do temporal difference learning and therefore exploit the sequential structure of the decision problem. Moreover, using the advantage estimate \( \hat{A}_t = R_t - b(s_t) \) rather than the raw return reduces the variance of the gradient estimate: it extracts more signal from the observations by telling the model how much better the current action is than what is normally done in state \( s_t \).

Furthermore, it’s very useful to note that subtracting a baseline doesn’t introduce bias into the expectation:

\[ \mathbb{E}_{a_t \sim \pi_\theta(\cdot \mid s_t)}\big[\nabla_\theta \log \pi_\theta(a_t \mid s_t)\, b(s_t)\big] = b(s_t) \int \nabla_\theta \pi_\theta(a_t \mid s_t)\, \mathrm{d}a_t = b(s_t)\, \nabla_\theta \!\int \pi_\theta(a_t \mid s_t)\, \mathrm{d}a_t = b(s_t)\, \nabla_\theta 1 = 0 \]
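For illustration, here is a small numpy helper (a sketch with hypothetical names, not taken from my repository) that turns the rewards of a single rollout into returns-to-go \( R_t \) and advantages \( \hat{A}_t = R_t - b(s_t) \), given any callable baseline:

import numpy as np

def returns_and_advantages(rewards, states, baseline, gamma=1.0):
    """Compute returns-to-go R_t and advantages A_t = R_t - b(s_t) for one rollout."""
    returns = np.zeros(len(rewards))
    running = 0.0
    for t in reversed(range(len(rewards))):
        running = rewards[t] + gamma*running      # accumulate future (discounted) rewards
        returns[t] = running
    advantages = returns - np.array([baseline(s) for s in states])
    return returns, advantages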

Vanilla Policy Gradients:

By putting the above ideas together, we end up with a variant of the Vanilla Policy Gradients algorithm:

  1. Initialise the policy parameters \( \theta_0 \) and the baseline \( b \)
  2. For iteration \( = 1, 2, \dots \) do:
    a. Collect a set of trajectories by executing the current policy \( \pi_\theta \)
    b. At each time step \( t \) in each trajectory, compute the return \( R_t = \sum_{t'=t}^{T} r(s_{t'}, a_{t'}) \) and the advantage estimate \( \hat{A}_t = R_t - b(s_t) \).
    c. Re-fit the baseline by minimising \( \lVert b(s_t) - R_t \rVert^2 \) summed over all trajectories and time steps.
    d. Update the policy using a policy gradient estimate \( \hat{g} \), which is a sum of terms \( \nabla_\theta \log \pi_\theta(a_t \mid s_t)\, \hat{A}_t \).
  3. end for
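In code, the loop above might look roughly as follows; the simulator, policy and baseline objects are stand-ins for whatever implementation is used rather than the exact classes from my repository:

def vanilla_policy_gradient(simulator, policy, baseline,
                            n_iterations=100, rollouts_per_batch=10, horizon=30):
    """A schematic Vanilla Policy Gradients loop following steps (a)-(d) above."""
    for iteration in range(n_iterations):
        # (a) collect a set of trajectories with the current stochastic policy:
        batch = [simulator.rollout(policy, horizon) for _ in range(rollouts_per_batch)]
        
        # (b) returns-to-go and advantage estimates at every time step:
        for trajectory in batch:
            trajectory.compute_returns_and_advantages(baseline)
        
        # (c) re-fit the baseline by regressing on the observed returns:
        baseline.fit(batch)
        
        # (d) one gradient step on the sum of grad log pi(a_t|s_t) * A_t terms:
        policy.policy_gradient_step(batch)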

This is the algorithm I used to train the unicycle controller. But before describing my implementation, I’d like to address Ben Recht’s claim that continuous control researchers have no good reason to use stochastic policies.

Why do we use stochastic policies?:

In The Policy of Truth, Ben Recht is rather dismissive of Policy Gradient methods and argues that stochastic policies are a modelling choice that is never better than deterministic policies and optimal control methods in general. In particular, Ben argues that if the correct policy is deterministic then the probabilistic model class must satisfy several show-stopping constraints: it must be

  1. rich enough to approximate delta functions
  2. easy to search by gradient methods
  3. easy to sample from

Why then do reinforcement learning researchers use Policy Gradient methods? There are several good reasons:

  1. In high-dimensional state spaces (e.g. images), where we might use convolutional neural networks to learn a low-dimensional state representation, state-aliasing is practically inevitable, so it makes sense to use stochastic policies to reflect the model’s uncertainty.
  2. Sampling from a stochastic policy is a convenient method of exploring the state-space.
  3. Stochastic policies effectively smooth out rough/discontinuous reward landscapes and allow the agent to obtain reward signals it couldn’t obtain otherwise.
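To make the third point concrete: with a Gaussian policy, the objective the agent actually optimises at a state \( s \) is the reward convolved with a Gaussian,

\[ \tilde{r}(s) = \mathbb{E}_{a \sim \mathcal{N}(\mu_\theta(s),\, \sigma_\theta(s)^2)}\big[r(s, a)\big] = \int \mathcal{N}\big(a;\, \mu_\theta(s), \sigma_\theta(s)^2\big)\, r(s, a)\, \mathrm{d}a , \]

which is smooth in \( \mu_\theta(s) \) even when \( r(s, \cdot) \) itself is discontinuous.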

Moreover, Ben’s three points are actually non-issues:

  1. Neural networks, by Cybenko’s universal approximation theorem, may be used to parametrise essentially any probability distribution. Empirically, the space of functions they express is also easy to search with gradient methods. This is why Variational Inference has found many practical applications and why Edward has gained a lot of traction among statistical machine learning researchers and engineers.
  2. Distributions parametrised by neural networks are easy to sample from. In fact, a forward pass through the network is computationally cheap.
  3. A corollary of my first point is that we may easily approximate delta functions. If your policy is a conditional Gaussian, which is what I used for controlling the unicycle, then your model class definitely contains good approximations of delta functions.
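In particular, as the predicted standard deviation shrinks the conditional Gaussian concentrates on its mean,

\[ \mathcal{N}\big(a;\, \mu_\theta(s), \sigma^2\big) \;\longrightarrow\; \delta\big(a - \mu_\theta(s)\big) \quad \text{as } \sigma \to 0 , \]

so a deterministic policy is a limiting case of this model class rather than something outside it.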

To be honest, I am surprised that Ben Recht didn’t bring up any real issues such as curriculum learning² and transfer learning, where a lot of active research is currently ongoing [5]. Moreover, it’s important to note that Policy Gradient methods were developed for problems where optimal control theory, Ben Recht’s proposed alternative, doesn’t work at all.

Controlling a unicycle with Policy Gradients:

Modelling assumptions:

As shown in [4], if we suppose that our unicyclist is riding on a level surface without turning, motion in the wheel plane may be modelled by a planar inverted pendulum, whose length \( l \) is set by the wheel diameter, with a horizontally moving support. After cancelling out the mass term, an analysis of the force diagrams shows that we have the following Newtonian equation of motion:

\[ l\,\ddot{\theta} = g \sin\theta - \ddot{z} \cos\theta \]

where \( \theta \) is the angle of the rider from the vertical, \( z \) is the horizontal position of the wheel, and the horizontal acceleration \( \ddot{z} \) may be identified with a unicycle controller since it represents the rider’s acceleration of the wheel as a reaction to the instantaneous angular momenta. By defining \( u = \ddot{z}/g \) and \( \alpha = g/l \) we have the simpler equation:

\[ \ddot{\theta} = \alpha\,(\sin\theta - u \cos\theta) \]

Now, assuming that the rider starts approximately upright (i.e. \( \theta \approx 0 \)) we may linearise so that we have:

\[ \ddot{\theta} \approx \alpha\,(\theta - u) \]

and there exist relatively simple stable solutions for this linearised equation, such as proportional control \( u = k\theta \) with \( k > 1 \), which keeps the linearised system bounded,

but a more sophisticated unicyclist would anticipate the future consequences of the rate \( \dot{\theta} \) as well as react to the angle of the fall \( \theta \). This way, he/she may be able to overcome the effects of a finite reaction time. For this reason, I defined the state of the unicycle system to be:

\[ s = (\theta, \dot{\theta}) \]

The simulator:

In order to generate rollouts with a policy we must choose a method for numerically integrating the equations of motion, and after some reflection I opted for Velocity Verlet due to its simplicity and its numerical stability properties. Assuming that we have evaluated \( \ddot{\theta}_t \) and fixed a reasonable value for the time step \( \Delta t \), we have:

\[ \theta_{t+\Delta t} = \theta_t + \dot{\theta}_t\,\Delta t + \tfrac{1}{2}\,\ddot{\theta}_t\,\Delta t^2 \]
\[ \dot{\theta}_{t+\Delta t} = \dot{\theta}_t + \tfrac{1}{2}\,\big(\ddot{\theta}_t + \ddot{\theta}_{t+\Delta t}\big)\,\Delta t \]

Doing this for both \( \theta \) and \( z \) in TensorFlow, I defined the following Velocity Verlet function:

def velocity_verlet(self):
    """
        Update the unicycle system using Velocity Verlet integration. 
    """
    
    ## update rules for theta:
    dd_theta = self.alpha*(tf.sin(self.theta)-self.model.action*tf.cos(self.theta))
    theta = self.theta + self.d_theta*self.time + 0.5*self.dd_theta*tf.square(self.time)
    d_theta = self.d_theta + 0.5*(dd_theta+self.dd_theta)*self.time
    
    ## update rules for z:
    dd_z = self.g*self.model.action
    z = self.z + self.d_z*self.time + 0.5*self.dd_z*tf.square(self.time)
    d_z = self.d_z + 0.5*(dd_z+self.dd_z)*self.time
    
    step = tf.group(self.dd_theta.assign(dd_theta),self.theta.assign(theta),
                    self.d_theta.assign(d_theta),self.dd_z.assign(dd_z),
                    self.z.assign(z),self.d_z.assign(d_z))
            
    return step

Now, in order to encourage the policy network to discover controllers in the linear domain of the unicycle system, i.e. \( \theta \approx 0 \), I initialised the variables at the beginning of each rollout in the following manner:

def restart(self):
    """
        A simple method for restarting the system where the angle is taken to be a slight
        deviation from the ideal value of theta = 0.0 and the system has an important
        initial horizontal acceleration.
    """
    
    step = tf.group(self.theta.assign(tf.random_normal((1,1),stddev=0.1)),
                    self.d_theta.assign(tf.random_normal((1,1))),
                    self.dd_theta.assign(tf.random_normal((1,1))),
                    self.z.assign(tf.random_normal((1,1))),
                    self.d_z.assign(tf.random_normal((1,1))),
                    self.dd_z.assign(tf.random_normal((1,1),mean=0.0,stddev=1.0)))
    
    return step

The key part in this snippet of code is that the angle is drawn from a narrow distribution, \( \theta_0 \sim \mathcal{N}(0, 0.1^2) \), so that every rollout starts close to the upright position.

The Policy Gradients model:

For the unicycle controller (i.e. the policy \( \pi_\theta(a \mid s) \)) I defined a conditional Gaussian as follows:

def two_layer_net(self, X, w_h, w_h2, w_o, bias_1, bias_2):
    """
        A generic method for creating two-layer networks
        
        input: weights
        output: neural network
    """
    
    h = tf.nn.elu(tf.add(tf.matmul(X, w_h), bias_1))
    h2 = tf.nn.elu(tf.add(tf.matmul(h, w_h2), bias_2))
    
    return tf.matmul(h2, w_o)

def controller(self):
    """
        The policy gradient model is a neural network that 
        parametrises a conditional Gaussian. 
        
        input: state (i.e. angular momenta)
        output: action to be taken, i.e. the appropriate horizontal acceleration 
    """
    
    with tf.variable_scope("policy_net"):
        
        tf.set_random_seed(self.seed)
        
        W_h = self.init_weights([2,100],"W_h")
        W_h2 = self.init_weights([100,50],"W_h2")
        W_o = self.init_weights([50,10],"W_o")
        
        # define bias terms:
        bias_1 = self.init_weights([100],"bias_1")
        bias_2 = self.init_weights([50],"bias_2")
        
        eta_net = self.two_layer_net(self.pv, W_h, W_h2, W_o, bias_1, bias_2)
        
        W_mu = self.init_weights([10,1],"W_mu")
        W_sigma = self.init_weights([10,1],"W_sigma")
        
        self.mu = tf.multiply(tf.nn.tanh(tf.matmul(eta_net,W_mu)), self.action_bound)
        self.log_sigma = tf.multiply(tf.nn.tanh(tf.matmul(eta_net,W_sigma)), self.variance_bound)
        
    return self.mu, self.log_sigma

and sampling from this Gaussian is as simple as doing:

def sample_action(self):
    """
        Samples an action from the stochastic controller which happens
        to be a conditional Gaussian. 
    """
    
    dist = tf.contrib.distributions.Normal(self.mu, tf.exp(self.log_sigma))
    
    return dist.sample()

Likewise, you may look at my Policy Gradients class to see how I calculated the baseline.
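For reference, a state-dependent baseline can be as simple as a second small network regressed onto the observed returns. A minimal TensorFlow sketch, reusing the weight initialiser from above but not reproducing my exact class (the self.returns placeholder is illustrative), might look like this:

def baseline_network(self):
    """
        A small value network b(s) fitted to the observed returns-to-go by
        minimising a squared error; a sketch rather than the exact implementation.
    """
    
    with tf.variable_scope("baseline_net"):
        
        W_b = self.init_weights([2,50],"W_b")
        bias_b = self.init_weights([50],"bias_b")
        W_v = self.init_weights([50,1],"W_v")
        
        h = tf.nn.elu(tf.add(tf.matmul(self.pv, W_b), bias_b))
        self.value = tf.matmul(h, W_v)
        
        ## self.returns is assumed to hold the empirical returns-to-go for each state:
        self.baseline_loss = tf.reduce_mean(tf.square(self.value - self.returns))
        
    return self.value, self.baseline_loss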

Defining rewards:

Defining the reward might be the trickiest part of this experiment, as it’s not obvious how the unicycle controller should be rewarded; from a dynamical systems perspective a reward doesn’t really make sense. Either the unicycle has a stable controller or it doesn’t. But, in order to adhere to the Policy Gradients formalism, I opted for the instantaneous height of the unicycle, \( r_t = l \cos\theta_t \), as the reward.
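In the simulator this reduces to a single line; something along the following lines, where the length attribute is illustrative rather than the exact name used in my code:

## instantaneous height of the (inverted-pendulum) rider, used directly as the reward:
self.height = self.length*tf.cos(self.theta)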

As a result, the REINFORCE loss (i.e. no baseline subtracted) is simply:

def reinforce_loss(self):
    """
        The REINFORCE loss without subtracting a baseline. 
    """
    
    dist = tf.contrib.distributions.Normal(self.mu, tf.exp(self.log_sigma))
    
    ## the log-probability is evaluated at the sampled action (here assumed
    ## to be stored in self.action), not at the mean of the distribution:
    return dist.log_prob(self.action)*self.height
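Since TensorFlow optimisers minimise their argument, the corresponding training operation can simply minimise the negative of this quantity, e.g. (a sketch, with a hypothetical self.learning_rate attribute):

## gradient ascent on the REINFORCE objective = gradient descent on its negative:
self.train_op = tf.train.AdamOptimizer(self.learning_rate).minimize(-tf.reduce_sum(self.reinforce_loss()))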

Results:

After defining reasonable hyperparameters (rollouts per batch = 10, horizon = 30, total epochs = 100, …) I ran a few tests in the following notebook and found that the learned controller was very good at bringing the unicycle to maximum height but terrible at keeping the unicycle at that height once it got there. In short, it was very unstable.

Actually, if we evaluate the learned controller with the rate held fixed (e.g. \( \dot{\theta} = 0 \)), we find that the unicycle exhibits highly nonlinear behaviour in the neighbourhood of a particular angle:

So the force is negative for angles greater than this value and positive for angles less than it. Given that this angle lies many standard deviations away from the initial distribution of \( \theta \), this is most likely due to my choice of tanh activation for the final layer of the policy network.

Furthermore, if we analyse the learned controller’s behaviour as a function of the full state (i.e. \( \theta \) and \( \dot{\theta} \)) we observe the following:

The combination of the short-sighted policy and the tanh nonlinearity makes me wonder whether there are small tweaks to my TensorFlow functions that might lead to much better results.

Discussion:

While I think the results are interesting and may be improved from a technical perspective, I don’t think any particular reward function makes sense for learning re-usable locomotion behaviours. Ideally, the agent would be able to propose and learn curricula of locomotion behaviours in an unsupervised manner, in order to learn models of its affordances and of its intrinsic locomotory options. One promising approach was articulated by Sébastien Forestier, Yoan Mollard and Pierre-Yves Oudeyer in [6], but I’ll have to think about how Intrinsically Motivated Goal Exploration can be assimilated within the Policy Gradients framework.

References:

  1. Passive Dynamic Walking. T. McGeer. 1990.
  2. Emergence of Locomotion Behaviours in Rich Environments. Nicolas Heess et al. 2017.
  3. Policy Gradient Methods for Reinforcement Learning with Function Approximation. Richard S. Sutton, David McAllester, Satinder Singh, and Yishay Mansour. 1999.
  4. Unicycles and Bifurcations. R. C. Johnson. 2002.
  5. Modular Multitask Reinforcement Learning with Policy Sketches. Jacob Andreas, Dan Klein, and Sergey Levine. 2017.
  6. Intrinsically Motivated Goal Exploration Processes with Automatic Curriculum Learning. Sébastien Forestier, Yoan Mollard, and Pierre-Yves Oudeyer. 2017.

Footnotes:

  1. In my experience it always helps to work on tractable and conceptually interesting variants of complex problems (e.g. bipedal locomotion), as these often provide deep insights into the harder problem of interest. 

  2. I’d be interested to see how optimal control theory may be used for curriculum learning.