
Differentiable Simulators

Yang You · 3 min read

In this blog, we delve into the mechanics of differentiable simulators and explore why they are sometimes a more advantageous choice than reinforcement learning (RL) methods.

What is a Differentiable Simulator?

Imagine a robot in an environment where, in each state $s \in \mathcal{S}$, the agent can execute an action $a \in \mathcal{A}$ leading to a subsequent state $s' \in \mathcal{S}$. We can describe this transition with the function $f: \mathcal{S} \times \mathcal{A} \to \mathcal{S}$. In conventional non-differentiable simulators, this function $f$ is often treated as a black box, with observed rewards $r \in \mathcal{R}$ serving as the primary signal for RL-based learning.

In contrast, in a differentiable simulator the function $f$ is an end-to-end differentiable operator. This means that if we define a loss on the output state, $l(s')$, we can compute its gradient with respect to the input state and action, i.e. $\frac{\partial l(s')}{\partial s}$ and $\frac{\partial l(s')}{\partial a}$. This capability enables the optimization of an entire sequence of actions via the chain rule.
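To make this concrete, here is a minimal PyTorch sketch. The step function `f` is a toy point-mass model standing in for a real differentiable simulator, and the target position is made up purely for illustration.

```python
import torch

def f(s, a):
    # Toy point-mass dynamics: the next state is the current state
    # nudged by the action (a stand-in for a real differentiable simulator).
    return s + 0.1 * a

s = torch.tensor([0.0, 0.0], requires_grad=True)   # current state
a = torch.tensor([1.0, -0.5], requires_grad=True)  # action to optimize
target = torch.tensor([1.0, 1.0])

s_next = f(s, a)                        # s' = f(s, a)
loss = ((s_next - target) ** 2).sum()   # l(s')
loss.backward()

print(a.grad)  # dl(s')/da, available because f is differentiable
print(s.grad)  # dl(s')/ds
```

Because the entire step is expressed in differentiable operations, autograd returns the exact gradients of the loss with respect to both the state and the action, which is precisely what a black-box simulator cannot provide.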

Why are Differentiable Simulators Effective for Policy Learning?

As Yann LeCun insightfully noted at NeurIPS 2016, "If intelligence is a cake, the bulk of the cake is unsupervised learning, the icing on the cake is supervised learning, and the cherry on the cake is reinforcement learning" (source). The point is that reward alone is a weak learning signal. A differentiable simulator instead provides a direct supervisory gradient with respect to the actions we aim to optimize, rather than the indirect, reward-based estimate used by the REINFORCE algorithm in RL.

For instance, consider a task where the goal is to maneuver an object to a target position using a pusher. In an RL scenario, this would require extensive exploration, potentially involving thousands of trials before significant progress is made. Conversely, in a differentiable simulator, each iteration of gradient descent inherently contributes to progress toward the goal.
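The sketch below illustrates this contrast under the same assumed toy dynamics as above: an entire action sequence is optimized by backpropagating through the rollout, so every gradient step moves the final state toward the goal rather than waiting for lucky exploration. The horizon length, learning rate, and goal are arbitrary choices for the example.

```python
import torch

def f(s, a):
    # Same toy stand-in dynamics as in the previous sketch.
    return s + 0.1 * a

horizon = 20
actions = torch.zeros(horizon, 2, requires_grad=True)  # action sequence to optimize
goal = torch.tensor([1.0, 1.0])
optimizer = torch.optim.Adam([actions], lr=0.1)

for step in range(200):
    s = torch.tensor([0.0, 0.0])      # initial state
    for t in range(horizon):
        s = f(s, actions[t])          # differentiable rollout
    loss = ((s - goal) ** 2).sum()    # l(s_T): distance to goal at the end
    optimizer.zero_grad()
    loss.backward()                   # chain rule back through the whole rollout
    optimizer.step()
```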

Miscellaneous Considerations

While differentiable simulators present certain advantages over RL, they are not without their limitations and challenges, such as:

  • Invalid Gradients in Certain Scenarios: Consider a scenario involving a rigid rolling pin used to flatten dough. If the initial sequence of actions fails to make contact with the dough, the resulting gradient will be zero throughout. To address this, some studies, like PlasticineLab, suggest incorporating a contact loss based on proximity to the target object (see the sketch after this list). Others propose 'softening' rigid tools by increasing their influence radius, allowing objects to be affected without direct contact.

  • Limited Efficiency in Long-Horizon Tasks: As discussed in this paper, the dependency of differentiable physics on local gradients poses significant challenges. The loss landscape in these scenarios is often complex and riddled with potentially misleading local optima, which can diminish the reliability of this method for certain tasks.
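
The following is a hypothetical sketch of the proximity-based contact-loss idea mentioned in the first bullet; the function names and the weighting term are assumptions, not the actual PlasticineLab implementation.

```python
import torch

def contact_loss(tool_pos, object_pos):
    # Penalizes the distance between tool and object, so the gradient is
    # nonzero even before any contact occurs.
    return ((tool_pos - object_pos) ** 2).sum()

def total_loss(task_loss, tool_pos, object_pos, w=0.1):
    # Assumed weighting of the auxiliary term; real systems tune this.
    return task_loss + w * contact_loss(tool_pos, object_pos)
```

Once the tool is close enough to touch the object, the task loss itself starts producing informative gradients and the auxiliary term becomes less important.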