Dynamic Walking 2010. Emanuel Todorov. Linearly-solvable optimal control: Theory and applications
Optimal control has great potential, but its applications to robotics have been limited, partly because the problem is very hard to solve even numerically. In this talk I will summarize a recently developed formulation of stochastic optimal control which renders the problem linear (in the unknown value function). This makes it possible to use large numbers of basis functions to parameterize the solution. Other advantages include a compositionality law for optimal controllers, as well as a duality with Bayesian inference. This duality yields an interesting relation between deterministic and stochastic optimization: the Hessian of the total cost around an optimal deterministic trajectory coincides with the Laplace approximation to the density of stochastic trajectories generated by the optimal feedback controller. I will show how this relation can be exploited to optimize feedback controllers for periodic behaviors such as walking. The new formulation can also handle contacts, because it relies on integral equations and does not need to assume smooth dynamics. This is in contrast with most prior applications of optimal control to walking, where the contact states have usually been specified in advance.
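To make the linearity concrete: in the discrete-time, first-exit version of this class of problems, the exponentiated value function z(x) = exp(-v(x)) satisfies a linear equation, z = exp(-q) * (P z), where q is the state cost and P the passive dynamics, so z can be found by linear fixed-point iteration (or a linear solve) rather than nonlinear dynamic programming. The following is a minimal sketch on an invented toy problem (a 1-D random walk with absorbing endpoints; all states, costs, and dynamics below are illustrative assumptions, not taken from the talk):

```python
import numpy as np

# Toy first-exit problem (hypothetical): states 0..N, absorbing at both ends.
N = 20
q = 0.1 * np.ones(N + 1)           # running state cost at interior states (assumed)
q_term = np.array([5.0, 0.0])      # terminal costs at states 0 and N (assumed)

# Passive dynamics P: unbiased random walk at interior states.
P = np.zeros((N + 1, N + 1))
for i in range(1, N):
    P[i, i - 1] = P[i, i + 1] = 0.5

# Linear Bellman equation for the exponentiated value z = exp(-v):
#   z = exp(-q) * (P @ z)  at interior states,
#   z = exp(-q_term)       fixed on the absorbing boundary.
z = np.ones(N + 1)
z[0], z[N] = np.exp(-q_term)
for _ in range(5000):
    z_new = np.exp(-q) * (P @ z)
    z_new[0], z_new[N] = z[0], z[N]    # re-impose boundary condition
    z = z_new

v = -np.log(z)                         # optimal cost-to-go

# Optimal controlled transition probabilities: u*(x'|x) proportional to P[x,x'] * z[x'].
u_star = P * z[None, :]
u_star[1:N] /= u_star[1:N].sum(axis=1, keepdims=True)
```

Because the equation is linear in z, the update above is just repeated matrix-vector multiplication, and the solution could equally be expanded in a large set of basis functions, which is the scalability advantage the abstract refers to. The controlled dynamics u_star come out in closed form by reweighting the passive dynamics with z, with no per-state optimization.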