Hamiltonian (control theory)

The Hamiltonian of optimal control theory was developed by Lev Pontryagin as part of his minimum principle.[1] It was inspired by, but is distinct from, the Hamiltonian of classical mechanics. Pontryagin proved that a necessary condition for solving the optimal control problem is that the control should be chosen so as to optimize the Hamiltonian (to maximize it under the sign convention used below). For details see Pontryagin's minimum principle.

Notation and problem statement

A control u(t) is to be chosen so as to minimize the objective function


J(u)=\Psi(x(T))+\int_0^T L(x,u,t)\,dt

where x(t) is the system state, which evolves according to the state equations


\dot{x}=f(x,u,t) \qquad x(0)=x_0 \quad t \in [0,T]

and the control must satisfy the constraints


a \le u(t) \le b \quad t \in [0,T]
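As a concrete illustration (a standard textbook instance, not part of the general formulation above), consider the scalar minimum-energy problem

\Psi(x(T))=\tfrac{1}{2}x(T)^2 \qquad L(x,u,t)=\tfrac{1}{2}u^2 \qquad \dot{x}=u \qquad x(0)=x_0

with the control unconstrained. This instance is solved below once the Hamiltonian has been defined.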

Definition of the Hamiltonian


H(x,\lambda,u,t)=\lambda^T(t)f(x,u,t)-L(x,u,t)

where \lambda(t) is a vector of costate variables of the same dimension as the state variables x(t).

For information on the properties of the Hamiltonian, see Pontryagin's maximum principle.
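For the scalar minimum-energy instance introduced above, the Hamiltonian is

H(x,\lambda,u,t)=\lambda u-\tfrac{1}{2}u^2

which is maximized by u^*=\lambda. Under this sign convention the costate satisfies \dot{\lambda}=-\partial H/\partial x=0 with transversality condition \lambda(T)=-x(T), so the costate, and hence the control u^*=\lambda=-x(T), is constant; integrating \dot{x}=u^* then gives x(T)=x_0/(1+T) and the constant optimal control u^*=-x_0/(1+T).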

The Hamiltonian in discrete time

When the problem is formulated in discrete time, the Hamiltonian is defined as:


H(x,\lambda,u,t)=\lambda^T(t+1)f(x,u,t)-L(x,u,t)

and the costate equations are


\lambda(t+1)=\lambda(t)-\frac{\partial H}{\partial x}\,dt

(Note that the discrete-time Hamiltonian at time t involves the costate variable at time t+1.[2] This small detail is essential: when we differentiate H with respect to x we obtain a term involving \lambda(t+1) on the right hand side of the costate equation, so that the recursion runs backwards in time. Using the wrong convention here leads to incorrect results, namely a costate equation which is not a backwards difference equation.)
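To make the backwards recursion concrete, the following minimal Python sketch performs the costate sweep. It assumes the Euler discretization x(t+1) = x(t) + f(x,u,t) dt and the quadratic terminal cost \Psi(x)=\tfrac{1}{2}x^2 from the example above; the specific dynamics, running cost and fixed control guess are illustrative, not part of the article's formulation.

    import numpy as np

    def f(x, u, t):
        # State increment (illustrative single-integrator dynamics).
        return u

    def L(x, u, t):
        # Running cost (illustrative quadratic control effort).
        return 0.5 * u**2

    def dH_dx(x, u, lam_next, t, h=1e-6):
        # Partial derivative of H = lam(t+1)*f - L with respect to x,
        # by central finite differences; note lam(t+1), not lam(t).
        H = lambda xx: lam_next * f(xx, u, t) - L(xx, u, t)
        return (H(x + h) - H(x - h)) / (2.0 * h)

    T, N = 1.0, 100
    dt = T / N
    x0 = 1.0

    # Forward pass under a fixed control guess.
    u_traj = np.full(N, -0.5)
    x_traj = np.empty(N + 1)
    x_traj[0] = x0
    for t in range(N):
        x_traj[t + 1] = x_traj[t] + f(x_traj[t], u_traj[t], t) * dt

    # Backward sweep: lam(t) = lam(t+1) + (dH/dx) dt, a backwards
    # difference equation, with terminal condition lam(T) = -x(T)
    # (from Psi(x) = x^2/2 under the sign convention used here).
    lam = np.empty(N + 1)
    lam[-1] = -x_traj[-1]
    for t in range(N - 1, -1, -1):
        lam[t] = lam[t + 1] + dH_dx(x_traj[t], u_traj[t], lam[t + 1], t) * dt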

The Hamiltonian of control compared to the Hamiltonian of mechanics

William Rowan Hamilton defined the Hamiltonian as a function of three variables:

\mathcal{H} = \mathcal{H}(p,q,t) = \langle p,\dot{q} \rangle -L(q,\dot{q},t)

where \dot{q} is defined implicitly by

p = \frac{\partial L}{\partial \dot{q}}
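For example, for a particle of mass m moving in a potential V(q), the Lagrangian is L(q,\dot{q})=\tfrac{1}{2}m\dot{q}^2-V(q), so that p=m\dot{q}, \dot{q}=p/m, and

\mathcal{H}=p\,\frac{p}{m}-\left(\frac{p^2}{2m}-V(q)\right)=\frac{p^2}{2m}+V(q)

the total energy of the particle.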

Hamilton then formulated his equations as

\frac{d}{dt}p(t) = -\frac{\partial \mathcal{H}}{\partial q}
\frac{d}{dt}q(t) = \frac{\partial \mathcal{H}}{\partial p}

Similarly, the Hamiltonian of control theory (as normally defined) is a function of four variables

H(q,u,p,t)= \langle p,\dot{q} \rangle -L(q,u,t)

and the associated conditions for a maximum are

\frac{dp}{dt} = -\frac{\partial H}{\partial q}
\frac{dq}{dt} = \frac{\partial H}{\partial p}
\frac{\partial H}{\partial u} = 0
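In particular, when the control is the velocity itself, \dot{q}=u, the stationarity condition \partial H/\partial u=0 reads p-\partial L/\partial u=0, i.e. p=\partial L/\partial \dot{q}, recovering Hamilton's implicit definition of the momentum; the first two conditions then reduce to Hamilton's equations.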

This definition agrees with that given in the article by Sussmann and Willems[3] (see p. 39, equation 14). Sussmann and Willems show how the control Hamiltonian can be used in dynamics, e.g. for the brachistochrone problem, but do not mention the prior work of Carathéodory on this approach.[4]

References

  1. Ross, I. M. A Primer on Pontryagin's Principle in Optimal Control. Collegiate Publishers, 2009.
  2. Varaiya, Chapter 6.
  3. Sussmann, H. J.; Willems, J. C. (June 1997). "300 Years of Optimal Control" (PDF). IEEE Control Systems.
  4. Pesch, H. J.; Bulirsch, R. Journal of Optimization Theory and Applications, Vol. 80, 1994, pp. 199–225.
