Journal of Applied Mathematics and Stochastic Analysis
Volume 2005 (2005), Issue 1, Pages 77-88
doi:10.1155/JAMSA.2005.77

Maximum process problems in optimal control theory

Goran Peskir

Department of Mathematical Sciences, University of Aarhus, Ny Munkegade, 8000 Aarhus, Denmark

Received 16 January 2004; Revised 23 June 2004

Copyright © 2005 Goran Peskir. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Given a standard Brownian motion $(B_t)_{t\geq 0}$ and the equation of motion $dX_t = v_t\,dt + \sqrt{2}\,dB_t$, we set $S_t = \max_{0\leq s\leq t} X_s$ and consider the optimal control problem $\sup_v \mathsf{E}(S_\tau - c\tau)$, where $c>0$ and the supremum is taken over all admissible controls $v$ satisfying $v_t \in [\mu_0,\mu_1]$ for all $t$ up to $\tau = \inf\{t>0 \mid X_t \notin (\ell_0,\ell_1)\}$, with $\mu_0 < 0 < \mu_1$ and $\ell_0 < 0 < \ell_1$ given and fixed. The following control $v$ is proved to be optimal: “pull as hard as possible,” that is, $v_t = \mu_0$ if $X_t < g(S_t)$, and “push as hard as possible,” that is, $v_t = \mu_1$ if $X_t > g(S_t)$, where $s \mapsto g(s)$ is a switching curve that is determined explicitly (as the unique solution to a nonlinear differential equation). The solution found demonstrates that problem formulations based on a maximum functional can be successfully included in optimal control theory (calculus of variations), in addition to the classical problem formulations due to Lagrange, Mayer, and Bolza.
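As a rough illustration of the bang-bang structure described in the abstract, the Python sketch below simulates the controlled diffusion with a simple Euler–Maruyama scheme and estimates $\mathsf{E}(S_\tau - c\tau)$ for a given switching curve. The curve `g_placeholder` and all parameter values are illustrative assumptions only; they are not the switching curve or constants determined in the paper.

```python
import numpy as np

def estimate_payoff(g, mu0=-1.0, mu1=1.0, c=0.5, l0=-1.0, l1=1.0,
                    dt=1e-4, n_paths=2_000, rng=None):
    """Monte Carlo estimate of E(S_tau - c*tau) under the bang-bang control
    v_t = mu0 if X_t < g(S_t) and v_t = mu1 if X_t > g(S_t), for the dynamics
    dX = v dt + sqrt(2) dB, stopped when X leaves (l0, l1).
    All parameter values here are illustrative, not taken from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    payoffs = np.empty(n_paths)
    for i in range(n_paths):
        x, s, t = 0.0, 0.0, 0.0
        while l0 < x < l1:
            v = mu0 if x < g(s) else mu1            # "pull" below the curve, "push" above it
            x += v * dt + np.sqrt(2.0 * dt) * rng.standard_normal()
            s = max(s, x)                           # running maximum S_t
            t += dt
        payoffs[i] = s - c * t                      # payoff S_tau - c*tau for this path
    return payoffs.mean()

# Placeholder switching curve (NOT the solution of the nonlinear ODE in the paper).
g_placeholder = lambda s: s - 0.3

if __name__ == "__main__":
    print(estimate_payoff(g_placeholder))
```

In the paper the curve $g$ is characterized as the unique solution of a nonlinear differential equation; a sketch like this one can only be used to compare the expected payoffs of candidate switching curves numerically.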