Lahrech, Samir; Addou, Ahmed
One aspect on Bellman's equation to optimal control theory. (English)
Zbl 1033.49036
Appl. Math. E-Notes 4, 16-20 (2004).

Summary: In the present work, we develop the idea of the dynamic programming approach. The main observation is that the Bellman function \(\omega(x,t)\), which provides, for any given state \(x\) at any given time \(t\), the smallest possible cost among all trajectories starting at this event, is in general not differentiable, and consequently we cannot use the Hamilton-Jacobi-Bellman (HJB) equation directly. By the classical Hamilton-Jacobi-Bellman theory, if the value function is continuously differentiable, then it is the unique solution of the HJB equation. It is well known, however, that the value function \(\omega\) is in general discontinuous, even if all the data of the problem are continuously differentiable. Using the HJB equation in some nonclassical sense (e.g., via generalized gradients, in the framework of viscosity solutions, proximal solutions, etc.) has become a very active research area. Here, we give techniques based on the differential set \(\partial\omega(x,l)\) of the function \(\omega\) at the element \(x\) along the direction \(l\) for the analysis of such problems.

MSC:
49L20 Dynamic programming in optimal control and differential games
49L25 Viscosity solutions to Hamilton-Jacobi equations in optimal control and differential games

Keywords: optimal control; dynamic programming; Bellman function; Hamilton-Jacobi-Bellman theory; generalized gradients; viscosity solutions; proximal solutions; differential set

Cite: \textit{S. Lahrech} and \textit{A. Addou}, Appl. Math. E-Notes 4, 16--20 (2004; Zbl 1033.49036)

Full Text: EuDML EMIS
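For context, the classical finite-horizon HJB equation referenced in the summary can be sketched as follows. This is the standard textbook form, not taken from the paper itself; the dynamics \(f\), running cost \(L\), terminal cost \(g\), and control set \(U\) are assumed notation, since the abstract does not specify the problem data or sign conventions.

```latex
% Sketch of the classical HJB equation for the value function \omega(x,t)
% of the problem: minimize \int_t^T L(x(s),u(s))\,ds + g(x(T))
% subject to \dot{x}(s) = f(x(s),u(s)), x(t) = x, u(s) \in U.
\[
  -\frac{\partial \omega}{\partial t}(x,t)
  = \min_{u \in U}\Bigl\{ L(x,u) + \nabla_x \omega(x,t)\cdot f(x,u) \Bigr\},
  \qquad \omega(x,T) = g(x).
\]
```

The summary's point is precisely that this PDE presupposes differentiability of \(\omega\), which generally fails; hence the nonclassical frameworks (viscosity solutions, generalized gradients, the directional differential set \(\partial\omega(x,l)\)) that the paper discusses.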