
On the rate of convergence of finite-difference approximations for Bellman’s equations with variable coefficients. (English) Zbl 0971.65081

Finite-difference schemes generating approximate viscosity (equivalently, probabilistic) solutions of the degenerate Bellman equation are considered, and estimates of their rates of convergence are derived.
For fixed real \(T>0\) the problem under study reads \[ \begin{aligned} F\left(\frac{\partial u}{\partial t}, {\mathbf D }_2 u , \nabla u , u , t, x \right) = 0 &\quad \text{in } (0,T) \times \mathbb{R}^d, \\ u(T,x) = g(x)&\quad \text{in } \mathbb{R}^d, \end{aligned} \tag{1} \] where the sought scalar function \(u\) is defined on the cylinder \({\mathcal C}=(0,T) \times \mathbb{R}^d\), \(\mathbb{R}^d\) being the \(d\)-dimensional Euclidean space, and \({\mathbf D }_2 u = ( \partial ^2 u /\partial x_\imath \partial x_\jmath)_{\imath \jmath } \).
In (1), for real \(t\in (0,T)\) and \(x\in\mathbb{R}^d\), a scalar \(v\), a symmetric \(d\times d\) matrix \( \mathbf y = (y_{\imath\jmath}) \), a vector \( \mathbf z = (z_\imath) \in \mathbb{R}^d \) and a scalar \(\omega\), the operator \(F\) is defined by \[ F( v,{\mathbf y } , {\mathbf z }, \omega, t, x):= \sup _{\alpha } \Big\{v + \sum_{\imath , \jmath =1}^{d} { a }^{\imath \jmath}(\alpha,t,x){ y }_{\imath \jmath} + \sum_{\imath =1 }^{d}{ b }^{\imath }(\alpha,t,x){ z }_{\imath} - c^{\alpha}(t,x)\omega + f^{\alpha}(t,x)\Big\}. \] Here the supremum is taken over \(\alpha\) running through the set of admissible controls, a separable metric space denoted by \(A\), and the coefficients \({\mathbf a }= (a^{\imath \jmath})\), \({\mathbf b }=({ b }^{\imath })\), \(c\) and the forcing term \(f\) are all given on \(A\times{\mathcal C}\).
This kind of differential system is related to dynamic programming equations for value functions in control problems of diffusion processes.
Convergence proofs for numerical solutions of this problem are obtained mainly by two methods. The first, due to H. J. Kushner and P. G. Dupuis [Numerical methods for stochastic control problems in continuous time, Springer-Verlag, New York (1992; Zbl 0754.65068)], rests on showing that the controlled Markov chains constructed by the scheme converge weakly to the controlled diffusion process. The second, introduced by G. Barles and P. E. Souganidis [Asymptotic Anal. 4, No. 3, 271-283 (1991; Zbl 0729.65077)] within a general abstract framework and analysed by W. Fleming and M. Soner [Controlled Markov processes and viscosity solutions, Springer-Verlag, New York (1993; Zbl 0773.60070)], exploits the uniqueness of viscosity solutions of (1). Neither method yields estimates of the rate of convergence, in contrast to the technique presented in this paper, which does provide such estimates, although without establishing whether they are sharp. The technique is also of independent interest, since it relies only on quite elementary analytical properties of the value function.
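To fix ideas, a scheme of the kind discussed can be sketched for a one-dimensional instance of (1): at each grid node one discretizes the second and first derivatives by central differences and takes, pointwise, the supremum of the resulting operator over a finite control set, stepping backward from the terminal condition. This is a minimal illustrative sketch with invented coefficients and grid parameters, not the scheme analysed in the paper.

```python
import numpy as np

def solve_bellman_1d(g, controls, T=1.0, L=2.0, nx=81, nt=800):
    """Explicit finite-difference sketch for
    u_t + sup_a { a u_xx + b u_x - c u + f } = 0,  u(T, .) = g,
    stepped backward in time from t = T to t = 0."""
    x = np.linspace(-L, L, nx)
    dx = x[1] - x[0]
    dt = T / nt  # chosen small enough for the explicit scheme to be monotone
    u = g(x)
    for _ in range(nt):
        uxx = np.zeros_like(u)
        ux = np.zeros_like(u)
        uxx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        ux[1:-1] = (u[2:] - u[:-2]) / (2.0 * dx)
        # supremum over the (finite) control set, taken pointwise in x
        ops = [a * uxx + b * ux - c * u + f for (a, b, c, f) in controls]
        u = u + dt * np.maximum.reduce(ops)
    return x, u

# toy control set: tuples (diffusion a, drift b, discount c, running cost f)
controls = [(0.5, 1.0, 0.1, 0.0), (0.2, -1.0, 0.1, 0.5)]
x, u = solve_bellman_1d(lambda x: x**2, controls)
```

The time step is restricted so that the coefficient of \(u\) at each node stays nonnegative, which makes the scheme monotone; it is precisely for such monotone approximations that the convergence results discussed above apply.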

MSC:

65M12 Stability and convergence of numerical methods for initial value and initial-boundary value problems involving PDEs
35B37 PDE in connection with control problems (MSC2000)
93E20 Optimal stochastic control
49J20 Existence theories for optimal control problems involving partial differential equations
93C20 Control/observation systems governed by partial differential equations
65M06 Finite difference methods for initial value and initial-boundary value problems involving PDEs