Some non-monotone schemes for Hamilton-Jacobi-Bellman equations

We extend the theory of Barles and Jakobsen to develop numerical schemes for Hamilton-Jacobi-Bellman equations. We show that the monotonicity requirement on the schemes can be relaxed while still guaranteeing convergence to the viscosity solution of the equation. We give some examples of such numerical schemes and show that the bounds obtained by the framework developed are not tight. Finally, we test some numerical schemes.


Introduction
We are interested in the following HJB equation arising in infinite horizon, discounted, stochastic control problems, with F(x, t, p, X) = sup_{α∈A} L^α(x, t, p, X), where a, b, c, f are at least continuous functions on R^N × A with values in S(N) (the space of symmetric N × N matrices), R^N, R and R respectively. The space of controls A is assumed to be a compact metric space. Given an approximation parameter h, we consider a discrete scheme S(h, x, r, [t]_x) defined for x ∈ R^N, r ∈ R, t a function defined on R^N, and [t]_x a function defined at x from t. This notation was introduced in [3]: the scheme S is said to be monotone if it is non-decreasing in r and non-increasing in [t]_x. A simple example of an unconditionally monotone scheme is given by the implicit discretization of the heat equation in one dimension.
A grid X_h = ∆t{0, 1, ..., N_T} × ∆xZ being given, the scheme can be written: and the monotonicity is checked because S is increasing in u^{n+1}_i and decreasing in u^{n+1}_{i−1}, u^{n+1}_{i+1} and u^n_i. When the scheme S is monotone, uniformly continuous and a consistent approximation of F, and when a bounded discrete solution u_h can be found, then u_h converges to the viscosity solution of problem (1) ([1], [2]). When the scheme is not monotone, it may fail to converge [4], or may converge towards a false solution. This monotonicity requirement leads to finite difference schemes such as [5] when a^α is not diagonally dominant, or to Semi-Lagrangian schemes ([6], [1]) of low order. In this paper we relax the monotonicity constraint on the scheme while preserving convergence to the correct solution. We will suppose that the scheme is a perturbation of a monotone scheme Ŝ; we have in mind schemes based on interpolation (Semi-Lagrangian schemes, or finite difference schemes with carefully chosen directions). Because in some cases the resolution of (3) can be impossible, we relax the equality requirement and only assume that we can find a function u_h such that on a given grid X that may depend on h, where ǫ(h) is a continuous function of h with ǫ(0) = 0. When it exists, the solution of such a scheme is not unique, so we consider a constructed sequence of solutions u_h of (4) and derive bounds proving the convergence of u_h towards the correct solution.
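For concreteness, the implicit scheme for the 1D heat equation u_t − u_xx = 0 alluded to above can plausibly be written as follows (a sketch; the exact normalization in the original may differ):

```latex
S\big(h, (t_{n+1}, x_i), u^{n+1}_i, [u]\big)
= \frac{u^{n+1}_i - u^n_i}{\Delta t}
- \frac{u^{n+1}_{i+1} - 2u^{n+1}_i + u^{n+1}_{i-1}}{\Delta x^2} = 0 .
```

The coefficient of u^{n+1}_i is 1/∆t + 2/∆x² > 0, while those of u^{n+1}_{i±1} and u^n_i are negative, so the monotonicity holds for any ∆t and ∆x, with no CFL condition.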
In the second part, we show that the Semi-Lagrangian method [7] with high order interpolation can be cast in this framework. We also prove that the bound obtained is not optimal. We then develop a finite difference approach with interpolation. Once again the method can be cast in the framework developed, and the convergence rate obtained is not optimal in this case either.
In the sequel the constant C may vary from line to line.

Main result
We define the norm denoted || as follows: for any integer m ≥ 1 and z = ( C^{0,1}(R^N) stands for the set of functions f : R^N → R with finite norm |f|_1, and C_b(R^N) for the set of functions with finite norm |f|_0. In the sequel we make the following classical assumptions. Assumption A1 For any α ∈ A, a^α = (1/2) σ^α σ^{αt} for some N × P matrix σ^α. Furthermore there exist λ, K independent of α such that: We recall the well-posedness and regularity result given in [1], with proofs in the references therein.
Proposition 1 Assume (A1). There exists a unique viscosity solution u of (1) in C_b(R^N). If w_1 and w_2 are in C_b(R^N) and are sub- and supersolutions of (1) respectively, then Remark 2.1 Assumption (5) can be stated for more general Hölder spaces, and the regularity of the solution is given in [2].
Here we add some new definitions that will be helpful in the sequel. First we introduce the notion of ǫ monotone scheme, stating that the scheme S is "nearly" monotone. Definition 2.2 An ǫ(p, K) monotone scheme S is a scheme such that there exists λ̄ satisfying: • for every h > 0, x ∈ R^N, r ∈ R, m ≥ 0, and every function u ∈ C_b(R^N): Because it is sometimes difficult or impossible to prove that a discrete scheme has a solution, we relax the notion of solution as we did in (4): Definition 2.3 An ǫ(c) solution u of scheme S is a continuous function which satisfies: In the same spirit we relax the notions of subsolution and supersolution.
Definition 2.4 An ǫ(c) subsolution (supersolution) u of scheme S is a continuous function which satisfies: On the scheme we make the following assumptions: Assumption S1 The scheme S is ǫ(p, K) monotone.
Assumption S2 (Regularity of the scheme S) For every h > 0 and φ ∈ Assumption S3 (Consistency) There exist integers m, k_i, i = 1, ..., m, and a constant K such that for every h ≥ 0, x ∈ R^N and smooth function φ We add an assumption on the existence of a solution of the discretized scheme with sufficient regularity, which has to be checked for each scheme: Assumption S4 We suppose there exist C and r independent of h such that for each h we can find an ǫ(Ch^r) solution u_h of scheme S.
Remark 2.5 Assumptions (S2) and (S3) are classical. Assumption (S1) is a relaxation of the constraint on the monotonicity of the scheme.
An ǫ monotone scheme does not ensure a discrete comparison result, but we give here a relaxed version of it: Lemma 1 Assume (S1). Let v ∈ C_b(R^N) and u ∈ C^{0,1}(R^N). If u is a subsolution of (4) and v is an ǫ(C) supersolution of (4), then For n large enough, δ_n > (1/λ̄)(K h^p |u|_{0,1} + C). Using the subsolution definition and assumption (S1): which gives the contradiction We now give the lower and upper bounds for the error of the scheme, based on the now standard Krylov method [8]. The lower bound will be given by the classical Krylov method, while the upper bound will be obtained by means of a switching system, as in [2], that provides a supersolution of the problem.
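A statement of the conclusion of Lemma 1 consistent with the contradiction step in the proof (which refutes δ_n > (1/λ̄)(K h^p |u|_{0,1} + C)) would be the following; this is a reconstruction, not the original statement:

```latex
\sup_{\mathbb{R}^N} (u - v) \;\le\; \frac{1}{\bar{\lambda}} \left( K h^{p}\, |u|_{0,1} + C \right).
```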
To use the theory developed in [2], we need to add a last assumption: We introduce the following switching system where and L^α given by (2), We state the two lemmas proved in [2]. Lemma 2 Assume (A1) and (A2).
• Assume in addition (A3), then for any δ > 0, there are M ∈ N and where C depends on λ, K from (A1).
A function u can be regularized by where ρ_ǫ is the mollifier sequence such that We can now give the main result of the paper, using the two previous lemmas for an upper bound on the ǫ solution.
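A standard form of this regularization, consistent with the bounds |D^m u_ǫ|_0 ≤ Cǫ^{1−m} used in the proof below (a sketch, not necessarily the exact normalization of the paper):

```latex
u_\varepsilon(x) = (u * \rho_\varepsilon)(x)
= \int_{\mathbb{R}^N} u(x - y)\, \rho_\varepsilon(y)\, dy,
\qquad
\rho_\varepsilon(y) = \varepsilon^{-N} \rho(y/\varepsilon),
```

where ρ ≥ 0 is smooth, supported in the unit ball, with ∫ρ = 1; then |u_ǫ − u|_0 ≤ Cǫ|u|_{0,1} and |D^m u_ǫ|_0 ≤ Cǫ^{1−m}|u|_{0,1} for m ≥ 1.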
The existence of a solution u_ǫ in C^{0,1}(R^N) satisfying |u_ǫ| ≤ C and |u_ǫ − u|_0 ≤ Cǫ is given by Lemma 2.6 in [1]. Noting that for each e ≤ ǫ, u_ǫ(. − e) satisfies The regularized function u_ǫ is a subsolution of problem (1), as given by Lemma 2.7 in [1]. The u_ǫ are uniformly bounded in C^{0,1}, so we have |u_ǫ|_{m,0} ≤ Cǫ^{1−m}. First use the relation for m > 0, and the consistency property (S3), to get (4). Using Lemma 1 and assumption (S4), we get that there exists Ĉ, a function of C̄, such that In the last line we have used that, because u_ǫ is bounded uniformly in C^{0,1}(R^N), u_ǫ is regular and |D^n u_ǫ|_0 ≤ Cǫ^{1−n} for n ≥ 1. Finally, using the mollifier properties and the uniform boundedness of u_ǫ in C^{0,1} For the upper bound we follow the Barles-Jakobsen proof, which we shorten except for the points that differ from the original. We fix δ > 0 and pick the corresponding {α_i}_{i∈I} according to (A3). The corresponding solution v_ǫ of (9) exists according to Lemma 2 and is regularized as in Lemma 3. We note where φ(y) = (1 + |y|^2)^{1/2}. Because of the boundedness and continuity of u_h and w, the maximum is attained at a point x. Because of the definition of x, Then using (S3): Using the properties of the mollified v_{ǫ,i} and the definition of φ: Then we use the ǫ monotonicity property (8), the definition of m_κ, the fact that v_{ǫ,i} is bounded uniformly by the properties of mollifiers, and Lemma 2 to get Using (14) and (15) we get An estimate of m is obtained by letting κ go to 0. Then for any y ∈ R^N, Using Lemmas 2 and 3 we get with Ĉ depending on C̄ and uniform in y. The conclusion is obtained by Proposition 1 and letting δ go to 0.

Some numerical schemes
In this section we give the notations used for the discretization and interpolation used by the scheme. A thorough study of interpolation methods for time dependent HJB equations can be found in [9]. A spatial discretization ∆x of the problem being given, thereafter (i_1 ∆x, ..., i_N ∆x) with ī = (i_1, ..., i_N) ∈ Z^N will correspond to the coordinates of a mesh M̄_ī defining a hypercube of size ∆x^N in dimension N. For a Gauss-Lobatto-Legendre quadrature grid (ξ_i)_{i=1,...,M} ∈ [−1, 1]^M, and for a mesh ī, the point y_{ī,j̃} with j̃ = (j_1, ..., j_N) ∈ [0, M]^N will have the coordinates (∆x(i_1 + 0.5(1 + ξ_{j_1})), ..., ∆x(i_N + 0.5(1 + ξ_{j_N}))). We denote X_{∆x,M} := (y_{ī,j̃})_{ī,j̃} the set of all the grid points on the whole domain.
We notice that for regular meshes with constant volume ∆x^N, we have the following relation for all x ∈ R^N: As in [9] we introduce I_{∆x,M}, the Lagrange interpolator associated with the Gauss-Lobatto-Legendre quadrature. On a mesh M̄_ī and for a point x in this mesh, we note v̲_ī = min_{j̃} v(y_{ī,j̃}), v̄_ī = max_{j̃} v(y_{ī,j̃}). We introduce the following truncated operator: We recall some properties easily proved in [9]. Lemma 4 The interpolator Î_{∆x,M} satisfies and the positive weights (w^h_{i,j}(f)) are functions of f. We then add a result on the interpolation error:
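A minimal one-dimensional sketch of this truncated interpolator, assuming hypothetical helper names (the paper's operator acts mesh by mesh in N dimensions):

```python
import numpy as np
from numpy.polynomial.legendre import Legendre

def gll_points(M):
    """Gauss-Lobatto-Legendre points on [-1, 1]: the two endpoints plus
    the roots of P'_{M-1}, the derivative of the Legendre polynomial."""
    if M == 2:
        return np.array([-1.0, 1.0])
    inner = np.sort(np.real(Legendre.basis(M - 1).deriv().roots()))
    return np.concatenate(([-1.0], inner, [1.0]))

def lagrange_eval(xs, ys, x):
    """Naive Lagrange interpolation of the data (xs, ys) at point x."""
    total = 0.0
    for j in range(len(xs)):
        lj = 1.0
        for k in range(len(xs)):
            if k != j:
                lj *= (x - xs[k]) / (xs[j] - xs[k])
        total += ys[j] * lj
    return total

def truncated_interp(xs, ys, x):
    """Truncated operator: clamp the high-order interpolant between the
    local min and max of the data, killing over- and undershoots."""
    v = lagrange_eval(xs, ys, x)
    return min(max(v, ys.min()), ys.max())
```

For example, with M = 4 and the symmetric data (0, 1, 1, 0), the cubic interpolant overshoots to 1.25 at the mesh center while the truncated value stays clamped at 1, which is exactly the mechanism that restores the bounds of Lemma 5.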

Lemma 5
• For each K-Lipschitz bounded function f : , there exists C such that: When the truncation is active, the truncated solution lies between the linear interpolation and the high degree interpolation, so the rate of convergence remains equal to 2. The second point is easily checked by noticing that I_{∆x,M} is a Lagrange interpolator and hence linear, that I_{∆x,M}(m) = m, and that the truncation operator tr satisfies tr(f + m) = tr(f) + m.
Remark 3.1 It is always possible to relax the truncation to the min/max as explained in [9].

A Camilli Falcone style scheme
The first scheme we study is a modification of the Camilli-Falcone scheme [6] where the linear interpolator I_{h,1} is replaced by a potentially high order interpolator Î_{h,M} with M > 1. We begin by defining the monotone operator Ŝ ([1], [6]), which is the Lagrangian scheme without interpolation. First, for any bounded continuous function φ and x, y, z ∈ R^N, we set where σ^α_i is the i-th column of σ. The semi discretized scheme S is defined as follows for y a quadrature point.
We need to prove that we can construct an approximated solution of the discretized problem. We introduce the operator T defined for U a function on X_{∆x,M}: There exists s depending on h and C independent of h such that u_h = T^s_{h,∆x} 0 is an ǫ(Ch^{q−2}) solution of scheme S.
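A hedged one-dimensional sketch of this fixed-point construction u_h = T^s 0, assuming a single control, hypothetical constant coefficients sigma and b, and periodic linear interpolation (not the paper's exact setting):

```python
import numpy as np

def sl_fixed_point(xgrid, sigma, b, f, lam, h, tol=1e-10, max_iter=100000):
    """Iterate u <- T u, where T averages u at the two characteristic
    feet x + h*b +/- sqrt(h)*sigma and discounts by 1/(1 + lam*h)."""
    u = np.zeros_like(xgrid)
    x0, period = xgrid[0], xgrid[-1] - xgrid[0]

    def interp(uvals, x):
        # monotone (linear) interpolation on a periodic uniform grid
        return np.interp((x - x0) % period + x0, xgrid, uvals)

    for it in range(max_iter):
        up = interp(u, xgrid + h * b + np.sqrt(h) * sigma)
        um = interp(u, xgrid + h * b - np.sqrt(h) * sigma)
        u_new = (h * f(xgrid) + 0.5 * (up + um)) / (1.0 + lam * h)
        if np.max(np.abs(u_new - u)) < tol:
            return u_new, it + 1
        u = u_new
    return u, max_iter
```

With f ≡ 1 and constant coefficients the exact fixed point is the constant 1/lam, and the iteration reaches it geometrically with contraction factor 1/(1 + lam·h), which illustrates why the number of iterations s grows as h shrinks.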

Proof 5
Let v_h denote the unique solution of scheme (18). For x ∈ X_{∆x,M} and U a function on X_{∆x,M}, using where we have used the uniform boundedness of v_h in C^{0,1}(R^N). Gathering (20) and (21) we get Iterating we find that Taking k = (q−1)log(h)/log(1−λh) and using the fact that ∆x = h^q, we get that Then using (A1), the fact that u_h = Î_{h,∆x} u_h, and (24) Proposition 5 Suppose u_h has been constructed as in Proposition 4; the framework developed above gives us that we can find q such that |u − u_h| < Ch (see [1]); a direct use of (24) then gives the result.

Remark 3.2
The fact that the bound given by the framework is not tight can be explained as follows:
• We cannot prove that the discrete problem has a solution; we can only prove the existence of an ǫ solution.
• The upper bound in Theorem 1 is obtained by modifying the framework in [2]. A tighter bound could perhaps be obtained by modifying the framework in [1].

Finite difference scheme
The matrix a^α can be written P D P^t where P is a unitary matrix with i-th column ξ_i and D is a diagonal matrix with D_{i,j} = δ_{i,j} d_i ≥ 0. We note (b^α)^+ the vector such that (b^α)^+_i(x) = max(0, b^α_i(x)) and (b^α)^−_i(x) = max(0, −b^α_i(x)). The operator L^α can be discretized for a regular function u using two parameters h and ĥ by where (e_i)_{i=1,...,N} is the canonical basis of R^N. We suppose that the equation has been normalized such that This is always possible because of (A1) and noting that i d α

For any bounded continuous function φ and x, y, z ∈ R^N, we set [φ]_x(z) = φ(x + z) and define the operator Ŝ By using [2] we get the following proposition. Proposition 7 The operator Ŝ is monotone and consistent, and there exists a unique solution v_h, bounded uniformly in C^{0,1}(R^N), of and the solution u of (1) satisfies

We introduce the operator We give the results obtained, exactly as for the Lagrangian scheme. Proposition 8 Assume (A1) holds and that ∆x = h^q. Then the scheme (27) satisfies assumptions (S1), (S2), (S3) with We then have to check that we can find an ǫ solution u_h satisfying (4). We introduce the operator defined for U a function on X_{∆x,M}: for x ∈ X_{∆x,M} Proposition 9 Assume (A1), (A2) hold. Suppose that ∆x = h^q with q > 2. There exists s depending on h and C independent of h such that u_h = I_{h,∆x} T^s_{h,∆x} 0 is an ǫ(Ch^{q−4}) solution of scheme S.

Proof 7 The proof is similar to that of Proposition 4. We first prove that Taking k = −(q−2)log(h)/log(1+λh^2) and using the fact that ∆x = h^q, we get that Then using (A1), (30), the relation |sup(··) − sup(··)| ≤ sup|·· − ··|, the fact that the probabilities are between 0 and 1, and the fact that u_h = Î_{h,∆x} u_h: Proposition 10 Constructing an ǫ solution of (4) with the u_h given by Proposition 9, we get Our framework indicates that q should be above 21/5.
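One plausible form of the two-parameter discretization described at the beginning of this section, with second differences of step ĥ along the eigendirections ξ_i and upwinded first differences of step h along the canonical directions (a sketch, not necessarily the exact scheme of the paper):

```latex
L^\alpha_{h,\hat h} u(x) =
\sum_{i=1}^{N} d^\alpha_i\,
\frac{u(x + \hat h \xi_i) - 2u(x) + u(x - \hat h \xi_i)}{\hat h^{2}}
+ \sum_{i=1}^{N}
\left(
  (b^\alpha)^+_i(x)\, \frac{u(x + h e_i) - u(x)}{h}
+ (b^\alpha)^-_i(x)\, \frac{u(x - h e_i) - u(x)}{h}
\right).
```

The upwinding ensures that every neighboring value enters with a non-negative coefficient, which is what makes the underlying scheme Ŝ monotone.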
A direct estimation gives that q should be greater than 11/5. Proof 8 The rate of convergence and the value q = 21/5 come from a direct use of Theorem 1. A direct use of (30) with the rate of convergence |u − v_h|_0 ≤ h^{1/5} from [2] gives us the better estimate for q.

Numerical tests
The theoretical bounds obtained in the previous section are no better than the ones obtained with linear interpolation. The interest of this approximation relies on the fact that, where the solution is smooth, we expect that the solution will not be truncated and that the consistency error will be far better than the theoretical one. All cases treated are 2D. For the first three examples, we only give results for the Lagrangian scheme, because the finite difference scheme developed coincides with the classical finite difference scheme. The domain of resolution will be noted Q, 1 is the diagonal unit matrix and 1 is the vector with unit components. For all SL schemes we choose h = 0.0002. The number of controls tested is equal to 2000. The software is parallelized on 48 cores as explained in [9]. The interpolations used are either linear (2 points per mesh in each direction, monotone scheme), quadratic (3 points per mesh in each direction), or cubic (4 points per mesh in each direction). The discretization points are the Gauss-Legendre-Lobatto points [9]. In all cases and all tests the fixed point iteration converges, but it can be very slow, especially for finite differences. The maximum number of iterations is set to 1e5 and no acceleration was used. The criterion for stopping between iterations i and i + 1 was |u_{i+1} − u_i| ≤ 1e−7. The rate of convergence in the different cases is not stable, so it is not reported. In the tables, Err indicates the sup norm of the error for a given discretization given by NbM (number of meshes in each direction), ItN gives the number of fixed point iterations used, and Time the time needed to compute.

A first regular test case
The solution of this test case is given by u(x, y) = sin(πx) sin(πy). The coefficients are given by and the function f^α is given by f^α(x, y) = (C + π^2 σ^2 1_{u(x,y)>0}) u(x, y) − bπ(cos(πx) sin(πy) + sin(πx) cos(πy)). The numerical coefficients are and the boundary condition is a Dirichlet one with 0 value. Results are given in Table 1, and this first regular case clearly indicates that the quadratic approximation is by far the most efficient interpolation. In fact it converges rapidly to the h discretized operator, so that the interpolation error becomes negligible for a number of meshes equal to 80.

−bπcos(πx)sin(πy)
The values taken are a = −1, ā = 1, σ = 1, C = 0.7, b = 0.5, Q = [−1, 1]^2. Results clearly indicate that the finite difference method proposed is not competitive with the Semi-Lagrangian scheme. The convergence of the fixed point iteration for a step h = 0.01 is not achieved with finite differences within 1e5 iterations (the error between the last two iterations is around 2e−7). Once again the Semi-Lagrangian scheme with quadratic interpolation is the most effective method.