On the implementation of a primal-dual algorithm for second order time-dependent Mean Field Games with local couplings

Abstract. We study a numerical approximation of a time-dependent Mean Field Game (MFG) system with local couplings. The discretization we consider stems from a variational approach described in [14] for the stationary problem and leads to the finite difference scheme introduced by Achdou and Capuzzo-Dolcetta in [3]. In order to solve the finite-dimensional variational problems, the authors of [14] implement the primal-dual algorithm introduced by Chambolle and Pock in [20], whose core consists in iteratively solving linear systems and applying a proximity operator. We apply that method to time-dependent MFGs and, for large viscosity parameters, we improve the linear system solution by replacing the direct approach used in [14] with suitable preconditioned iterative algorithms.


Introduction
In this work we consider the following MFG system with local couplings:

\[
\begin{cases}
-\partial_t u - \nu \Delta u + H(x, Du) = f(x, m(x,t)) & \text{in } \mathbb{T}^d \times (0,T),\\
\partial_t m - \nu \Delta m - \mathrm{div}\big(m\, \partial_p H(x, Du)\big) = 0 & \text{in } \mathbb{T}^d \times (0,T),\\
m(\cdot, 0) = m_0(\cdot), \qquad u(\cdot, T) = g(\cdot, m(\cdot, T)) & \text{in } \mathbb{T}^d.
\end{cases}
\tag{MFG}
\]

In the notation above, ν ≥ 0, d ∈ ℕ, 𝕋^d is the d-dimensional torus, H : 𝕋^d × ℝ^d → ℝ is jointly continuous and convex with respect to its second variable, f, g : 𝕋^d × ℝ → ℝ are continuous functions, and m_0 ∈ L^1(𝕋^d) satisfies m_0 ≥ 0 and ∫_{𝕋^d} m_0(x) dx = 1. System (MFG) was introduced by J.-M. Lasry and P.-L. Lions in [27,28] in order to describe the asymptotic behaviour of symmetric stochastic differential games as the number of players tends to infinity. Several analytical techniques can be used to prove the existence of solutions to (MFG) under various assumptions on the data. Despite the recent introduction of the MFG system, the literature dedicated to its theoretical study is already too rich to be covered exhaustively in this introduction. The interested reader may refer to the monographs [10,24], the surveys [16,25] and the references therein for the state of the art of the subject.
A useful approach to establish the existence of solutions to (MFG) is the variational one, already presented in [28]. The main idea behind it is that, at least formally, system (MFG) can be seen as the first order optimality condition associated to minimizers of the following optimization problem:

\[
\inf_{(m,w)} \; \int_0^T \!\! \int_{\mathbb{T}^d} \Big[ b\big(x, m(x,t), w(x,t)\big) + F\big(x, m(x,t)\big) \Big] \, dx \, dt \; + \; \int_{\mathbb{T}^d} G\big(x, m(x,T)\big) \, dx,
\tag{P}
\]

where the infimum is taken over pairs (m, w) satisfying, in the weak sense,

\[
\partial_t m - \nu \Delta m + \mathrm{div}(w) = 0 \quad \text{in } \mathbb{T}^d \times (0,T), \qquad m(\cdot, 0) = m_0,
\]

and where

\[
b(x, m, w) := \begin{cases} m \, H^*\!\big(x, -\tfrac{w}{m}\big) & \text{if } m > 0, \\ 0 & \text{if } (m, w) = (0, 0), \\ +\infty & \text{otherwise}, \end{cases}
\qquad
F(x, m) := \int_0^m f(x, s) \, ds, \qquad G(x, m) := \int_0^m g(x, s) \, ds.
\tag{1.1}
\]

In the definition of b, H^*(x, ·) denotes the Legendre-Fenchel conjugate of H(x, ·). Under the assumption that f(x, ·) and g(x, ·) are non-decreasing, problem (P) is shown to be a convex optimization problem and convex duality techniques can be successfully applied in order to provide existence and uniqueness results for (MFG). This argument has been made rigorous in several articles: let us mention [17,18] in the context of first order MFGs (ν = 0), the paper [19] for degenerate second order MFGs, and finally [29,30] for ergodic second order MFGs. The variational approach described above has also been successful in the numerical resolution of system (MFG). In this direction, we mention the article [26] dealing with applications in economics, the paper [1] concerned with the so-called planning problem in MFGs, the works [7,9] focused on the resolution of a discretization of (P) by the Alternating Direction Method of Multipliers (ADMM), and [14], where several first order methods are implemented and compared for the stationary version of (MFG). Let us mention that the variational approach is closely related to the so-called mean field optimal control problem, for which numerical methods have been studied in [6,15], among others.
In this paper we consider a finite difference discretization of problem (P). Assuming that f(x, ·) and g(x, ·) are non-decreasing, the discretization we consider is chosen so that it preserves the convexity properties of problem (P), and the first order optimality conditions for its solutions, which are shown to exist, coincide with the finite difference scheme for MFGs introduced in [3]. A very nice feature of this approach is that the solutions of the resulting discretized MFGs are shown to converge to the solutions of (MFG). We refer the reader to [2], where the convergence result is obtained under the assumption that (MFG) admits a unique classical solution, and to [5] in the framework of weak solutions (see [32] for the definition of this notion). We solve the discrete convex optimization problem by using the primal-dual algorithm introduced in [20]. As was pointed out in [14] (see also [31] in the context of transport problems), the primal-dual algorithm we consider seems to be faster than the ADMM when ν in (MFG) is small (or null). On the other hand, the efficiency of both methods is arguable when ν is large. This is due to the fact that, in both algorithms, at each iteration one has to invert a matrix whose condition number increases significantly with the viscosity parameter. Naturally, preconditioning strategies (see e.g. [11]) can then be used in order to improve the efficiency of both algorithms. This strategy has already been successfully implemented in [7] for the ADMM.
Our main objective in the present work is to take a closer look at the phenomenon described at the end of the previous paragraph when considering the primal-dual algorithm. Therefore, we focus our analysis on the case where ν > 0. We have implemented standard indirect methods for solving the linear systems appearing in the computation of the iterates of the primal-dual algorithm. As our numerical results suggest, it is very important to design suitable preconditioning strategies in order to find solutions of the discretization of problem (P) efficiently and robustly with respect to the viscosity parameter. For this, we explore different preconditioning strategies and, in particular, multigrid preconditioning (see also [4,7], where multigrid strategies have been implemented for other solution methods).
The article is organized as follows. In section 2 we introduce some standard notation and we recall the finite difference scheme for (MFG) introduced in [3]. The variational interpretation of this finite difference scheme is discussed in section 3. Next, in section 4, we recall the primal-dual algorithm introduced in [20] and we consider its application to the discretization of (P). In section 5, we summarize the preconditioning strategies that we consider and we discuss a numerical example, which is the time-dependent version of one of the examples treated in [3,14].

2. Preliminaries and the finite difference scheme in [3]

In this section we introduce some basic notation and present the finite difference scheme introduced in [3], whose efficient resolution will be the main subject of this article. For the sake of simplicity, we assume that d = 2 and that, given q > 1 with conjugate exponent q' := q/(q−1), the Hamiltonian H : 𝕋² × ℝ² → ℝ has a power-type dependence on its second variable; in this case the function b defined in (1.1) admits an explicit expression. Let N_T, N_h be positive integers and set Δt := T/N_T, the time step, and h := 1/N_h, the space step. We associate to these steps a time grid 𝒯_Δt := {t_k = kΔt ; k = 0, …, N_T} and a space grid 𝕋²_h := {x_{i,j} = (ih, jh) ; i, j ∈ ℤ}. Since 𝕋²_h is intended to discretize 𝕋², we impose the identification z_{i,j} = z_{(i mod N_h),(j mod N_h)}, which allows us to assume that i, j ∈ {0, …, N_h − 1}. A function y : 𝕋² × [0, T] → ℝ is approximated by its values at (x_{i,j}, t_k) ∈ 𝕋²_h × 𝒯_Δt, which we denote by y^k_{i,j} := y(x_{i,j}, t_k). Given y : 𝕋²_h → ℝ, we define the first order finite difference operators

\[
(D_1 y)_{i,j} := \frac{y_{i+1,j} - y_{i,j}}{h}, \qquad (D_2 y)_{i,j} := \frac{y_{i,j+1} - y_{i,j}}{h},
\]

where, for every a ∈ ℝ, we set a⁺ := max(a, 0) and a⁻ := a⁺ − a. The discrete Laplacian operator is given by

\[
(\Delta_h y)_{i,j} := -\frac{1}{h^2}\big( 4 y_{i,j} - y_{i+1,j} - y_{i-1,j} - y_{i,j+1} - y_{i,j-1} \big).
\]

The Godunov-type finite difference discretization of (MFG) introduced in [3], denoted by (MFG_{h,Δt}), couples a discrete Hamilton-Jacobi-Bellman equation for u, backward in time, with a discrete Fokker-Planck equation for m, forward in time, the latter involving an operator T(u, m) and the convention (2.3). The existence of a solution (u_{h,Δt}, m_{h,Δt}) of system (MFG_{h,Δt}) is proved in [3, Theorem 6] as a consequence of Brouwer's fixed point theorem. Furthermore, if we assume that f and g are increasing with respect to their second argument, and one of them is strictly increasing, this solution is unique when h is small enough (see [3, Theorem 7]). As we will see in the next section, these results can also be obtained by variational arguments. The convergence, as h and Δt tend to 0, of suitable extensions of u_{h,Δt} and m_{h,Δt} to 𝕋² × [0, T] to a solution (u, m) of (MFG) is proved in [2] under the assumption that (u, m) is unique and sufficiently regular. The latter smoothness assumption has been relaxed in [5].
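The periodic finite difference operators above are straightforward to implement. The following Python sketch (our illustration, not the paper's code; `np.roll` realizes the periodic identification on the torus, and all function names are ours) implements them and checks the second-order accuracy of the discrete Laplacian on a smooth function:

```python
import numpy as np

def D1(y, h):
    # forward difference in the first variable on the torus: (y[i+1,j] - y[i,j]) / h
    return (np.roll(y, -1, axis=0) - y) / h

def D2(y, h):
    # forward difference in the second variable on the torus
    return (np.roll(y, -1, axis=1) - y) / h

def lap_h(y, h):
    # standard 5-point discrete Laplacian with periodic boundary conditions
    return (np.roll(y, -1, axis=0) + np.roll(y, 1, axis=0)
            + np.roll(y, -1, axis=1) + np.roll(y, 1, axis=1) - 4.0 * y) / h**2

Nh = 64
h = 1.0 / Nh
x = np.arange(Nh) * h
X, Y = np.meshgrid(x, x, indexing="ij")
u = np.sin(2 * np.pi * X)

# Laplacian of sin(2*pi*x) is -4*pi^2*sin(2*pi*x); the scheme is O(h^2) accurate
err = np.max(np.abs(lap_h(u, h) + 4 * np.pi**2 * u))
print(err)
```

For N_h = 64 the maximal error is of order (2π)⁴h²/12 ≈ 3·10⁻², consistent with second-order consistency.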

3. The finite dimensional variational problem and the discrete MFG system
Following [14] in the stationary case and [1] for the planning problem, we introduce some finite-dimensional operators that will allow us to easily write a finite dimensional version of problem (P). Denoting by ℝ₊ the set of non-negative real numbers and by ℝ₋ the set of non-positive real numbers, we define K := ℝ₊ × ℝ₋ × ℝ₊ × ℝ₋ and, for v = (v⁽¹⁾, v⁽²⁾, v⁽³⁾, v⁽⁴⁾) ∈ ℝ⁴, we introduce the corresponding discrete operators. One can easily check (see e.g. [3]) the expressions of the corresponding dual operators, valid for all u ∈ U; we record them, together with an identity for later use, in (3.2)-(3.3). Note that if (m, w) ∈ M × W is such that G(m, w) = (0, m̄), where we recall that m̄ is defined in (2.2), then the cost in (3.5) can be rewritten accordingly; we recall that F and G in (3.5) are defined in (1.1).
We have the following result.

Theorem 3.1. For any ν > 0, problem (P_{h,Δt}) admits at least one solution (m_{h,Δt}, w_{h,Δt}), and associated to it there exists u_{h,Δt} : 𝕋²_h × 𝒯_Δt → ℝ such that the triple satisfies (MFG_{h,Δt}).

In order to prove this result, let us first show a lemma that implies the feasibility of the constraints in (P_{h,Δt}).
Proof. Let us define m̂⁰_{i,j} := m̄_{i,j} and m̂ᵏ_{i,j} := 1 for all k = 1, …, N_T and all i, j. Since h² Σ_{i,j} m̂ᵏ_{i,j} = 1 for all k = 0, …, N_T, by (3.3) and the definition of A we easily get that Am̂ ∈ Im(B). Therefore, there exists ŵ ∈ W satisfying G(m̂, ŵ) = (0, m̄). Then, given δ > 0, we define, for all k = 0, …, N_T − 1 and all i, j, a perturbation w̃ᵏ_{i,j} which satisfies w̃ᵏ_{i,j} ∈ int(K) and (Bw̃)ᵏ = (Bŵ)ᵏ. The result follows.

Now, we prove the existence of solutions to (P_{h,Δt}).

Lemma 3.2. Problem (P_{h,Δt}) admits at least one solution (m_{h,Δt}, w_{h,Δt}), and every such solution satisfies mᵏ_{i,j} > 0 for all i, j and k = 1, …, N_T.

As a consequence, by the definition of b, (mₙ)ᵏ_{i,j} ≥ 0 for all i, j and k, and (wₙ)ᵏ ∈ K for all k. Since Amₙ + Bwₙ = 0, using relation (3.6) and the fact that F(mₙ) is uniformly bounded (because F and G are continuous and mₙ is bounded), relation (3.8) yields the existence of C₃ > 0 (independent of n) such that sup_{i,j,k} |(wₙ)ᵏ_{i,j}| ≤ C₃. Thus, there exists (m_{h,Δt}, w_{h,Δt}) ∈ M × W such that, up to a subsequence, mₙ → m_{h,Δt} and wₙ → w_{h,Δt} as n → ∞. Since G(mₙ, wₙ) = (0, m̄), we obtain that G(m_{h,Δt}, w_{h,Δt}) = (0, m̄). The lower semicontinuity of B + F then implies that (m_{h,Δt}, w_{h,Δt}) solves (P_{h,Δt}). Finally, if (m, w) ∈ M × W solves (P_{h,Δt}) and mᵏ_{i,j} = 0 for some i, j and k = 1, …, N_T, then, by the definition of B, we must have wᵏ⁻¹_{i,j} = 0. Thus, the constraint (Am + Bw)ᵏ⁻¹_{i,j} = 0 can be rewritten as an equality between a non-positive quantity and a non-negative one (by the definition of K), and we deduce that all the terms involved are zero. By repeating the argument at the indices neighboring (i, j), we deduce that mᵏ ≡ 0 and so h² Σ_{i,j} mᵏ_{i,j} = 0, which, by (3.6), contradicts G(m, w) = (0, m̄). The result follows.

Remark 3.1. Notice that the proof of the existence of a solution to (P_{h,Δt}) also works when ν = 0.
Proof of Theorem 3.1. By Lemma 3.2 we know that there exists a solution (m_{h,Δt}, w_{h,Δt}) to (P_{h,Δt}) with mᵏ_{i,j} > 0 for all i, j and k. Thus, in order to conclude, it suffices to show the existence of u_{h,Δt} such that (MFG_{h,Δt}) holds true. For notational convenience we omit the superindices h and Δt. Define the Lagrangian associated with (P_{h,Δt}). Note that the linear mapping M ∋ m ↦ (Am, m) ∈ U × ℝ^{N_h×N_h} is invertible, as shown by its matrix representation (see (4.7) in the next section). As a consequence, G is surjective and, hence, by standard arguments, the first order optimality conditions (3.10) hold, where we have used definition (3.4) and the fact that mᵏ_{i,j} > 0 for all k = 1, …, N_T and all i, j.

By the last relation in (3.2), the third relation in (3.10) can be rewritten and, hence, by the second relation in (3.2) and the first relation in (3.10), we obtain the first equation in (MFG_{h,Δt}). The last relation in (3.10) yields, for all k = 1, …, N_T and all i, j, a relation which, by (3.2) and under the convention (2.3), is equivalent to the second equation in (MFG_{h,Δt}). Shifting the index k, the expression above yields the result.

4. A primal-dual algorithm to solve (P_{h,Δt})

As discussed in [14], for solving the optimization problem (4.1) and its dual (4.2), where ϕ : ℝᴺ → ℝ ∪ {+∞} and ψ : ℝᴺ → ℝ ∪ {+∞} are convex, l.s.c., proper functions, the methods in [13,20,21,22,23] can be applied with guaranteed convergence under mild assumptions. In [14], devoted to the stationary case, the method proposed in [20] has the best performance when the viscosity parameter is small or zero. This method is inspired by the first-order optimality conditions satisfied by a solution (ŷ, σ̂) to (4.1)-(4.2) under standard qualification conditions (see [34, Theorem 8]), where γ > 0 and τ > 0 are arbitrary and, given a l.s.c. convex proper function φ, prox_φ denotes its proximity operator. Given θ ∈ [0, 1], τ and γ satisfying τγ < 1, and starting points (y⁰, ỹ⁰, σ⁰) ∈ ℝᴺ × ℝᴺ × ℝᴹ, the iterates {(yᵏ, σᵏ)}_{k∈ℕ} generated by

σᵏ⁺¹ := prox_{γψ*}(σᵏ + γỹᵏ),
yᵏ⁺¹ := prox_{τϕ}(yᵏ − τσᵏ⁺¹),
ỹᵏ⁺¹ := yᵏ⁺¹ + θ(yᵏ⁺¹ − yᵏ),

converge to a primal-dual solution (ŷ, σ̂) to (4.1)-(4.2) (see, e.g., [20]). In the case under study, the equations of the time-dependent discretization are very similar to their stationary counterparts (see [14]). Specifically, the discrete linear operators A and B defined in (3.1) are represented, by an abuse of notation, by real matrices A and B.

(i) The matrix Ã is block lower triangular with invertible diagonal blocks and, hence, it is invertible. Indeed, the first diagonal block Id_{N²_h} is obviously invertible and the other blocks, given by νL + (1/Δt) Id_{N²_h}, are also invertible because they are strictly diagonally dominant.

(ii) Since Ã is invertible, the matrix

Q := CC* = ÃÃ* + B̃B̃*   (4.8)

is positive definite and, hence, invertible.
Therefore, (P_{h,Δt}) is a particular instance of (4.1), where Q is defined in (4.8). By specializing the iterates accordingly, which leads to scheme (4.9), and if γτ < 1, the convergence of (m^{[l]}, w^{[l]}) to a solution (m̂, ŵ) of (P_{h,Δt}) is guaranteed, together with the convergence of (n^{[l]}, v^{[l]}) to some (n̂, v̂) as l → ∞. In order to compute the Lagrange multiplier û ∈ U, which solves the first equation in (MFG_{h,Δt}), note that (3.9) can be written equivalently in terms of the primal solution (m̂, ŵ) and ẑ = (λ̂, û). Therefore, in order to approximate ẑ, we can use the quantities already computed in (4.9).

Remark 4.2. (i) In order to obtain an efficient algorithm, the computation of prox_{τϕ} in (4.9) should be fast. A complete study of prox_{τϕ} is presented in [14, Section 3.2], showing that its computation reduces to solving a scalar equation, which can be done efficiently.
(ii) An important step in (4.9) is the efficient computation of the inverse of Q. Different preconditioning strategies to tackle this issue will be presented in the following section.
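To fix ideas, the generic iteration recalled above can be tried on a toy instance. The following Python snippet (our illustration, not the paper's code) applies the Chambolle-Pock iteration with θ = 1 to min_y ½‖y − c‖² + ψ(Ly), where ψ is the indicator of the singleton {b}, so that both proximity operators are explicit and the exact solution (the projection of c onto the affine set {Ly = b}) is available in closed form; the step sizes satisfy τγ‖L‖² < 1 as in [20]:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 10
L = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

# closed-form solution: Euclidean projection of c onto {y : L y = b}
y_star = c - L.T @ np.linalg.solve(L @ L.T, L @ c - b)

# Chambolle-Pock for min_y 0.5*||y - c||^2 + psi(L y), psi = indicator of {b}:
#   prox_{tau*phi}(z)    = (z + tau*c) / (1 + tau)
#   prox_{gamma*psi*}(s) = s - gamma*b
Lnorm = np.linalg.norm(L, 2)
tau = gamma = 0.95 / Lnorm          # so that tau*gamma*||L||^2 < 1
theta = 1.0
y = np.zeros(n); y_bar = y.copy(); sigma = np.zeros(m)
for _ in range(20000):
    sigma = sigma + gamma * (L @ y_bar - b)                    # dual prox step
    y_new = (y - tau * (L.T @ sigma) + tau * c) / (1.0 + tau)  # primal prox step
    y_bar = y_new + theta * (y_new - y)                        # extrapolation
    y = y_new
print(np.linalg.norm(y - y_star))
```

After the iterations, the primal iterate agrees with the closed-form projection to high accuracy, and the constraint Ly = b is satisfied up to a small residual.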

5. Preconditioning strategies
At the beginning of each iteration of the primal-dual algorithm (4.9), we require the solution of a linear system

Qz = b.   (5.1)

The purpose of this section is to discuss preconditioning strategies for the solution of this linear system. For the stationary setting discussed in [14], the solution of such a system via direct methods, such as the backslash (mldivide) command in MATLAB, was feasible for relatively fine meshes (up to the order of 100 nodes per space dimension). However, as shown in Table 1, introducing a temporal dimension, and thus increasing the number of degrees of freedom to N²_h × N_T, significantly increases the computation time. Indeed, the use of backslash on fine space and time grids (e.g. 128² space grid points and 40 time steps) requires an amount of RAM that is prohibitive on the machine used for our performance tests, leading to "out of memory" errors. We mitigate this problem by exploring the solution of (5.1) via preconditioned iterative methods, which perform efficiently for finer space and time subdivisions and different viscosities.

We begin by illustrating the difficulties associated with the conditioning of the system in (5.1). Table 2 shows the condition number of the system for different space-time discretizations and viscosity values. Without any preconditioner, the condition numbers of the different discretizations scale up to 10⁸. The same table shows that, by selecting a suitable preconditioner, such as the modified incomplete Cholesky factorization [11] (michol in MATLAB), the conditioning of the system is improved by four orders of magnitude. We have tested different choices of preconditioners and iterative methods for our problem. Since the matrix Q in our setting is sparse, symmetric, and positive definite, we have implemented an incomplete Cholesky factorization with diagonal scaling, a modified incomplete Cholesky factorization, and multigrid preconditioning. As for the choice of the iterative method, our tests included both the preconditioned conjugate gradient method (pcg) and the biconjugate gradient stabilized method (BiCGStab).
The interested reader will find in [36, Chapters 6 and 8] a thorough description of the aforementioned methods, and in the Appendix of this article performance tables for the different methods.
Our findings suggest that the pcg method, preconditioned by the modified incomplete Cholesky factorization, is satisfactory for small viscosities (ν ≤ 0.05). However, this algorithm fails to converge for high-viscosity systems on refined grids (ν = 0.5, N_T = 40, N_h ∈ {64, 128}). Replacing pcg with a BiCGStab algorithm preconditioned by the modified incomplete Cholesky factorization slows down the process on finer grids, but allows for convergence in the failure cases of pcg (ν = 0.5, N_T = 40, N_h ∈ {64, 128}).
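The effect of preconditioning on conjugate gradient iterations can be reproduced on a small synthetic SPD system. In the Python sketch below (our illustration; a hand-rolled PCG with a simple Jacobi, i.e. diagonal, preconditioner stands in for the incomplete Cholesky factorizations used in our tests), the iteration count drops once the preconditioner is applied:

```python
import numpy as np

def pcg(A, b, M_inv, tol=1e-10, maxiter=500):
    # preconditioned conjugate gradient for SPD A;
    # M_inv applies the inverse of the preconditioner to a vector
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv(r)
    p = z.copy()
    rz = r @ z
    for k in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxiter

# SPD test matrix with a strongly varying diagonal (so that diagonal
# scaling actually helps): symmetric and strictly diagonally dominant
n = 200
A = np.diag(np.linspace(3.0, 100.0, n)) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

d = np.diag(A).copy()
x_jac, it_jac = pcg(A, b, lambda r: r / d)   # Jacobi (diagonal) preconditioner
x_id, it_id = pcg(A, b, lambda r: r)         # no preconditioning
print(it_jac, it_id)
```

Diagonal scaling brings the condition number of the scaled system close to 1 here, so the preconditioned run needs markedly fewer iterations than the unpreconditioned one.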

Multigrid preconditioner
We implement a multigrid preconditioned algorithm for solving (5.1). We refer the reader to [35] for an introduction and an overview of multigrid methods. We briefly review the main concepts behind the method. Consider two linear systems A₁x̄₁ = b₁ and A₀x̄₀ = b₀, stemming from two discretizations of a linear PDE over the grids G₁ and G₀, respectively. Assume also that G₁ is a refinement of G₀. Loosely speaking, the main idea of the method is that, in order to find a good approximation of the solution x̄₁ on the finer grid, we first perform what is known as a smoothing step. This step consists in computing a few iterates x₁¹, …, x₁^{η₁} with a standard indirect method, such as Jacobi or Gauss-Seidel, and in defining the residual r₁ := b₁ − A₁x₁^{η₁}, which is shown to be smoother (less oscillatory) than the initial residual. Then, we consider on the coarser grid G₀ the second system A₀x̄₀ = b₀ with b₀ = r̄₁, where r̄₁ is the restriction of r₁ to G₀. Assuming that we can compute a good approximation of x̄₀, which we still denote by x̄₀, we then extend this solution to G₁ by linear interpolation. Calling e₁ the resulting vector, we update x₁^{η₁} by redefining it as x₁^{η₁} + e₁, and we end the procedure by applying again a few iterations, say η₂, of a smoothing method initialized at x₁^{η₁}. This last step is called post-smoothing.

The previous paragraph introduced what is known as a two-grid iteration. If we consider more grids G₀, G₁, …, G_ℓ, where G_k ⊆ G_{k+1} for each k = 0, …, ℓ−1, we can proceed similarly and obtain a better approximation of the solution to A_ℓ x_ℓ = b_ℓ. As in the previous case, we begin with the finest grid G_ℓ and perform η₁ smoothing steps to obtain the residual r_ℓ := b_ℓ − A_ℓ x_ℓ^{η₁}, whose restriction to G_{ℓ−1} is denoted by r̄_ℓ. On this grid we consider the system A_{ℓ−1} x_{ℓ−1} = r̄_ℓ and we perform again a smoothing step and a restriction of the residual to G_{ℓ−2}. The procedure continues until we reach the coarsest grid G₀, where the solution e₀ of the corresponding linear system can be found easily (typically using a direct method). Next, the solution e₁ on the grid G₁ is corrected with the interpolation of e₀. Another post-smoothing is performed on the corrected solution on G₁, and its interpolation on the grid G₂ is used to correct the previous solution on that grid. The smoothing, interpolation and correction iterations end once we arrive at the finest grid G_ℓ, where we obtain the final approximation of x̄_ℓ. The previous procedure is called a multigrid method with a V-cycle. An alternative, which yields a more accurate solution, is to proceed as before from G_ℓ to G_{ℓ−1} and then, for k = ℓ−1, …, 1, to perform two consecutive coarse-grid corrections instead of one as in the V-cycle. The resulting procedure is known as multigrid with a W-cycle. Finally, in between the V-cycle and the W-cycle, we have the F-cycle, where, in the process of going from the coarsest grid to the finest one, each time a grid is reached for the first time another correction with the coarser grids using a V-cycle is performed.
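A minimal, self-contained illustration of the V-cycle just described (pre-smoothing, full-weighting restriction, coarse-grid correction, linear interpolation, post-smoothing) on the 1D Dirichlet Laplacian might look as follows in Python; the grid sizes, smoother and cycle counts are illustrative choices, not those of our solver:

```python
import numpy as np

def laplacian_1d(n):
    # discretization of -u'' on n interior points of (0,1), Dirichlet boundary
    h = 1.0 / (n + 1)
    return (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def gauss_seidel(A, x, b, sweeps):
    # lexicographic Gauss-Seidel sweeps (the smoother)
    n = len(b)
    for _ in range(sweeps):
        for i in range(n):
            x[i] = (b[i] - A[i, :i] @ x[:i] - A[i, i + 1:] @ x[i + 1:]) / A[i, i]
    return x

def v_cycle(n, b, x, eta1=2, eta2=2):
    A = laplacian_1d(n)
    if n <= 3:
        return np.linalg.solve(A, b)            # coarsest grid: direct solve
    x = gauss_seidel(A, x, b, eta1)             # pre-smoothing
    r = b - A @ x
    nc = (n - 1) // 2
    # full-weighting restriction of the residual to the coarse grid
    rc = 0.25 * (r[0:-2:2] + 2 * r[1:-1:2] + r[2::2])
    ec = v_cycle(nc, rc, np.zeros(nc), eta1, eta2)   # coarse-grid correction
    # linear interpolation of the correction back to the fine grid
    e = np.zeros(n)
    e[1::2] = ec
    ext = np.concatenate(([0.0], ec, [0.0]))
    e[0::2] = 0.5 * (ext[:-1] + ext[1:])
    x = x + e
    x = gauss_seidel(A, x, b, eta2)             # post-smoothing
    return x

n = 63
A = laplacian_1d(n)
xs = np.linspace(0.0, 1.0, n + 2)[1:-1]
b = np.pi**2 * np.sin(np.pi * xs)               # -u'' = pi^2 sin(pi x)
x = np.zeros(n)
for _ in range(8):
    x = v_cycle(n, b, x)
res = np.linalg.norm(b - A @ x) / np.linalg.norm(x * 0 + b)
print(res)
```

A handful of V(2,2) cycles reduces the relative residual by many orders of magnitude, which is the behavior that makes one cycle attractive as a preconditioner.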
In our context, we use one cycle of the multigrid algorithm, which is a linear operator as a function of the residual on the finest grid, as a preconditioner for solving (5.1) with the BiCGStab method. Since Q is related to the finite difference discretization of the operator −∂²_tt + ν²Δ² − Δ and ν is not necessarily small, as in [4] it is natural to consider refinements of the grid only in the space variable (we refer the reader to [35] for semi-coarsening multigrid methods in the context of anisotropic operators). We suppose that the spatial mesh is such that N_h = H·2^ℓ, with H > 1 and ℓ a positive integer (in the numerical example in the next section, H will be equal to 2 or 3, H² being the number of spatial points in the coarsest grid).
Let us specify the main steps of the multigrid method we use as a preconditioner.

Cycle: We use the F-cycle.
Restriction operator: As in [4], in order to restrict the residual on the grid G_k to the grid G_{k−1}, we use a second-order (full-weighting) operator R_k, applied for n = 0, …, N_T and i, j = 1, …, 2^{k−1}H.
Interpolation operator: We denote by I_k : ℝ^{(2^{k−1}H)²(N_T+1)} → ℝ^{(2^k H)²(N_T+1)} the interpolation operator from the grid G_{k−1} to the grid G_k. We have chosen a standard bilinear interpolation operator in the space variable, which is also a second-order operator and is dual to the restriction operator (I_k = 4R*_k). According to [12], the sum of the orders of R_k and I_k has to be at least equal to the degree of the differential operator; in our context, both are equal to 4.
Linear systems on the different grids: The linear systems are defined by the matrices Q_k := Ã_kÃ*_k + B̃_kB̃*_k (cf. (4.8)), where Ã_k and B̃_k are the finite difference discretizations of ∂_t − νΔ and div(·), respectively, on the grid G_k (see (3.1)).
Smoother: Here we have used Gauss-Seidel iterations in the lexicographic order. There is no reason for choosing the lexicographic order, other than its simplicity.
Solving the system on the coarsest grid G₀: We can use an exact solver such as backslash in MATLAB. Indeed, on G₀ the size of the system is very small compared with the size of the system on the grid G_ℓ (on G₀ we can even store the inverse of Q₀, so that the inversion at this level reduces to a matrix multiplication).
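The duality I_k = 4R*_k between the bilinear interpolation and the full-weighting restriction can be verified on small matrices. The Python sketch below (our illustration; 1D stencils on a Dirichlet grid, where the duality constant is 2, with the 2D operators obtained as Kronecker products, where it becomes 4):

```python
import numpy as np

def restriction_1d(n):
    # full weighting: rc[j] = (r[2j] + 2*r[2j+1] + r[2j+2]) / 4
    nc = (n - 1) // 2
    R = np.zeros((nc, n))
    for j in range(nc):
        R[j, 2 * j:2 * j + 3] = [0.25, 0.5, 0.25]
    return R

def prolongation_1d(n):
    # piecewise-linear interpolation from the coarse to the fine grid
    nc = (n - 1) // 2
    P = np.zeros((n, nc))
    for j in range(nc):
        P[2 * j:2 * j + 3, j] = [0.5, 1.0, 0.5]
    return P

n = 15
R, P = restriction_1d(n), prolongation_1d(n)
print(np.allclose(P, 2 * R.T))      # in 1D the duality constant is 2

# 2D operators via Kronecker products: the constant becomes 4
R2, P2 = np.kron(R, R), np.kron(P, P)
print(np.allclose(P2, 4 * R2.T))
```

Both checks return True: kron(P, P) = kron(2Rᵀ, 2Rᵀ) = 4 kron(R, R)ᵀ, which is exactly the relation I_k = 4R*_k used above.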
The multigrid preconditioning procedure is summarized in Algorithm 2. Its main steps on a grid G_k are: perform η₁ steps of Gauss-Seidel from x_k with b_k as right-hand side (pre-smoothing); restrict the residual and recurse on the coarser grids; correct x_k with the interpolated coarse-grid solution; perform η₂ steps of Gauss-Seidel from x_k with b_k as right-hand side (post-smoothing); return x_k.
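The use of one cycle of a linear iterative method as a preconditioner for BiCGStab can be sketched with SciPy's `LinearOperator` interface. In the snippet below (our illustration: a few weighted-Jacobi sweeps stand in for the multigrid F-cycle, and a small tridiagonal SPD matrix stands in for Q):

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, bicgstab

# symmetric positive definite model matrix, a small stand-in for Q
n = 256
A = diags([-1.0, 2.1, -1.0], [-1, 0, 1], shape=(n, n)).tocsr()
b = np.ones(n)
d = A.diagonal()

def precond(r):
    # a few weighted-Jacobi sweeps approximating A^{-1} r;
    # in our solver this role is played by one multigrid F-cycle
    r = np.asarray(r).ravel()
    z = np.zeros_like(r)
    for _ in range(5):
        z = z + (2.0 / 3.0) * (r - A @ z) / d
    return z

M = LinearOperator((n, n), matvec=precond)
x, info = bicgstab(A, b, M=M)   # info == 0 signals convergence
print(info, np.linalg.norm(A @ x - b))
```

Any fixed linear cycle (Jacobi, Gauss-Seidel, or a multigrid V/F-cycle) defines a valid `LinearOperator` preconditioner in this way, since the Krylov solver only needs its action on a residual vector.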

Numerical Tests
In this section we present a test case considered in [3], for which the stationary solution has been computed numerically in [14] using the primal-dual algorithm presented above.
The setting is as follows: we consider system (MFG) with g ≡ 0 and f(x, y, m) := m² − H̄(x, y), with H̄(x, y) = sin(2πy) + sin(2πx) + cos(2πx), for all (x, y) ∈ ℝ² and m ∈ ℝ₊. This means that in the underlying differential game modelled by (MFG), a typical agent aims to get closer to the maxima of H̄ and, at the same time, is averse to crowded regions (because of the presence of the m² term in f). We first validate the dynamic behavior of our solution. Figure 1 shows the evolution of the mass at four different time steps. Starting from a constant initial density, the mass converges to a steady state and then, when t gets close to the final time T, it is influenced by the final cost and converges to a final state. This behavior is referred to as the turnpike phenomenon in the literature [33]. It is illustrated by Figure 2, which displays, as a function of time t, the distance of the mass at time t to the stationary state computed as in [14]; in other words, denoting by m_∞ ∈ ℝ^{N_h×N_h} the solution to the discrete stationary problem and by m ∈ M the solution to the discrete evolutive problem, Figure 2 displays the map t ↦ ‖m(t) − m_∞‖.

For the multigrid preconditioner, Table 3 shows the computation times for different discretizations. It can be observed that finer meshes with 128³ degrees of freedom are solvable within CPU times which outperform the other methods shown in the Appendix and in [14]. Furthermore, the method is robust with respect to different viscosities.
From Table 3 we observe that most of the computational time is spent solving the second proximal operator (the third equality in (4.9)), which does not use the multigrid strategy but is a pointwise operator (see Proposition 3.1 of [14]) and thus could be fully parallelized.
Unlike the stationary case, low viscosities seem to make the algorithm slightly slower. However, Table 4 shows that the average number of iterations of BiCGStab stays low regardless of the viscosity. Indeed, Table 3 shows that more Chambolle-Pock iterations are needed to converge. The same behavior occurs when we use a direct exact solver instead of the multigrid preconditioned BiCGStab algorithm.
Concluding Remarks. In this work we have developed a first-order primal-dual algorithm for the solution of second-order, time-dependent mean field games. The procedure consists of a variational formulation of the MFG, its discretization via finite differences, and the application of the Chambolle-Pock algorithm to the resulting minimization problem. While this method has been studied for stationary MFGs in [14], its numerical realization for time-dependent MFGs was prohibitive in terms of computing time, as the Chambolle-Pock iteration requires the solution of a large-scale linear system at each iteration. We have overcome this difficulty by studying different preconditioning strategies for the associated linear system. Overall, the multigrid preconditioner with a BiCGStab iteration performs satisfactorily for different discretizations and viscosity values.

Table 3. Time (in seconds) for the convergence of the Chambolle-Pock algorithm, cumulative time of the first proximal operator with the multigrid preconditioner, and number of iterations, for different viscosity values ν and two types of grids. Here we used η₁ = η₂ = 2, T = 1, and a tolerance between two iterations of the Chambolle-Pock algorithm equal to 10⁻⁶ in normalized ℓ²-norm.
(a) Iterations needed to decrease the residual by a factor of 10⁻³.