ADAPTIVE WAVELET SCHEMES AND FINITE VOLUMES

Abstract. We consider a procedure for combining high order finite volumes and tree-based nonuniform grids. In particular, we focus on efficient algorithms for third order multidimensional volume interpolation and for communication between levels of refinement. Finally, numerical results are reviewed to validate our approach.

Résumé. Nous considérons un processus permettant de combiner volumes finis et grilles non uniformes basées sur une structure d'arbre. En particulier, nous nous intéressons à des algorithmes efficaces pour l'interpolation multidimensionnelle en volumes d'ordre trois en précision et pour la communication entre les niveaux de raffinement. Enfin, nous validons notre démarche en analysant les résultats numériques obtenus.


Introduction
A recurring topic in the modeling of physical phenomena is the design of numerical solvers able to accurately describe the solution of given models. This can be challenging, especially when the range of spatial scales varies widely. For instance, the domain of the simulation might be very large in contrast to the high resolution required to describe steep gradients or even discontinuities in the solution. In such conditions, using a uniform mesh inevitably leads to extremely costly computations. It is thus interesting to consider adaptive methods, which rely on locally refining the mesh near those problematic areas. In this way, one can hope that most of the domain will be covered by a relatively coarse mesh, resulting in faster computations, without giving up accuracy when and where it is needed.
In this paper, we are interested in the study and implementation of high order finite volume schemes compliant with Adaptive Mesh Refinement (AMR) for simple convection problems. In the AMR framework, individual computational cells are organized directly in a tree. Each cell can be refined or coarsened separately from the others, as needed. The main advantages of the tree approach are the flexibility in refinement and the efficiency in using cells. However, standard grid-based solvers cannot be used on trees. In addition, access to neighboring cells is more difficult in a tree than in an array.
Our work is organized as follows. In the first section we give an in-depth motivation from the physical and computational standpoints. Then we briefly recall the basics of finite volumes and discuss some strategies to achieve high order convergence. In the third section we review common challenges arising with AMR and propose some efficient ways to tackle them. Finally, we present the numerical results obtained with a program implemented in C.

Singularities in physics simulation
Adaptive Mesh Refinement proves particularly useful in simulations where different scales interact. For instance, when an instability triggers a singularity, the large scale needed to simulate the global physical state is inadequate to capture the small scale of the singularity, and meshing the whole domain with a fine mesh from the start is computationally too expensive.
This does not apply to stable situations like the lid-driven cavity, nor to homogeneous turbulence, where small scales develop everywhere in the domain. AMR can also be used to finely track a boundary or to capture boundary layers.

Numerical advantages and disadvantages of mesh refinement
Here we discuss the pros and cons of the Adaptive Mesh Refinement technique. The use of AMR demands specific numerical solvers and complicates implementation and parallelization. The loss of uniformity induces a loss of regularity of the error function; on uniform grids this regularity is what allows successive discrete operators to be applied while retaining the same order of accuracy. This makes increasing the order of accuracy of the numerical schemes more difficult. For finite differences, it also affects mass conservation, a problem which may be solved using finite volume techniques. Since the data structure is more complex, the computational time per point is multiplied by four or five.
Hence, if a singularity occurs during only a few percent of the simulation time, or on a few percent of the simulation domain, it may be advantageous to resort to AMR techniques. These techniques optimize the discretisation by reducing the number of degrees of freedom of the computation. In some cases, such as the six-dimensional Vlasov equations, given the number of grid points, it is the only way to fit the simulation into the computer memory.

Three examples in physics
Thanks to AMR it was possible to simulate and better understand the vortex singularity appearing in the incompressible Euler equations [GMG98]. In this numerical experiment, the authors used a grid as fine as 2048^3 while retaining less than 1% of the points that would have been necessary for the equivalent uniform grid. In that paper they used a second-order upwind Godunov finite volume scheme. In further works these authors preferred a central scheme, of Lax-Friedrichs type, which provides third-order accuracy in smooth regions through a CWENO reconstruction.
In [KWA09], AMR applied to the simulation of colliding galaxies provided the needed high dynamical ranges. A coarse 128^3 grid can be refined up to 13 times, reaching scales equivalent to a (10^6)^3 uniform grid, which is out of reach of any existing computer.
The physical phenomenon of magnetic reconnection occurs in ionized media, at the heart of magnetic confinement fusion plasmas but also at the interface between the solar wind and the terrestrial magnetosphere, where it explains the aurora borealis. This phenomenon results from a magneto-hydrodynamic instability that is very localized in space and time. To simulate this singularity, adaptive mesh refinement schemes allow very small scales to be reached that are inaccessible to uniform grid schemes.
However, the equations of the 2D reduced MHD models involve high order derivatives and require high order approximations, which are difficult to implement with adaptive schemes. The method presented in [DSD17] is based on finite differences, which poses conservation problems. To address them, one can rely on a wavelet approach based on interpolets (interpolation wavelets), similar to the one adopted in [DSD17].

From conservation laws to finite volumes
In this section, we briefly recall the basics of finite volume schemes and some strategies to reach a higher order of convergence. This class of numerical schemes is well suited to the study of conservation laws. A system of N conservation laws models the transport of N quantities of interest u = (u_1, ..., u_N)^T, also named conserved variables, in the space R^d. The equation ruling the behavior of these quantities can be written in integral form over any control volume K:

(1)    d/dt ∫_K u dx + ∫_{∂K} f(u) n dσ = 0,

with f ∈ C^1(R^N; R^{N×d}) the analytic flux and n the outward unit normal. When coupled to an initial datum u_0 : R^d → R^N, equation (1) becomes equivalent to the following weak conservative formulation:

(2)    ∂_t u + div f(u) = 0 in the sense of distributions, with u(·, 0) = u_0.

In order to discretise this problem, we restrict ourselves to a bounded subspace Ω ⊂ R^d. We then have to complete the conservation laws with boundary conditions. To stay in a simple setting, we consider Ω to be the unit cube and take periodic boundary conditions. We introduce a uniform Cartesian mesh T, and consider the averaged solution U_K^n over the cell K ∈ T at time t^n. The integral formulation (1) directly implies:

(3)    U_K^{n+1} = U_K^n − (1/|K|) ∫_{t^n}^{t^{n+1}} ∫_{∂K} f(u) n dσ dt.

The right-hand side cannot be computed in the general case, and this is where an approximation has to be made. If we denote by E(K) the set of interfaces of the cell K, and if we have an approximation F_{K,σ} of the averaged flux on each interface σ ∈ E(K) between times t^n and t^{n+1}, we get the so-called finite volume scheme:

(4)    U_K^{n+1} = U_K^n − (Δt/|K|) Σ_{σ ∈ E(K)} |σ| F_{K,σ}.

Thus the main stake when dealing with finite volume schemes is to find a numerical flux that consistently approximates the right-hand side of (3). A common approach to build such approximations is to consider the one-dimensional Riemann problem projected along the direction n_σ orthogonal to σ:

(5)    ∂_t u + ∂_x f_σ(u) = 0,   u(x, 0) = U_L if x < 0, U_R if x > 0,

where U_L, U_R are the values of the solution on both sides of the interface σ, and f_σ(u) = f(u) n_σ.
Solutions of the Riemann problem depend, among other things, on the eigenvalues (λ_i)_{1≤i≤N} of Jac f_σ, which is why the numerical flux F_{K,σ} itself often depends on them (see Table 1 for examples of common numerical fluxes).

High order finite volume schemes
Table 1. Examples of common numerical fluxes; its first entry is the first order Godunov flux f_σ(u(x/t = 0)), where u is the self-similar solution of the Riemann problem (5).

Before moving to the next topic, we address the issue of achieving high order convergence with finite volume schemes. A first approach is to draw on the formalism of finite difference schemes. In fact, in the case of smooth enough solutions, the latter can reach an arbitrarily high order by means of Taylor expansions. The idea is then to design stable finite difference schemes that can be rewritten in the conservative form (4) by substituting cell averages for the point-wise values. A simple example is given by the Lax-Wendroff scheme. Consider u the solution of the 1D scalar linear transport equation with constant velocity c, and (x_i)_i the equidistant cell centers of some domain discretisation. A first Taylor expansion in time of the solution writes:

(6)    u(x_i, t^{n+1}) = u(x_i, t^n) + Δt u_t(x_i, t^n) + (Δt²/2) u_tt(x_i, t^n) + O(Δt³).

Using the relations u_t = −c u_x and u_tt = c² u_xx, one can replace the time derivatives in (6) by second order Taylor expansions in space:

(7)    u(x_i, t^{n+1}) = u(x_i, t^n) − (cΔt/2Δx) (u(x_{i+1}, t^n) − u(x_{i−1}, t^n)) + (c²Δt²/2Δx²) (u(x_{i+1}, t^n) − 2u(x_i, t^n) + u(x_{i−1}, t^n)) + O(Δt³ + Δt Δx²).

Relation (7) thus gives the second order finite difference scheme:

(8)    u_i^{n+1} = u_i^n − (cΔt/2Δx) (u_{i+1}^n − u_{i−1}^n) + (c²Δt²/2Δx²) (u_{i+1}^n − 2u_i^n + u_{i−1}^n),

where u_i^n stands for the point-wise approximation of the solution at position x_i and time t^n. Note that the last term in (8) adds numerical viscosity, and removing it would result in an unconditionally unstable centered scheme. Substituting the average ū_i^n over the cell C_i = [x_{i−1/2}, x_{i+1/2}] for u_i^n, this scheme can finally be rewritten in the conservative form

ū_i^{n+1} = ū_i^n − (Δt/Δx) (F_{i+1/2}^n − F_{i−1/2}^n).

Here we have chosen

F_{i+1/2}^n = (c/2) (ū_i^n + ū_{i+1}^n) − (c²Δt/2Δx) (ū_{i+1}^n − ū_i^n)

for the numerical flux. Hence the Lax-Wendroff method can be interpreted both as a finite volume and as a finite difference scheme, and is of order 2 in both time and space. The obvious issue with this example is that, in the general case, it is not always possible to rewrite a given finite difference scheme in conservative form. Furthermore, this approach is based on an assumption of smoothness, which does not necessarily hold.
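As a minimal illustration, scheme (8) on a periodic 1D grid can be sketched as follows; the function name and the periodic wrap-around are our choices:

```c
#include <stddef.h>

/* One Lax-Wendroff update for the 1D linear advection equation
   u_t + c u_x = 0 on a periodic grid: a sketch of scheme (8). */
void lax_wendroff_step(const double *u, double *unew, size_t n,
                       double c, double dt, double dx)
{
    double nu = c * dt / dx;                 /* Courant number */
    for (size_t i = 0; i < n; i++) {
        double um = u[(i + n - 1) % n];      /* u_{i-1}, periodic */
        double up = u[(i + 1) % n];          /* u_{i+1}, periodic */
        unew[i] = u[i] - 0.5 * nu * (up - um)
                       + 0.5 * nu * nu * (up - 2.0 * u[i] + um);
    }
}
```

For nu = cΔt/Δx = 1 the scheme propagates the data exactly by one cell, a convenient sanity check.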
More specifically, the Lax-Wendroff scheme is known to exhibit a highly oscillatory behavior for solutions with discontinuities. Alternatively, one can try to improve the order of convergence through the use of a reconstruction operator. The most celebrated one is the MUSCL reconstruction, first introduced by van Leer in [VL79], which consists in replacing the piece-wise constant solution by a piece-wise linear approximation û_i^n(x) := ū_i^n + σ_i^n (x − x_i), with σ_i^n a slope determined by some "interface differencing" discussed below. This way, the numerical flux benefits from more accurate left and right interface limits (9) induced by the slopes σ_i^n, leading to second order convergence in space if the original scheme is of order one.
The update (10) is then replaced by a Runge-Kutta time integration to get second order both in time and in space. As for the choice of the slope, it should approximate the gradient of the solution, since otherwise the reconstruction would not be of order two. For instance, one can pick the centered slope σ_i^n = (ū_{i+1}^n − ū_{i−1}^n)/(2Δx) (see fig. 1). A clear advantage of the MUSCL method is that it makes it possible to increase the accuracy of any first order scheme without changing the numerical flux, and it is thus very easy to implement.
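A sketch of the MUSCL interface values with the centered slope (and no limiter); the naming and the periodic indexing are ours:

```c
#include <stddef.h>

/* Left and right limits at the interface x_{i+1/2} for the MUSCL
   reconstruction with the centered slope sigma_i = (u_{i+1} - u_{i-1})/(2 dx).
   A sketch without slope limiter. */
void muscl_interface(const double *u, size_t n, size_t i,
                     double *uL, double *uR)
{
    size_t im = (i + n - 1) % n, ip = (i + 1) % n, ipp = (i + 2) % n;
    double s_i  = 0.5 * (u[ip]  - u[im]);  /* sigma_i * dx     */
    double s_ip = 0.5 * (u[ipp] - u[i]);   /* sigma_{i+1} * dx */
    *uL = u[i]  + 0.5 * s_i;               /* limit from cell i   */
    *uR = u[ip] - 0.5 * s_ip;              /* limit from cell i+1 */
}
```

On data sampled from a linear profile, both limits coincide with the exact interface value, as expected for a second order reconstruction.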
Going even further, one can replace the piece-wise constant solution by a piece-wise second order polynomial û_i^n(x) := a_i^n + b_i^n (x − x_i) + c_i^n (x − x_i)², as studied in [vT09]. This reconstruction generates a third order scheme in space if û_i^n preserves the averaged solution ū_i^n, and if it approximates the space derivatives of the solution at x_{i−1/2} and x_{i+1/2} with an error of at most O(Δx²). This translates into the following system:

(1/Δx) ∫_{C_i} û_i^n(x) dx = ū_i^n,
(û_i^n)'(x_{i−1/2}) = (ū_i^n − ū_{i−1}^n)/Δx,
(û_i^n)'(x_{i+1/2}) = (ū_{i+1}^n − ū_i^n)/Δx.

Solving this system, we get:

a_i^n = ū_i^n − (ū_{i+1}^n − 2ū_i^n + ū_{i−1}^n)/24,   b_i^n = (ū_{i+1}^n − ū_{i−1}^n)/(2Δx),   c_i^n = (ū_{i+1}^n − 2ū_i^n + ū_{i−1}^n)/(2Δx²).

Finally we have:

(11)    û_i^n(x) = ū_i^n − (ū_{i+1}^n − 2ū_i^n + ū_{i−1}^n)/24 + (ū_{i+1}^n − ū_{i−1}^n)/(2Δx) (x − x_i) + (ū_{i+1}^n − 2ū_i^n + ū_{i−1}^n)/(2Δx²) (x − x_i)².

Interestingly, the stencil of the third order reconstruction operator (11) has size three, the same as for the MUSCL reconstruction, yet the former is more precise.
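For reference, the interface limits implied by this parabolic reconstruction can be written in closed form. The explicit coefficients below follow from solving the system stated above and are our own derivation, not given in this form in [vT09]:

```c
/* Interface limits of the parabolic (third order) reconstruction, obtained
   by evaluating the reconstructed polynomial at x_{i-1/2} and x_{i+1/2}.
   um, ui, up are the cell averages over C_{i-1}, C_i, C_{i+1}. */
void parabolic_interface(double um, double ui, double up,
                         double *left, double *right)
{
    *left  = ( 2.0 * um + 5.0 * ui -       up) / 6.0;  /* at x_{i-1/2} */
    *right = (      -um + 5.0 * ui + 2.0 * up) / 6.0;  /* at x_{i+1/2} */
}
```

One can check that these formulas are exact for quadratics: for u(x) = x² on unit cells centered at x_i = 0, the three averages are 13/12, 1/12, 13/12 and both interface limits evaluate to u(±1/2) = 1/4.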

Adaptive mesh refinement framework
As mentioned in the opening of this paper, a fine enough grid is needed in order to accurately capture the fine details of the solution. Given that the scale of the simulation is likely to be very large in comparison to the "scales of interest", one cannot afford a uniform mesh. In this section, we present some issues raised by non-uniform grids from the algorithmic point of view.

Representing a non-uniform grid
The main goal is to discretise a space domain of dimension d ∈ N into a non-uniform mesh made of superimposed grids. For the sake of simplicity, we limit ourselves to the unit cube of dimension d and consider an initially uniform grid. Each cell of the initial grid can be subdivided into 2^d equally sized sub-cells, which form a first level of refinement. Of course, these sub-cells can themselves be split in a recursive manner, leading to deeper levels of refinement. All these levels can be seen as "layers" applied to the initial grid, each added layer being twice as refined as the previous one. A convenient way of representing this structure is a tree. The tree connects nodes from consecutive levels, every node resulting from the gathering of 2^d cells. Inside a node, cells can be referenced either by their coordinates i := (i_0, i_1, ..., i_{d−1}) ∈ {0, 1}^d, or by the dyadic combination of their coordinates α(i) := Σ_{k=0}^{d−1} i_k 2^k (see fig. 2 for an example in dimension 2). When a cell is divided for refinement, it becomes a node whose own cells play the role of leaves. In the tree, nodes are represented by squares, while leaves (non-divided cells) are represented by triangles. The dotted arrows give an example of neighborhood for a specific cell.
The advantage of the tree structure lies in the fact that it gives a straightforward way to navigate in the node hierarchy: from a given node, it is easy to visit its parent or its children. However, it is less trivial to access nodes at the same level, for instance if one wants to iterate over a cell's neighbors (see fig. 3). Let us show how this can be achieved. Suppose that we have a cell C_i belonging to the node N and that we want to access its neighbor at relative position l := 2k + j, where k ∈ {0, 1, ..., d−1} is the direction and j ∈ {0, 1} is the sense along that direction (positive if j = 0, negative otherwise). A first question one may ask is: is the neighbor cell we are looking for in the same node or in a neighbor node? The answer is given by the quantity |j − p_k(i)| with p_k(i) := (α(i)/2^k) mod 2. The function p_k can be interpreted as a "parity" argument taking the value 0 if the cell C_i is on the left along direction k inside the node, and 1 if it is on the right along the same direction. Thus the neighbor sought after is in the same node if and only if |j − p_k(i)| is equal to 0. In that case, the neighbor cell is C_{i′} ∈ N with i′ such that α(i′) = α(i) + 2^k (1 − 2j). In the opposite case, we have to apply the algorithm recursively to the node N itself in order to find its own neighbor N′ at the relative position l = 2k + j. Once this is done, the neighbor cell C_{i′} ∈ N′ is given by α(i′) = α(i) − 2^k (1 − 2j).
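The recursion just described can be sketched in C with a stripped-down version of the node structure (the full structure appears in Table 2 below). The function name is ours, and we assume the neighbor exists at the same depth, which the grading constraint and the periodic boundary make reasonable away from the root:

```c
#include <stddef.h>

/* Minimal version of the tree node of Table 2 (only the fields used here). */
struct node {
    struct node *prt;    /* parent */
    struct node **chd;   /* children, indexed by the dyadic writing alpha */
    int frk;             /* filiation rank: dyadic index inside the parent */
};

/* Locate the neighbor of cell alpha of node n at relative position 2k + j.
   A sketch of the recursion of this section; writes the node containing the
   neighbor and the neighbor's dyadic index. */
void cell_neighbor(struct node *n, int alpha, int k, int j,
                   struct node **n_out, int *a_out)
{
    int p = (alpha >> k) & 1;                 /* parity p_k(i) */
    if (j == p) {                             /* |j - p_k(i)| = 0: same node */
        *n_out = n;
        *a_out = alpha + (1 - 2 * j) * (1 << k);
    } else {                                  /* find the node's own neighbor */
        struct node *m;
        int a;
        cell_neighbor(n->prt, n->frk, k, j, &m, &a);
        *n_out = m->chd[a];                   /* descend into the neighbor node */
        *a_out = alpha - (1 - 2 * j) * (1 << k);
    }
}
```

For example, in dimension 1 with a root holding two child nodes A and B, the positive neighbor of the right cell of A is the left cell of B, found through one recursive step in the root.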
A constraint applied from now on is that there can be no direct transition between non-consecutive levels of refinement. As a consequence, the configuration shown in figure 3 is in fact not allowed. Moreover, in the case of periodic boundary conditions, we consider the root node to be its own neighbor.

Volume interpolation
When refined, a coarse cell becomes a node and it is necessary to attribute accurate values to its children. Knowing the averaged values of the solution in the coarser cells next to the refined one, we can approximate the average of the solution in the new sub-cells via volume interpolation. In what follows we give an example of third order volume interpolation in 1D. Let μ_{[a,b]} : R[X] → R be the application mapping a polynomial to its average over the real interval [a, b]. Seeing the refined coarse cell as the interval [0, 2], with coarse neighbors [−2, 0] and [2, 4] and sub-cells [0, 1] and [1, 2], we are looking for values (α, β, γ) ∈ R³ such that:

(12)    μ_{[0,1]}(P) = α μ_{[−2,0]}(P) + β μ_{[0,2]}(P) + γ μ_{[2,4]}(P)   for every P of degree at most 2.

Evaluating this equality for the polynomials 1, X, X², we get the following system:

α + β + γ = 1,
−α + β + 3γ = 1/2,
(4/3)α + (4/3)β + (28/3)γ = 1/3,

whose solution is α = 1/8, β = 1 and γ = −1/8. If instead we wanted to interpolate μ_{[1,2]}(P), we would have considered the values (α′, β′, γ′) ∈ R³ such that:

(13)    μ_{[1,2]}(P) = α′ μ_{[−2,0]}(P) + β′ μ_{[0,2]}(P) + γ′ μ_{[2,4]}(P).

By symmetry with the previous interpolation, we get α′ = −1/8, β′ = 1 and γ′ = 1/8. This volume interpolation is computed from the point of view of the new sub-cells: for each such cell, we gather the values of the surrounding coarse cells to compute either (12) or (13). This can be inefficient, since the same values at the coarse level are retrieved several times, as shown in fig. 4. Alternatively, we could adopt another strategy, illustrated in fig. 5, by iterating over the coarse cells and "diffusing" their contributions to the new sub-cells. In 1D both strategies are equivalent; however, the latter becomes more efficient than the former in higher dimensions. Indeed, suppose that we have a 2D mesh where several cells have been refined into sub-cells. A third order volume interpolation can be derived from the 1D one by performing a tensor product of the stencil presented in figure 5. An efficient way of completing the interpolation is to first send the contributions with weight 1 to the children. Once this is done for all coarse nodes, we start over by sending the contributions horizontally (those with weights ±1/8) and accumulate them with the previous contributions. Finally, we proceed similarly in the vertical direction.
Note that exchanging the last two steps would lead to the same result. Here the diagonal sub-cells have been omitted because the contributions with weights ±1/64 are indirectly handled: contributions are accumulated when combining the horizontal and vertical directions (see fig. 6). The general procedure in dimension d is given in algorithm 1.

Algorithm 1: Third order volume interpolation of the level of refinement n + 1 in dimension d
  Each cell has an attribute rc (received contribution) and rdc (received directional contribution), initialized to zero;
  for node N in n-th level of refinement do
    for cell C_i in node N do
      C_i.rc ← contribution of node N (weight 1);
  for direction k ∈ {0, ..., d − 1} do
    for node N in n-th level of refinement do
      N_left ← neighbor of N at relative position 2k + 1;
      N_right ← neighbor of N at relative position 2k;
      for cell C_i in node N do
        accumulate into C_i.rdc the contributions of N_left and N_right (weights ±1/8 according to the side of C_i along direction k);
    For each cell, add the value of rdc to rc and reset rdc to zero;

A last detail we want to discuss concerning interpolation is the handling of border cells. For them to be correctly interpolated, in the case of the third order interpolation we would have to iterate at the coarser level over 3^d nodes (the interpolation stencil). However, since the considered sub-cells are at the border of their level of refinement, some of these nodes do not exist; in their place there are undivided coarse cells (leaves at the coarser level). This is problematic because it prevents us from sending the accumulated contributions to the sub-cells as in algorithm 1. A simple solution is to virtually refine the coarse leaves to get the missing nodes: we call them ghost nodes.
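The 1D interpolation formulas (12) and (13) reduce to the following sketch, where the function name is ours and um, u, up denote the averages over the left coarse neighbor, the refined coarse cell, and the right coarse neighbor:

```c
/* 1D third order volume interpolation, relations (12)-(13): averages of
   the two sub-cells of a refined coarse cell from the averages of that
   cell (u) and of its left (um) and right (up) coarse neighbors. */
void interpolate_children(double um, double u, double up,
                          double *child_left, double *child_right)
{
    *child_left  = u + 0.125 * (um - up);  /* weights (1/8, 1, -1/8) */
    *child_right = u + 0.125 * (up - um);  /* weights (-1/8, 1, 1/8) */
}
```

Note that the two children average back to u, so the interpolation conserves mass, and that it is exact on data sampled from a linear profile.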

Brief overview of adaptivity criteria
The adaptivity criterion derives directly from the interpolation process. Subtracting the interpolated value of a cell from its actual value, we are able to quantify the information that it contains. For instance, applying the interpolation presented in Section 3.2, the detail stored in μ_{[1,2]}(f) is given by

d = μ_{[1,2]}(f) − ( −(1/8) μ_{[−2,0]}(f) + μ_{[0,2]}(f) + (1/8) μ_{[2,4]}(f) ).

If this detail is small enough, it can be removed. If it is large enough, we trigger a refinement process, subdividing the cell into new smaller cells.
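A sketch of the resulting criterion, with function names of our own and a tolerance eps left to the user:

```c
#include <math.h>

/* Detail of the right sub-cell: its stored average minus the value
   predicted by the third order interpolation from the coarse averages
   um, u, up of the left neighbor, the parent cell and the right neighbor. */
double detail_right(double child_right, double um, double u, double up)
{
    return child_right - (u + 0.125 * (up - um));
}

/* The cell may be coarsened when the detail falls below the tolerance. */
int can_coarsen(double d, double eps)
{
    return fabs(d) < eps;
}
```

On data sampled from a linear profile the detail vanishes, so such cells are immediately eligible for coarsening.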

Finite volume schemes on a non-uniform mesh
In this section, we propose an adaptation of finite volume schemes to non-uniform grids. To do so, we first concentrate on the main difference with the uniform case, which is the handling of fluxes at the border of a level of refinement. Support nodes are introduced to this end, after which we describe the communication process between levels of refinement allowing the fluxes to be computed.
Given a refined mesh, we want to update the solution using a finite volume scheme. As such, we need to iterate over the interfaces of each level of refinement to compute the fluxes. The problem we are confronted with is that some interfaces are located at the border of a level of refinement, meaning that they separate not two fine cells but a fine cell and a coarse one (see fig. 8).

Figure 8. To be computed, some numerical fluxes require support cells. Here ∂Ω is the border of the refinement level. The blue area corresponds to a coarse cell that is being divided to get support cells at the finer level. After interpolation of those new cells, we are able to compute the numerical fluxes at the fine borders.

A way to avoid this issue is to split the coarse cell into support cells that are then interpolated. As a result, we are able to compute the flux associated with this interface as usual. Note that this process is not properly speaking a refinement, since the support cells will not be updated, their only goal being to make the flux computation possible, and they are due to be removed at the end of every iteration. As described in section 3.2, interpolating the support cells might require the introduction of ghost nodes according to the interpolation stencil.
Algorithm 2: Updating the solution over an adaptive non-uniform mesh
  According to some criterion, refine and coarsen the cells requiring it;
  Generate support cells at the border of each level of refinement;
  Generate ghost nodes;
  Interpolate the newly refined cells and support cells;
  Compute a time step Δt verifying the CFL condition at every level of refinement to ensure stability;
  Initialize current level of refinement to the finest one;
  while coarsest level of refinement not reached do
    I ← gathering of current level's interfaces involving at least one "classic" cell (not two support cells);
    for interface in I do
      if interface has no sub-interfaces then (neighboring cells are leaves)
        Compute numerical flux F(U_L^n, U_R^n) using an approximate Riemann solver;
      else (at least one of the neighboring cells is a node)
        Using (14), set the numerical flux as the average of the finer fluxes;
    for cell in current level of refinement do
      Update solution inside the cell using a time integrator;
    Go to next (coarser) level of refinement;
  Remove support cells and ghost nodes;
  Increment current time by Δt;

The next step is to effectively compute the fluxes while taking into account the links between consecutive levels of refinement. In fact, what is an interface at a coarse level will be the collection of 2^{d−1} sub-interfaces at the finer level. In order to ensure the conservativity of the scheme, a necessary constraint is that the total flux contribution from the sub-interfaces should equal the flux contribution of the coarse interface. More precisely, we show that the coarse flux needs to be equal to the average of the sub-fluxes. Let K be a node (refined cell) with children (C_i)_{i ∈ {0,1}^d} and L a non-refined cell such that K and L are neighbors. Let J(K, L) denote the set of indices corresponding to the children of K sharing an interface with L: for all j ∈ J(K, L), the interface C_j|L exists and is at the border of a level of refinement.
According to the previous paragraph, the fluxes F_{C_j|L} can be determined through support cells derived from L. The coarse flux F_{K|L} can then be related to these finer fluxes by considering the case where K and L form a confined system (no exchange with the exterior). In fact, since the coarse interface has measure 2^{d−1} times that of each sub-interface, the discrete conservativity of the total mass reads:

(14)    F_{K|L} = (1/2^{d−1}) Σ_{j ∈ J(K,L)} F_{C_j|L}.

The strategy we adopt here is to always compute the fluxes from the finest level to the coarsest, since we might have to average the fluxes of sub-interfaces to get the coarse flux. The communication process occurs while iterating over interfaces which admit sub-interfaces, and the overall procedure is summarized in algorithm 2.
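Relation (14) amounts to a simple average of the sub-fluxes, sketched below with a hypothetical function name:

```c
/* Coarse flux across the interface K|L as the average of its 2^{d-1}
   sub-interface fluxes, relation (14) ensuring discrete conservativity. */
double coarse_flux(const double *sub_fluxes, int d)
{
    int n = 1 << (d - 1);          /* number of sub-interfaces */
    double sum = 0.0;
    for (int j = 0; j < n; j++)
        sum += sub_fluxes[j];
    return sum / n;
}
```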

Visualization on unstructured grids
Previously discussed algorithms have been implemented in C. The whole implementation is based on a structure named Node, presented in Table 2. This structure gathers all the physical values of the simulation in a field **tab and connects to the other nodes in order to form the tree structure. The node at the top of the tree is called the root; following the parent-to-children links, it gives access to the whole tree.

Table 2. The Node: basic element of the tree structure

struct node {
    double **tab;        /* 2^dim cells referenced by dyadic writing */
    struct node *prt;    /* parent */
    struct node **chd;   /* children (0, 1, ..., 2^dim - 1) */
    struct node **ngb;   /* neighbors (0, 1, ..., 2*dim - 1) */
    int frk;             /* filiation rank */
    int flag;            /* flag indicating whether the node is active, support or ghost */
};

We use the VTK file format for visualization purposes. Unstructured grids are defined by points, cells, and cell types. The CELLS keyword requires two parameters: the number of cells n and the size of the cell list. The cell list size is the total number of integer values required to represent the list (i.e., the sum over all cells of the numPoints entry and the connectivity indices). The CELL_TYPES keyword requires a single parameter: the number of cells n, which should match the value specified by the CELLS keyword. The cell types data is a single integer value per cell that specifies the cell type. In 2D, we choose VTK_PIXEL for all cells, see figure 9.
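A minimal sketch of a writer for a single VTK_PIXEL cell in the legacy ASCII format described above; the function name and the scalar field name "u" are ours:

```c
#include <stdio.h>

/* Write one VTK_PIXEL cell (type 8) as a legacy-VTK ASCII unstructured
   grid. pts holds the 4 corners in pixel ordering (x varying fastest). */
void write_pixel_cell(FILE *f, const double pts[4][3], double value)
{
    fprintf(f, "# vtk DataFile Version 3.0\nAMR output\nASCII\n");
    fprintf(f, "DATASET UNSTRUCTURED_GRID\n");
    fprintf(f, "POINTS 4 double\n");
    for (int i = 0; i < 4; i++)
        fprintf(f, "%g %g %g\n", pts[i][0], pts[i][1], pts[i][2]);
    /* CELLS n size: size counts the numPoints entry plus the indices */
    fprintf(f, "CELLS 1 5\n4 0 1 2 3\n");
    fprintf(f, "CELL_TYPES 1\n8\n");   /* 8 = VTK_PIXEL */
    fprintf(f, "CELL_DATA 1\nSCALARS u double 1\nLOOKUP_TABLE default\n%g\n",
            value);
}
```

In the actual program, one such block per leaf cell (with shared points merged) produces a file readable by standard VTK viewers.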

Numerical results
Numerical results based on our program are now presented. A first test case is implemented to assess the order of convergence of the MUSCL and third order reconstructions. To this end, we consider the 1D scalar linear transport equation associated with a C^∞ initial datum u_0. The velocity v is taken equal to 3/4. The mesh we use is made of an initial grid and two additional levels of refinement, and stays fixed throughout the simulation (no adaptivity). The initial condition is interpolated from the initial grid towards the finer levels. We also compare the results to the case of uniform meshes. We choose the Rusanov numerical flux, and the Runge-Kutta 4 method is used for time integration, stability being ensured by taking Δt = CFL × Δx_f/(2v), with Δx_f the cell size of the finest level of refinement and CFL = 0.45. The simulation is stopped at final time T = 73/15. We compute the L²-error at the coarsest level at time T, and we do so for initial grids ranging from depth 4 to depth 10. Results are gathered in tables 4 to 6. The order of convergence can easily be computed from these tables. In fact, a method of order at least k ∈ R_+ is characterized by the inequality e_{n+1} ≤ 2^{−k} e_n, where e_n denotes the error obtained on an initial grid at depth n (also called "base depth") with possible levels of refinement on top of it, assuming n is big enough. Thus, k verifies k ≤ (log e_n − log e_{n+1})/log 2. As expected, finite volumes without reconstruction are of order at least one, whereas with the MUSCL reconstruction (resp. the third order reconstruction) they are of order at least two (resp. at least three), whether or not the mesh is uniform. Unsurprisingly, the third order reconstruction is the most accurate, while keeping a compact stencil of size three. Figure 11 helps us describe the qualitative behavior of these schemes.
When no reconstruction is performed, the numerical solution exhibits a strongly diffusive behavior, with a loss of approximately 13.5% in amplitude at the end of the simulation. This is no longer the case when the solution is reconstructed near cell interfaces. However, the MUSCL and third order reconstructions tend to produce spurious oscillations, which is especially noticeable for the MUSCL reconstruction. To address this issue, a fix would consist in using slope limiters ensuring the Total Variation Diminishing (TVD) property. We then test our program on a 2D transport problem with non-constant velocity across the domain:

∂_t u + ∂_x (v u) + ∂_y (w u) = 0,

where v(x, y) = (y − 1/2)/T, w(x, y) = −(x − 1/2)/T, and T = 1/2 is the final time. Again, the mesh is refined at the beginning and remains so: there is no adaptivity involved here. More precisely, the mesh is made of two levels of refinement on top of the coarse grid initialized at depth 5. The initial condition is taken equal to a truncated Gaussian and is only C^0. Results can be seen in figure 12. The overall shape of the Gaussian is better preserved by the third order reconstruction, closely followed by the MUSCL reconstruction. As in the 1D test case, the latter reconstruction adds more downwind oscillations. This can be seen in figure 13, which shows a plot of the solution at final time along the line y = 1/4.

Conclusion
This paper was devoted to the combination of high order finite volumes and Adaptive Mesh Refinement. These topics are of great interest in computational physics, for they provide an efficient and accurate way to deal with interacting scales while conserving the total mass of the quantities of interest. After quickly recalling the state of the art concerning high order finite volumes, we concentrated our attention on AMR. Algorithms for third order multidimensional volume interpolation and for communication between levels of refinement have been proposed. It turns out that reconstruction methods work well through transitions between levels of refinement, and do not require any specific treatment during the communication process. Of course, one has to make sure that mass conservation holds during this process for the use of finite volumes to remain relevant. In the end, numerical simulations validate our approach, although truly adaptive meshes remain to be tested.