About the minimal time crisis problem

We study the minimization of the so-called time crisis function that represents the time spent by a solution of a controlled system outside a given set K. One essential feature of this optimal control problem is the discontinuity of the functional at the boundary of K. We provide in this paper properties of the time crisis function together with necessary optimality conditions using the hybrid maximum principle. We also study an approximation of this problem based on the Moreau-Yosida regularization of the indicator function of K.


Introduction
In this paper, we study the so-called time crisis problem, which consists in minimizing (w.r.t. the control) the total time spent by a solution of a controlled system outside a given set K. This problem was introduced in [12] in the viability context, where controlled systems are subject to state constraints (see e.g. [6,20]). Determining a control function for which the time spent outside K is minimal is a crucial issue in several applications (see e.g. [7,8]). The problem is also of particular interest whenever the initial condition is chosen outside the viability kernel of the set K, i.e. the set of initial conditions for which there exists an admissible control such that the corresponding solution stays in K for any time t ≥ 0 (see e.g. [1,2]).
One essential feature of the time crisis problem is the discontinuity of the cost functional, which is expressed in terms of the characteristic function of K^c, the complement of the set K. To derive first-order necessary optimality conditions over a given finite horizon, a first approach consists in using the hybrid maximum principle (see e.g. [9,14,15]). To do so, a transversality assumption is required on optimal trajectories at a crossing time (i.e. a time where the trajectory crosses the boundary of K): this assumption precisely guarantees the computation of the jump of the adjoint vector at the crossing time (see also [15]). However, it may be difficult to verify this assumption on optimal trajectories before applying optimality conditions. Therefore, it can be convenient to introduce a regularization scheme of the time crisis problem for which such an assumption is not required. Based on the Moreau-Yosida regularization of the indicator function of the set K (see e.g. [3,16,17]), we introduce an approximated optimal control problem. By a direct application of the Pontryagin Maximum Principle [18], we can then obtain necessary optimality conditions on the regularized problem. Note that a similar regularization scheme was used in [7,8] for studying an analogous optimal control problem in the context of linear parabolic equations (describing the continuous casting of steel).
The paper is organized as follows. In section 2, we state the time crisis problem. Necessary optimality conditions on the time crisis problem are given in section 3. The convergence of extremal solutions of the regularized controlled problem (for both state and adjoint vectors) to an extremal solution of the original problem is addressed in section 4. Theorem 4.1 is our main result and guarantees the validity of the approximation procedure.
The purpose of this paper is to summarize the results of [5]. Most results of sections 3 and 4 can be found in [5]. However, we provide here several different approaches:
• The proof of the convergence of approximated optimal solutions to an optimal solution of the time crisis problem uses Γ-convergence (see Proposition 4.1).
• The convergence of the adjoint variable is addressed using properties of the adjoint equation (Lemma 4.2) instead of needle variations (to show the boundedness of adjoint vectors in L^∞([0, T]; R^n), see [5,15]).

Statement and properties of the time crisis problem
We consider a dynamical controlled system:
ẋ = f(x, u),   (2.1)
where f : R^n × R^m → R^n is the dynamics, x ∈ R^n is the state, and u ∈ R^m is the control. Throughout the paper, we consider a non-empty subset U ⊂ R^m and we define the admissible control set U_T as:
U_T := {u : [0, T] → U ; u measurable},
where T ∈ R*_+ ∪ {+∞}. We also suppose that the system satisfies the following assumptions:
(H1) The dynamics f is continuous w.r.t. (x, u), of class C^1 w.r.t. x, and satisfies the linear growth condition: there exist c_1 > 0 and c_2 > 0 such that for all x ∈ R^n and all u ∈ U, one has:
|f(x, u)| ≤ c_1 |x| + c_2,
where | · | is the Euclidean norm in R^n.
(H2) For any x ∈ R^n, the set F(x) := {f(x, u) ; u ∈ U} is a non-empty compact convex set.
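For readers who wish to experiment, the controlled system (2.1) can be simulated with a simple explicit Euler scheme. This is only an illustrative sketch: the function names (`simulate`, `u_func`) and the example dynamics ẋ = −x + u are hypothetical choices, not part of the paper.

```python
import numpy as np

def simulate(f, x0, u_func, T, dt=1e-3):
    """Explicit-Euler integration of x' = f(x, u(t)) on [0, T].

    A simple stand-in for the solution x_u(., x0) of (2.1) under a
    given (here piecewise-continuous) control t -> u_func(t)."""
    n_steps = int(round(T / dt))
    ts = np.linspace(0.0, T, n_steps + 1)
    xs = np.empty((n_steps + 1, np.size(x0)))
    xs[0] = x0
    for i in range(n_steps):
        xs[i + 1] = xs[i] + dt * f(xs[i], u_func(ts[i]))
    return ts, xs

# Example: scalar dynamics x' = -x + u with the constant control u = 1;
# the exact solution starting from 0 is x(t) = 1 - exp(-t).
ts, xs = simulate(lambda x, u: -x + u, np.array([0.0]), lambda t: 1.0, T=2.0)
```

A higher-order integrator would of course be preferable in practice; Euler is used here only to keep the sketch self-contained.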
Given an initial condition x_0 ∈ R^n, the Cauchy-Lipschitz theorem implies that there exists a unique solution of (2.1) defined over [0, T] such that x(0) = x_0. This solution will be denoted by x_u(·, x_0) hereafter.
The time crisis problem can now be stated as follows. Let K be a non-empty subset of R^n and K^c := R^n \ K its complement. The characteristic function 1_{K^c} of K^c is then defined by:
1_{K^c}(x) := 1 if x ∈ K^c, 0 if x ∈ K.
We consider the following optimal control problem, called the time crisis problem:
θ(x_0) := inf_{u ∈ U_∞} ∫_0^{+∞} 1_{K^c}(x_u(t, x_0)) dt.   (OCP)
We also introduce the time crisis problem over a finite horizon [0, T] (with T > 0), defined by:
inf_{u ∈ U_T} J_T(u) := ∫_0^T 1_{K^c}(x_u(t, x_0)) dt.   (OCP_T)
For convenience, we suppose in the paper that the set K is smooth:
(H3) The set K is a smooth (i.e. its boundary ∂K is of class C^1) non-empty closed subset of R^n.
The existence of an optimal control for (OCP) and (OCP_T) is proved in [5,12] and follows from standard compactness arguments (see e.g. Theorem 19.2.3 p.771 of [2]). The viability kernel of K for the dynamics f is a central notion in this context. It is defined as the set of points of K from which there exists a control u such that the associated trajectory stays in K for any time t ≥ 0:
Viab(K) := {x_0 ∈ K ; ∃u ∈ U_∞, ∀t ≥ 0, x_u(t, x_0) ∈ K}.
We say that a non-empty subset A ⊂ R^n is reachable from x_0 ∈ R^n if there exist an admissible control u and a time t ≥ 0 such that x_u(t, x_0) ∈ A. The time crisis function θ is closely related to the minimal time function v(x_0) ∈ [0, +∞] to reach the set Viab(K) (if non-empty) from an initial condition x_0 ∈ R^n, defined as:
v(x_0) := inf_{u ∈ U_∞} T_u,
where T_u ∈ [0, +∞] is the first entry time of x_u(·, x_0) into Viab(K). We know from [12] (Proposition 4.1 p.11) that if K is viable (i.e. K = Viab(K)), then v(x_0) = θ(x_0) for any x_0 ∈ R^n. More generally, if Viab(K) is non-empty, then Viab(K) ⊂ K, thus we obtain the inequality
θ(x_0) ≤ θ̂(x_0) for any x_0 ∈ R^n,
where θ̂ is the time crisis function associated to the set Viab(K) for the dynamics f:
θ̂(x_0) := inf_{u ∈ U_∞} ∫_0^{+∞} 1_{Viab(K)^c}(x_u(t, x_0)) dt.
In the following, we also denote by d(·, K) the distance function to the set K defined for x ∈ R^n by d(x, K) := inf_{y ∈ K} |x − y|.
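Given a sampled trajectory, the time crisis cost (the integral of 1_{K^c} along the trajectory) can be approximated by a Riemann sum. The sketch below is hypothetical (the helper `time_crisis` and the membership test `in_K` are not from the paper); it only illustrates how the functional is evaluated numerically.

```python
import numpy as np

def time_crisis(ts, xs, in_K):
    """Left Riemann-sum approximation of the time spent outside K,
    i.e. of the integral of 1_{K^c}(x(t)) dt over the horizon.

    `in_K` is a user-supplied membership test for the set K."""
    dt = np.diff(ts)
    outside = np.array([0.0 if in_K(x) else 1.0 for x in xs[:-1]])
    return float(np.sum(outside * dt))

# Example: trajectory x(t) = t on [0, 2] with K = [1, +inf):
# the trajectory is outside K exactly on [0, 1), so the crisis time is 1.
ts = np.linspace(0.0, 2.0, 2001)
xs = ts.reshape(-1, 1)
crisis = time_crisis(ts, xs, lambda x: x[0] >= 1.0)
```

Note that the discontinuity of 1_{K^c} makes this functional non-smooth in the control; this is precisely what motivates the regularization of section 4.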

Necessary optimality conditions for (OCP_T)
In the rest of the paper, we shall omit the dependence of a solution of (2.1) w.r.t. the initial condition x_0. Given u ∈ U_T, we will write x_u (or x(·) if there is no ambiguity on the control) the unique solution of (2.1) associated to u and starting at a given initial condition x_0 ∈ R^n. We now provide necessary optimality conditions for problem (OCP_T). As 1_{K^c} is discontinuous, we can use the hybrid maximum principle by considering the partition of R^n as R^n = K ∪ K^c (see e.g. [9,14,15]). First, we define the notion of regular crossing time, which means that a trajectory cannot hit the boundary of the set K tangentially.
Definition 3.1. We say that a crossing time t_c ∈ [0, T] from K into K^c for a solution x_u(·) of (2.1) is regular if the following properties hold true:
x_u(t_c) ∈ ∂K, the left and right derivatives ẋ_u(t_c^−) and ẋ_u(t_c^+) exist, and n(x_u(t_c)) · ẋ_u(t_c^−) > 0, n(x_u(t_c)) · ẋ_u(t_c^+) > 0,
where n(z) denotes the unit outward normal vector to the set K at a point z ∈ ∂K and a · b denotes the usual scalar product of two vectors a, b of R^n.
Similarly, we define a regular crossing time t_c from K^c into K for a solution x_u of (2.1) associated to a control u ∈ U_T. Let H : R^n × R^n × R × R^m → R be the Hamiltonian associated to (OCP_T), defined by:
H(x, p, p^0, u) := p · f(x, u) + p^0 1_{K^c}(x).
The next theorem is a direct consequence of the hybrid maximum principle (see Theorem 22.20 p.458 of [9]).
Theorem 3.1. Suppose that an optimal trajectory x*(·) of (OCP_T) has k ≥ 1 regular crossing times {t_1, · · · , t_k}, and let u* ∈ U_T be the associated optimal control. Then, the following conditions are satisfied:
• There exist p^0 ≤ 0 and a piecewise absolutely continuous map p* : [0, T] → R^n such that the pair (p^0, p*(·)) is non-zero, and p* satisfies the adjoint equation:
ṗ*(t) = −(∂f/∂x)(x*(t), u*(t))^⊤ p*(t) a.e. t ∈ [0, T].   (3.1)
• The Hamiltonian satisfies the maximization condition:
H(x*(t), p*(t), p^0, u*(t)) = max_{ω ∈ U} H(x*(t), p*(t), p^0, ω) a.e. t ∈ [0, T].   (3.2)
• At every regular crossing time t_c ∈ {t_1, · · · , t_k}, one has the jump condition on the adjoint vector p*:
p*(t_c^+) = p*(t_c^−) + (σ p^0 / (n(x*(t_c)) · ẋ*(t_c^−))) n(x*(t_c)),   (3.3)
where σ = −1, resp. σ = +1 if t_c is a regular crossing time from K into K^c, resp. from K^c into K.
• The adjoint vector satisfies the transversality condition p*(T) = 0.
Remark 3.1. (i) The jump condition (3.3) follows from the fact that p*(t_c^+) − p*(t_c^−) belongs to the normal cone to K at the point x*(t_c) (see Theorem 22.20 p.458 of [9]) and from the fact that the Hamiltonian is globally constant over [0, T] (the system is autonomous and the k crossing times are free). (ii) As x*(T) is free, one has the condition p*(T) = 0 as in the usual transversality conditions in Pontryagin's Principle.
We now show that each extremal trajectory is normal, i.e. p^0 ≠ 0.

Proposition 3.1. Under the assumptions of Theorem 3.1, any extremal trajectory of (OCP_T) is normal, i.e. p^0 ≠ 0.
Proof. Suppose by contradiction that p^0 = 0. By integrating (3.1) backward in time over (t_k, T], one finds that p*(t) = 0 for any time t ∈ (t_k, T]. As p*(t_k^+) − p*(t_k^−) belongs to the normal cone to K at x*(t_k) (see Theorem 22.20 p.458 of [9]), there exists α ≥ 0 such that p*(t_k^+) − p*(t_k^−) = αn(x*(t_k)). Now, using the constancy of the Hamiltonian and p^0 = 0, we get
p*(t_k^+) · ẋ*(t_k^+) = p*(t_k^−) · ẋ*(t_k^−).
Thus, we find that αn(x*(t_k)) · ẋ*(t_k^−) = 0. As t_k is a regular crossing time, we have n(x*(t_k)) · ẋ*(t_k^−) ≠ 0, thus α = 0, which together with (3.1) implies that p*(t) = 0 for any time t ∈ (t_{k−1}, t_k). By induction, this proves that p* is zero on each time interval (t_i, t_{i+1}), 0 ≤ i ≤ k − 1 (with t_0 := 0), and thus over [0, T]. Hence, we have a contradiction with the hybrid maximum principle, as the pair (p^0, p*(·)) should be non-zero.

Regularization of problem (OCP_T)
If we have no information on optimal trajectories, we may not be able to verify that they have a finite number of transverse crossing times before applying Theorem 3.1. To avoid assuming that optimal trajectories are transverse when hitting the boundary of K, we introduce a regularized version of the time crisis problem (OCP_T).

Regularization scheme
The regularization scheme goes as follows. First, we denote by ψ_K the indicator function of the set K:
ψ_K(x) := 0 if x ∈ K, +∞ otherwise.
We consider the Moreau envelope e_ε(·) of ψ_K with parameter ε > 0, defined by (see e.g. [3,16,17]):
e_ε(x) := inf_{y ∈ R^n} ( ψ_K(y) + |x − y|²/(2ε) ).
It is standard that x ↦ e_ε(x) is Lipschitz continuous over R^n. Moreover, one has:
e_ε(x) = d(x, K)²/(2ε) for all x ∈ R^n.
Given an increasing function γ : R_+ → [0, 1) with γ(0) = 0 and γ(z) → 1 as z → +∞, one thus has γ(e_ε(x)) → 1_{K^c}(x) as ε ↓ 0 for every x ∈ R^n. We consider the regularized optimal control problem:
inf_{u ∈ U_T} J_T^ε(u) := ∫_0^T γ(e_ε(x_u(t))) dt.   (OCP_T^ε)
By standard compactness arguments, we can show, similarly as for problems (OCP)-(OCP_T), that for any x_0 ∈ R^n, there exists an optimal control u_ε ∈ U_T of (OCP_T^ε). Next, we denote by x_ε(·) the associated trajectory.
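The identity e_ε(x) = d(x, K)²/(2ε) is easy to check numerically in one dimension. In the sketch below, K is an interval, and the bounded increasing rescaling γ(z) = z/(1+z) is a hypothetical choice made only for the illustration; the helper names are not from the paper.

```python
import numpy as np

def dist_to_interval(x, a, b):
    """Distance d(x, K) from x to the interval K = [a, b] in R."""
    return max(a - x, 0.0, x - b)

def moreau_envelope(x, a, b, eps):
    """Moreau envelope of the indicator psi_K of K = [a, b]:
    e_eps(x) = inf_y ( psi_K(y) + |x - y|^2 / (2 eps) ) = d(x, K)^2 / (2 eps)."""
    return dist_to_interval(x, a, b) ** 2 / (2.0 * eps)

def gamma(z):
    """Hypothetical bounded increasing rescaling with gamma(0) = 0 and
    gamma(z) -> 1 as z -> +inf, so gamma(e_eps(x)) -> 1_{K^c}(x) as eps -> 0."""
    return z / (1.0 + z)

# As eps shrinks, gamma(e_eps(x)) approaches 1 outside K and stays 0 inside.
vals_outside = [gamma(moreau_envelope(2.0, 0.0, 1.0, eps)) for eps in (1.0, 1e-2, 1e-4)]
val_inside = gamma(moreau_envelope(0.5, 0.0, 1.0, 1e-4))
```

Here x = 2 lies outside K = [0, 1] (so the values approach 1), while x = 0.5 lies in K (so the value is exactly 0 for every ε).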

Convergence in the state space
Our purpose is now to prove the following convergence result, which guarantees that optimal solutions of (OCP_T^ε) are close to a solution of (OCP_T) provided that the regularization parameter is small enough.
Proposition 4.1. Let ε ↓ 0 and (x_ε(·), u_ε(·)) be an optimal pair for (OCP_T^ε). Then, there exists an optimal solution u* of (OCP_T) such that, up to a sub-sequence, x_ε(·) uniformly converges to x*(·) over [0, T] and J_T^ε(u_ε) → J_T(u*), where x* is the unique solution of (2.1) associated to u*.
Proof. Let ε_n ↓ 0. Let us show that the sequence (J_T^{ε_n})_{n≥0} Γ-converges to J_T (see e.g. chapter 12 of [11] for more details on the notion of Γ-convergence).
This proves the liminf inequality, i.e. liminf_{n→+∞} J_T^{ε_n}(u_n) ≥ J_T(u). Let us now show the limsup inequality. Let u ∈ U_T and x_u(·) the associated solution of (2.1). Using the dominated convergence theorem, we obtain J_T^{ε_n}(u) → J_T(u) when n → +∞. Thus, given u ∈ U_T, the constant sequence u_n := u is such that limsup_{n→+∞} J_T^{ε_n}(u_n) ≤ J_T(u), which shows the limsup inequality. Now, let (x_n(·), u_n(·)) be an optimal pair for (OCP_T^ε) with ε = ε_n. We know that there exists u* ∈ U_T such that, up to a sub-sequence, (x_n, u_n) strongly-weakly converges to (x*, u*) over [0, T], where x* is the unique solution of (2.1) with u = u*. To conclude, as the sequence (J_T^{ε_n})_{n≥0} Γ-converges to J_T, standard results of Γ-convergence theory (see e.g. [11]) imply that u* is optimal for problem (OCP_T) and that J_T^{ε_n}(u_n) → J_T(u*) when n → +∞, which ends the proof.
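The convergence of the regularized cost to the exact time crisis cost for a fixed control can be observed numerically. This sketch assumes the hypothetical rescaling γ(z) = z/(1+z) and the explicit Moreau envelope d(x, K)²/(2ε); the trajectory and set are toy choices, not from the paper.

```python
import numpy as np

# Trajectory x(t) = t on [0, 2] with K = [1, +inf): the exact time crisis is 1.
ts = np.linspace(0.0, 2.0, 4001)
xs = ts
dist = np.maximum(1.0 - xs, 0.0)  # d(x(t), K)

def J_eps(eps):
    """Regularized cost: integral of gamma(d(x(t), K)^2 / (2 eps)) dt,
    with the hypothetical rescaling gamma(z) = z / (1 + z)."""
    z = dist ** 2 / (2.0 * eps)
    g = z / (1.0 + z)
    return float(np.sum(g[:-1] * np.diff(ts)))  # left Riemann sum

costs = {eps: J_eps(eps) for eps in (1e-1, 1e-3, 1e-5)}
```

As ε decreases, the regularized cost increases toward the exact value 1, consistent with the monotone approximation of 1_{K^c} from below.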

Transformation into a Mayer problem
We are now in a position to apply the Pontryagin Maximum Principle on problem (OCP_T^ε). However, when the set K is not convex, the distance function to K is only Lipschitz continuous in R^n. Indeed, the projection of a point x ∈ R^n onto the set K may not be unique. Therefore, the adjoint equation for problem (OCP_T^ε) (which involves the derivative of the distance function) becomes a differential inclusion (see e.g. [21] for Pontryagin's Principle with Lipschitz data). Nevertheless, it is possible to avoid the use of a differential inclusion in the adjoint equation in this setting if we slightly modify (OCP_T^ε). To do so, let us consider the admissible control set V defined by:
V := {v : [0, T] → R^n measurable ; v(t) ∈ K a.e. t ∈ [0, T]},
and let W := U_T × V. Next, we call u = (u, v) an element of W and we introduce the augmented system:
ẋ = f(x, u), ẏ = γ(|x − v|²/(2ε)),
with initial conditions x(0) = x_0, y(0) = 0, and (u, v) ∈ W. Next, we consider the Mayer problem:
inf_{u ∈ W} y_u(T),   (M_T^ε)
and we show that problems (OCP_T^ε) and (M_T^ε) are equivalent. As K is not necessarily convex, we introduce the set P_K(x) defined for x ∈ R^n by:
P_K(x) := {y ∈ K ; |x − y| = d(x, K)}.
Lemma 4.1. Problems (OCP_T^ε) and (M_T^ε) are equivalent.
Proof. If u = (u, v) ∈ W, then one has |x_u(t) − v(t)| ≥ d(x_u(t), K); thus, as γ is increasing, we deduce that y_u(T) ≥ J_T^ε(u), which shows that if u* is a solution of (OCP_T^ε), then (u*, v*) is a solution of (M_T^ε), where v*(t) ∈ P_K(x_{u*}(t)) a.e. t ∈ [0, T]. Now, let u* = (u*, v*) ∈ W be an optimal solution of (M_T^ε). As γ is increasing, we necessarily have d(x_{u*}(t), K) = |x_{u*}(t) − v*(t)| for a.e. t ∈ [0, T]. By optimality of u*, we deduce that for any control u = (u, v) ∈ W one has
y_{u*}(T) ≤ y_u(T),
where x_u is the unique solution of (2.1) associated to the control u. Therefore, choosing v(t) ∈ P_K(x_u(t)) a.e. t ∈ [0, T] yields J_T^ε(u*) ≤ J_T^ε(u) for any u ∈ U_T. Hence, u* is optimal for problem (OCP_T^ε). This ends the proof.
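The key inequality |x − v| ≥ d(x, K), with equality exactly when v is a projection of x onto K, is elementary to verify numerically when K is convex. The sketch below uses a box for K; the helper name `proj_box` is a hypothetical illustration.

```python
import numpy as np

def proj_box(x, lo, hi):
    """Projection P_K(x) onto the box K = [lo, hi] (K convex, so the
    projection is single-valued and 1-Lipschitz)."""
    return np.minimum(np.maximum(x, lo), hi)

# For any admissible v in K, |x - v| >= d(x, K), with equality at v = P_K(x).
x = np.array([2.0, -0.5])
lo, hi = np.zeros(2), np.ones(2)
v_star = proj_box(x, lo, hi)       # the unique projection of x onto K
d = np.linalg.norm(x - v_star)     # d(x, K)
v_other = np.array([0.3, 0.7])     # some other point of K
assert np.linalg.norm(x - v_other) >= d
```

This is why, in the proof of Lemma 4.1, an optimal auxiliary control v* must select a projection of the state onto K at almost every time.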

Pontryagin Maximum Principle
We can now apply the Pontryagin Maximum Principle on (M_T^ε) in order to derive necessary optimality conditions. Let H_ε : R^{n+1} × R^{n+1} × R^m × R^n → R be the Hamiltonian associated to (M_T^ε), defined by:
H_ε(x, y, p, q, u, v) := p · f(x, u) + q γ(|x − v|²/(2ε)).
For convenience, let us write (u_n, v_n) a solution of (M_T^ε) with ε = ε_n (recall that ε_n ↓ 0), x_n(·) the unique solution of (2.1) for the control u = u_n, and p_n the unique solution of the adjoint equation
ṗ_n(t) = −(∂f/∂x)(x_n(t), u_n(t))^⊤ p_n(t) + (1/ε_n) γ′(|x_n(t) − v_n(t)|²/(2ε_n)) (x_n(t) − v_n(t))   (4.4)
satisfying p_n(T) = 0 (we have normalized q ≡ −1, as extremals of (M_T^ε) are normal since the terminal state is free). By considering a sub-sequence if necessary, we may assume that there exists u* ∈ U_T such that (x_n(·), u_n(·)) strongly-weakly converges to (x*(·), u*(·)) over [0, T], where x*(·) is the unique solution of (2.1) associated to the control u*. We now consider the following assumption on x*(·), which will be useful to show the boundedness of p_n(·) in L^∞([0, T]; R^n):
(H4) The set K is convex, and x*(·) has a finite number of crossing times t_1 < · · · < t_k that are all regular.
Lemma 4.2. Under (H1)-(H2)-(H3)-(H4), the sequence (p_n(·))_{n∈N} is uniformly bounded in L^∞([0, T]; R^n).
Proof. As f is of class C^1, there exists a constant C ≥ 0 such that for any time t ∈ [0, T] and n ∈ N one has
|(∂f/∂x)(x_n(t), u_n(t))^⊤ p_n(t)| ≤ C |p_n(t)|.
Set q_n(t) := p_n(T − t); we now show that (q_n(·))_{n∈N} is bounded over [0, T].
First case. Suppose that x*(T) ∈ K. Then, for n large enough, x_n(t) ∈ K for any time t ∈ [t_k^n, T], where t_k^n is the last entry time of x_n(·) into K (recall that x_n(·) uniformly converges to x*). Thus, we have x_n(t) − v_n(t) = 0 for t ∈ [t_k^n, T], so that, integrating (4.4) backward from p_n(T) = 0, we get p_n(t) = 0 for any time t ∈ [t_k^n, T]. Integrating (4.4) then gives:
|q_n(t)| ≤ C ∫_0^t |q_n(s)| ds, t ∈ [0, T − t_k^n].
By using the Gronwall Lemma, it follows that (q_n(·)) is uniformly bounded over [0, T − t_k].
Second case. Suppose now that x*(T) ∈ K^c. Similarly as in the previous case, (4.4) gives:
|q_n(t)| ≤ C ∫_0^t |q_n(s)| ds + (1/ε_n) ∫_0^t γ′(d_n(T − s)²/(2ε_n)) d_n(T − s) ds,
where d_n(t) := d(x_n(t), K). As K is convex, the projection v_n(t) of x_n(t) onto K is uniquely defined and t ↦ v_n(t) is Lipschitz (recall that the projection is 1-Lipschitz over R^n as K is convex). Hence, d_n(·) is differentiable a.e. and, as K is smooth, v̇_n(t) · (x_n(t) − v_n(t)) = 0 a.e. t ∈ [0, T]. Thus, we find that
(d_n²)′(t) = 2 ẋ_n(t) · (x_n(t) − v_n(t)) a.e. t ∈ [0, T].
Now, as t_k is a regular crossing time, there exist α > 0, N ∈ N and η > 0 such that
ẋ_n(t) · (x_n(t) − v_n(t)) ≥ α d_n(t) for a.e. t ∈ (t_k, t_k + η) and every n ≥ N.
The desired bound then follows by combining this lower bound with the Gronwall Lemma; see [5] for the details.
Corollary 4.1. Under the assumptions of Lemma 4.2, there exists a map p* : [0, T] → R^n, (locally) absolutely continuous on each interval (t_i, t_{i+1}), 0 ≤ i ≤ k (where t_0 := 0 and t_{k+1} := T), such that, up to a sub-sequence, (p_n(·)) uniformly converges to p* on each compact subset of [0, T]\{t_1, . . . , t_k}.
Proof. The proof follows essentially by using Lemma 4.2. We can then apply Theorem 19.2.3 p.771 of [2] and deduce the existence of such a map p*. The rest of the proof is standard and can be found in [5].
We now provide our main result, related to the convergence of an extremal solution (x_n(·), p_n(·), u_n(·)) of (OCP_T^ε) with ε = ε_n to an extremal solution (x*(·), p*(·), u*(·)) of (OCP_T) when the regularization parameter goes to zero.
Theorem 4.1. Suppose that hypotheses (H1)-(H2)-(H3) are satisfied, let ε_n ↓ 0, and let (x_n(·), u_n(·)) be an optimal pair for (OCP_T^ε) with ε = ε_n. Then, there exists a pair (x*(·), u*(·)) such that u* is a solution of (OCP_T) and x* is the unique solution of (2.1) associated to u*. Moreover, the following properties hold true:
• The sequence x_n(·) uniformly converges to x*(·) over [0, T] and J_T^{ε_n}(u_n) → J_T(u*) when n → +∞.
• Up to a sub-sequence, the sequence p_n(·) converges to p*(·) uniformly on each compact subset of [0, T]\{t_1, . . . , t_k}, where p*(·) satisfies the conditions of Theorem 3.1.
Proof. The proof of this result is a consequence of Proposition 4.1 and Corollary 4.1. The essential property is to recover the jump condition (3.3) from the adjoint equation (4.4). The detailed proof can be found in [5].

Remark 4.3.
Suppose that (2.1) is affine w.r.t. the control u, that is, (2.1) is of the form:
ẋ = f_0(x) + Σ_{j=1}^m u_j f_j(x),
where f_j : R^n → R^n, 0 ≤ j ≤ m, are smooth vector fields and U = [−1, 1]^m. According to the Pontryagin Maximum Principle, an optimal control can be expressed as u_j(t) = sign(φ_j(t)), 1 ≤ j ≤ m, where φ_j is the switching function associated to u_j, that is, φ_j(t) := p*(t) · f_j(x*(t)). In the case where u* is bang-bang (i.e. u*_j(t) = ±1, which corresponds to φ_j(t) > 0 or φ_j(t) < 0), Theorem 4.1 then allows us to relate the value of u_n,j(t) to u*_j(t) if n is large enough. The convexity of K is also an important ingredient in the proof of Lemma 4.2 and Theorem 4.1. Future work will explore whether these results can be extended to a non-convex smooth subset K of R^n. Another interesting issue is to characterize a class of controlled systems for which the solution of the time crisis problem consists in minimizing the time spent in K^c and maximizing the time spent in K.
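The bang-bang synthesis of the remark can be sketched as follows, assuming U = [−1, 1]^m as above; the helper `bang_bang_control` and the example vector fields are hypothetical illustrations, not taken from the paper.

```python
import numpy as np

def bang_bang_control(x, p, fields):
    """Bang-bang control u_j = sign(phi_j) from the switching functions
    phi_j = p . f_j(x), assuming U = [-1, 1]^m (illustration only).

    `fields` is the list of control vector fields f_1, ..., f_m."""
    return np.array([np.sign(np.dot(p, f(x))) for f in fields])

# Example with m = 2 constant control vector fields in R^2.
fields = [lambda x: np.array([1.0, 0.0]), lambda x: np.array([0.0, 1.0])]
u = bang_bang_control(np.zeros(2), np.array([0.5, -2.0]), fields)
```

Here φ_1 = 0.5 > 0 and φ_2 = −2 < 0, so the control is (+1, −1); times where some φ_j vanishes (switching or singular arcs) require a separate analysis.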