MULTI-LEADER-FOLLOWER POTENTIAL GAMES

In this paper, we discuss a particular class of Nash games in which the participants of the game (the players) are divided into two groups (leaders and followers) according to their position or influence on the other players. Moreover, we consider the case where the leaders' and/or the followers' game can be described as a potential game, a subclass of Nash games introduced by Monderer and Shapley in 1996 whose properties are beneficial for reformulating the bilevel Nash game. We develop necessary and sufficient conditions for Nash equilibria and present existence and uniqueness results. Furthermore, we discuss some examples to illustrate our results.


Introduction
As an important mathematical modeling tool for the analytical study of strategic decision making in a competitive environment, game theory has attracted the interest of researchers from various fields such as economics, political science, management, computer science, and biology.
In classical game theory, a game describes a situation in which several individuals make a (strategic) choice by taking into account the choices of the others. The fundamental elements of a game are the individuals (the so-called players), the strategies available to each player, and the individual payoff functions. Furthermore, games are classified into cooperative and noncooperative games. Here we discuss a particular type of noncooperative game, in which each individual player is only concerned with his/her own objective. This leads to the well-known concept of a Nash game and its equilibrium, in contrast to, e.g., Pareto optimality, i.e., the solution of a multi-objective optimization problem, which represents a kind of cooperative game.
In the following, we consider so-called multi-leader-follower games (MLFGs), which form a particular subclass of noncooperative problems in which the players are divided into two groups, namely leaders and followers, according to their influence (position) in the game. Mathematically, this yields a hierarchical game. However, in contrast to the well-known Stackelberg game, where one single leader is accompanied by one or more followers, MLFGs model the situation where several leaders exist. Games of this structure have recently attracted the interest of mathematicians as well as scientists in related fields such as operations research, robotics, and computer science [1,9,20]. However, there is still a lack of theoretical results concerning the structure of such games, such as existence and uniqueness theory, the characterization of equilibria, etc. Here we consider the following multi-leader-follower Nash game: Let the leaders' game be given by the optimization problems

min_{x^ν ∈ X_ν} θ_ν(x^ν, x^{−ν}, y)   for all ν = 1, .., N,   (1)

where x denotes the leaders' joint strategy vector and x^{−ν} is defined by x^{−ν} = (x^1, .., x^{ν−1}, x^{ν+1}, .., x^N). Furthermore, the joint strategy vector of the followers y = (y_j, y_{−j}) ∈ R^{mM} (for simplicity we choose m_j = m) is obtained through the Nash game modeled by the optimization problems

min_{y_j ∈ Y_j} τ_j(y_j, y_{−j}, x)   for all j = 1, .., M,   (2)

with nonempty, convex, and closed strategy sets Y_j ⊆ R^m. The multi-leader-follower game is then given by

min_{x^ν ∈ X_ν, y} θ_ν(x^ν, x^{−ν}, y)  s.t.  y ∈ S(x),   for all ν = 1, .., N,   (3)

where S(x) denotes the set of Nash equilibria of the followers' game (2). Note that if the solution of the lower-level problem (the Nash game of the followers) is not single-valued, we consider the optimistic case of a bilevel problem (cf. also [4]), since for each leader ν we take the followers' response y ∈ S(x) that minimizes the leader's objective function value θ_ν. In [12], A. Kulkarni and U. Shanbhag present an example of a similar type in the context of congestion control in communication networks. Here, the users of the network are the leaders that decide on the flow rates x on the network.
Moreover, a single follower, namely the network manager, solves a parametric optimization problem in which his/her strategy y represents the decisions that have to be made on the network, such as network capacities, flow specifications, etc. Note that the feasible region for the decisions y also depends on the users' decisions x. Furthermore, the users' problems are formulated in terms of U_ν(x^ν), the utility function of player ν, and c(y), the congestion cost associated with the network manager's decision y, which is imposed on every user of the network. Other research on multi-leader-follower games includes their paper [11], where they consider a setting in which multi-leader-follower games can be reformulated as a Nash game with shared constraints. Further theoretical work on leader-follower games includes the early work by Sherali [16], which generalizes Stackelberg games to the setting of Cournot competition, and the more recent paper [7], in which Hu and Fukushima discuss existence and uniqueness results for robust Nash equilibria of a class of quadratic MLFGs. Numerical schemes for multi-leader-follower games have been presented, e.g., in [8,13,17,19]. This paper is organized as follows. In Section 1 we discuss the followers' problem, which then gives rise to a reformulation of the leaders' game, i.e., the full multi-leader-follower game. The reformulation and the subsequent analysis thereof are presented in Section 2.

The Followers' Game
In this section, we are concerned with the lower-level problem, i.e., the followers' game (2). In the case M = 1, we have a single follower and the game reduces to a lower-level minimization problem. If M is larger than one, a common solution concept for the resulting Nash game defined by (2) is the Nash equilibrium, introduced by J.F. Nash in the 1950s [15].

Definition 1.1 (Nash Equilibrium). Consider the game (2) and let x̄ ∈ X be fixed. Then a joint strategy vector y* ∈ Y is a Nash equilibrium if the following condition holds for all j = 1, .., M:

τ_j(y*_j, y*_{−j}, x̄) ≤ τ_j(y_j, y*_{−j}, x̄)   for all y_j ∈ Y_j.   (4)

Hence, a Nash equilibrium corresponds to a multistrategy vector of all players at which no player of the game has an incentive to change his/her chosen strategy unilaterally.
If the followers' objective functions τ_j are convex and continuously differentiable and the feasible sets Y_j are convex, each follower's minimization is a convex problem, i.e., a necessary and sufficient condition for (4) to hold for each follower j is given by the inequality [2]

∇_{y_j} τ_j(y*, x̄)^T (y_j − y*_j) ≥ 0   for all y_j ∈ Y_j.   (5)

Define f(y, x) = (∇_{y_1} τ_1(y, x)^T, .., ∇_{y_M} τ_M(y, x)^T)^T to obtain the following characterization of the set of Nash equilibria of the parameterized Nash game (2).
Theorem 1.2. Assume that Y_j is nonempty, closed, and convex for all j = 1, .., M. Let, for all j = 1, .., M, the objective τ_j be continuously differentiable in x and y on an open set Ω ⊇ X × Y and convex in y_j for any feasible y_{−j} and any feasible, fixed x̄ ∈ X. Then y* is a Nash equilibrium of the parameterized Nash game NEP(x̄) if and only if it solves the variational inequality VI(Y, f(·, x̄)), i.e.,

f(y*, x̄)^T (y − y*) ≥ 0   for all y ∈ Y.   (6)

The proof of the theorem is a standard result and can, e.g., be found in [5] (Prop. 1.4.2).

Proposition 1.3. Assume that Y_j is nonempty, closed, and convex for all j = 1, .., M. Let x̄ ∈ X be fixed and assume that for all j = 1, .., M the objective τ_j(·, x̄) is twice continuously differentiable on an open set Ω ⊇ Y and convex in y_j for any feasible y_{−j}. Furthermore, assume that the Jacobian D_y f(y, x̄) is uniformly positive definite for all y ∈ Y. Then the Nash game NEP(x̄) admits a unique Nash equilibrium.
Proof. Since the assumptions of Theorem 1.2 are satisfied for any feasible and fixed x̄ ∈ X, the set of Nash equilibria of (2) coincides with the solution set of the variational inequality (6). Moreover, f is strongly monotone on Y, since D_y f(y, x̄) is uniformly positive definite (cf. [5], Prop. 2.3.2). Hence, we can apply Theorem 2.3.3 in [5] to obtain the result.
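The mechanism behind Proposition 1.3 can be illustrated numerically. In the following sketch, all data are hypothetical: an affine, strongly monotone map f(y) = Qy + c plays the role of f(y, x̄) for a fixed leader strategy x̄, the joint strategy set Y is a box, and the unique solution of the variational inequality (6) is approximated by the classical projection (fixed-point) method.

```python
import numpy as np

# Illustration of Proposition 1.3 (all data hypothetical): f(y) = Q y + c
# stands in for f(y, x̄) = (∇_{y_j} τ_j(y, x̄))_j at a fixed leader strategy.
# Q is positive definite, so f is strongly monotone and the variational
# inequality (6) over the box Y = [0, 2]^2 has a unique solution.
Q = np.array([[2.0, 0.5],
              [0.5, 2.0]])
c = np.array([-1.0, -3.0])

def f(y):
    return Q @ y + c

def project(y):
    # Euclidean projection onto the box Y = [0, 2]^2
    return np.clip(y, 0.0, 2.0)

# Classical projection method y <- P_Y(y - gamma*f(y)); it converges for
# sufficiently small gamma > 0 when f is strongly monotone.
y = np.zeros(2)
gamma = 0.2
for _ in range(500):
    y = project(y - gamma * f(y))

print(y)                                        # unique Nash equilibrium
print(np.linalg.norm(y - project(y - f(y))))    # fixed-point residual
```

For this particular data the equilibrium lies in the interior of the box, so it coincides with the unconstrained stationary point −Q⁻¹c.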
The assumptions of this existence and uniqueness result guide us to consider the case where each individual follower's objective τ_j can be replaced by a joint objective function. This particular structure leads to the subclass of Nash games that was first introduced by Monderer and Shapley in [14].

Definition 1.4 (Potential Game/Potential). Consider the game (2) and let x̄ ∈ X be fixed. If there exists a function π(y, x) such that the following condition holds for all j = 1, .., M and all feasible y_{−j}:

τ_j(y_j, y_{−j}, x̄) − τ_j(z_j, y_{−j}, x̄) = π(y_j, y_{−j}, x̄) − π(z_j, y_{−j}, x̄)   for all y_j, z_j ∈ Y_j,   (7)

then the game (2) is called a potential game and π is called an exact potential function (potential) of (2).

Now, if the followers' Nash game (2) is a potential game, i.e., there exists a potential function π that satisfies (7), then τ_j can be replaced in (2) by π for all j = 1, .., M. Hence, the symmetry of D_y f(y, x̄) = D²_y π(y, x̄) is directly given, and moreover, if π is uniformly convex in y, then D_y f(y, x̄) is uniformly positive definite on Y and therefore f is strongly monotone on Y. Thus we have the following corollary.
Corollary 1.5. Assume that Y_j is nonempty, closed, and convex for all j = 1, .., M. Let (2) be a potential game with potential π(y, x). Moreover, assume that π is twice continuously differentiable in x and y on an open set Ω ⊇ X × Y and uniformly convex in y for any feasible, fixed x̄ ∈ X. Then the Nash game NEP(x̄) admits a unique Nash equilibrium y*(x̄).
Proof. First, we can apply Lemma 2.1 in [14]. Then, replacing τ_j in (5) by π, (6) has to hold with f = ∇_y π, and thus D_y f(y, x̄) = D²_y π(y, x̄). Since the assumptions of Proposition 1.3 are then all satisfied, we obtain the result.
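The defining identity (7) is straightforward to check numerically. Below is a minimal sketch for a hypothetical two-follower game whose objectives share the coupling term y_1 y_2; the function π is a candidate exact potential for it.

```python
import numpy as np

# Numerical check of the exact-potential identity (7) for a hypothetical
# two-follower game with objectives
#   tau_j(y) = (y_j - a_j)^2 + y_1 * y_2,   j = 1, 2,
# and candidate potential
#   pi(y)    = (y_1 - a_1)^2 + (y_2 - a_2)^2 + y_1 * y_2.
a = np.array([1.0, -0.5])

def tau(j, y):
    return (y[j] - a[j]) ** 2 + y[0] * y[1]

def pi(y):
    return (y[0] - a[0]) ** 2 + (y[1] - a[1]) ** 2 + y[0] * y[1]

rng = np.random.default_rng(0)
for _ in range(100):
    y = rng.normal(size=2)
    for j in range(2):
        z = y.copy()
        z[j] = rng.normal()            # unilateral deviation of follower j
        lhs = tau(j, y) - tau(j, z)    # change seen by follower j
        rhs = pi(y) - pi(z)            # change of the candidate potential
        assert abs(lhs - rhs) < 1e-10
print("exact-potential identity (7) holds at all sampled points")
```

The identity holds exactly here because the individual terms (y_j − a_j)² of the other followers cancel in the difference of π, while the shared coupling term appears in every τ_j.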
The result of the corollary also reveals another advantage of the particular structure of potential games. In fact, the set of Nash equilibria of (2) corresponds to the set of global optima of the convex potential function on the joint feasible set Y.

Proposition 1.6. Let the assumptions of Corollary 1.5 be satisfied. Then y* is the unique Nash equilibrium of NEP(x̄) if and only if it is the global minimizer of the convex minimization problem

min_{y ∈ Y} π(y, x̄).   (8)
Proof. Since the conditions of Corollary 1.5 are satisfied, the variational inequality (6) has to hold with f = ∇_y π. Thus, the necessary and sufficient conditions for a global optimum of the convex minimization problem (8) hold.
Remark 1.7. If the convex sets Y_j can be described by a finite number of affine linear equalities h_j(y_j) = 0 and inequalities g_j(y_j) ≤ 0 with continuously differentiable, convex functions g_{j,k}, and some regularity condition such as the Slater constraint qualification is satisfied, one could also make use of the KKT conditions associated with (6) or with the minimization problem (8), respectively.
Moreover, an equivalent condition for the variational inequality (6) to hold, which is independent of the assumptions on π, is given as follows.

Proposition 1.8. Let the assumptions of Theorem 1.2 be satisfied and let P_Y denote the Euclidean projection onto Y. Then y* solves (6) if and only if it is a solution of the Lipschitz-continuous, but in general nonsmooth, fixed-point equation y = P_Y(y − f(y, x̄)).
Next, let us consider some special cases with regard to the followers' strategy sets Y_j and their objective functions τ_j.

Example 1.9. First, assume that the sets Y_j are polyhedrons, i.e., they can be defined by a finite number of affine linear mappings, Y_j = {y_j ∈ R^m | A_j y_j = b_j} with A_j ∈ R^{k_j × m}. Next, let ŷ_j ∈ Y_j and define a matrix T_j such that the range of T_j recovers the nullspace of A_j for all j = 1, .., M, i.e., any y_j ∈ Y_j satisfies y_j = T_j z_j + ŷ_j for some z_j. If T_j is defined such that it has full rank, then z_j ∈ R^{m̃_j} with m̃_j = m − rk(A_j). For a suitably chosen matrix T = diag(T_j)_{j=1}^M, a vector ŷ = (ŷ_j)_{j=1}^M, and K = Σ_{j=1}^M k_j, this variable transformation yields a Nash game in reduced form with the reduced potential π̃(z, x) = π(Tz + ŷ, x). Since the differentiability and convexity properties of π transfer to π̃, the necessary and sufficient optimality condition

∇_z π̃(z, x̄) = 0   (11)

guarantees z*(x) to be the possibly unique minimizer of π̃(·, x̄). Hence, since this yields an implicit or even explicit expression for the continuous (smooth) path y*(x) = T z*(x) + ŷ, the followers' Nash game can be substituted by equation (11), or y can directly be replaced by y*(x) in the leaders' objective functions θ_ν(x, y).

Example 1.10. Next, let each strategy set Y_j (j = 1, .., M) be the nonnegative orthant, i.e., Y_j = R^m_+. Then, under the conditions of Proposition 1.6, the Nash equilibrium y*(x) of (2) is given by the solution of the Lipschitz-continuous, but nonsmooth equation (cf. Proposition 1.8)

y = max(0, y − f(y, x̄)).

This can be reformulated as the complementarity condition

min(y, f(y, x̄)) = 0  ⇔  0 ≤ y ⊥ f(y, x̄) ≥ 0.
If we assume in addition that each follower's objective function is given as a quadratic function of the form

τ_j(y, x) = ½ y_j^T Q_j y_j − b(x)^T y_j

with a diagonal positive definite matrix Q_j, then we obtain y*_j(x) = max(0, Q_j^{−1} b(x)) for all j = 1, .., M (see also [6,18]). Moreover, the exact potential function π is in this case given by

π(y, x) = Σ_{j=1}^M ( ½ y_j^T Q_j y_j − b(x)^T y_j ).

Thus, the associated Nash game is in fact a potential game.
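For the quadratic case of Example 1.10, the closed-form response and the complementarity condition can be cross-checked as follows; the matrix Q and the affine mapping b(x) below are hypothetical choices for a single follower block.

```python
import numpy as np

# Cross-check of Example 1.10's closed form (Q and b(x) hypothetical):
# with diagonal positive definite Q_j, the follower response is
# y_j*(x) = max(0, Q_j^{-1} b(x)), and it must satisfy the complementarity
# condition min(y, f(y, x)) = 0 with f(y, x) = Q y - b(x).
Q = np.diag([2.0, 4.0])                 # one follower block, m = 2

def b(x):
    # hypothetical affine dependence on the leaders' joint strategy x
    return np.array([x[0] - x[1], 3.0 * x[1] - 1.0])

x = np.array([0.5, 1.0])
y = np.maximum(0.0, np.linalg.solve(Q, b(x)))   # closed-form response

residual = np.minimum(y, Q @ y - b(x))          # complementarity residual
print(y, residual)
```

Any negative component of Q⁻¹b(x) is clipped to the boundary of the orthant, and exactly there the corresponding component of f(y, x) becomes nonnegative, so the residual vanishes.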
Example 1.11. In the more general case where τ_j is given by τ_j(y, x) = α_j(y_j) + β(y, x) for some continuously differentiable functions α_j and β, the game (2) is a potential game with potential

π(y, x) = Σ_{j=1}^M α_j(y_j) + β(y, x),

since for all j = 1, .., M it holds that ∇_{y_j} π(y, x) = ∇_{y_j} α_j(y_j) + ∇_{y_j} β(y, x) = ∇_{y_j} τ_j(y, x). Moreover, if each Y_j (j = 1, .., M) is defined by Y_j = {y_j ∈ R^m | y_j ≥ l_j(x)}, the Nash equilibrium is described by the complementarity condition min(y − l(x), f(y, x̄)) = 0, where l(x) = (l_j(x))_{j=1}^M and f(y, x̄) = ∇_y π(y, x̄) = (∇_{y_j} α_j(y_j))_{j=1}^M + ∇_y β(y, x̄).
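The nullspace reduction of Example 1.9 can also be sketched numerically. In the following illustration all data (A, b_eq, Q, c) are hypothetical, π is taken as a simple quadratic, and the reduced stationarity condition (11) becomes a small linear system.

```python
import numpy as np

# Sketch of Example 1.9's nullspace reduction for a single follower with
# affine-equality strategy set Y = {y in R^3 : A y = b_eq}.  The quadratic
# potential pi(y) = 0.5*y^T Q y - c^T y and all data are hypothetical.
A = np.array([[1.0, 1.0, 0.0]])
b_eq = np.array([2.0])
Q = np.diag([1.0, 2.0, 3.0])
c = np.array([1.0, 0.0, -1.0])

# Particular solution y_hat of A y = b_eq and a basis T of null(A),
# so that every feasible y can be written as y = T z + y_hat.
y_hat = np.linalg.lstsq(A, b_eq, rcond=None)[0]
T = np.linalg.svd(A)[2][1:].T           # trailing right singular vectors span null(A)

# Reduced potential pi~(z) = pi(T z + y_hat); its stationarity condition
# (11) is the linear system  T^T Q T z = T^T (c - Q y_hat).
z = np.linalg.solve(T.T @ Q @ T, T.T @ (c - Q @ y_hat))
y = T @ z + y_hat

print(A @ y - b_eq)         # feasibility residual
print(T.T @ (Q @ y - c))    # reduced gradient, condition (11)
```

Since T has orthonormal columns spanning null(A), the reduced Hessian TᵀQT is positive definite whenever Q is positive definite on the nullspace of A, so the reduced system has a unique solution, matching the uniqueness discussion in Example 1.9.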

The Leaders' Potential Game
In the previous section, we discussed under which conditions the Nash game of the followers admits a Nash equilibrium, when it is unique, and how it can be characterized. In this section, we use this information and focus on the Nash game played by the leaders.
Hence, assume that for any feasible leader multistrategy vector x there exists at least one equilibrium of the followers' Nash game, i.e., the set of equilibrium points S(x) of (2) is nonempty. Then the multi-leader-follower game (3) can be reformulated as

min_{x^ν ∈ X_ν, y ∈ S(x)} θ_ν(x^ν, x^{−ν}, y)   for all ν = 1, .., N.   (12)

Moreover, if, e.g., the conditions of Proposition 1.3 or Corollary 1.5, respectively, are satisfied, then S(x) is single-valued and we may replace the variable y in (12) by the path y(x). This yields the single-level Nash game

min_{x^ν ∈ X_ν} θ_ν(x^ν, x^{−ν}, y(x))   for all ν = 1, .., N.

Remark 2.1. If S(x) is not single-valued, a worst-case scenario might be considered, which leads to the application of robust optimization tools such as those presented in [7].
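The effect of the single-level reformulation can be illustrated with a toy computation. In the sketch below, all functions are hypothetical: a closed-form single-follower response y(x) is substituted into two leaders' reduced objectives, and an equilibrium candidate is approximated by a simple best-response (Gauss-Seidel) sweep over a strategy grid. Best-response iteration is used here purely as an illustration; it is not a method discussed in this paper.

```python
import numpy as np

# Illustration of the single-level reformulation (all data hypothetical):
# substitute the follower response y(x) into the leaders' objectives and
# approximate an equilibrium by best-response (Gauss-Seidel) sweeps.
def y_of_x(x):
    # hypothetical single-valued follower response S(x)
    return max(0.0, x[0] + x[1] - 1.0)

def theta(nu, x):
    # leader nu's reduced objective theta_nu(x, y(x))
    return (x[nu] - 1.0) ** 2 + y_of_x(x) ** 2

grid = np.linspace(-2.0, 2.0, 2001)      # candidate strategies per leader
x = np.zeros(2)
for _ in range(40):                      # best-response sweeps
    for nu in range(2):
        trial = x.copy()
        vals = []
        for g in grid:
            trial[nu] = g                # unilateral deviation of leader nu
            vals.append(theta(nu, trial))
        x[nu] = grid[int(np.argmin(vals))]

print(x, y_of_x(x))   # approximate equilibrium and follower response
```

For this data the sweeps settle near x = (2/3, 2/3) (up to the grid resolution), the fixed point of the two leaders' best-response maps.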
Next, assume that this Nash game forms a potential game, i.e., it admits a potential function P(x). In particular, we consider the case where the objective functions θ_ν can be decomposed into two parts, an individual part ϕ_ν(x^ν) and a common part φ(x, y): for all ν = 1, .., N it holds that

θ_ν(x^ν, x^{−ν}, y(x)) = ϕ_ν(x^ν) + φ̂(x),   (13)

where φ̂(x) = φ(x, y(x)). Then the potential function P(x) is given by

P(x) = Σ_{ν=1}^N ϕ_ν(x^ν) + φ̂(x),   (14)

as for all ν = 1, .., N it holds that

θ_ν(x^ν, x^{−ν}, y(x)) − θ_ν(s^ν, x^{−ν}, y(s^ν, x^{−ν})) = P(x^ν, x^{−ν}) − P(s^ν, x^{−ν})

for any s^ν ∈ X_ν. Due to this defining property of the potential function, we have the following result.
Theorem 2.2. Let x* be a global minimizer of the problem

min_{x ∈ X} P(x).   (15)

Then x* is a Nash equilibrium of the MLFG (12).
Proof. Assume that x* is not a Nash equilibrium of (12). Then there exists a player ν that can benefit from unilaterally changing his/her strategy, i.e., there exists some s^ν ∈ X_ν such that

θ_ν(s^ν, x^{−ν*}, y(s^ν, x^{−ν*})) < θ_ν(x^{ν*}, x^{−ν*}, y(x*)).

By the defining property of the potential function P, this implies P(s^ν, x^{−ν*}) < P(x*), which contradicts the global optimality of x* for (15).
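The decomposition into an individual part ϕ_ν and a common part φ̂, and the resulting potential P, can be verified numerically in the same spirit as for the followers' game. All concrete functions below are hypothetical; φ̂ stands for the common part φ(x, y(x)).

```python
import numpy as np

# Numerical check that P(x) = sum_nu phi_nu(x_nu) + phi_hat(x) is an exact
# potential for leaders' objectives of the decomposed form
#   theta_nu(x, y(x)) = phi_nu(x_nu) + phi_hat(x).
# The concrete functions are hypothetical illustrations.
def phi(nu, x_nu):
    return (x_nu - nu) ** 2            # individual part of leader nu

def phi_hat(x):
    return float(np.sum(x)) ** 2       # common part phi(x, y(x))

def theta(nu, x):
    return phi(nu, x[nu]) + phi_hat(x)

def P(x):
    return sum(phi(nu, x[nu]) for nu in range(len(x))) + phi_hat(x)

rng = np.random.default_rng(1)
for _ in range(100):
    x = rng.normal(size=3)
    nu = int(rng.integers(3))
    s = x.copy()
    s[nu] = rng.normal()               # unilateral deviation of leader nu
    assert abs((theta(nu, x) - theta(nu, s)) - (P(x) - P(s))) < 1e-10
print("P is an exact potential for the reduced leaders' game")
```

As in the followers' case, the individual parts of the non-deviating leaders cancel in the difference of P, which is exactly why the decomposition yields an exact potential.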
Next, having related the global optima of (15) and the Nash equilibria of (12), we consider the question under which assumptions a global minimizer of (15) exists. The main assumption that we make here is that the solution of the lower level is given by a continuous (but not necessarily smooth) solution mapping y(x). In addition to this main assumption, the first result is based merely on a compactness assumption on the feasible set X, whereas the second one is based on the coercivity of P.

Theorem 2.3. Let X be nonempty and compact and let y(x) be continuous on an open set Ω ⊇ X. Then there exists at least one Nash equilibrium of the reformulated Nash game (12).

Proof. Since by the assumptions the objective function P is continuous, the minimization problem (15) admits at least one global minimizer. Moreover, by Theorem 2.2, the Nash game (12) has at least one Nash equilibrium.

Corollary 2.4. Let X be nonempty and closed, let y(x) be continuous on an open set Ω ⊇ X, and let P be coercive on X, i.e., for any sequence (x^k) ⊆ X with lim_k ‖x^k‖ = +∞ it holds that lim_k P(x^k) = +∞. Then the set of Nash equilibria is nonempty and compact. In particular, there exists at least one Nash equilibrium of the reformulated Nash game (12).
Proof. By the coercivity of P and the closedness of X, the set of global minimizers of (15) is nonempty and compact. Therefore, by Theorem 2.2, the Nash game (12) has at least one Nash equilibrium.
Next, we discuss assumptions on the defining functions that guarantee the convexity of the objective function P, so that (15) is a convex optimization problem whenever X is convex.
Lemma 2.5. Assume that for all ν = 1, .., N the functions ϕ ν are convex. Let φ(x, y) be convex in x and y and nondecreasing in y. Moreover, suppose that y(x) is a continuous, convex function. Then P is a continuous, convex function.
Proof. The continuity of P follows directly from the continuity of the defining functions. Next, since y(x) is assumed to be convex, we have y(λx¹ + (1−λ)x²) ≤ λ y(x¹) + (1−λ) y(x²) for all λ ∈ [0,1]. Because φ(x, y) is convex in x and y and nondecreasing in y, we obtain with x_λ := λx¹ + (1−λ)x² and φ̂(x) = φ(x, y(x))

φ̂(x_λ) = φ(x_λ, y(x_λ)) ≤ φ(x_λ, λ y(x¹) + (1−λ) y(x²)) ≤ λ φ(x¹, y(x¹)) + (1−λ) φ(x², y(x²)) = λ φ̂(x¹) + (1−λ) φ̂(x²).

Hence φ̂ is convex, and therefore, as a sum of convex functions, P is convex.
Note that we obtain strict convexity of P if one of the inequalities in the proof of Lemma 2.5 holds strictly. In this case, we can furthermore apply a standard uniqueness result.
Theorem 2.6. Assume that X is nonempty, closed and convex and P is continuous and strictly convex on an open set Ω ⊇ X. Then (12) admits a unique Nash equilibrium.
Having discussed the existence of Nash equilibria of (12), we are now interested in necessary and sufficient conditions that characterize these solutions. As for the existence results, we derive such conditions in terms of conditions for (15). Note that we again assume that y(x) is a continuous (single-valued) solution mapping for the followers' problem. However, we do not claim that y(x) is a smooth function of x. We therefore apply results from convex nonsmooth analysis, as in [3,10], to the (nonsmooth) problem (15).

Theorem 2.7. Let the assumptions of Lemma 2.5 be satisfied, let N_X(x*) be the normal cone of X at x*, and let ∂P(x) be the convex subdifferential of P at x. Then x* is a global minimizer of (15) if and only if

0 ∈ ∂P(x*) + N_X(x*),   (16)

where the elements of ∂P(x) are given by

(∇ϕ_ν(x^ν))_{ν=1}^N + ∇φ_1(x, y(x)) + V^T ∇φ_2(x, y(x))   for some V ∈ ∂y(x),

with ∇φ_{1,2} denoting the gradient of φ with respect to the first and second argument, respectively.
Finally, let us apply these results to Examples 1.9 and 1.10 of the previous section.
Example 2.8. Assume again that the sets Y_j are polyhedrons and that we can replace the followers' game by condition (11). If π is given as a quadratic function of the form π(y, x) = ½ Σ_{j=1}^M y_j^T Q_j y_j + b(x)^T y with Q_j being symmetric and positive definite on the nullspace of A_j, and T_j defined as in Example 1.9, then (11) corresponds to the condition Q^z_j z_j + b^z_j(x) = 0 for all j = 1, .., M, where b^z_j(x) = T_j^T Q_j ŷ_j + T_j^T b_j(x) and Q^z_j = T_j^T Q_j T_j is regular. Therefore, the single-valued solution of (2) is given by y_j(x) = −T_j (Q^z_j)^{−1} b^z_j(x) + ŷ_j. Hence, if b(x) is a continuously differentiable, convex function, this property transfers to y*(x), so that Theorem 2.7 can be applied with ∂y(x) = {Dy*(x)} under suitable assumptions on the leaders' objectives and the feasible set X.

Example 2.9. We consider again Example 1.10, where we have already derived the continuous solution mapping y_j(x) = max(0, Q_j^{−1} b(x)) for all j = 1, .., M, with each Q_j being a diagonal, positive definite matrix. Hence, since the pointwise maximum of two convex functions is again a convex function, y(x) is convex whenever b(x) is. We can therefore apply Theorem 2.7 to obtain the necessary and sufficient conditions given by (16) with