DISCRETE HOTELLING PURE LOCATION GAMES: POTENTIALS AND EQUILIBRIA

We study two-player one-dimensional discrete Hotelling pure location games, assuming that demand f(d), as a function of distance d, is constant or strictly decreasing. We show that this game admits a best-response potential. This result holds in particular for f(d) = w^d with 0 < w ≤ 1. For this case special attention is given to the structure of the equilibrium set, and a conjecture about the increasingness of the best-response correspondences is made.


Introduction
In his seminal paper [6], Hotelling presents a location model of two competing retailers. Since then, this model has triggered an increasing flow of research in industrial organization. The pure location part of that model concerns a game in strategic form with a common real segment as strategy set ("Main Street" [6]). Normally demand is assumed to be inelastic (i.e. not dependent on the distance). The more general case, which also allows for elastic demand, has been thoroughly studied in [2, 17]; we further refer to this location game as the cHg (i.e., continuous Hotelling game).
In the cHg payoff functions are discontinuous. As far as we know, there is no general result in terms of the primitives of the game, like that in [10], that guarantees (Nash) equilibrium existence for the cHg. The proof of equilibrium existence in [2] is "by hand", by determining the equilibrium set. As shown in [8], a deeper reason for the cHg to have an equilibrium is that this game is a potential game. To be more specific: if demand is so elastic that the Principle of Minimum Differentiation fails, then the cHg has a continuous best-response potential; if demand is not that elastic, it has a continuous quasi-potential. In the present article we deal with the discrete variant of the cHg, which we shall refer to as the discrete Hotelling game (dHg). Our main result is that the dHg always is a best-response potential game, a stronger result than for the cHg. We also scrutinize the condition for the Principle to hold in a special subclass of dHg for which demand has the exponential form f(d) = w^d, since this Principle, which explains the tendency of agglomeration, is at the heart of location theory.

Now let us have a quick look at the literature on discrete Hotelling games. Contrary to the continuous variant, this variant has received only little attention. As far as we know, the first article to deal with such a game was [16]. It deals with two players and considers inelastic as well as elastic demand; however, the game there is based on "relative payoffs" instead of the usual ones. Mixed Nash equilibria in the case of inelastic demand with three players are studied in [7]. Games on a network (also called "Voronoi games") are studied in [5, 14]. A theory for games with a finite number of players and inelastic demand where consumers have strict preferences over the possible locations is developed in [12].
To our knowledge, [18] is the first article that theoretically analyses Nash equilibria of two-player discrete Hotelling games in a setting of inelastic and elastic demand by means of the demand function f(d) = w^d where 0 < w ≤ 1; so inelastic demand if w = 1 and elastic demand if w < 1. In [18] it was proven (again) "by hand" that this game has an equilibrium, by determining the equilibrium set. As already mentioned, our main result, Theorem 4.1, shows that the dHg is a best-response potential game.
The present article is concerned with two active areas of research in game theory: location games and games having a (pure) Nash equilibrium. Although the former already has a wide range of applications, and the latter is being studied in ever more general frameworks, location games are one of the major areas in which one still does not have general results on the existence of equilibria. Our aim is to shed some new light on this theoretical aspect of location games.
The article is further organized as follows. In Section 2 we formally define the dHg. Section 3 makes some useful observations about equilibria of games with location and player symmetry. In Section 4 we show, among other things, that the dHg is a best-response potential game. Section 5 considers the structure of the equilibrium set for the case f(d) = w^d; in order to obtain this structure, the main result in [18], giving explicit formulas for the equilibrium set, is studied further. Section 6 deals, for this case, with a conjecture about best-response correspondences. Finally, Section 7 compares results for the dHg with those for the cHg.

Setting
In this article we understand by a discrete Hotelling game (dHg) a two-player game in strategic form, with player set {1, 2}, common strategy set S and payoff functions u i : S × S → R given by We refer to f as a demand function. The case where f is constant (not constant) also is referred to as inelastic (elastic) demand. Note that the dHg is a symmetric game. Short possible real-world interpretation: S represents a space of m + 1 locations. Each location is occupied by consumers. There are two players, being retailers, who independently and simultaneously choose a location (may be the same). Next, the consumers of each location, say location x, will shop at a retailer who is located on a for these consumers best location in the sense that it is a closest one. If this location is unique, say y, then the consumers of location x contribute a payoff equal to f (|y − x|) to the retailer at y. If there are two locations which are best, then both retailers receive a payoff equal to f (|y − x|)/2.
Using expressions involving the floor function ⌊x⌋ (the largest integer not exceeding x), the payoff functions can be rewritten in a closed form. As there are two players, the game can be represented in a natural way as an (m + 1) × (m + 1) bi-matrix game; rows and columns are numbered from 0 to m. A special case, extensively studied in [18] and reconsidered below, is where the demand function f is f(d) = w^d, with 0 < w ≤ 1; here we refer to w as the distance factor. Here is a visualization in the case m = 7 and w = 1/2 for the strategy profile (2, 6): locations 0, 1, 2, 3 (in black) contribute completely to the payoff of player 1, locations 5, 6, 7 (in white) completely to that of player 2, and location 4 (in gray) is shared. The payoffs are

u_1(2, 6) = 1/4 + 1/2 + 1 + 1/2 + 1/8 = 19/8 and u_2(2, 6) = 1/8 + 1/2 + 1 + 1/2 = 17/8.

For m = 3 and general distance factor w this demand function gives, in bi-matrix terms, for player 1 the payoffs (rows x_1 = 0, . . . , 3; columns x_2 = 0, . . . , 3):

(1 + w + w² + w³)/2   1                   1 + w/2             1 + w
1 + w + w²            (1 + 2w + w²)/2     1 + w               1 + 3w/2
1 + 3w/2              1 + w               (1 + 2w + w²)/2     1 + w + w²
1 + w                 1 + w/2             1                   (1 + w + w² + w³)/2

It will be convenient to let
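The payoffs above are easy to compute mechanically. The following sketch (our own helper, not from the paper) evaluates the dHg payoffs for f(d) = w^d with exact rational arithmetic and reproduces the worked example with m = 7, w = 1/2 and strategy profile (2, 6):

```python
from fractions import Fraction

def dhg_payoffs(x1, x2, m, w):
    """Payoffs (u1, u2) of the two-player dHg on S = {0, ..., m} with
    f(d) = w**d: consumers shop at the nearest retailer, ties are split."""
    u1 = u2 = Fraction(0)
    for x in range(m + 1):
        d1, d2 = abs(x - x1), abs(x - x2)
        if d1 < d2:
            u1 += w ** d1
        elif d2 < d1:
            u2 += w ** d2
        else:                      # equidistant location: demand is shared
            u1 += w ** d1 / 2
            u2 += w ** d2 / 2
    return u1, u2

u1, u2 = dhg_payoffs(2, 6, m=7, w=Fraction(1, 2))
print(u1, u2)  # 19/8 17/8
```

Using Fraction avoids floating-point ties being missed when comparing payoffs.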

Games with player and location symmetry
In this section X = [0, L] where L is a positive real number ("continuous case") or X = {0, 1, . . . , m} with m a positive integer ("discrete case") and we consider a game in strategic form with two players 1 and 2, with common strategy set X and with payoff functions u 1 , u 2 : X × X → R.
We assume player symmetry, i.e.

u_2(x_1, x_2) = u_1(x_2, x_1) for all (x_1, x_2) ∈ X × X,

and, denoting L and m also by v, location symmetry, i.e.

u_i(v − x_1, v − x_2) = u_i(x_1, x_2) for all (x_1, x_2) ∈ X × X and i ∈ {1, 2}.
The cHg (see Section 7 for a formal definition) and the dHg are examples of such a game. Let E be the Nash equilibrium set of the game. For the cHg there exists an interesting principle, the so-called Principle of Minimum Differentiation. This principle, coined in [3], comes down, for the standard interpretation, to the statement that the retailers like to locate together in the middle; formally, E = {(L/2, L/2)}. Note that in the discrete case there only is a middle if m is even. It is reasonable to formalize this principle in general as E = {(v/2, v/2)}.

Player symmetry implies that the best-response correspondences B_1, B_2 : X ⇉ X are identical:

B_1 = B_2;   (3)

below we denote this common correspondence by B. Location symmetry implies that for every x ∈ X

B(v − x) = {v} − B(x).

Player symmetry also implies for every (e_1, e_2) ∈ E that {(e_1, e_2), (e_2, e_1)} ⊆ E, and location symmetry that (v − e_1, v − e_2) ∈ E. This observation makes us want to regard (e_1, e_2), (e_2, e_1), (v − e_1, v − e_2) and (v − e_2, v − e_1) as essentially the same equilibrium. We formalize this by defining on E the relation ∼ by: (e_1', e_2') ∼ (e_1, e_2) means (e_1', e_2') ∈ {(e_1, e_2), (e_2, e_1), (v − e_1, v − e_2), (v − e_2, v − e_1)}. It is straightforward to check that this relation is an equivalence relation. Denote by [E] the set of its equivalence classes, to be called equilibrium classes, and by [(e_1, e_2)] the equilibrium class of (e_1, e_2) ∈ E. We have

[(e_1, e_2)] = {(e_1, e_2), (e_2, e_1), (v − e_1, v − e_2), (v − e_2, v − e_1)}.   (4)

By the multiplicity of an equilibrium we understand the number of elements of its equilibrium class. Of course, if the game has a unique equilibrium, then there is just one equilibrium class, consisting of this equilibrium, and this equilibrium has multiplicity 1. Note that, with the action distance of an action profile (x_1, x_2) defined by |x_2 − x_1|, each element of a given equilibrium class has the same action distance. Also note that (4) implies: if (e_1, e_2) is the unique equilibrium, then e_1 = e_2 = v/2. Thus: if there is a unique equilibrium in the discrete case, then v is even. And: if there is a unique equilibrium, then the Principle of Minimum Differentiation holds.
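These symmetry facts can be illustrated numerically. The sketch below (our own helper names, assuming the dHg of Section 2 with f(d) = w^d) enumerates the pure equilibria of a small dHg and groups them into equilibrium classes under the relation ∼:

```python
from fractions import Fraction
from itertools import product

def payoff1(x1, x2, m, w):
    """Player 1's dHg payoff for f(d) = w**d; consumers go to the nearest
    retailer, ties split equally. By symmetry u2(x1, x2) = u1(x2, x1)."""
    u = Fraction(0)
    for x in range(m + 1):
        d1, d2 = abs(x - x1), abs(x - x2)
        if d1 < d2:
            u += w ** d1
        elif d1 == d2:
            u += w ** d1 / 2
    return u

def equilibria(m, w):
    S = range(m + 1)
    best = {b: max(payoff1(x, b, m, w) for x in S) for b in S}
    return {(a, b) for a, b in product(S, S)
            if payoff1(a, b, m, w) == best[b]      # a is a best response to b
            and payoff1(b, a, m, w) == best[a]}    # b is a best response to a

def equilibrium_classes(E, m):
    # the class of (a, b) is {(a,b), (b,a), (m-a, m-b), (m-b, m-a)}
    return {frozenset({(a, b), (b, a), (m - a, m - b), (m - b, m - a)})
            for a, b in E}

E = equilibria(6, Fraction(1))         # inelastic demand, v = m = 6 even
print(E)                               # {(3, 3)}: unique, at the middle
print(len(equilibrium_classes(E, 6)))  # 1
```

Consistent with the observation above, the unique equilibrium sits at (v/2, v/2) and forms a single class of multiplicity 1.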
The following proposition is easy to prove:

Potential games
Consider a game in strategic form. Denote its player set by N := {1, . . . , n}, the strategy set of player i by X_i and his payoff function by u_i. We denote X := X_1 × · · · × X_n, X_î := X_1 × · · · × X_{i−1} × X_{i+1} × · · · × X_n, identify X with X_i × X_î, and accordingly write x ∈ X as x = (x_i; x_î).

The dHg is a best-response potential game
In this subsection we show that the dHg is a best-response potential game in the sense of [19]. This means that there exists a best-response potential, i.e. a function P : S × S → R such that, for each player i and every strategy x_î of the other player,

B_i(x_î) = argmax_{x_i ∈ S} P(x_i; x_î).

As (3) holds and the next theorem deals with a symmetric best-response potential P•, we have B_1 = B_2 = B•, with the correspondence B• : S ⇉ S defined by B•(z) = argmax_{x ∈ S} P•(x, z). Theorem 4.1(1) then reads: if f is strictly decreasing, then P• : S × S → R, defined there by an explicit formula, is a best-response potential.

Other potentials
Having Theorem 4.1, the question may arise whether a dHg admits another type of potential, such as an exact potential or a generalized ordinal potential.
If m = 1, then u_1 = u_2, i.e. the game is an identical-interest game and therefore an exact potential game. The following proposition shows that our class of dHgs is contained neither in the class of exact potential games nor in the class of generalized ordinal potential games. (1) The game is for no w an exact potential game.
(2) By definition, γ is an improvement path if and only if a chain of strict payoff inequalities, starting with one for u_1(2, 0), holds; this is the case exactly when w > 1/2. If a game has a generalized ordinal potential, then it cannot have a non-trivial cyclic improvement path ([11]). Thus for w > 1/2 the game is not a generalized ordinal potential game.
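The exact-potential part can be checked mechanically with the four-cycle criterion of Monderer and Shapley: a finite game admits an exact potential if and only if, along every closed path of four unilateral deviations, the deviators' payoff changes sum to zero. A brute-force sketch (our own helper names; dHg payoffs with f(d) = w^d as in Section 2):

```python
from fractions import Fraction
from itertools import product

def payoff1(x1, x2, m, w):
    # player 1's dHg payoff for f(d) = w**d; ties split equally
    u = Fraction(0)
    for x in range(m + 1):
        d1, d2 = abs(x - x1), abs(x - x2)
        if d1 < d2:
            u += w ** d1
        elif d1 == d2:
            u += w ** d1 / 2
    return u

def admits_exact_potential(m, w):
    """Monderer-Shapley four-cycle test; u2(x1, x2) = u1(x2, x1) by symmetry."""
    u1 = lambda a, b: payoff1(a, b, m, w)
    u2 = lambda a, b: payoff1(b, a, m, w)
    S = range(m + 1)
    for a, b, c, d in product(S, repeat=4):
        # cycle (a,c) -> (b,c) -> (b,d) -> (a,d) -> (a,c); sum deviators' gains
        total = (u1(b, c) - u1(a, c)) + (u2(b, d) - u2(b, c)) \
              + (u1(a, d) - u1(b, d)) + (u2(a, c) - u2(a, d))
        if total != 0:
            return False
    return True

print(admits_exact_potential(1, Fraction(1, 2)))  # True: identical interests
print(admits_exact_potential(3, Fraction(1, 2)))  # False
```

For m = 3 the cycle with player-1 strategies 0, 1 and player-2 strategies 1, 2 already has total gain w² > 0, so no exact potential exists for any w in that instance.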

Structure of the equilibrium set in case f(d) = w^d

We have seen that the dHg has an equilibrium. However, one may wish to have more insight into the structure of its equilibrium set E.
Below we present, for the case f(d) = w^d where 0 < w ≤ 1, some results concerning the structure of E. These results will be proved by a straightforward inspection of the explicit formulas in Theorems 1 and 2 in [18]. Obtaining these formulas (for w < 1) was quite complicated: 10 cases had to be distinguished.
The analysis of the case of inelastic demand, meaning that w = 1, is simple: for the best-response correspondence B one finds

B(x) = {x + 1} if x < (m − 1)/2 and B(x) = {x − 1} if x > (m + 1)/2,   (6)

and, for the middle locations,

B(m/2) = {m/2} if m is even; B((m − 1)/2) = B((m + 1)/2) = {(m − 1)/2, (m + 1)/2} if m is odd.   (7)
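For w = 1 these best responses can be confirmed by brute force; the following sketch (our own code, not from the paper) compares an exhaustively computed B with the closed-form description of inelastic-demand best responses:

```python
from fractions import Fraction

def payoff1(x1, x2, m):
    # player 1's dHg payoff for inelastic demand f(d) = 1; ties split equally
    u = Fraction(0)
    for x in range(m + 1):
        d1, d2 = abs(x - x1), abs(x - x2)
        if d1 < d2:
            u += 1
        elif d1 == d2:
            u += Fraction(1, 2)
    return u

def best_response(x2, m):
    S = range(m + 1)
    top = max(payoff1(x, x2, m) for x in S)
    return {x for x in S if payoff1(x, x2, m) == top}

def closed_form(x, m):
    # for w = 1: move one step toward the middle, with special middle cases
    if 2 * x < m - 1:
        return {x + 1}
    if 2 * x > m + 1:
        return {x - 1}
    if m % 2 == 0:                       # x = m/2
        return {m // 2}
    return {(m - 1) // 2, (m + 1) // 2}  # x in {(m-1)/2, (m+1)/2}

for m in (4, 5, 6, 7):
    assert all(best_response(x, m) == closed_form(x, m) for x in range(m + 1))
print("closed form confirmed for m = 4..7")
```

Note how, for m odd, the two middle locations best-respond to each other as well as to themselves, which is why uniqueness fails there.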
(1b) By Theorems 1 and 2(Id) in [18]. (1c) By Theorems 1 and 2(I) in [18]. (2a), (2b) By Theorems 1 and 2(IId) in [18]. (2c) By Theorems 1 and 2(II) in [18].

The inequality w > w_c may be interpreted as "f is sufficiently inelastic". Now let us consider the equilibrium classes. First suppose m is even. Then, if w > w_c, Proposition 5.1(1) shows that there is 1 equilibrium class, and if w = w_c, there are 2 equilibrium classes. For w < w_c the situation is more complicated. Next suppose m is odd. Then, if w_c < w < 1, Proposition 5.1(2) shows that there is 1 equilibrium class, and if w = w_c, there are 3 equilibrium classes. Again, if w < w_c, the situation is more complicated. However, Proposition 5.3(4) still holds.

Proposition 5.3.
(1) The game has a unique equilibrium if and only if m is even and w = 1.

Proof. By Theorems 1 and 2 in [18].
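Proposition 5.3(1) can be spot-checked by enumerating the equilibria of small instances (a brute-force sketch with our own helper names; exact arithmetic via Fraction):

```python
from fractions import Fraction
from itertools import product

def payoff1(x1, x2, m, w):
    # player 1's dHg payoff for f(d) = w**d; ties split equally
    u = Fraction(0)
    for x in range(m + 1):
        d1, d2 = abs(x - x1), abs(x - x2)
        if d1 < d2:
            u += w ** d1
        elif d1 == d2:
            u += w ** d1 / 2
    return u

def equilibria(m, w):
    S = range(m + 1)
    best = {b: max(payoff1(x, b, m, w) for x in S) for b in S}
    return {(a, b) for a, b in product(S, S)
            if payoff1(a, b, m, w) == best[b] and payoff1(b, a, m, w) == best[a]}

print(equilibria(4, Fraction(1)))               # {(2, 2)}: m even, w = 1
print(len(equilibria(3, Fraction(1))))          # 4: m odd, w = 1
print(len(equilibria(4, Fraction(1, 2))) > 1)   # True: m even, w < 1
```

The three cases match the proposition: uniqueness occurs exactly for m even together with w = 1.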
A question for further research: what remains of these results for a general dHg?

A conjecture
Various classes of finite games in strategic form that have a (pure) equilibrium have been identified. We mention here: potential games, supermodular games, symmetric games with integrally concave payoffs ([9]) and games with increasing best-response correspondences.
In [18] it has been shown that a dHg may fail to be a supermodular game and may fail to have integrally concave payoffs. In the present article we have shown that the dHg is a best-response potential game. Below we look at increasing best-response correspondences in the case of the demand function f(d) = w^d.
So consider the best-response correspondence B = B_1 = B_2 : S ⇉ S. For w = 1 it is easy to check from (6) and (7) that B has an increasing selection. However, this does not hold for all m and w: indeed, the next proposition implies that for m odd and w < 1 it holds that B(p) ⊆ {p + 1, . . . , m} and B(p + 1) ⊆ {0, . . . , p}, where p = (m − 1)/2.
Proposition 6.1. Suppose w = 1 or m is even. Then min(B) and max(B) are increasing on S− and on S+.
Proof. (6) and (7) show that the statements hold for w = 1. If m = 1 or m = 2, then #S− = #S+ = 1 and so the statements are trivial. As B(m − x_2) = {m} − B(x_2) for x_2 ∈ S, the correctness of the statements for S− follows from those for S+. Now suppose w < 1 and m ≥ 3; we prove the statements for S+. First we prove that min(B) is increasing on S+. So fix x_2, x_2' ∈ S+ with x_2 < x_2' and let x_1 = min B(x_2) and x_1' = min B(x_2'). As m ≥ 3, we have by Lemma 6.2(2) that 0 < x_1 < m and 0 < x_1' < m.
With an analogous proof one shows that max(B) is increasing on S + .
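Both the failure of monotonicity for m odd with w < 1 and the monotonicity asserted in the proposition can be illustrated numerically (a sketch with our own helper names):

```python
from fractions import Fraction

def payoff1(x1, x2, m, w):
    # player 1's dHg payoff for f(d) = w**d; ties split equally
    u = Fraction(0)
    for x in range(m + 1):
        d1, d2 = abs(x - x1), abs(x - x2)
        if d1 < d2:
            u += w ** d1
        elif d1 == d2:
            u += w ** d1 / 2
    return u

def B(x2, m, w):
    S = range(m + 1)
    top = max(payoff1(x, x2, m, w) for x in S)
    return {x for x in S if payoff1(x, x2, m, w) == top}

# m odd, w < 1: best responses jump across the middle, so no increasing selection
m, w = 3, Fraction(1, 2)
print(B(1, m, w), B(2, m, w))  # {2} {1}

# m even, w < 1: min(B) and max(B) are increasing on S+ = {x : x > m/2}
m, w = 6, Fraction(1, 2)
splus = [x for x in range(m + 1) if 2 * x > m]
mins = [min(B(x, m, w)) for x in splus]
maxs = [max(B(x, m, w)) for x in splus]
assert mins == sorted(mins) and maxs == sorted(maxs)
print("min(B) and max(B) monotone on S+ for m =", m)
```

For m = 3 and w = 1/2 the middle locations best-respond to each other's opposite side, which is exactly the obstruction described above.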

Comparing the dHg with the cHg
The cHg, i.e. the continuous Hotelling game, is the game obtained by replacing the sums in formula (1) by the corresponding integrals, where now S is a proper real interval [0, L]. Below we quickly compare results for the dHg and the cHg. As for the dHg general results on the structure of the equilibrium set are only available for the demand function f(d) = w^d (0 < w ≤ 1), we do this for this demand function. A new notion: the function σ : E → R defined by σ(e_1, e_2) = e_1 + e_2 is referred to as the Nash sum. The next table compares some of the above results with those in [8, 17]:

property / game               cHg                 dHg
# equilibria                  1, 2                1, 2, . . . , 8
multiplicity of equilibrium   1 or 2              1, 2, 4
Nash sum                      L                   m − 2, m − 1, . . . , m + 2
# equilibrium classes         1                   1, 2 or 3
potential                     continuous quasi    best-response

More detailed results for the cHg are contained in [8] and [17]. For example, the cHg has a continuous best-response potential if and only if w ≤ 2^{−2/L}. A final remark: as P• in Theorem 4.1 is a best-response potential for the dHg, it is well known that a maximiser of P• is a Nash equilibrium. However, it turns out that a Nash equilibrium need not be a maximiser of P•, i.e. P• need not be a quasi-potential. This can be checked by an explicit calculation: for the concrete bi-matrix game (with m = 7) in Section 6, the set of maximisers of P• equals {(2, 5), (5, 2)}; however, the Nash equilibrium set is larger.