Learning the distribution with largest mean: two bandit frameworks

Over the past few years, the multi-armed bandit model has become increasingly popular in the machine learning community, in part because of applications such as online content optimization. This paper reviews two different sequential learning tasks that have been considered in the bandit literature; both can be formulated as (sequentially) learning which distribution has the highest mean among a set of distributions, under some constraints on the learning process. For each of them (regret minimization and best arm identification), we present (asymptotically) optimal algorithms, some of which are quite recent. We compare the behavior of the sampling rule of each algorithm, as well as the complexity terms associated with each problem.


Introduction
Bandit models can be traced back to the 1930s and the work of [Thompson, 1933] in the context of medical trials. They address the idealized situation where, for a given symptom, a doctor has several treatments at her disposal, but has no prior knowledge about their efficacies. These efficacies need to be learnt by allocating treatments to patients and observing the results. As the doctor aims at healing as many patients as possible, she would like to select the best treatment as often as possible, even though it is unknown to her at the beginning. After each patient, the doctor takes the outcome of the treatment into account in order to decide which treatment to assign to the next patient: the learning process is sequential.
This archetypal situation is mathematically captured by the multi-armed bandit model. It involves an agent (the doctor) interacting with a set of $K$ probability distributions $\nu_1, \dots, \nu_K$ called arms (the treatments), which she may sequentially sample. The mean of arm $a$ (which is unknown to the agent) is denoted by $\mu_a$. At round $t$, the agent selects an arm $A_t \in \{1, \dots, K\}$ and subsequently observes a sample $X_t \sim \nu_{A_t}$ from the associated distribution. The arm $A_t$ is selected according to a sampling strategy denoted by $\pi = (\pi_t)_{t \geq 1}$, where $\pi_t$ maps the history of past arm choices and observations $A_1, X_1, \dots, A_{t-1}, X_{t-1}$ to an arm. In a simplistic model for the clinical trial example, each arm is a Bernoulli distribution that indicates the success or failure of the treatment. After sampling an arm (giving a treatment) at time $t$, the doctor observes whether the patient was healed ($X_t = 1$) or not ($X_t = 0$). In this example as in many others, the samples gathered can be considered as rewards, and a natural goal for the agent is to adjust her sampling strategy so as to maximize the expected sum $\mathbb{E}[\sum_{t=1}^{T} X_t]$ of the rewards gathered up to some given horizon $T$. This is equivalent to minimizing the regret, defined as the gap between the expected cumulated reward of the strategy $\pi$ and that of an oracle strategy always playing the best arm $a^* = \operatorname{argmax}_a \mu_a$, whose mean is $\mu^* = \max_a \mu_a$.
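To make the interaction protocol concrete, here is a minimal simulation sketch in Python (all names are ours and purely illustrative): a Bernoulli bandit environment, a placeholder sampling rule, and the accumulation of the expected regret.

```python
import random

class BernoulliBandit:
    """K-armed Bernoulli bandit: arm a yields reward 1 with probability mu[a], and 0 otherwise."""
    def __init__(self, mu):
        self.mu = mu

    def pull(self, a):
        return 1.0 if random.random() < self.mu[a] else 0.0

def play(bandit, sampling_rule, horizon):
    """Run a sampling rule for `horizon` rounds and return the expected (pseudo-)regret."""
    mu_star = max(bandit.mu)
    history = []                        # past (arm, reward) pairs, available to the sampling rule
    regret = 0.0
    for t in range(1, horizon + 1):
        a = sampling_rule(t, history)
        x = bandit.pull(a)
        history.append((a, x))
        regret += mu_star - bandit.mu[a]
    return regret

# Placeholder sampling rule: pick one of the 3 arms uniformly at random.
uniform_rule = lambda t, history: random.randrange(3)
print(play(BernoulliBandit([0.5, 0.4, 0.3]), uniform_rule, horizon=1000))
```

Any of the strategies discussed below can be plugged in place of the uniform rule.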
A sampling strategy minimizing the regret should not only learn which arm has the highest mean: it must also avoid incurring overly large losses during this learning phase. In other words, it has to achieve a good trade-off between exploration (trying all the arms in order to estimate their means) and exploitation (focusing on the arm that appears best so far). Despite its simplicity, the multi-armed bandit model already captures the fundamental dilemma inherent to reinforcement learning [Sutton and Barto, 1998], where the goal is to learn how to act optimally in a random environment based on numeric feedback. The fundamental model of reinforcement learning is the Markov Decision Process [Puterman, 1994], which involves the additional notion of system state; a bandit model is simply a Markov Decision Process with a single state.
Sometimes, rewards actually correspond to profits for the agent. In fact, the imaginatively named multi-armed bandits refer to casino slot machines: a player sequentially selects one of them (each also called a one-armed bandit), pulls its arm, and possibly collects her winnings. While the model was initially motivated by clinical trials, modern applications involve neither bandits nor casinos, but for example the design of recommender systems [Chu et al., 2011], or more generally content optimization. Indeed, a bandit algorithm may be used by a company to dynamically select which version of its website to display to each user, in order to maximize the number of conversions (purchases or subscriptions, for example). In the case of two competing options, this problem is known as A/B testing. It motivates the consideration of a different optimization problem in a bandit model: rather than continuously changing its website, the company may prefer to experiment during a testing phase only, aimed at identifying the best version, and then use that version consistently for a much bigger audience.
In such a testing phase, the objective is different: one aims at learning which arm has the highest mean, without constraint on the cumulative reward. In other words, the company agrees to lose some profit during the testing phase, as long as the length of this phase is as short as possible. In this framework, called best arm identification, the sampling rule is designed so as to identify the arm with the highest mean as fast and as confidently as possible. Two alternative frameworks are considered in the literature. In the fixed-budget setting [Audibert et al., 2010], the length of the trial phase is given and the goal is to minimize the probability of misidentifying the best arm. In the fixed-confidence setting [Even-Dar et al., 2006], a risk parameter δ is given and the procedure is allowed to choose when the testing phase stops. It must guarantee that the misidentification probability is smaller than δ while minimizing the sample complexity, that is, the expected number of samples required before electing the arm. Although the study of best arm identification problems is relatively recent in the bandit literature, similar questions were already addressed in the 1950s under the name of ranking and identification problems [Bechhofer, 1954, Bechhofer et al., 1968], and they are also related to the sequential adaptive hypothesis testing framework introduced by [Chernoff, 1959].
In this paper, we review a few algorithms for both regret minimization and best arm identification in the fixed-confidence setting. The algorithms and results are presented for simple classes of parametric bandit models, and we explain along the way how some of them can be extended to more general models. In each case we introduce an asymptotic notion of optimality and present algorithms that are asymptotically optimal. Our optimality notion is instance-dependent, in the sense that we characterize the minimal regret or minimal sample complexity achievable on each specific bandit instance. The paper is structured as follows: we introduce in Section 1 the parametric bandit models considered in the paper, and present some useful probabilistic tools for the analysis of bandit algorithms. We discuss the regret minimization problem in Section 2 and the best arm identification problem in Section 3. We comment in Section 4 on the different behaviors of the algorithms aimed at these distinct objectives, and on the different information-theoretic quantities characterizing their complexities.

Some Assumptions on the Arms Distributions
Unless specified otherwise, we assume in the rest of the paper that all the arms belong to a class of distributions parameterized by their means, $\mathcal{D}_I = \{\nu_\mu : \mu \in I\}$, where $I$ is an interval of $\mathbb{R}$. We assume that for all $\mu \in I$, $\nu_\mu$ has a density denoted by $f_\mu$ with respect to some fixed reference measure, and that $\mathbb{E}_{X \sim \nu_\mu}[X] = \mu$. We denote by $d(\mu, \mu')$ the Kullback-Leibler divergence between the distribution of mean $\mu$ and that of mean $\mu'$. We shall in particular consider examples in which $\mathcal{D}_I$ forms a one-parameter exponential family (e.g. Bernoulli distributions, Gaussian distributions with known variance, exponential distributions), for which there is a closed-form formula for the divergence function $d$. Under this assumption, a bandit model is fully described by a vector $\mu = (\mu_1, \dots, \mu_K)$ in $I^K$ such that $\nu_a = \nu_{\mu_a}$ for all $a \in \{1, \dots, K\}$. We denote by $\mathbb{P}_\mu$ and $\mathbb{E}_\mu$ the probability and expectation under the bandit model $\mu$. Under $\mathbb{P}_\mu$, the sequence $(Y_{a,s})_{s \in \mathbb{N}^*}$ of successive observations from arm $a$ is i.i.d. with law $\nu_{\mu_a}$, and the families $(Y_{a,s})_s$ are independent across arms. Given a strategy $\pi$, we let $N^\pi_a(t) = \sum_{s=1}^{t} \mathbb{1}_{(A_s = a)}$ be the number of draws of arm $a$ up to and including round $t \geq 1$. Hence, upon selection of the arm $A_t$, the observation made at round $t$ is $X_t = Y_{A_t, N^\pi_{A_t}(t)}$. When the strategy $\pi$ is clear from the context, we may drop the superscript $\pi$ and simply write $N_a(t)$. We define $\hat\mu_{a,s} = \frac{1}{s}\sum_{i=1}^{s} Y_{a,i}$ as the empirical mean of the first $s$ observations from arm $a$, and $\hat\mu_a(t) = \hat\mu_{a, N_a(t)}$ as the empirical mean of arm $a$ at round $t$ of the bandit algorithm.
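For concreteness, the divergence function $d$ has a simple closed form for the three exponential families mentioned above. The small Python sketch below (function names are ours, boundary cases ignored) collects these formulas, which are used repeatedly in the rest of the paper.

```python
import math

def kl_bernoulli(x, y):
    # d(x, y) for Bernoulli distributions with means x, y in (0, 1).
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def kl_gaussian(x, y, sigma2=1.0):
    # d(x, y) for Gaussian distributions with means x, y and known variance sigma2.
    return (x - y) ** 2 / (2 * sigma2)

def kl_exponential(x, y):
    # d(x, y) for exponential distributions with means x, y > 0.
    return x / y - 1 - math.log(x / y)
```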
For the two frameworks that we consider, regret minimization and best arm identification, we adopt the same approach. First, we establish a lower bound on the target quantity (regret or sample complexity). Then, we present strategies whose regret or sample complexity asymptotically matches the lower bound. Two central tools for deriving lower bounds and algorithms are changes of distribution and confidence intervals.

Changes Of Distribution
Problem-dependent lower bounds in the bandit literature all rely, in the end, on change of distribution arguments (see e.g. [Lai and Robbins, 1985, Burnetas and Katehakis, 1996, Mannor and Tsitsiklis, 2004, Audibert et al., 2010]). In order to control the probability of some event under the bandit model $\mu$, the idea is to consider an alternative bandit model $\lambda$ under which some assumptions on the strategy make it easier to control the probability of this event. This alternative model $\lambda$ should be close enough to $\mu$, in the sense that the transportation cost should not be too high. This transportation cost is related to the log-likelihood ratio of the observations up to time $t$, which we denote by
$$L^\pi_t(\mu, \lambda) := \sum_{s=1}^{t} \log \frac{f_{\mu_{A_s}}(X_s)}{f_{\lambda_{A_s}}(X_s)}.$$
Letting $\mathcal{F}_t = \sigma(X_1, \dots, X_t)$ be the σ-field generated by the observations up to time $t$, it is indeed well known that for all $E \in \mathcal{F}_t$, $\mathbb{P}_\mu(E) = \mathbb{E}_\lambda[\mathbb{1}_E \exp(L^\pi_t(\mu, \lambda))]$. The simplest way of writing changes of distribution (see [Kaufmann et al., 2014, Combes and Proutière, 2014] and [Garivier et al., 2016b]) directly relates the expected log-likelihood ratio of the observations under two bandit models to the probability of any event under the two models. If $S$ is a stopping time, one can show that for any two bandit models $\mu$, $\lambda$ and for any event $E \in \mathcal{F}_S$,
$$\mathbb{E}_\mu[L^\pi_S(\mu, \lambda)] \geq \mathrm{kl}\big(\mathbb{P}_\mu(E), \mathbb{P}_\lambda(E)\big),$$
where $\mathrm{kl}(x, y) = x\log(x/y) + (1-x)\log((1-x)/(1-y))$ is the binary relative entropy, i.e. the Kullback-Leibler divergence between two Bernoulli distributions of means $x$ and $y$. Using Wald's lemma, one can show that in the particular case of bandit models the expected log-likelihood ratio can be expressed in terms of the expected number of draws of each arm, which yields the following result.
Lemma 1. Let $S$ be a stopping time. For any event $E \in \mathcal{F}_S$,
$$\sum_{a=1}^{K} \mathbb{E}_\mu[N_a(S)]\, d(\mu_a, \lambda_a) \geq \mathrm{kl}\big(\mathbb{P}_\mu(E), \mathbb{P}_\lambda(E)\big).$$
Two different proofs of this result can be found in the literature; in [Garivier et al., 2016b], a slightly more general result is derived from the entropy contraction principle. As we will see in the next sections, this lemma is particularly powerful for proving lower bounds on the regret or the sample complexity, as both quantities are closely related to the expected number of draws of each arm.

Confidence Intervals
In both the regret minimization and best arm identification frameworks, the sampling rule has to decide which arm to sample from at the current round, based on the observations gathered at previous rounds. This decision may be based on the set of statistically plausible values for the mean of each arm $a$, which is materialized by a confidence interval on $\mu_a$. Note that in this sequential learning framework, this interval has to be built from a random number of observations.
The line of research leading to the UCB1 algorithm [Auer et al., 2002] worked under the assumption that each arm is a bounded distribution supported in $[0, 1]$. Bounded distributions are particular examples of sub-Gaussian distributions. A random variable $X$ is said to be $\sigma^2$-sub-Gaussian if $\mathbb{E}[e^{\lambda(X - \mathbb{E}[X])}] \leq \exp(\lambda^2\sigma^2/2)$ holds for all $\lambda \in \mathbb{R}$. Hoeffding's lemma states that distributions with a support bounded in $[a, b]$ are $(b-a)^2/4$-sub-Gaussian. If arm $a$ is $\sigma^2$-sub-Gaussian, Hoeffding's inequality together with a union bound to handle the random number of observations shows that
$$\mathbb{P}_\mu\left(\hat\mu_a(t) + \sqrt{\frac{2\sigma^2\gamma}{N_a(t)}} < \mu_a\right) \leq t e^{-\gamma}. \quad (1)$$
Hence one can build an upper confidence bound on $\mu_a$ with coverage probability $1 - \delta$ by setting $\gamma = \log(t/\delta)$. There are two levels of improvement here. First, under more specific assumptions on the arms (for example if the arms belong to some exponential family of distributions), Chernoff's inequality has an explicit form that can be used directly in place of Hoeffding's inequality. It states that $\mathbb{P}(\hat\mu_{a,s} > x) \leq \exp(-s\, d(x, \mu_a))$ for $x > \mu_a$, where $d(x, y)$ is the KL-divergence function defined in Section 1.1. Then, to handle the random number of observations, a peeling argument can be used rather than a union bound. This argument, initially developed in the context of Markov order estimation (see [Garivier and Leonardi, 2011]), was used in [Garivier and Moulines, 2011, Bubeck, 2010] under a sub-Gaussian assumption. Combining these two ideas, [Garivier and Cappé, 2011] show that, letting
$$u_a(t) = \max\left\{q : N_a(t)\, d(\hat\mu_a(t), q) \leq \gamma\right\},$$
one has
$$\mathbb{P}_\mu\left(u_a(t) < \mu_a\right) \leq e\lceil \gamma\log(t)\rceil e^{-\gamma}. \quad (2)$$
The improvement can be measured by specializing this result to Bernoulli distributions, for which the two bounds (1) and (2) hold. By Pinsker's inequality $\mathrm{kl}(x, y) \geq 2(x-y)^2$, it holds that $u_a(t) \leq \hat\mu_a(t) + \sqrt{2\sigma^2\gamma/N_a(t)}$ (with $\sigma^2 = 1/4$). Hence, for $\gamma$ and $t$ such that $e\gamma\log(t) \leq t$, $u_a(t)$ is a smaller upper confidence bound on $\mu_a$ with the same coverage guarantees. As we will see in the next sections, such refined confidence intervals have yielded substantial improvements in the bandit literature, and lead to simple UCB-type algorithms that are asymptotically optimal for regret minimization.
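The following Python sketch (names are ours) illustrates the comparison for a Bernoulli arm: it computes both the Hoeffding-type upper confidence bound of (1) and the KL-based index $u_a(t)$ of (2), the latter by bisection over $q$.

```python
import math

def kl_bernoulli(x, y, eps=1e-12):
    # Binary KL divergence d(x, y), clipped away from 0 and 1 for numerical safety.
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def hoeffding_ucb(mean, n, gamma, sigma2=0.25):
    # Bound (1): empirical mean plus a sub-Gaussian deviation term (sigma2 = 1/4 for Bernoulli).
    return mean + math.sqrt(2 * sigma2 * gamma / n)

def kl_ucb(mean, n, gamma, tol=1e-6):
    # Bound (2): largest q such that n * d(mean, q) <= gamma, found by bisection.
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if n * kl_bernoulli(mean, mid) <= gamma:
            lo = mid
        else:
            hi = mid
    return lo

# Example: 20 observations with empirical mean 0.6, at round t = 1000, with gamma = log(t / delta).
t, delta, n, mean = 1000, 0.05, 20, 0.6
gamma = math.log(t / delta)
print(hoeffding_ucb(mean, n, gamma))   # ~1.10: not even a valid Bernoulli mean
print(kl_ucb(mean, n, gamma))          # ~0.94: tighter, and always stays in [0, 1]
```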

Optimal Strategies For Regret Minimization
After the initial work of [Thompson, 1933], bandit models were studied again in the 1950s, for example in the paper of [Robbins, 1952], in which the notion of regret is introduced. Interestingly, a large part of the early work on bandit models takes a slightly different Bayesian perspective: the goal is also to maximize the expected sum of rewards, but the expectation is also computed over a prior distribution on the arms (see [Berry and Fristedt, 1985] for a survey). It turns out that this Bayesian multi-armed bandit problem can be solved exactly using dynamic programming [Bellman, 1956], but the exact solution is in most cases intractable. Practical solutions may be found when one aims at maximizing the sum of discounted rewards over an infinite horizon: the seminal paper of [Gittins, 1979] shows that the Bayesian optimal policy has a simple form where, at each round, an index is computed for each arm and the arm with the highest index is selected.
Gittins's work motivated the focus on index policies, in which arm selection reduces to computing an index for each arm and playing an arm with the highest index. Such index policies have also emerged in the "frequentist" literature on multi-armed bandits. Asymptotic expansions of the index put forward by Gittins were proposed; they have led to new policies that could be studied directly, forgetting about their Bayesian roots. This line of research includes in particular the seminal work of [Lai and Robbins, 1985].

A Lower Bound on the Regret
In 1985, Lai and Robbins characterized the optimal regret rate in one-parameter bandit models, by providing an asymptotic lower bound on the regret and a first index policy with matching regret [Lai and Robbins, 1985]. In order to understand this lower bound, one can first observe that the regret can be expressed in terms of the number of draws of each sub-optimal arm. Indeed, a simple conditioning shows that for any strategy $\pi$,
$$R^\pi_\mu(T) = \sum_{a=1}^{K} (\mu^* - \mu_a)\, \mathbb{E}_\mu[N^\pi_a(T)], \quad (3)$$
where we recall that $N^\pi_a(t) = \sum_{s=1}^{t}\mathbb{1}_{(A_s=a)}$ is the number of times arm $a$ has been selected up to time $t$. A strategy is said to be uniformly efficient if its regret is small on every bandit model in our class, that is if for all $\mu \in I^K$ and for every $\alpha \in\, ]0,1]$, $R^\pi_\mu(T) = o(T^\alpha)$.
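The decomposition (3) is a consequence of the tower rule; since $\mathbb{E}[X_t \mid A_t = a] = \mu_a$ and $\sum_{a} \mathbb{E}_\mu[N^\pi_a(T)] = T$, one can write
$$R^\pi_\mu(T) = T\mu^* - \mathbb{E}_\mu\Big[\sum_{t=1}^{T} X_t\Big] = T\mu^* - \sum_{a=1}^{K}\mu_a\,\mathbb{E}_\mu[N^\pi_a(T)] = \sum_{a=1}^{K}(\mu^* - \mu_a)\,\mathbb{E}_\mu[N^\pi_a(T)].$$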
Theorem 2. [Lai and Robbins, 1985] Any uniformly efficient strategy $\pi$ satisfies, for all $\mu \in I^K$ and for every arm $a$ such that $\mu_a < \mu^*$,
$$\liminf_{T \to \infty} \frac{\mathbb{E}_\mu[N^\pi_a(T)]}{\log(T)} \geq \frac{1}{d(\mu_a, \mu^*)}.$$
By Equation (3), this result directly provides a logarithmic lower bound on the regret:
$$\liminf_{T \to \infty} \frac{R^\pi_\mu(T)}{\log(T)} \geq C(\mu) := \sum_{a : \mu_a < \mu^*} \frac{\mu^* - \mu_a}{d(\mu_a, \mu^*)}. \quad (4)$$
This lower bound motivates the definition of an asymptotically optimal algorithm (on a set of parametric bandit models $\mathcal{D}_I$) as an algorithm for which, for all $\mu \in I^K$, the regret is asymptotically upper bounded by $C(\mu)\log(T)$. This defines an instance-dependent notion of optimality, as we want an algorithm that attains the best regret rate on every bandit instance $\mu$. However, for instances $\mu$ in which some arms are very close to the optimal arm, the constant $C(\mu)$ may be very large, and the $C(\mu)\log(T)$ bound is not very interesting in finite time. For such instances, one may prefer regret upper bounds that scale in $\sqrt{KT}$ and are independent of $\mu$, matching the minimax regret lower bound obtained by [Cesa-Bianchi and Lugosi, 2006, Bubeck and Cesa-Bianchi, 2012] for Bernoulli bandits:
$$\inf_{\pi}\ \sup_{\mu \in [0,1]^K}\ R^\pi_\mu(T) \geq \frac{1}{20}\sqrt{KT}.$$
Logarithmic instance-dependent regret lower bounds have also been obtained under more general assumptions on the arms' distributions [Burnetas and Katehakis, 1996], and even in some examples of structured bandit models, in which they take a less explicit form [Graves and Lai, 1997, Magureanu et al., 2014]. All these lower bounds rely on a change of distribution argument, and we now explain how to easily obtain the lower bound of Theorem 2 using the tool described in Section 1.2, Lemma 1.
Fixing a suboptimal arm $a$ in the bandit model $\mu$, we define an alternative bandit model $\lambda$ such that $\lambda_i = \mu_i$ for all $i \neq a$ and $\lambda_a = \mu^* + \epsilon$. In $\lambda$, arm $a$ is now the optimal arm, hence a uniformly efficient algorithm will draw it very often. As arm $a$ is the only arm that has been modified in $\lambda$, the statement of Lemma 1 takes the simple form
$$\mathbb{E}_\mu[N_a(T)]\, d(\mu_a, \mu^* + \epsilon) \geq \mathrm{kl}\big(\mathbb{P}_\mu(A_T), \mathbb{P}_\lambda(A_T)\big)$$
for any event $A_T \in \mathcal{F}_T$. Now the event $A_T := (N_a(T) < T/2)$ is very likely under $\mu$, in which $a$ is suboptimal, and very unlikely under $\lambda$, in which $a$ is optimal. More precisely, the uniform efficiency assumption permits to show that $\mathbb{P}_\mu(A_T) \to 1$ and $\mathbb{P}_\lambda(A_T) \leq o(T^\alpha)/T$ for all $\alpha$ when $T$ goes to infinity. This leads to $\mathrm{kl}\big(\mathbb{P}_\mu(A_T), \mathbb{P}_\lambda(A_T)\big) \sim \log(T)$, and letting $\epsilon$ go to zero proves Theorem 2.
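To make the last step explicit, one can use the elementary bound $\mathrm{kl}(x, y) \geq x\log(1/y) - \log 2$, which gives
$$\mathrm{kl}\big(\mathbb{P}_\mu(A_T), \mathbb{P}_\lambda(A_T)\big) \geq \mathbb{P}_\mu(A_T)\log\frac{1}{\mathbb{P}_\lambda(A_T)} - \log 2 \geq \big(1 - o(1)\big)\log\frac{T}{o(T^{\alpha})} - \log 2,$$
so that $\liminf_{T} \mathrm{kl}\big(\mathbb{P}_\mu(A_T), \mathbb{P}_\lambda(A_T)\big)/\log(T) \geq 1 - \alpha$ for every $\alpha > 0$; combined with the transportation inequality above, this yields Theorem 2.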

Asymptotically Optimal Index Policies and Upper Confidence Bounds
Lai and Robbins also proposed the first algorithm whose regret matches the lower bound (4); this first asymptotically optimal algorithm is actually an index policy, i.e. it is of the form
$$A_{t+1} = \underset{a}{\operatorname{argmax}}\ U_a(t),$$
but the proposed indices $U_a(t)$ are quite complex to compute. [Agrawal, 1995, Katehakis and Robbins, 1995] later proposed slightly simpler indices and showed that they can be interpreted as Upper Confidence Bounds (UCB) on the unknown means of the arms. UCB-type algorithms were popularized by [Auer et al., 2002], who introduced the UCB1 algorithm for (non-parametric) bandit models with bounded rewards, and gave the first finite-time upper bound on its regret. Simple indices like those of UCB1 can be used more generally for $\sigma^2$-sub-Gaussian rewards, and take the form
$$U_a(t) = \hat\mu_a(t) + \sqrt{\frac{2\sigma^2 f(t)}{N_a(t)}}$$
for some function $f$ which controls the confidence level. While the original choice of [Auer et al., 2002] is too conservative, one may safely choose $f(t) = \log(t)$ in practice; obtaining finite-time regret bounds is somewhat easier with a slightly larger choice, as in [Garivier and Cappé, 2011]. With such a choice, for Bernoulli distributions (which are $1/4$-sub-Gaussian), the regret of this index policy can be shown to be of order
$$\sum_{a : \mu_a < \mu^*} \frac{\log(T)}{2(\mu^* - \mu_a)},$$
which is only order-optimal with respect to the lower bound (4), as by Pinsker's inequality $d(\mu_a, \mu^*) \geq 2(\mu^* - \mu_a)^2$.
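As an illustration, here is a compact Python sketch (names are ours) of such an index policy run on a simulated Bernoulli bandit, with the sub-Gaussian index $\hat\mu_a(t) + \sqrt{2\sigma^2 f(t)/N_a(t)}$ and the choice $f(t) = \log(t)$.

```python
import math, random

def ucb_index_policy(mu, horizon, sigma2=0.25, f=math.log):
    """Index policy with U_a(t) = mean_a(t) + sqrt(2*sigma2*f(t)/N_a(t)), on a Bernoulli bandit."""
    K = len(mu)
    counts, sums = [0] * K, [0.0] * K
    regret = 0.0
    for t in range(1, horizon + 1):
        if t <= K:
            a = t - 1                                   # initialization: play each arm once
        else:
            a = max(range(K), key=lambda i: sums[i] / counts[i]
                    + math.sqrt(2 * sigma2 * f(t) / counts[i]))
        x = 1.0 if random.random() < mu[a] else 0.0     # Bernoulli reward
        counts[a] += 1
        sums[a] += x
        regret += max(mu) - mu[a]
    return regret

print(ucb_index_policy([0.5, 0.45, 0.3], horizon=10000))
```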
Since the work of [Auer et al., 2002], several improvements of UCB1 have been proposed, aimed at providing finite-time regret guarantees that match the asymptotic lower bound (4) (see the review [Bubeck and Cesa-Bianchi, 2012]). Among them, the kl-UCB algorithm studied by [Garivier and Cappé, 2011] is shown to be asymptotically optimal when the arms belong to a one-parameter exponential family. This algorithm is an index policy associated with
$$u_a(t) = \max\left\{q : N_a(t)\, d(\hat\mu_a(t), q) \leq f(t)\right\},$$
for the same choice of exploration function $f$ as mentioned above. The discussion of Section 1.3 explains why this index is actually an upper confidence bound on $\mu_a$: choosing $f(t) = \log(t) + 3\log\log(t)$, one has $\mathbb{P}_\mu(u_a(t) \geq \mu_a) \gtrsim 1 - 1/(t\log^2(t))$. For this particular choice, [Garivier and Cappé, 2011] give a finite-time analysis of kl-UCB, proving its asymptotic optimality. To conclude on UCB algorithms, let us mention that several further improvements have been proposed. A simple but significant one is obtained by replacing $f(t)$ by $\log(t/N_a(t))$ in the definition of $u_a(t)$, leading to a variant sometimes termed kl-UCB$^+$, which has slightly better empirical performance, but also minimax guarantees that plain UCB algorithms do not enjoy (for a discussion and related ideas, see the OCUCB algorithm of [Lattimore, 2016], [Ménard and Garivier, 2017] and the references therein).

Beyond the Optimism Principle
For simple parametric bandit models, in particular when rewards belong to a one-parameter exponential family, we showed that the regret minimization problem is solved, at least in an asymptotic sense: the kl-UCB algorithm, for example, attains the best possible regret rate on every problem instance. All the UCB-type algorithms described in the previous section are based on the so-called principle of "optimism in the face of uncertainty". Indeed, at each round of a UCB algorithm the confidence intervals materialize the set of bandit models that are compatible with the observations (see Figure 1, left), and choosing the arm with the largest UCB amounts to acting optimally in an "optimistic" model in which the mean of each arm is equal to its best possible value. This optimism principle has also been successfully applied in some structured bandit models [Abbasi-Yadkori et al., 2011], as well as in reinforcement learning [Jaksch et al., 2010] and other related problems [Bubeck et al., 2013].
While Lai and Robbins' lower bound provides a good guideline for designing algorithms, it has sometimes been misunderstood as a justification of the wrong folk theorem, well known by practitioners mostly interested in using bandit algorithms: "no strategy can have a regret smaller than C(µ) log(t), which is reached by good strategies". But experiments often contradict this claim: it is easy to exhibit settings and algorithms where the regret is much smaller than C(µ) log(t) and does not look like a logarithmic curve. The reason is twofold: first, Lai and Robbins' result is asymptotic; a close look at its proof shows that it is relevant only when the horizon T is so large that any reasonable policy has identified the best arm with high probability; second, it only states that the regret divided by log(t) cannot always be smaller than C(µ). In [Garivier et al., 2016a], a simpler but similar bandit model of complexity C(µ) is given, for which some strategy is proved to have a regret smaller than C(µ) log(t) − c log(log(t)) for some positive constant c.
Some recent works try to complement this result and to give a better description of what can be observed in practice. Notably, [Garivier et al., 2016b] focuses mainly on the initial regime: the authors show in particular that all strategies suffer a linear regret before T reaches some problem-dependent value. When the problem is very difficult (for example when the number of arms is very large), this initial phase may be the only observable one. They give non-asymptotic inequalities and, above all, show a way to prove lower bounds which may lead to further new results (see e.g. [Garivier et al., 2016a]). It would be of great interest (but technically difficult) to exhibit an intermediate regime where, after this first phase, statistical estimation becomes possible but is still not trivial. This would in particular make it possible to discriminate, from a theoretical perspective, between all the bandit algorithms that are now known to be asymptotically optimal, but for which significant differences may be observed in practice.
Indeed, one drawback of the kl-UCB algorithm is the need to construct tight confidence intervals (as explained in Section 1.3), which may not generalize easily beyond simple parametric models. More flexible Bayesian algorithms have recently also been shown to be asymptotically optimal, and to have good empirical performance. Given a prior distribution on the arms, Bayesian algorithms are simple procedures exploiting the posterior distribution of each arm. In the Bernoulli example, assuming a uniform prior on the mean of each arm, the posterior distribution of µ_a at round t, defined as the conditional distribution of µ_a given past observations, is easily seen to be a Beta distribution whose parameters are given by the number of ones and zeros observed so far from the arm. A Bayesian algorithm uses the posterior distributions, illustrated in Figure 1 (right), to choose the next arm to sample from. The Bayes-UCB algorithm of [Kaufmann et al., 2012a] exploits these posterior distributions in an optimistic way: it selects at round t the arm whose posterior on the mean has the largest quantile of order 1 − 1/t. Another popular algorithm, Thompson Sampling, departs from the optimism principle by selecting arms at random according to their probability of being optimal. This principle was proposed by [Thompson, 1933] as the very first bandit algorithm, and can easily be implemented by drawing one sample from the posterior distribution of the mean of each arm, and selecting the arm with the highest sample. This algorithm, also called probability matching, was rediscovered in the 2000s for its good empirical performance in complex bandit models [Scott, 2010, Chapelle and Li, 2011], but its first regret analysis dates back to [Agrawal and Goyal, 2012]. Both Thompson Sampling and Bayes-UCB have recently been shown to be asymptotically optimal in one-parameter models, for some choices of the prior distribution [Kaufmann et al., 2012b, Agrawal and Goyal, 2013]. These algorithms are also quite generic, as they can be implemented in any bandit model in which one can define a prior distribution on the arms and draw samples from the associated posterior. For example, they can be used in (generalized) linear bandit models, which can capture recommendation tasks where the features of the items are taken into account (see, e.g., [Agrawal and Goyal, 2013] and Chapter 4 of [Kaufmann, 2014]).
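In the Bernoulli case with uniform priors, Thompson Sampling takes only a few lines; a minimal Python sketch (names ours) follows.

```python
import random

def thompson_sampling_bernoulli(mu, horizon):
    """Thompson Sampling with a uniform (Beta(1, 1)) prior on each Bernoulli arm."""
    K = len(mu)
    alpha, beta = [1] * K, [1] * K          # Beta posterior parameters (ones + 1, zeros + 1)
    regret = 0.0
    for _ in range(horizon):
        # Draw one sample from each posterior and play the arm with the highest sample.
        samples = [random.betavariate(alpha[a], beta[a]) for a in range(K)]
        a = max(range(K), key=lambda i: samples[i])
        x = 1 if random.random() < mu[a] else 0
        alpha[a] += x
        beta[a] += 1 - x
        regret += max(mu) - mu[a]
    return regret

print(thompson_sampling_bernoulli([0.5, 0.45, 0.3], horizon=10000))
```

The Beta(1, 1) prior is the uniform distribution; each update simply increments the posterior counts of observed ones and zeros.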

Optimal Strategies for Best Arm Identification
Finding the arm with the largest mean (without trying to maximize the cumulated rewards) is quite a different task and relates more to classical statistics. It can indeed be cast into the framework of sequential adaptive hypothesis testing introduced by [Chernoff, 1959]. In this framework, one has to decide which of K (composite) hypotheses is true, the hypothesis associated with arm a being "arm a has the largest mean". In order to gain information, one can select at each round one out of K possible experiments, each of them consisting in sampling from one of the marginal distributions (arms). Moreover, one has to choose when to stop the trial and decide for one of the hypotheses. Rephrased in a "bandit" terminology, a strategy consists of
• a sampling rule, which specifies which arm $A_t$ is selected at time $t$ ($A_t$ is $\mathcal{F}_{t-1}$-measurable),
• a stopping rule $\tau$, which indicates when the trial ends ($\tau$ is a stopping time with respect to $(\mathcal{F}_t)$),
• a recommendation rule $\hat a_\tau$ which provides, upon stopping, a guess for the best arm ($\hat a_\tau$ is $\mathcal{F}_\tau$-measurable).
However, the objective of the fixed-confidence best arm identification problem differs from that of [Chernoff, 1959], where one aims at minimizing a risk measure combining, for each hypothesis $H_i$, a cost $r_i$ for wrongly rejecting $H_i$, and a sampling cost $c$ per observation. The modern bandit literature rather focuses on so-called $(\epsilon, \delta)$-PAC strategies (for Probably Approximately Correct), which output, with high probability, an arm whose mean is within $\epsilon$ of the mean of the best arm:
$$\mathbb{P}_\mu\left(\mu_{\hat a_\tau} > \mu^* - \epsilon\right) \geq 1 - \delta.$$
The goal is to build an $(\epsilon, \delta)$-PAC strategy whose sample complexity $\mathbb{E}_\mu[\tau]$ is as small as possible. For simplicity, we focus here on the case $\epsilon = 0$: a strategy is called δ-PAC if
$$\forall \mu \in \mathcal{S}, \quad \mathbb{P}_\mu\left(\hat a_\tau = a^*(\mu)\right) \geq 1 - \delta,$$
where $\mathcal{S} = \{\mu \in I^K : \exists i : \mu_i > \max_{j \neq i} \mu_j\}$ is the set of bandit models that have a unique optimal arm.
We show in the next section that, as in the regret minimization framework, there exists an instance-dependent lower bound on the sample complexity of any δ-PAC algorithm. We further present an algorithm whose sample complexity matches the lower bound, at least in the asymptotic regime where δ goes to 0. It is remarkable that this optimal algorithm, described in Section 3.2, is actually a by-product of the lower bound analysis described in Section 3.1, which sheds light on how a good strategy should distribute the draws between the arms.

The Sample Complexity of δ-PAC Best Arm Identification
The first lower bound on the sample complexity of an $(\epsilon, \delta)$-PAC algorithm was given by [Mannor and Tsitsiklis, 2004]. Particularized to the case $\epsilon = 0$, the lower bound says that for Bernoulli bandit models with means in $[0, \alpha]$, there exist a constant $C_\alpha$ and a subset $\mathcal{K}_\alpha$ of the sub-optimal arms such that any δ-PAC algorithm satisfies
$$\mathbb{E}_\mu[\tau] \geq C_\alpha \sum_{a \in \mathcal{K}_\alpha} \frac{1}{(\mu^* - \mu_a)^2}\,\log\left(\frac{1}{\delta}\right).$$
Following this result, the literature has provided several δ-PAC strategies together with upper bounds on their sample complexity, mostly under the assumption that the rewards are bounded in [0, 1]. Existing strategies fall into two categories: those based on successive eliminations [Even-Dar et al., 2006, Karnin et al., 2013], and those based on confidence intervals [Kalyanakrishnan et al., 2012, Gabillon et al., 2012, Jamieson et al., 2014].
For all these algorithms, under a bandit instance such that $\mu_1 > \mu_2 \geq \dots \geq \mu_K$, the number of samples used can be shown to be of order
$$C\left(\frac{1}{(\mu_1 - \mu_2)^2} + \sum_{a=2}^{K} \frac{1}{(\mu_1 - \mu_a)^2}\right)\log\left(\frac{1}{\delta}\right),$$
where $C$ is a (large) numerical constant. While explicit finite-time bounds on $\tau$ can be extracted from most of the papers listed above, we mostly care here about the first-order term in $\delta$, when $\delta$ goes to zero. Both the upper and lower bounds take the form of a sum over the arms of an individual complexity term (involving the inverse squared gap with the best or second best arm), but there is a gap, as those sums do not involve the same number of terms; in addition, loose multiplicative constants make it hard to identify the exact minimal sample complexity of the problem. As for the regret minimization problem, the true sample complexity can be expected to involve information-theoretic quantities (like the Kullback-Leibler divergences between arm distributions), for which the quantities above appear to be only surrogates; for example, for Bernoulli distributions, it holds that $2(\mu_1 - \mu_a)^2 < d(\mu_a, \mu_1) < (\mu_1 - \mu_a)^2/(\mu_1(1 - \mu_1))$. For exponential families, it has been shown that incorporating the KL-based confidence bounds described in Section 1.3 into existing algorithms lowers the sample complexity [Kaufmann and Kalyanakrishnan, 2013], but the true sample complexity was only recently obtained by [Garivier and Kaufmann, 2016]. The result, and its proof, are remarkably simple.
Theorem 3. Let $\mu \in \mathcal{S}$, define $\mathrm{Alt}(\mu) := \{\lambda \in \mathcal{S} : a^*(\lambda) \neq a^*(\mu)\}$ and let $\Sigma_K = \{w \in [0,1]^K : \sum_{a=1}^{K} w_a = 1\}$ be the set of probability vectors. Any δ-PAC algorithm satisfies
$$\mathbb{E}_\mu[\tau] \geq T^*(\mu)\,\mathrm{kl}(\delta, 1 - \delta), \quad \text{where} \quad T^*(\mu)^{-1} := \sup_{w \in \Sigma_K}\ \inf_{\lambda \in \mathrm{Alt}(\mu)}\ \sum_{a=1}^{K} w_a\, d(\mu_a, \lambda_a).$$
This lower bound again relies on a change of distribution, but unlike Lai and Robbins' result (and previous results for best arm identification), it is not sufficient to individually lower bound the expected number of draws of each arm using a single alternative model. One needs to consider the whole set $\mathrm{Alt}(\mu) = \{\lambda : a^*(\lambda) \neq a^*(\mu)\}$ of alternative models in which the optimal arm differs from the optimal arm of $\mu$.
Given a δ-PAC algorithm, let $E = (\hat a_\tau \neq a^*(\mu))$. For any $\lambda \in \mathrm{Alt}(\mu)$, the δ-PAC property implies that $\mathbb{P}_\mu(E) \leq \delta$ while $\mathbb{P}_\lambda(E) \geq 1 - \delta$. Hence, by Lemma 1 and by monotonicity of the function $\mathrm{kl}$,
$$\sum_{a=1}^{K} \mathbb{E}_\mu[N_a(\tau)]\, d(\mu_a, \lambda_a) \geq \mathrm{kl}\big(\mathbb{P}_\mu(E), \mathbb{P}_\lambda(E)\big) \geq \mathrm{kl}(\delta, 1 - \delta).$$
Combining the inequalities thus obtained for all possible values of $\lambda \in \mathrm{Alt}(\mu)$, we conclude that
$$\mathbb{E}_\mu[\tau] \left(\sup_{w \in \Sigma_K}\ \inf_{\lambda \in \mathrm{Alt}(\mu)}\ \sum_{a=1}^{K} w_a\, d(\mu_a, \lambda_a)\right) \geq \inf_{\lambda \in \mathrm{Alt}(\mu)}\ \sum_{a=1}^{K} \mathbb{E}_\mu[N_a(\tau)]\, d(\mu_a, \lambda_a) \geq \mathrm{kl}(\delta, 1 - \delta).$$
In the last step, we use the fact that the vector $(\mathbb{E}_\mu[N_a(\tau)]/\mathbb{E}_\mu[\tau])_a$ sums to one: replacing it by the supremum over all probability vectors $w \in \Sigma_K$ yields a bound that is independent of the algorithm.
We thus obtain the (not fully explicit, but simple) lower bound of Theorem 3, which holds under the parametric assumption of Section 1.1. Its form, involving an optimization problem, is reminiscent of the early work of [Agrawal et al., 1989, Graves and Lai, 1997], which provides lower bounds on the regret in general, possibly structured, bandit models. For best arm identification, [Vaidhyan and Sundaresan, 2015] consider the particular case of Poisson distributions in which only one arm differs from the others, for which a very nice formula can be derived for the sample complexity. For general exponential family bandit models, we now provide a slightly more explicit expression of $T^*(\mu)$, which permits to compute it efficiently.

Computing the complexity and the optimal weights
The proof of Theorem 3 reveals that the quantity
$$w^*(\mu) := \underset{w \in \Sigma_K}{\operatorname{argmax}}\ \inf_{\lambda \in \mathrm{Alt}(\mu)}\ \sum_{a=1}^{K} w_a\, d(\mu_a, \lambda_a)$$
can be interpreted as a vector of optimal proportions, in the sense that any strategy matching the lower bound should satisfy $\mathbb{E}_\mu[N_a(\tau)]/\mathbb{E}_\mu[\tau] \simeq w^*_a(\mu)$. Some algebra shows that the above optimization problem has a unique solution, and provides an efficient way of computing $w^*(\mu)$ for any $\mu$, which boils down to numerically solving a series of scalar equations. In this section, we shall assume that $\mu$ is such that $\mu_1 > \mu_2 \geq \dots \geq \mu_K$.
First, when the distributions belong to a one-dimensional exponential family (which we assume in the rest of this section), one can solve the inner optimization over $\lambda$ in closed form, using Lagrange duality. This yields
$$T^*(\mu)^{-1} = \sup_{w \in \Sigma_K}\ \min_{a \neq 1}\ \left[ w_1\, d\!\left(\mu_1, \frac{w_1\mu_1 + w_a\mu_a}{w_1 + w_a}\right) + w_a\, d\!\left(\mu_a, \frac{w_1\mu_1 + w_a\mu_a}{w_1 + w_a}\right)\right].$$
Then, one can prove that at the optimum the $K-1$ quantities in the min are equal. Introducing their common value as an auxiliary variable, one can show that the computation of $w^*(\mu)$ reduces to a one-dimensional optimization problem. For each $a \neq 1$, one introduces a strictly increasing mapping $g_a$ and defines $x_a : [0, d(\mu_1, \mu_a)[\, \to [0, +\infty[$ to be its inverse mapping. With this notation, the computation of $w^*(\mu)$ boils down to solving a scalar equation $F_\mu(y) = 0$, built from the mappings $x_a$, which may be done by binary search. At each step of the search, the solution $x_a(y)$ of the equation $g_a(x) = y$ can itself be computed by binary search, or by Newton's method. This algorithm is available as Julia code at https://github.com/jsfunc/best-arm-identification.
This result yields an efficient algorithm for computing $T^*(\mu)$. But can a closed-form formula be derived, at least in some special cases? In the two-armed case, it is easy to see that $T^*(\mu)$ is equal to the inverse of the Chernoff information between the two arms (see [Kaufmann et al., 2014]). However, no closed form is available when $K \geq 3$, even for simple families of distributions. For Gaussian arms with known variance $\sigma^2$, only the following bound is known, which captures $T^*(\mu)$ up to a factor 2:
$$\sum_{a=1}^{K} \frac{2\sigma^2}{(\mu_1 - \mu_a)^2} \ \leq\ T^*(\mu)\ \leq\ 2 \sum_{a=1}^{K} \frac{2\sigma^2}{(\mu_1 - \mu_a)^2},$$
with the convention that the term for $a = 1$ is $2\sigma^2/(\mu_1 - \mu_2)^2$. Note that $T^*(\mu)$ may be much smaller than $4\sigma^2 K/(\mu_1 - \mu_2)^2$, which is the minimal number of samples required by a strategy using uniform sampling (for which $N_a(t)/t \simeq 1/K$). An optimal strategy actually uses quite unbalanced weights $w^*(\mu)$.
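Even without a closed form, $T^*(\mu)$ and $w^*(\mu)$ are easy to approximate numerically. The sketch below (Python with NumPy/SciPy; function names are ours) plugs the closed-form inner minimum displayed above, here for Gaussian arms, into a generic constrained optimizer instead of the dedicated binary-search procedure. In the two-armed case it recovers $T^*(\mu) = 8\sigma^2/(\mu_1 - \mu_2)^2$.

```python
import numpy as np
from scipy.optimize import minimize

def kl_gauss(x, y, sigma2=1.0):
    # KL divergence between Gaussians with equal known variance sigma2.
    return (x - y) ** 2 / (2 * sigma2)

def inner_min(w, mu, sigma2=1.0):
    """min over a != 1 of w_1 d(mu_1, m) + w_a d(mu_a, m), with m the weighted mean (closed form)."""
    best = np.inf
    for a in range(1, len(mu)):
        m = (w[0] * mu[0] + w[a] * mu[a]) / (w[0] + w[a])
        best = min(best, w[0] * kl_gauss(mu[0], m, sigma2) + w[a] * kl_gauss(mu[a], m, sigma2))
    return best

def characteristic_time(mu, sigma2=1.0):
    """Numerically solve T*(mu)^{-1} = sup_w inner_min(w); mu[0] must be the unique best arm."""
    K = len(mu)
    res = minimize(lambda w: -inner_min(w, mu, sigma2),
                   x0=np.ones(K) / K,
                   bounds=[(1e-6, 1.0)] * K,
                   constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}])
    w_star = res.x
    return 1.0 / inner_min(w_star, mu, sigma2), w_star

# Two Gaussian arms: T*(mu) should be 8 * sigma^2 / (mu_1 - mu_2)^2 = 32 here.
T, w = characteristic_time([1.0, 0.5], sigma2=1.0)
print(T, w)   # ~32, weights ~(0.5, 0.5)
```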

An algorithm inspired by the lower bound
Back to general exponential families, building on the lower bound and on our ability to compute $w^*(\mu)$, we now introduce an efficient algorithm whose sample complexity matches the lower bound, at least for small values of δ. This Track-and-Stop strategy consists of two elements:
• a tracking sampling rule, which forces the proportion of draws of each arm $a$ to converge to the associated optimal proportion $w^*_a(\mu)$, by using the plug-in estimates $w^*(\hat\mu(t))$;
• the Chernoff stopping rule, which can be interpreted as the stopping rule of a sequential Generalized Likelihood Ratio Test (GLRT), and whose closed form in this particular problem is very similar to our lower bound.
When stopping, the guess $\hat a_\tau$ is the empirical best arm. We now describe the sampling and stopping rules in detail before presenting the theoretical guarantees for Track-and-Stop.

The Tracking sampling rule
Let $\hat\mu(t) = (\hat\mu_1(t), \dots, \hat\mu_K(t))$ be the current maximum likelihood estimate of $\mu$ at time $t$ (for the exponential families considered here, this is the vector of empirical means defined in Section 1.1). A first idea for matching the proportions $w^*(\mu)$ is to track the plug-in estimates $w^*(\hat\mu(t))$, by drawing at round $t$ the arm whose empirical proportion of draws lags furthest behind the estimated target $w^*_a(\hat\mu(t))$. But a closer inspection shows that (sufficiently fast) convergence of $\hat\mu(t)$ towards the true parameter $\mu$ requires some "forced exploration" ensuring that no arm is left under-sampled. More formally, defining $U_t = \{a : N_a(t) < \sqrt{t} - K/2\}$, the Tracking rule selects
$$A_{t+1} \in \begin{cases} \underset{a \in U_t}{\operatorname{argmin}}\ N_a(t) & \text{if } U_t \neq \emptyset \quad \text{(forced exploration)},\\ \underset{1 \leq a \leq K}{\operatorname{argmax}}\ \left\{ w^*_a(\hat\mu(t)) - N_a(t)/t \right\} & \text{otherwise} \quad \text{(tracking the plug-in estimate)}.\end{cases}$$
Simple combinatorial arguments show that the Tracking rule draws each arm at least $(\sqrt{t} - K/2)_+ - 1$ times up to round $t$, and relate the gap between $N_a(t)/t$ and $w^*_a(\mu)$ to the gap between $w^*_a(\mu)$ and $w^*_a(\hat\mu(t))$. This permits in particular to show that the Tracking rule has the following desired behavior.
Proposition 5. The Tracking sampling rule satisfies
$$\mathbb{P}_\mu\left( \forall a \in \{1, \dots, K\}, \ \lim_{t \to \infty} \frac{N_a(t)}{t} = w^*_a(\mu) \right) = 1.$$
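The rule itself takes only a few lines. In the Python sketch below (names are ours), counts[a] stands for $N_a(t)$ and w_star_hat for the plug-in vector $w^*(\hat\mu(t))$, which can be computed, for instance, as in the numerical sketch of the previous subsection.

```python
import math

def tracking_rule(t, counts, w_star_hat):
    """D-Tracking: force exploration of under-sampled arms, otherwise track w*(mu_hat(t))."""
    K = len(counts)
    undersampled = [a for a in range(K) if counts[a] < math.sqrt(t) - K / 2]
    if undersampled:                       # forced exploration
        return min(undersampled, key=lambda a: counts[a])
    # tracking: pick the arm whose empirical proportion lags furthest behind its target
    return max(range(K), key=lambda a: w_star_hat[a] - counts[a] / t)
```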

The Chernoff stopping rule

The stopping rule of Track-and-Stop relies on a generalized likelihood ratio statistic $\hat Z(t)$, which compares the likelihood of the observations under the maximum likelihood estimate $\hat\mu(t)$ to their likelihood under the most likely model whose optimal arm differs from that of $\hat\mu(t)$. Intuitively, this generalized likelihood ratio $\hat Z(t)$ is large if the current maximum likelihood estimate is far apart from its "closest alternative" $\hat\mu'(t)$, defined as the parameter maximizing the likelihood under the constraint that it belongs to $\mathrm{Alt}(\hat\mu(t))$, i.e. that its optimal arm is different from that of $\hat\mu(t)$. This idea can be traced back to the work of [Chernoff, 1959], in which $\hat Z(t)$ is interpreted as the Neyman-Pearson statistic for testing the (data-dependent) pseudo-hypothesis "$\mu = \hat\mu(t)$" against "$\mu = \hat\mu'(t)$", based on all samples available up to round $t$. The analysis of Chernoff, however, only applies to two discrete hypotheses, whereas the best arm identification problem requires one to consider $K$ continuous hypotheses.
In [Garivier and Kaufmann, 2016], we provide new insights on this Chernoff stopping rule, formally defined as
$$\tau_\delta = \inf\left\{t \in \mathbb{N} : \hat Z(t) > \beta(t, \delta)\right\},$$
where $\beta(t, \delta)$ is some threshold function. The first problem is to set the threshold $\beta(t, \delta)$ such that the probability of error of Track-and-Stop is upper bounded by δ. Our analysis relies on expressing $\hat Z(t)$ in terms of pairwise sequential GLRTs of "$\mu_a < \mu_b$" against "$\mu_a \geq \mu_b$", for which we provide tight bounds on the type I error. Indeed, letting
$$\hat Z_{a,b}(t) := \log \frac{\max_{\lambda_a \geq \lambda_b}\ p_{\lambda_a}\big(\underline{X}^a_{N_a(t)}\big)\, p_{\lambda_b}\big(\underline{X}^b_{N_b(t)}\big)}{\max_{\lambda_a \leq \lambda_b}\ p_{\lambda_a}\big(\underline{X}^a_{N_a(t)}\big)\, p_{\lambda_b}\big(\underline{X}^b_{N_b(t)}\big)},$$
where $\underline{X}^a_{N_a(t)} = (X_s : A_s = a, s \leq t)$ is the vector of observations from arm $a$ available at time $t$, and where $p_\lambda(V_1, \dots, V_n)$ is the likelihood of $n$ i.i.d. observations drawn from $\nu_\lambda$, one can show that
$$\hat Z(t) = \min_{b \neq \hat a(t)}\ \hat Z_{\hat a(t), b}(t),$$
where $\hat a(t)$ is the empirical best arm at round $t$. In other words, one stops when, for each arm $b$ different from the empirical best arm $\hat a(t)$, a GLRT would reject the (data-dependent) pseudo-hypothesis "$\mu_{\hat a(t)} < \mu_b$". This expression also allows for a simple computation of $\hat Z(t)$: $\hat\mu_a(t) > \hat\mu_b(t)$ implies
$$\hat Z_{a,b}(t) = N_a(t)\, d\big(\hat\mu_a(t), \hat\mu_{a,b}(t)\big) + N_b(t)\, d\big(\hat\mu_b(t), \hat\mu_{a,b}(t)\big), \quad \text{where} \quad \hat\mu_{a,b}(t) = \frac{N_a(t)\hat\mu_a(t) + N_b(t)\hat\mu_b(t)}{N_a(t) + N_b(t)}.$$
Under the Tracking sampling rule, it is easy to see that $\hat Z(t)$ grows linearly with $t$, hence with a threshold $\beta(t, \delta)$ that is sub-linear in $t$, $\tau_\delta$ is almost surely finite. The probability of error of the Chernoff stopping rule can thus be upper bounded as
$$\mathbb{P}_\mu(\hat a_\tau \neq 1) \leq \mathbb{P}_\mu\left(\exists t \in \mathbb{N} : \hat a(t) \neq 1,\ \hat Z(t) > \beta(t, \delta)\right) \leq \sum_{a=2}^{K} \mathbb{P}_\mu\left(\exists t \in \mathbb{N} : \hat Z_{a,1}(t) > \beta(t, \delta)\right).$$
In the Bernoulli case, a deviation inequality on each pairwise statistic $\hat Z_{a,1}(t)$ permits to prove that the Chernoff stopping rule is δ-PAC for the choice
$$\beta(t, \delta) = \log\left(\frac{2(K-1)t}{\delta}\right). \quad (9)$$
Another rewriting of the Chernoff stopping rule permits to understand why it achieves the optimal sample complexity when coupled with the Tracking sampling rule. Indeed, using the particular form of the likelihood in an exponential family yields
$$\hat Z(t) = t\ \inf_{\lambda \in \mathrm{Alt}(\hat\mu(t))}\ \sum_{a=1}^{K} \frac{N_a(t)}{t}\, d\big(\hat\mu_a(t), \lambda_a\big).$$
This expression is reminiscent of the lower bound of Theorem 3. When $t$ is large, one expects $\hat\mu(t)$ to be close to $\mu$ thanks to the forced exploration, and $N_a(t)/t$ to be close to $w^*_a(\mu)$ thanks to Proposition 5. Hence one has $\hat Z(t) \simeq t/T^*(\mu)$ for large values of $t$. Thus, with the threshold function (9), for small δ, $\tau_\delta$ is asymptotically upper bounded by the smallest $t$ such that $t \geq T^*(\mu)\log(2(K-1)t/\delta)$, which is of order $T^*(\mu)\log(1/\delta)$ for small values of δ.
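Putting the pieces together, checking the stopping condition at round $t$ only requires the counts and empirical means. A Python sketch for Bernoulli arms (names ours), using the weighted-mean form of $\hat Z_{a,b}(t)$ and the threshold (9):

```python
import math

def kl_bernoulli(x, y, eps=1e-12):
    x = min(max(x, eps), 1 - eps)
    y = min(max(y, eps), 1 - eps)
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def chernoff_statistic(counts, means, d=kl_bernoulli):
    """Z(t) = min over b != best of Z_{best,b}(t), using the weighted-mean expression."""
    best = max(range(len(means)), key=lambda a: means[a])
    z = math.inf
    for b in range(len(means)):
        if b == best:
            continue
        n_a, n_b = counts[best], counts[b]
        m = (n_a * means[best] + n_b * means[b]) / (n_a + n_b)   # pooled empirical mean
        z = min(z, n_a * d(means[best], m) + n_b * d(means[b], m))
    return z, best

def should_stop(counts, means, t, delta):
    """Chernoff stopping rule with the threshold beta(t, delta) of Equation (9)."""
    K = len(means)
    z, best = chernoff_statistic(counts, means)
    return z > math.log(2 * (K - 1) * t / delta), best

# Example: three Bernoulli arms after 250 rounds of sampling.
counts, means, t = [120, 90, 40], [0.72, 0.55, 0.50], 250
print(should_stop(counts, means, t, delta=0.05))   # (stop?, current empirical best arm)
```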

Optimality of Track-and-Stop
In the previous section, we sketched an upper bound on the number of samples used by Track-and-Stop that holds with probability one in the Bernoulli case. [Garivier and Kaufmann, 2016] also propose an asymptotic upper bound on the expected sample complexity of this algorithm, beyond the Bernoulli case. The results can be summarized as follows.
Theorem 7. With the Chernoff stopping rule and a suitable threshold function (in the Bernoulli case, the choice (9) above), the Track-and-Stop strategy is δ-PAC and its sample complexity satisfies, both almost surely and in expectation,
$$\limsup_{\delta \to 0} \frac{\tau_\delta}{\log(1/\delta)} \leq T^*(\mu) \quad \text{and} \quad \limsup_{\delta \to 0} \frac{\mathbb{E}_\mu[\tau_\delta]}{\log(1/\delta)} \leq T^*(\mu).$$
Hence, Track-and-Stop can be qualified as asymptotically optimal, in the sense that its sample complexity matches the lower bound of Theorem 3 when δ tends to zero. Inspired by the regret minimization study, an important direction of future work is to obtain finite-time upper bounds on the sample complexity of an algorithm that would still asymptotically match the lower bound of Theorem 3. A different line of research has studied, for sub-Gaussian rewards, the asymptotic behavior of the sample complexity for fixed values of δ in a regime in which the gap between the best and second best arm goes to zero [Jamieson et al., 2014, Chen et al., 2016], leading to a different notion of optimality. Hence, we should aim for the best of both worlds: an algorithm with a finite-time sample complexity upper bound that would also match the lower bound obtained in this alternative asymptotic regime.
Finally, while Theorem 7 gives asymptotic results for Track-and-Stop, we would like to highlight the practical impact of this algorithm. Experiments in [Garivier and Kaufmann, 2016] reveal that for relatively "large" values of δ (e.g. δ = 0.1), the sample complexity of Track-and-Stop is about half that of state-of-the-art algorithms in generic scenarios. The sampling rule of Track-and-Stop is slightly more computationally demanding than that of its competitors, as it requires computing $w^*(\hat\mu(t))$ at each round. However, the Tracking sampling rule is the most naive idea, and we will investigate whether other simple heuristics could guarantee that the empirical proportions of draws converge towards the optimal proportions $w^*(\mu)$, while being amenable to finite-time analysis.

Discussion
It has been known at least since [Bubeck et al., 2011] that good algorithms for regret minimization and pure exploration are expected to be different: small regret after $t$ time steps implies a large probability of error $\mathbb{P}_\mu(\mu^* \neq \mu_{\hat a(t)})$, where $\hat a(t)$ is the recommendation for the best arm at time $t$. In the dual fixed-confidence setting studied in this paper, we provided other elements to assess the difference between the regret minimization and best arm identification problems.
First, the sampling strategies used by the two types of algorithms are very different. Regret-minimizing algorithms draw the best arm most of the time ($t - O(\log(t))$ times in $t$ rounds), while each sub-optimal arm gathers a vanishing proportion of the draws. On the contrary, identifying the best arm requires the proportions of arm draws to converge to a vector $w^*(\mu)$ with all non-zero components. Figure 2 illustrates this different behavior: the number of draws of each arm and the associated KL-based confidence intervals are displayed for the kl-UCB (left) and Track-and-Stop (right) strategies. As expected, Track-and-Stop draws the close-to-optimal arms more frequently than its competitor, and therefore has tighter confidence intervals on their means. We also emphasize that the information-theoretic quantities characterizing the complexity of the two problems are different. For regret minimization, we saw that the minimal regret of uniformly efficient (u.e.) strategies satisfies
$$\inf_{\pi\ \mathrm{u.e.}}\ \limsup_{T \to \infty} \frac{R^\pi_\mu(T)}{\log(T)} = \sum_{a : \mu_a < \mu^*} \frac{\mu^* - \mu_a}{d(\mu_a, \mu^*)},$$
where $d(x, y)$ is the Kullback-Leibler divergence between the distribution of mean $x$ and the distribution of mean $y$ in our class. For best arm identification, the minimal sample complexity of δ-PAC strategies satisfies
$$\inf_{\text{δ-PAC}}\ \limsup_{\delta \to 0} \frac{\mathbb{E}_\mu[\tau_\delta]}{\log(1/\delta)} = T^*(\mu),$$
where $T^*(\mu)$ is the value of an optimization problem expressed in terms of Kullback-Leibler divergences between arm distributions, which has no closed-form solution for more than two arms. Although regret minimization and best arm identification are two very different objectives, both in terms of algorithms and of complexity, best arm identification tools have been used within regret minimization algorithms in so-called Explore-Then-Commit strategies [Perchet and Rigollet, 2013, Perchet et al., 2015]. For minimizing regret up to a horizon $T$, such strategies use an (elimination-based) fixed-confidence best arm identification algorithm with $\delta = 1/T$ to make a guess for the best arm, and then commit to playing this estimated best arm until the end of the horizon $T$. In a simple case (two Gaussian arms), we recently quantified the sub-optimality of such approaches: the regret of the best Explore-Then-Commit strategy is at least twice as large as that of the kl-UCB algorithm [Garivier et al., 2016a]. Even if that article focuses on Gaussian rewards, other cases, possibly with more than two arms, are also discussed. Coming back to the introductory example of A/B testing, the take-home message is the following: if you prefer to experiment first with the two options before using only one of them in production, instead of continuously allocating the two options using a good regret-minimizing strategy, then this will cost you a regret at least twice as large.
Unlike the asymptotically optimal regret-minimizing strategies that we presented, the asymptotically optimal Track-and-Stop strategy for best arm identification has no finite-time guarantees, and its implementation is slightly more complex. An important direction for future work is to see whether useful tools for regret minimization, like the optimism principle or Bayesian methods, can be combined with Track-and-Stop to obtain a simpler algorithm with a finite-time analysis. A starting point may be found in [Russo, 2016], who recently proposed a modified Bayesian Thompson Sampling rule that has some promising properties.
Acknowledgements: The authors are extremely thankful to the reviewers of this paper, who contributed significantly to the clarity of the presentation by their numerous and always relevant comments. This work was partially supported by the CIMI (Centre International de Mathématiques et d'Informatique) Excellence program while Emilie Kaufmann visited Toulouse in November 2015. The authors acknowledge the support of the French Agence Nationale de la Recherche (ANR), under grants ANR-13-BS01-0005 (project SPADRO) and ANR-13-CORD-0020 (project ALICIA).