Nov 28, 2022
We present a modified tuning of the algorithm of Zimmert and Seldin [2020] for adversarial multi-armed bandits with delayed feedback, which, in addition to the minimax optimal adversarial regret guarantee shown by Zimmert and Seldin [2020], simultaneously achieves a near-optimal regret guarantee in the stochastic setting with fixed delays. Specifically, the adversarial regret guarantee is O(√(TK) + √(dT log K)), where T is the time horizon, K is the number of arms, and d is the fixed delay, whereas the stochastic regret guarantee is O(∑_{i ≠ i*} (log(T)/Δ_i + d/Δ_i) + d K^{1/3} log K), where Δ_i are the suboptimality gaps. We also present an extension of the algorithm to the case of arbitrary delays, which relies on oracle knowledge of the maximal delay d_max and achieves O(√(TK) + √(D log K) + d_max K^{1/3} log K) regret in the adversarial regime, where D is the total delay, and O(∑_{i ≠ i*} (log(T)/Δ_i + σ_max/Δ_i) + d_max K^{1/3} log K) regret in the stochastic regime, where σ_max is the maximal number of outstanding observations. Finally, we present a lower bound that matches the regret upper bound achieved by the skipping technique of Zimmert and Seldin [2020] in the adversarial setting.