publications
sorted by year
2022
- Universal Caching. Joshi, A., and Sinha, A. Information Theory Workshop, Nov 2022
In the learning literature, the performance of an online policy is commonly measured in terms of the static regret metric, which compares the cumulative loss of an online policy to that of an optimal benchmark in hindsight. In the definition of static regret, the benchmark policy remains fixed throughout the time horizon. Naturally, the resulting regret bounds become loose in non-stationary settings where fixed benchmarks often suffer from poor performance. In this paper, we investigate a stronger notion of regret minimization in the context of an online caching problem. In particular, we allow the action of the offline benchmark at any round to be decided by a finite state predictor containing arbitrarily many states. Using ideas from the universal prediction literature in information theory, we propose an efficient online caching policy with an adaptive sub-linear regret bound. To the best of our knowledge, this is the first data-dependent regret bound known for the universal caching problem. We establish this result by combining a recently-proposed online caching policy with an incremental parsing algorithm, e.g., Lempel-Ziv ’78. Our methods also yield a simpler learning-theoretic proof of the improved regret bound as opposed to the more involved and problem-specific combinatorial arguments used in the earlier works.
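The abstract above combines an online caching policy with the Lempel-Ziv '78 incremental parsing algorithm. The paper's own policy is not reproduced here, but LZ78 parsing itself is simple to sketch: the sequence is split into distinct phrases, each extending a previously seen phrase by one symbol. A minimal illustration (not the paper's implementation):

```python
def lz78_parse(seq):
    """Incrementally parse a sequence into distinct phrases (Lempel-Ziv '78).

    Each new phrase extends a previously seen phrase by exactly one symbol.
    For compressible request streams the number of phrases grows slowly,
    which is the structural property the adaptive regret bound exploits.
    """
    dictionary = {(): 0}   # phrase -> index; the empty phrase is the root
    phrases = []
    current = ()
    for symbol in seq:
        candidate = current + (symbol,)
        if candidate in dictionary:
            current = candidate        # keep extending a known phrase
        else:
            dictionary[candidate] = len(dictionary)
            phrases.append(candidate)  # emit the newly discovered phrase
            current = ()               # start parsing a fresh phrase
    if current:
        phrases.append(current)        # trailing partial phrase, if any
    return phrases

# The classic example 'aababcabcd' parses into the 4 phrases
# a | ab | abc | abcd
print(lz78_parse("aababcabcd"))
```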
2023
- No-regret Algorithms for Fair Resource Allocation. NeurIPS, Sep 2023
We consider a fair resource allocation game in the no-regret setting against an unrestricted adversary. The objective of the problem is to allocate resources equitably among several agents in an online fashion so that the difference of the aggregate α-fair utilities of the agents between an optimal static clairvoyant allocation and that of the online policy grows sub-linearly with time. The problem is challenging due to the non-additive nature of the α-fairness function. Previously, it was shown that no online policy can exist for this problem with a sublinear standard regret. In this paper, we propose an efficient online resource allocation policy, called Online Proportional Fair (OPF), that achieves c_α-approximate sublinear regret with the approximation factor c_α = (1-α)^(-(1-α)) ≤ 1.445 for 0 ≤ α ≤ 1. The upper bound on the c_α-regret for this problem exhibits a surprising phase transition phenomenon: the regret bound changes from a power law to a constant at the critical exponent α = 1/2. As a corollary, our result also resolves an open problem raised by Even-Dar et al. (2009) on designing an efficient no-regret policy for the online job scheduling problem in certain parameter regimes. The proof of our results introduces new algorithmic and analytical techniques, including greedy estimation of the future gradients for non-additive global reward functions and bootstrapping adaptive regret bounds, which may be of independent interest.
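The quantities named in the abstract are easy to state concretely. Below is a small sketch of the standard α-fair utility function and of the approximation factor c_α = (1-α)^(-(1-α)) quoted above; the function names are illustrative, and the utility definition is the standard α-fairness form from the networking literature, not code from the paper:

```python
import math

def alpha_fair(x, alpha):
    """Standard α-fair utility of an allocation x > 0.

    α = 0 is utilitarian (linear), α = 1 is proportional fairness (log),
    and the α → ∞ limit approaches max-min fairness.
    """
    if alpha == 1:
        return math.log(x)
    return x ** (1 - alpha) / (1 - alpha)

def approx_factor(alpha):
    """Approximation factor c_α = (1-α)^(-(1-α)) for 0 ≤ α ≤ 1."""
    if alpha == 1:
        return 1.0  # limit value as α → 1
    return (1 - alpha) ** -(1 - alpha)
```

Writing t = 1 - α, the factor is t^(-t), which is maximized at t = 1/e with value e^(1/e) ≈ 1.4447, matching the bound c_α ≤ 1.445 stated in the abstract.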