Abstract:
I will discuss some recent bounds on the number of learner mistakes in the ‘transductive’ online learning setting of Ben-David, Kushilevitz and Mansour (1997). We show a trichotomy: the optimal number of mistakes made by the learner, as a function of the sequence length n, can take only one of three possible values: n, Theta(log n), or Theta(1). Which case occurs is determined by a combination of the VC dimension and the Littlestone dimension. Determining the precise value in the Theta(1) case remains open, but we show a quadratic improvement over the previously known lower bound for this quantity.
The talk will be accessible to a general mathematical audience; no prior familiarity with online learning is required.
Joint work with Zachary Chase (Oxford Technion), Steve Hanneke (Purdue) and Shay Moran (Technion).