Tom Waknine (Technion) — On Reductions of Learning Problems and the Borsuk-Ulam theorem

Wednesday, January 29, 2025
  • Lecturer: Tom Waknine
  • Organizer: Chaim Even Zohar
  • Location: 814 Amado
Abstract:
Many practical prediction algorithms represent inputs in Euclidean space and replace the discrete 0/1 classification loss with a real-valued surrogate loss, effectively reducing classification tasks to stochastic optimization. In this talk, I will explore the expressiveness of such reductions in terms of key resources. I will formally define a general notion of reductions between learning problems, and relate it to some well-known examples, such as representations by half-spaces.
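As a rough illustration of the kind of reduction discussed above (a sketch, not taken from the talk), the following Python snippet represents hypotheses as half-spaces in R^D and replaces the 0/1 loss with the standard convex logistic surrogate, so that learning becomes stochastic convex optimization; the function names and the parameter D are illustrative assumptions.

    import numpy as np

    def logistic_surrogate(w, x, y):
        # y in {-1, +1}; a standard convex surrogate for the 0/1 loss of the half-space sign(<w, x>)
        return np.log1p(np.exp(-y * np.dot(w, x)))

    def sgd_on_surrogate(samples, D, steps=1000, lr=0.1):
        # Plain stochastic gradient descent on the surrogate: an SCO algorithm in R^D.
        # samples is a list of (x, y) pairs with x in R^D and y in {-1, +1}.
        w = np.zeros(D)
        rng = np.random.default_rng(0)
        for _ in range(steps):
            x, y = samples[rng.integers(len(samples))]
            margin = y * np.dot(w, x)
            w += lr * y * x / (1.0 + np.exp(margin))  # descent step on logistic_surrogate
        return w

The returned vector w defines the half-space classifier x -> sign(<w, x>); how small the dimension D can be made, as a function of the VC dimension of the original class, is exactly the question addressed next.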
I will then establish bounds on the minimum Euclidean dimension D required to reduce a concept class with VC dimension d to a Stochastic Convex Optimization (SCO) problem in a D-dimensional Euclidean space. This result provides a formal framework for understanding the intuitive interpretation of the VC dimension as the number of parameters needed to learn a given class.
The proof leverages a clever application of the Borsuk-Ulam theorem, illustrating an interesting connection between topology and learning theory.
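For reference (not part of the abstract), the Borsuk-Ulam theorem mentioned here is the classical statement that any continuous map from the sphere to Euclidean space of one lower dimension collapses some pair of antipodal points:

    \[
    \text{For every continuous } f \colon S^{n} \to \mathbb{R}^{n} \text{ there exists } x \in S^{n} \text{ with } f(x) = f(-x).
    \]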
 