Topological barrier to reduction of PAC learning to convex optimization

Sunday, November 9, 2025
  • Lecturer: Bogdan Chornomaz (Technion)
  • Organizer: Simeon Reich
  • Location: Amado 814
Abstract:
The central problem of PAC learning theory is binary classification in a distribution-independent setting. Although the original formulation is not concerned with algorithmic implementation, concrete algorithms are of course important in practice, and one of the best-known classes of such algorithms is stochastic convex optimization (SCO). While the sample complexity of PAC learning is governed by the Vapnik-Chervonenkis (VC) dimension, we show that any SCO task used to solve a PAC learning problem may be forced to work in an ambient Euclidean space whose dimension is at least exponential in the VC dimension, which is at odds with the naive interpretation of the VC dimension as the "number of parameters". The proof is topological and relies on a generalization of the Borsuk-Ulam theorem in the spirit of the Kakutani fixed point theorem. The talk is based on joint work with Shay Moran and Tom Waknine.
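As background (standard facts, not taken from the abstract; the exact variants used in the work may differ): the sample complexity of realizable PAC learning for a hypothesis class of VC dimension d is

    m(\epsilon, \delta) = \Theta\!\left( \frac{d + \log(1/\delta)}{\epsilon} \right),

and the classical Borsuk-Ulam theorem, which the talk's topological argument generalizes, states that

    \text{for every continuous } f : S^n \to \mathbb{R}^n \text{ there exists } x \in S^n \text{ with } f(x) = f(-x).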