Abstract:
Ramsey theory refers to a growing body of results showing that any sufficiently large mathematical structure, when partitioned into classes, must admit a substructure that is regular in some sense. Quantifying what "large enough" means is usually very challenging, and many existence results come without quantitative bounds on the so-called Ramsey numbers. Applications of Ramsey theory are not limited to combinatorics and formal logic; they extend to many areas of mathematics and theoretical computer science. Recently, Ramsey theory has been employed in learning theory to design attacks on the privacy of learning algorithms in the setting of binary classification (Alon, Bun, Livni, Malliaris, and Moran 2019). To extend their result to more general learning settings, we developed Ramsey theorems for trees.
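To fix ideas, here is the classical graph-theoretic instance of this phenomenon, given as a sketch for readers outside the area (the notation $R_r(k)$ for the Ramsey number is standard and not specific to this talk):

\[
\forall r, k \in \mathbb{N} \;\; \exists N = R_r(k) : \text{every } r\text{-coloring of the edges of } K_N \text{ contains a monochromatic } K_k.
\]

For example, $R_2(3) = 6$: among any six people, some three are mutual acquaintances or some three are mutual strangers, and six is optimal, since 2-coloring the edges of $K_5$ with a 5-cycle and its complement avoids both.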
In this talk, we will discuss some of the Ramsey-type results we have obtained, including quantitative bounds in some cases. We will also review previous work in the area and briefly introduce the concepts from learning theory and differential privacy that originally inspired our work in Ramsey theory.
This talk is based on joint work with Simone Fioravanti, Steve Hanneke, Shay Moran, and Iska Tsubari.