Differential Privacy (DP) is a mathematical framework for protecting the privacy of individuals in a dataset. Roughly speaking, an analysis is differentially private if its output distribution is essentially the same whether or not the data of any single individual is included in the dataset, so the output reveals almost nothing sensitive about any specific individual.
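Formally, in the standard (ε, δ)-formulation (stated here only as background), a randomized algorithm M is differentially private if for every pair of datasets S, S' differing in the record of a single individual, and every event E over the outputs,
\[
\Pr[M(S) \in E] \;\le\; e^{\varepsilon}\,\Pr[M(S') \in E] + \delta .
\]
Small ε and δ mean that adding or removing one person's data changes the probability of any outcome only negligibly.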
In this talk I will present a recent result regarding the link between DP learning and online learning: Alon, Livni, Malliaris, and Moran (2019) showed that for binary classification tasks, DP learnability implies online learnability. But does this connection extend beyond binary classification to more general learning tasks? We answer this question affirmatively by developing new Ramsey theorems for trees.
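As background for the binary case (a standard reformulation, not part of the new result itself): a hypothesis class H is online learnable if and only if its Littlestone dimension is finite, so the theorem of Alon, Livni, Malliaris, and Moran can be phrased as
\[
H \text{ is DP PAC learnable} \;\Longrightarrow\; \mathrm{Ldim}(H) < \infty .
\]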
This talk is based on joint work with Simone Fioravanti, Steve Hanneke, Shay Moran, and Iska Tsubari. No prior knowledge of Learning Theory is required.