Relative-error first-order oracles provide gradient approximations that are inherently biased and satisfy a multiplicative error bound on their distance from the true gradient.
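As a minimal sketch of this notion, the snippet below simulates an oracle that, given the true gradient g, returns a perturbation g_hat satisfying the multiplicative bound ||g_hat - g|| <= delta * ||g||. The specific bound form, function names, and the quadratic test function are illustrative assumptions, not taken from the cited work.

```python
import numpy as np

def relative_error_oracle(grad_fn, x, delta, rng):
    """Illustrative oracle: return g_hat with
    ||g_hat - g|| <= delta * ||g||  (multiplicative/relative error).
    Note the error scales with ||g||, so it never vanishes relative
    to the gradient -- the bias shrinks only as the gradient does."""
    g = grad_fn(x)
    # Draw a random direction and rescale it so its norm is
    # exactly delta * ||g|| (the worst case allowed by the bound).
    noise = rng.standard_normal(g.shape)
    noise *= delta * np.linalg.norm(g) / max(np.linalg.norm(noise), 1e-12)
    return g + noise

# Example on f(x) = 0.5 * ||x||^2, whose gradient is grad f(x) = x.
rng = np.random.default_rng(0)
x = np.array([3.0, -4.0])            # ||grad f(x)|| = 5
g_hat = relative_error_oracle(lambda v: v, x, delta=0.1, rng=rng)
err = np.linalg.norm(g_hat - x)      # guaranteed <= 0.1 * 5 = 0.5
```

The key point the sketch makes concrete is that the noise is proportional to the gradient norm, which distinguishes this setting from the classical additive-noise oracle.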
Such oracles arise in many contemporary applications in machine learning, signal processing, and general continuous optimization.
This project aims to study optimization methods that use stochastic relative-error gradient oracles, focusing on their theoretical performance guarantees.
A recent work on deterministic relative-error gradient oracles that illustrates the main goals of this research is:
Hallak, N. and Levy, K.Y. (2024). A Study of First-Order Methods with a Deterministic Relative-Error Gradient Oracle. In Proceedings of the 41st International Conference on Machine Learning (ICML).