PHD POSITION IN INFINITE-DIMENSIONAL OPTIMIZATION FOR THEORETICAL MACHINE LEARNING
Gradient methods, such as gradient descent and stochastic gradient descent, achieve remarkable performance in neural network training but typically suffer from strong instability, which makes the optimization of specific architectures challenging, time-consuming, and susceptible to attacks. For this reason, a dynamic regularization of the optimization algorithm is often necessary. Our goal is to lay a …
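For concreteness, below is a minimal sketch (in Python/NumPy) of what one form of dynamically regularized gradient descent could look like: each step follows the gradient of the objective plus a Tikhonov (ridge) penalty whose weight decays over the iterations, stabilizing early steps and vanishing asymptotically. The decay schedule, the function names, and the quadratic test problem are illustrative assumptions, not the method studied in this project.

    import numpy as np

    def gradient_descent_dynamic_reg(grad_f, x0, steps=100, lr=0.1,
                                     lam0=1.0, decay=0.01):
        """Gradient descent with a decaying Tikhonov (ridge) regularizer.

        At step k the iterate follows the gradient of
        f(x) + (lam_k / 2) * ||x||^2, where lam_k = lam0 / (1 + decay * k)
        shrinks over time. (Illustrative sketch; not the project's method.)
        """
        x = np.asarray(x0, dtype=float)
        for k in range(steps):
            lam_k = lam0 / (1.0 + decay * k)      # dynamic regularization weight
            x = x - lr * (grad_f(x) + lam_k * x)  # regularized gradient step
        return x

    # Example: minimize the ill-conditioned quadratic f(x) = 0.5 * x^T A x,
    # whose gradient is A x (hypothetical test problem).
    A = np.diag([1.0, 1e-4])
    x_star = gradient_descent_dynamic_reg(lambda x: A @ x, x0=[1.0, 1.0])
    print(x_star)

Here the penalty plays the stabilizing role alluded to above: early on it damps the step taken along ill-conditioned directions, while its decay lets the iterates eventually approach a minimizer of the unregularized objective.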