Vincent Roulet
Primary affiliation: Google, Inc., University of Washington
Affiliate Assistant Professor
Email: vroulet@uw.edu
UW Box Number: 354322
Homepage: Personal Home Page
Bio:
Vincent Roulet is an Acting Instructor in the Department of Statistics at the University of Washington. Previously, he was a postdoctoral fellow in the same department, working with the Algorithmic Foundations for Data Science Institute (ADSI) members Zaid Harchaoui, Dmitriy Drusvyatskiy, Maryam Fazel, and Sham Kakade. He received his Ph.D. from the École Normale Supérieure (Ulm, Paris, France) under the supervision of Alexandre d'Aspremont, working in the SIERRA team led by Francis Bach. During his thesis, he worked on mathematical optimization approaches to statistical problems with an underlying combinatorial structure, and on accelerating optimization algorithms by means of restarts. He now works on nonlinear dynamical problems, such as nonlinear control, and on deep learning.
Preprints
Loss Functions and Operators Generated by f-Divergences
Vincent Roulet, Tianlin Liu, Nino Vieillard, Michael E. Sander, Mathieu Blondel
The logistic loss (a.k.a. cross-entropy loss) is one of the most popular loss functions used for multiclass classification. It is also the loss function of…
Iterative Linear Quadratic Optimization for Nonlinear Control: Differentiable Programming Algorithmic Templates
Vincent Roulet, Siddhartha Srinivasa, Maryam Fazel, Zaid Harchaoui
Iterative optimization algorithms depend on access to information about the objective function. In a differentiable programming framework, this information,…
Stepping on the Edge: Curvature Aware Learning Rate Tuners
Vincent Roulet, Atish Agarwala, Jean-Bastien Grill, Grzegorz Swirszcz, Mathieu Blondel, Fabian Pedregosa
Curvature information -- particularly, the largest eigenvalue of the loss Hessian, known as the sharpness -- often forms the basis for learning rate tuners…
Dual Gauss-Newton Directions for Deep Learning
Vincent Roulet, Mathieu Blondel
Inspired by Gauss-Newton-like methods, we study the benefit of leveraging the structure of deep learning objectives, namely, the composition of a convex loss…
On the Interplay Between Stepsize Tuning and Progressive Sharpening
Vincent Roulet, Atish Agarwala, Fabian Pedregosa
Recent empirical work has revealed an intriguing property of deep learning models by which the sharpness (largest eigenvalue of the Hessian) increases…
Per-example gradients: a new frontier for understanding and improving optimizers
Vincent Roulet, Atish Agarwala
Training algorithms in deep learning usually treat a mini-batch of samples as a single object; they average gradients over the mini-batch, and then process the…
Joint Learning of Energy-based Models and their Partition Function
Michael E. Sander, Vincent Roulet, Tianlin Liu, Mathieu Blondel
Energy-based models (EBMs) offer a flexible framework for parameterizing probability distributions using neural networks. However, learning EBMs by exact…
An Elementary Approach to Convergence Guarantees of Optimization Algorithms for Deep Networks
Vincent Roulet, Zaid Harchaoui
We present an approach to obtain convergence guarantees of optimization algorithms for deep networks based on elementary arguments and computations. The…
Target Propagation via Regularized Inversion
Vincent Roulet, Zaid Harchaoui
Target Propagation (TP) algorithms compute targets instead of gradients along neural networks and propagate them backward in a way that is similar yet…
Differentiable Programming à la Moreau
Vincent Roulet, Zaid Harchaoui
The notion of a Moreau envelope is central to the analysis of first-order optimization algorithms for machine learning. Yet, it has not been developed and…
On Global and Local Convergence of Iterative Linear Quadratic Optimization Algorithms for Discrete Time Nonlinear Control
Vincent Roulet, Siddhartha Srinivasa, Maryam Fazel, Zaid Harchaoui
A classical approach for solving discrete time nonlinear control on a finite horizon consists in repeatedly minimizing linear quadratic approximations of the…
Sharpness, Restart and Acceleration
Vincent Roulet, Alexandre d'Aspremont
The Łojasiewicz inequality shows that sharpness bounds on the minimum of convex optimization problems hold almost generically. Sharpness directly controls…
On the Convergence of the Iterative Linear Exponential Quadratic Gaussian Algorithm to Stationary Points
Vincent Roulet, Maryam Fazel, Siddhartha Srinivasa, Zaid Harchaoui
A classical method for risk-sensitive nonlinear control is the iterative linear exponential quadratic Gaussian algorithm. We present its convergence analysis…
Iterative Linearized Control: Stable Algorithms and Complexity Guarantees
Vincent Roulet, Siddhartha Srinivasa, Dmitriy Drusvyatskiy, Zaid Harchaoui
We examine popular gradient-based algorithms for nonlinear control in the light of the modern complexity analysis of first-order optimization algorithms. The…
Discriminative Clustering with Representation Learning with any Ratio of Labeled to Unlabeled Data
Corinne Jones, Vincent Roulet, Zaid Harchaoui
We present a discriminative clustering approach in which the feature representation can be learned from data and moreover leverage labeled data. Representation…
Computational Complexity versus Statistical Performance on Sparse Recovery Problems
Vincent Roulet, Nicolas Boumal, Alexandre d'Aspremont
We show that several classical quantities controlling compressed sensing performance directly match classical parameters controlling algorithmic complexity. We…
Integration Methods and Accelerated Optimization Algorithms
Damien Scieur, Vincent Roulet, Francis Bach, Alexandre d'Aspremont
We show that accelerated optimization methods can be seen as particular instances of multi-step integration schemes from numerical analysis, applied to the…
Learning with Clustering Structure
Vincent Roulet, Fajwel Fogel, Alexandre d'Aspremont, Francis Bach
We study supervised learning problems using clustering constraints to impose structure on either features or samples, seeking to help both prediction and…