Publications
2025
- Preprint: A Lyapunov analysis of Korpelevich’s extragradient method with fast and flexible extensions. Manu Upadhyaya, Puya Latafat, and Pontus Giselsson. arXiv:2502.00119, 2025.
We present a Lyapunov analysis of Korpelevich’s extragradient method and establish an \(O(1/k)\) last-iterate convergence rate. Building on this, we propose flexible extensions that combine extragradient steps with user-specified directions, guided by a line-search procedure derived from the same Lyapunov analysis. These methods retain global convergence under practical assumptions and can achieve superlinear rates when directions are chosen appropriately. Numerical experiments highlight the simplicity and efficiency of this approach.
@article{upadhyaya2025lyapunovanalysiskorpelevichsextragradient,
  title   = {A Lyapunov analysis of Korpelevich's extragradient method with fast and flexible extensions},
  author  = {Upadhyaya, Manu and Latafat, Puya and Giselsson, Pontus},
  year    = {2025},
  journal = {arXiv:2502.00119},
}
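For readers unfamiliar with the underlying algorithm, here is a minimal sketch of Korpelevich’s classical extragradient iteration applied to a toy monotone operator. It does not reproduce the paper’s Lyapunov analysis or its flexible, line-search-based extensions; the operator, step size, and iteration count are illustrative assumptions.

```python
import numpy as np

def extragradient(F, x0, step, iters=500):
    """Korpelevich's extragradient method for a monotone operator F:
    an exploratory step to x_bar, then an update using F evaluated at x_bar."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x_bar = x - step * F(x)      # exploratory (prediction) step
        x = x - step * F(x_bar)      # corrected step using F at x_bar
    return x

# Toy monotone operator (assumption): the saddle-point field of f(u, v) = u * v,
# i.e. F(u, v) = (v, -u), whose unique zero is the origin.
F = lambda z: np.array([z[1], -z[0]])
print(extragradient(F, x0=[1.0, 1.0], step=0.1))
```

On this bilinear saddle-point field, plain forward steps spiral away from the solution, while the extra evaluation at the intermediate point makes the iterates converge to the zero at the origin.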
2024
- Journal: Automated tight Lyapunov analysis for first-order methods. Manu Upadhyaya, Sebastian Banert, Adrien B. Taylor, and Pontus Giselsson. Mathematical Programming, 2024.
We present a methodology for establishing the existence of quadratic Lyapunov inequalities for a wide range of first-order methods used to solve convex optimization problems. In particular, we consider (i) classes of optimization problems of finite-sum form with (possibly strongly) convex and possibly smooth functional components, (ii) first-order methods that can be written as a linear system in state-space form in feedback interconnection with the subdifferentials of the functional components of the objective function, and (iii) quadratic Lyapunov inequalities that can be used to draw convergence conclusions. We present a necessary and sufficient condition for the existence of a quadratic Lyapunov inequality within a predefined class of Lyapunov inequalities, which amounts to solving a small-sized semidefinite program. We showcase our methodology on several first-order methods that fit the framework. Most notably, our methodology allows us to significantly extend the region of parameter choices that allow for duality gap convergence in the Chambolle-Pock method when the linear operator is the identity mapping.
@article{upadhyaya2024tight_lyapunov_analysis,
  title   = {Automated tight Lyapunov analysis for first-order methods},
  author  = {Upadhyaya, Manu and Banert, Sebastian and Taylor, Adrien B. and Giselsson, Pontus},
  year    = {2024},
  journal = {Mathematical Programming},
  doi     = {10.1007/s10107-024-02061-8},
}
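To convey the flavor of certifying convergence via a quadratic Lyapunov inequality posed as a small semidefinite program, here is a toy feasibility problem for a fixed linear iteration, solved with CVXPY. It is far simpler than the framework in the paper (which covers finite-sum objectives and methods in feedback interconnection with subdifferentials); the iteration matrix, contraction factor, and solver choice are assumptions.

```python
import cvxpy as cp
import numpy as np

# Certify V(x) = x' P x as a Lyapunov function for x_{k+1} = A x_k,
# i.e. find P >= I with A' P A <= rho^2 P, via a small SDP.
A = np.array([[0.9, 0.2],
              [0.0, 0.8]])
rho = 0.95  # target contraction factor (assumption)

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2),                 # P positive definite (normalized)
               A.T @ P @ A << rho**2 * P]      # quadratic Lyapunov decrease
problem = cp.Problem(cp.Minimize(0), constraints)
problem.solve(solver=cp.SCS)
print(problem.status)
print(P.value)
```

Feasibility of this SDP certifies that \(V(x) = x^\top P x\) decreases by at least the factor \(\rho^2\) along the iteration, which is the same kind of conclusion the paper’s far more general Lyapunov inequalities are designed to deliver.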
2023
- Preprint: The Chambolle-Pock method converges weakly with \(\theta > 1/2\) and \(\tau\sigma\|L\|^2 < 4/(1+2\theta)\). Sebastian Banert, Manu Upadhyaya, and Pontus Giselsson. arXiv:2309.03998, 2023.
The Chambolle-Pock method is a versatile three-parameter algorithm designed to solve a broad class of composite convex optimization problems, which encompass two proper, lower semicontinuous, and convex functions, along with a linear operator \(L\). The functions are accessed via their proximal operators, while the linear operator is evaluated in a forward manner. Among the three algorithm parameters \(\tau\), \(\sigma\), and \(\theta\), the step sizes \(\tau, \sigma > 0\) govern the proximal operators, while \(\theta\) is an extrapolation step parameter. Previous convergence results have been based on the assumption that \(\theta = 1\). We demonstrate that weak convergence is achievable whenever \(\theta > 1/2\) and \(\tau\sigma\|L\|^2 < 4/(1+2\theta)\). Moreover, we establish tightness of the step-size bound by providing an example that is nonconvergent whenever the second bound is violated.
@article{banert2023chambolle_pock,
  title   = {The Chambolle-Pock method converges weakly with $\theta > 1/2$ and $\tau\sigma\|L\|^2 < 4/(1+2\theta)$},
  author  = {Banert, Sebastian and Upadhyaya, Manu and Giselsson, Pontus},
  year    = {2023},
  journal = {arXiv:2309.03998},
}
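A minimal sketch of one common form of the Chambolle-Pock iteration, with \(\tau\), \(\sigma\), and \(\theta\) exposed as parameters. The toy problem, proximal operators, and parameter values are illustrative assumptions, chosen so that \(\theta > 1/2\) and \(\tau\sigma\|L\|^2 < 4/(1+2\theta)\) hold.

```python
import numpy as np

def chambolle_pock(K, prox_tau_g, prox_sigma_fconj, x0, y0,
                   tau, sigma, theta, iters=500):
    """One common form of the Chambolle-Pock iteration for
    min_x g(x) + f(Kx), with step sizes tau, sigma and
    extrapolation parameter theta."""
    x, y = x0.copy(), y0.copy()
    x_bar = x.copy()
    for _ in range(iters):
        y = prox_sigma_fconj(y + sigma * (K @ x_bar))  # dual proximal step
        x_new = prox_tau_g(x - tau * (K.T @ y))        # primal proximal step
        x_bar = x_new + theta * (x_new - x)            # extrapolation with theta
        x = x_new
    return x, y

# Toy instance (assumption): min_x 0.5*||x - b||^2 + ||x||_1 with K = I,
# whose solution is the soft-thresholding of b at level 1.
b = np.array([3.0, -0.2, 0.7, -4.0, 0.1])
K = np.eye(b.size)
tau, sigma, theta = 0.9, 0.9, 0.8   # tau*sigma*||K||^2 = 0.81 < 4/(1+2*theta) ~ 1.54
prox_tau_g = lambda v: (v + tau * b) / (1.0 + tau)   # prox of tau * 0.5*||. - b||^2
prox_sigma_fconj = lambda v: np.clip(v, -1.0, 1.0)   # projection onto the unit l-inf ball

x, _ = chambolle_pock(K, prox_tau_g, prox_sigma_fconj,
                      np.zeros(b.size), np.zeros(b.size), tau, sigma, theta)
print(x)  # approximately [2, 0, 0, -3, 0]
```

The chosen \(\theta = 0.8\) lies outside the classical \(\theta = 1\) setting but satisfies the condition \(\theta > 1/2\), \(\tau\sigma\|L\|^2 < 4/(1+2\theta)\) established in the paper.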
2020
- Thesis: Covariance matrix regularization for portfolio selection: Achieving desired risk. Manu Upadhyaya. Master’s thesis, 2020.
The modus operandi of most asset managers is to promise clients an annual risk target, where risk is measured by the realized standard deviation of portfolio returns. Moreover, Markowitz (1952) portfolio selection requires an estimate of the covariance matrix of the returns of the financial instruments under consideration. To address both of these problems, we develop a data-driven method for covariance matrix regularization. The data-driven method critically depends on a novel risk-targeting loss function. In addition, the risk-targeting loss function is analyzed under large-dimensional asymptotics, resulting in an asymptotically optimal covariance matrix regularization. In an ex-post analysis, using historical price data from multiple futures markets, the data-driven method outperforms the other regularization methods it is compared against.
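As a generic illustration of the setting (shrinkage of a sample covariance matrix followed by Markowitz-style portfolio selection), here is a small sketch; the shrinkage target, the fixed shrinkage intensity, and the simulated return data are assumptions, and the thesis’s risk-targeting loss function for choosing the regularization is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
T, p = 250, 20                                # ~1 year of daily returns, 20 instruments (assumption)
returns = rng.normal(0.0, 0.01, size=(T, p))  # simulated stand-in for futures return data

S = np.cov(returns, rowvar=False)             # sample covariance matrix

def shrink(S, alpha):
    """Linear shrinkage of S toward a scaled identity target.
    The thesis instead selects the regularization by minimizing a
    risk-targeting loss; alpha here is fixed purely for illustration."""
    p = S.shape[0]
    target = (np.trace(S) / p) * np.eye(p)
    return (1.0 - alpha) * S + alpha * target

def min_variance_weights(Sigma):
    """Markowitz minimum-variance portfolio under a full-investment constraint."""
    ones = np.ones(Sigma.shape[0])
    w = np.linalg.solve(Sigma, ones)
    return w / w.sum()

Sigma = shrink(S, alpha=0.3)
w = min_variance_weights(Sigma)
portfolio_returns = returns @ w
print("annualized realized risk:", np.sqrt(252.0) * portfolio_returns.std())
```

The realized standard deviation printed at the end is the quantity an asset manager would compare against the promised risk target.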
2017
- Conference: The feeling of success: Does touch sensing help predict grasp outcomes? Roberto Calandra, Andrew Owens, Manu Upadhyaya, Wenzhen Yuan, Justin Lin, Edward H. Adelson, and Sergey Levine. In Proceedings of the 1st Annual Conference on Robot Learning (CoRL), 13–15 Nov 2017.
A successful grasp requires careful balancing of the contact forces. Deducing whether a particular grasp will be successful from indirect measurements, such as vision, is therefore quite challenging, and direct sensing of contacts through touch sensing provides an appealing avenue toward more successful and consistent robotic grasping. However, in order to fully evaluate the value of touch sensing for grasp outcome prediction, we must understand how touch sensing can influence outcome prediction accuracy when combined with other modalities. Doing so using conventional model-based techniques is exceptionally difficult. In this work, we investigate the question of whether touch sensing aids in predicting grasp outcomes within a multimodal sensing framework that combines vision and touch. To that end, we collected more than 9,000 grasping trials using a two-finger gripper equipped with GelSight high-resolution tactile sensors on each finger, and evaluated visuo-tactile deep neural network models to directly predict grasp outcomes from either modality individually, and from both modalities together. Our experimental results indicate that incorporating tactile readings substantially improves grasping performance.
@inproceedings{calandra2017feeling,
  title     = {The feeling of success: Does touch sensing help predict grasp outcomes?},
  author    = {Calandra, Roberto and Owens, Andrew and Upadhyaya, Manu and Yuan, Wenzhen and Lin, Justin and Adelson, Edward H. and Levine, Sergey},
  year      = {2017},
  booktitle = {Proceedings of the 1st Annual Conference on Robot Learning (CoRL)},
  pages     = {314--323},
  editor    = {Levine, Sergey and Vanhoucke, Vincent and Goldberg, Ken},
  volume    = {78},
  series    = {Proceedings of Machine Learning Research},
  month     = {13--15 Nov},
  publisher = {PMLR},
}
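As a rough sketch of the kind of multimodal model the abstract describes, here is a minimal two-branch visuo-tactile classifier in PyTorch. It is not the architecture used in the paper; the input resolution, layer sizes, and fusion-by-concatenation design are assumptions.

```python
import torch
import torch.nn as nn

class VisuoTactileFusion(nn.Module):
    """Minimal fusion sketch: separate convolutional encoders for the camera
    image and the two GelSight tactile images, concatenated into a binary
    grasp-success head."""
    def __init__(self):
        super().__init__()
        def encoder():
            return nn.Sequential(
                nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
                nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.vision = encoder()
        self.touch_left = encoder()
        self.touch_right = encoder()
        self.head = nn.Sequential(nn.Linear(3 * 32, 64), nn.ReLU(),
                                  nn.Linear(64, 1))   # logit for grasp success

    def forward(self, img, tac_left, tac_right):
        z = torch.cat([self.vision(img),
                       self.touch_left(tac_left),
                       self.touch_right(tac_right)], dim=1)
        return self.head(z)

model = VisuoTactileFusion()
logits = model(torch.randn(2, 3, 64, 64),   # camera images
               torch.randn(2, 3, 64, 64),   # left GelSight images
               torch.randn(2, 3, 64, 64))   # right GelSight images
print(torch.sigmoid(logits).shape)          # (2, 1) predicted success probabilities
```

Dropping either the tactile branches or the vision branch yields single-modality variants analogous to the comparison described in the abstract.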