Pranshu Malviya
Continual learning · Optimization
PhD Candidate at MILA / Polytechnique Montreal
Research
I work at the intersection of continual learning and neural network optimization. In continual learning, I’m interested in how models should adapt under distribution shift as new tasks or data arrive over time. In optimization, I focus on understanding the loss landscape in order to guide learning toward minima that generalize well. I’m advised by Prof. Sarath Chandar.
Some highlights: Manifold Metric (CoLLAs 2025, Oral), Lookbehind-SAM (ICML 2024), and Critical Momenta (TMLR 2024). These projects owe a great deal to collaborators—especially Aristide Baratin (Samsung SAIT AI Lab Montreal) and Razvan Pascanu (Google DeepMind)—and to many other co-authors.
Before Montreal, I completed my MS at IIT Madras with Prof. Balaraman Ravindran and Prof. Sarath Chandar at RBCDSAI. There I worked on TAG (CoLLAs 2022) and Causal Fairness (ACML 2021).
Updates
New preprint: CoPeP on continual pretraining for protein language models
Benchmarking how protein language models handle continual pretraining -- with Darshan Patil, Mathieu Reymond, Quentin Fournier, and Sarath Chandar.
Joined DRW as AI Research Scientist Intern
Working on AI/ML research at DRW in Montreal.
Paper accepted at CoLLAs 2025 (Oral): Manifold Metric
A loss landscape approach for predicting model performance.
Awarded PBEEE Doctoral Research Scholarship by FRQNT Quebec
Fonds de recherche du Québec — Nature et technologies doctoral scholarship.
Paper accepted at ICML 2024: Lookbehind-SAM
k steps back, 1 step forward -- sharpness-aware minimization with lookbehind.
Papers
Manifold Metric: A Loss Landscape Approach for Predicting Model Performance
Using loss landscape geometry to predict model generalization without held-out data.
Authors: P. Malviya, J. Huang, A. Baratin, Q. Fournier, S. Chandar
Lookbehind-SAM: k steps back, 1 step forward
An efficient extension to Sharpness-Aware Minimization that leverages historical gradient information.
Authors: G. Mordido, P. Malviya, A. Baratin, S. Chandar
Promoting Exploration in Memory-Augmented Adam using Critical Momenta
A memory-augmented optimizer that stores and retrieves critical momenta to promote exploration in the loss landscape.
Authors: P. Malviya, G. Mordido, A. Baratin, R. Babanezhad, J. Huang, S. Lacoste-Julien, R. Pascanu, S. Chandar
TAG: Task-based Accumulated Gradients for Lifelong Learning
A gradient accumulation method for continual learning that prevents catastrophic forgetting.
Authors: P. Malviya, B. Ravindran, S. Chandar
An Introduction to Lifelong Supervised Learning
A comprehensive primer on lifelong/continual supervised learning, surveying the field across task-incremental, class-incremental, and domain-incremental settings.
Authors: S. Sodhani, M. Faramarzi, S.V. Mehta, P. Malviya, M. Abdelsalam, J. Rajendran, S. Chandar
Other projects
Including preprints and workshop papers
D. Patil, P. Malviya, M. Reymond, Q. Fournier, S. Chandar
P. Malviya, G. Mordido, A. Baratin, R. Babanezhad, G.K. Dziugaite, R. Pascanu, S. Chandar
P. Malviya, D. Patil, M. Hashemzadeh, Q. Fournier, S. Chandar
D. Patil, P. Malviya, M. Hashemzadeh, S. Chandar
Education
MILA / Polytechnique Montreal
Indian Institute of Technology Madras
IIIT Bhubaneswar
Roles
DRW
Polytechnique Montreal
NPTEL, IIT Madras
IIT Madras
Beyond Work
Photos from travel and hikes, sketches, book notes, and reading log.
Contact
Want to chat? Feel free to reach out via email.