Pranshu Malviya

Continual learning · Optimization

PhD Candidate at Mila / Polytechnique Montréal


Research

I work at the intersection of continual learning and neural network optimization. In continual learning, I’m interested in how models should adapt under distribution shift as new tasks or data arrive over time. In optimization, I focus on better understanding the loss landscape in order to guide learning toward minima that generalize well. I’m advised by Prof. Sarath Chandar.

Some highlights: Manifold Metric (CoLLAs 2025, Oral), Lookbehind-SAM (ICML 2024), and Critical Momenta (TMLR 2024). These projects owe a great deal to collaborators, especially Aristide Baratin (Samsung SAIT AI Lab Montreal) and Razvan Pascanu (Google DeepMind), and to many other co-authors.

Before Montreal, I completed my MS at IIT Madras with Prof. Balaraman Ravindran and Prof. Sarath Chandar at RBCDSAI. There I worked on TAG (CoLLAs 2022) and Causal Fairness (ACML 2021).

Updates

2026.03

New preprint: CoPeP on continual pretraining for protein language models

Benchmarking how protein language models handle continual pretraining -- with Darshan Patil, Mathieu Reymond, Quentin Fournier, and Sarath Chandar.

2025.09

Joined DRW as AI Research Scientist Intern

Working on AI/ML research at DRW in Montreal.

2025.05

Paper accepted at CoLLAs 2025 (Oral): Manifold Metric

A loss landscape approach for predicting model performance.

2024.09

Awarded PBEEE Doctoral Research Scholarship by FRQNT Quebec

Fonds de recherche du Québec — Nature et technologies doctoral scholarship.

2024.05

Paper accepted at ICML 2024: Lookbehind-SAM

k steps back, 1 step forward -- sharpness-aware minimization with lookbehind.

Papers

Manifold Metric: A Loss Landscape Approach for Predicting Model Performance

Using loss landscape geometry to predict model generalization without held-out data.

Authors: P. Malviya, J. Huang, A. Baratin, Q. Fournier, S. Chandar

Loss Landscape
Architectures

Lookbehind-SAM: k steps back, 1 step forward

An efficient extension to Sharpness-Aware Minimization that leverages historical gradient information.

Authors: G. Mordido, P. Malviya, A. Baratin, S. Chandar

Optimization
SAM

Promoting Exploration in Memory-Augmented Adam using Critical Momenta

A memory-augmented optimizer that stores and retrieves critical momenta to promote exploration in the loss landscape.

Authors: P. Malviya, G. Mordido, A. Baratin, R. Babanezhad, J. Huang, S. Lacoste-Julien, R. Pascanu, S. Chandar

Optimization

TAG: Task-based Accumulated Gradients for Lifelong Learning

A gradient accumulation method for continual learning that mitigates catastrophic forgetting.

Authors: P. Malviya, B. Ravindran, S. Chandar

Continual Learning
Optimization

An Introduction to Lifelong Supervised Learning

A comprehensive primer on lifelong/continual supervised learning, surveying the field across task-incremental, class-incremental, and domain-incremental settings.

Authors: S. Sodhani, M. Faramarzi, S.V. Mehta, P. Malviya, M. Abdelsalam, J. Rajendran, S. Chandar

Continual Learning
Survey

A Causal Approach for Unfair Edge Prioritization and Discrimination Removal

Using causal inference to identify and remove discriminatory edges in decision systems.

Authors: P. Ravishankar, P. Malviya, B. Ravindran

Causal Inference
Fairness

Other projects

Including preprints and workshop papers

CoPeP: Benchmarking Continual Pretraining for Protein Language Models

D. Patil, P. Malviya, M. Reymond, Q. Fournier, S. Chandar

(arXiv 2026)

Torque-Aware Momentum

P. Malviya, G. Mordido, A. Baratin, R. Babanezhad, G.K. Dziugaite, R. Pascanu, S. Chandar

(arXiv 2024)

Interpolate: How Resetting Active Neurons can also improve Generalizability in Online Learning

P. Malviya, D. Patil, M. Hashemzadeh, Q. Fournier, S. Chandar

(Under review 2025)

Experimental Design for Nonstationary Optimization

D. Patil, P. Malviya, M. Hashemzadeh, S. Chandar

(Under review 2025)

Beyond Work

Photos from travel and hikes, sketches, book notes, and reading log.

Contact

Want to chat? Feel free to reach out via email.