My research focuses on enhancing the usability of machine learning models through efficient adaptation. While large foundation models have shown impressive progress, they remain costly, rigid, and difficult to deploy in real-world domains where data is scarce, resources are limited, and safety is critical. I aim to build models that are more flexible, efficient, and controllable.

Three questions guide my work: (1) How can models generalize under strict data constraints? (2) How can they interact efficiently with external resources like databases or the web? (3) How can users meaningfully shape and control model behavior?

At Microsoft Research India, I work on efficient grounding for retrieval-augmented models. Our work on FrugalRAG shows how small language models can match larger ones by learning when to stop searching, reducing computation while retaining performance. I also co-developed LiveDRBench, a scalable benchmark for evaluating modern reasoning agents.

With Prof. Vineeth N. Balasubramanian, I explored task transfer in vision-language models. By systematically finetuning across diverse perception datasets, we uncovered clusters of related tasks, offering insights into efficient finetuning.

I am also interested in enhancing user trust and control. At NAACL 2025, we introduced the first framework for unlearnable text datasets, showing how imperceptible modifications can protect individuals' data from being used in model training. Earlier, with the MIT Media Lab, I developed privacy-preserving vision systems that prevent sensitive attribute leakage from point clouds (ECCV 2022).

Broadly, I want to make foundation models more efficient, trustworthy, and practical for high-stakes settings where today’s models often break down.

I received my B.Tech in Computer Science and Engineering from Delhi Technological University in 2022. For more details, please see my CV or reach out via email.

Updates

Aug 2025: LiveDRBench released on arXiv, along with the Hugging Face dataset and evaluation code.

Jul 2025: Will attend ICML 2025 in Vancouver (July 13–19).

Jun 2025: FrugalRAG accepted at the ICML 2025 ES-FoMo workshop (arXiv 2507.07634).

Feb 2025: Code released for Unlearnable Text Datasets.

Feb 2025: “Unlearnable Text Datasets” accepted at NAACL 2025 (Main)!

Nov 2024: Preprint on unlearnable text datasets published.

Nov 2024: Code and website released for “ReEdit: Multimodal Exemplar-Based Image Editing”.

Nov 2024: WACV 2025 paper “ReEdit: Multimodal Exemplar-Based Image Editing” accepted.

Sept 2024: “Thinking Fair and Slow: Structured Prompts for LLM Debiasing” accepted at EMNLP 2024 (Main).

Aug 2024: Joined Microsoft Research India as a Pre-Doctoral Research Fellow.