I am a Pre-Doctoral Research Fellow at Microsoft Research India, where I am fortunate to be advised by Dr. Amit Sharma, Dr. Nagarajan Natarajan, Prof. Vineeth N. Balasubramanian, and Dr. Srivathsan Koundinyan. Currently, I work on improving the reliability of LLMs by efficiently grounding them in real-world information. I am also studying the transferability of perception tasks in vision-language models to make them more efficient.
Previously, I was a Research Associate at Adobe Media & Data Science Lab, where I worked on multimodal learning (WACV 2025, WACV 2023) and responsible deployment of foundation models (NAACL 2025, AAAI 2024).
I am keen to explore problems at the intersection of reliability and efficiency in training foundation models, with the aim of improving their real-world robustness. My overarching goal is to enable their safe and effective use in high-stakes applications.
I received my B.Tech in Computer Science and Engineering from Delhi Technological University in 2022, where I worked with the MIT Media Lab’s Camera Culture Group on privacy-preserving vision systems (ECCV 2022).
For more details, you can find my CV here, or reach out via email (java DOT abhinav99 AT gmail DOT com).
Updates
Jul 2025: Will attend ICML 2025 in Vancouver (July 13–19).
Jun 2025: FrugalRAG accepted at ICML EsFoMo workshop (arXiv 2507.07634).
Feb 2025: Code released for Unlearnable Text Datasets.
Feb 2025: Unlearnable Text Datasets accepted to NAACL 2025 (Main)!
Nov 2024: Preprint on unlearnable text datasets released.
Nov 2024: WACV 2025 paper “ReEdit: Multimodal Exemplar-Based Image Editing” accepted.
Sep 2024: EMNLP 2024 (Main) acceptance: “Thinking Fair and Slow: Structured Prompts for LLM Debiasing.”
Aug 2024: Joined Microsoft Research India as a Pre-Doctoral Research Fellow.