About Me
Hello! I am currently the lead of the AI research group at Ritual. Before that, I was a researcher at Abacus.AI, and before a diversion into high-frequency trading, I began my AI career at DeepMind.
I have broad interests in AI and ML; my past work includes vision models, reinforcement learning, LLMs, open-source models, inference optimisation, and privacy.
I have worked in both large labs and smaller startups, in roles ranging from academic research to highly applied work. My work has been cited over 10,000 times (as of February 2026).
Current/Past Mentees
I have been fortunate to work with some amazing interns, students, and reports over the past few years.
- Rahul Thomas (2024–present) — PhD student with Micah Goldblum at Columbia University
- Teo Kitanovski (2025–present) — Undergraduate at Vanderbilt University
- Erica Choi (2024–2025) — Research assistant at Columbia University
- William Gvozdjak (2024–2025) — Undergraduate at MIT
- Arthur Liang (2024–2025) — Isomorphic Labs
- Deep Karkhanis (2023–2024) — Google DeepMind
Selected Recent Publications
Global Resolution: Optimal Multi-Draft Speculative Sampling via Convex Optimization
Oral, ICLR 2026 (top 1% of papers).
LiveBench: A Challenging, Contamination-Limited LLM Benchmark
Spotlight, ICLR 2025 (top 3% of papers).
Fixing Failure Modes of Preference Optimisation with DPO-Positive
Open Science for Foundation Models Workshop, ICLR 2025.
The work in this paper forms the core of the Smaug LLM, which was the top open-source model on the Hugging Face Open LLM Leaderboard at launch and remained so for over two months.