Hello! I am a third-year PhD student at NYU, advised by Prof. Eugene Vinitsky. I build fast multi-agent simulation environments and explore how to train effective agents by combining self-play RL with human data. Before my PhD, I completed an undergraduate degree in Neuroscience and a master’s in AI, where I worked on cooperative game theory.
My work is supported by a CAIF Fellowship and a Chishiki AI Fellowship (UT-Austin and NSF).
I interned at Waymo in the summer of 2025.
News
Feb 2026. Preprint: We show that regularized self-play RL can effectively adapt policies to new cities without using human demonstrations!
Feb 2026. Blog: How to think about human-likeness in the age of autonomy.
Dec 2025. Release: We released PufferDrive 2.0! Train driving agents from scratch in 15 minutes and drive with them, too.
Oct 2025. Blog: How to catch subtle RL bugs before they catch you.
Oct 2025. Blog: Human behavior modeling in naturalistic driving: Trends and opportunities.
Jul 2025. Talk: Gave a talk at Waymo on guided self-play.
May-Aug 2025. Internship: Spent the summer interning with Waymo's safety research team.
Apr 2025. Fellowship: I was selected as a Chishiki AI Fellow (funded by NSF).
Mar 2025. Talk: Gave an invited talk at the AI and Social Good Lab (AISOC) at CMU.
Jan 2025. Workshop: I am co-organizing the workshop on Computational Models of Human Road User Behavior for Autonomous Vehicles at IEEE IAVVC.
Jan 2025. Fellowship: I'm honored to have been awarded the 2025 PhD Cooperative AI Fellowship!
Oct 2024. Talk: Gave an invited talk on HR-PPO and GPUDrive at the RL reading group at UoE! You can watch the recording here.
May 2024. Paper: "Human-compatible driving partners through data-regularized self-play reinforcement learning" was accepted to RLC 2024.
Apr 2024. Talk: Gave an invited talk on Human-Regularized PPO at the Berkeley Multiagent Learning Seminar! The slides are here.
Sep 2023. Started my PhD at NYU with the EMERGE Lab!
Selected projects
Learning to drive in new cities without human demonstrations
Zilin Wang*, Saeed Rahmani*, Daphne Cornelisse, Bidipta Sarkar, Alexander David Goldie, Jakob Nicolaus Foerster, Shimon Whiteson
Preprint
Paper
PufferDrive 2.0: A fast and friendly driving simulator for training and evaluating RL agents
Daphne Cornelisse*, Spencer Cheng*, Pragnay Mandavilli, Julian Hunt, Kevin Joseph, Waël Doulazmi, Valentin Charraut, Aditya Gupta, Joseph Suarez, Eugene Vinitsky
Software release
GitHub | Tweet | Release post
Building reliable sim driving agents by scaling self-play
Daphne Cornelisse, Aarav Pandya, Kevin Joseph, Joseph Suarez, Eugene Vinitsky
arXiv
Paper | Tweet | Project page
Neural payoff machines: predicting fair and stable payoff allocations among team members
Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman
NeurIPS 2022
Paper | Poster | Tweet | Master thesis