Daphne Cornelisse

Hello! I am a second-year PhD student at NYU, advised by Prof. Eugene Vinitsky. I build agents that behave like humans in multi-agent, safety-critical settings. To do this, I combine reinforcement learning, generative modeling, and principles from cognitive science. Before my PhD, I completed an undergraduate degree in neuroscience and a master's in AI, where I worked on cooperative game theory.


I am excited to be interning at Waymo this summer. 


Outside the lab, I enjoy running along the East River, reading, and sketching.


Google Scholar / GitHub / X / Bluesky / Goodreads

News

Selected papers

Building reliable sim driving agents by scaling self-play

Daphne Cornelisse, Aarav Pandya, Kevin Joseph, Joseph Suárez, Eugene Vinitsky

In submission


Paper | Tweet | Project page 

GPUDrive: Data-driven, multi-agent driving simulation at 1 million FPS

Saman Kazemkhani*, Aarav Pandya*, Daphne Cornelisse*, Brennan Shacklett, Eugene Vinitsky

*Equal Contribution

ICLR 2025


Paper | Tweet | Code 

Human-compatible driving partners through data-regularized self-play reinforcement learning

Daphne Cornelisse, Eugene Vinitsky

Reinforcement Learning Conference (RLC), 2024


Paper | Tweet | Project page | Code | Slides | Talk

Neural payoff machines: predicting fair and stable payoff allocations among team members 

Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman

NeurIPS 2022

Paper | Poster | Tweet | Master thesis