Hello! I am a second-year PhD student at NYU, advised by Prof. Eugene Vinitsky. I build agents that behave like humans in multi-agent, safety-critical settings. To do this, I combine reinforcement learning, generative modeling, and principles from cognitive science. Before my PhD, I completed an undergraduate degree in Neuroscience and a master's degree in AI, during which I worked on cooperative game theory.
I am excited to be interning at Waymo this summer.
Outside the lab, I enjoy running along the East River, reading, and sketching.
Google Scholar / GitHub / X / Bluesky / Goodreads
News
Mar 2025. Talk: Gave an invited talk at the AI and Social Good Lab (AISOC) at CMU.
Jan 2025. Workshop: I am co-organizing the workshop on Computational Models of Human Road User Behavior for Autonomous Vehicles at IEEE IAVVC.
Jan 2025. Fellowship: I'm honored to have been awarded the 2025 PhD Cooperative AI Fellowship!
Oct 2024. Talk: Gave an invited talk on HR-PPO and GPUDrive at the RL reading group at UoE! You can watch the recording here.
May 2024. Paper: Human-compatible driving partners through data-regularized self-play reinforcement learning was accepted to RLC 2024.
Apr 2024. Talk: Gave an invited talk on Human-Regularized PPO at the Berkeley Multiagent Learning Seminar! The slides are here.
Sep 2023. Started my Ph.D. at NYU with the EMERGE Lab!
Selected papers
Building reliable sim driving agents by scaling self-play
Daphne Cornelisse, Aarav Pandya, Kevin Joseph, Joseph Suárez, Eugene Vinitsky
In submission
Paper | Tweet | Project page
Neural payoff machines: predicting fair and stable payoff allocations among team members
Daphne Cornelisse, Thomas Rood, Mateusz Malinowski, Yoram Bachrach, Tal Kachman
NeurIPS 2022
Paper | Poster | Tweet | Master thesis