Organizers


audrey.jpg

Audrey Huang is a fourth-year PhD candidate in Computer Science advised by Nan Jiang at the University of Illinois Urbana-Champaign. Currently, she is excited about combining ideas from RL theory with the capabilities and structures of language modeling to design efficient and provable algorithms. She also works on the complexity of online/offline RL with function approximation, and imitation learning. Audrey has interned at Microsoft Research, Google, and Adobe, and received her MS from Carnegie Mellon University and BS from Caltech.


adam.jpg

Adam Block is an Assistant Professor in the Department of Computer Science at Columbia University. Previously, he was a postdoctoral researcher at Microsoft Research NYC, and a PhD student in the math department at MIT with affiliations in the Laboratory for Information & Decision Systems and the Statistics and Data Science Center. His research focuses on bridging theory and practice within machine learning by designing efficient algorithms with provable guarantees under realistic assumptions on data, with a special focus on learning in sequential decision-making tasks. His research was generously supported by an NSF Graduate Research Fellowship. Before MIT, Adam completed a B.A. in Mathematics at Columbia University, where he was an I.I. Rabi scholar.


sadhika.jpg

Sadhika Malladi is an incoming Assistant Professor at UC San Diego and currently a postdoctoral researcher at Microsoft Research NYC. She completed her PhD at Princeton University, advised by Sanjeev Arora. Her work focuses on using mathematical insights into deep learning (especially language models) to design and analyze performant and efficient algorithms.


will.jpg

William Merrill is an incoming Assistant Professor at the Toyota Technological Institute at Chicago and currently a Young Investigator at the Allen Institute for AI. He received his PhD from New York University. A major focus of Will’s research has been developing theory on the computational power and limitations of transformers, with an eye towards guiding the analysis and design of new architectures and inference methods.


pavel.jpg

Pavel Izmailov is a Researcher at Anthropic, where he contributed to the recent Claude 3.7 coding and reasoning model. Starting in Fall 2025, he will be joining New York University as an Assistant Professor in the Tandon Computer Science and Engineering Department, and in Courant Computer Science by courtesy. Previously, he worked on reasoning and problem solving in language models at OpenAI. He contributed to the OpenAI o1 models, a new state of the art in LLM reasoning, and also worked on weak-to-strong generalization on the superalignment team under Jeff Wu, Jan Leike, and Ilya Sutskever. He completed his PhD in Computer Science at NYU under the supervision of Andrew Gordon Wilson, and received an outstanding paper award at ICML 2022 for his work on Bayesian models and methods.


dylan.jpg

Dylan Foster is a principal researcher at Microsoft Research, New England (and New York City), where he is a member of the Reinforcement Learning Group. Previously, he was a postdoctoral fellow at the MIT Institute for Foundations of Data Science, and he received his PhD in computer science from Cornell University, advised by Karthik Sridharan. His research focuses on problems at the intersection of machine learning, AI, and interactive decision making. He has received several awards for his work, including the best paper award at COLT (2019) and the best student paper award at COLT (2018, 2019).


akshay.jpg

Akshay Krishnamurthy is a senior principal research manager at Microsoft Research, New York City. Previously, he spent two years as an assistant professor in the College of Information and Computer Sciences at the University of Massachusetts, Amherst, and a year as a postdoctoral researcher at Microsoft Research, NYC. His research interests are in machine learning and statistics, with a focus on interactive learning, that is, learning settings that involve feedback-driven data collection. His recent interests revolve around decision-making problems with limited feedback, including contextual bandits and reinforcement learning.


tatsu.jpg

Tatsunori Hashimoto is an assistant professor in the Computer Science Department at Stanford University. His research uses tools from statistics to make machine learning systems more robust and trustworthy, especially complex systems such as large language models. The goal of his research is to use robustness and worst-case performance as a lens to understand and make progress on several fundamental challenges in machine learning and natural language processing. A few topics of recent interest are long-tail behavior, developing systemic understanding, and fairness. Previously, he was a postdoc at Stanford working with John C. Duchi and Percy Liang on tradeoffs between the average and worst-case performance of machine learning models. Before that, he was a graduate student at MIT co-advised by Tommi Jaakkola and David Gifford, and an undergraduate student at Harvard studying statistics and math, advised by Edoardo Airoldi.