ML Safety Social
Rishub Tamirisa · Bhrugu Bharathi
Peridot 201
As AI systems become increasingly capable and widely deployed, ensuring their safety and reliability is more important than ever. Researchers in the ML Safety community are working on various challenges, including interpretability, adversarial robustness, and alignment, which have become more complex with advances in multi-modal and agentic systems. This rapidly evolving field spans industry labs and academic groups, united by the need to address emerging risks.
We will host a semi-structured meet-up for researchers currently working on or interested in safety-related topics, to foster discussion and collaboration. We expect at least 150 people to attend. We previously hosted similar events at NeurIPS, ICML, and ICLR in 2023 and 2024, each drawing 150-300 attendees.
The event will open with a 30-minute panel discussion on the state of ML safety research, followed by a brief Q&A session. The rest of the event will consist of informal discussion and mingling among attendees. We will provide drinks and snacks.