Social
ML Safety Social
Mantas Mazeika · David Krueger
Stolz 1
Abstract:
Designing systems that operate safely in real-world settings is a topic of growing interest in machine learning. We want to host a meet-up for researchers who are currently working on or interested in topics related to AI safety and security, such as adversarial robustness, interpretability, and backdoors, to foster discussion and collaboration. We hosted similar events at NeurIPS and ICML in 2023, which were very well attended (>200 and >150 concurrent attendees, respectively).