

Invited Talk in Workshop: Pitfalls of limited data and computation for Trustworthy ML

How (not) to Model an Adversary (Ruth Urner)


Abstract:

Statistical learning (and learning theory) traditionally relies on training and test data being generated by the same process, an assumption that rarely holds in practice. Conditions of data generation may change over time, or agents may (strategically or adversarially) respond to a published predictor, aiming for a specific outcome for their manipulated instance. Developing methods for adversarial robustness has received much attention in recent years, and both practical tools and theoretical guarantees have been developed. In this talk, I will focus on the learning-theoretic treatment of these scenarios and survey how different modeling assumptions can lead to drastically different conclusions. I will argue that for robustness we should aim for minimal assumptions on how an adversary might act, and present recent results on a variety of relaxations of the standard adversarially (or strategically) robust learning model.
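
As a concrete reference point for the "standard adversarial robustness" the abstract mentions, here is a minimal sketch of the usual perturbation-set formulation (the notation, including the perturbation set \(\mathcal{U}\), is an assumption of this sketch and not taken from the talk): an adversary may replace an instance \(x\) with any \(z \in \mathcal{U}(x)\), for example an \(\ell_\infty\)-ball of radius \(\epsilon\) around \(x\), and a predictor \(h\) is charged the worst case over that set,

\[
\ell^{\mathcal{U}}(h,(x,y)) \;=\; \sup_{z \in \mathcal{U}(x)} \mathbf{1}\!\left[\, h(z) \neq y \,\right],
\qquad
L^{\mathcal{U}}_D(h) \;=\; \mathbb{E}_{(x,y)\sim D}\!\left[\, \ell^{\mathcal{U}}(h,(x,y)) \,\right].
\]

The modeling assumptions enter exactly through the choice of \(\mathcal{U}(x)\): fixing a specific perturbation set credits the adversary with known, limited power, which is one way different modeling choices can lead to drastically different conclusions.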
