hosted by PhD Program in CS @ TU KL
"On The Generalization Analysis of Adversarial Learnin"
Many recent studies have highlighted the susceptibility of virtually all machine-learning models to adversarial attacks. An adversarial attack makes imperceptible changes to an input example of a given prediction model; such changes are carefully designed to alter the otherwise correct prediction of the model. In this talk, we discuss the generalization properties of adversarial learning. In particular, we present high-probability generalization bounds on the adversarial risk in terms of the empirical adversarial risk, the complexity of the function class, and the adversarial noise set. Our bounds apply to a broad range of models, losses, and adversaries. We showcase their applicability by deriving adversarial generalization bounds for the multi-class classification setting and various prediction models (including linear models and deep neural networks).
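To make the notion of an adversarial attack concrete, the following is a minimal sketch (not taken from the talk) of an FGSM-style perturbation against a linear binary classifier sign(w·x); all names (`w`, `x`, `epsilon`) are illustrative assumptions:

```python
import numpy as np

def fgsm_linear(w, x, y, epsilon):
    """Perturb x within an L-infinity ball of radius epsilon so as to
    maximally decrease the margin y * (w . x) of the linear model."""
    # The gradient of the negative margin -y * (w . x) w.r.t. x is -y * w;
    # stepping along its sign spends the L-inf budget optimally.
    return x + epsilon * np.sign(-y * w)

# A correctly classified point: w . x = 0.7 > 0 matches label y = +1.
w = np.array([1.0, -2.0])
x = np.array([0.3, -0.2])
y = 1

x_adv = fgsm_linear(w, x, y, epsilon=0.5)
# The margin drops by epsilon * ||w||_1 = 1.5, from 0.7 to -0.8,
# flipping the prediction even though ||x_adv - x||_inf <= 0.5.
print(np.dot(w, x), np.dot(w, x_adv))
```

The adversarial risk discussed in the talk is the expected loss over the worst such perturbation in the noise set, rather than over the clean input.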
|Time:||Monday, 25.07.2022, 16:00|
|Place:||In-person, Room 48-680|