PhDOpen: Aleksander Mądry - Shared screen with speaker view
Wojciech Jabłoński
44:09
Are we even able to robustify a model against adversarial attacks? I was under the impression that as long as a model is trained by gradient descent, adversarial attacks will always exist, and the only way to counteract them is to "break" the gradient by introducing non-differentiable operations.
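The gradient-based attack the question alludes to can be sketched with FGSM (fast gradient sign method) on a toy logistic-regression model. This is a minimal illustration, not anything shown in the lecture; the weights and inputs below are made up:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss(w, x, y):
    # binary cross-entropy for a single example
    p = sigmoid(w @ x)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def fgsm(w, x, y, eps):
    # gradient of the loss w.r.t. the *input* x (not the weights),
    # then a step of size eps in the sign of that gradient
    grad_x = (sigmoid(w @ x) - y) * w
    return x + eps * np.sign(grad_x)

w = np.array([1.5, -2.0, 0.5])   # hypothetical trained weights
x = np.array([0.2, -0.4, 0.1])   # clean input
y = 1.0                          # its true label
x_adv = fgsm(w, x, y, eps=0.1)

# the perturbed input is at most eps away per coordinate,
# yet has strictly higher loss than the clean one
print(loss(w, x, y), loss(w, x_adv, y))
```

The key point matching the question: as long as the loss is differentiable in the input, this one-line gradient step exists, which is why "breaking" the gradient (gradient masking) was proposed as a defense, and why it was later shown to be circumventable.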
Piotr Tempczyk
46:26
Does massive-scale weight pruning help make the network more robust to these adversarial attacks?
Maciej
01:10:18
Is there an example of such an attack in the wild? For instance, on sites that are common sources of images, like Flickr.
Tomasz Miśków
01:15:40
Will the slides be available anywhere?
Michal
01:15:46
What are we going to do during the exercise sessions? :)
Michal
01:16:43
awesome, thanks!
Piotr Wygocki
01:19:21
For the exercises, please go to https://colab.research.google.com/drive/16dDBiAdCU6bw3VF1PfEuAWynxGgJQE8v?hl=en#scrollTo=d7sM-inKv-G9 and run File -> Save a copy in Drive.
Michal
01:19:23
Games are also trained on the same data as they are tested on.
Weronika Hryniewska
01:19:47
Let's assume that I trained a DNN and then found adversarial examples. Does it make sense to add those examples to the training dataset and retrain the DNN?
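The retraining idea in the question is the core of adversarial training: instead of retraining once on a fixed set of found examples, regenerate adversarial examples at every step and train on those. A minimal sketch on toy data (the data, step sizes, and epsilon below are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad_w(w, X, y):
    # gradient of mean cross-entropy w.r.t. the weights
    return (sigmoid(X @ w) - y) @ X / len(y)

def fgsm(w, X, y, eps):
    # FGSM perturbation of every input row toward higher loss
    return X + eps * np.sign(np.outer(sigmoid(X @ w) - y, w))

# toy linearly separable data
X = rng.normal(size=(200, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) > 0).astype(float)

w = np.zeros(3)
for _ in range(300):
    # adversarial training: perturb inputs first,
    # then take the weight-gradient step on the perturbed batch
    X_adv = fgsm(w, X, y, eps=0.1)
    w -= 0.5 * grad_w(w, X_adv, y)

acc_clean = ((sigmoid(X @ w) > 0.5) == y).mean()
acc_adv = ((sigmoid(fgsm(w, X, y, 0.1) @ w) > 0.5) == y).mean()
print(acc_clean, acc_adv)
```

Retraining once on a static set of adversarial examples helps less, because new adversarial examples can be found for the retrained model; the inner-loop regeneration above is what makes the procedure a robust-optimization scheme.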
Michal
01:21:47
thank you!