
44:09
Are we even able to robustify a model against adversarial attacks? I was under the impression that if there is gradient descent, there will always be adversarial attacks (and that the only way to counteract them is to "break" the gradient by introducing non-differentiable operations).
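[For context on the gradient-based attacks the question refers to, here is a minimal FGSM (Fast Gradient Sign Method) sketch in PyTorch; the `model`, `images`, and `labels` objects are assumed placeholders, not anything from the lecture.]

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=8 / 255):
    """One-step FGSM: perturb the input along the sign of the loss
    gradient so that the classification loss increases."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the loss-increasing direction, then clamp to the valid pixel range.
    adv = images + epsilon * images.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

[This is exactly why "breaking" the gradient (gradient masking) is tempting as a defense: the attack needs `images.grad` to exist. Black-box and gradient-approximation attacks can still succeed, though, so masking alone is not considered a reliable defense.]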

46:26
Does massive-scale weight pruning help make the network more robust against those adversarial attacks?
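[As a concrete illustration of the kind of pruning the question means, a sketch using PyTorch's built-in magnitude pruning; the toy model and the 90% sparsity level are assumptions for illustration, and whether this improves robustness is exactly the open question being asked.]

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 90% smallest-magnitude weights in every linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.9)
        prune.remove(module, "weight")  # make the pruning permanent
```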

01:10:18
Is there an example of such an attack in the wild? For instance, on sites that are common sources of images, like Flickr.

01:15:40
Will the slides be available anywhere?

01:15:46
What are we going to do during the exercise sessions? :)

01:16:43
awesome, thanks!

01:19:21
For the exercises, please go to https://colab.research.google.com/drive/16dDBiAdCU6bw3VF1PfEuAWynxGgJQE8v?hl=en#scrollTo=d7sM-inKv-G9 and run File -> Save a copy in Drive.

01:19:23
Games are also trained on the same data they are tested on.

01:19:47
Let's assume that I trained a DNN and then found adversarial examples for it. Does it make any sense to add those examples to the training dataset and retrain the DNN?
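[The strategy the question describes is close to adversarial training. One caveat worth a sketch: retraining on a fixed set of adversarial examples tends to help little, because new adversarial examples can be found for the retrained model, so adversarial training usually regenerates them on the fly at each step. A minimal sketch of one such epoch, reusing the `fgsm_attack` helper from above and assuming hypothetical `model`, `optimizer`, and `loader` objects:]

```python
import torch.nn.functional as F

for images, labels in loader:
    # Generate adversarial examples against the *current* model...
    adv_images = fgsm_attack(model, images, labels)
    # ...and train on them (optionally mixed with the clean batch).
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
```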

01:21:47
thank you!