Monday, November 18
12:30-1:20pm
JCL 390
Convolutional Algorithms and Defense Techniques for Deep Neural Networks
Despite the recent chilly weather, the Student Seminar plows onward. This week Adam Dziedzic (@ady) will be giving an interesting talk involving neural networks, Fourier transforms, and security:
Convolutional Algorithms and Defense Techniques for Deep Neural Networks
Deep convolutional neural networks take GPU-days of computation to train on large data sets, and they need substantial memory workspace for input and intermediate data. The main bottleneck is the convolution operation, and we show how it can be accelerated in different domains. For small filters, Winograd's minimal filtering algorithm provides the fastest computation. For large filters, we introduce FFT-based convolution with compression, called band-limiting. Band-limited training controls resource usage (GPU time and memory), retains high accuracy, and provides a new perspective on adversarial robustness.
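To give a flavor of the small-filter case, here is a minimal sketch of the classic Winograd F(2,3) minimal filtering algorithm: it computes two outputs of a 3-tap filter with 4 multiplications instead of the 6 a direct computation needs. This is an illustrative toy (the function name and 1-D setting are my own), not code from the talk; real implementations tile 2-D inputs and precompute the filter transform once per layer.

```python
import numpy as np

def winograd_f23(d, g):
    """Winograd F(2,3): two outputs of a 3-tap correlation using
    4 multiplications instead of 6.

    d: 4 input values, g: 3 filter taps.
    In practice the filter-side terms (g[0]+g[1]+g[2])/2 etc. are
    precomputed once, since the filter is reused across the input.
    """
    m1 = (d[0] - d[2]) * g[0]
    m2 = (d[1] + d[2]) * (g[0] + g[1] + g[2]) / 2
    m3 = (d[2] - d[1]) * (g[0] - g[1] + g[2]) / 2
    m4 = (d[1] - d[3]) * g[2]
    # Two outputs of the sliding 3-tap correlation over d[0..3].
    return np.array([m1 + m2 + m3, m2 - m3 - m4])
```

The savings compound in 2-D: nesting F(2,3) with itself yields F(2x2, 3x3), which needs 16 multiplications where the direct method needs 36.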
The existence of adversarial examples, inputs crafted by applying small changes to correctly predicted examples in order to force mis-predictions, is one of the most significant challenges in neural network research today. The adversarial perturbations themselves are not robust: FFT-based compression of the attacked input often recovers the correct prediction. We show when the compression-based and random-perturbation defenses work and why they lead to similar gains in robustness.
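The intuition behind this kind of defense can be sketched with a low-pass filter in the 2-D Fourier domain: keep only the lowest-frequency coefficients of an input and reconstruct it, on the assumption that small adversarial perturbations live mostly in the discarded high-frequency band. This is my own illustrative sketch (function name and `keep_frac` parameter are assumptions, not the talk's API); it is not the speakers' implementation.

```python
import numpy as np

def fft_lowpass(image, keep_frac=0.25):
    """Band-limit `image`: zero all but a central low-frequency block
    of its 2-D FFT, then reconstruct. `keep_frac` is the fraction of
    each spatial dimension whose frequencies are retained."""
    F = np.fft.fftshift(np.fft.fft2(image))   # zero frequency at center
    h, w = image.shape
    rh = max(1, int(h * keep_frac / 2))
    rw = max(1, int(w * keep_frac / 2))
    mask = np.zeros((h, w))
    mask[h // 2 - rh : h // 2 + rh, w // 2 - rw : w // 2 + rw] = 1.0
    # Discard the imaginary residue left by the hard frequency cutoff.
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * mask)))
```

By linearity of the FFT, applying this to a perturbed input attenuates a high-frequency perturbation much more than it distorts a smooth underlying image, which is the mechanism the compression-based defense exploits.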
No previous background on the convolution operation or adversarial neural networks is assumed.