Activation maximization is a technique mainly used to generate adversarial examples. For now, its utility in interpreting and explaining the decisions made by a neural network is somewhat limited. I believe that with some improvements this method can shed more light on the decision-making process of neural networks.
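Below is a minimal sketch of the idea, assuming a PyTorch classifier: the input image is optimized by gradient ascent so that the logit of a chosen digit grows, while an L2 penalty (the regularization factor lambda mentioned in the file list) keeps pixel values small. The function name, the hyperparameters (`steps`, `lr`) and the 1x28x28 input shape are my assumptions for illustration; the actual procedure in AM.py may differ.

```python
import torch

def activation_maximization(model, target_digit, steps=500, lr=0.1, lam=0.01):
    """Gradient ascent on the input image to maximize the logit of target_digit."""
    model.eval()
    # Start from random noise; shape assumes a 1x28x28 MNIST input (an assumption).
    x = torch.randn(1, 1, 28, 28, requires_grad=True)
    optimizer = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        optimizer.zero_grad()
        logits = model(x)
        # Maximize the target logit; the L2 term (weight lambda) regularizes the image.
        loss = -logits[0, target_digit] + lam * x.norm() ** 2
        loss.backward()
        optimizer.step()
    return x.detach()
```

With lambda = 0.01 (the value used for the `relu x-0.01` result images), the penalty is weak enough that the optimized image is dominated by the features that excite the target class.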
Files:
- MNIST.py is a Python script containing the training algorithm and the neural network architecture (a rough sketch of such a script follows this list)
- model_MNIST.pt is the trained model
- AM.py is a Python script containing the activation maximization algorithm
- relu x-0.01 are the result images for digit x, obtained with regularization factor lambda = 0.01
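As a point of reference, here is a hypothetical sketch of the kind of classifier and training loop MNIST.py might contain. The `Net` architecture, layer sizes, optimizer, and epoch count are assumptions made for illustration; only the ReLU activation (suggested by the result file names) and the output file name `model_MNIST.pt` are taken from the repository listing.

```python
import torch
import torch.nn as nn
from torchvision import datasets, transforms

class Net(nn.Module):
    """A small fully connected MNIST classifier (hypothetical architecture)."""
    def __init__(self):
        super().__init__()
        self.layers = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Linear(256, 10),
        )

    def forward(self, x):
        return self.layers(x)

def train(epochs=5):
    data = datasets.MNIST("data", train=True, download=True,
                          transform=transforms.ToTensor())
    loader = torch.utils.data.DataLoader(data, batch_size=64, shuffle=True)
    model = Net()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
    # File name taken from the repository listing.
    torch.save(model.state_dict(), "model_MNIST.pt")
    return model
```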