README.md (6 additions, 6 deletions)
@@ -14,10 +14,10 @@ This repo contains the following CNN visualization techniques implemented in PyTorch
**Adversarial example generation** techniques have been moved to [here](https://github.com/utkuozbulak/pytorch-cnn-adversarial-attacks).
-* Fast Gradient Sign, Untargeted [11]
-* Fast Gradient Sign, Targeted [11]
-* Gradient Ascent, Adversarial Images [7]
-* Gradient Ascent, Fooling Images (Unrecognizable images predicted as classes with high confidence) [7]
+- Fast Gradient Sign, Untargeted [11]
+- Fast Gradient Sign, Targeted [11]
+- Gradient Ascent, Adversarial Images [7]
+- Gradient Ascent, Fooling Images (Unrecognizable images predicted as classes with high confidence) [7]
It will also include the following operations in the near future:
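The untargeted Fast Gradient Sign technique listed in the hunk above can be sketched as a single gradient step. This is a minimal illustration, not the repository's API: the function name and the epsilon value are assumptions, and inputs are assumed to be normalized to [0, 1].

```python
import torch
import torch.nn as nn

def fgsm_untargeted(model, image, label, epsilon=0.03):
    """Untargeted FGSM sketch: step along the sign of the loss gradient
    to push the model's prediction away from the true label."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), label)
    loss.backward()
    # Ascend the loss surface by epsilon in the direction of the gradient's sign.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0, 1).detach()  # keep pixels in a valid [0, 1] range
```

The targeted variant differs only in sign: compute the loss against the desired target class and step *down* the loss surface instead.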
@@ -207,10 +207,10 @@ The samples below show the produced image with no regularization, l1 and l2 regularization
Produced samples can be further optimized to resemble the desired target class. Some operations you can incorporate to improve quality are: blurring, clipping gradients that fall below a certain threshold, random color swaps on some parts, randomly cropping the image, and forcing the generated image to follow a path that enforces continuity.
-## Fooling Image Generation
+## Adversarial Example Generation with Fast Gradient Sign
Adversarial example generation techniques have been moved to [here](https://github.com/utkuozbulak/pytorch-cnn-adversarial-attacks).
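The quality-improving operations mentioned above slot into a plain gradient-ascent loop on a class score. The sketch below is a hedged illustration under assumed names and shapes: a toy 32x32 input, a hypothetical `generate_class_image` helper, and average pooling as a simple stand-in for Gaussian blurring.

```python
import torch
import torch.nn.functional as F

def generate_class_image(model, target_class, steps=30, lr=0.1,
                         grad_threshold=1e-4, blur_every=5):
    # Start from random noise and perform gradient ascent on the raw class score.
    img = torch.rand(1, 3, 32, 32, requires_grad=True)
    for step in range(steps):
        if img.grad is not None:
            img.grad.zero_()
        score = model(img)[0, target_class]
        score.backward()
        with torch.no_grad():
            grad = img.grad.clone()
            # Clip (zero out) gradients below a small threshold to suppress noise.
            grad[grad.abs() < grad_threshold] = 0.0
            img += lr * grad          # ascent: increase the target class score
            img.clamp_(0.0, 1.0)
            # Periodic blurring keeps high-frequency noise from dominating.
            if step % blur_every == 0:
                img.copy_(F.avg_pool2d(img, kernel_size=3, stride=1, padding=1))
    return img.detach()
```

Random cropping and color swaps would be applied inside the same `no_grad` block; each operation trades some fidelity to the raw objective for a more natural-looking result.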