
Commit c1f1838

Rename to low-shot
1 parent 8da2c67 commit c1f1838

File tree

5 files changed: +11 -11 lines changed


DiffAugment-stylegan2/README.md

Lines changed: 8 additions & 8 deletions
@@ -105,37 +105,37 @@ Here, `PATH_TO_THE_TFRECORDS_OR_LMDB_FOLDER` specifies the folder containing the
| `mit-han-lab:stylegan2-lsun-cat-1k.pkl` | LSUN-Cat (1k samples) | 182.85 |
| `mit-han-lab:DiffAugment-stylegan2-lsun-cat-1k.pkl` | LSUN-Cat (1k samples) | **42.26** |

-## 100-Shot Generation
+## Low-Shot Generation

-<img src="../imgs/100-shot-interp.jpg" width="1000px"/>
+<img src="../imgs/low-shot-interp.jpg" width="1000px"/>

-To run the 100-shot generation experiments on the 100-shot datasets:
+To run the low-shot generation experiments on the 100-shot datasets:

```bash
-python run_100_shot.py --dataset=WHICH_DATASET --num-gpus=NUM_GPUS --DiffAugment=color,translation,cutout
+python run_low_shot.py --dataset=WHICH_DATASET --num-gpus=NUM_GPUS --DiffAugment=color,translation,cutout
```

or the following command to run on the AnimalFace datasets (with a longer training length):

```bash
-python run_100_shot.py --dataset=WHICH_DATASET --num-gpus=NUM_GPUS --DiffAugment=color,translation,cutout --total-kimg=500
+python run_low_shot.py --dataset=WHICH_DATASET --num-gpus=NUM_GPUS --DiffAugment=color,translation,cutout --total-kimg=500
```

`WHICH_DATASET` specifies `100-shot-obama`, `100-shot-grumpy_cat`, `100-shot-panda`, `100-shot-bridge_of_sighs`, `100-shot-medici_fountain`, `100-shot-temple_of_heaven`, `100-shot-wuzhen`, `AnimalFace-cat`, or `AnimalFace-dog`, which will be automatically downloaded, or the path of a folder containing your own training images. `NUM_GPUS` specifies the number of GPUs to use; we recommend using 4 or 8 GPUs to replicate our results. The training typically takes several hours. Set `--DiffAugment=""` to run the baseline model. Specify `--resolution=RESOLUTION` to run at a different resolution from the default `256`. You may also fine-tune from an FFHQ pre-trained model listed above, e.g., by specifying `--resume=mit-han-lab:DiffAugment-stylegan2-ffhq.pkl --fmap-base=8192`.
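The flags described above can be combined into a single invocation. A minimal sketch of a fine-tuning run from the FFHQ pre-trained model, assuming `DATASET_DIR` is a placeholder for your own image folder; the command is only printed here (drop the `echo` to actually launch training):

```shell
# Compose a run_low_shot.py invocation that fine-tunes from the FFHQ
# pre-trained model, using the flags documented above.
# DATASET_DIR is a hypothetical placeholder; point it at your own images.
DATASET_DIR=path/to/your/images
CMD="python run_low_shot.py --dataset=$DATASET_DIR --num-gpus=4 --resolution=256 --resume=mit-han-lab:DiffAugment-stylegan2-ffhq.pkl --fmap-base=8192"
echo "$CMD"   # print the command instead of launching training
```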

### Preparing Your Own Datasets

-Our method can generate good results using a small number of samples, e.g., 100 images. You may create a new dataset at such scale easily, but note that the generated results may be sensitive to the quality of the training samples. You may wish to crop the raw images and discard some bad training samples. After putting all images into a single folder, pass it to `WHICH_DATASET` in `run_100_shot.py`, the images will be resized to the specified resolution if necessary, and then enjoy the outputs! Note that,
+Our method can generate good results using a small number of samples, e.g., 100 images. You can easily create a new dataset at such a scale, but note that the generated results may be sensitive to the quality of the training samples; you may wish to crop the raw images and discard bad training samples. After putting all images into a single folder, pass it as `WHICH_DATASET` to `run_low_shot.py`; the images will be resized to the specified resolution if necessary. Then enjoy the outputs! Note that,

- The training length (defaults to 300k images) may be increased for larger datasets, but there may be overfitting issues if the training is too long.
- The cached files will be stored in the same folder as the training images. If the training images in your folder are *changed* after a run, manually remove the cached files, `*.tfrecords` and `*.pkl`, from your image folder before rerunning.
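The cache cleanup described above amounts to deleting two file patterns. A minimal sketch; the folder and file names here are made up for the demonstration:

```shell
# Simulate an image folder that already contains cached files, then
# remove the caches (*.tfrecords and *.pkl) while keeping the training
# images, as recommended before rerunning on changed data.
IMAGE_DIR=$(mktemp -d)
touch "$IMAGE_DIR/dataset.tfrecords" "$IMAGE_DIR/stats.pkl" "$IMAGE_DIR/img_0001.jpg"
rm -f "$IMAGE_DIR"/*.tfrecords "$IMAGE_DIR"/*.pkl
ls "$IMAGE_DIR"   # only the training images remain
```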

### Pre-Trained Models and Evaluation

-To evaluate a model on a 100-shot dataset, run the following command:
+To evaluate a model on a low-shot generation dataset, run the following command:

```bash
-python run_100_shot.py --dataset=WHICH_DATASET --resume=WHICH_MODEL --eval
+python run_low_shot.py --dataset=WHICH_DATASET --resume=WHICH_MODEL --eval
```

Here, `WHICH_DATASET` specifies the folder containing the training images, or one of our pre-defined datasets, including `100-shot-obama`, `100-shot-grumpy_cat`, `100-shot-panda`, `100-shot-bridge_of_sighs`, `100-shot-medici_fountain`, `100-shot-temple_of_heaven`, `100-shot-wuzhen`, `AnimalFace-cat`, and `AnimalFace-dog`, which will be automatically downloaded. `WHICH_MODEL` specifies the path of a checkpoint, or a pre-trained model in the following list, which will be automatically downloaded:
File renamed without changes.

README.md

Lines changed: 3 additions & 3 deletions
@@ -18,9 +18,9 @@

This repository contains our implementation of Differentiable Augmentation (DiffAugment) in both PyTorch and TensorFlow. It can be used to significantly improve the data efficiency for GAN training. We have provided the TensorFlow code of [DiffAugment-stylegan2](https://github.com/mit-han-lab/data-efficient-gans/tree/master/DiffAugment-stylegan2), the PyTorch code of [DiffAugment-biggan-cifar](https://github.com/mit-han-lab/data-efficient-gans/tree/master/DiffAugment-biggan-cifar) for GPU training, and the TensorFlow code of [DiffAugment-biggan-imagenet](https://github.com/mit-han-lab/data-efficient-gans/tree/master/DiffAugment-biggan-imagenet) for TPU training.

-<img src="imgs/100-shot-comparison.jpg" width="1000px"/>
+<img src="imgs/low-shot-comparison.jpg" width="1000px"/>

-*100-shot generation without pre-training. With DiffAugment, our model can generate high-fidelity images using only 100 Obama portraits, grumpy cats, or pandas from our collected 100-shot datasets, 160 cats or 389 dogs from the AnimalFace dataset at 256×256 resolution.*
+*Low-shot generation without pre-training. With DiffAugment, our model can generate high-fidelity images using only 100 Obama portraits, grumpy cats, or pandas from our collected 100-shot datasets, or 160 cats or 389 dogs from the AnimalFace dataset, at 256×256 resolution.*

<img src="imgs/cifar10-results.jpg" width="1000px"/>

@@ -49,7 +49,7 @@ python generate_gif.py -r mit-han-lab:DiffAugment-stylegan2-100-shot-obama.pkl -
or to train a new model:

```bash
-python run_100_shot.py --dataset=100-shot-obama --num-gpus=4
+python run_low_shot.py --dataset=100-shot-obama --num-gpus=4
```

You may also try out `100-shot-grumpy_cat`, `100-shot-panda`, `100-shot-bridge_of_sighs`, `100-shot-medici_fountain`, `100-shot-temple_of_heaven`, `100-shot-wuzhen`, or the folder containing your own training images. Please refer to the [DiffAugment-stylegan2](https://github.com/mit-han-lab/data-efficient-gans/tree/master/DiffAugment-stylegan2#100-shot-generation) README for the dependencies and details.
File renamed without changes.
File renamed without changes.
