
Commit f346fcc: Update README.md
1 parent: d36c17e

1 file changed: +3, -3 lines

README.md

Lines changed: 3 additions & 3 deletions
@@ -44,7 +44,7 @@ Decoder:
 - c1_bilinear (1 conv + bilinear upsample)
 - c1_bilinear_deepsup (c1_blinear + deep supervision trick)
 - ppm_bilinear (pyramid pooling + bilinear upsample, see PSPNet paper for details)
-- ppm_bilinear_deepsup (psp_bilinear + deep supervision trick)
+- ppm_bilinear_deepsup (ppm_bilinear + deep supervision trick)
 
 ***Coming soon***:
 - UPerNet based on Feature Pyramid Network (FPN) and Pyramid Pooling Module (PPM), with down-sampling rate of 4, 8 and 16. It doesn't need dilated convolution, a operator that is time-and-memory consuming. *Without bells and whistles*, it is comparable or even better compared with PSPNet, while requires much shorter training time and less GPU memory.
@@ -66,7 +66,7 @@ IMPORTANT: We use our self-trained base model on ImageNet. The model takes the i
 <td>27.5 hours</td>
 </tr>
 <tr>
-<td rowspan="2">ResNet-50_dilated8 + psp_bilinear_deepsup</td>
+<td rowspan="2">ResNet-50_dilated8 + ppm_bilinear_deepsup</td>
 <td>No</td><td>41.26</td><td>79.73</td><td>60.50</td>
 <td rowspan="2">33.4 hours</td>
 </tr>
@@ -110,7 +110,7 @@ The code is developed under the following configurations.
 chmod +x download_ADE20K.sh
 ./download_ADE20K.sh
 ```
-2. Train a network (default: ResNet-50_dilated8 + psp_bilinear_deepsup). During training, checkpoints will be saved in folder ```ckpt```.
+2. Train a network (default: ResNet-50_dilated8 + ppm_bilinear_deepsup). During training, checkpoints will be saved in folder ```ckpt```.
 ```bash
 python3 train.py --num_gpus NUM_GPUS
 ```
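The "deep supervision trick" referenced throughout the renamed options amounts to adding a down-weighted auxiliary loss during training. A minimal sketch of how such a loss combination typically looks is below; the 0.4 weight follows the PSPNet paper, and the names (`segmentation_loss`, `seg_logits`, `aux_logits`, `deep_sup_scale`) are assumptions for illustration, not the repository's actual train.py.

```python
# Illustrative sketch of a deep-supervision loss; not the repo's actual training code.
import torch.nn.functional as F

def segmentation_loss(seg_logits, aux_logits, target, deep_sup_scale=0.4):
    """Main cross-entropy loss plus a down-weighted auxiliary loss.

    seg_logits: output of the main decoder head, shape (N, C, H, W)
    aux_logits: output of the auxiliary head on intermediate features, shape (N, C, H, W)
    target:     ground-truth class indices, shape (N, H, W)
    """
    loss_main = F.cross_entropy(seg_logits, target, ignore_index=-1)
    loss_aux = F.cross_entropy(aux_logits, target, ignore_index=-1)
    # The auxiliary term regularizes the encoder during training;
    # at inference time only the main head is used.
    return loss_main + deep_sup_scale * loss_aux
```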
