This is a PyTorch implementation of semantic segmentation models on the MIT ADE20K scene parsing dataset.
ADE20K is the largest open source dataset for semantic segmentation and scene parsing, released by the MIT Computer Vision team. Follow the link below to find the repository for our dataset and implementations on Caffe and Torch7:
https://github.com/CSAILVision/sceneparsing
If you simply want to play with our demo, please try this link: http://scenesegmentation.csail.mit.edu. You can upload your own photo and parse it!
- We use configuration files to store most options that were previously passed through the argument parser; the definitions of the options are detailed in ```config/defaults.py``` (a minimal configuration sketch follows this list).
- The HRNet model is now supported.
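The snippet below is only a hedged sketch of what this style of configuration might look like, assuming a yacs-style `CfgNode`; the option names, values, and the YAML file name are illustrative, and ```config/defaults.py``` in the repository remains the authoritative reference.

```python
# Hypothetical sketch of a yacs-style configuration, in the spirit of config/defaults.py.
# Option names and values here are illustrative, not the repository's actual definitions.
from yacs.config import CfgNode as CN

_C = CN()

_C.MODEL = CN()
_C.MODEL.arch_encoder = "resnet50dilated"   # which encoder to build
_C.MODEL.arch_decoder = "ppm_deepsup"       # which decoder to build

_C.TRAIN = CN()
_C.TRAIN.batch_size_per_gpu = 2
_C.TRAIN.lr_encoder = 0.02

def get_cfg_defaults():
    # Hand out a copy so callers cannot mutate the module-level defaults.
    return _C.clone()

if __name__ == "__main__":
    cfg = get_cfg_defaults()
    # Merge an experiment file on top of the defaults (hypothetical file name),
    # then freeze it to catch accidental modifications later on.
    cfg.merge_from_file("config/my-experiment.yaml")
    cfg.freeze()
    print(cfg.MODEL.arch_encoder)
```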
## Highlights
For the task of semantic segmentation, it is good to keep the aspect ratio of images during training.
<sup>*Now the batch size of a dataloader always equals the number of GPUs*; each element of the batch will be sent to one GPU. It is also compatible with multi-processing. Note that the file index for the multi-processing dataloader is stored on the master process, which contradicts our goal that each worker maintains its own file list. So we use a trick: although the master process still passes an index to the dataloader's `__getitem__` function, we simply ignore it and return a random batch dict. Also, *the multiple workers forked by the dataloader all share the same seed*, so if we used the above trick directly, the workers would yield exactly the same data. Therefore, we add one line of code that sets the default seed for `numpy.random` before activating multiple workers in the dataloader.</sup>
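To make the trick concrete, here is a minimal, self-contained sketch (not the repository's actual `dataset.py`; all class, file, and key names are illustrative): `__getitem__` ignores the index it is handed and assembles a random per-GPU batch dict, the dataloader's `batch_size` equals the number of GPUs, and a `worker_init_fn` re-seeds `numpy.random` so that forked workers do not produce identical batches.

```python
import numpy as np
import torch
from torch.utils.data import Dataset, DataLoader

class PerGPUBatchDataset(Dataset):
    """Each item returned by __getitem__ is a full batch dict destined for one GPU."""

    def __init__(self, image_paths, batch_per_gpu=2, epoch_iters=5000):
        self.image_paths = image_paths          # hypothetical list of image files
        self.batch_per_gpu = batch_per_gpu
        self.epoch_iters = epoch_iters

    def __len__(self):
        # The "length" is the number of iterations per epoch, not the number of images.
        return self.epoch_iters

    def __getitem__(self, index):
        # The index passed in by the master process is deliberately ignored;
        # the worker assembles its own random batch instead.
        picks = np.random.randint(0, len(self.image_paths), self.batch_per_gpu)
        imgs = torch.randn(self.batch_per_gpu, 3, 512, 512)   # stand-in for loaded images
        return {"img_data": imgs, "picked_files": [self.image_paths[i] for i in picks]}

def seed_numpy_per_worker(worker_id):
    # Forked workers inherit the parent's numpy RNG state; give each its own seed.
    np.random.seed((torch.initial_seed() + worker_id) % 2**32)

num_gpus = 2
loader = DataLoader(
    PerGPUBatchDataset(["img_%03d.jpg" % i for i in range(100)]),
    batch_size=num_gpus,              # one element of the batch per GPU
    num_workers=2,
    shuffle=False,
    collate_fn=lambda batch: batch,   # keep the list of per-GPU dicts as-is
    worker_init_fn=seed_numpy_per_worker,
)
```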
### State-of-the-Art models
- **PSPNet** is a scene parsing network that aggregates global context with a Pyramid Pooling Module (PPM); a minimal PPM sketch follows this list. It was the winning model of the ILSVRC'16 MIT Scene Parsing Challenge. Please refer to [https://arxiv.org/abs/1612.01105](https://arxiv.org/abs/1612.01105) for details.
- **UPerNet** is a model based on a Feature Pyramid Network (FPN) and a Pyramid Pooling Module (PPM). It does not need dilated convolution, an operator that is time- and memory-consuming. *Without bells and whistles*, it is comparable to or even better than PSPNet, while requiring much shorter training time and less GPU memory. Please refer to [https://arxiv.org/abs/1807.10221](https://arxiv.org/abs/1807.10221) for details.
- **HRNet** is a recently proposed model that retains high-resolution representations throughout the network, without the traditional bottleneck design. It achieves state-of-the-art performance on a series of pixel-labeling tasks. Please refer to [https://arxiv.org/abs/1904.04514](https://arxiv.org/abs/1904.04514) for details.
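To give a concrete picture of the Pyramid Pooling Module referenced above, here is a minimal PyTorch sketch; it is not the implementation used in this repository, and the channel dimensions and bin sizes are only illustrative. The feature map is average-pooled at several bin sizes, each pooled map is projected with a 1x1 convolution, upsampled back to the input resolution, and concatenated with the original features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PPM(nn.Module):
    """Minimal Pyramid Pooling Module sketch: pool at several bin sizes,
    project each pooled map, upsample back, and concatenate with the input."""

    def __init__(self, in_dim=2048, reduction_dim=512, bins=(1, 2, 3, 6)):
        super().__init__()
        self.stages = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),
                nn.Conv2d(in_dim, reduction_dim, kernel_size=1, bias=False),
                nn.BatchNorm2d(reduction_dim),
                nn.ReLU(inplace=True),
            )
            for b in bins
        ])

    def forward(self, x):
        h, w = x.shape[2:]
        pooled = [
            F.interpolate(stage(x), size=(h, w), mode="bilinear", align_corners=False)
            for stage in self.stages
        ]
        return torch.cat([x] + pooled, dim=1)   # local features plus multi-scale context

ppm = PPM().eval()                               # eval mode so BatchNorm works with batch size 1
feats = torch.randn(1, 2048, 32, 32)             # e.g. the last feature map of an encoder
print(ppm(feats).shape)                          # torch.Size([1, 4096, 32, 32])
```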
## Supported models
We split our models into encoder and decoder, where encoders are usually modified directly from classification networks, and decoders consist of final convolutions and upsampling. We have provided some pre-configured models in the ```config``` folder.
Encoder:
- MobileNetV2dilated
- ResNet18/ResNet18dilated
- ResNet50/ResNet50dilated
- ResNet101/ResNet101dilated
- HRNet (HRNetV2-W48)
Decoder:
- C1 (one convolution module)
- C1_deepsup (C1 + deep supervision trick)
- PPM (Pyramid Pooling Module, see the [PSPNet](https://hszhao.github.io/projects/pspnet) paper for details)
- PPM_deepsup (PPM + deep supervision trick)
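As a rough illustration of this encoder/decoder split (only a sketch with made-up stand-in modules, not the repository's actual model builder), any backbone that produces a feature map can be paired with any decoder that maps those features to per-pixel class scores, which are then upsampled to the input resolution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SegmentationModule(nn.Module):
    """Sketch of the encoder/decoder split used in this kind of framework."""

    def __init__(self, encoder, decoder):
        super().__init__()
        self.encoder = encoder
        self.decoder = decoder

    def forward(self, img):
        feats = self.encoder(img)                 # backbone features at reduced resolution
        scores = self.decoder(feats)              # per-pixel class scores
        # Upsample the scores back to the input resolution.
        return F.interpolate(scores, size=img.shape[2:], mode="bilinear", align_corners=False)

# Stand-ins: a tiny strided backbone and a C1-like decoder (one conv module + 1x1 classifier).
encoder = nn.Sequential(nn.Conv2d(3, 64, 3, stride=8, padding=1), nn.ReLU(inplace=True))
decoder = nn.Sequential(nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
                        nn.Conv2d(64, 150, 1))   # 150 ADE20K classes

model = SegmentationModule(encoder, decoder)
out = model(torch.randn(1, 3, 256, 256))
print(out.shape)                                  # torch.Size([1, 150, 256, 256])
```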
IMPORTANT: We use our self-trained base model on ImageNet.
The training is benchmarked on a server with 8 NVIDIA Pascal Titan Xp GPUs (12GB GPU memory); the inference speed is benchmarked on a single NVIDIA Pascal Titan Xp GPU, without visualization.
## Environment
The code is developed under the following configurations.