Commit 6cb7fe0

Imagenet-s dataset for large-scale semantic segmentation (open-mmlab#2480)

## Motivation

Based on the ImageNet dataset, we propose the ImageNet-S dataset, which has 1.2 million training images and 50k high-quality semantic segmentation annotations, to support unsupervised/semi-supervised semantic segmentation on ImageNet.

Paper: Large-scale Unsupervised Semantic Segmentation (TPAMI 2022). [Paper link](https://arxiv.org/abs/2106.03149)

## Modification

1. Support the ImageNet-S dataset and its configuration.
2. Add the dataset preparation steps to the documentation.
1 parent ba7608c commit 6cb7fe0

File tree

6 files changed: +1165 −1 lines changed

README.md

Lines changed: 1 addition & 0 deletions

```diff
@@ -188,6 +188,7 @@ Supported datasets:
 - [x] [Vaihingen](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/dataset_prepare.md#isprs-vaihingen)
 - [x] [iSAID](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/dataset_prepare.md#isaid)
 - [x] [High quality synthetic face occlusion](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/dataset_prepare.md#delving-into-high-quality-synthetic-face-occlusion-segmentation-datasets)
+- [x] [ImageNetS](https://github.com/open-mmlab/mmsegmentation/blob/master/docs/en/dataset_prepare.md#imagenets)

 ## FAQ
```

Lines changed: 61 additions & 0 deletions (new dataset config file)

```python
# dataset settings
dataset_type = 'ImageNetSDataset'
subset = 919
data_root = 'data/ImageNetS/ImageNetS919'
img_norm_cfg = dict(
    mean=[123.675, 116.28, 103.53], std=[58.395, 57.12, 57.375], to_rgb=True)
crop_size = (224, 224)
train_pipeline = [
    dict(type='LoadImageNetSImageFromFile', downsample_large_image=True),
    dict(type='LoadImageNetSAnnotations', reduce_zero_label=False),
    dict(type='Resize', img_scale=(1024, 256), ratio_range=(0.5, 2.0)),
    dict(
        type='RandomCrop',
        crop_size=crop_size,
        cat_max_ratio=0.75,
        ignore_index=1000),
    dict(type='RandomFlip', prob=0.5),
    dict(type='PhotoMetricDistortion'),
    dict(type='Normalize', **img_norm_cfg),
    dict(type='Pad', size=crop_size, pad_val=0, seg_pad_val=1000),
    dict(type='DefaultFormatBundle'),
    dict(type='Collect', keys=['img', 'gt_semantic_seg']),
]
test_pipeline = [
    dict(type='LoadImageNetSImageFromFile', downsample_large_image=True),
    dict(
        type='MultiScaleFlipAug',
        img_scale=(1024, 256),
        flip=False,
        transforms=[
            dict(type='Resize', keep_ratio=True),
            dict(type='RandomFlip'),
            dict(type='Normalize', **img_norm_cfg),
            dict(type='ImageToTensor', keys=['img']),
            dict(type='Collect', keys=['img']),
        ])
]
data = dict(
    samples_per_gpu=4,
    workers_per_gpu=4,
    train=dict(
        type=dataset_type,
        subset=subset,
        data_root=data_root,
        img_dir='train-semi',
        ann_dir='train-semi-segmentation',
        pipeline=train_pipeline),
    val=dict(
        type=dataset_type,
        subset=subset,
        data_root=data_root,
        img_dir='validation',
        ann_dir='validation-segmentation',
        pipeline=test_pipeline),
    test=dict(
        type=dataset_type,
        subset=subset,
        data_root=data_root,
        img_dir='validation',
        ann_dir='validation-segmentation',
        pipeline=test_pipeline))
```
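The config above pads crops to `crop_size` with `seg_pad_val=1000` and passes `ignore_index=1000` to `RandomCrop`, so label 1000 marks pixels that should not contribute to losses or metrics. A minimal pure-Python sketch (toy values, not mmsegmentation code) of how such an ignore label is excluded from a pixel metric:

```python
# Illustration of the ignore-label convention (label 1000 = padding/ignored).
# Toy 2-D label maps as lists of lists; real pipelines use full-size arrays.
IGNORE_INDEX = 1000

def pad_label_map(labels, target_h, target_w, seg_pad_val=IGNORE_INDEX):
    """Pad a 2-D label map to (target_h, target_w) with the ignore value."""
    padded = [row + [seg_pad_val] * (target_w - len(row)) for row in labels]
    while len(padded) < target_h:
        padded.append([seg_pad_val] * target_w)
    return padded

def pixel_accuracy(pred, gt, ignore_index=IGNORE_INDEX):
    """Accuracy over valid (non-ignored) pixels only."""
    correct = valid = 0
    for prow, grow in zip(pred, gt):
        for p, g in zip(prow, grow):
            if g == ignore_index:
                continue  # padded/ignored pixel: skip entirely
            valid += 1
            correct += (p == g)
    return correct / valid

gt = pad_label_map([[1, 2], [2, 2]], 3, 3)  # 2x2 labels padded to 3x3
pred = [[1, 2, 0], [2, 1, 0], [0, 0, 0]]
acc = pixel_accuracy(pred, gt)  # 3 of the 4 valid pixels are correct
```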

docs/en/dataset_prepare.md

Lines changed: 47 additions & 0 deletions

````diff
@@ -155,6 +155,25 @@ mmsegmentation
 │   │   │   ├── img
 │   │   │   ├── mask
 │   │   │   ├── split
+│   ├── ImageNetS
+│   │   ├── ImageNetS919
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS300
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS50
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
 ```

 ### Cityscapes
@@ -580,3 +599,31 @@ OCCLUDER_DATASET.IMG_DIR "path/to/jw93/mmsegmentation/data_materials/DTD/images"
 ```python

 ```
+
+### ImageNetS
+
+The ImageNet-S dataset is for [Large-scale unsupervised/semi-supervised semantic segmentation](https://arxiv.org/abs/2106.03149).
+
+The images and annotations are available on [ImageNet-S](https://github.com/LUSSeg/ImageNet-S#imagenet-s-dataset-preparation).
+
+```
+│   ├── ImageNetS
+│   │   ├── ImageNetS919
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS300
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS50
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+```
````
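After downloading, the layout documented above can be sanity-checked with a short script. `missing_dirs` is a hypothetical helper, not part of mmsegmentation; the demo builds a temporary tree that contains only the ImageNetS50 subset:

```python
# Hypothetical helper: report which expected ImageNet-S sub-directories are
# missing under a data root, following the documented layout.
import tempfile
from pathlib import Path

SUBSETS = ['ImageNetS919', 'ImageNetS300', 'ImageNetS50']
SPLITS = ['train-semi', 'train-semi-segmentation',
          'validation', 'validation-segmentation', 'test']

def missing_dirs(data_root):
    """Return paths of expected subset/split directories that do not exist."""
    root = Path(data_root) / 'ImageNetS'
    return [str(root / subset / split)
            for subset in SUBSETS for split in SPLITS
            if not (root / subset / split).is_dir()]

# Demo on a temporary tree containing only the ImageNetS50 subset:
demo_root = Path(tempfile.mkdtemp())
for split in SPLITS:
    (demo_root / 'ImageNetS' / 'ImageNetS50' / split).mkdir(parents=True)
missing = missing_dirs(demo_root)  # the ImageNetS919/ImageNetS300 dirs
```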

docs/zh_cn/dataset_prepare.md

Lines changed: 47 additions & 0 deletions

````diff
@@ -119,6 +119,25 @@ mmsegmentation
 │   │   ├── ann_dir
 │   │   │   ├── train
 │   │   │   ├── val
+│   ├── ImageNetS
+│   │   ├── ImageNetS919
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS300
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS50
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
 ```

 ### Cityscapes
@@ -317,3 +336,31 @@ python tools/convert_datasets/isaid.py /path/to/iSAID
 ```

 With our default configuration (`patch_width`=896, `patch_height`=896, `overlap_area`=384), a training set of 33978 images and a validation set of 11644 images will be generated.
+
+### ImageNetS
+
+ImageNet-S is a dataset for the [large-scale unsupervised/semi-supervised semantic segmentation](https://arxiv.org/abs/2106.03149) task.
+
+The ImageNet-S dataset is available at [ImageNet-S](https://github.com/LUSSeg/ImageNet-S#imagenet-s-dataset-preparation).
+
+```
+│   ├── ImageNetS
+│   │   ├── ImageNetS919
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS300
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+│   │   ├── ImageNetS50
+│   │   │   ├── train-semi
+│   │   │   ├── train-semi-segmentation
+│   │   │   ├── validation
+│   │   │   ├── validation-segmentation
+│   │   │   ├── test
+```
````

mmseg/datasets/__init__.py

Lines changed: 5 additions & 1 deletion

```diff
@@ -11,6 +11,8 @@
 from .drive import DRIVEDataset
 from .face import FaceOccludedDataset
 from .hrf import HRFDataset
+from .imagenets import (ImageNetSDataset, LoadImageNetSAnnotations,
+                        LoadImageNetSImageFromFile)
 from .isaid import iSAIDDataset
 from .isprs import ISPRSDataset
 from .loveda import LoveDADataset
@@ -27,5 +29,7 @@
     'PascalContextDataset59', 'ChaseDB1Dataset', 'DRIVEDataset', 'HRFDataset',
     'STAREDataset', 'DarkZurichDataset', 'NightDrivingDataset',
     'COCOStuffDataset', 'LoveDADataset', 'MultiImageMixDataset',
-    'iSAIDDataset', 'ISPRSDataset', 'PotsdamDataset', 'FaceOccludedDataset'
+    'iSAIDDataset', 'ISPRSDataset', 'PotsdamDataset', 'FaceOccludedDataset',
+    'ImageNetSDataset', 'LoadImageNetSAnnotations',
+    'LoadImageNetSImageFromFile'
 ]
```
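Importing `ImageNetSDataset` in `mmseg/datasets/__init__.py` matters because importing the module is what registers the class, letting configs refer to it by the string `type='ImageNetSDataset'`. A minimal sketch of that registry pattern (illustrative only; the real mechanism lives in mmcv's `Registry`):

```python
# Sketch of an mmcv-style registry: classes register themselves at import
# time via a decorator, and configs are resolved to classes by name.
class Registry:
    def __init__(self, name):
        self.name = name
        self._modules = {}

    def register_module(self, cls):
        # Used as a decorator; runs when the defining module is imported.
        self._modules[cls.__name__] = cls
        return cls

    def build(self, cfg):
        cfg = dict(cfg)  # copy so pop() does not mutate the caller's config
        cls = self._modules[cfg.pop('type')]
        return cls(**cfg)

DATASETS = Registry('dataset')

@DATASETS.register_module
class ImageNetSDataset:
    def __init__(self, subset, data_root):
        self.subset = subset
        self.data_root = data_root

# A config dict can now reference the class by name, as in the file above:
ds = DATASETS.build(dict(type='ImageNetSDataset', subset=919,
                         data_root='data/ImageNetS/ImageNetS919'))
```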
