Commit 8f33d68

[Feature] Provide URLs of STDC, Segmenter and Twins pretrained models (open-mmlab#1357)

1 parent e8cc322, commit 8f33d68
21 files changed (+61, -38 lines)

configs/_base_/models/segmenter_vit-b16_mask.py
Lines changed: 2 additions & 1 deletion

```diff
@@ -1,8 +1,9 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_base_p16_384_20220308-96dfe169.pth'  # noqa
 # model settings
 backbone_norm_cfg = dict(type='LN', eps=1e-6, requires_grad=True)
 model = dict(
     type='EncoderDecoder',
-    pretrained='pretrain/vit_base_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(
         type='VisionTransformer',
         img_size=(512, 512),
```
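
Since the config now points at a public URL instead of a local `pretrain/` file, the weights can be fetched on demand. As a minimal sketch, the published checkpoint can be loaded directly by URL; this uses `torch.hub` for illustration, while within MMSegmentation the download is expected to happen automatically once `pretrained` is a URL:

```python
# Minimal sketch: pull the published Segmenter ViT-B checkpoint by URL.
import torch

CKPT_URL = ('https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/'
            'segmenter/vit_base_p16_384_20220308-96dfe169.pth')

ckpt = torch.hub.load_state_dict_from_url(CKPT_URL, map_location='cpu')
# Depending on how the checkpoint was exported, the tensors may sit at the
# top level or under a 'state_dict' key.
state_dict = ckpt.get('state_dict', ckpt)
print(len(state_dict), 'parameter tensors loaded')
```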

configs/_base_/models/twins_pcpvt-s_fpn.py
Lines changed: 3 additions & 2 deletions

```diff
@@ -1,12 +1,13 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_small_20220308-e638c41c.pth'  # noqa
+
 # model settings
 backbone_norm_cfg = dict(type='LN')
 norm_cfg = dict(type='SyncBN', requires_grad=True)
 model = dict(
     type='EncoderDecoder',
     backbone=dict(
         type='PCPVT',
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_small.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         in_channels=3,
         embed_dims=[64, 128, 320, 512],
         num_heads=[1, 2, 5, 8],
```

configs/_base_/models/twins_pcpvt-s_upernet.py
Lines changed: 3 additions & 2 deletions

```diff
@@ -1,12 +1,13 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/twins/pcpvt_small_20220308-e638c41c.pth'  # noqa
+
 # model settings
 backbone_norm_cfg = dict(type='LN')
 norm_cfg = dict(type='SyncBN', requires_grad=True)
 model = dict(
     type='EncoderDecoder',
     backbone=dict(
         type='PCPVT',
-        init_cfg=dict(
-            type='Pretrained', checkpoint='pretrained/pcpvt_small.pth'),
+        init_cfg=dict(type='Pretrained', checkpoint=checkpoint),
         in_channels=3,
         embed_dims=[64, 128, 320, 512],
         num_heads=[1, 2, 5, 8],
```
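
Both Twins configs now delegate to MMCV's `Pretrained` initializer, which accepts an http(s) URL wherever a local path was accepted before. A hedged sketch of the call this boils down to; the `nn.Linear` module is a stand-in so the snippet runs without building the real PCPVT backbone:

```python
# Sketch: init_cfg=dict(type='Pretrained', checkpoint=<url>) ultimately goes
# through mmcv.runner.load_checkpoint, which downloads http(s) checkpoints.
import torch.nn as nn
from mmcv.runner import load_checkpoint

CKPT_URL = ('https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/'
            'twins/pcpvt_small_20220308-e638c41c.pth')

module = nn.Linear(4, 4)  # stand-in; a real run would build the PCPVT backbone
# strict=False mirrors backbone loading: mismatched keys are reported, not fatal.
load_checkpoint(module, CKPT_URL, map_location='cpu', strict=False)
```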

configs/segmenter/README.md
Lines changed: 2 additions & 2 deletions

```diff
@@ -33,9 +33,9 @@ Image segmentation is often ambiguous at the level of individual image patches a
 
 ## Usage
 
-To use the pre-trained ViT model from [Segmenter](https://github.com/rstrudel/segmenter), it is necessary to convert keys.
+We provide pretrained models converted from [ViT-AugReg](https://github.com/rwightman/pytorch-image-models/blob/f55c22bebf9d8afc449d317a723231ef72e0d662/timm/models/vision_transformer.py#L54-L106).
 
-We provide a script [`vitjax2mmseg.py`](../../tools/model_converters/vitjax2mmseg.py) in the tools directory to convert the key of models from [ViT-AugReg](https://github.com/rwightman/pytorch-image-models/blob/f55c22bebf9d8afc449d317a723231ef72e0d662/timm/models/vision_transformer.py#L54-L106) to MMSegmentation style.
+If you want to convert the keys yourself to use the pre-trained ViT model from [Segmenter](https://github.com/rstrudel/segmenter), we also provide a script [`vitjax2mmseg.py`](../../tools/model_converters/vitjax2mmseg.py) in the tools directory that converts the keys of [ViT-AugReg](https://github.com/rwightman/pytorch-image-models/blob/f55c22bebf9d8afc449d317a723231ef72e0d662/timm/models/vision_transformer.py#L54-L106) models to MMSegmentation style.
 
 ```shell
 python tools/model_converters/vitjax2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH}
```
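
What the converter does is, at heart, a rename pass over the checkpoint's state dict. An illustrative sketch of that pattern follows; the prefix table is hypothetical, and the real mapping lives in `vitjax2mmseg.py`:

```python
# Illustrative key-remapping pass in the spirit of vitjax2mmseg.py.
# The RENAME table below is hypothetical; consult the script for the real map.
import torch

RENAME = {
    'blocks.': 'layers.',                            # hypothetical pair
    'patch_embed.proj.': 'patch_embed.projection.',  # hypothetical pair
}

def convert_keys(src):
    dst = {}
    for key, tensor in src.items():
        for old, new in RENAME.items():
            if key.startswith(old):
                key = new + key[len(old):]
                break
        dst[key] = tensor
    return dst

# Usage: torch.save(convert_keys(torch.load('vit.pth')), 'vit_mmseg.pth')
```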

configs/segmenter/segmenter_vit-l_mask_8x1_512x512_160k_ade20k.py
Lines changed: 2 additions & 1 deletion

```diff
@@ -3,9 +3,10 @@
     '../_base_/datasets/ade20k.py', '../_base_/default_runtime.py',
     '../_base_/schedules/schedule_160k.py'
 ]
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_large_p16_384_20220308-d4efb41d.pth'  # noqa
 
 model = dict(
-    pretrained='pretrain/vit_large_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(
         type='VisionTransformer',
         img_size=(640, 640),
```

configs/segmenter/segmenter_vit-s_mask_8x1_512x512_160k_ade20k.py
Lines changed: 3 additions & 1 deletion

```diff
@@ -4,9 +4,11 @@
     '../_base_/schedules/schedule_160k.py'
 ]
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_small_p16_384_20220308-410f6037.pth'  # noqa
+
 backbone_norm_cfg = dict(type='LN', eps=1e-6, requires_grad=True)
 model = dict(
-    pretrained='pretrain/vit_small_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(
         img_size=(512, 512),
         embed_dims=384,
```

configs/segmenter/segmenter_vit-t_mask_8x1_512x512_160k_ade20k.py
Lines changed: 3 additions & 1 deletion

```diff
@@ -4,8 +4,10 @@
     '../_base_/schedules/schedule_160k.py'
 ]
 
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/segmenter/vit_tiny_p16_384_20220308-cce8c795.pth'  # noqa
+
 model = dict(
-    pretrained='pretrain/vit_tiny_p16_384.pth',
+    pretrained=checkpoint,
     backbone=dict(embed_dims=192, num_heads=3),
     decode_head=dict(
         type='SegmenterMaskTransformerHead',
```
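
Each variant config above overrides only what differs from its `_base_` files; the checkpoint URL and backbone size are merged in when the config is loaded. One way to inspect the merged result (assumes an mmsegmentation checkout with mmcv 1.x installed):

```python
# Inspect the merged config for the ViT-T Segmenter variant above.
from mmcv import Config

cfg = Config.fromfile(
    'configs/segmenter/segmenter_vit-t_mask_8x1_512x512_160k_ade20k.py')
print(cfg.model.pretrained)           # the checkpoint URL added in this commit
print(cfg.model.backbone.embed_dims)  # 192 for the tiny variant
```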

configs/stdc/README.md
Lines changed: 2 additions & 2 deletions

```diff
@@ -35,9 +35,9 @@ BiSeNet has been proved to be a popular two-stream network for real-time segment
 
 ## Usage
 
-To use original repositories' [ImageNet Pretrained STDCNet Weights](https://drive.google.com/drive/folders/1wROFwRt8qWHD4jSo8Zu1gp1d6oYJ3ns1) , it is necessary to convert keys.
+We provide [ImageNet-pretrained STDCNet weights](https://drive.google.com/drive/folders/1wROFwRt8qWHD4jSo8Zu1gp1d6oYJ3ns1) converted from the [official repo](https://github.com/MichaelFan01/STDC-Seg).
 
-We provide a script [`stdc2mmseg.py`](../../tools/model_converters/stdc2mmseg.py) in the tools directory to convert the key of models from [the official repo](https://github.com/MichaelFan01/STDC-Seg) to MMSegmentation style.
+If you want to convert the keys yourself to use the official repository's pretrained models, we also provide a script [`stdc2mmseg.py`](../../tools/model_converters/stdc2mmseg.py) in the tools directory that converts the keys of models from [the official repo](https://github.com/MichaelFan01/STDC-Seg) to MMSegmentation style.
 
 ```shell
 python tools/model_converters/stdc2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH} ${STDC_TYPE}
```
configs/stdc/stdc1_in1k-pre_512x1024_80k_cityscapes.py
Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/stdc/stdc1_20220308-5368626c.pth'  # noqa
 _base_ = './stdc1_512x1024_80k_cityscapes.py'
 model = dict(
     backbone=dict(
         backbone_cfg=dict(
-            init_cfg=dict(
-                type='Pretrained', checkpoint='./pretrained/stdc1.pth'))))
+            init_cfg=dict(type='Pretrained', checkpoint=checkpoint))))
```
configs/stdc/stdc2_in1k-pre_512x1024_80k_cityscapes.py
Lines changed: 2 additions & 2 deletions

```diff
@@ -1,6 +1,6 @@
+checkpoint = 'https://download.openmmlab.com/mmsegmentation/v0.5/pretrain/stdc/stdc2_20220308-7dbd9127.pth'  # noqa
 _base_ = './stdc2_512x1024_80k_cityscapes.py'
 model = dict(
     backbone=dict(
         backbone_cfg=dict(
-            init_cfg=dict(
-                type='Pretrained', checkpoint='./pretrained/stdc2.pth'))))
+            init_cfg=dict(type='Pretrained', checkpoint=checkpoint))))
```
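
In the STDC configs the pretrained network sits one level deeper: the STDC net is wrapped in `backbone_cfg` inside the context-path backbone, so the override must target the inner `init_cfg`. A quick check that the merge lands where intended (config path as reconstructed above):

```python
# Verify the nested override reaches the inner STDC backbone's init_cfg.
# Assumes an mmsegmentation checkout; config path as reconstructed above.
from mmcv import Config

cfg = Config.fromfile('configs/stdc/stdc1_in1k-pre_512x1024_80k_cityscapes.py')
# The checkpoint URL lives two dict levels down, under backbone.backbone_cfg.
print(cfg.model.backbone.backbone_cfg.init_cfg.checkpoint)
```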
