
Commit 793e082

littleSunlxy, linxinyang, MengzhangLI, and Junjun2016 authored
[Feature] Support Twins (NeurIPS2021) (open-mmlab#989)
* numerous intermediate debug/work-in-progress commits (squashed)
* use config small/base/large
* fix pre-commit error
* fix unittest error
* fix config errors
* fix twins2mmseg bug
* fix init_weights() in twins.py
* fix comments
* fix unit test coverage in TwinsPR
* Add Twins README
* twins refactor
* delete init_cfg in FFN
* Update mmseg/models/backbones/twins.py
* add conference name

Co-authored-by: linxinyang <[email protected]>
Co-authored-by: MengzhangLI <[email protected]>
Co-authored-by: Junjun2016 <[email protected]>
1 parent 14a64bc commit 793e082

23 files changed, +1473 −5 lines changed

README.md

Lines changed: 3 additions & 2 deletions

```diff
@@ -65,7 +65,8 @@ Supported backbones:
 - [x] [MobileNetV2 (CVPR'2018)](configs/mobilenet_v2)
 - [x] [MobileNetV3 (ICCV'2019)](configs/mobilenet_v3)
 - [x] [Vision Transformer (ICLR'2021)](configs/vit)
-- [x] [Swin Transformer (ArXiv'2021)](configs/swin)
+- [x] [Swin Transformer (ICCV'2021)](configs/swin)
+- [x] [Twins (NeurIPS'2021)](configs/twins)

 Supported methods:

@@ -99,7 +100,7 @@ Supported methods:
 - [x] [BiSeNetV2 (IJCV'2021)](configs/bisenetv2)
 - [x] [SETR (CVPR'2021)](configs/setr)
 - [x] [DPT (ArXiv'2021)](configs/dpt)
-- [x] [SegFormer (ArXiv'2021)](configs/segformer)
+- [x] [SegFormer (NeurIPS'2021)](configs/segformer)

 Supported datasets:
```

README_zh-CN.md

Lines changed: 3 additions & 2 deletions

```diff
@@ -64,7 +64,8 @@ MMSegmentation 是一个基于 PyTorch 的语义分割开源工具箱。它是 O
 - [x] [MobileNetV2 (CVPR'2018)](configs/mobilenet_v2)
 - [x] [MobileNetV3 (ICCV'2019)](configs/mobilenet_v3)
 - [x] [Vision Transformer (ICLR'2021)](configs/vit)
-- [x] [Swin Transformer (ArXiv'2021)](configs/swin)
+- [x] [Swin Transformer (ICCV'2021)](configs/swin)
+- [x] [Twins (NeurIPS'2021)](configs/twins)

 已支持的算法:

@@ -98,7 +99,7 @@ MMSegmentation 是一个基于 PyTorch 的语义分割开源工具箱。它是 O
 - [x] [BiSeNetV2 (IJCV'2021)](configs/bisenetv2)
 - [x] [SETR (CVPR'2021)](configs/setr)
 - [x] [DPT (ArXiv'2021)](configs/dpt)
-- [x] [SegFormer (ArXiv'2021)](configs/segformer)
+- [x] [SegFormer (NeurIPS'2021)](configs/segformer)

 已支持的数据集:
```

configs/_base_/models/twins_pcpvt-s_fpn.py

Lines changed: 44 additions & 0 deletions (new file)

```python
# model settings
backbone_norm_cfg = dict(type='LN')
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
    backbone=dict(
        type='PCPVT',
        init_cfg=dict(
            type='Pretrained', checkpoint='pretrained/pcpvt_small.pth'),
        in_channels=3,
        embed_dims=[64, 128, 320, 512],
        num_heads=[1, 2, 5, 8],
        patch_sizes=[4, 2, 2, 2],
        strides=[4, 2, 2, 2],
        mlp_ratios=[8, 8, 4, 4],
        out_indices=(0, 1, 2, 3),
        qkv_bias=True,
        norm_cfg=backbone_norm_cfg,
        depths=[3, 4, 6, 3],
        sr_ratios=[8, 4, 2, 1],
        norm_after_stage=False,
        drop_rate=0.0,
        attn_drop_rate=0.,
        drop_path_rate=0.2),
    neck=dict(
        type='FPN',
        in_channels=[64, 128, 320, 512],
        out_channels=256,
        num_outs=4),
    decode_head=dict(
        type='FPNHead',
        in_channels=[256, 256, 256, 256],
        in_index=[0, 1, 2, 3],
        feature_strides=[4, 8, 16, 32],
        channels=128,
        dropout_ratio=0.1,
        num_classes=150,
        norm_cfg=norm_cfg,
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    # model training and testing settings
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
```
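For context, the full Twins configs in this commit compose a base file like the one above through MMSegmentation's `_base_` inheritance. A minimal sketch, assuming the standard `_base_` fragment layout of this repo; the optimizer override is illustrative, not copied from the actual config:

```python
# Sketch of a derived config such as
# configs/twins/twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py.
_base_ = [
    '../_base_/models/twins_pcpvt-s_fpn.py',
    '../_base_/datasets/ade20k.py',
    '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_80k.py',
]
# Transformer backbones are commonly trained with AdamW rather than the
# default SGD; these hyperparameter values are placeholders.
optimizer = dict(_delete_=True, type='AdamW', lr=0.0001, weight_decay=0.0003)
```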
configs/_base_/models/twins_pcpvt-s_upernet.py

Lines changed: 52 additions & 0 deletions (new file)

```python
# model settings
backbone_norm_cfg = dict(type='LN')
norm_cfg = dict(type='SyncBN', requires_grad=True)
model = dict(
    type='EncoderDecoder',
    backbone=dict(
        type='PCPVT',
        init_cfg=dict(
            type='Pretrained', checkpoint='pretrained/pcpvt_small.pth'),
        in_channels=3,
        embed_dims=[64, 128, 320, 512],
        num_heads=[1, 2, 5, 8],
        patch_sizes=[4, 2, 2, 2],
        strides=[4, 2, 2, 2],
        mlp_ratios=[8, 8, 4, 4],
        out_indices=(0, 1, 2, 3),
        qkv_bias=True,
        norm_cfg=backbone_norm_cfg,
        depths=[3, 4, 6, 3],
        sr_ratios=[8, 4, 2, 1],
        norm_after_stage=False,
        drop_rate=0.0,
        attn_drop_rate=0.,
        drop_path_rate=0.2),
    decode_head=dict(
        type='UPerHead',
        in_channels=[64, 128, 320, 512],
        in_index=[0, 1, 2, 3],
        pool_scales=(1, 2, 3, 6),
        channels=512,
        dropout_ratio=0.1,
        num_classes=150,
        norm_cfg=norm_cfg,
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
    auxiliary_head=dict(
        type='FCNHead',
        in_channels=320,
        in_index=2,
        channels=256,
        num_convs=1,
        concat_input=False,
        dropout_ratio=0.1,
        num_classes=150,
        norm_cfg=norm_cfg,
        align_corners=False,
        loss_decode=dict(
            type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
    # model training and testing settings
    train_cfg=dict(),
    test_cfg=dict(mode='whole'))
```
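To sanity-check such a config, the segmentor can be instantiated through the usual MMSegmentation v0.x API. A minimal sketch; the config name is taken from the results table below:

```python
from mmcv import Config
from mmseg.models import build_segmentor

cfg = Config.fromfile(
    'configs/twins/twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k.py')
# train_cfg/test_cfg already live inside the model dict in these configs.
model = build_segmentor(cfg.model)
model.init_weights()  # loads the backbone checkpoint named in init_cfg
```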

configs/twins/README.md

Lines changed: 75 additions & 0 deletions (new file)

# Twins: Revisiting the Design of Spatial Attention in Vision Transformers

## Introduction

<!-- [ALGORITHM] -->

<a href="https://github.com/Meituan-AutoML/Twins">Official Repo</a>

<a href="https://github.com/open-mmlab/mmsegmentation/blob/v0.20.0/mmseg/models/backbones/twins.py#L352">Code Snippet</a>

## Abstract

Very recently, a variety of vision transformer architectures for dense prediction tasks have been proposed and they show that the design of spatial attention is critical to their success in these tasks. In this work, we revisit the design of the spatial attention and demonstrate that a carefully-devised yet simple spatial attention mechanism performs favourably against the state-of-the-art schemes. As a result, we propose two vision transformer architectures, namely, Twins-PCPVT and Twins-SVT. Our proposed architectures are highly-efficient and easy to implement, only involving matrix multiplications that are highly optimized in modern deep learning frameworks. More importantly, the proposed architectures achieve excellent performance on a wide range of visual tasks, including image level classification as well as dense detection and segmentation. The simplicity and strong performance suggest that our proposed architectures may serve as stronger backbones for many vision tasks. Our code is released at [this https URL](https://github.com/Meituan-AutoML/Twins).

<!-- [IMAGE] -->
<div align=center>
<img src="https://user-images.githubusercontent.com/24582831/145021310-57826cf5-5e03-4c7c-9081-ffa744bdae27.png" width="80%"/>
</div>

<details>
<summary align="right"><a href="https://arxiv.org/pdf/2104.13840.pdf">Twins (NeurIPS'2021)</a></summary>

```latex
@article{chu2021twins,
  title={Twins: Revisiting spatial attention design in vision transformers},
  author={Chu, Xiangxiang and Tian, Zhi and Wang, Yuqing and Zhang, Bo and Ren, Haibing and Wei, Xiaolin and Xia, Huaxia and Shen, Chunhua},
  journal={arXiv preprint arXiv:2104.13840},
  year={2021}
}
```

</details>
## Usage

To use pre-trained models from other repositories, it is necessary to convert their checkpoint keys first.

We provide a script [`twins2mmseg.py`](../../tools/model_converters/twins2mmseg.py) in the tools directory to convert the keys of models from [the official repo](https://github.com/Meituan-AutoML/Twins) to MMSegmentation style.

```shell
python tools/model_converters/twins2mmseg.py ${PRETRAIN_PATH} ${STORE_PATH} ${MODEL_TYPE}
```

This script converts a `pcpvt` or `svt` pretrained model from `PRETRAIN_PATH` and stores the converted model in `STORE_PATH`.

For example,

```shell
python tools/model_converters/twins2mmseg.py ./alt_gvt_base.pth ./pretrained/alt_gvt_base.pth svt
```
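The converter script itself is not part of this diff; as a rough, hypothetical sketch of the key-remapping idea (the renames below are illustrative placeholders, not the actual mapping used by `twins2mmseg.py`):

```python
from collections import OrderedDict

import torch


def convert_keys(src_state_dict):
    """Rename official-repo parameter names to MMSegmentation style."""
    dst = OrderedDict()
    for k, v in src_state_dict.items():
        # Placeholder renames for illustration only; the real mapping
        # is specific to the pcpvt/svt model structure.
        new_k = k.replace('mlp.fc1', 'ffn.layers.0.0')
        new_k = new_k.replace('mlp.fc2', 'ffn.layers.1')
        dst[new_k] = v
    return dst


ckpt = torch.load('alt_gvt_base.pth', map_location='cpu')
state = ckpt.get('state_dict', ckpt)  # some checkpoints nest the weights
torch.save(convert_keys(state), 'pretrained/alt_gvt_base.pth')
```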
## Results and models
53+
54+
### ADE20K
55+
56+
| Method| Backbone | Crop Size | Lr schd | Mem (GB) | Inf time (fps) | mIoU | mIoU(ms+flip) | config | download |
57+
| ----- | ------- | --------- | ------| ------ | -------------- | ----- | ------------- | ------ |------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
58+
| Twins-FPN | PCPVT-S | 512x512 | 80000| 6.60 | 27.15 | 43.26 | 44.11 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k_20211201_204132-41acd132.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_pcpvt-s_fpn_fpnhead_8x4_512x512_80k_ade20k_20211201_204132.log.json) |
| Twins-UPerNet | PCPVT-S | 512x512 | 160000| 9.67 | 14.24 | 46.04 | 46.92 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k/twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k_20211201_233537-8e99c07a.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k/twins_pcpvt-s_uperhead_8x4_512x512_160k_ade20k_20211201_233537.log.json) |
| Twins-FPN | PCPVT-B | 512x512 | 80000| 8.41 | 19.67 | 45.66 | 46.48 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_pcpvt-b_fpn_fpnhead_8x4_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-b_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_pcpvt-b_fpn_fpnhead_8x4_512x512_80k_ade20k_20211130_141019-d396db72.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-b_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_pcpvt-b_fpn_fpnhead_8x4_512x512_80k_ade20k_20211130_141019.log.json) |
| Twins-UPerNet (8x2) | PCPVT-B | 512x512 | 160000| 6.46 | 12.04 | 47.91 | 48.64 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_pcpvt-b_uperhead_8x2_512x512_160k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-b_uperhead_8x2_512x512_160k_ade20k/twins_pcpvt-b_uperhead_8x2_512x512_160k_ade20k_20211130_141020-02094ea5.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-b_uperhead_8x2_512x512_160k_ade20k/twins_pcpvt-b_uperhead_8x2_512x512_160k_ade20k_20211130_141020.log.json) |
| Twins-FPN | PCPVT-L | 512x512 | 80000| 10.78 | 14.32 | 45.94 | 46.70 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_pcpvt-l_fpn_fpnhead_8x4_512x512_80k_ade20k.py) | [model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-l_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_pcpvt-l_fpn_fpnhead_8x4_512x512_80k_ade20k_20211201_105226-bc6d61dc.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-l_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_pcpvt-l_fpn_fpnhead_8x4_512x512_80k_ade20k_20211201_105226.log.json) |
| Twins-UPerNet (8x2) | PCPVT-L | 512x512 | 160000| 7.82 | 10.70 | 49.35 | 50.08 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_pcpvt-l_uperhead_8x2_512x512_160k_ade20k.py) |[model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-l_uperhead_8x2_512x512_160k_ade20k/twins_pcpvt-l_uperhead_8x2_512x512_160k_ade20k_20211201_075053-c6095c07.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_pcpvt-l_uperhead_8x2_512x512_160k_ade20k/twins_pcpvt-l_uperhead_8x2_512x512_160k_ade20k_20211201_075053.log.json)|
| Twins-FPN | SVT-S| 512x512 | 80000| 5.80 | 29.79 | 44.47 | 45.42 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k.py) |[model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k_20211130_141006-0a0d3317.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_svt-s_fpn_fpnhead_8x4_512x512_80k_ade20k_20211130_141006.log.json)|
| Twins-UPerNet (8x2) | SVT-S| 512x512 | 160000| 4.93 | 15.09 | 46.08 | 46.96 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_svt-s_uperhead_8x2_512x512_160k_ade20k.py) |[model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-s_uperhead_8x2_512x512_160k_ade20k/twins_svt-s_uperhead_8x2_512x512_160k_ade20k_20211130_141005-e48a2d94.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-s_uperhead_8x2_512x512_160k_ade20k/twins_svt-s_uperhead_8x2_512x512_160k_ade20k_20211130_141005.log.json)|
| Twins-FPN | SVT-B| 512x512 | 80000| 8.75 | 21.10 | 46.77 | 47.47 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k.py) |[model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k_20211201_113849-88b2907c.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_svt-b_fpn_fpnhead_8x4_512x512_80k_ade20k_20211201_113849.log.json)|
| Twins-UPerNet (8x2) | SVT-B| 512x512 | 160000| 6.77 | 12.66 | 48.04 | 48.87 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_svt-b_uperhead_8x2_512x512_160k_ade20k.py) |[model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-b_uperhead_8x2_512x512_160k_ade20k/twins_svt-b_uperhead_8x2_512x512_160k_ade20k_20211202_040826-0943a1f1.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-b_uperhead_8x2_512x512_160k_ade20k/twins_svt-b_uperhead_8x2_512x512_160k_ade20k_20211202_040826.log.json)|
| Twins-FPN | SVT-L| 512x512 | 80000| 11.20 | 17.80 | 46.55 | 47.74 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_svt-l_fpn_fpnhead_8x4_512x512_80k_ade20k.py) |[model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-l_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_svt-l_fpn_fpnhead_8x4_512x512_80k_ade20k_20211130_141005-1d59bee2.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-l_fpn_fpnhead_8x4_512x512_80k_ade20k/twins_svt-l_fpn_fpnhead_8x4_512x512_80k_ade20k_20211130_141005.log.json)|
| Twins-UPerNet (8x2) | SVT-L| 512x512 | 160000| 8.41 | 10.73 | 49.65 | 50.63 | [config](https://github.com/open-mmlab/mmsegmentation/blob/master/configs/twins/twins_svt-l_uperhead_8x2_512x512_160k_ade20k.py) |[model](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-l_uperhead_8x2_512x512_160k_ade20k/twins_svt-l_uperhead_8x2_512x512_160k_ade20k_20211130_141005-3e2cae61.pth) &#124; [log](https://download.openmmlab.com/mmsegmentation/v0.5/twins/twins_svt-l_uperhead_8x2_512x512_160k_ade20k/twins_svt-l_uperhead_8x2_512x512_160k_ade20k_20211130_141005.log.json)|

Note:

- `8x2` means 8 GPUs with 2 samples per GPU during training; the default Twins setting on ADE20K is 8 GPUs with 4 samples per GPU (see the sketch after this list).
- `UPerNet` and `FPN` are the decode heads used in the corresponding Twins models, namely `UPerHead` and `FPNHead`, respectively. Note that models in the [official repo](https://github.com/Meituan-AutoML/Twins) all use `UPerHead`.
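In MMSegmentation v0.x the per-GPU batch size lives in the config's `data` dict, so the `8x2` variants differ from the default roughly as sketched here (the `workers_per_gpu` value is illustrative):

```python
# 8 GPUs x 2 samples per GPU instead of the default 8 x 4.
data = dict(samples_per_gpu=2, workers_per_gpu=2)
```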
