
[NEW][Feature]Support SegNeXt(NeurIPS'2022) in master branch #2600


Merged · 17 commits · Feb 24, 2023
fix readme
MengzhangLI committed Feb 21, 2023
commit a4fd8c448333164f46f215fa74b3f7965ea0a905
2 changes: 1 addition & 1 deletion configs/segnext/README.md
@@ -48,7 +48,7 @@ The pretrained model could be found [here](https://cloud.tsinghua.edu.cn/d/c15b2

Note:

- The total batch size is 16. We trained SegNeXt with a single GPU because performance degrades significantly when using `SyncBN` (mainly in the `OverlapPatchEmbed` modules of `MSCAN`) with PyTorch 1.9.

- `Inf time (fps)` is collected from A100.
Collaborator:
Why add this note? We always collect inference time on A100, but never emphasized it.

@MengzhangLI (Contributor, Author) · Feb 16, 2023:

Because before SegNeXt, the FPS of most of our checkpoints was collected on V100.


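The note above attributes the single-GPU, total-batch-size-16 training setup to a `SyncBN` performance degradation in PyTorch 1.9 (mainly in the `OverlapPatchEmbed` modules of `MSCAN`). As a rough illustration only, here is a minimal MMSegmentation-style config sketch that keeps plain `BN` for single-GPU training; the key names follow the usual upstream config layout and are assumptions, not lines copied from this PR.

```python
# Hypothetical MMSegmentation-style override: keep plain BN for single-GPU training
# instead of SyncBN, matching the rationale in the README note. Key names are
# illustrative; the actual SegNeXt configs in this PR may differ.
norm_cfg = dict(type='BN', requires_grad=True)  # 'SyncBN' would be the multi-GPU choice

model = dict(
    backbone=dict(
        type='MSCAN',        # SegNeXt backbone mentioned in the note
        norm_cfg=norm_cfg,   # also used inside the OverlapPatchEmbed modules
    ),
    decode_head=dict(norm_cfg=norm_cfg),
)
```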
8 changes: 4 additions & 4 deletions configs/segnext/segnext.yml
@@ -21,7 +21,7 @@ Models:
lr schd: 160000
inference time (ms/im):
- value: 19.09
- hardware: V100
+ hardware: A100
backend: PyTorch
batch size: 1
mode: FP32
@@ -43,7 +43,7 @@ Models:
lr schd: 160000
inference time (ms/im):
- value: 23.66
- hardware: V100
+ hardware: A100
backend: PyTorch
batch size: 1
mode: FP32
@@ -65,7 +65,7 @@ Models:
lr schd: 160000
inference time (ms/im):
- value: 28.45
- hardware: V100
+ hardware: A100
backend: PyTorch
batch size: 1
mode: FP32
@@ -87,7 +87,7 @@ Models:
lr schd: 160000
inference time (ms/im):
- value: 43.65
- hardware: V100
+ hardware: A100
backend: PyTorch
batch size: 1
mode: FP32
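For reference on the two metrics quoted above: the README note reports `Inf time (fps)` while `segnext.yml` stores `inference time (ms/im)`; they are related by a simple reciprocal. A small sketch of the conversion, using values that appear in the diff (the per-variant labels are not shown in these hunks, so none are assumed):

```python
# Convert per-image latency (ms/im), as stored in segnext.yml, to the FPS figure
# the README refers to. Pure arithmetic; the values below come from the diff above.
def ms_per_image_to_fps(ms_per_im: float) -> float:
    """Frames per second for a given per-image latency in milliseconds."""
    return 1000.0 / ms_per_im

print(round(ms_per_image_to_fps(19.09), 1))  # 52.4 fps
print(round(ms_per_image_to_fps(43.65), 1))  # 22.9 fps
```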