[NEW][Feature] Support SegNeXt (NeurIPS'2022) in master branch #2600
@@ -48,7 +48,7 @@ The pretrained model could be found [here](https://cloud.tsinghua.edu.cn/d/c15b2

Note:

- The total batch size is 16. We trained SegNeXt with a single GPU because performance degrades significantly when using `SyncBN` (mainly in the `OverlapPatchEmbed` modules of `MSCAN`) with PyTorch 1.9; see the config sketch after this list.
- `Inf time (fps)` is collected on A100.
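To make the single-GPU / plain-`BN` setup concrete, here is a minimal config sketch in the usual mmsegmentation override style. It is an illustration, not the config shipped in this PR: the `_base_` file name is hypothetical, and it assumes the MSCAN backbone and decode head accept a `norm_cfg` override, as most mmseg models do.

```python
# Minimal sketch: train SegNeXt on one GPU with plain BN instead of SyncBN,
# matching the note above. The _base_ path below is hypothetical.
_base_ = './segnext_mscan-b_512x512_160k_ade20k.py'

# Plain BN sidesteps the SyncBN degradation observed in the
# OverlapPatchEmbed modules of MSCAN under PyTorch 1.9; on a single GPU,
# SyncBN is unnecessary anyway.
norm_cfg = dict(type='BN', requires_grad=True)

model = dict(
    backbone=dict(norm_cfg=norm_cfg),    # assumes MSCAN accepts norm_cfg
    decode_head=dict(norm_cfg=norm_cfg))

# Total batch size 16 = 16 samples per GPU x 1 GPU.
data = dict(samples_per_gpu=16, workers_per_gpu=8)
```

Launched with the standard single-GPU entry point (`python tools/train.py <config>`), this reproduces the batch-size-16, no-`SyncBN` setting described in the note.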
Review comment: why add this note? We always collect inference time on A100, but never emphasized it.

Reply: Because before SegNeXt, the FPS of most of our checkpoints was collected on V100.
MeowZheng marked this conversation as resolved.