Commit 0391dcd
Upgrade pre commit hooks master (open-mmlab#2155)
* Upgrade pre commit hooks
* Upgrade pre commit hooks
* mim install mmcv-full
* install mim
* install mmcv-full
* test mmcv-full 1.6.0
* fix timm
* fix timm
* fix timm
1 parent 9d2312b commit 0391dcd

7 files changed: +16 -14 lines

.github/workflows/build.yml

Lines changed: 7 additions & 5 deletions

@@ -70,13 +70,14 @@ jobs:
           coverage run --branch --source mmseg -m pytest tests/
           coverage xml
           coverage report -m
-        if: ${{matrix.torch >= '1.5.0'}}
+        # timm from v0.6.11 requires torch>=1.7
+        if: ${{matrix.torch >= '1.7.0'}}
       - name: Skip timm unittests and generate coverage report
         run: |
           coverage run --branch --source mmseg -m pytest tests/ --ignore tests/test_models/test_backbones/test_timm_backbone.py
           coverage xml
           coverage report -m
-        if: ${{matrix.torch < '1.5.0'}}
+        if: ${{matrix.torch < '1.7.0'}}
 
   build_cuda101:
     runs-on: ubuntu-18.04
@@ -144,13 +145,14 @@ jobs:
           coverage run --branch --source mmseg -m pytest tests/
           coverage xml
           coverage report -m
-        if: ${{matrix.torch >= '1.5.0'}}
+        # timm from v0.6.11 requires torch>=1.7
+        if: ${{matrix.torch >= '1.7.0'}}
       - name: Skip timm unittests and generate coverage report
         run: |
           coverage run --branch --source mmseg -m pytest tests/ --ignore tests/test_models/test_backbones/test_timm_backbone.py
           coverage xml
           coverage report -m
-        if: ${{matrix.torch < '1.5.0'}}
+        if: ${{matrix.torch < '1.7.0'}}
       - name: Upload coverage to Codecov
         uses: codecov/[email protected]
         with:
@@ -249,7 +251,7 @@ jobs:
         run: pip install -e .
       - name: Run unittests
         run: |
-          python -m pip install timm
+          python -m pip install 'timm<0.6.11'
           coverage run --branch --source mmseg -m pytest tests/
       - name: Generate coverage report
         run: |
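The raised gate and the new pin encode the same constraint from two sides: timm v0.6.11 and later require torch>=1.7, so older torch builds install `timm<0.6.11` and skip the timm unittests. A minimal sketch of that version check in Python, assuming the `packaging` library (an illustration, not part of the workflow itself):

```python
from packaging.version import Version

def run_timm_unittests(torch_version: str) -> bool:
    """Mirror the workflow gate: timm from v0.6.11 requires torch>=1.7."""
    return Version(torch_version) >= Version("1.7.0")

print(run_timm_unittests("1.6.0"))   # False: timm tests are skipped
print(run_timm_unittests("1.13.0"))  # True: timm tests run
```

Using `packaging.version.Version` avoids the pitfalls of comparing version strings lexicographically (e.g. "1.10" vs "1.9").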

.pre-commit-config.yaml

Lines changed: 4 additions & 4 deletions

@@ -1,18 +1,18 @@
 repos:
   - repo: https://gitlab.com/pycqa/flake8.git
-    rev: 3.8.3
+    rev: 5.0.4
     hooks:
       - id: flake8
   - repo: https://github.com/PyCQA/isort
     rev: 5.10.1
     hooks:
       - id: isort
   - repo: https://github.com/pre-commit/mirrors-yapf
-    rev: v0.30.0
+    rev: v0.32.0
     hooks:
       - id: yapf
   - repo: https://github.com/pre-commit/pre-commit-hooks
-    rev: v3.1.0
+    rev: v4.3.0
     hooks:
       - id: trailing-whitespace
       - id: check-yaml
@@ -34,7 +34,7 @@ repos:
         - mdformat_frontmatter
         - linkify-it-py
   - repo: https://github.com/codespell-project/codespell
-    rev: v2.1.0
+    rev: v2.2.1
     hooks:
       - id: codespell
   - repo: https://github.com/myint/docformatter

docs/en/faq.md

Lines changed: 1 addition & 1 deletion

@@ -53,7 +53,7 @@ Briefly, it is a deep supervision trick to improve the accuracy. In the training
 
 ## Why is the log file not created
 
-In the train script, we call `get_root_logger`at Line 167, and `get_root_logger` in mmseg calls `get_logger` in mmcv, mmcv will return the same logger which has beed initialized in 'mmsegmentation/tools/train.py' with the parameter `log_file`. There is only one logger (initialized with `log_file`) during training.
+In the train script, we call `get_root_logger`at Line 167, and `get_root_logger` in mmseg calls `get_logger` in mmcv, mmcv will return the same logger which has been initialized in 'mmsegmentation/tools/train.py' with the parameter `log_file`. There is only one logger (initialized with `log_file`) during training.
 Ref: [https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16](https://github.com/open-mmlab/mmcv/blob/21bada32560c7ed7b15b017dc763d862789e29a8/mmcv/utils/logging.py#L9-L16)
 
 If you find the log file not been created, you might check if `mmcv.utils.get_logger` is called elsewhere.
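The single-logger behavior this FAQ entry describes mirrors Python's standard `logging` module, which mmcv's `get_logger` builds on: requesting a logger by the same name always returns the same instance, so handlers (such as the `log_file` handler) attached at first initialization are not created again. A minimal stdlib-only sketch:

```python
import logging

# The first call creates and configures the named logger;
# later calls by the same name return the same object.
first = logging.getLogger("mmseg")
first.addHandler(logging.StreamHandler())

second = logging.getLogger("mmseg")
print(first is second)       # True: one logger instance per name
print(len(second.handlers))  # 1: the handler attached at first initialization
```

This is why calling `get_logger` somewhere else before the train script does can leave the logger without the `log_file` handler.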

docs/en/tutorials/customize_datasets.md

Lines changed: 1 addition & 1 deletion

@@ -33,7 +33,7 @@ data = dict(
 - `train`, `val` and `test`: The [`config`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/config.md)s to build dataset instances for model training, validation and testing by
   using [`build and registry`](https://github.com/open-mmlab/mmcv/blob/master/docs/en/understand_mmcv/registry.md) mechanism.
 
-- `samples_per_gpu`: How many samples per batch and per gpu to load during model training, and the `batch_size` of training is equal to `samples_per_gpu` times gpu number, e.g. when using 8 gpus for distributed data parallel trainig and `samples_per_gpu=4`, the `batch_size` is `8*4=32`.
+- `samples_per_gpu`: How many samples per batch and per gpu to load during model training, and the `batch_size` of training is equal to `samples_per_gpu` times gpu number, e.g. when using 8 gpus for distributed data parallel training and `samples_per_gpu=4`, the `batch_size` is `8*4=32`.
   If you would like to define `batch_size` for testing and validation, please use `test_dataloaser` and
   `val_dataloader` with mmseg >=0.24.1.
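The batch-size arithmetic in the changed line is simple but worth spelling out; a sketch (the function name is illustrative, not mmseg API):

```python
def effective_batch_size(samples_per_gpu: int, num_gpus: int) -> int:
    # Total training batch size = per-GPU batch size * number of GPUs
    return samples_per_gpu * num_gpus

print(effective_batch_size(samples_per_gpu=4, num_gpus=8))  # 32
```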

mmseg/models/backbones/vit.py

Lines changed: 1 addition & 1 deletion

@@ -337,7 +337,7 @@ def init_weights(self):
             constant_init(m, val=1.0, bias=0.)
 
     def _pos_embeding(self, patched_img, hw_shape, pos_embed):
-        """Positiong embeding method.
+        """Positioning embeding method.
 
         Resize the pos_embed, if the input image size doesn't match
         the training size.

mmseg/models/losses/focal_loss.py

Lines changed: 1 addition & 1 deletion

@@ -78,7 +78,7 @@ def sigmoid_focal_loss(pred,
                        valid_mask=None,
                        reduction='mean',
                        avg_factor=None):
-    r"""A warpper of cuda version `Focal Loss
+    r"""A wrapper of cuda version `Focal Loss
     <https://arxiv.org/abs/1708.02002>`_.
 
     Args:
         pred (torch.Tensor): The prediction with shape (N, C), C is the number
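For context, the loss this docstring wraps is FL(p_t) = -alpha_t (1 - p_t)^gamma log(p_t) from the linked paper. A minimal pure-Python sketch for a single binary logit (an illustration of the formula, not the CUDA kernel the docstring refers to):

```python
import math

def sigmoid_focal_loss_scalar(logit: float, target: int,
                              gamma: float = 2.0, alpha: float = 0.25) -> float:
    """Focal loss for one binary prediction (Lin et al., 2017)."""
    p = 1.0 / (1.0 + math.exp(-logit))       # sigmoid probability
    p_t = p if target == 1 else 1.0 - p      # probability of the true class
    alpha_t = alpha if target == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor down-weights easy, well-classified examples.
    return -alpha_t * (1.0 - p_t) ** gamma * math.log(p_t)

# A confident correct prediction contributes far less than an uncertain one:
print(sigmoid_focal_loss_scalar(6.0, 1) < sigmoid_focal_loss_scalar(0.0, 1))
```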

setup.cfg

Lines changed: 1 addition & 1 deletion

@@ -19,4 +19,4 @@ default_section = THIRDPARTY
 skip = *.po,*.ts,*.ipynb
 count =
 quiet-level = 3
-ignore-words-list = formating,sur,hist,dota,ba
+ignore-words-list = formating,sur,hist,dota,ba,warmup
