Commit 251c429

update readme with some training notes

1 parent 3d790bc commit 251c429

File tree

1 file changed: +3 −3 lines changed


README.md

Lines changed: 3 additions & 3 deletions
```diff
@@ -172,9 +172,6 @@ for i, result in enumerate(inference_detector(model, imgs, cfg, device='cuda:0')
 mmdetection implements distributed training and non-distributed training,
 which uses `MMDistributedDataParallel` and `MMDataParallel` respectively.
 
-We suggest using distributed training even on a single machine, which is faster,
-and non-distributed training are left for debugging or other purposes.
-
 ### Distributed training
 
 mmdetection potentially supports multiple launch methods, e.g., PyTorch’s built-in launch utility, slurm and MPI.
@@ -202,6 +199,9 @@ Expected results in WORK_DIR:
 - saved checkpoints (every k epochs, defaults=1)
 - a symbol link to the latest checkpoint
 
+> **Note**
+> 1. We recommend using distributed training with NCCL2 even on a single machine, which is faster. Non-distributed training is for debugging or other purposes.
+> 2. The default learning rate is for 8 GPUs. If you use less or more than 8 GPUs, you need to set the learning rate proportional to the GPU num. E.g., modify lr to 0.01 for 4 GPUs or 0.04 for 16 GPUs.
 
 ## Technical details
```

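The linear learning-rate scaling rule described in the added note can be sketched as follows. The base values (`base_lr=0.02` for 8 GPUs) are assumptions inferred from the note's examples (0.01 for 4 GPUs, 0.04 for 16 GPUs), not values read from any mmdetection config:

```python
def scale_lr(num_gpus, base_lr=0.02, base_gpus=8):
    """Scale the learning rate linearly with the number of GPUs.

    base_lr and base_gpus are illustrative assumptions matching the
    note's examples; adjust them to your actual config values.
    """
    return base_lr * num_gpus / base_gpus

for n in (4, 8, 16):
    print(f"{n} GPUs -> lr {scale_lr(n):.3f}")
```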