Describe the feature
Support multiple images per gpu for testing the model.
Motivation
Ex1. It is inconvenient when I have a lot of data to evaluate; in that case, I would like to be able to use a larger batch size.
Ex2. The TODO at https://github.com/open-mmlab/mmsegmentation/blob/master/tools/test.py#L103 should be completed (see the sketch below).
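The linked TODO sits where tools/test.py builds the test dataloader with samples_per_gpu hard-coded to 1. A minimal sketch of what completing it could look like, assuming the build_dataloader call keeps the signature used at that commit; the test_samples_per_gpu config key is hypothetical and only for illustration:

dataset = build_dataset(cfg.data.test)
data_loader = build_dataloader(
    dataset,
    # hypothetical config key: read the test-time batch size from the config
    # instead of hard-coding 1; defaulting to 1 keeps the current behaviour
    samples_per_gpu=cfg.data.get('test_samples_per_gpu', 1),
    workers_per_gpu=cfg.data.workers_per_gpu,
    dist=distributed,
    shuffle=False)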
Additional context
For single_gpu_test this problem is easy to solve, but I am not sure how such a modification would affect multi_gpu_test. Can anyone provide some suggestions?
For single_gpu_test, we can modify mmsegmentation/mmseg/apis/test.py, lines 31 to 45 at commit c3e4dbc:
for i, data in enumerate(data_loader):
    with torch.no_grad():
        result = model(return_loss=False, **data)
    if isinstance(result, list):
        results.extend(result)
    else:
        results.append(result)

    if show or out_dir:
        img_tensor = data['img'][0]
        img_metas = data['img_metas'][0].data[0]
        imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg'])
        assert len(imgs) == len(img_metas)

        for img, img_meta in zip(imgs, img_metas):
to:
...
for i, data in enumerate(data_loader):
    with torch.no_grad():
        result_per_batch = model(return_loss=False, **data)
    if isinstance(result_per_batch, list):
        results.extend(result_per_batch)
    else:
        results.append(result_per_batch)

    if show or out_dir:
        img_tensor = data['img'][0]
        img_metas = data['img_metas'][0].data[0]
        imgs = tensor2imgs(img_tensor, **img_metas[0]['img_norm_cfg'])
        assert len(imgs) == len(img_metas)

        # use a separate index so the outer loop variable i is not shadowed
        for j, (img, img_meta) in enumerate(zip(imgs, img_metas)):
            # pick out this image's result from the batched output
            result = result_per_batch[j:j + 1]
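For context, a rough sketch of how the per-image slice would then flow into the visualization call that follows in single_gpu_test; the body below is an approximation of the existing loop (assuming mmcv and os.path as osp are imported as in that file), not an exact copy:

        for j, (img, img_meta) in enumerate(zip(imgs, img_metas)):
            result = result_per_batch[j:j + 1]

            # crop away padding and resize back to the original image shape
            h, w, _ = img_meta['img_shape']
            img_show = img[:h, :w, :]
            ori_h, ori_w = img_meta['ori_shape'][:-1]
            img_show = mmcv.imresize(img_show, (ori_w, ori_h))

            out_file = osp.join(out_dir, img_meta['ori_filename']) if out_dir else None

            # only this image's result is handed to show_result
            model.module.show_result(
                img_show,
                result,
                palette=dataset.PALETTE,
                show=show,
                out_file=out_file)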