Commit f4d495b
Provide help - insufficient max locked memory error for Nvidia runs (mlcommons#2355)
Co-authored-by: Miro <[email protected]>
Parent: d9efabf

File tree: 1 file changed, +1 −0 lines changed


main.py

Lines changed: 1 addition & 0 deletions

```diff
@@ -619,6 +619,7 @@ def get_docker_info(spaces, model, implementation,
     if implementation.lower() == "nvidia":
         info += f"{pre_space} - Default batch size is assigned based on [GPU memory](https://github.com/mlcommons/cm4mlops/blob/dd0c35856969c68945524d5c80414c615f5fe42c/script/app-mlperf-inference-nvidia/_cm.yaml#L1129) or the [specified GPU](https://github.com/mlcommons/cm4mlops/blob/dd0c35856969c68945524d5c80414c615f5fe42c/script/app-mlperf-inference-nvidia/_cm.yaml#L1370). Please click more options for *docker launch* or *run command* to see how to specify the GPU name.\n\n"
         info += f"{pre_space} - When run with `--all_models=yes`, all the benchmark models of the NVIDIA implementation can be executed within the same container.\n\n"
+        info += f"{pre_space} - If you encounter an error related to ulimit or max locked memory during the run_harness step, please refer to [this](https://github.com/mlcommons/mlperf-automations/issues/664) issue for details and resolution steps.\n\n"
     else:
         if model == "sdxl":
             info += f"\n{pre_space}!!! tip\n\n"
```
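The new bullet points users at ulimit / max locked memory failures during the NVIDIA run_harness step. As a rough illustration only (not part of this commit, and not the official resolution, which lives in the linked issue #664), the sketch below uses Python's standard `resource` module to inspect the max-locked-memory limit on a Linux host; `RLIM_INFINITY` means unlimited, which is what memory-pinning GPU workloads typically need.

```python
# Illustrative check, not part of the mlcommons codebase: inspect the
# "max locked memory" limit (the same value `ulimit -l` reports) that
# the NVIDIA run_harness step can exceed when pinning host memory.
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_MEMLOCK)

def fmt(limit: int) -> str:
    # resource.RLIM_INFINITY indicates the limit is unlimited.
    return "unlimited" if limit == resource.RLIM_INFINITY else f"{limit} bytes"

print(f"max locked memory: soft={fmt(soft)}, hard={fmt(hard)}")
```

If the limit is finite, a common container-side workaround is passing `--ulimit memlock=-1` to `docker run`; treat that as a general Docker option rather than a documented mlcommons step, and defer to the linked issue for the project's recommended resolution.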
