Qwen2VL fine-tuning fails at a certain step #3924
Comments
Running into the same error.
Maybe OOM.
@Jintao-Huang I reran it and it still failed; the GPU status during the run is recorded below.
Lowering batch_size from 2 to 1 lets it run normally. Oddly, at batch_size 2 the GPUs were not even fully used.
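A common way to keep the effective batch size while lowering peak activation memory is to halve `--per_device_train_batch_size` and double `--gradient_accumulation_steps`. A minimal sketch against the run command quoted in the issue body below (only the relevant flags reproduced, paths assumed unchanged):

```shell
# Sketch only: per-device batch 2 -> 1, gradient accumulation 16 -> 32,
# so the effective batch per optimizer step stays the same (2 * 16 == 1 * 32).
/home/cjy/miniconda3/envs/swift/bin/python \
  /home/cjy/miniconda3/envs/swift/lib/python3.10/site-packages/swift/cli/sft.py \
  --torch_dtype bfloat16 \
  --model /home/cjy/model/Qwen/Qwen2-VL-2B-Instruct \
  --model_type qwen2_vl \
  --template qwen2_vl \
  --dataset /home/cjy/data/xxx_indoor/label/part1_three.jsonl \
  --max_length 1024 \
  --per_device_train_batch_size 1 \
  --gradient_accumulation_steps 32 \
  --learning_rate 1e-4 \
  --num_train_epochs 1 \
  --attn_impl flash_attn \
  --output_dir /home/cjy/model/xxx/Qwen2-VL-2B-Instruct/
```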
Your GPU memory usage is extremely imbalanced.
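To check whether a single rank is hitting the memory ceiling while the others stay low, per-GPU memory can be polled while training runs. A minimal sketch using nvidia-smi (the 2-second interval is an arbitrary choice):

```shell
# Print index, used and total memory for every GPU, refreshing every 2 seconds;
# a large gap between devices indicates imbalanced memory usage across ranks.
nvidia-smi --query-gpu=index,memory.used,memory.total --format=csv -l 2
```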
Describe the bug
What the bug is and how to reproduce it, ideally with screenshots.
I am running LoRA fine-tuning on the all-linear layers of Qwen2VL for a multimodal task. Training fails at step 200. I previously ran the same task successfully with single-image samples; the current data has two images per sample (not sure whether that matters).
Run command:
run sh: `/home/cjy/miniconda3/envs/swift/bin/python /home/cjy/miniconda3/envs/swift/lib/python3.10/site-packages/swift/cli/sft.py --torch_dtype bfloat16 --model /home/cjy/model/Qwen/Qwen2-VL-2B-Instruct --model_type qwen2_vl --template qwen2_vl --system You are a helpful and harmless assistant. --dataset /home/cjy/data/xxx_indoor/label/part1_three.jsonl --max_length 1024 --init_weights True --per_device_train_batch_size 2 --learning_rate 1e-4 --num_train_epochs 1 --attn_impl flash_attn --gradient_accumulation_steps 16 --eval_steps 200 --output_dir /home/cjy/model/xxx/Qwen2-VL-2B-Instruct/ --report_to tensorboard --add_version False --output_dir /home/cjy/model/xxx/Qwen2-VL-2B-Instruct/v9-20250417-183456 --logging_dir /home/cjy/model/xxx/Qwen2-VL-2B-Instruct/v9-20250417-183456/runs --ignore_args_error True`
Your hardware and system info
Write your system info like CUDA version/system/GPU/torch version here.
CUDA version: 12.6
Driver version: 560.35.05
GPUs: 4x A100
OS: Ubuntu 20.04
torch: 2.5.1
Additional context
Add any other context about the problem here.