Hello, thank you very much for your incredible work!
I have a question regarding fine-tuning with LoRA.
Suppose I fine-tune a LoRA, merge it into the model, and then start a new LoRA fine-tuning using this merged model as the base (instead of the original base model), like this:
```shell
swift sft \
  --model OpenGVLab/InternVL2_5-78B-MPO \
  --train_type lora \
  ...
```
and then like this:
```shell
swift sft \
  --model ./InternVL2_5-78B-MPO/checkpoint-xxxx-merged \
  --train_type lora \
  ...
```
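For context, here is my understanding of what the merge step in between does, sketched with Hugging Face PEFT rather than ms-swift syntax (the checkpoint paths are placeholders, and I am assuming the adapter checkpoint is in standard PEFT format):

```python
# Minimal sketch (assumption: the LoRA checkpoint is in standard PEFT format):
# merge_and_unload() folds each delta (alpha / r) * B @ A into the base weights,
# so after merging the first LoRA no longer exists as separate adapter weights.
from transformers import AutoModel
from peft import PeftModel

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL2_5-78B-MPO", trust_remote_code=True
)
model = PeftModel.from_pretrained(base, "./InternVL2_5-78B-MPO/checkpoint-xxxx")

merged = model.merge_and_unload()  # plain model with the LoRA baked in
merged.save_pretrained("./InternVL2_5-78B-MPO/checkpoint-xxxx-merged")
```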
What will happen to the previous LoRA, especially if the rank and alpha parameters are changed in the second stage? Will a new LoRA be created, or will training somehow continue from the previous one? Will the previous LoRA be erased, especially if no parameter for resuming training was set?
What should I do if I want to train a LoRA and then add another one on top of it?
Specifically: I want to first train one LoRA, and then, while keeping the first one applied, start a second fine-tuning on top of it, without losing the previous LoRA.
How should I properly set this up?
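To make the intended setup concrete, here is a minimal sketch written directly against Hugging Face PEFT rather than ms-swift; the adapter names, checkpoint paths, and LoRA hyperparameters (r, lora_alpha, target_modules) are placeholders chosen only for illustration. What I would like to know is how to achieve the equivalent of this with swift sft.

```python
# Minimal sketch of the intended two-stage setup (placeholder names and paths):
# the first, already trained LoRA stays applied as a frozen adapter while a
# second, independently configured adapter is added and trained on top.
from transformers import AutoModel
from peft import PeftModel, LoraConfig

base = AutoModel.from_pretrained(
    "OpenGVLab/InternVL2_5-78B-MPO", trust_remote_code=True
)

# Attach the first trained LoRA as a named adapter (kept as separate weights).
model = PeftModel.from_pretrained(base, "./lora_stage1", adapter_name="stage1")

# Add a second adapter with its own rank/alpha; this does not overwrite "stage1".
model.add_adapter(
    "stage2",
    LoraConfig(r=16, lora_alpha=32, target_modules=["q_proj", "v_proj"]),
)

# Keep both adapters active in the forward pass...
model.base_model.set_adapter(["stage1", "stage2"])

# ...but freeze the first one so only "stage2" is updated in the second stage.
for name, param in model.named_parameters():
    if ".stage1." in name:
        param.requires_grad = False
```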