
Feature: Support for resizing multiple format loras. #2057


Open
wants to merge 3 commits into main
Conversation

Symbiomatrix
Contributor

Various trainers use different naming conventions for LoRA weights, which break the resize script. A common one is lora_A / lora_B instead of lora_down / lora_up. This change reads whichever convention the given model uses and adjusts all hardcoded references to that name.
It's a bit of a patchwork fix using global variables, since I don't know the full flow of the code, but I've tested it extensively on LoRAs for several types of base models (from 1.5 to Wan) and it works for every LoRA I've tried.
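A minimal sketch of the detection idea, not the actual PR diff: inspect the state dict keys once, record the (down, up) name pair in use in a module-level LORAFMT (the name mentioned in the review below), and let the rest of the script build key names from it. The detect_lora_format helper and the exact key patterns here are assumptions for illustration.

```python
# Default to the kohya-style convention.
LORAFMT = ("lora_down", "lora_up")

def detect_lora_format(state_dict):
    """Inspect LoRA keys and record which (down, up) naming pair the model uses."""
    global LORAFMT
    for key in state_dict:
        if ".lora_A." in key or key.endswith(".lora_A.weight"):
            LORAFMT = ("lora_A", "lora_B")      # PEFT / diffusers-style naming
            break
        if ".lora_down." in key or key.endswith(".lora_down.weight"):
            LORAFMT = ("lora_down", "lora_up")  # kohya-style naming
            break
    return LORAFMT
```

Downstream code can then build keys as f"{prefix}.{LORAFMT[0]}.weight" instead of hardcoding lora_down, which is what the f-string comment below is about.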

@woct0rdho

In f-strings, we need to replace LORAFMT[0] and LORAFMT[1] with {LORAFMT[0]} and {LORAFMT[1]}
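A small illustration of the issue, using a hypothetical key prefix: without the braces, the f-string emits the literal text LORAFMT[0] instead of the detected convention name, so the key lookup never matches.

```python
LORAFMT = ("lora_A", "lora_B")
prefix = "lora_unet_down_blocks_0_attentions_0"  # hypothetical module prefix

# Wrong: the index expression is plain text inside the f-string.
bad_key = f"{prefix}.LORAFMT[0].weight"     # -> "...0.LORAFMT[0].weight"

# Right: braces interpolate the detected convention into the key.
good_key = f"{prefix}.{LORAFMT[0]}.weight"  # -> "...0.lora_A.weight"
```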

@Symbiomatrix
Contributor Author

Symbiomatrix commented Jun 21, 2025

@woct0rdho Right, oops. Fixed. Eyeballing refactors in PR revisions is a bad habit of mine.
