Conversation

@spacepxl
Contributor

What does this PR do?

Fixes #8511 (issue)

If text encoder training is not enabled, the text encoders are not needed during training, so they are deleted and garbage collection and a CUDA cache clear are run to free up the memory. Previously, the deletion only removed the lists referencing the text encoders and tokenizers; because the models are also referenced directly by their own variables, they were never actually freed. This fix is a crude attempt to delete all references to the text encoders, so they can actually be removed from memory.
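The underlying behavior can be illustrated without torch or the actual training scripts. The sketch below (variable names like `text_encoder_one` are illustrative, not taken from the scripts verbatim) uses a `weakref` probe to show that deleting only the list leaves the objects alive, while deleting every named reference lets Python reclaim them:

```python
import gc
import weakref

class TextEncoder:
    """Stand-in for a large model held in memory."""

# The encoders are referenced both by a list and by their own
# named variables, mirroring the situation in the training scripts.
text_encoder_one = TextEncoder()
text_encoder_two = TextEncoder()
text_encoders = [text_encoder_one, text_encoder_two]

# A weak reference lets us observe whether the object was freed
# without keeping it alive ourselves.
probe = weakref.ref(text_encoder_one)

# Old behavior: delete only the list. The named variables still
# hold references, so the objects survive garbage collection.
del text_encoders
gc.collect()
alive_after_list_delete = probe() is not None

# Fixed behavior: delete every remaining reference, then collect.
# (In the real scripts this would be followed by a CUDA cache clear.)
del text_encoder_one, text_encoder_two
gc.collect()
alive_after_full_delete = probe() is not None
```

Only once every reference is gone does the weak reference go dead, which is why `del text_encoders` alone was not enough to release the GPU memory.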

Before submitting

Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag
members/contributors who may be interested in your PR.

@sayakpaul

@HuggingFaceDocBuilderDev

The docs for this PR live here. All of your documentation changes will be reflected on that endpoint. The docs are available until 30 days after the last update.

Member

@sayakpaul sayakpaul left a comment

Thanks a ton!

@sayakpaul sayakpaul merged commit 8e1b7a0 into huggingface:main Jun 16, 2024
sayakpaul added a commit that referenced this pull request Dec 23, 2024
… the text encoders are not being trained (#8536)

* Update train_dreambooth_sd3.py to fix TE garbage collection

* Update train_dreambooth_lora_sd3.py to fix TE garbage collection

---------

Co-authored-by: Kashif Rasul <[email protected]>
Co-authored-by: Sayak Paul <[email protected]>
Successfully merging this pull request may close these issues.

SD3 Dreambooth/Lora training scripts don't actually unload the text encoders

4 participants