Right now the finetuning example seems to work correctly only for CPU-only training or with all layers offloaded to the GPU. In principle, though, it should be possible to reuse the same partial offloading logic that prompt processing already uses, in order to accelerate training of models that need more memory than the available VRAM.
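For reference, this is roughly how partial offloading is requested for inference through the llama.cpp C API (a minimal sketch; names follow a recent `llama.h`, the layer count is a hypothetical value, and the training-side hooks the finetuning example would need are exactly what is missing today):

```cpp
#include "llama.h"
#include <cstdio>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    llama_backend_init();

    llama_model_params mparams = llama_model_default_params();
    // Partial offloading: only the first N layers are placed in VRAM, the
    // rest stay in system RAM. Prompt processing already handles this split;
    // the finetuning code path currently does not.
    mparams.n_gpu_layers = 20; // hypothetical value, tune to available VRAM

    llama_model * model = llama_model_load_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // ... build a context and run prompt processing / (eventually) training ...

    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```

The point of the request is that setting `n_gpu_layers` to an intermediate value like this should eventually work for the training path as well, not only for inference.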