Nvidia 50 Series (Blackwell) support thread: How to get ComfyUI running on your new 50 series GPU. #6643
Replies: 73 comments 274 replies
-
There are also Windows prebuilt wheels of pytorch built against cuda 12.8 that Nvidia gave w-e-w to publish: https://huggingface.co/w-e-w/torch-2.6.0-cu128.nv
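If you go this route, the usual approach is to download the wheel matching your Python version from that repository and install it into the python that runs ComfyUI. The filename below is only an example (check the repository for the real one), and the python_embeded path assumes the Windows portable package, run from the ComfyUI_windows_portable folder:
# install a downloaded cu128 torch wheel into the portable package's embedded python (example filename)
python_embeded\python.exe -m pip install torch-2.6.0+cu128.nv-cp312-cp312-win_amd64.whl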
-
The Windows version said:
-
nf4 node does not work.
-
ImportError: tokenizers>=0.21,<0.22 is required for a normal functioning of this module, but found tokenizers==0.20.3. What can I do to fix this problem?
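The message itself points at the fix: the installed tokenizers is older than what this transformers version expects. Upgrading it inside the python that runs ComfyUI should clear it; the python_embeded path assumes the Windows portable package, adjust if you use a venv:
# upgrade tokenizers to the range transformers asks for
python_embeded\python.exe -m pip install --upgrade "tokenizers>=0.21,<0.22"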
-
Sorry, I have been banging my head against this for 6 hours. I'm trying to run a Cosmos workflow on a 5090; without SageAttention or TorchCompile, a job that takes 14 minutes on a 4090 is taking 25 minutes. To turn them on I installed Triton and SageAttention, but then I get an error in KSampler whether I bypass the patch/compile nodes or not. I have run this through GPT and followed every instruction:
Step 1: Verify the CUDA toolkit installation (in Python); if it does not match 12.8, you may need to reinstall PyTorch with the correct CUDA version. ✅
Step 2: Check the NVIDIA toolkit and drivers; update your NVIDIA driver to the latest. ✅
Step 3: Fix Microsoft Visual Studio Build Tools. Run:
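For step 1, a quick generic way to check which CUDA build of PyTorch is actually installed (not part of the original post, just the usual one-liner):
# print the torch version, the CUDA version it was built against, and the detected GPU
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.get_device_name(0))"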
-
Does anyone know if other pytorch CUDA versions like 12.6 will work with Blackwell?
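One generic way to tell whether a given PyTorch build can drive a Blackwell card is to list the compute architectures it ships kernels for; 50 series cards correspond to sm_120, so a build without it will generally fail with a "no kernel image is available" style error:
# show which CUDA architectures this torch build was compiled for
python -c "import torch; print(torch.cuda.get_arch_list())"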
-
When I use the [ComfyUI package with a cuda 12.8 torch build], many custom_nodes show "IMPORT FAILED", including Manager, InstantID, ReActor ...
-
Is it possible to build a pytorch that works for the 5090? If so, how do I do it?
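In principle yes: you can build PyTorch from source against a CUDA 12.8 toolkit and target Blackwell's compute capability 12.0. A very rough sketch of the generic source build on Linux/WSL (this just summarizes PyTorch's documented build procedure, it is not a recipe tested in this thread):
# fetch the source with submodules and build it for Blackwell (sm_120) against an installed CUDA 12.8 toolkit
git clone --recursive https://github.com/pytorch/pytorch
cd pytorch
pip install -r requirements.txt
export TORCH_CUDA_ARCH_LIST="12.0"
python setup.py develop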
-
Having given up on Portable for now (too many errors, lol), I am using WSL Ubuntu. Everything else is set up and working in ComfyUI, and I have the latest pytorch nightly from pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128, but when I use SageAttention I get this:
-
I'm using ComfyUI through Pinokio. Is there any way to update my Comfy to work with my new 5080? I deleted the old files and changed torch.js as in the patch above, but it won't start.
-
Will using Docker allow for a working torchvision on Windows?
-
Can someone write a little guide for getting Docker running torchvision etc. on Windows with the portable Blackwell ComfyUI release? Please write it for a normal user who has no programming knowledge. There are basic instructions in the OP, but what even is Docker? "docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3" Where is this command supposed to be run? Is Docker something to be installed into system python or into the standalone folder? How would this be installed on a fresh portable ComfyUI install? With more Blackwell cards trickling out, there will most likely be more users needing help setting this up. Thank you.
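To answer the "where do I run it" part in general terms: that command goes into a normal terminal (PowerShell on Windows) after installing Docker Desktop with WSL2 and GPU support enabled; it is separate from the portable install and uses its own python inside the container. If you want a local ComfyUI folder visible inside the container you can add a bind mount; the path below is only a placeholder:
# the command from the OP, plus a bind mount of a local ComfyUI checkout into the container
docker run -p 8188:8188 --gpus all -it --rm -v C:\ComfyUI:/workspace/ComfyUI nvcr.io/nvidia/pytorch:25.01-py3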
-
To anyone having trouble with the portable version for Blackwell GPUs: do not update it! I noticed during updating that it uninstalled the cu128 build and installed another version, for example. Torchvision works with a fresh install without updating!
-
Excellent
-
Are there still no Windows pytorch options available for cuda 12.8? Any help would be greatly appreciated.
-
What is the current method of installing the Manager without getting errors? I use the ComfyUI portable for Windows. Thank you.
-
The official standalone package has now been updated to pytorch 2.7 cu128 so just download that if you have a 50 series. The desktop package will be updated in the next week or so assuming everything goes well.
-
After finally getting ComfyUI to work and installing what was needed... wtf? Generating basic images is pretty much the exact same speed on my RTX 5080 as it was on my RTX 4060. Is this normal?
-
Question for RTX 50 graphics card owners. Do you have the basic FLUX models working? Not the unet (nf4) models, not *.GGUF, but the ones called flux1-dev.safetensors or flux1-schnell.safetensors.
-
Followed a guide for the Windows portable for 50xx series cards and am receiving an error: "Error running sage attention: PY_SSIZE_T_CLEAN macro must be defined for '#' formats, using pytorch attention instead". Is this due to Triton 3.3.0? I read that it should be 3.2.0.
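If the Triton version really is the culprit, one thing to try (this is only a guess based on the version mentioned above, using the triton-windows build that the Windows guides rely on) is forcing a pre-3.3 release before reinstalling SageAttention:
# swap the installed Triton build for a pre-3.3 release inside the portable package's python
python_embeded\python.exe -m pip install --force-reinstall "triton-windows<3.3"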
-
Hello everyone. Thanks so far for all the great efforts by everyone on this thread, and thank you Comfy for your tireless work on this project. I am not sure my question has been answered after going through pretty much the whole thread since the OP.
The install that I use every day is one that I set up from scratch using ComfyUI Portable about a year ago or more (I think January 2024). I have updated it almost daily, sometimes multiple times per day, via Manager ever since, most recently a few minutes ago. Everything works absolutely perfectly on my 4090.
Getting a 5090 is in my near future and I am terrified of breaking my current install or, worse, starting from scratch yet again. Is there any solution for someone like me with an old Portable version that has been updated endlessly? I see that a fresh install "just works", but I was hoping there was a similar "run this batch file and it just works" option from inside the updates directory :D Thanks again everyone. Cheers!
-
I installed the windows portable v0.3.31 that was released yesterday: https://github.com/comfyanonymous/ComfyUI/releases/tag/v0.3.31 This is from the release notes on github:
-
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
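That message on its own is a warning rather than a fatal error; to see exactly which installed packages conflict, pip's built-in consistency check helps (standard pip, nothing ComfyUI specific):
# list installed packages whose declared dependencies are not satisfied
python -m pip check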
-
Did you manage to run this workflow? Drag and drop the video into the ComfyUI web UI: https://civitai.com/images/58389191
-
After installing a fresh ComfyUI using the standalone package with a cuda 12.8 torch build (https://github.com/comfyanonymous/ComfyUI/releases/download/latest/ComfyUI_cu128_50XX.7z), I have a new problem: my whole workflow collection is broken when I open it. Right clicking > Fix nodes (recreate) didn't fix it, and I need to replace the nodes manually one by one. Any idea how to solve this? Please help.
-
May I ask about the status of the desktop version? I'm also running a 5090.
-
For my ComfyUI I ditched the default python/pip/poetry (I guess?) installation and dependency mechanism and migrated to uv. It just works, on both Windows and Linux. If anyone is interested, I can post my uv pyproject.toml somewhere. Everything works on the 5090, though I couldn't get sageattention to work. Plus migration to another device/operating system/whatever is completely painless.
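For anyone who wants to try the same route, a minimal sketch of driving ComfyUI with uv (assuming a cloned ComfyUI repo; the cu128 index here mirrors the instructions in the OP and is not the commenter's actual pyproject.toml):
# create an environment and install ComfyUI's dependencies with uv
uv venv
uv pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
uv pip install -r requirements.txt
uv run python main.py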
-
Wow, you can use Sage Attention. I have tried installing it many times but it has failed. I am using CUDA 12.8 + PyTorch 2.7, as well as the two wheels I sent you, xformers version 0.0.30+9a2cd3ef.d20250321. There is now a newer PyTorch 2.7.1, but I haven't tried it yet because too many other things would need updating to support it. Too tired.
-
Hello, I don't know why, but even after updating everything (Triton 3.3.1, xformers 0.0.30) and installing SageAttention, it still reports the same error:
[2025-06-06 15:18:06.851] [info] Set attention implementation to nunchaku-fp16
Loading configuration from E:\models\diffusion_models\svdq-fp4-flux.1-dev\comfy_config.json
model_type FLUX
Requested to load CLIPVisionModelProjection
loaded completely 6937.49091796875 787.7150573730469 True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
clip missing: ['text_projection.weight']
Waiting for your reply
-
Thank you, friend. My previous problem has been solved; my GGUF and Nunchaku models can both run. But there is an old problem:
[2025-06-10 06:31:08.647] [info] Initializing QuantizedFluxModel on device 0
[2025-06-10 06:31:08.681] [info] Loading weights from E:\models\diffusion_models\svdq-fp4-flux.1-dev\transformer_blocks.safetensors
[2025-06-10 06:31:08.685] [warning] Failed to load safetensors using method MIO: CUDA error: operation not supported (at C:\Users\muyang\Desktop\nunchaku-dev\src\Serialization.cpp:130)
[2025-06-10 06:31:18.948] [info] Done.
Injecting quantized module
[2025-06-10 06:31:19.296] [info] Set attention implementation to nunchaku-fp16
Loading configuration from E:\models\diffusion_models\svdq-fp4-flux.1-dev\comfy_config.json
model_type FLUX
Requested to load CLIPVisionModelProjection
loaded completely 6937.49091796875 787.7150573730469 True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16
clip missing: ['text_projection.weight']
It always ends up like this: "CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cpu, dtype: torch.float16" followed by "clip missing: ['text_projection.weight']". What exactly does "clip missing: ['text_projection.weight']" mean? The image is still generated in the end, so why does it keep printing this? Have you ever seen a similar problem?
At 2025-05-15 22:08:22, H.W.Prinz wrote:
@a1chera did you get it fixed? I had the same issue, and it is not solvable from my point of view at the moment (I already had some discussion with Dr.Lt.Data about it). The nightly portable packages come with Python 3.13.2, and that is the only problem. You can install a bunch of nodes without paying attention to it, but some, even essential ones like Manager, don't like 3.13 at all. If you go for an actual default released package download instead, it will have Python 3.12.10 and pytorch 2.7.0+cu12.8, and everything runs flawlessly. Have fun.
-
I will try to keep this post as up to date as possible with the latest developments.
To get your Nvidia 50 series GPU working with ComfyUI, you need a pytorch build that has been compiled against cuda 12.8.
In the next few months there will likely be a lot of performance improvements landing in pytorch for these GPUs, so I recommend coming back to this page and updating frequently.
Windows
The recommended option is the latest standalone portable package or desktop installer, which you can download from the README.
Manual Install
If you install stable pytorch make sure it is cu128.
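For a stable build, the equivalent command points at the cu128 index (assuming a stable torch release built against cuda 12.8 has been published for your platform):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128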
pytorch nightly cu128 is available for Windows and Linux:
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu128
You can also use the Nvidia Pytorch Docker Container as an alternative, which might give better performance.
Link: https://catalog.ngc.nvidia.com/orgs/nvidia/containers/pytorch
Here's how to use it:
docker run -p 8188:8188 --gpus all -it --rm nvcr.io/nvidia/pytorch:25.01-py3
Inside the docker container:
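A typical sequence (a sketch of the standard ComfyUI setup from its README; the container already provides a suitable torch build, so only the remaining requirements are needed):
# fetch ComfyUI, install its remaining dependencies, and start it so it is reachable from the host
git clone https://github.com/comfyanonymous/ComfyUI
cd ComfyUI
pip install -r requirements.txt
python main.py --listen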