Replies: 3 comments
-
Similar issue. Whenever I spend a week on install problems, just "to even get started", I start hating the product before I have even tried it! When will we have something as simple as "pip install comfy", or "pip install comfy --fix-missing", or "pip install comfy --i-dont-care-just-get-it-working"?
-
Which version of Python did you install in your virtual environment? When you create the virtual environment it needs to use Python 3.12. After that, install the ROCm build and the rest of the dependencies inside that virtual environment. I recently switched to Arch and spent a lot of time banging my head against the wall because my main install had Python 3.13, which causes compatibility issues.
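To make the version constraint concrete, here is a minimal sketch of the check (the 3.9–3.12 range is my understanding of what the torch 2.5 wheels target, not something stated in this thread; the helper name is mine):

```python
import sys

def torch_rocm_supports(version_info=sys.version_info):
    """True if this Python is in the range the torch 2.5 wheels target (3.9-3.12)."""
    major, minor = version_info[0], version_info[1]
    return major == 3 and 9 <= minor <= 12

print(torch_rocm_supports((3, 12, 0)))  # True
print(torch_rocm_supports((3, 13, 0)))  # False -- the Arch 3.13 situation above
```

Running the venv's own interpreter through a check like this before installing anything can save a lot of head-banging.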
-
Have you reviewed the post-installation instructions in the ROCm documentation? It looks like an issue with your ROCm installation... which can be quite difficult to resolve. I even did a full system reinstall just to get past it.
-
For the last several days I have been trying to get ComfyUI to work, without much success. I have followed the instructions here for my configuration. Below is some information about what has happened, much of it pieced together with fragmented help from wherever I could find it:
I have:
Ryzen 5600X
32 GB of RAM @ 3600 MT/s
1 TB SSD with more than 300 GB of free space.
AMD 6700 XT (RDNA2, 12 GB of VRAM)
I am a novice with Linux.
I want to use ComfyUI to generate images for TTRPGs. Maybe later for music and clip generation if that is a thing, but one step at a time.
Linux Mint Cinnamon
ROCm 6.2 is installed. When I attempt to reinstall it, the terminal reports that it is already installed, printing "Requirement already satisfied" after most lines.
Followed instructions for this link: https://github.com/comfyanonymous/ComfyUI#installing
Also installed PyTorch from https://pytorch.org/ with the following configuration:
Stable (2.5.1), Linux, Pip, Python, ROCm 6.2
Also followed the instructions from the November 16th post here: pytorch/pytorch#103973 (comment)
I have found that a lot of commands I have seen or been given require me to put a 3 after "python". Otherwise, any command with just "python" does not work.
I have already installed the ComfyUI requirements via this command:
pip install -r requirements.txt
venv has been activated.
Used these commands, per instructions from doctorpangloss, to get ComfyUI running while I had my 5700 XT:
python -m venv .venv
source .venv/bin/activate
pip install "comfyui[withtorch]@git+https://github.com/hiddenswitch/ComfyUI.git"
comfyui --create-directories
comfyui
However, it was using my CPU at the time and not seeing or using my GPU.
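One hedged guess about the CPU-only behavior: if pip pulled the default (CPU-only) torch wheel instead of the ROCm one, torch falls back to the CPU silently. A ROCm build carries a version string like "2.5.1+rocm6.2" while a CPU build reports "2.5.1+cpu", so a quick check is possible (the helper name is mine, not ComfyUI's):

```python
def is_rocm_wheel(torch_version: str) -> bool:
    """True when the installed torch is a ROCm build (local tag like '+rocm6.2')."""
    return "+rocm" in torch_version

# With torch installed, you would pass torch.__version__ to this check.
print(is_rocm_wheel("2.5.1+rocm6.2"))  # True
print(is_rocm_wheel("2.5.1+cpu"))      # False -- would explain CPU-only generation
```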
I also tried the following command at that time:
HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
But it still used just my CPU.
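For context on that variable: HSA_OVERRIDE_GFX_VERSION makes the ROCm runtime treat the GPU as a different, officially supported gfx target. The values below are drawn from common community reports, not from this thread; note that 10.3.0 targets RDNA2, while the 5700 XT is RDNA1 (gfx1010), which may be why the override did not help that card:

```python
# Commonly reported HSA_OVERRIDE_GFX_VERSION values (assumption: community
# reports; support for RDNA1 overrides is spotty in recent ROCm releases):
GFX_OVERRIDE = {
    "RX 5700 XT": "10.1.0",  # gfx1010, RDNA1
    "RX 6700 XT": "10.3.0",  # gfx1031 spoofed as gfx1030, RDNA2
}

print(GFX_OVERRIDE["RX 6700 XT"])  # 10.3.0 -- matches the command above
```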
I can no longer get ComfyUI to run at all after shutting it down. I now have my RX 6700 XT in the system, since it is newer than my 5700 XT and has 12 GB of VRAM vs. 8 GB.
I have uninstalled ROCm and reinstalled ROCm 6.2, but get the following:
```
Traceback (most recent call last):
  File "/home/user/ComfyUI/main.py", line 91, in <module>
    import execution
  File "/home/user/ComfyUI/execution.py", line 13, in <module>
    import nodes
  File "/home/user/ComfyUI/nodes.py", line 21, in <module>
    import comfy.diffusers_load
  File "/home/user/ComfyUI/comfy/diffusers_load.py", line 3, in <module>
    import comfy.sd
  File "/home/user/ComfyUI/comfy/sd.py", line 5, in <module>
    from comfy import model_management
  File "/home/user/ComfyUI/comfy/model_management.py", line 143, in <module>
    total_vram = get_total_memory(get_torch_device()) / (1024 * 1024)
  File "/home/user/ComfyUI/comfy/model_management.py", line 134, in get_total_memory
    _, mem_total_cuda = torch.cuda.mem_get_info(dev)
  File "/home/user/.local/lib/python3.10/site-packages/torch/cuda/memory.py", line 721, in mem_get_info
    return torch.cuda.cudart().cudaMemGetInfo(device)
RuntimeError: HIP error: invalid argument
HIP kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing AMD_SERIALIZE_KERNEL=3
Compile with TORCH_USE_HIP_DSA to enable device-side assertions.
```
This is with the override set as well. Also tried "python3 main.py --disable-cuda-malloc", which has the same result.
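Note that the failure happens at import time: model_management queries the device before anything else runs, so a broken HIP runtime kills ComfyUI before the UI ever appears. A minimal sketch of probing the GPU the same way and falling back to the CPU (this is a hypothetical helper for isolating the error, not ComfyUI's own API):

```python
def pick_device() -> str:
    """Probe the GPU the way ComfyUI's startup does; fall back to CPU on HIP errors."""
    try:
        import torch
        if torch.cuda.is_available():
            # On a broken ROCm stack this is the call that raises
            # "RuntimeError: HIP error: invalid argument".
            torch.cuda.mem_get_info(0)
            return "cuda"
    except Exception as err:
        print(f"GPU probe failed, falling back to CPU: {err}")
    return "cpu"

print(pick_device())
```

Running a probe like this in the venv isolates whether the HIP error comes from torch itself, independent of ComfyUI.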
Followed the instructions in the 7/25/23 comment at the link below, but got the same results:
AUTOMATIC1111/stable-diffusion-webui#11900
As a note, "rocminfo" does not work:
Command 'rocminfo' not found, but can be installed with:
sudo apt install rocminfo
After installing per the above, running it just loops back to the same message.
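A "command not found" loop right after apt claims the package is installed often means the binary landed somewhere not on PATH; ROCm typically installs under /opt/rocm/bin. A sketch of that check, assuming those standard locations:

```python
import os
import shutil

def locate_rocminfo():
    """Return the rocminfo path: PATH first, then the standard ROCm prefix."""
    found = shutil.which("rocminfo")
    if found:
        return found
    candidate = "/opt/rocm/bin/rocminfo"  # assumption: default ROCm install prefix
    if os.path.exists(candidate):
        return candidate + "  (exists but not on PATH -- add /opt/rocm/bin to PATH)"
    return None  # not installed anywhere expected

print(locate_rocminfo())
```

If this reports the /opt/rocm/bin case, adding that directory to PATH (and re-login) should make "rocminfo" resolvable.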