Description
Custom Node Testing
- I have tried disabling custom nodes and the issue persists (see how to disable custom nodes if you need help)
Frontend Version
ComfyUI_frontend v1.22.2-sub.9
Expected Behavior
Set and Get frontend nodes are expected to work between subgraphs, and between subgraphs and the top graph, and the execution order should be exactly the same as it was before subgraphs were used.
Actual Behavior
Custom Set and Get nodes (e.g. from the KJNodes package) placed in a subgraph stop working correctly as soon as you try to use them from outside that subgraph.
Most likely Set and Get look up their declared keys somewhere on the frontend, and they cannot see keys that were declared outside the currently open canvas (the subgraph or the top graph), so no invisible link is created.
This breaks the core functionality of Set and Get, whose whole point is to connect a Set node and a Get node by key with an invisible link (a sketch of what I mean follows the example below).
Example:
In the top graph: [screenshot]
In "new subgraph": [screenshot]
Of course, I understand that this behavior of Get and Set nodes may simply not be supported in subgraphs, since subgraphs work differently from standard workflows. But what do you offer instead of Set and Get?
How can a subgraph share data with other subgraphs and the top graph, other than through the subgraph's own input and output slots?
I know that data can be cached on the backend (for example, the Backend Cache nodes from the ComfyUI-Inspire-Pack package), and that should keep working, since those nodes do not rely on the frontend in this way.
But what should happen to the frontend implementation of Set and Get? Should it simply not be used inside subgraphs?
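If the frontend approach is to be kept, the lookup would presumably have to walk the whole graph tree instead of only the current canvas. A hypothetical sketch of that; the `subgraph` property exposing a subgraph node's inner graph is my assumption about the data model, not a confirmed frontend API:

```typescript
// Hypothetical subgraph-aware lookup: recurse into nested graphs instead of
// scanning only the current canvas. `subgraph` is an assumed property on
// subgraph container nodes, not a confirmed API.
interface LGraphNodeLike {
  type: string;
  widgets?: { value: unknown }[];
  subgraph?: LGraphLike; // assumed: present on subgraph container nodes
}

interface LGraphLike {
  _nodes: LGraphNodeLike[];
}

function findSetterDeep(graph: LGraphLike, key: string): LGraphNodeLike | undefined {
  for (const n of graph._nodes) {
    if (n.type === "SetNode" && n.widgets?.[0]?.value === key) return n;
    if (n.subgraph) {
      const hit = findSetterDeep(n.subgraph, key);
      if (hit) return hit;
    }
  }
  return undefined;
}
```

Even with something like this, whether a key should be visible globally or scoped to its own graph is a design decision; I mainly want to know what the intended pattern is.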
Steps to Reproduce
1. Create a subgraph.
2. Create a Set node inside the subgraph and connect it to something.
3. Create a Get node in another subgraph or in the top graph.
4. In the Get node, try to select the key specified in the Set node.
Debug Logs
.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --windows-standalone-build --disable-auto-launch --front-end-version Comfy-Org/ComfyUI_frontend@prerelease --fast fp16_accumulation
Checkpoint files will always be loaded safely.
Total VRAM 10240 MB, total RAM 16209 MB
pytorch version: 2.7.1+cu128
Enabled fp16 accumulation.
Set vram state to: NORMAL_VRAM
Device: cuda:0 NVIDIA GeForce RTX 3080 : cudaMallocAsync
Using pytorch attention
Python version: 3.12.10 (tags/v3.12.10:0cc8128, Apr 8 2025, 12:21:36) [MSC v.1943 64 bit (AMD64)]
ComfyUI version: 0.3.41
Initializing frontend: Comfy-Org/ComfyUI_frontend@prerelease, requesting version details from GitHub...
[Prompt Server] web root: F:\AIWork\AITools\_Test_ComfyUI\ComfyUI\web_custom_versions\Comfy-Org_ComfyUI_frontend\1.22.2-sub.9
Import times for custom nodes:
0.0 seconds: F:\AIWork\AITools\_Test_ComfyUI\ComfyUI\custom_nodes\websocket_image_save.py
0.1 seconds: F:\AIWork\AITools\_Test_ComfyUI\ComfyUI\custom_nodes\comfyui-kjnodes
Context impl SQLiteImpl.
Will assume non-transactional DDL.
No target revision found.
Starting server
To see the GUI go to: http://127.0.0.1:8188
got prompt
Failed to validate prompt for output 30:
* VAEDecode 29:
- Required input is missing: samples
Output will be ignored
model weight dtype torch.float16, manual cast: None
model_type EPS
Using pytorch attention in VAE
Using pytorch attention in VAE
VAE load device: cuda:0, offload device: cpu, dtype: torch.bfloat16
Requested to load SDXLClipModel
loaded completely 9.5367431640625e+25 1560.802734375 True
CLIP/text encoder model load device: cuda:0, offload device: cpu, current: cuda:0, dtype: torch.float16
loaded diffusion model directly to GPU
Requested to load SDXL
loaded completely 9.5367431640625e+25 4897.0483474731445 True
Requested to load SDXLClipModel
loaded completely 2744.7037399291994 1560.802734375 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 2.95it/s]
Requested to load AutoencoderKL
loaded completely 378.66217041015625 159.55708122253418 True
loaded completely 7412.108073043823 4897.0483474731445 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 2.93it/s]
Prompt executed in 29.59 seconds
got prompt
Failed to validate prompt for output 30:
* VAEDecode 29:
- Required input is missing: samples
Output will be ignored
loaded completely 7412.108073043823 4897.0483474731445 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 3.28it/s]
loaded completely 7412.108073043823 4897.0483474731445 True
100%|██████████████████████████████████████████████████████████████████████████████████| 20/20 [00:06<00:00, 3.15it/s]
Prompt executed in 15.78 seconds
Browser Logs
Setting JSON
What browsers do you use to access the UI?
Google Chrome
Other Information
The latest version of ComfyUI is used (a git clone of the repository, not a release), and --front-end-version is set to Comfy-Org/ComfyUI_frontend@prerelease.
Everything is installed from scratch.
Only one custom node package is installed: https://github.com/kijai/ComfyUI-KJNodes
@kijai, please take a look at this issue.
P.S. For the ComfyUI_frontend devs: sorry, I checked the "I have tried disabling custom nodes and the issue persists" box for this ticket, because otherwise the ticket could not be created (the box is mandatory).