Replies: 2 comments
-
This should be fixed in the next release. Have you tried v0.0.3?
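One quick way to confirm which build is actually installed (a generic check using the Python standard library, nothing FFCV-specific):

```python
# Query the installed package version via importlib.metadata (Python 3.8+);
# this works for any pip-installed package, including ffcv.
from importlib.metadata import version

print(version("ffcv"))  # e.g. "0.0.2" in the report below
```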
-
Thanks for the reply, @GuillaumeLeclerc.
Edit 1: I just noticed that this was mentioned in this pull request. Will try to install from the v0.0.4 branch.
Edit 2: Ok, I manually installed v0.0.4 from the git repo. Seems to be fixed now - sorry for the hassle 😢
-
When training on ImageNet with FFCV, I've noticed that additional processes spawn on GPU:0 and use up some memory. These processes appear at the first iteration if I haven't previously cached the dataset into RAM.

Below is a screenshot of my `nvidia-smi` output when loading data via FFCV and loading a model onto the GPU, but not actually running training (no forward or backward pass through the model):

[screenshot: `nvidia-smi` output]

In the screenshot, processes 49846, 49847, and 49848 on GPU:0 are spawned at the first iteration.
Based on the behavior, I assume the caching operation uses some GPU memory. Is this correct? Also, is there a way to disable this?
The issue is that I want to use a specific batch size, but I run into OOM errors because of the additional memory these processes use.
FFCV version: 0.0.2
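For reference, here is a minimal sketch of my setup (the path, batch size, worker count, and target device below are placeholders, and the pipeline follows the standard FFCV ImageNet examples rather than my exact configuration):

```python
# Minimal sketch: FFCV loader + model on GPU, no training step.
# Path and hyperparameters are hypothetical placeholders.
import torch
import torchvision

from ffcv.loader import Loader, OrderOption
from ffcv.fields.decoders import RandomResizedCropRGBImageDecoder, IntDecoder
from ffcv.transforms import ToTensor, ToDevice, ToTorchImage, Squeeze

device = torch.device('cuda:1')  # intended training GPU (not GPU:0)

loader = Loader(
    '/path/to/imagenet_train.beton',   # hypothetical .beton file
    batch_size=256,
    num_workers=8,
    order=OrderOption.RANDOM,
    os_cache=False,                    # dataset not pre-cached into RAM
    pipelines={
        'image': [
            RandomResizedCropRGBImageDecoder((224, 224)),
            ToTensor(),
            ToDevice(device, non_blocking=True),
            ToTorchImage(),
        ],
        'label': [IntDecoder(), ToTensor(), Squeeze(), ToDevice(device)],
    },
)

model = torchvision.models.resnet50().to(device)  # model on GPU, never called

# Draw a single batch; no forward or backward pass is run. The extra
# processes on GPU:0 show up in nvidia-smi right after this first iteration.
images, labels = next(iter(loader))
```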