I saw the `ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i test/resources/nasa_13013.mp4 -f null -` command in the doc, which I can use to verify that it's working on the ffmpeg side. Is there any way to verify that it also works on the torchcodec side?
Probably not a perfect method, but I typically check for GPU usage via nvidia-smi or nvtop: if there is GPU activity while torchcodec is decoding, I assume that everything is actually running on the GPU.
For TorchCodec-specific usage, you are guaranteed that decoding happens on the GPU as long as you pass a CUDA device as the device parameter. There is no automatic CPU fallback, i.e. if you ask for a CUDA device and decoding somehow can't run on the GPU, you'd get a loud error rather than a silent downgrade to CPU.
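For a concrete check on the torchcodec side, something along these lines should work. This is a rough sketch rather than an official snippet: the `VideoDecoder` import path, the `device=` argument, and the sample video path are assumptions based on the discussion above, so adjust them to your setup.

```python
# Sketch only: VideoDecoder import path, device= argument, and the video
# path are assumptions; adapt to your install.
from torchcodec.decoders import VideoDecoder

decoder = VideoDecoder("test/resources/nasa_13013.mp4", device="cuda")

# If decoding really ran on the GPU, the decoded frame tensor should live
# on a CUDA device rather than on the CPU.
frame = decoder[0]
print(frame.device)  # expected something like cuda:0

# Decoding a run of frames keeps the GPU busy long enough to observe
# utilization in nvidia-smi or nvtop in another terminal.
for i in range(1, 200):
    _ = decoder[i]
```

If the CUDA setup isn't functional, constructing the decoder or decoding the first frame should raise the loud error mentioned above instead of silently falling back to the CPU.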