Crash: Out of memory #23

Closed
pilzinho opened this issue Feb 6, 2020 · 2 comments
Labels
problem: Something isn't working, but it might not be a bug
question: Further information is requested

Comments

pilzinho commented Feb 6, 2020

We are trying to get our Daniel2 implementation, which is based on this repository, production-ready and are running a lot of stress tests.
In theory our application can load an unlimited number of Daniel2 videos and display them side by side, but at some point this always leads to a crash of the application. We would like to be able to present feedback to the user that no more videos can be loaded.
For this reason we already catch all the errors that can occur in the DecodeDaniel2 and Render classes (every function that may return a negative (H)Result or a CUDA error) and either prevent loading the video or close it if it is already decoding/playing (a rough sketch of these guards is shown below the log).
But we still have crashes that we cannot catch, often with the following call stack:

d2cudalib returns error -1002: out of memory ((null), fffffc16h) at D2D.cpp (193)
out of memory ((null), fffffc16h) at d2_decoder_impl.h (410)
CUDA decoder creation failure, using CPU decoder instead ((null), fffffc16h) at d2_decoder_impl.h (376)

Maybe with the help of this call stack you can locate something in your code that might result in these crashes.
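
For reference, the guards around those calls look roughly like the following sketch (the helper names are illustrative only, not the actual code from our application): any negative (H)Result or any CUDA error aborts loading the stream, or closes it if it is already playing, so the failure can be reported to the user instead of crashing the process.

#include <cuda_runtime.h>
#include <stdexcept>
#include <string>

// Illustrative guards: turn a failed HRESULT or CUDA call into an exception
// that the loading/playback code catches to reject or close the stream.
static void CheckHR(long hr, const char* where)
{
    if (hr < 0)
        throw std::runtime_error(std::string(where) + " returned HRESULT " + std::to_string(hr));
}

static void CheckCuda(cudaError_t err, const char* where)
{
    if (err != cudaSuccess)
        throw std::runtime_error(std::string(where) + ": " + cudaGetErrorString(err));
}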

nick-velichko (Collaborator) commented

Hello,

Could you provide a bit more information about your case?
What GPU card do you use?
How many streams can you play before the crash?
What is the stream format (resolution, chroma_format, etc.)?

As you can see, the error happens because your application ran out of GPU memory, and, worse, the CUDA error may propagate into any other decoder currently running within the same CUDA context, so it is not easy to detect exactly what crashed and where.
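
One way to fail more gracefully (a minimal sketch using the CUDA runtime API, not code from this repository) is to query free device memory with cudaMemGetInfo before creating another decoder and refuse to open a new stream when not enough is left. The check is not race-free, but it lets the application tell the user that no more videos can be loaded before the decoder library itself hits an out-of-memory error.

#include <cuda_runtime.h>

// Sketch: returns true if at least `required` bytes of device memory are
// still free. The per-stream requirement has to be estimated from the
// stream's resolution and chroma format.
static bool EnoughGpuMemoryFor(size_t required)
{
    size_t freeBytes = 0, totalBytes = 0;
    if (cudaMemGetInfo(&freeBytes, &totalBytes) != cudaSuccess)
        return false;  // cannot query the device, treat as "not enough"
    return freeBytes >= required;
}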

At the very least, we need to be able to reproduce the case to find out what is going wrong.

lewk2 added the question and problem labels Feb 12, 2020
lewk2 (Contributor) commented May 20, 2020

closing as abandoned

lewk2 closed this as completed May 20, 2020