Ignore empty predictions in Accuracy calculation #1024
base: master
Conversation
First of all, thanks for the package! I really enjoy using it. Here is my first contribution: I got errors coming from torch.argmax: `cannot perform reduction function argmax on a tensor with no elements because the operation does not have an identity`. Because I'm masking using indexing, the training step sometimes outputs empty predictions. I think it is more general to allow empty predictions in metric calculation. Let me know if you want any tests covering the added code path.
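To make the failure concrete, here is a minimal reproduction sketch (not code from this PR; the error is quoted from the report above and applies to the PyTorch versions this issue was filed against): boolean-mask indexing can select nothing, and argmax has no identity element, so it raises on an empty input.

```python
import torch

y_pred = torch.rand(8, 4)                # a batch of 8 predictions over 4 classes
mask = torch.zeros(8, dtype=torch.bool)  # an unlucky batch where nothing is selected

print(y_pred[mask].shape)                # torch.Size([0, 4]) -- empty predictions

# RuntimeError: cannot perform reduction function argmax on a tensor
# with no elements because the operation does not have an identity
torch.argmax(y_pred[mask], dim=1)
```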
@oguzserbetci Thank you for your comment and this PR 😊 I agree with you, but docs and tests are needed for that feature. Could you please improve your PR? And could you check the other metrics? Thank you for your help 👍🏻
@oguzserbetci thanks for the PR!
Good point. Skipping the iteration was also something I wished for! However, in my setup I have many different outputs, each with different metrics, so skipping iterations wouldn't solve the problem. I actually started looking at the pytorch code and will try to fix the problem there. Otherwise it would be a lot of code to add to the metrics that will be rendered useless once it's fixed in pytorch.
@vfdev-5 if this alternative matches the use case of @oguzserbetci, I think it's a good idea 👍🏻
@oguzserbetci okay I see, so the output of your eval step is something like

```python
output = {
    "y_pred1": y_pred1[mask1],
    "y1": y1[mask1],
    "y_pred2": y_pred2[mask2],
    "y2": y2[mask2],
}
```

and you compute various metrics with several output transforms

```python
def ot1(output):
    return output["y_pred1"], output["y1"]

def ot2(output):
    return output["y_pred2"], output["y2"]
```

and in some cases one of the masked outputs can be empty.

Maybe we can deal with that as follows. We can make

`ignite/metrics/metric.py` (line 123 in `e3fc04e`)

skip `output=None`. So, in this case we skip a single output per `Metric` instead of the complete iteration. You can then check your output inside the transform, for example

```python
def ot1(output):
    y_pred1, y1 = output["y_pred1"], output["y1"]
    if len(y_pred1) < 1:
        return None
    return y_pred1, y1
```

What do you think?
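For readers following along, a sketch of what this proposal could look like in `Metric.iteration_completed` (illustrative only, not a merged patch; `_output_transform` and `update` are the existing names in ignite's `Metric` base class):

```python
# Hypothetical change to ignite's Metric base class (a sketch, not the
# actual diff): a None result from the output transform skips the update
# for this metric only, rather than skipping the whole iteration.
def iteration_completed(self, engine):
    output = self._output_transform(engine.state.output)
    if output is None:
        return  # nothing to accumulate for this metric on this iteration
    self.update(output)
```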
I like your approach, but this still seems pretty overkill for something that should actually be fixed in pytorch. I will make a pull request there to have reductions over empty tensors behave similarly to numpy. This would alleviate the problem here as well... However, maybe skipping metrics is a cool thing to have in general anyway, in which case I like your solution!
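For context, a sketch of the behavioral gap being referred to, assuming the discussion is about reductions along a dimension of a tensor with zero rows (the exact PyTorch behavior depends on the version):

```python
import numpy as np
import torch

# numpy returns an empty result when there are zero rows to reduce over
print(np.argmax(np.empty((0, 4)), axis=1).shape)  # (0,)

# in the PyTorch versions discussed in this thread, the same reduction raised:
# "cannot perform reduction function argmax on a tensor with no elements
# because the operation does not have an identity"
torch.argmax(torch.empty(0, 4), dim=1)
```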
@oguzserbetci sure, a fix on the pytorch side can be helpful. However, we should make sure anyway that those cases correctly update the internals of metrics like Accuracy...
Let's change this PR in this way :)
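For illustration, a hedged sketch of the kind of guard the PR title suggests for `Accuracy.update` (the accumulator names `_num_correct` and `_num_examples` follow ignite's `Accuracy`, but this is a sketch, not the actual diff):

```python
import torch

# Sketch of an empty-prediction guard in Accuracy.update (assumption:
# update receives the (y_pred, y) pair produced by the output transform,
# with y_pred shaped (batch, num_classes)).
def update(self, output):
    y_pred, y = output
    if y_pred.numel() == 0:
        return  # empty predictions: skip, leaving the accumulators unchanged
    indices = torch.argmax(y_pred, dim=1)
    correct = torch.eq(indices, y).view(-1)
    self._num_correct += torch.sum(correct).item()
    self._num_examples += correct.shape[0]
```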
First of all, thanks for the package! I really enjoy using it. Since this was a pretty quick fix, I'm just posting it as a PR.
Description:
I got errors coming from torch.argmax:

`cannot perform reduction function argmax on a tensor with no elements because the operation does not have an identity`

Because I'm masking using indexing, the training step sometimes outputs empty predictions. I think it is more general to allow empty predictions in metric calculation.
Let me know if you want any tests covering the added code path. Furthermore, this idea can be applied to many other metrics. This is related to pytorch issue #28380.