Label-wise accuracy metrics implemented for multilabel classification. #516
Conversation
Signed-off-by: James P Howard <[email protected]>
Label-wise accuracy metrics implemented for multilabel classification. Signed-off-by: James P Howard <[email protected]>
Label-wise accuracy metrics implemented for multilabel classification & text8 clean-up. Signed-off-by: James P Howard <[email protected]>
Thanks @anmolsjoshi - I've written some tests and hopefully fixed the flake8 errors. Unfortunately, there is no scikit-learn equivalent of labelwise accuracy, so I have written an analogous way of computing it in numpy.
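For reference, a minimal numpy sketch of what such a labelwise accuracy computation can look like (illustrative only; the array names, shapes, and values are assumptions, not the PR's actual test code):

```python
import numpy as np

# Binary multilabel predictions and targets, shape (n_samples, n_labels).
y_pred = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 1, 0],
                   [0, 0, 1]])
y_true = np.array([[1, 0, 0],
                   [0, 1, 1],
                   [1, 0, 0],
                   [0, 0, 1]])

# Labelwise accuracy: the fraction of correct predictions in each column.
labelwise_accuracy = (y_pred == y_true).mean(axis=0)
print(labelwise_accuracy)  # [1.   0.75 0.5 ]
```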
Label-wise accuracy metrics implemented for multilabel classification & text8 clean-up. Signed-off-by: James P Howard <[email protected]>
@jphdotam thanks for the PR! To merge it I think we need to discuss the API. I'm not a fan of introducing another flag; maybe we can opt for something like torch's arguments. @jphdotam could you please provide a very simple example of manually computing such an accuracy score labelwise? For example, I have:
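A hypothetical pair of tensors of the kind being asked about (values chosen for illustration; they are picked to match the per-label accuracies discussed in the reply below):

```python
import torch

# Hypothetical multilabel batch: 4 samples, 3 labels, binary entries.
y_pred = torch.tensor([[1, 0, 1],
                       [0, 1, 1],
                       [1, 1, 0],
                       [0, 0, 1]])
y = torch.tensor([[1, 0, 0],
                  [0, 1, 0],
                  [1, 0, 0],
                  [1, 0, 1]])

# Manual labelwise accuracy: mean agreement down each column.
print((y_pred == y).float().mean(dim=0))  # tensor([0.7500, 0.7500, 0.5000])
```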
Hi @vfdev-5. Your example shows a batch size of 4 for a binary classifier with 3 labels. So there is 75% accuracy for the first label, 75% accuracy for the second, and 50% for the third. It's very useful if one wishes to see which label in a multi-label classifier is compromising the overall accuracy. Re: merging, would you rather I instead created a new separate metric derived from Accuracy, called MultilabelAccuracy or something? And should I submit it to ignite.metrics or ignite.contrib.metrics?
@jphdotam thanks for the explanation. Now it is clear that we are speaking about the same computation method.
Previously, we had … Let me think about the new API and I'll comment here. If you have other ideas on the API, we can discuss them.
Ok great. In the meantime I will just use it as a new class, as I've posted in #513, since that's probably easier until we decide.
@jphdotam thanks for providing the code base. In discussion with @vfdev-5, we were thinking the following:
Would you be interested in continuing this PR? We might be working towards a minor release for now, so we shouldn't make major API changes. For the next major release (0.3.0), we can introduce a new multilabel argument with the options None (binary/multiclass), multilabel (a single accuracy value), and labelwise (one accuracy per label). What are your thoughts? A sketch of the two multilabel computations follows below.
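To make those options concrete, here is a small sketch in plain torch; the argument itself is hypothetical and not implemented, so only the underlying arithmetic is shown, and the single "multilabel" value is sketched as sample-wise exact match, which is one plausible reading of "single accuracy value":

```python
import torch

# Hypothetical multilabel batch: 4 samples, 3 labels.
y_pred = torch.tensor([[1, 0, 1], [0, 1, 1], [1, 1, 0], [0, 0, 1]])
y      = torch.tensor([[1, 0, 1], [0, 1, 0], [1, 0, 0], [1, 0, 1]])

# "multilabel": a single value, the fraction of samples whose labels all match.
print(torch.all(y_pred == y, dim=1).float().mean())  # tensor(0.2500)

# "labelwise": one accuracy per label, as proposed in this PR.
print((y_pred == y).float().mean(dim=0))             # tensor([0.7500, 0.7500, 0.7500])
```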
@jphdotam why do you want to add this feature only for Accuracy?
Signed-off-by: James P Howard <[email protected]>
Fixes: Feature added - a label-wise accuracy option for the Accuracy metric.
Description:
`Accuracy()` metrics with `is_multilabel=True` can now be passed `labelwise=True`. When present, the metric returns a tensor of accuracies, one per label. For example:
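A sketch of the intended usage under this PR (the `labelwise=True` flag is the PR's addition; the input tensors and the exact printed output are assumptions for illustration):

```python
import torch
from ignite.metrics import Accuracy

# labelwise=True is the option added by this PR, on top of is_multilabel=True.
acc = Accuracy(is_multilabel=True, labelwise=True)

# Hypothetical batch: 4 samples, 3 labels.
y_pred = torch.tensor([[1, 0, 1],
                       [0, 1, 1],
                       [1, 1, 0],
                       [0, 0, 1]])
y = torch.tensor([[1, 0, 0],
                  [0, 1, 0],
                  [1, 0, 0],
                  [1, 0, 1]])

acc.update((y_pred, y))
print(acc.compute())  # expected: a tensor with one accuracy per label,
                      # e.g. tensor([0.7500, 0.7500, 0.5000])
```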
Check list: