
Conversation

@jasonkriss
Contributor

No description provided.

@jasonkriss changed the title from "use absolute imports" to "Precision/Recall Fixes" on Apr 5, 2018
y = torch.ones(2).type(torch.LongTensor)  # targets contain only class 1
recall.update((y_pred, y))
result = list(recall.compute())
# assert isnan(result[0])  # recall for class 0 is 0 / 0, i.e. NaN


@vfdev-5
Collaborator

vfdev-5 commented Apr 5, 2018

@jasonkriss what do you think about adding an option to compute the mean in the compute() method:

self.agg_fn = torch.mean if option else lambda x: x

def compute(self):
    if self._all_positives is None:
        raise NotComputableError('Precision must have at least one example before it can be computed')
    return self.agg_fn(self._true_positives / (self._all_positives + eps))

@jasonkriss
Contributor Author

@vfdev-5 I'd be OK with that. Would it be better to have it take an optional agg_fn like that, or just an average or mean flag? @alykhantejani any thoughts on this?

@alykhantejani
Contributor

For precision and recall, what other kinds of reducer functions would you want besides mean?

If there are more, then I'm in favor of a constructor arg reducer, which can be lambda x: x by default. If there are no other (common) reducers, then just a compute_average or average kwarg would do.

@vfdev-5
Collaborator

vfdev-5 commented Apr 6, 2018

Let's go with the more flexible option and leave the user the choice of a custom function :)
@jasonkriss could you please also add eps to the division, self._true_positives / (self._all_positives + eps), to avoid NaNs?

@jasonkriss
Contributor Author

@vfdev-5 yep, I'll add a commit to take care of the NaNs and add a warning.

For the reduce argument, I think we should keep the API as clear as possible. So unless there are clear use cases for other reductions, I'd prefer the flag, similar to the loss functions in PyTorch.
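For reference, a minimal sketch of what the flag-based version discussed here might look like (the class body and names are illustrative, not the final API):

import torch
from ignite.exceptions import NotComputableError

class Precision:
    # Illustrative sketch only: an `average` flag instead of a custom agg_fn.
    def __init__(self, average=False):
        self._average = average
        self._true_positives = None
        self._all_positives = None

    def compute(self):
        if self._all_positives is None:
            raise NotComputableError('Precision must have at least one example '
                                     'before it can be computed')
        result = self._true_positives / self._all_positives
        result[result != result] = 0.0  # classes never predicted give 0 / 0; set them to 0.0
        return torch.mean(result) if self._average else result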

@jasonkriss
Contributor Author

@vfdev-5 @alykhantejani made a couple of updates, if you don't mind taking a peek.

@vfdev-5
Collaborator

vfdev-5 commented Apr 9, 2018

@jasonkriss yes, I saw your commits, looks good to me! Maybe just a small question on the performance of

result = self._true_positives / self._actual
result[result != result] = 0.0

vs

result = self._true_positives / (self._actual + eps)

Sure, with 10-100 classes it won't be noticeable...

@jasonkriss
Contributor Author

Yeah, I don't think that will make much of a difference performance-wise (even with thousands of classes).
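For what it's worth, a rough way to check this claim (purely illustrative, not part of the PR):

import timeit
import torch

eps = 1e-20
true_positives = torch.rand(10000)
actual = torch.rand(10000)
actual[::7] = 0.0           # some classes with no ground-truth examples
true_positives[::7] = 0.0   # no actuals means no true positives either, so 0 / 0 -> NaN

def nan_mask():
    result = true_positives / actual
    result[result != result] = 0.0
    return result

def eps_div():
    return true_positives / (actual + eps)

print(timeit.timeit(nan_mask, number=1000))
print(timeit.timeit(eps_div, number=1000))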

if self._all_positives is None:
    raise NotComputableError('Precision must have at least one example before it can be computed')
elif self._all_positives.eq(0.0).any():
    warnings.warn('Labels with no predicted examples are set to have precision of 0.0.',
                  UndefinedMetricWarning)
return self._true_positives / self._all_positives
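To make the behaviour of this hunk concrete, a hedged usage sketch (assuming the average flag lands as discussed above; the exact warning text and output formatting may differ):

import warnings
import torch
from ignite.metrics import Precision

precision = Precision(average=False)
y_pred = torch.tensor([[0.1, 0.9],
                       [0.2, 0.8]])   # class 0 is never predicted
y = torch.tensor([0, 1])
precision.update((y_pred, y))
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    result = precision.compute()       # warns: class 0 precision set to 0.0
print(result)                           # roughly tensor([0.0, 0.5])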


-from .metric import Metric
-from ignite.exceptions import NotComputableError
+from ignite.metrics.metric import Metric
+from ignite.exceptions import NotComputableError, UndefinedMetricWarning


actual = actual_onehot.sum(dim=0)
true_positives = correct_onehot.sum(dim=0)
if correct.sum() == 0:
    true_positives = torch.zeros(num_classes)


all_positives = pred_onehot.sum(dim=0)
true_positives = correct_onehot.sum(dim=0)
if correct.sum() == 0:
    true_positives = torch.zeros(num_classes)


@jasonkriss
Contributor Author

What would I do without you @vfdev-5 😄. Great catch. This should be fixed.

This also brings up a bigger question. Not specific to this PR, but in general, does anyone have ideas on how we can catch these kinds of bugs earlier? i.e. is there any way we can run CI/tests on a GPU?

@vfdev-5
Collaborator

vfdev-5 commented Apr 12, 2018

😄 @jasonkriss

Probably, if you run the tests locally on a machine with a GPU, it could be caught :)
However, I'm not sure we have any tests that call .cuda(). We could introduce them with a simple hack like

has_cuda = torch.cuda.is_available()
if has_cuda:
    def test_with_cuda():
        pass

Otherwise, PyTorch does this with Jenkins here, and there are probably some machines with GPUs.
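One common pattern for this (a sketch of an option, not necessarily what was adopted here) is to mark GPU tests with a skip condition so they only run where CUDA is available:

import pytest
import torch

@pytest.mark.skipif(not torch.cuda.is_available(), reason='requires a CUDA device')
def test_precision_on_cuda():
    from ignite.metrics import Precision

    precision = Precision()
    y_pred = torch.rand(4, 3).cuda()
    y = torch.randint(0, 3, (4,)).long().cuda()
    precision.update((y_pred, y))
    result = precision.compute()
    assert len(result) == 3  # one precision value per class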

@alykhantejani alykhantejani merged commit 2304e5e into pytorch:master Apr 13, 2018
@alykhantejani
Contributor

Thanks @jasonkriss!

@jasonkriss jasonkriss deleted the precision-recall-fixes branch April 16, 2018 08:23